2

Automated Testing

This chapter focuses on automated testing and how helpful it can be for crafting better software. It also covers a few different types of tests and the foundations of test-driven development (TDD). We also outline how testable ASP.NET Core is and how much easier it is to test ASP.NET Core applications than legacy ASP.NET MVC applications. This chapter is an overview of automated testing, its principles, xUnit, and more. While other books cover this topic in greater depth, this chapter covers the foundational aspects of automated testing that we build upon throughout the book.

In this chapter, we cover the following topics:

  • An overview of automated testing
  • Testing .NET applications
  • Important testing principles

An overview of automated testing

Testing is an integral part of the development process, and automated testing becomes crucial in the long run. You can always run your ASP.NET Core website, open a browser, and click everywhere to test your features. That’s a legitimate approach, but it is harder to test individual rules or more complex algorithms that way. Another downside is the lack of automation; when you first start with a small app containing a few pages, a few endpoints, or a few features, it may be fast to perform those tests manually. However, as your app grows, it becomes more tedious, takes longer, and the likelihood of making a mistake increases. Don’t get me wrong here; you will always need real users to test your applications, but you may want those tests to focus on the UX, the content, or on some experimental features that you are building, instead of filing bug reports that automated tests could have caught early on.

There are multiple types of tests, and developers are very creative at finding new ways to test things. Here is a list of three broad categories that represent how we can divide automated testing from a code correctness standpoint:

  • Unit tests
  • Integration tests
  • End-to-end (E2E) tests

The test pyramid is a good way of explaining a few concepts around automated testing. You want different granularity of tests, and you want a different number of tests depending on their complexity. The following test pyramid shows the three types of tests stated above. However, you can add all the other types of tests in there if you want to. Moreover, that’s just an abstract guideline to give you an idea. The most important aspect is the return on investment (ROI). If you can write one integration test that covers a large surface and is fast enough, this might be worth doing instead of multiple unit tests.

Figure 2.1: The test pyramid

Unit testing

Unit tests focus on individual units, like testing the outcome of a method. Unit tests should be fast and should not rely on any infrastructure, such as a database. Those are the kinds of tests you want the most because they run fast, and each one tests a precise code path. They should also help you design your application better: because you use your code in the tests, you become its first consumer, which helps you find design flaws and improve your code. If you don’t like using your code in your tests, that is a good indicator that nobody else will. Unit tests should focus on testing algorithms (the ins and outs) and domain logic, not the code itself; how you wrote the code should have no impact on the intent of the test. For example, you are testing that a Purchase method executes the logic required to purchase one or more items, not that you created variable X, Y, or Z inside that method. Don’t get discouraged if you find it challenging; writing a good test suite is not as easy as it sounds.

Integration testing

Integration tests focus on the interaction between components, such as what happens when a component queries the database or what happens when two components interact with each other.

Integration tests often require some infrastructure to interact with, which makes them slower to run. By following the classic testing model, you want integration tests, but you want fewer of them than unit tests. An integration test can be very close to an E2E test but without using a production-like environment.
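To make this concrete, here is a minimal sketch of an integration-style test. The StockService and repository classes are hypothetical, made up for this example; an in-memory repository plays the role of the infrastructure so the test covers the interaction between two real components in one go, without mocks:

```csharp
// Hypothetical components: a service composed with a real (in-memory)
// repository, tested together instead of mocking one away.
public interface IProductRepository
{
    int CountInStock(string productId);
}

public class InMemoryProductRepository : IProductRepository
{
    private readonly Dictionary<string, int> _stock = new() { ["apple"] = 3 };

    public int CountInStock(string productId)
        => _stock.TryGetValue(productId, out var count) ? count : 0;
}

public class StockService
{
    private readonly IProductRepository _repository;

    public StockService(IProductRepository repository)
        => _repository = repository;

    public bool IsAvailable(string productId)
        => _repository.CountInStock(productId) > 0;
}

public class StockServiceIntegrationTest
{
    [Fact]
    public void Should_find_products_that_are_in_stock()
    {
        // Both the service and the repository are real implementations;
        // the test asserts the outcome of their interaction.
        var service = new StockService(new InMemoryProductRepository());
        Assert.True(service.IsAvailable("apple"));
        Assert.False(service.IsAvailable("banana"));
    }
}
```

In a real project, the in-memory repository would be replaced by the actual database-backed implementation, which is what makes such tests slower than unit tests.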

Note

We break the test pyramid rule later, so always be critical of rules and principles; sometimes, it can be better to break or bend them. For example, having one good integration test can be better than N unit tests; don’t discard that fact when writing your tests.

End-to-end testing

End-to-end tests focus on application-wide behaviors, such as what happens when a user clicks on a specific button, navigates to a particular page, posts a form, or sends a PUT request to some web API endpoint. E2E tests focus on testing the whole application from the user’s perspective, not just part of it, as unit and integration tests do. E2E tests are usually run on actual infrastructure to test your application and your deployment.

Other types of tests

There are other types of automated tests. For example, we could do load testing, performance testing, regression testing, contract testing, penetration testing, functional testing, smoke testing, and more. You can automate tests for almost anything you want to validate, but some tests are more challenging to automate or more fragile than others, such as UI tests. That said, if you can automate a test in a reasonable timeframe, do it! In the long run, it should pay off.

One more thing: don’t blindly rely on metrics such as code coverage. Those metrics make for cute badges in your GitHub project’s readme.md file but can lead you off track, resulting in you writing useless tests. Don’t get me wrong, code coverage is a great metric when used correctly, but remember that one good test can be better than a lousy test suite covering 100% of your codebase.

Writing good tests is not easy and comes with practice.

Note

One piece of advice: keep your test suite healthy by adding missing test cases and removing obsolete or useless tests. Think about use case coverage, not about how many lines of code are covered by your tests.

Before moving forward to testing styles, let’s inspect a hypothetical system and explore a more efficient way to test it.

Picking the right test style

Next is a dependency map of a hypothetical system. We use that diagram to pick the most meaningful type of test possible for each piece of the program. In real life, that diagram will most likely be in your head, but in this case, I drew it out. Let’s inspect that diagram before I explain its content:


Figure 2.2: Dependency map of a hypothetical system

In the diagram, the Actor can be anything from a user to another system. Presentation is the piece of the system that the Actor is interacting with that forwards the request to the system itself (this could be a user interface). D1 is a component that has to decide what to do next based on the user input. C1 to C6 are other components of the system (could be classes, for example). DB is a database.

D1 must choose between three code paths: interact with the components C1, C4, or C6. This type of logic is usually a good subject for unit tests, ensuring the algorithm yields the correct result based on the input parameter. Why pick a unit test? We can test multiple scenarios very quickly, try extreme cases, out-of-bound data cases, and more. We usually mock the dependencies away in this type of test and assert that the subject under test made the expected call on the desired component.

Then, if we look at the other code paths, we could write one or more integration tests for component C1, testing the whole chain in one go (C1, C5, and C3) instead of writing multiple mock-heavy unit tests for each component. If there is any logic that we need to test in components C1, C5, or C3, we can always add a few unit tests; that’s what they are for.

Finally, C4 and C6 are both using C2. Depending on the code (that we don’t have here), we could write integration tests for C4 and C6, testing C2 simultaneously. Another way would be to unit test C4 and C6, and then write integration tests between C2 and the DB. If C2 has no logic, the latter could be the best and the fastest, while the former will most likely yield results that give you more confidence in your test suite in a continuous delivery model.

When it is an option, I recommend evaluating the possibility of writing fewer meaningful integration tests that assert the correctness of a use case over a suite of mock-heavy unit tests.

That may seem to go “against” the test pyramid, but does it? If you spend less time (thus lower costs) testing more use cases (adding more value), that sounds like a win to me. Moreover, we must not forget that mocking dependencies tends to make you waste time fighting the framework or other libraries instead of testing something meaningful.

Now that we have explored the fundamentals of automated testing, it is time to explore testing approaches and TDD, which is a way to apply those testing concepts.

Testing approaches

There are various approaches to testing, such as behavior-driven development (BDD), acceptance test-driven development (ATDD), and test-driven development (TDD). The DevOps culture brings a mindset to the table that focuses on embracing automated testing in line with its continuous integration (CI) and continuous deployment (CD) ideals. CD is really where a robust and healthy suite of tests shines, giving you a high degree of confidence in your code, high enough to deploy the program when all tests pass.

TDD is a method of developing software that states that you should write one or more tests before writing the actual code. In a nutshell, you invert your development flow by following the Red-Green-Refactor technique, which goes like this:

  1. You write a failing test (red).
  2. You write just enough code to make your test pass (green).
  3. You refactor that code to improve the design while ensuring that all of the tests still pass.

Note

We explore the meaning of refactoring in the next section.
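As a quick illustration, here is what the end of one Red-Green-Refactor cycle could look like; the PriceCalculator class and its discount rule are made up for this example:

```csharp
public class PriceCalculatorTest
{
    // 1. This test is written first; it fails (red) while the
    //    PriceCalculator class does not exist yet.
    [Fact]
    public void Should_apply_a_ten_percent_discount()
    {
        var calculator = new PriceCalculator();
        var total = calculator.Total(unitPrice: 100m, quantity: 2, discountRate: 0.1m);
        Assert.Equal(180m, total);
    }
}

// 2. Just enough code is written to make the test pass (green).
// 3. From there, we can refactor (rename, extract methods, and so on)
//    while the test keeps guarding the behavior.
public class PriceCalculator
{
    public decimal Total(decimal unitPrice, int quantity, decimal discountRate)
        => unitPrice * quantity * (1 - discountRate);
}
```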

ATDD is similar to TDD but focuses on acceptance (or functional) tests instead of software units and involves multiple parties like customers, developers, and testers.

BDD is another complementary technique originating from TDD and ATDD. BDD focuses on formulating test cases around application behaviors using spoken language and also involves multiple parties like customers, developers, and testers. Moreover, practitioners of BDD often leverage the given–when–then grammar to formalize their test cases. Because of that, BDD output is in a human-readable format, allowing stakeholders to consult those artifacts.

The given–when–then template defines the way to describe the behavior of a user story or acceptance test, like this:

  • Given one or more preconditions (context)
  • When something happens (behavior)
  • Then one or more observable changes are expected (measurable side effects)
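Mapped onto code, a test following that template could look like this; the shopping cart scenario is hypothetical:

```csharp
public class ShoppingCartTest
{
    [Fact]
    public void Given_an_empty_cart_When_adding_an_item_Then_the_cart_contains_one_item()
    {
        // Given: one or more preconditions (context)
        var cart = new List<string>();

        // When: something happens (behavior)
        cart.Add("book");

        // Then: one or more observable changes are expected (measurable side effects)
        Assert.Single(cart);
    }
}
```

BDD frameworks usually express this in plain language instead, but the same structure shines through even in a plain xUnit test.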

For the sake of simplicity, we stick to unit testing, integration testing, and a tad of TDD in the book. ATDD and BDD are great areas to dig deeper into and can help design better apps; defining precise user-centric specifications can help build only what is needed, prioritize better, and improve communication between parties. Nonetheless, let’s go back to the main track and define refactoring.

Refactoring

Refactoring is about (continually) improving the code without changing its behavior.

Having an automated test suite should help you achieve that goal and should help you discover when you break something. No matter whether you do TDD or not, I do recommend refactoring as often as possible; this helps clean your codebase, and it should also help you get rid of some technical debt at the same time.
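For example, here is a hypothetical before-and-after refactoring; the Greeter class is made up, and the test passes in both versions because the behavior does not change:

```csharp
public class GreeterTest
{
    // This test passes before and after the refactoring; it guards the behavior.
    [Fact]
    public void Should_greet_by_name()
        => Assert.Equal("Hello, Alice!", new Greeter().Greet("Alice"));
}

public class Greeter
{
    // Before refactoring:
    // public string Greet(string name)
    // {
    //     var s = "Hello, ";
    //     s = s + name;
    //     s = s + "!";
    //     return s;
    // }

    // After refactoring: identical behavior, expressed more clearly.
    public string Greet(string name) => $"Hello, {name}!";
}
```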

Okay, but what is technical debt?

Technical debt

Technical debt represents the corners you cut while developing a feature or a system. That happens no matter how hard you try because life is life, and there are delays, deadlines, budgets, and people, including developers.

The most important point is to understand that you cannot avoid technical debt altogether, so it’s better to embrace that fact and learn to live with it instead of fighting it. From that point forward, you can only try to limit the amount of technical debt that you, or someone else, generates.

One way to limit the piling up of technical debt is to refactor the code often. So, factor the refactoring time into your time estimates. Another way is to improve collaboration between all the parties involved. Everyone must work toward the same goal if you want your projects to succeed.

At some point, you will cut best practices short due to external forces like people or time constraints. The key is to come back to it as soon as possible to repay that technical debt, and automated tests are there to help you refactor that code and get rid of that debt elegantly. Depending on the size of your workplace, there will be more or fewer people between you and that decision.

Tip

I realize that some of these things might be out of your control, so you may have to live with more technical debt than you had hoped for. However, even when things are out of your control, nothing stops you from becoming a pioneer and working toward changing the enterprise’s culture for the better. Don’t be afraid to become a leader, an agent of change.

Nevertheless, don’t let the technical debt pile up too high, or you may not be able to pay it back, and at some point, that’s where a project begins to break and fail. Don’t be mistaken; a project in production can be a failure. Delivering a product does not guarantee success, and I’m talking about the quality of the code here, not the amount of generated revenue (I’ll leave that to other people to evaluate).

Next, we look at testing ASP.NET Core applications.

Testing .NET applications

The ASP.NET Core team made our life easier by designing ASP.NET Core for testability; most testing is way easier than before the ASP.NET Core era. Back when .NET Core was in pre-release, I discovered that the .NET team was using xUnit to test their code and that it was the only testing framework available. xUnit has become my favorite testing framework, and I use it throughout the book.

We are not going into full TDD mode, as it would divert our focus from the matter at hand, but I did my best to tag automated testing along for the ride! Why are we talking about tests in an architectural book? Because testability is usually the sign of a good design, which allows some concepts to be proven by using tests instead of words.

Moreover, in many code samples, the test cases are the consumers, making the program lighter without building an entire user interface over it. That allows us to focus on the patterns we are exploring instead of getting our focus scattered over some boilerplate code.

Let’s start by creating a test project.

Creating an xUnit test project

To create a new xUnit test project, you can run the dotnet new xunit command, and the CLI does the job for you by creating a project containing a UnitTest1 class. That command does the same as creating a new xUnit project from Visual Studio.

For unit testing projects, name the project the same as the project you want to test and append .Tests to it. For example, MyProject would have an associated MyProject.Tests project. We explore more details in the Organizing your tests section below.

The template already defines all the required NuGet packages, so you can start testing right away; after adding a reference to your project under test, of course.

Next, we explore some xUnit features.

Getting started with xUnit

In xUnit, the [Fact] attribute is the way to create unique test cases, while the [Theory] attribute is the way to make data-driven test cases. Let’s start with facts.

Facts

Any method with no parameters can become a test method by decorating it with a [Fact] attribute, like this:

public class FactTest
{
    [Fact]
    public void Should_be_equal()
    {
        var expectedValue = 2;
        var actualValue = 2;
        Assert.Equal(expectedValue, actualValue);
    }
}

You can also decorate asynchronous methods with the fact attribute when the code under test needs it:

public class AsyncFactTest
{
    [Fact]
    public async Task Should_be_equal()
    {
        var expectedValue = 2;
        var actualValue = 2;
        await Task.Yield();
        Assert.Equal(expectedValue, actualValue);
    }
}

In the preceding code, the await Task.Yield(); line conceptually represents an asynchronous operation and does nothing more than allow the use of the async/await keywords.

Note

The test classes are nested in the xUnitFeaturesTest class, part of the MyApp namespace, and under the MyApp.Tests project.

From the Visual Studio Test Explorer, that test case looks like this:


Figure 2.3: Test results in Visual Studio

Running the dotnet test CLI command should yield a result similar to the following:

Passed!  - Failed:     0, Passed:    23, Skipped:     0, Total:    23, Duration: 22 ms - MyApp.Tests.dll (net6.0)

As we can read from the preceding output, all tests are passing, none have failed, and none were skipped. It is as simple as that to create test cases using xUnit.

Have you noticed the Assert class? If you are not familiar with it, we explore assertions next.

Assertions

We just learned about facts and will head toward theories next. Meanwhile, let’s visit a few ways to assert correctness. We use barebone xUnit functionality in this section, but you can bring in the assertion library of your choice if you have one.

In xUnit, an assertion throws an exception when it fails. You do not have to handle those exceptions; they are the mechanism that propagates the failure up to the test runner.

We won’t explore all possibilities here, but let’s start with a few common use cases. The code is broken down to make the explanations clearer:

public class AssertionTest
{
    [Fact]
    public void Exploring_xUnit_assertions()
    {
        object obj1 = new MyClass { Name = "Object 1" };
        object obj2 = new MyClass { Name = "Object 1" };
        object obj3 = obj1;
        object? obj4 = default(MyClass);

In the preceding code, we declare a few objects that are used by the assertions next. All variables are of the object type to leverage the IsType method later. The MyClass class is defined after the assertions:

        Assert.Equal(expected: 2, actual: 2);
        Assert.NotEqual(expected: 2, actual: 1);

The preceding two assertions are explicit and compare whether the actual value is equal, or not equal, to the expected value. Assert.Equal is probably the most commonly used assertion method.

Tip

As a rule of thumb, it is better to assert a result (Equal) than to assert that the value is different (NotEqual). Except in a few rare cases, asserting equality yields more accurate results and leaves less room for defects to slip through.

        Assert.Same(obj1, obj3);
        Assert.NotSame(obj2, obj3);
        Assert.Equal(obj1, obj2);

The first two assertions are very similar to the equality ones but assert that the objects are, or are not, the same instance (that is, whether they share the same reference). The third one asserts that the two objects are equal and leverages record classes to make it that easy; obj1 and obj2 are not the same instance but are equal (see Appendix A for more information on record classes):

        Assert.Null(obj4);
        Assert.NotNull(obj3);

These two are also very explicit, asserting that the value is null or not:

        var instanceOfMyClass = Assert.IsType<MyClass>(obj1);
        Assert.Equal(expected: "Object 1", actual: instanceOfMyClass.Name);

The first of the preceding lines asserts that obj1 is of the MyClass type and returns the argument (obj1) converted to the asserted type (MyClass). If the type is incorrect, the IsType method throws an exception:

        var exception = Assert.Throws<SomeCustomException>(
            testCode: () => OperationThatThrows("Toto")
        );
        Assert.Equal(expected: "Toto", actual: exception.Name);
        static void OperationThatThrows(string name)
        {
            throw new SomeCustomException { Name = name };
        }

The Assert.Throws line of the preceding code asserts that the testCode argument throws an exception of the SomeCustomException type. The testCode argument executes the OperationThatThrows inline function, which does just that. It is often important to verify that an exception’s properties, like its message, are well-formatted. Whether you want to assert the error message or another property of the exception, the Throws method makes that easy by returning the exception, as the second line shows: it asserts that the value of the exception.Name property is equal to the argument passed to the inline function ("Toto"). The Throws method behaves like IsType; if the exception is of the wrong type, or no exception is thrown at all, the Throws method throws an exception itself:

    }
    private record class MyClass
    {
        public string? Name { get; set; }
    }
    private class SomeCustomException : Exception
    {
        public string? Name { get; set; }
    }
}

The remaining two classes are utilities used in the tests with nothing special to them; their purpose was to help us play with xUnit assertions.

We covered a few assertion methods, but many others are part of xUnit, like the Collection, Contains, False, and True methods. We use many assertions throughout the book, so if these are still unclear, you will have a chance to learn more about them.

Next, let’s look at data-driven test cases using theories.

Theories

For more complex test cases, we can use theories. A theory is defined in two parts:

  • A [Theory] attribute.
  • At least one of the three following data attributes: [InlineData], [MemberData], or [ClassData].

Interestingly, you are not limited to only one type of data attribute; you can combine as many as you need to feed a theory with the appropriate data.

When writing a theory, your primary constraint is to ensure that the number of values matches the number of parameters defined in the test method. For example, a theory with one parameter must be fed with one value. Let’s look at some examples.

The [InlineData] attribute is the most suitable for constant values or smaller sets of values. Inline data is the most straightforward way of the three because of the proximity of the test values and the test method.

Here is an example of a theory using inline data:

public class InlineDataTest
{
    [Theory]
    [InlineData(1, 1)]
    [InlineData(2, 2)]
    [InlineData(5, 5)]
    public void Should_be_equal(int value1, int value2)
    {
        Assert.Equal(value1, value2);
    }
}

That test method yields three test cases in the Test Explorer, where each can pass or fail individually:


Figure 2.4: Test results

Then, the [MemberData] and [ClassData] attributes can be used when it is impossible to instantiate the data in the attribute, when you want to reuse the data in multiple test methods, or when you want to encapsulate the data away from the test class.

Here is an example of [MemberData] usage:

public class MemberDataTest
{
    public static IEnumerable<object[]> Data => new[]
    {
        new object[] { 1, 2, false },
        new object[] { 2, 2, true },
        new object[] { 3, 3, true },
    };
    public static TheoryData<int, int, bool> TypedData => new TheoryData<int, int, bool>
    {
        { 3, 2, false },
        { 2, 3, false },
        { 5, 5, true },
    };
    [Theory]
    [MemberData(nameof(Data))]
    [MemberData(nameof(TypedData))]
    [MemberData(nameof(ExternalData.GetData), 10, MemberType = typeof(ExternalData))]
    [MemberData(nameof(ExternalData.TypedData), MemberType = typeof(ExternalData))]
    public void Should_be_equal(int value1, int value2, bool shouldBeEqual)
    {
        if (shouldBeEqual)
        {
            Assert.Equal(value1, value2);
        }
        else
        {
            Assert.NotEqual(value1, value2);
        }
    }
    public class ExternalData
    {
        public static IEnumerable<object[]> GetData(int start) => new[]
        {
            new object[] { start, start, true },
            new object[] { start, start + 1, false },
            new object[] { start + 1, start + 1, true },
        };
        public static TheoryData<int, int, bool> TypedData => new TheoryData<int, int, bool>
        {
            { 20, 30, false },
            { 40, 50, false },
            { 50, 50, true },
        };
    }
}

The preceding test case should yield 12 results. If we break it down, the code starts by loading three sets of data from the IEnumerable<object[]> Data property by decorating the test method with the [MemberData(nameof(Data))] attribute. This is how to load data from a member of the class the test method is declared in.

Then, the second property is very similar to the Data property but replaces IEnumerable<object[]> with a TheoryData<…> class, making it more readable and type-safe. This is my preferred way of defining member data and what I recommend you do. Like the first one, we feed those three sets of data to the test method by decorating it with the [MemberData(nameof(TypedData))] attribute. Once again, it is part of the test class.

The third data attribute feeds three more sets of data to the test method. However, that data originates from the GetData method of the ExternalData class, which receives 10 as an argument during the execution (the start parameter). To do that, we must set the MemberType property to the type where the member is located so xUnit knows where to look. In this case, we pass the argument 10 as the second parameter of the MemberData constructor; in other cases, you can pass zero or more arguments there.

Finally, we are doing the same for the ExternalData.TypedData property, which is represented by the [MemberData(nameof(ExternalData.TypedData), MemberType = typeof(ExternalData))] attribute. Once again, the only difference is that the property is defined using TheoryData instead of IEnumerable<object[]>, which makes its intent clearer.

When running the tests, the data provided by the [MemberData] attributes is combined, which yields the following result in the Test Explorer:


Figure 2.5: Test results

These are only a few examples of what we can do with the [MemberData] attribute. The goal is to cover just enough cases to get you started.

Last but not least, the [ClassData] attribute gets its data from a class implementing IEnumerable<object[]> or inheriting from TheoryData<…>. The concept is the same as the other two. Here is an example:

public class ClassDataTest
{
    [Theory]
    [ClassData(typeof(TheoryDataClass))]
    [ClassData(typeof(TheoryTypedDataClass))]
    public void Should_be_equal(int value1, int value2, bool shouldBeEqual)
    {
        if (shouldBeEqual)
        {
            Assert.Equal(value1, value2);
        }
        else
        {
            Assert.NotEqual(value1, value2);
        }
    }
    public class TheoryDataClass : IEnumerable<object[]>
    {
        public IEnumerator<object[]> GetEnumerator()
        {
            yield return new object[] { 1, 2, false };
            yield return new object[] { 2, 2, true };
            yield return new object[] { 3, 3, true };
        }
        IEnumerator IEnumerable.GetEnumerator() => GetEnumerator();
    }
    public class TheoryTypedDataClass : TheoryData<int, int, bool>
    {
        public TheoryTypedDataClass()
        {
            Add(102, 104, false);
        }
    }
}

These are very similar to [MemberData], but instead of pointing to a member, we point to a type.

In TheoryDataClass, implementing the IEnumerable<object[]> interface makes it easy to yield return the results. On the other hand, in the TheoryTypedDataClass class, by inheriting TheoryData, we can leverage a list-like Add method. Once again, I find inheriting from TheoryData more explicit, but either way works with xUnit. You have many options, so choose the best one for your use case.

Here is the result in the Test Explorer, which is very similar to the other attributes:


Figure 2.6: Test Explorer

That’s it for the theories—next, a few last words before organizing our tests.

Closing words

Now that facts, theories, and assertions are out of the way, let’s look at another mechanism xUnit offers to inject dependencies into test classes: fixtures. A fixture allows a dependency to be reused by all of the test methods of a test class by implementing the IClassFixture<T> interface. Fixtures are very helpful for costly dependencies, like creating an in-memory database. With fixtures, you can create the dependency once and use it multiple times. The ValuesControllerTest class in the MyApp.IntegrationTests project shows that in action.
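Here is a minimal sketch of a class fixture; the ExpensiveDependency class is hypothetical and stands in for something costly, like an in-memory database:

```csharp
// A made-up dependency that is expensive to create.
public class ExpensiveDependency
{
    public ExpensiveDependency() { /* costly initialization happens here, once */ }
    public int Value => 42;
}

// xUnit creates one ExpensiveDependency instance for the whole class and
// injects it into the constructor for every test.
public class FixtureTest : IClassFixture<ExpensiveDependency>
{
    private readonly ExpensiveDependency _dependency;

    public FixtureTest(ExpensiveDependency dependency)
        => _dependency = dependency;

    [Fact]
    public void Should_use_the_shared_dependency()
        => Assert.Equal(42, _dependency.Value);
}
```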

It is important to note that xUnit creates a new instance of the test class for each test it runs, so unless you use fixtures, your dependencies are recreated every time.

You can also share the dependency provided by the fixture between multiple test classes by using ICollectionFixture<T>, [Collection], and [CollectionDefinition] instead. We won’t get into the details here, but at least you know it’s possible and know what types to look for when you need something similar.

Finally, if you have worked with other testing frameworks, you might have encountered setup and teardown methods. In xUnit, there are no particular attributes or mechanisms for handling setup and teardown code. Instead, xUnit uses existing OOP concepts:

  • To set up your tests, use the class constructor.
  • To tear down (clean up) your tests, implement IDisposable or IAsyncDisposable and dispose of your resources there.
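Those two points can be sketched like this; the test class below is a made-up example:

```csharp
public class SetupTeardownTest : IDisposable
{
    private readonly List<int> _numbers;

    // Setup: the constructor runs before each test because xUnit
    // creates a new instance of the class per test.
    public SetupTeardownTest()
        => _numbers = new List<int> { 1, 2, 3 };

    [Fact]
    public void Should_start_with_three_numbers()
        => Assert.Equal(3, _numbers.Count);

    // Teardown: Dispose runs after each test.
    public void Dispose()
        => _numbers.Clear();
}
```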

That’s it, xUnit is very simple and powerful, which is the main reason why I adopted it as my main testing framework several years ago and why I chose it for this book.

Next, we learn to write readable test methods.

Arrange, Act, Assert

One well-known method for writing readable tests is Arrange, Act, Assert (AAA or 3A). This technique allows you to clearly define your setup (arrange), the operation under test (act), and your assertions (assert). One efficient way to use this technique is to start by writing the 3A as comments in your test case and then write the test code in between. Here is an example:

[Fact]
public void Should_be_equal()
{
    // Arrange
    var a = 1;
    var b = 2;
    var expectedResult = 3;
    // Act
    var result = a + b;
    // Assert
    Assert.Equal(expectedResult, result);
}

Of course, that test case cannot fail, but the three blocks are easily identifiable with the 3A comments.

In general, you want the Act block of your unit tests to be a single line, making the test focus clear. If you need more than one line, the chances are that something is wrong in the test or the design.

One last tip before learning to organize tests into projects, directories, and files: when the tests are very small (only a few lines), getting rid of the comments might help readability. Furthermore, when you don’t need the Arrange block, please don’t leave the comment there; delete it.

Organizing your tests

There are many ways of organizing test projects inside a solution, and I tend to create a unit test project for each project in the solution and one or more integration test projects. It depends on the type of project.

Since unit tests are directly related to single units of code, it makes sense to organize them into a one-on-one relationship. Since integration tests could also span multiple projects, it is hard to put a hard rule in place. One integration test project could be fine, while one integration test project per project under test could be better in another context. Trust your judgment and change the solution structure if your first choice causes you trouble later.

Note

Some people may recommend creating a single unit test project per solution instead of one per project, and I think that for most solutions, it is a matter of preference. If you have a preferred way to organize yours, by all means, use that approach instead! That said, I find that one unit test project per assembly is more portable and easier to navigate.

Folder-wise, at the solution level, creating the application and its related libraries in an src directory helps isolate the actual solution code from the test projects created under a test directory, like this:

Figure 2.7: The Automated Testing Solution Explorer, displaying how the projects are organized

That’s a well-known and effective way of organizing a solution in the .NET world.

However, sometimes that is not possible. One such use case is microservices written under a single solution. In that case, you might want the tests to live closer to each microservice rather than splitting them between root src and test directories.

Let’s now dig deeper into organizing unit tests.

Unit tests

I find it convenient to create unit tests in the same namespace as the subject under test. That helps align tests with the code without adding any additional using statements. To make file creation easier, you can change the default namespace that Visual Studio uses when creating a new class in your test project by adding <RootNamespace>[Project under test namespace]</RootNamespace> to a PropertyGroup of the test project file (*.csproj), like this:

<PropertyGroup>
  ...
  <RootNamespace>MyApp</RootNamespace>
</PropertyGroup>

By convention, I name test classes [class under test]Test.cs and create them in the same directory structure as in the original project, as depicted by the following solution, which tests the ValuesController class:

Figure 2.8: The Automated Testing Solution Explorer, displaying how tests are organized

Finding tests is easy when you follow that simple rule. For the test code itself, I follow a multi-level structure similar to the following:

  • One test class is named the same as the class under test
    • One nested test class per method to test from the class under test
      • One test method per test case of the method under test

I find this helps to organize tests efficiently by test case while keeping a clear hierarchy. Let’s look at a small test class:

namespace MyApp.IntegrationTests.Controllers 
{
    public class ValuesControllerTest
    {
        public class Get : ValuesControllerTest
        {
            [Fact]
            public void Should_return_the_expected_strings()
            {
                // Arrange
                var sut = new ValuesController();
                // Act
                var result = sut.Get();
                // Assert
                Assert.Collection(result.Value,
                    x => Assert.Equal("value1", x), 
                    x => Assert.Equal("value2", x) 
                );
            }
        }
    }
}

This convention allows you to set up tests step by step. For example, by inheriting the outer class (the ValuesControllerTest class here), you can create top-level private mocks or classes shared by all nested classes. Then, for each method to test, you can modify the setup or create other private test elements in the nested classes (the Get class here). Finally, you can do more configuration per test case inside the test method (the Should_return_the_expected_strings method here).
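The step-by-step setup described above can be sketched as follows. The ProductService and InMemoryProductRepository names are hypothetical, kept minimal to illustrate the structure, not code from the sample project:

```csharp
using System.Collections.Generic;
using Xunit;

// Hypothetical dependency and subject under test.
public class InMemoryProductRepository
{
    public List<string> Products { get; } = new();
}

public class ProductService
{
    private readonly InMemoryProductRepository _repository;
    public ProductService(InMemoryProductRepository repository)
        => _repository = repository;
    public void CreateProduct(string name)
        => _repository.Products.Add(name);
}

public class ProductServiceTest
{
    // Created for each test and shared with all nested classes.
    private readonly InMemoryProductRepository _repository = new();

    public class CreateProduct : ProductServiceTest
    {
        [Fact]
        public void Should_persist_the_new_product()
        {
            // Arrange
            var sut = new ProductService(_repository);

            // Act
            sut.CreateProduct("A product name");

            // Assert
            Assert.Single(_repository.Products);
        }
    }
}
```

Because the nested CreateProduct class inherits from ProductServiceTest, every test case it contains can use the shared _repository field without duplicating the setup.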

One word of advice: don’t go too hard on reusability inside your test classes, as it can make tests harder to read for an external reader, such as a reviewer or another developer who needs to work in that area. Unit tests should remain clear, small, and easy to read: a unit of code testing another unit of code.

Now that we have explored organizing unit tests, let’s have a look at integration tests.

Integration tests

Integration tests are harder to organize because they depend on multiple units and can cross project boundaries and interact with various dependencies.

As mentioned before, you can create one integration test project for most simple solutions or many for more complex scenarios. When writing many integration tests without crossing project boundaries, I’d look at creating one integration test project per project to test by following a similar convention as with unit tests: [Project under test].IntegrationTests.

Inside those projects, it depends on how you want to attack the problem and the structure of the solution itself. Start by identifying the features under test. Name the test classes in a way that mimics your requirements, organize those into sub-folders (maybe a sub-unit of the requirements), and code test cases as methods. You can also leverage nested classes, as we did with unit tests.

Next, we implement an integration test by leveraging ASP.NET Core features.

ASP.NET Core integration testing

Microsoft built ASP.NET Core from the ground up, fixing and improving so many things that I cannot enumerate them all here, including testability. Let’s start by talking about the structure of a .NET program. There are two ways to structure your program:

  • The classic ASP.NET Core Program and the Startup classes. You might find this model in existing projects (created prior to .NET 6).
  • The minimal hosting model introduced in .NET 6 encourages you to write the startup code in the Program.cs file by leveraging top-level statements. You will most likely find this model in new projects (created after the release of .NET 6).

No matter how you choose to write your program, that is where you define how the application boots and what it is composed of. Moreover, we can leverage the same testing tools more or less seamlessly.

The scope of our integration test is to call a controller’s endpoint over HTTP and assert the response. Luckily, in ASP.NET Core 2.1, the .NET team added the WebApplicationFactory<TEntryPoint> class to make the integration testing of web applications easier. With that class, we can boot up an ASP.NET Core application in memory and query it using the supplied HttpClient, all of that in just a few lines of code. The class also provides extension points to configure the server, such as replacing implementations with mocks, stubs, or any other test-specific elements that we may require.
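As an illustration of those extension points, the factory’s WithWebHostBuilder method lets a test customize the in-memory server before creating the client. The following is a sketch only: the IClock service and its FakeClock test double are hypothetical and not part of the sample project.

```csharp
using System;
using System.Net.Http;
using Microsoft.AspNetCore.Mvc.Testing;
using Microsoft.Extensions.DependencyInjection;
using Xunit;

// Hypothetical application service and its test double.
public interface IClock { DateTime UtcNow { get; } }
public class FakeClock : IClock { public DateTime UtcNow => new(2022, 1, 1); }

public class StubbedDependencyTest : IClassFixture<WebApplicationFactory<Startup>>
{
    private readonly HttpClient _httpClient;

    public StubbedDependencyTest(WebApplicationFactory<Startup> factory)
    {
        // Replace the real IClock registration with the fake before
        // the in-memory server starts, then create the client.
        _httpClient = factory
            .WithWebHostBuilder(builder => builder.ConfigureServices(services =>
            {
                services.AddSingleton<IClock, FakeClock>();
            }))
            .CreateClient();
    }
}
```

Tests in such a class then exercise the application against the stubbed dependency instead of the real one.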

Classic web application

In a classic ASP.NET Core application, the TEntryPoint generic parameter is usually the Startup or Program class of the project under test, but it could be anything. I created a few test cases in the Automated Testing solution, under the MyApp.IntegrationTests project, to show you this functionality.

Here is the broken-down code:

namespace MyApp.IntegrationTests.Controllers
{
    public class ValuesControllerTest : 
    IClassFixture<WebApplicationFactory<Startup>>
    {
        private readonly HttpClient _httpClient;
        public ValuesControllerTest(WebApplicationFactory<Startup>
                                    webApplicationFactory)
        {
            _httpClient = webApplicationFactory.CreateClient();
        }

In the preceding class declaration, we are injecting a WebApplicationFactory<Startup> object into the constructor. That is possible because the class implements the IClassFixture<T> interface. We could also use the factory to configure the test server, but since that was not needed here, we only keep a reference to the HttpClient, which is preconfigured to connect to the in-memory test server:

public class Get : ValuesControllerTest {
    public Get(WebApplicationFactory<Startup> webApplicationFactory) : base(webApplicationFactory) { }
    [Fact]
    public async Task Should_respond_a_status_200_OK()
    {
        // Act
        var result = await _httpClient.GetAsync("/api/values");
        // Assert
        Assert.Equal(HttpStatusCode.OK, result.StatusCode);
    }

In the preceding test case, we use HttpClient to query the http://localhost/api/values URI, accessible through the in-memory server. Then, we assert that the status code of the HTTP response was a success (200 OK):

    [Fact]
    public async Task Should_respond_the_expected_strings()
    {
        // Act
        var result = await _httpClient
             .GetFromJsonAsync<string[]>("/api/values"); 
        // Assert
        Assert.Collection(result,
            x => Assert.Equal("value1", x),
            x => Assert.Equal("value2", x)
        );
    }
        }
    }
}

This last test sends an HTTP request to the in-memory server, like the previous one, but deserializes the body’s content as a string[] to ensure the values are as expected, instead of only validating the status code. If you’ve worked with an HttpClient before, this should be very familiar to you.

When running those tests, an in-memory web server starts. Then, HTTP requests are sent to that server, testing the complete application. In this case, the tests are simple, but you can create more complex test cases in more complex programs.

You can run .NET tests within Visual Studio or use the CLI by running the dotnet test command. In VS Code, you can use the CLI or find an extension to help with test runs.
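For reference, here are a few common CLI invocations. This is a sketch that assumes the .NET SDK is installed; the filter value is a hypothetical example:

```shell
# Run every test discovered in the current directory or solution.
dotnet test

# Run only the tests whose fully qualified name contains the filter value.
dotnet test --filter "FullyQualifiedName~ValuesControllerTest"

# Show detailed per-test results in the console.
dotnet test --logger "console;verbosity=detailed"
```

The --filter syntax also supports operators such as = and !=, which is handy for running a single test case from a large suite.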

Next, we explore how to do the same for minimal APIs.

Minimal hosting

If you are using the minimal hosting model, you must use a workaround because the auto-generated Program class is internal. We explore a few workarounds here, leveraging minimal APIs, so you can pick the one you prefer. These also work with regular MVC projects.

The first workaround is to use any other class in the assembly as the TEntryPoint of WebApplicationFactory<TEntryPoint> instead of the Program or Startup class. This makes what WebApplicationFactory does a little less explicit, but that’s all.

The second workaround is to add a line at the bottom of the Program.cs file (or anywhere else in the project, for that matter) that makes the internal auto-generated Program class public so that the compiler does not complain about inconsistent accessibility.

Here is the complete Program.cs file with that added line (highlighted):

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();
app.MapGet("/", () => "Hello World!");
app.Run();
public partial class Program { }

Then, the test cases are very similar to the ones of the classic web application that we just explored:

namespace MyMinimalApiApp.IntegrationTests
{
    public class ProgramTest : IClassFixture<WebApplicationFactory<Program>>
    {
        private readonly HttpClient _httpClient;
        public ProgramTest(WebApplicationFactory<Program> webApplicationFactory)
        {
            _httpClient = webApplicationFactory.CreateClient();
        }
        public class Get : ProgramTest
        {
            public Get(WebApplicationFactory<Program> webApplicationFactory) : base(webApplicationFactory) { }
            [Fact]
            public async Task Should_respond_a_status_200_OK()
            {
                var result = await _httpClient.GetAsync("/");
                Assert.Equal(HttpStatusCode.OK, result.StatusCode);
            }
            [Fact]
            public async Task Should_respond_hello_world()
            {
                var result = await _httpClient
                    .GetStringAsync("/");
                Assert.Equal("Hello World!", result );
            }
        }
    }
}

The only change is the expected result as the endpoint returns a text/plain string instead of a collection of strings serialized as JSON. If the two endpoints were producing the same thing, those parts of the tests would have also been the same.

The third workaround is to instantiate WebApplicationFactory manually instead of relying on a class fixture. Since the Program class either does not exist or is inaccessible, we can use the AutoGeneratedProgram class or any other class from that assembly as the TEntryPoint. I prefer the Program or AutoGeneratedProgram class because it makes the intent clearer, but I ran into some issues when using AutoGeneratedProgram with .NET 6 builds.

Experiment

I found that executing the two tests in the ProgramTestWithoutFixture class always takes a few more milliseconds than using the IClassFixture. The same happened for the tests in the ProgramTestWithoutFixtureNoReuse class, which always take a few more milliseconds than the other two classes. This experiment led me to think it could get much worse with more than two tests, so I recommend sticking to class fixtures.

The code is very similar to the previous workaround, but WebApplicationFactory is instantiated manually instead:

public class ProgramTestWithoutFixture : IAsyncDisposable
{
    private readonly WebApplicationFactory<SomeOtherClass> _webApplicationFactory;
    private readonly HttpClient _httpClient;
    public ProgramTestWithoutFixture()
    {
        _webApplicationFactory = new WebApplicationFactory<SomeOtherClass>();
        _httpClient = _webApplicationFactory.CreateClient();
    }
    //…
}

I omitted the test cases in the preceding code block because they are the same as the previous workarounds. The full source code is available on GitHub: https://adpg.link/vzkr.
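Since the class implements IAsyncDisposable, the factory and client created in the constructor must be released. A plausible sketch of the omitted disposal code, assuming the .NET 6 WebApplicationFactory that itself implements IAsyncDisposable, could look like this (not necessarily the exact code from the repository):

```csharp
public async ValueTask DisposeAsync()
{
    // Dispose the client first, then the in-memory server it points to.
    _httpClient.Dispose();
    await _webApplicationFactory.DisposeAsync();
}
```

Without a class fixture, this cleanup runs after every test, which is part of why this approach is slower than sharing the factory.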

And that’s it. We have covered multiple simple yet elegant workarounds for integration testing minimal APIs. Next, we explore a few testing principles before moving to architectural principles in the next chapter.

Important testing principles

One essential thing to remember when writing tests is to test use cases, not the code itself; we are testing features’ correctness, not code correctness. Of course, if the expected outcome of a feature is correct, that also means the codebase is correct. However, the reverse is not always true; correct code may still yield an incorrect outcome. Also, remember that code costs money to write while features deliver value.

To help with that, the test requirements usually revolve around the inputs and outputs. When specific values go into your subject under test, you expect particular values to come out. Whether you are testing a simple Add method where the ins are two or more numbers and the out is the sum of those numbers, or a more complex feature where the ins come from a form and the out is the record getting persisted in a database, most of the time, we are testing the ins and outs.

That’s the first principle you must know. The interaction between two components or two systems should always be tied to a data contract, whether you use a classic request/response model over a REST API (where the data contract is the API signature), an event-driven architecture (where the data contract is the event signature), or, even simpler, ComponentA returns an object that is injected into ComponentB. The correctness of those interactions gravitates around the ins and outs. Test those as units, or test the integration between those units, and you should be well on your way to writing strong test suites.

The second concept I want you to learn is a trick to divide those units: everything in a program is either a query or a command. No matter how you organize your code, from a simple single-file application to a microservices-based Netflix clone, all operations, single or compounded, are queries or commands. Thinking about a system this way should help you test the ins and outs.

But what’s a query? A query means getting some ins, like the unique identifier of a database record, and getting some outs, like the record itself. It could also be some part of the code asking how many times it should retry an HTTP GET request when it fails. These are the easiest to test: you push some ins and receive some outs to assert.

And what’s a command? We could see a command as a unit of code that mutates the state of an entity. A command could be to hide a panel in a GUI or update a record in a database. It does not matter what the command does as much as the fact that it changes something somewhere.
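The query/command distinction can be sketched in a few lines. The ProductRepository class and its members below are hypothetical, used only to illustrate the two kinds of operations:

```csharp
using System.Collections.Generic;

public class ProductRepository
{
    private readonly Dictionary<int, string> _products = new();

    // A query: takes an input (the identifier) and returns an output
    // (the record) without changing any state.
    public string? FindById(int id)
        => _products.TryGetValue(id, out var name) ? name : null;

    // A command: mutates the state of the system; here, it persists a record.
    public void Save(int id, string name)
        => _products[id] = name;
}
```

Testing the query means pushing ins and asserting the outs; testing the command means executing it and then asserting the resulting state change.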

Now that we have laid this out, it should become easier to write tests if you divide your code into small units, like commands and queries. But what if a unit must perform multiple operations, such as reading from a database and then sending multiple commands? Well, if you create multiple smaller units, and then another unit that interacts with those building blocks, you should be able to test each piece in isolation as well as their integration.

In a nutshell, when writing automated tests, we assert the output of the unit undergoing testing. That unit optionally takes input parameters and is either a query or a command.

We explore numerous techniques throughout the book to help you achieve that level of separation, starting with architectural principles in the next chapter.

Summary

This chapter covered automated testing such as unit and integration tests. We also briefly covered end-to-end tests, but it would be tough to cover that in a few pages since this is tied to an application and its implementation. Nonetheless, all is not lost since the notions covered to write integration tests can also be used for end-to-end testing.

We looked at xUnit, the testing framework used throughout the book, and a way of organizing tests. We explored ways to pick the correct type of test and some guidelines about choosing the right quantity of each kind of test. Then we saw how ASP.NET Core makes it easier than ever before to test our web applications by allowing us to mount and run our ASP.NET Core application in memory. Finally, we explored some high-level concepts that should guide you in writing testable, flexible, and reliable programs.

Now that we have talked about testing, we are ready to explore a few architectural principles to help us increase programs’ testability. Those are a crucial part of modern software engineering and go hand in hand with automated testing.

Questions

Let’s take a look at a few practice questions:

  1. Is it true that in TDD, you write tests before the code to be tested?
  2. What is the role of unit tests?
  3. How big can a unit test be?
  4. What type of test is usually used when the subject under test has to access a database?
  5. Is doing TDD required?

Further reading

Here are some links to build upon what we have learned in the chapter:
