Chapter 3. Building a Microservice with ASP.NET Core

Up to this point in the book we have only been scratching at the surface of the capabilities of .NET Core. In this chapter we’re going to expand on the simple “hello world” middleware we’ve built and create our first microservice.

We’ll spend a little time defining what a microservice is (and is not), and discuss concepts like API First and Test-Driven Development. Then we’ll build a sample service that manages teams and team membership.

Microservices Defined

Today, as I have been quoted as saying, we can’t swing a dead cat without hitting a microservice.1

The word is everywhere, and unfortunately, it is as overloaded and potentially misleading as the acronym SOA was years ago. Every time we see the word, we’re left with questions like, “What is a service, really?” and “Just how micro is micro?” and “Why don’t we just call them ‘services’?”

These are all great questions that we should be asking. In many cases, the answer is “It depends.” However, in my years of building modular and highly scalable applications, I’ve come up with a definition of microservice:

A microservice is a standalone unit of deployment that supports a specific business goal. It interacts with backing services, and allows interaction through semantically versioned, well-defined APIs. Its defining characteristic is a strict adherence to the Single Responsibility Principle (SRP).

This might seem like a somewhat controversial definition. You’ll notice it doesn’t mention REST or JSON or XML anywhere. You can have a microservice that interacts with consumers via queues, distributed messaging, or traditional RESTful APIs. The shape and nature of the service’s API is not the thing that qualifies it as a service or as “micro.”

It is a service because it, as the name implies, provides a service. It is micro because it does one and only one thing. It’s not micro because it consumes a small amount of RAM, or because it consumes a small amount of disk, or because it was handcrafted by artisanal, free-range, grass-fed developers.

The definition also makes a point to mention semantic versioning. You cannot continually grow and maintain an organically changing microservice ecosystem without strict adherence to semantic versioning and API compatibility rules. You’re welcome to disagree, but consider this: are you building a service that will be deployed to production once, in a vacuum, or building an app that will have dozens of services deployed to production frequently with independent release cycles? If you answered the latter, then you should spend some time considering your API versioning and backward compatibility strategies.
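The compatibility rule that semantic versioning gives you can be made concrete with a minimal sketch. The helper below is purely illustrative (it is not part of the team service, and `SemVer` is a name invented for this example); it encodes the usual convention that breaking changes bump the major version, so a consumer is safe only when the major versions match and the offered minor version is at least what the consumer was built against:

```csharp
using System;

// Hypothetical helper illustrating the usual semver compatibility rule:
// breaking changes bump MAJOR, additive changes bump MINOR, fixes bump PATCH.
public static class SemVer
{
    // Returns true if a consumer built against 'required' can use 'offered'.
    public static bool IsCompatible(Version required, Version offered)
    {
        return offered.Major == required.Major   // same breaking-change line
            && offered.Minor >= required.Minor;  // has at least the features we need
    }
}

public static class Program
{
    public static void Main()
    {
        Console.WriteLine(SemVer.IsCompatible(new Version(2, 1, 0), new Version(2, 3, 5))); // True
        Console.WriteLine(SemVer.IsCompatible(new Version(2, 1, 0), new Version(3, 0, 0))); // False
    }
}
```

A consumer pinned to `2.1.0` can take any `2.x` at or above `2.1`, but never a `3.x`: that is the firm contract that lets teams release independently.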

When building a microservice from scratch, ask yourself about the frequency of changes you expect to make to this service and how much of the service might be unrelated to the change (and thus potentially a candidate for being in a separate service).

This brings to mind Sam Newman’s golden rule of microservices change:

Can you make a change to a service and deploy it by itself without changing anything else?

Sam Newman, Building Microservices (O’Reilly)

There’s no magic to microservices. In fact, most of us simply consider the current trend toward microservices as just the way Service-Oriented Architecture (SOA) should have been done originally.

The small footprint, easy deployment, and stateless nature of true microservices make them ideal for operating in an elastically scaling cloud environment, which is the focus of this book.

Introducing the Team Service

As fantastic as the typical “hello world” sample might be, it has no practical value whatsoever. More importantly, since we’re building our sample with testing in mind, we need real functionality to test. As such, we’re going to build a real, semi-useful service that attempts to solve a real problem.

Whether it’s sales teams, development teams, support, or any other kind of team, companies with geographically distributed team members often have a difficult time keeping track of those members: their locations, contact information, project assignments, and so forth.

The team service aims to help solve this problem. The service will allow clients to query team lists as well as team members and their details. It should also be possible to add or remove teams and team members.

When designing this service, I tried to think of the many different team visualizations that should be supported by this service, including a map with pins for each team member as well as traditional lists and tables.

In the interest of keeping this sample realistic, individuals should be able to belong to more than one team at a time. If removing a person from a team leaves that person orphaned (belonging to no team at all), then that person will be removed from the system entirely. This might not be optimal, but we have to start somewhere, and starting with an imperfect solution is far better than waiting for a perfect one.

API First Development

Before we write a single line of code we’re going to go through the exercise of defining our service’s API. In this section, we’ll talk about why API First makes sense as a development strategy for teams working on microservices, and then we’ll talk about the API for our sample team management service.

Why API First?

If your team is building a “hello world” application that sits in isolation and has no interaction with any other system, then the API First concept isn’t going to buy you much.

But in the real world, especially when we’re deploying all of our services onto a platform that abstracts away our infrastructure (like Kubernetes, AWS, GCP, Cloud Foundry, etc.), even the simplest of services is going to consume other services and will be consumed by services or applications.

Imagine we’re building a service used by the services owned and maintained by two other teams. In turn, our service relies upon two more services. Each of the upstream and downstream services is also part of a dependency chain that may or may not be linear. This complexity wasn’t a problem back in the day when we would schedule our releases six months out and release everything at the same time.

This is not how modern software is built. We’re striving for an environment where each of our teams can add features, fix bugs, make enhancements, and deploy to production live without impacting any other services. Ideally we also want to be able to perform this deployment with zero downtime, without even affecting any live consumers of our service.

If the organization is relying on shared code and other sources of tight, internal coupling between these services, then we run the risk of breaking all kinds of things every time we deploy, and we return to the dark days where we faced a production release with the same sense of dread and fear as a zombie apocalypse.

On the other hand, if every team agrees to conform to published, well-documented and semantically versioned2 APIs as a firm contract, then it frees up each team to work on its own release cadence. Following the rules of semantic versioning will allow teams to enhance their APIs without breaking ones already in use by existing consumers.

You may find that adherence to practices like API First is far more important as a foundation to the success of a microservice ecosystem than the technology or code used to construct it.

If you’re looking for guidance on the mechanics of documenting and sharing APIs, you might want to check out API Blueprint and websites like Apiary. There are innumerable other standards, such as the OpenAPI Specification (formerly known as Swagger), but I tend to favor the simplicity offered by documenting APIs with Markdown. Your mileage may vary, and the more rigid format of the OpenAPI Spec may be more suitable for your needs.

The Team Service API

In general, there is nothing requiring the API for a microservice to be RESTful. The API can be a contract defining message queues and message payload formats, or it can be another form of messaging that might include a technology like Google’s Protocol Buffers.3 The point is that RESTful APIs are just one of many ways in which to expose an API from a service.

That said, we’re going to be using RESTful APIs for most (but not all) of the services in this book. Our team service API will expose a root resource called teams. Beneath that we will have resources that allow consumers to query and manipulate the teams themselves as well as to add and remove members of teams.

For the purposes of simplicity in this chapter, there is no security involved, so any consumer can use any resource. Table 3-1 represents our public API (we’ll show the JSON payload formats later).

Table 3-1. Team service API
Resource Method Description
/teams GET Gets a list of all teams
/teams/{id} GET Gets details for a single team
/teams/{id}/members GET Gets members of a team
/teams POST Creates a new team
/teams/{id}/members POST Adds a member to a team
/teams/{id} PUT Updates team properties
/teams/{id}/members/{memberId} PUT Updates member properties
/teams/{id}/members/{memberId} DELETE Removes a member from the team
/teams/{id} DELETE Deletes an entire team

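The JSON payload formats are shown later in the book; purely as an illustration of a plausible shape (my assumption here, not the book's final format), serializing the chapter's `Team` model with camelCase naming would produce something like the output below. Note that this sketch uses `System.Text.Json`, which ships with modern .NET; the netcoreapp1.1-era code in this chapter would have used JSON.NET instead.

```csharp
using System;
using System.Collections.Generic;
using System.Text.Json;

// Simplified copies of the chapter's model classes, for illustration only.
public class Member
{
    public Guid ID { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

public class Team
{
    public string Name { get; set; }
    public Guid ID { get; set; }
    public ICollection<Member> Members { get; set; } = new List<Member>();
}

public static class Program
{
    public static void Main()
    {
        var team = new Team { Name = "QA", ID = Guid.NewGuid() };
        team.Members.Add(new Member
        {
            ID = Guid.NewGuid(),
            FirstName = "Statler",
            LastName = "Waldorf"
        });

        var options = new JsonSerializerOptions
        {
            PropertyNamingPolicy = JsonNamingPolicy.CamelCase
        };
        Console.WriteLine(JsonSerializer.Serialize(team, options));
        // e.g. {"name":"QA","id":"…","members":[{"id":"…","firstName":"Statler","lastName":"Waldorf"}]}
    }
}
```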
Before settling on a final API design, we could use a website like Apiary to take our API Blueprint documentation and turn it into a functioning stub that we can play with until we’re satisfied that the API feels right. This exercise might seem like a waste of time, but we would rather discover ugly smells in an API using an automated tool first rather than discovering them after we’ve already written a test suite to certify that our (ugly) API works.

For example, we might use a mocking tool like Apiary to eventually discover that there’s no way to get to a member’s information without first knowing the ID of a team to which she belongs. This might irritate us, or we might be fine with it. The important piece is that this discovery might not have happened until too late if we didn’t at least simulate exercising the API for common client use cases.

Test-First Controller Development

In this section of the chapter we’re going to build a controller to support our newly defined team API. While the focus of this book is not on TDD and I may choose not to show the code for tests in some chapters, I did want to go through the exercise of building a controller test-first so you can experience this in ASP.NET Core.

To start with, we can copy over a couple of the scaffolding classes we created in the previous chapter to create an empty project. I’m trying to avoid using wizards and IDEs as a starting point to avoid locking people into any one platform that would negate the advantages of Core’s cross-platform nature. It is also incredibly valuable to know what the wizards are doing and why. Think of this like the math teacher withholding the “easy way” until you’ve understood why the “hard way” works.

In classic Test-Driven Development (TDD), we start with a failing test. We then make the test pass by writing just enough code to make the light go green. Then we write another failing test, and make that one pass. We repeat the entire process until the passing tests cover the full API design from the preceding table, with test cases asserting both the positive and negative behavior of everything the API must support.

We need to write tests that certify that if we send garbage data, we get an HTTP 400 (bad request) back. We need to write tests that certify that all of our controller methods behave as expected in the presence of missing, corrupt, or otherwise invalid data.

One of the key tenets of TDD that a lot of people don’t pick up on is that a compilation failure is a failing test. If we write a test asserting that our controller returns some piece of data and the controller doesn’t yet exist, that’s still a failing test. We make that test pass by creating the controller class, and adding a method that returns just enough data to make the test pass. From there, we can continue iterating through expanding the test to go through the fail–pass–repeat cycle.

This cycle relies on very small iterations, but adhering to it and building habits around it can dramatically increase your confidence in your code. Confidence in your code is a key factor in making rapid and automated releases successful.

If you want to learn more about TDD in general, then I highly recommend reading Test-Driven Development: By Example by Kent Beck (Addison-Wesley Professional). The book is old, but the concepts outlined within it still hold true today. Further, if you’re curious about the naming conventions used for the tests in this book, they follow the same guidelines as those used by the Microsoft engineering team that built ASP.NET Core.

Each of our unit test methods will have three components:

Arrange

Perform any setup necessary to prepare the test.

Act

Execute the code under test.

Assert

Verify the test conditions in order to determine pass/fail.

The “arrange, act, assert” pattern is a pretty common one for organizing the code in unit tests but, like all patterns, is a recommendation and doesn’t apply universally.
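The three phases can be sketched in a few lines. This toy example uses plain C# with a hand-rolled assertion so it stands alone; the chapter's real tests use xUnit's `[Fact]` attribute and `Assert` class instead:

```csharp
using System;
using System.Collections.Generic;

public static class Program
{
    public static void Main()
    {
        // Arrange: perform any setup necessary to prepare the test.
        var teams = new List<string> { "one", "two" };

        // Act: execute the code under test.
        teams.Add("three");

        // Assert: verify the test conditions in order to determine pass/fail.
        if (teams.Count != 3) throw new Exception("expected 3 teams");
        Console.WriteLine("test passed");
    }
}
```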

Our first test is going to be very simple, though as you’ll see, it’s often the one that takes the most time because we’re starting with nothing. This test will be called QueryTeamListReturnsCorrectTeams. The first thing this method does is verify that we get any result back from the controller. We’ll want to verify more than that eventually, but we have to start somewhere, and that’s with a failing test.

First, we need a test project. This is going to be a separate module that contains our tests. Per Microsoft convention, if we have an assembly called Foo, then the test assembly is called Foo.Tests.

In our case, we are building applications for a fictitious company called the Statler and Waldorf Corporation. As such, our team service will be in a project called StatlerWaldorfCorp.TeamService and the tests will be in StatlerWaldorfCorp.TeamService.Tests. If you’re curious about the inspiration for this company, it is a combination of the appreciation of cranky old hecklers and the Muppets of the same name.

To set this up, we’ll create a single root directory that will contain both the main project and the test project. The main project will be in src/StatlerWaldorfCorp.TeamService and the test project will be in test/StatlerWaldorfCorp.TeamService.Tests. To get started, we’re just going to reuse the Program.cs and Startup.cs boilerplate from the last chapter so that we just have something to compile, so we can add a reference to it from our test module.

To give you an idea of the solution that we’re building toward, Example 3-1 is an illustration of the directory structure and the files that we’ll be building.

Example 3-1. Eventual project structure for the team service
├── src
│   └── StatlerWaldorfCorp.TeamService
│       ├── Controllers
│       │   └── TeamsController.cs
│       ├── Models
│       │   ├── Member.cs
│       │   └── Team.cs
│       ├── Persistence
│       │   ├── ITeamRepository.cs
│       │   └── MemoryTeamRepository.cs
│       ├── Program.cs
│       ├── Startup.cs
│       └── StatlerWaldorfCorp.TeamService.csproj
└── test
    └── StatlerWaldorfCorp.TeamService.Tests
        ├── StatlerWaldorfCorp.TeamService.Tests.csproj
        └── TeamsControllerTest.cs

If you’re using the full version of Visual Studio, then creating this project structure is fairly easy to do, as is creating and manipulating the relevant .csproj files. A point on which I will continue to harp is that for automation and simplicity, all of this needs to be something you can do with simple text editors and command-line tools.

As such, Example 3-2 contains the XML for the StatlerWaldorfCorp.TeamService.Tests.csproj project file. Pay special attention to how the test project references the project under test and how we do not have to redeclare dependencies we inherit from the main project.

Example 3-2. StatlerWaldorfCorp.TeamService.Tests.csproj
<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp1.1</TargetFramework>
  </PropertyGroup>

  <ItemGroup>
    <ProjectReference Include="../../src/StatlerWaldorfCorp.TeamService/StatlerWaldorfCorp.TeamService.csproj" />
    <PackageReference Include="Microsoft.NET.Test.Sdk" 
       Version="15.0.0-preview-20170210-02" />
    <PackageReference Include="xunit" 
       Version="2.2.0" />
    <PackageReference Include="xunit.runner.visualstudio" 
       Version="2.2.0" />
  </ItemGroup>
</Project>

Before we create a controller test and a controller, let’s just create a class for the Team model, as in Example 3-3.

Example 3-3. src/StatlerWaldorfCorp.TeamService/Models/Team.cs
using System;
using System.Collections.Generic;

namespace StatlerWaldorfCorp.TeamService.Models
{
    public class Team {

        public string Name { get; set; }
        public Guid ID { get; set; }
        public ICollection<Member> Members { get; set; }

        public Team()
        {
            this.Members = new List<Member>();
        }

        public Team(string name) : this()
        {
            this.Name = name;
        }

        public Team(string name, Guid id)  : this(name) 
        {
            this.ID = id;
        }

        public override string ToString() {
            return this.Name;
        }
    }
}

Since each team is going to need a collection of Member objects in order to compile, let’s create the Member class now as well, as in Example 3-4.

Example 3-4. src/StatlerWaldorfCorp.TeamService/Models/Member.cs
using System;

namespace StatlerWaldorfCorp.TeamService.Models
{
    public class Member {
        public Guid ID { get; set; }
        public string FirstName { get; set; }
        public string LastName { get; set; }

        public Member() {
        }

        public Member(Guid id) : this() {
            this.ID = id;
        }

        public Member(string firstName, 
          string lastName, Guid id) : this(id) {
            this.FirstName = firstName;
            this.LastName = lastName;
        }        

        public override string ToString() {
            return this.LastName;
        }        
    }
}

In a complete, 100% pure TDD world, we would have created the failing test first and then gone and created all of the things we need to allow it to compile. Since these are just simple model objects, I don’t mind skipping a few steps.

Now let’s create our first failing test, shown in Example 3-5.

Example 3-5. test/StatlerWaldorfCorp.TeamService.Tests/TeamsControllerTest.cs
using Xunit;
using StatlerWaldorfCorp.TeamService.Models;
using System.Collections.Generic;

namespace StatlerWaldorfCorp.TeamService
{
    public class TeamsControllerTest
    {
        TeamsController controller = new TeamsController();

        [Fact]
        public void QueryTeamListReturnsCorrectTeams()
        {
            List<Team> teams = new List<Team>(
               controller.GetAllTeams());
        }
    }
}

To see this test fail, open a terminal and cd to the test/StatlerWaldorfCorp.TeamService.Tests directory. Then run the following commands:

$ dotnet restore
...
$ dotnet test
...

The dotnet test command invokes the test runner and executes all discovered tests. We use dotnet restore to make sure that the test runner has all the dependencies and transitive dependencies necessary to build and run. As expected, the test command will fail if either the test code or the project being tested fails to compile.

This test doesn’t compile because we’re missing the controller we want to test. To make this pass, we’re going to need to add a TeamsController to our main project that looks like Example 3-6.

Example 3-6. src/StatlerWaldorfCorp.TeamService/Controllers/TeamsController.cs
using System;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Mvc;
using System.Collections.Generic;
using System.Linq;
using StatlerWaldorfCorp.TeamService.Models;

namespace StatlerWaldorfCorp.TeamService
{
  public class TeamsController
  {  
    public TeamsController() {
    }

    [HttpGet]
    public IEnumerable<Team> GetAllTeams()
    {                      
      return Enumerable.Empty<Team>();
    }
  }
}

With this first test passing (it just asserts that we can call the method), we want to add a new assertion that we know is going to fail. In this case, we want to check that we get the right number of teams in response. Since we don’t (yet) have a mock, we’ll come up with an arbitrary number:

List<Team> teams = new List<Team>(controller.GetAllTeams());
Assert.Equal(2, teams.Count);

Now let’s make this test pass by hardcoding some random nonsense in the controller. A lot of people like to skip this step because they’re in a hurry, they’re over-caffeinated, or they don’t fully appreciate the iterative nature of TDD.

You don’t need those kinds of people in your life.

Working in small iterations, writing just enough code to make a test pass, is the part of the discipline that not only makes TDD work but also builds high confidence in the tested code. I also find that writing just enough code to make something pass helps me avoid creating bloated APIs and lets me refine my APIs and interfaces as I test.

Example 3-7 shows the updated TeamsController class to support the new test.

Example 3-7. Updated src/StatlerWaldorfCorp.TeamService/Controllers/TeamsController.cs
using System;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Mvc;
using System.Collections.Generic;
using System.Linq;
using StatlerWaldorfCorp.TeamService.Models;

namespace StatlerWaldorfCorp.TeamService
{
  public class TeamsController
  {  
    public TeamsController() {
    }

    [HttpGet]
    public IEnumerable<Team> GetAllTeams()
    {                      
      return new Team[] { new Team("one"), new Team("two") };
    }
  }
}

There are very few negative tests we can do for a simple GET method that operates on a collection without parameters, so let’s move on to the method for adding a team.

To test this, we’re going to query the team list; we’ll then invoke a new CreateTeam method, and then we’re going to query the team list again. Our assertion should be that our new team is in the list.

In the strictest adherence to TDD, we wouldn’t preemptively change things unless we did so to make a test pass. However, to keep the listings in the book down to a reasonable size I decided to bypass that. So far, our controller hasn’t inherited from a base class, nor has it been returning anything that allows us to control the HTTP response itself (it’s been returning raw values).

This isn’t going to be sustainable, so we’re going to change the way we’re defining our controller methods and reflect our desire for this new pattern in the failing test shown in Example 3-8.

Example 3-8. TeamsControllerTest.cs—the CreateTeamAddsTeamToList test
  [Fact]
  public async Task CreateTeamAddsTeamToList() 
  {
    TeamsController controller = new TeamsController();
    var teams = (IEnumerable<Team>)
      (await controller.GetAllTeams() as ObjectResult).Value;
    List<Team> original = new List<Team>(teams);
            
    Team t = new Team("sample");
    var result = await controller.CreateTeam(t);

    var newTeamsRaw = 
     (IEnumerable<Team>)
       (await controller.GetAllTeams() as ObjectResult).Value;
    
    List<Team> newTeams = new List<Team>(newTeamsRaw);
    Assert.Equal(original.Count + 1, newTeams.Count);
    var sampleTeam = 
     newTeams.FirstOrDefault( 
       target => target.Name == "sample");
    Assert.NotNull(sampleTeam);            
  }

The code here looks a little rough around the edges, but that’s okay for now. While tests are passing, we can refactor both our tests and the code under test.

To make this test pass, we need to create the CreateTeam method on the controller. Once we get into the thick of that method, we’ll need some way to store teams. In a real-world service, we don’t want to do that in memory because that would violate the stateless rule for cloud-native services.

However, for testing it’s ideal because we can easily manufacture any state we like. So, we’ll create a CreateTeam method that is a no-op, and we’ll see that our test now compiles but fails. To make it pass, we’re going to need a repository.

Injecting a Mock Repository

We know that we’re going to have to get our CreateTeamAddsTeamToList test to pass by giving the test suite control over the controller’s internal storage. This is typically done through mocks or through injecting fakes, or a combination of both.

I’ve elided a few of the iterations of test-driven development necessary to get us to the point where we can build an interface to represent the repository and refactor the controller to accept it.

We’re now going to create an interface called ITeamRepository (shown in Example 3-9), which is the interface that will be used by our tests for a fake and eventually by the service project for a real persistence medium, but we won’t code that yet. Remember, we’re not going to code anything that doesn’t convert a failing test into a passing one.

Example 3-9. src/StatlerWaldorfCorp.TeamService/Persistence/ITeamRepository.cs
using System.Collections.Generic;
using StatlerWaldorfCorp.TeamService.Models;

namespace StatlerWaldorfCorp.TeamService.Persistence
{
  public interface ITeamRepository {
       IEnumerable<Team> GetTeams();
       void AddTeam(Team team);
  }
}

We could probably try to predict something more useful than a void return value for AddTeam, but right now we don’t need to. So let’s create an in-memory implementation of this repository interface in the service project, as in Example 3-10.

Example 3-10. src/StatlerWaldorfCorp.TeamService/Persistence/MemoryTeamRepository.cs
using System.Collections.Generic;
using StatlerWaldorfCorp.TeamService.Models;

namespace StatlerWaldorfCorp.TeamService.Persistence
{
  public class MemoryTeamRepository :  ITeamRepository {
    protected static ICollection<Team> teams;

    public MemoryTeamRepository() {
      if(teams == null) {
        teams = new List<Team>();
      }
    }

    public MemoryTeamRepository(ICollection<Team> teams) {
      // Assign to the static field; the parameter shadows it, so a bare
      // "teams = teams" would just assign the parameter to itself.
      MemoryTeamRepository.teams = teams;
    }

    public IEnumerable<Team> GetTeams() {
      return teams; 
    }

    public void AddTeam(Team t) 
    {
      teams.Add(t);
    }
  }
}

If you’re cringing at the sight of a static collection used as a class member, that’s a good thing: you can smell bad code when you’re within range. This is, however, code just good enough to make a test pass. If we intended to use this class for anything other than tests, we’d put it through multiple rounds of refactoring once we had a complete test suite.

Injecting this interface into our controller is actually quite easy. ASP.NET Core already comes equipped with a scope-aware dependency injection (DI) system. Using this DI system, we’re going to add the repository as a service in our Startup class, as shown in the following snippet:

public void ConfigureServices(IServiceCollection services)
{
  services.AddMvc();
  services.AddScoped<ITeamRepository, MemoryTeamRepository>();
}

Using the services model, we can now use constructor injection in our controllers and ASP.NET Core will automatically add an instance of the repository to any controller that wants it.

We use the AddScoped method because we want the DI subsystem to create a new instance of this repository for every request. At this point we don’t really know what our actual backing repository is going to be—SQL Server, a document database, or maybe even another microservice. We do know that we want this microservice to be stateless, and the best way to do that is to start with per-request repositories and only switch to singletons if we have no other alternative.
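The practical difference between the two lifetimes can be sketched outside the container. This plain-C# illustration simulates what AddSingleton and AddScoped do (it is not the DI system's actual internals): a singleton hands every request the same instance, which means shared state, while a scoped factory hands each request a fresh one.

```csharp
using System;

public interface ITeamRepository { }
public class MemoryTeamRepository : ITeamRepository { }

public static class Program
{
    public static void Main()
    {
        // AddSingleton behavior: one instance shared by every request.
        ITeamRepository singleton = new MemoryTeamRepository();
        var requestA = singleton;
        var requestB = singleton;
        Console.WriteLine(ReferenceEquals(requestA, requestB)); // True: shared state

        // AddScoped behavior: a fresh instance per request scope.
        Func<ITeamRepository> scopedFactory = () => new MemoryTeamRepository();
        var requestC = scopedFactory();
        var requestD = scopedFactory();
        Console.WriteLine(ReferenceEquals(requestC, requestD)); // False: no shared state
    }
}
```

Per-request instances are what keep the service stateless by default; sharing an instance across requests is an explicit, deliberate choice.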

Property Versus Constructor Injection

The debate over which method is best will continue raging long after human beings have stopped writing code. I prefer constructor injection because it makes a class’s dependencies explicit: there’s no magic and no detective work involved, and constructor-injected classes are much easier to test with mocks and stubs.

Now that we’ve got a class we can use for our repository, let’s modify the controller so that we can inject it by adding a simple constructor parameter:

public class TeamsController : Controller
{
    ITeamRepository repository;

    public TeamsController(ITeamRepository repo) 
    {
      repository = repo;
    }

    ...
}

Note that there are no attributes or annotations required to enable this parameter for dependency injection. This may seem like a triviality, but I’ve grown quite fond of this fact when working with large codebases.

Now we can modify our existing controller method so that it uses the repository instead of returning hardcoded data:

[HttpGet]
public async virtual Task<IActionResult> GetAllTeams()
{                  
  return this.Ok(repository.GetTeams());            
}

Next we can make our existing tests pass by going back into our test module and pre-populating the repository with a set of test teams (our tests assume two teams). The test for the collection’s getter method will use whatever we supply in the repository so we can make reliable assertions.
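Stripped of the MVC plumbing so it can stand alone, the idea looks roughly like this. The attribute-free controller and two-team seed below are my simplification of the chapter's code, not the exact listing; the point is that constructor injection lets the test hand the controller a repository whose contents it fully controls:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class Team
{
    public string Name { get; set; }
    public Team(string name) { Name = name; }
}

public interface ITeamRepository
{
    IEnumerable<Team> GetTeams();
    void AddTeam(Team team);
}

public class MemoryTeamRepository : ITeamRepository
{
    private readonly ICollection<Team> teams;
    public MemoryTeamRepository(ICollection<Team> teams) { this.teams = teams; }
    public IEnumerable<Team> GetTeams() => teams;
    public void AddTeam(Team t) => teams.Add(t);
}

// Simplified controller: no base class or HTTP attributes, just the dependency.
public class TeamsController
{
    private readonly ITeamRepository repository;
    public TeamsController(ITeamRepository repo) { repository = repo; }
    public IEnumerable<Team> GetAllTeams() => repository.GetTeams();
}

public static class Program
{
    public static void Main()
    {
        // Pre-populate the repository so the test controls the controller's state.
        var repo = new MemoryTeamRepository(
            new List<Team> { new Team("one"), new Team("two") });
        var controller = new TeamsController(repo);

        var teams = controller.GetAllTeams().ToList();
        if (teams.Count != 2) throw new Exception("expected exactly the two seeded teams");
        Console.WriteLine("QueryTeamListReturnsCorrectTeams passed");
    }
}
```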

It’s worth reiterating that our goal with controller tests is to test only the responsibility of the controller. At this point, that means we’re only testing to make sure that the appropriate methods are being called on the repository. We could have used a mocking framework to avoid creating a custom repository, but the in-memory version is so simple we decided not to incur the overhead of mocking.

Completing the Unit Test Suite

I’m not going to bloat the pages in this book by listing every line of code in all of the tests. To finish the unit test suite, we’re going to continue with our iterative process of adding a failing test and then writing just enough code to make that test pass.

The source code for the full set of tests can be found in the master branch on GitHub.

The following is an overview of some of the features of the code enabled through TDD:

  • You cannot add members to nonexistent teams.
  • You can add a member to an existing team, verified by querying the team details.
  • You can remove a member from an existing team, verified by querying team details.
  • You cannot remove members from a team to which they don’t belong.

One thing you’ll note about these tests is that they don’t dictate the internal manner of persisting teams and their members. Under the current design, the API doesn’t allow independent access to people; you have to go through a team. We might want to change that in the future, but for now that’s what we’re going with because a functioning product can be refactored, whereas a beautiful yet nonexistent product cannot.
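The first behavior in that list, rejecting members added to nonexistent teams, reduces to a guard in the add-member path. Here is a plain-C# sketch of the logic under test; the `TeamService` class, method names, and bool return value are my illustration only (the real controller signals this with HTTP status codes such as 404 Not Found):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class Member { public Guid ID { get; set; } }

public class Team
{
    public Guid ID { get; set; }
    public ICollection<Member> Members { get; } = new List<Member>();
}

public class TeamService
{
    private readonly List<Team> teams = new List<Team>();
    public void AddTeam(Team t) => teams.Add(t);

    // Returns false (the real controller would return 404 Not Found)
    // when the target team does not exist.
    public bool AddMember(Guid teamId, Member member)
    {
        var team = teams.FirstOrDefault(t => t.ID == teamId);
        if (team == null) return false;
        team.Members.Add(member);
        return true;
    }
}

public static class Program
{
    public static void Main()
    {
        var service = new TeamService();
        var teamId = Guid.NewGuid();
        service.AddTeam(new Team { ID = teamId });

        // Adding to an existing team succeeds; a random ID matches no team.
        Console.WriteLine(service.AddMember(teamId, new Member { ID = Guid.NewGuid() }));         // True
        Console.WriteLine(service.AddMember(Guid.NewGuid(), new Member { ID = Guid.NewGuid() })); // False
    }
}
```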

To see these tests in action, first build the main source project, then go into the test/StatlerWaldorfCorp.TeamService.Tests folder and issue the following commands:

$ dotnet restore
...
$ dotnet build
...
$ dotnet test
Build started, please wait...
Build completed.

Test run for /Users/kevin/Code/microservices-aspnetcore/ 
teamservice/test/StatlerWaldorfCorp.TeamService.Tests/bin/Debug/ 
netcoreapp1.1/StatlerWaldorfCorp.TeamService.Tests.dll(
  .NETCoreApp,Version=v1.1)
Microsoft (R) Test Execution Command Line Tool Version 15.0.0.0
Copyright (c) Microsoft Corporation.  All rights reserved.

Starting test execution, please wait...
[xUnit.net 00:00:01.1279308]   Discovering: StatlerWaldorfCorp.TeamService.Tests
[xUnit.net 00:00:01.3207980]   Discovered:  StatlerWaldorfCorp.TeamService.Tests
[xUnit.net 00:00:01.3977448]   Starting:    StatlerWaldorfCorp.TeamService.Tests
[xUnit.net 00:00:01.6546338]   Finished:    StatlerWaldorfCorp.TeamService.Tests

Total tests: 18. Passed: 18. Failed: 0. Skipped: 0.
Test Run Successful.
Test execution time: 2.5591 Seconds

Happily, it appears that all 18 of our unit tests have passed!

Creating a CI Pipeline

Having tests is great, but they don’t do anyone any good if they aren’t run all the time, every time someone commits code to a branch. Continuous integration is a key aspect of being able to rapidly deliver new features and fixes, regardless of your team size or geographic makeup.

In the previous chapter, we created a Wercker account and we went through all of the steps necessary to use the Wercker CLI and Docker to automate testing and deploying our applications. It should now be incredibly easy to take our fully unit-tested codebase and set up an automated build pipeline.

Let’s take a look at the wercker.yml file for the team service, shown in Example 3-11.

Example 3-11. wercker.yml
box: microsoft/dotnet:1.1.1-sdk
no-response-timeout: 10
build:
  steps: 
    - script:
        name: restore
        cwd: src/StatlerWaldorfCorp.TeamService
        code: |
          dotnet restore
    - script:
        name: build
        cwd: src/StatlerWaldorfCorp.TeamService
        code: |
          dotnet build  
    - script:
        name: publish
        cwd: src/StatlerWaldorfCorp.TeamService
        code: |
          dotnet publish -o publish
    - script:
        name: test-restore
        cwd: test/StatlerWaldorfCorp.TeamService.Tests
        code: |
          dotnet restore
    - script:
        name: test-build
        cwd: test/StatlerWaldorfCorp.TeamService.Tests
        code: |
          dotnet build
    - script:
        name: test-run
        cwd: test/StatlerWaldorfCorp.TeamService.Tests
        code: |
          dotnet test
    - script:
        name: copy binary
        cwd: src/StatlerWaldorfCorp.TeamService
        code: |
          cp -r . $WERCKER_OUTPUT_DIR/app 
deploy:
  steps:
    - internal/docker-push:
        cwd: $WERCKER_OUTPUT_DIR/app
        username: $USERNAME
        password: $PASSWORD
        repository: dotnetcoreservices/teamservice
        registry: https://registry.hub.docker.com
        entrypoint: "/pipeline/source/app/docker_entrypoint.sh"

The first thing to notice is the choice of box in the configuration. This needs to be a Docker Hub image that already contains the .NET Core command-line tooling. In this case, I chose microsoft/dotnet:1.1.1-sdk. This may change depending on which version is the most current as you’re reading this, so be sure to check the official Microsoft Docker Hub repository for the latest tags, and check the GitHub repository for this book to see which boxes are being used for tests.

In some cases we can skip certain steps and go directly to testing, but if a step is going to fail, we want it to be as small as possible so we can troubleshoot it. You can execute all of these build steps on your development workstation, assuming you have the Wercker CLI installed and a running Docker installation. Just execute the buildlocal.sh script that you can find in this chapter’s GitHub repository. This script contains the following code and will execute the same build locally that Wercker will execute remotely:

rm -rf _builds _steps _projects _cache _temp
wercker build --git-domain github.com \
   --git-owner microservices-aspnetcore \
   --git-repository teamservice
rm -rf _builds _steps _projects _cache _temp

Integration Testing

The most official definition of integration testing that I’ve been able to find indicates that it is the stage of testing when individual components are combined and tested as a group. This phase occurs after unit testing and before validation (also called acceptance) testing.

There are some subtleties about this definition that are important. Unit tests verify that your modules do what you expect them to do. An integration test should not verify that you get the right answers from the system; it should verify that all of the components of the system are connected and that you get suitable responses. In other words, if you’re performing complex calculations using components already covered by unit tests, your integration tests need not retest those components. Integration tests would simply verify that you can invoke your web server, trigger the right RESTful endpoint, invoke the complex calculator, and get an appropriate response.

One of the hardest parts of integration testing usually ends up being the technology or code involved in spinning up an instance of the web hosting machinery so that you can send and receive full HTTP messages.

Thankfully, this has already been taken care of for us with the Microsoft.AspNetCore.TestHost.TestServer class. We can instantiate one of these and build it with whatever options we like and then use it to create an instance of an HttpClient that is preconfigured to talk to our test server. The creation of these two classes is usually done in an integration test’s constructor, as shown in this snippet:

testServer = new TestServer(new WebHostBuilder()
                    .UseStartup<Startup>());
testClient = testServer.CreateClient();

Note that the Startup class we’re using here is the exact same one we’re using in our main service project. This means that the dependency injection setup, configuration sources, and services will all be exactly as they would be if we were running the real service.

With the test server and test client in place, we can test various scenarios, like adding a team to the teams collection and querying the results to ensure that it’s still there. This gives us a chance to fully exercise the JSON deserialization and use our service the way a completely external consumer might, as shown in Example 3-12.

Example 3-12. test/StatlerWaldorfCorp.TeamService.Tests.Integration/SimpleIntegrationTests.cs
public class SimpleIntegrationTests
{
  private readonly TestServer testServer;
  private readonly HttpClient testClient;
        
  private readonly Team teamZombie;        

  public SimpleIntegrationTests()
  {
      testServer = new TestServer(new WebHostBuilder()
              .UseStartup<Startup>());
      testClient = testServer.CreateClient();

      teamZombie = new Team() {
          ID = Guid.NewGuid(),
          Name = "Zombie"
      };
  }

  [Fact]
  public async void TestTeamPostAndGet()
  {
     StringContent stringContent = new StringContent(            
         JsonConvert.SerializeObject(teamZombie),
         UnicodeEncoding.UTF8,
         "application/json");

     HttpResponseMessage postResponse = 
        await testClient.PostAsync(
          "/teams",
          stringContent);
     postResponse.EnsureSuccessStatusCode();

     var getResponse = await testClient.GetAsync("/teams");
     getResponse.EnsureSuccessStatusCode();

     string raw = await getResponse.Content.ReadAsStringAsync();            
     List<Team> teams = 
         JsonConvert.DeserializeObject<List<Team>>(raw);
     Assert.Equal(1, teams.Count());
     Assert.Equal("Zombie", teams[0].Name);
     Assert.Equal(teamZombie.ID, teams[0].ID);
  }
}    

Once we’re satisfied that this test works properly, we can continue adding more complex tests to ensure that additional scenarios are supported and working properly.
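As one example of a more complex scenario, the following hypothetical test could be added to the SimpleIntegrationTests class from Example 3-12 (along with a using directive for System.Linq). The member-related route (/teams/{id}/members), the Member model, and the Members property on Team are all assumptions for illustration; adjust them to match the actual routes and models in your project:

```csharp
[Fact]
public async void TestAddMemberToTeam()
{
    // Create the team first (same pattern as Example 3-12).
    var teamContent = new StringContent(
        JsonConvert.SerializeObject(teamZombie),
        UnicodeEncoding.UTF8,
        "application/json");
    (await testClient.PostAsync("/teams", teamContent))
        .EnsureSuccessStatusCode();

    // Add a member through a hypothetical subresource route.
    var newMember = new Member() { ID = Guid.NewGuid() };
    var memberContent = new StringContent(
        JsonConvert.SerializeObject(newMember),
        UnicodeEncoding.UTF8,
        "application/json");
    var postResponse = await testClient.PostAsync(
        "/teams/" + teamZombie.ID + "/members",
        memberContent);
    postResponse.EnsureSuccessStatusCode();

    // Query the team details and verify the member round-tripped.
    var getResponse = await testClient.GetAsync(
        "/teams/" + teamZombie.ID);
    getResponse.EnsureSuccessStatusCode();

    string raw = await getResponse.Content.ReadAsStringAsync();
    Team team = JsonConvert.DeserializeObject<Team>(raw);
    Assert.Equal(newMember.ID, team.Members.Single().ID);
}
```

Because this test goes through the full HTTP stack on both the write and the read, it exercises routing, serialization, and persistence together rather than any one component in isolation.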

With our integration tests ready to roll we can update our wercker.yml file to execute the integration tests by adding a few script executions:

- script:
    name: integration-test-restore
    cwd: test/StatlerWaldorfCorp.TeamService.Tests.Integration
    code: |
      dotnet restore
- script:
    name: integration-test-build
    cwd: test/StatlerWaldorfCorp.TeamService.Tests.Integration
    code: |
      dotnet build
- script:
    name: integration-test-run
    cwd: test/StatlerWaldorfCorp.TeamService.Tests.Integration
    code: |
      dotnet test

For such a simple service as this one, it might seem like we’ve gone to some needless trouble in creating a separate project for our integration tests and using separate CI pipeline build steps.

However, developing habits and practices that you use even on the smallest projects will pay off in the long run. This is one of them. When we get to the stage where we’re building services that rely on other services, we’re going to want to start up versions of those services while running integration tests. We want the ability to run only unit tests or only integration tests in our pipelines, so we can have a “slow build” and a “fast build” if we want. Also, separating the integration tests into their own project gives us a little more cleanliness and organization—some of the integration tests I’ve written in the past have gotten very large, especially when it comes to fabricating test data and expected response JSON payloads for complex services.
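One way to get that fast/slow split is to move the integration steps out of the main build into their own Wercker pipeline, which can then be chained after the unit-test build in the Wercker web interface. The sketch below (abbreviated to the test-run steps) uses a custom pipeline name, integration-test, which is an illustrative choice rather than anything required by Wercker:

```yaml
# Fast pipeline: restore, build, and run only the unit tests.
build:
  steps:
    - script:
        name: test-run
        cwd: test/StatlerWaldorfCorp.TeamService.Tests
        code: |
          dotnet test

# Slow pipeline: run the integration tests on their own, so the
# chain can be build -> integration-test -> deploy.
integration-test:
  steps:
    - script:
        name: integration-test-run
        cwd: test/StatlerWaldorfCorp.TeamService.Tests.Integration
        code: |
          dotnet test
```

With this layout, a failing integration test never blocks the quick feedback loop of the unit-test pipeline, and the deploy step can still be gated on both.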

Running the Team Service Docker Image

Now that the CI pipeline is working for the team service, it should automatically be deploying a Docker image to Docker Hub for us. With this Docker image in hand, we can deploy it to Amazon Web Services, Google Cloud Platform, Microsoft Azure, or regular virtual machines. We could orchestrate this image inside Docker Swarm or Kubernetes or push it to Cloud Foundry.

Our options are nearly endless, and they’re that broad precisely because we’re using Docker images as deployment artifacts.

Let’s run this using a command you should be pretty familiar with by now:

$ docker run -p 8080:8080 dotnetcoreservices/teamservice
Unable to find image 'dotnetcoreservices/teamservice:latest' locally
latest: Pulling from dotnetcoreservices/teamservice
693502eb7dfb: Already exists 
081cd4bfd521: Already exists 
5d2dc01312f3: Already exists 
36c0e9895097: Already exists 
3a6b0262adbb: Already exists 
79e416d3fe9d: Already exists 
d96153ed695f: Pull complete 
Digest: sha256:fc3ea65afe84c33f5644bbec0976b4d2d9bc943ddba997103dd3fb731f56ca5b
Status: Downloaded newer image for dotnetcoreservices/teamservice:latest
Hosting environment: Production
Content root path: /pipeline/source/app/publish
Now listening on: http://0.0.0.0:8080
Application started. Press Ctrl+C to shut down.

With the port mapping in place, we can treat http://localhost:8080 as the host of our service now. The following curl command issues a POST to the /teams resource of the service. (If you don’t have access to curl, I highly recommend the Postman plug-in for Chrome.) Per our test specification, this should return a JSON payload containing the newly created team:

$ curl -H "Content-Type:application/json" -X POST \
    -d '{"id":"e52baa63-d511-417e-9e54-7aab04286281",
         "name":"Team Zombie"}' \
    http://localhost:8080/teams

{"name":"Team Zombie","id":"e52baa63-d511-417e-9e54-7aab04286281",
  "members":[]}

Note that the reply in the preceding snippet contains an empty array for the members property. To make sure that the service is maintaining state between requests (even if it is doing so with little more than an in-memory list at the moment), we can use the following curl command:

$ curl http://localhost:8080/teams
  [{"name":"Team Zombie",
  "id":"e52baa63-d511-417e-9e54-7aab04286281",
   "members":[]}]

And that’s it—we’ve got a fully functioning team service automatically tested and automatically deployed to Docker Hub, ready for scheduling in a cloud computing environment in response to every single Git commit.

Summary

In this chapter we took our first step toward building real microservices with ASP.NET Core. We took a look at the definition of a microservice and we discussed the concept of API First and how it is an essential part of building the discipline and habits necessary to allow multiple teams to have independent release cadences.

Finally, we built a sample service in a test-first fashion and looked at some of the tools we have at our disposal for automatically testing, building, and deploying our services.

In the coming chapters, we’re going to expand on these skills as we build more complex and powerful services.

1 Origins of the “can’t swing a dead cat” phrase are as morbid as they are plentiful. I have been unable to discover a single credible source for the original quote.

2 For more information on semver, check out http://semver.org/.

3 Protocol Buffers, or “protobufs” for short, are a platform-neutral, high-performance serialization format documented at https://developers.google.com/protocol-buffers/.
