Chapter 2. Delivering Continuously

One of the driving reasons why developers choose to build microservice ecosystems over traditional monoliths is the ability to rapidly deploy enhancements and fixes to small, independently scalable pieces of the system.

This only works if you have confidence that those services are going to work in production before you deploy them.

Introducing Docker

Docker has been steadily gathering momentum, becoming increasingly popular both as a tool to aid development and as one to aid deployment and operations. It is a container tool that utilizes Linux kernel features like cgroups and namespaces to isolate network, file, and memory resources without incurring the burden of a full, heavyweight virtual machine.1
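
Those kernel features surface directly in the CLI. As a quick, hypothetical illustration (the alpine image and the specific limits are chosen arbitrarily), the following starts a shell in a container capped by cgroups at half a CPU and 256 MB of RAM, with its own namespaced filesystem, network stack, and process table:

$ docker run -it --memory=256m --cpus=0.5 alpine /bin/sh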

There are countless platforms and frameworks available today that either support or integrate tightly with Docker. You can deploy Docker images to AWS (Amazon Web Services), GCP (Google Cloud Platform), Azure, virtual machines, and combinations of those running orchestration platforms like Kubernetes, Docker Swarm, CoreOS Fleet, Mesosphere Marathon, Cloud Foundry, and many others. The beauty of Docker is that it works in all of those environments without changing the container format.2

As you’ll see throughout this book, Docker gives us the ability to create an immutable release artifact that will run anywhere, regardless of the target environment. An immutable release means that we can test a Docker image in a lower environment like development or QA and have reasonable confidence that it will perform exactly the same way in production. This confidence is essential to being able to embrace continuous delivery.

For more information on Docker, including details on how to create your own Dockerfiles and images and advanced administration, check out the book Docker: Up & Running by Karl Matthias and Sean P. Kane (O’Reilly).

Later in this chapter we will demonstrate publishing Docker images to Docker Hub directly from our CI3 tool of choice. All of this will be done online, in the cloud, with virtually no infrastructure installed on your own workstation.

Installing Docker

When installing Docker on a Mac, the preferred method is to install the native Mac application. If you see older documentation referring to something called Boot2Docker or Docker Toolbox, these are deprecated and you should not be installing Docker this way. For details on how to install Docker on your Mac, check out the installation instructions from the Docker website. Instructions are also available for other operating systems, but I won’t cover them in depth in this chapter as the online documentation will always be more current than this book.

When I started writing this book, I had Docker version 17.03.0-ce, build 60ccb22 installed. Make sure you check the documentation to ensure you’re looking at the newest installation instructions before performing the install.
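
Once installed, you can verify the version from a terminal:

$ docker --version
Docker version 17.03.0-ce, build 60ccb22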

You can also manually install Docker and all prerequisites via Homebrew. It’s slightly more involved and, honestly, I can see little use in installing it this way on a Mac. The Docker app comes with a nice icon that sits in your menu bar and automatically manages your environment to allow terminal/shell access.

If you’ve managed to install Docker properly, it should start up automatically on the Mac. Since Docker relies on features specific to the Linux kernel, on macOS you’re really starting up a lightweight Linux virtual machine (the native app uses a lightweight hypervisor rather than the VirtualBox setup used by the deprecated Docker Toolbox) that provides those Linux kernel features in order to run the Docker server daemon.

It may take a few minutes to start Docker, depending on the power of your computer.

Now you should be able to run all Docker commands in the terminal to examine your installation. One you’ll likely run quite often is docker images, which lists the Docker images stored in your local repository.
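
The listing is a simple table; here’s a hypothetical example of its output (the image ID, age, and size are placeholders, and yours will differ):

$ docker images
REPOSITORY                       TAG      IMAGE ID       CREATED       SIZE
dotnetcoreservices/hello-world   latest   0123456789ab   2 weeks ago   540 MB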

Running Docker Images

Now that you can check your Docker version and see the list of locally installed Docker images, it’s time to put Docker to use and run an image.

Docker lets you manually pull images into your local cache from a remote repository like Docker Hub. However, if you issue a docker run command and you haven’t already cached that image, you’ll see it download in the terminal.
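
If you do want to warm the cache ahead of time, docker pull does exactly that; for example, to pre-fetch the sample image used in the next section:

$ docker pull dotnetcoreservices/hello-world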

If you run the following command, it will launch our “hello world” web application developed in the previous chapter.4 It will fetch the Docker image from Docker Hub if you don’t have it, and will then invoke the image’s entrypoint. Note that you need to map the port from the inside of the container to the outside port so you can open up a browser from your desktop:

$ docker run -p 8080:8080 dotnetcoreservices/hello-world
Unable to find image 'dotnetcoreservices/hello-world:latest' locally
latest: Pulling from dotnetcoreservices/hello-world
693502eb7dfb: Pull complete 
081cd4bfd521: Pull complete 
5d2dc01312f3: Pull complete 
36c0e9895097: Pull complete 
3a6b0262adbb: Pull complete 
79e416d3fe9d: Pull complete 
6b330a5f68f9: Pull complete 
Digest: sha256:0d627fea0c79c8ee977f7f4b66c37370085671596743c42f7c47f33e9aa99665
Status: Downloaded newer image for dotnetcoreservices/hello-world:latest
Hosting environment: Production
Content root path: /pipeline/source/app/publish
Now listening on: http://0.0.0.0:8080
Application started. Press Ctrl+C to shut down.

The output shows what it looks like after that image has been cached locally. If you’re doing this for the first time, you will see a bunch of progress reports indicating that you’re downloading the layers of the Docker image. This command maps port 8080 inside the container to port 8080 on the host.

Docker provides network isolation, so unless you explicitly allow traffic from outside a container to be routed inside the container, the isolation will function just like a firewall. Since we’ve mapped the inside and outside ports, we can now hit port 8080 on localhost.
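
The two sides of the -p flag are host:container, and they don’t have to match. If port 8080 is already taken on your machine, you can map any free host port to the container’s 8080; for example:

$ docker run -p 9000:8080 dotnetcoreservices/hello-world
$ curl http://localhost:9000/will/it/blend?
Hello, world!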

We can see that this application is running with the following Docker command:

$ docker ps
CONTAINER ID        IMAGE                            
COMMAND                  CREATED             STATUS              
PORTS                    NAMES
61a68ffc3851        dotnetcoreservices/hello-world   
"/pipeline/source/..."   3 minutes ago       Up 2 minutes        
0.0.0.0:8080->8080/tcp   priceless_archimedes

So let’s hit our application with an HTTP client to make sure it’s working:

$ curl http://localhost:8080/will/it/blend?
Hello, world! 

This shows that we can download a fully functioning piece of software from Docker Hub, cache the image locally, and execute the image’s default run command. Even without installing a single tool for ASP.NET Core or configuring our workspace, we can use this Docker image to launch our sample service. This functionality will be essential to us when we start to run tests in our continuous integration server and need to ensure that the artifact we tested is the exact same artifact that we deploy.

The Ctrl-C key combination may not be enough to kill the ASP.NET Core application we’re running because we ran it noninteractively. To kill a running Docker process, just find the container ID from the docker ps output and pass it to docker kill:

$ docker kill 61a68ffc3851
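
If you’d rather not copy container IDs by hand, docker ps can do the lookup for you; assuming only one container was started from this image, the following one-liner finds and kills it:

$ docker kill $(docker ps -q --filter ancestor=dotnetcoreservices/hello-world)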

Continuous Integration with Wercker

Depending on your background, you may already have experience with continuous integration servers. Some of the more popular ones in the Microsoft world are Team Foundation Server (TFS) and Octopus Deploy (the latter strictly a deployment automation tool), but many developers are also familiar with applications like TeamCity and Jenkins.

In this part of the chapter, we will be learning about a CI tool called Wercker. Wercker and its ilk all attempt to provide a software package that helps developers and operations people embrace CI best practices. This section of the chapter provides a brief overview of CI, and then a walkthrough of setting up Wercker to automatically build an application.

Wikipedia has an excellent section covering the best practices for continuous integration. I’ve already discussed some of the why for CI/CD, but it essentially boils down to one key mantra:

If you want more stable, predictable, and reliable releases, then you have to release more often, not less.

In order to release more frequently, in addition to testing everything, you need to automate builds and deployments in response to code commits.

Building Services with Wercker

Of all the available choices for cloud-hosted, Docker-based builds, I chose Wercker for a number of reasons. First and foremost, I didn’t have to supply a credit card. Frankly, if a cloud service requires payment details up front, it might be compensating for a high customer abandonment rate. Free trials, on the other hand, are a marketing bet that you’ll like a service enough to keep using it.

Second, Wercker is absurdly easy to use: the interface is intuitive, and its tight integration with Docker and support for spinning up multiple attached Docker images for integration testing are outstanding, as you’ll see in upcoming chapters.

With Wercker, there are three basic steps to get going, and then you’re ready for CI:

  1. Create an application in Wercker using the website.
  2. Add a wercker.yml file to your application’s codebase.
  3. Choose how to package and where to deploy successful builds.

The first thing you’ll need to do before you can create an application in Wercker is to sign up for an account (you can log in with your existing GitHub account). Once you’ve got an account and you’re logged in, click the Create link in the top menu. This will bring up a wizard that should look something like the one in Figure 2-1.

Figure 2-1. Creating an application in Wercker

The wizard will prompt you to choose a GitHub repository as the source for your build. It will then ask you whether you want the owner of this application build to be your personal account or an organization to which you belong. For example, all of the Wercker builds for this book are both public and owned by the dotnetcoreservices organization.

Once you’ve created the application, you need to add a wercker.yml file to the repository (we’ll get to that shortly). This file contains most of the metadata used to describe and configure your automatic build.

Installing the Wercker CLI

You will want to be able to invoke Wercker builds locally so you can reliably predict how the cloud-based build will go before you push to your Git remote. This is helpful for running integration tests locally, as well as for starting your services locally in interactive mode while still operating inside the Wercker-generated Docker image (again, so you’re always using an immutable build artifact).

Your code is added to a Docker image specified in your wercker.yml file, and then you choose what gets executed and how. To run Wercker builds locally, you’ll need the Wercker CLI.

For information on how to install and test the CLI, check out the Wercker developer center documentation.

Skip to the section of the documentation entitled “Getting the CLI.” Here you will likely be told to use Homebrew to install the Wercker CLI:

$ brew tap wercker/wercker
$ brew install wercker-cli

If you’ve installed the CLI properly, you should be able to ask the CLI for the version:

$ wercker version
Version: 1.0.643
Compiled at: 2016-10-05 14:38:36 -0400 EDT
Git commit: ba5abdea1726ab111d2c474777254dc3f55732d3
No new version available

If you are running an older version of the CLI, you might see something like this, prompting you to automatically update:

$ wercker version
Version: 1.0.174
Compiled at: 2015-06-24 10:02:21 -0400 EDT
Git commit: ac873bc1c5a8780889fd1454940a0037aec03e2b
A new version is available: 1.0.295 (Compiled at: 2015-10-23T10:19:25Z,
Git commit: db49e30f0968ff400269a5b92f8b36004e3501f1)
Download it from: https://s3.amazonaws.com/downloads.wercker.com/
  cli/stable/darwin_amd64/wercker
Would you like update? [yN]

If you have trouble performing an automatic update (which happened to me several times), then it’s just as easy to rerun the curl command in Wercker’s documentation to download the latest CLI.
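
If you go that route, the download amounts to fetching the binary and marking it executable. Here’s a sketch for macOS using the URL from the prompt above (check the documentation for the current URL, and add sudo if /usr/local/bin isn’t writable on your machine):

$ curl -L https://s3.amazonaws.com/downloads.wercker.com/cli/stable/darwin_amd64/wercker \
    -o /usr/local/bin/wercker
$ chmod +x /usr/local/bin/wercker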

Adding the wercker.yml Configuration File

Now that you’ve got an application created via the Wercker website, and you’ve got the Wercker CLI installed, the next thing to do is create a wercker.yml file to define how you want your application built and deployed.

Take a look at the wercker.yml file that we use in our “hello world” sample, shown in Example 2-1.

Example 2-1. wercker.yml
box: microsoft/dotnet:1.1.1-sdk
no-response-timeout: 10
build:
  steps: 
    - script:
        name: restore
        code: |
          dotnet restore
    - script:
        name: build
        code: |
          dotnet build
    - script:
        name: publish
        code: |
          dotnet publish -o publish  
    - script:
        name: copy binary
        code: |
          cp -r . $WERCKER_OUTPUT_DIR/app 
          cd $WERCKER_OUTPUT_DIR/app
deploy:
  steps:
    - internal/docker-push:
        username: $USERNAME
        password: $PASSWORD
        repository: dotnetcoreservices/hello-world
        registry: https://registry.hub.docker.com
        entrypoint: "/pipeline/source/app/docker_entrypoint.sh"

The box property indicates the base Docker Hub image that we’re going to use as a starting point. Thankfully, Microsoft has already provided an image with the .NET Core bits in it that we can use for testing and execution. There is a lot more that can be done with wercker.yml, and you’ll see this file grow as we build progressively more complex applications throughout the book.

We then run the following commands inside this container:

  1. dotnet restore to restore or download dependencies for the .NET application. For people running this command inside a firewalled enterprise, this step could potentially fail without the right proxy configuration.
  2. dotnet build to compile the application.
  3. dotnet publish to compile and then create a published, “ready to execute” output directory.

One command that’s missing from this is dotnet test. We don’t have any tests yet because we don’t have any functionality yet. In subsequent chapters, you’ll see how to use this command for integration and unit test invocation. After this chapter, every build needs to execute tests in order to be considered successful.
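
When tests do exist, they slot into the pipeline as just another script step. A hypothetical fragment for wercker.yml, placed between the build and publish steps (the step name is arbitrary):

    - script:
        name: test
        code: |
          dotnet test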

With all of those commands run, we then copy the published output to the directory referenced by a Wercker-provided environment variable, WERCKER_OUTPUT_DIR. When Wercker completes a build, the build artifact will have a filesystem that looks exactly as we want it to inside a Docker image.

Assuming we’ve successfully built our application and copied the output to the right directory, we’re ready to deploy to Docker Hub.

Running a Wercker Build

The easiest way to run a Wercker build is to simply commit code. Once Wercker is configured, your build should start only a few seconds after you push. Obviously, we still want to use the regular dotnet command line to build and test our applications locally.
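
Since a push is all it takes, triggering a cloud build is just the ordinary Git workflow (the commit message and branch name here are placeholders):

$ git add .
$ git commit -m "Add feature"
$ git push origin master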

The next step after that is to see how the application builds using the Wercker pipeline (and therefore, within an isolated, portable Docker image). This helps to eliminate the “works on my machine” problem that arises regularly during development projects. We usually have a script with our applications that looks like this to invoke the Wercker build command:

rm -rf _builds _steps _projects
wercker build --git-domain github.com \
  --git-owner microservices-aspnetcore \
  --git-repository hello-world
rm -rf _builds _steps _projects

This will execute the Wercker build exactly as it executes in the cloud, all within the confines of a container image. You’ll see a bunch of messages from the Wercker pipeline, including fetching the latest version of the .NET Core Docker image and running all of the steps in our pipeline.

Note that even though the Git information is being specified, the files being used for the local build are the local files, and not the files as they exist in GitHub.

You can be reasonably confident that if the build executes locally, it will also execute in the cloud and you know you’ll be deploying the same artifact. This is a level of confidence that you cannot get from traditional, non-CI build processes.

It’s worth repeating that you didn’t have to spend a dime to get access to this CI functionality, nor did you have to invest in any of the resources required to perform these builds in the cloud. At this point, there is no excuse for not setting up a CI pipeline for all of your GitHub-based projects.

Continuous Integration with CircleCI

Wercker isn’t the only tool available to us for CI in the cloud, nor is it the only free tool. Where Wercker runs your builds inside a Docker image and produces a Docker image as an artifact output, CircleCI offers control at a slightly lower level.

If you go to http://circleci.com you can sign up for free with a new account or log in using your GitHub account.

You can start with one of the available build images (which include macOS for building iOS apps!) and then supply a configuration file telling CircleCI how to build your app.

For a lot of relatively common project types (Node.js, Java, Ruby), CircleCI can do a lot of guesswork and make assumptions about how to build your app.

For .NET Core, it’s not quite so obvious, so we need to set up a configuration file to tell CircleCI how to build the app.

Here’s a look at the circle.yml file for the “hello world” project:

machine:
  pre:
    - sudo sh -c 'echo "deb [arch=amd64] https://apt-mo.trafficmanager.net/repos/dotnet-release/ trusty main" > /etc/apt/sources.list.d/dotnetdev.list'
    - sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 417A0893
    - sudo apt-get update
    - sudo apt-get install dotnet-dev-1.0.1

compile: 
  override:
    - dotnet restore
    - dotnet build    
    - dotnet publish -o publish

test:
  override:
    - echo "no tests"

The key difference between this build and Wercker is that instead of being able to run the build inside an arbitrary Docker image that already has .NET Core installed on it, here we have to use tools like apt-get to install the .NET tools.

You may notice that the list of shell commands executed in the pre phase of the machine configuration is exactly the same set of steps listed on Microsoft’s website to install .NET Core on an Ubuntu machine. That’s basically what we’re doing—installing .NET Core on the Ubuntu build runner provided for us by CircleCI.

CircleCI 2.0 (in beta at the time of writing) advertises full, native Docker support, so it’s possible that by the time you read this the build process will have gotten simpler.

Figure 2-2 shows a piece of the CircleCI dashboard for the “hello world” application.

Whether you decide to use CircleCI, Wercker, or some other CI tool not mentioned in this book, you should definitely look for one with deep and easy-to-use Docker integration. The ubiquity of Docker support in deployment environments and the ability to create and share portable, immutable release artifacts are incredibly beneficial to enabling the kind of agility needed in today’s marketplace.

Figure 2-2. CircleCI build history

Deploying to Docker Hub

Once you have a Wercker (or CircleCI) build that is producing a Docker image and all your tests are passing, you can configure it to deploy the artifact anywhere you like. For now, we’re going to deploy to Docker Hub.

We’ve already seen a hint of how this works in the wercker.yml file listed previously. There is a deploy section that, when executed, will deploy the build artifact as a Docker Hub image. We use Wercker environment variables so that we can store our Docker Hub username and password securely and not check sensitive information into source control.

This deploy step is shown in Example 2-2 to refresh your memory.

Example 2-2. Docker Hub deploy in wercker.yml
deploy:
  steps:
    - internal/docker-push:
        username: $USERNAME
        password: $PASSWORD
        repository: dotnetcoreservices/hello-world
        registry: https://registry.hub.docker.com
        entrypoint: "/pipeline/source/app/docker_entrypoint.sh"

Assuming our Docker Hub credentials are correct and the Wercker environment variables are set up properly, this will push the build output to Docker Hub and make the image available for pulling and executing on anyone’s machine—including our own target environments.

This automatic push to Docker Hub is how the sample Docker image you executed earlier in the chapter was published.
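
Once the push completes, any machine with Docker installed can retrieve that exact artifact, which is precisely what you did at the start of this chapter:

$ docker pull dotnetcoreservices/hello-world:latest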

In Figure 2-3, you can see a sample Wercker workflow. After we successfully build, we then deploy the artifact by executing the deploy step in the wercker.yml file. The Docker Hub section of this pipeline is easily created by clicking the “+” button in the GUI and giving the name of the YAML section for deployment (in our case it’s deploy).

Figure 2-3. Deployment pipelines in Wercker

Summary

We’ve managed to get through an entire chapter without writing any new code. Ordinarily, something like this would give me the shakes, but it is in service of a worthy cause.

Even if we were the best developers on the planet, and unicorns appeared in the sky floating beneath rainbow parachutes every time we compiled our microservices, we would likely have unreliable products with brittle, unpredictable, error-prone production deployments. We need to be continuously building, testing, and deploying our code. Not once per quarter or once per month, but every time we make a change.

In every chapter after this, we will be building microservices with testing and CI in mind. Every commit will trigger a Wercker build that runs unit and integration tests and deploys to Docker Hub.

Before you continue on to the next chapter, I strongly recommend that you take a simple “hello world” ASP.NET Core application and set up a CI build for it on whatever CI host you choose. Put your code in GitHub, commit a change, and watch it go through the build, test, and deploy motions; then verify that the Docker Hub image works as designed.

This will help build the muscle memory for tasks that should become second nature to you. Hopefully the idea of starting a development project without an automated build pipeline will seem as crazy as the idea of building an unmaintainable monolith.

1 This is true for real Linux OS hosts. macOS and Windows both require a Linux virtual machine to host the Docker runtime.

2 While the container itself is ubiquitous, some Docker features may or may not be available, depending on the host environment.

3 This book will regularly use acronyms like CI (continuous integration) and CD (continuous delivery). It’s best to become familiar with these now.

4 It’s able to do this because we’ve already published it as a Docker Hub image. Later in this chapter you’ll see how this particular sausage is made.
