
4. A Practical Example


In Chapter 3 we showed you an example of how we can execute a few simulated pipeline stages in a Docker container to create a pipeline that can be ported to any CI server. In this chapter we’re going to take it a step further and create a pipeline that can clone, build, test, archive, and deploy a set of working applications. Then, we’ll show you how to take the pipeline you’ve created and move it from your desktop to two popular CI platforms.

An Overview of Our Applications

In this chapter we are focused on applications written using one of three distinct tech stacks; they are:
1. Spring Boot applications written in Java, using Maven as a build automation tool

2. ASP.NET Core Web APIs written in C#

3. Angular applications leveraging TypeScript and Node.js

We’ve created three sample projects, each using one of these technologies. These projects include everything you need—code, configuration files, Dockerfiles, etc.—to follow along for the rest of the chapter. The next three sections provide a brief overview of what these sample applications do and the commands we execute to build, test, and run them. We won’t go into detail on how each of the sample applications works—there are other books better suited for that. These are barebones implementations and aren’t meant to be used as a model for writing high-quality applications.

Spring Boot

Note

You can find the source code for this project at https://github.com/Apress/generic-pipelines-using-docker .

The first application is an API written in Java using the Spring Boot framework, with Maven used to build, test, and package the project. It’s based on the quick start example on the Spring Boot homepage and works with any relatively recent version of Java and Maven. For the purposes of this book, we’re using JDK 8 and Maven 3.

Note

If you want to learn more about the Spring Boot framework and how this project works, check out the quick start guide at https://projects.spring.io/spring-boot/#quick-start .

To run the application from the command line, first compile the source code using mvn clean package, then execute java -jar target/hello.jar. After a few seconds the application will be up and running. Once it is, open your web browser and navigate to http://localhost:8080. You’ll see a simple “Hello World!” message, as shown in Figure 4-1.
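Condensed into commands, the whole workflow looks like this when run from the project root:

# compile, test, and package the application
mvn clean package
# run the packaged jar; the API listens on port 8080
java -jar target/hello.jar
# from a second terminal, verify the endpoint
curl http://localhost:8080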
Figure 4-1

The Spring Boot application

ASP.NET Core Web API

Note

You can find the source code for this project at https://github.com/Apress/generic-pipelines-using-docker .

The second application is also an API, but this time written on top of ASP.NET Core Web API. It has a single endpoint that returns an array containing two values. It’s based on the ASP.NET Core Web API project template generated by Visual Studio.

This project uses the commands built into the .NET Core CLI for building and testing. Using the command line, run dotnet build ValueApi to build the project, followed by dotnet ValueApi/bin/Debug/netcoreapp2.0/ValueApi.dll to start the API. If you open a browser and navigate to http://localhost:5000/api/values, you’ll get a response like the one in Figure 4-2.
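As commands, the workflow from the repository root looks like this:

# build the project
dotnet build ValueApi
# start the API; it listens on port 5000
dotnet ValueApi/bin/Debug/netcoreapp2.0/ValueApi.dll
# from a second terminal, verify the endpoint
curl http://localhost:5000/api/values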
Figure 4-2

The .NET Core application

Angular 5

Note

You can find the source code for this project at https://github.com/Apress/generic-pipelines-using-docker .

Our last sample project is a web application built using Angular 5, TypeScript, and Node.js; it relies on Chromium to run its test suite. It’s based on the Angular Quick Start example.

To build and run the project, first download and install the project’s dependencies by running npm install on the command line. Next, use npm run build to compile the project. Finally, host the project in a lightweight web server by running npm start. Once the server is running, open a browser and navigate to http://localhost:4200. You will see a simple website like the one shown in Figure 4-3.
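Recapped as commands:

# download and install dependencies
npm install
# compile the project
npm run build
# serve the application on port 4200
npm start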
Figure 4-3

The Angular 5 application

A Deep Dive into the Pipeline

Note

You can find the source code for this project at https://github.com/Apress/generic-pipelines-using-docker .

Now that you’ve got a high-level overview of the sample projects we’re dealing with in this chapter, it’s time to explore the pipeline itself. In the following sections, we explore the configuration file that drives the behavior of the pipeline. Then we take a close look at each of the five stages of our pipeline: clone, build, test, archive, and deploy. Finally, we take a peek inside the build containers where all this takes place.

The Pipeline Configuration File

Alongside the source code for each application, you’ll find a small JSON file named pipeline.json that contains some crucial information about the application. The pipeline will use this file to decide everything from how to build the application to where the resulting artifact should be stored for use later. The next three listings show the configuration file for each of our sample projects:
{
  "application": {
    "name": "Sample Java Application",
    "type": "java"
  },
  "build": {
    "path": null,
    "outputPath": null
  },
  "test": {
    "enabled": true,
    "path": null
  },
  "archive": {
    "registry": "docker.io",
    "repository": "edwardsdl/sample-java"
  },
  "deploy": {
    "containerPort": 8080
  }
}
Listing 4-1

The Pipeline Configuration File for the Sample Java Project

{
  "application": {
    "name": "Sample .NET Core App",
    "type": "netcore"
  },
  "build": {
    "path": null,
    "outputPath": "ValueApi/bin/Release/netcoreapp2.0"
  },
  "test": {
    "enabled": true,
    "path": "ValueTests/"
  },
  "archive": {
    "registry": "docker.io",
    "repository": "edwardsdl/sample-netcore"
  },
  "deploy": {
    "containerPort": 5000
  }
}
Listing 4-2

The Pipeline Configuration File for the Sample .NET Core Project

{
  "application": {
    "name": "Sample Node App",
    "type": "node"
  },
  "build": {
    "path": null,
    "outputPath": "dist/"
  },
  "test": {
    "enabled": true,
    "path": null
  },
  "archive": {
    "registry": "docker.io",
    "repository": "edwardsdl/sample-node"
  },
  "deploy": {
    "containerPort": 5000
  }
}
Listing 4-3

The Pipeline Configuration File for the Sample Node Project

Our pipeline configuration file is broken into five sections: application, build, test, archive, and deploy. Some provide information about the application, while others provide fine-grained control over specific stages in the pipeline. Let’s take a closer look at each section.

The application section stores high-level information about the project. We use it to store the name and type of the application. In this book we deal with three types: node, java, and netcore. It’s up to you to decide what application types you want to support and what identifiers to assign them. Supporting a greater number of tech stacks will give your developers more flexibility but will require more work on your end. For example, if a new version of the .NET Core framework is released and maintains backwards compatibility, netcore1, netcore2, and netcore3 applications can all share a single build image. This is a useful trick for keeping the number of images you have to maintain to a minimum.
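As a concrete sketch (this launcher script isn’t part of the sample code), a wrapper that starts the pipeline might map application types onto build images like this, letting compatible versions share an image:

application_type=$(jq -r .application.type pipeline.json)
case "${application_type}" in
  netcore*) build_image="netcore-pipeline:latest" ;;  # netcore1, netcore2, netcore3 share one image
  java*)    build_image="java-pipeline:latest" ;;
  node*)    build_image="node-pipeline:latest" ;;
  *)        echo "Unknown application type: ${application_type}"; exit 1 ;;
esac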

The build section contains two elements: path and outputPath. The path element tells the pipeline where the application’s code can be found once it has been pulled from source control. For most projects the source code is located in the root of the repository, but we’ve found some teams appreciate the flexibility to put it elsewhere. The outputPath element tells the pipeline where the compiled output of the build command ends up.
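Because path is null in our samples, a stage script needs a sensible default; jq’s // (alternative) operator handles that in one line. A sketch of how a stage might honor a non-null path:

# fall back to the repository root when build.path is null
build_path=$(jq -r '.build.path // "."' pipeline.json)
cd "${build_path}"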

The test section contains configuration settings for the test stage of the pipeline. The first property, enabled, is used to determine whether this stage is run at all. As your pipeline becomes more robust with additional stages and features, you’ll likely find yourself adding this property to other sections too. It can be very useful to turn portions of the pipeline on and off due to unusual situations or for nonstandard projects. The path property in this section tells the pipeline where the tests are located relative to the project’s root directory.

Caution

If the enabled property is setting off alarm bells, that’s a good thing! It’s useful for teams that don’t yet have unit tests or that need to temporarily disable the stage while working through an issue, but it has the potential to be used as a crutch!

The archive section lets the pipeline know where the build artifacts—in our case the Docker image containing one of our sample applications—should be stored. The registry property indicates which Docker registry will store the image. In this case, we’ll be storing the image on Docker Hub (Docker’s public registry). If your organization is hosting their own internal registry, you’d put that here instead. The repository property is the name of the image and should not include any tags; the pipeline will handle all of the tagging automatically. Your organization may require you to store application binaries and images separately. In that case you can modify this section to represent an array of artifact repositories.
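For instance, if you replaced archive.repository with a hypothetical archive.repositories array, the archive stage could build once and then tag and push to each entry. A sketch under that assumption:

# hypothetical: archive.repositories is an array rather than a single string
registry=$(jq -r .archive.registry pipeline.json)
docker build -t app-image .
for repository in $(jq -r '.archive.repositories[]' pipeline.json); do
  docker tag app-image "${registry}/${repository}:latest"
  docker push "${registry}/${repository}:latest"
done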

The Clone Stage

Typically, the first step in any pipeline is to download a copy of a project’s source, and this pipeline is no different. If you haven’t already, this would be a good time to clone or review the GitHub examples for the book. In the stages directory, you’ll find a file named 01_clone.sh with the following code:
#!/usr/bin/env bash
echo
echo "Cloning Application"
git clone "${GITHUB_URL:?}" .
Listing 4-4

The Clone Stage Shell Script

There’s not much happening in this stage. The script outputs a brief description of the stage and then performs a git clone, which places the application’s source code in the current working directory. Notice we’re using an environment variable here—GITHUB_URL. This variable—and others like it in subsequent stages—is expected to exist wherever the pipeline is running. Stages will also source information from the pipeline.json file, which we covered in detail earlier in this chapter; you’ll see an example of this in the next stage.
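The :? in ${GITHUB_URL:?} is standard shell parameter expansion: if the variable is unset or empty, the script aborts immediately rather than calling git with a blank URL. It’s the same expansion that produces the “parameter null or not set” error you’ll see later in the chapter. A quick illustration:

# in a script, this aborts with "GITHUB_URL: parameter null or not set"
# when the variable is unset or empty; otherwise it expands normally
unset GITHUB_URL
echo "${GITHUB_URL:?}"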

The Build Stage

In the second stage the pipeline will build the application. Each tech stack will be handled differently, but the end goal is the same: to create an artifact we can deploy inside a container. In the stages directory, you’ll find a file named 02_build.sh with the following code:
#!/usr/bin/env bash
echo
echo "Building Application"
application_type=$(jq -r .application.type pipeline.json)
case "${application_type}" in
  "java")
    mvn clean package
    ;;
  "netcore")
    dotnet restore
    dotnet build -c Release
    ;;
  "node")
    npm install
    npm run build
    ;;
  *)
    echo "Unable to build application type ${application_type}"
    exit 1
    ;;
esac
Listing 4-5

The Build Stage Shell Script

This script uses jq to pull the value of application.type out of the pipeline.json file and assign it to application_type. This should be set to java, netcore, or node if the pipeline.json file is configured correctly. If application_type doesn’t match one of these values, the script terminates with exit code 1.

Note

jq is a fantastic tool that’s packed with functionality! You can find a great tutorial on the official website at https://stedolan.github.io/jq/ .
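For example, run against the configuration in Listing 4-1, the -r flag tells jq to print raw (unquoted) output:

jq -r .application.type pipeline.json     # prints: java
jq -r .archive.repository pipeline.json   # prints: edwardsdl/sample-java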

For java applications, we’ll use Maven to clean the workspace, which ensures there aren’t any cached or outdated files lying around, and then compile the application. It’s important to remember that we require all java apps that come through the pipeline to support Maven.

If the application_type is netcore, we’ll use the .NET Core CLI to perform a NuGet package restore by issuing the command dotnet restore. Then, we’ll call dotnet build -c Release to compile the source code using the Release configuration.

Finally, if application_type is set to node, we’ll use the Node Package Manager to download and install any required dependencies. After that, we issue the npm run build command to compile everything.

The Test Stage

The third stage of the pipeline handles test execution. You can find it in 03_test.sh.
#!/usr/bin/env bash
echo
echo "Testing Application"
application_type=$(jq -r .application.type pipeline.json)
enabled=$(jq -r .test.enabled pipeline.json)
test_path=$(jq -r .test.path pipeline.json)
if ! "${enabled}"
then
  echo "Skipped"
  exit 0
fi
case "${application_type}" in
  "java")
    mvn test
    ;;
  "netcore")
    # The path to the test project must be set until
    # https://github.com/Microsoft/vstest/issues/1129 is
    # resolved.
    dotnet test "${test_path}"
    ;;
  "node")
    npm run test
    ;;
  *)
    echo "Unable to test application type ${application_type}"
    exit 1
    ;;
esac
Listing 4-6

The Test Stage Shell Script

The test stage uses the same pattern as our build stage, with one exception: before kicking off any tests, it checks the test.enabled property in the pipeline configuration file. If it’s set to false, the stage is skipped.

Assuming the stage is enabled in the configuration file, application_type is evaluated and the appropriate command is executed to run the test suite. As in the build stage, unknown application types cause the stage to fail with error code 1.

Using our sample projects, implementing this stage turns out to be fairly trivial. In our experience though, it tends to grow in complexity and even spawn completely new stages. For example, your organization may want to report on code coverage or require a certain percentage of tests to pass. Perhaps your teams have various suites of tests in several different repositories. If your teams write both unit tests and end-to-end tests, it might make sense to keep them in separate stages.

The Archive Stage

Now that the project has been built and tested successfully, it’s time to package it into an artifact and place it somewhere safe. In your organization this might be Artifactory, Nexus, GitHub, or any number of other repositories. For our sample projects, we are using Docker Hub. If you’re unfamiliar with it, Docker Hub is simply a free, public registry where anyone can store Docker images. The archive stage can be found in 04_archive.sh.
#!/usr/bin/env bash
echo
echo "Archiving Application"
registry=$(jq -r .archive.registry pipeline.json)
repository=$(jq -r .archive.repository pipeline.json)
image="${registry}/${repository}:latest"
docker login \
    -u "${DOCKER_USERNAME:?}" \
    -p "${DOCKER_PASSWORD:?}" \
    "${registry}"
docker build -t "${image}" .
docker push "${image}"
Listing 4-7

The Archive Stage Shell Script

Like the build and test stages, the archive stage starts off by pulling some information out of the pipeline configuration file. We get the registry and repository values and then combine them to get the desired name of the image containing the application. For now, we’re just applying the latest tag, but in the next chapter we’ll discuss versioning your artifacts so they won’t be overwritten and can be uniquely identified later.

Now that we have a name, we can build and push the image. The first step is to log in to the registry specified in the pipeline configuration file. Like GITHUB_URL, the DOCKER_USERNAME and DOCKER_PASSWORD variables will be passed into the container as environment variables. Because these are credentials and thus sensitive information, they shouldn’t be stored in the pipeline configuration file.

Assuming we were able to log in successfully, the next step is to build the image. As we mentioned earlier, each of our sample applications is designed to be deployed inside a container, so each has an associated Dockerfile. Our pipeline expects each application’s Dockerfile to be located in the project’s root directory. Of course, you could always offer more flexibility by introducing a new docker.dockerfilePath variable in your pipeline configuration file; in our experience, however, that hasn’t proved necessary. To kick off the build, we issue the docker build command, passing the name of the image and the build context.
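If you did introduce such a variable, the change would be a couple of lines; a sketch, with docker.dockerfilePath as the hypothetical setting:

# hypothetical: fall back to the root Dockerfile when docker.dockerfilePath is absent
dockerfile_path=$(jq -r '.docker.dockerfilePath // "Dockerfile"' pipeline.json)
docker build -f "${dockerfile_path}" -t "${image}" .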

The Deploy Stage

Finally, we reach the deploy stage. There are far too many deployment targets to cover here; every organization is different. For the purposes of this book, 05_deploy.sh will “deploy” to your local machine by simply running the newly created image.
#!/usr/bin/env bash
echo
echo "Deploying Application"
container_port=$(jq -r .deploy.containerPort pipeline.json)
registry=$(jq -r .archive.registry pipeline.json)
repository=$(jq -r .archive.repository pipeline.json)
image="${registry}/${repository}:latest"
docker run -dp "${container_port}:${container_port}" "${image}"
Listing 4-8

The Deploy Stage Shell Script

As mentioned before, each of our example apps is designed to be deployed inside a container. To do that, we need a few pieces of information from the pipeline configuration file: deploy.containerPort, archive.registry, and archive.repository. We put the last two together to form an image name, for example, docker.io/edwardsdl/sample-netcore:latest. This is the image we created in the archive stage. Next, we execute the command docker run -dp "${container_port}:${container_port}" "${image}". This runs the latest version of the image containing our application in “detached” mode—meaning in the background—and publishes the port the application is listening on. Once this is done, the deploy script terminates, which, since it is the last of the stage scripts, causes the pipeline container to exit.

Tip

While containerizing your applications isn’t necessary to create a generic pipeline, it certainly makes things easier. If your organization hasn’t explored the idea of containerized applications, I highly recommend doing so.

A Look at Our Build Containers

One of the key features of our pipeline is that it executes entirely inside a Docker container. This allows us total control over our build environment. We can add or update dependencies easily, set environment variables as needed, or install software in custom locations—all without interfering with other applications’ build environments or workflows. Of course, in order to have a build container you must have a Dockerfile. In this section we’ll take a look at the three Dockerfiles we use for each of our three tech stacks.

First let’s look inside Dockerfile.java. As you may have guessed, this is the Dockerfile we use to construct the image for our Java build containers.
FROM maven:3-jdk-8
RUN curl -fsSL get.docker.com | sh
RUN apt-get update && apt-get install -y jq zip
COPY stages stages
WORKDIR /app
Listing 4-9

The Dockerfile for the Java Build Container

The Dockerfile is relatively simple. We use maven:3-jdk-8 as our base image because it comes out of the box with both the Java 8 JDK and Maven 3. Admittedly this base image makes the container a little bloated, but in our experience these containers tend to get fairly large anyway, so it’s not worth worrying about a few extra megabytes.

Next, we install Docker inside the container. That probably sounds strange—it did to us the first time too! The reason is simple: our applications are designed to be deployed as containers and thus have Dockerfiles themselves. That means we need to issue docker build and docker push commands from inside our build containers.

Caution

We chose to install Docker using this method because it’s concise and easy to understand. However, it’s never a good idea to run scripts without examining them first. You can find a more secure method for installing Docker at https://docs.docker.com/install/ .

Note

Using Docker inside of Docker is becoming a very common scenario. However, if you are new to this concept you can learn more here: https://blog.docker.com/2013/09/docker-can-now-run-within-docker/ .

The third line of our Dockerfile installs two packages: jq and zip. The first, jq, is a command line tool that’s great for parsing and transforming JSON data. It’s used extensively inside our stage scripts, as you saw earlier in the chapter. The second is zip. I’m sure you can guess what that does.

Next, we copy the pipeline stages into the container. Be aware: by copying your code in now, you’ll be required to recreate your build images when your stage scripts change. In your implementation you may decide to clone your stages into your container when it starts up. You’ll always be running the latest code, but it’s more difficult to determine which version of the pipeline created a given artifact.
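If you go the clone-at-startup route, the image’s entrypoint might look something like the following sketch, where STAGES_REPO_URL is a hypothetical variable pointing at the repository holding your stage scripts:

#!/usr/bin/env bash
# entrypoint sketch: fetch the current stage scripts at startup instead of baking them in
git clone "${STAGES_REPO_URL:?}" /stages
exec /stages/00_run.sh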

Finally, we set our working directory to /app. The pipeline will use this as its primary workspace. Application code will be cloned, built, tested, and packaged all within this directory.

We use Dockerfile.netcore to create the build container for .NET Core projects; it looks very similar to its Spring Boot counterpart. In this case we use microsoft/dotnet:2-sdk as our base image instead of maven:3-jdk-8. Otherwise, this file is exactly the same.
FROM microsoft/dotnet:2-sdk
RUN curl -fsSL get.docker.com | sh
RUN apt-get update && apt-get install -y jq zip
COPY stages stages
WORKDIR /app
Listing 4-10

The Dockerfile for the .NET Core Pipeline Image

The Dockerfile for the Angular tech stack follows the same pattern as the previous two. In this case our base image will be node:9-stretch, which uses a recent version of Debian Linux and provides easy access to Node.js and NPM. We also install two additional dependencies: chromium and chromium-driver. These are used by our application’s test suite. We’ve named this file Dockerfile.node.
FROM node:9-stretch
RUN curl -fsSL get.docker.com | sh
RUN apt-get update && apt-get install -y \
    chromium \
    chromium-driver \
    jq \
    zip
COPY stages stages
WORKDIR /app
Listing 4-11

The Dockerfile for the Angular Pipeline Image

Running the Pipeline

Now that we’ve gone over the sample applications, stage scripts, and Dockerfiles, it’s time to run our pipeline. To begin, we’ll run it locally. Afterwards we’ll show you how to port it to several popular CI tools.

Before you move on, we suggest you fork one of our sample projects. These applications have been thoroughly tested, and with a few simple modifications you’ll be able to run them through the pipeline locally and in the cloud. In addition, as part of the sign-up process, both Travis CI and CircleCI will request access to your GitHub account in order to streamline the setup process and start builds when new code is committed to a linked repository. In the end, we think it’ll be easier for you to use one of our applications than build your own.

For the rest of the chapter we’ll be using our sample .NET Core project. You can find it at https://github.com/Apress/generic-pipelines-using-docker . If you’re using GitHub, forking a sample project is easy! Just navigate to its repository on GitHub and click the “Fork” button in the top right. This will create a copy of the repository in your account.

Using the Command Line

Running the pipeline from the command line is pretty straightforward. First you build the Docker image for the pipeline you want to use by issuing the docker build command. Next you run the image passing along all of the required arguments to mount the Docker socket and set the appropriate environment variables. For example, if you want to build the sample .NET Core app, you’d issue the following commands from the root of the directory containing your application:
docker build -t pipeline -f <Dockerfile> .
docker run \
  -v /var/run/docker.sock:/var/run/docker.sock \
  --env GITHUB_URL=https://github.com/edwardsdl/sample-netcore.git \
  --env DOCKER_USERNAME=AzureDiamond \
  --env DOCKER_PASSWORD=hunter2 \
  pipeline \
  /stages/00_run.sh
Listing 4-12

Building and Running the Pipeline

Obviously, the values for GITHUB_URL, DOCKER_USERNAME, and DOCKER_PASSWORD are placeholders. You need to replace them with the URL of your fork and your Docker Hub credentials. Remember to use your Docker repository URL as well! Once you’ve run the docker run command, you’ll see the pipeline go through each of the stages we described earlier. Your output should look like that in Figure 4-4.
Figure 4-4

Running the pipeline from the command line

In its final stage, the pipeline will “deploy” the sample .NET Core application locally. When you issue the docker ps command as in Figure 4-5, you’ll see you have one container running—the one running the sample application! Using your browser, navigate to http://localhost:5000/api/values to verify it’s working.
Figure 4-5

Inspecting the deployed application
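The same check works from the terminal:

# confirm the container is up
docker ps
# hit the endpoint the sample application exposes
curl http://localhost:5000/api/values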

Using IntelliJ IDEA CE

If you’re more comfortable using an IDE, IntelliJ IDEA has wonderful support for building Docker images through its “Docker Integration” plugin, which is incredibly helpful. Setup is a little more involved, but once you’re done you have a powerful development environment at your fingertips.

Tip

If you run into trouble when adding new Docker configuration profiles, check out JetBrains’ help page at www.jetbrains.com/help/idea/run-debug-configuration-docker.html .

To begin, we will create a few new configurations in our IDE. In the top right corner, click the “Select Run/Debug Configuration” drop-down box and click “Edit Configurations…” as in Figure 4-6.
Figure 4-6

Creating a new configuration

On the Run/Debug Configuration window, create a new Docker configuration by clicking on the “Add New Configuration” button in the top left. Under the “Docker” menu item, select “Dockerfile” as shown in Figure 4-7.
Figure 4-7

Adding a new Dockerfile configuration

The first configuration we’ll create will allow us to run the .NET Core pipeline. We’ll name this new configuration, “Run .NET Core Pipeline.” In the Dockerfile drop-down box, select “Dockerfile.netcore.” Now, select the checkbox labelled “Run built image” and set the container name to netcore-pipeline. Next, in the executable section, set the command to /stages/00_run.sh.

Now we need to mount the host’s Docker socket inside the container. This is what makes it possible for us to execute Docker commands inside the container. To do this, click the button labelled, “…” to the right of the bind mounts textbox. In the “Bind Mounts” window, add a new bind mount setting both the host path and the container path to /var/run/docker.sock (Figure 4-8).
Figure 4-8

Mounting the Docker socket

Tip

If you want to learn more about the history of issuing Docker commands inside a container and mounting the Docker socket, Jérôme Petazzoni has written an excellent blog post at https://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/ .

Next, we’ll need to add a few environment variables (Figure 4-9), specifically GITHUB_URL, DOCKER_USERNAME, and DOCKER_PASSWORD. The value of GITHUB_URL will be the URL of your forked repository, and DOCKER_USERNAME and DOCKER_PASSWORD will be your Docker Hub username and password, respectively. The values we use here are placeholders and won’t work for you.
Figure 4-9

Adding environment variables

Note

If you have forked the sample GitHub repo, remember to use your GITHUB_URL in these examples!

After you’ve added the environment variables, click the OK button to return to the Run/Debug Configurations screen. Confirm that your settings look like those in Figure 4-10, and then click “Apply.”
Figure 4-10

Adding the Run .NET Core Pipeline configuration

Next, we’ll repeat this process to create additional configuration profiles for our Java and Node pipelines. Most of the process is identical, but you’ll want to be sure to choose the correct Dockerfile and GITHUB_URL values. Reference Figures 4-11 and 4-12 to ensure your settings are correct.
Figure 4-11

Adding the Run Java Pipeline configuration

Figure 4-12

Adding the Run Node Pipeline configuration

Once you’ve added the last configuration profile, click the OK button to close the window. Now the Select Run/Debug Configuration drop-down box should contain three items: Run .NET Core Pipeline, Run Java Pipeline, and Run Node Pipeline (Figure 4-13).
Figure 4-13

Listing the newly created configuration profiles

Moving to the Cloud

Now that we’ve seen the pipeline work on our local machine, it’s time to get it working using a real continuous integration tool. We’ll start by forking one of our sample projects in GitHub. Next, we’ll show you how to run our pipeline in Travis CI by converting the 00_run.sh script to a .travis.yml file. Finally, we’ll walk you through porting our pipeline from Travis CI to CircleCI.

Moving the Pipeline to Travis CI

As mentioned, we’ll be showing you how to use the pipeline with two continuous integration platforms. Up first, we’ll be looking at Travis CI.

Travis CI was one of the first—if not the first—CI/CD SaaS offering. It provides an intuitive interface, free accounts for open source projects, good documentation, and a large number of integrations. Because of this, Travis CI is wildly popular, especially amongst open source projects.

Creating a Travis CI Account

Before we can move our pipeline to Travis CI, we’ll need to create a new account. If you don’t need help, skip ahead to the next section.

Open a browser and navigate to https://travis-ci.org , then click the button labeled, “Sign Up” (Figure 4-14). If prompted, enter your GitHub username and password. If you are already signed in to GitHub, you won’t be asked to do so again.

Caution

Check that top-level domain! You want https://travis-ci.org not https://travis-ci.com . The latter is for paid projects only!

Figure 4-14

The Travis CI homepage

Grant Travis CI access to your email address and permission to add new webhooks to your repositories. The service will use these to help you set up new builds, to trigger builds when new code is committed to your repositories, and to notify you when things go wrong.

After granting Travis CI access, you’ll be dropped on a “Getting Started” page. Take a minute to read through this page. It details the steps required to add a new repository and start building it.

Adding a New Repository

Now that your account has been created, you’re ready to add a new repository to Travis CI. This is where it all comes together. Once you’re done setting up the repository, you’ll get to see the generic pipeline in action.

Head to your profile page by using the link in the instructions. Alternatively, you can click the “Profile” link in the drop-down menu located in the top right of the navigation bar, as shown in Figure 4-15.
Figure 4-15

The “Getting Started” page

On your profile page, search for “sample.” This will return a list of any of our sample projects you forked. Click the gray toggle switch to the left of “sample-netcore” to allow Travis CI to integrate with the repository (Figure 4-16).
Figure 4-16

Adding the .NET Core sample repository to Travis CI

Now click on the “Settings” button to the right of the toggle switch to navigate to the settings for this repository. Before we start the first build, we need to tell Travis CI what environment variables to pass to our container (Figure 4-17). These will be the same values you used when running the pipeline locally: DOCKER_USERNAME, DOCKER_PASSWORD, and GITHUB_URL.
Figure 4-17

Adding environment variables to the build

Caution

Make sure to toggle the “Display value in build log” switch to the OFF position for the DOCKER_PASSWORD environment variable. You don’t want your password showing up in the build log!

Now everything is ready for us to kick off our first build! Click the “More options” button on the top right side of the page and select “Trigger build” from the drop-down menu (Figure 4-18).
Figure 4-18

Triggering the first build

You’ll be prompted to select a branch and add a custom commit message and custom configuration. Essentially, Travis CI is simulating a commit to your Git repository; nothing is actually pushed. This is an incredibly useful feature, especially when you’re getting started! It allows you to quickly test changes to your configuration file without having to go through the change, commit, push loop over and over again. For now, just click the “Trigger custom build” button at the bottom (Figure 4-19).
Figure 4-19

Triggering a custom build

Note

For more information about this feature, check out the blog post announcing its release at https://blog.travis-ci.com/2017-08-24-trigger-custom-build .

You’ll be redirected to a page showing you a detailed, real-time status of your first build (Figure 4-20). If you scroll through the logs at the bottom, you’ll notice some familiar messages! The application is being cloned, built, tested, and archived just like it was when you ran the pipeline locally!
Figure 4-20

Running the pipeline in Travis CI

A Look at the Travis CI Configuration File

So how does Travis know how to run our pipeline? It uses the .travis.yml file located in the root of our Git repository. Let’s take a look at what’s inside!
services:
  - docker
before_script:
  - |
    docker run -it -d \
      -v /var/run/docker.sock:/var/run/docker.sock \
      -e DOCKER_USERNAME=${DOCKER_USERNAME} \
      -e DOCKER_PASSWORD=${DOCKER_PASSWORD} \
      -e GITHUB_URL=${GITHUB_URL} \
      --name netcore-pipeline \
      edwardsdl/netcore-pipeline:latest
script:
  - docker exec netcore-pipeline /stages/01_clone.sh
  - docker exec netcore-pipeline /stages/02_build.sh
  - docker exec netcore-pipeline /stages/03_test.sh
  - docker exec netcore-pipeline /stages/04_archive.sh
Listing 4-13

The Travis CI Configuration File

It turns out the configuration file looks pretty similar to our 00_run.sh script. That’s by design! One of the primary benefits of this architecture is the ease with which you can move from one CI platform to another.

The services section describes any custom services—like MongoDB, Memcached, or RabbitMQ—your build requires. Travis CI will include these in your build environment. In our case, we ask that Docker be installed.

The before_script section lets us run any last-minute commands before the build really gets started. We’ll use it to pull and run the latest version of the netcore-pipeline Docker image. Just like when we ran it locally, we mount the Docker socket and pass the DOCKER_USERNAME, DOCKER_PASSWORD, and GITHUB_URL environment variables to the container.

Note

You may have noticed our docker run command is preceded by a vertical bar (|). This is called the literal block scalar style and it allows our command to span multiple lines. If you’re fascinated by formal language grammars, check out the YAML specification at http://yaml.org/spec/1.2/spec.html#id2795688 .

The script section is where we instruct Travis CI to run the stages we’ve included inside the container. Starting at 01_clone.sh, we simply work our way through each script until we’re done. If any stage fails, Travis will stop execution and mark the build as failing.

Note

For more information about configuring your Travis CI build, visit https://docs.travis-ci.com/user/customizing-the-build/ .

Running the Pipeline in CircleCI

It’s not unusual for organizations to transition from one CI/CD platform to another. Even upgrading from one version to another can be a huge undertaking. In this section, we’ll see what it takes to move our pipeline from Travis CI to CircleCI.

CircleCI is one of the world’s most popular CI/CD platforms. Like Travis CI, CircleCI is hosted in the cloud, offers free accounts, and is very easy to get up and running. On top of all that, it’s arguably an even better fit for our pipeline than Travis CI, as it was built with containerized pipelines in mind!

Creating a CircleCI Account

Tip

For more information about getting started with CircleCI, visit the 2.0 documentation page at https://circleci.com/docs/2.0/ .

This section will guide you through the process of creating an account. The process is fairly straightforward and very similar to that of Travis CI. If you’ve got experience working with CircleCI or are confident you don’t need help, feel free to skip to the next section.

To begin, open a browser and navigate to https://circleci.com . In the top right corner, click the button labelled “Sign Up” (Figure 4-21).
Figure 4-21

The CircleCI home page

CircleCI will ask you to decide whether you want to sign up using GitHub or BitBucket (Figure 4-22). We’ll be using GitHub; but if you want to use BitBucket, it should be easy to follow along, as the process is almost identical.
Figure 4-22

Signing up with CircleCI

If prompted, enter your GitHub username and password (Figure 4-23). If you are already signed in to GitHub, you won’t be asked to do so again.
Figure 4-23

Signing up using GitHub

Next, CircleCI will request access to your email address and read access to your repositories (Figure 4-24). Like Travis CI, this information will be used to help set up your projects and notify you when your builds break.
Figure 4-24

Authorizing CircleCI to access GitHub repositories

Creating a New CircleCI Project

After creating your account, you’ll be sent to your dashboard. This is where you’ll go to view the latest information on all your builds. At the moment, however, you have no projects. Instead of build information, you’ll be presented with a page welcoming you to the platform and directing you to add a new project (Figure 4-25). That sounds like a great idea!
Figure 4-25

The builds screen

To begin, click the blue “Add projects” button in the center of the page. This will take you to a list of all the repositories in your GitHub or BitBucket account.

In the list of repositories, find the fork you created of the sample .NET Core repository and click the “Set Up Project” button (Figure 4-26).
Figure 4-26

Adding a new project

Tip

Make sure the “Show Forks” checkbox is selected, otherwise your repository won’t show up in the list.

On the Add Project screen, select Linux as the operating system and choose “Other” under the Language section (Figure 4-27). Out of the box, CircleCI comes with the ability to intelligently build and test projects written using some popular technologies. While this is a wonderful feature, we won’t be using it for this or any of our other sample applications. All of our projects include a configuration file that tells CircleCI exactly what commands to execute. We’ll take a closer look at this file a little later.
Figure 4-27

Setting up the sample .NET Core project

After configuring the project, click the “Start building” button to create the CircleCI project and start a build.

Once you’ve started the build, you’ll be taken to a new page showing you its status (Figure 4-28). The build starts by pulling down our pipeline image, in this case edwardsdl/netcore-pipeline:latest, and then it begins executing the instructions found in the sample project’s config.yml file. Almost immediately, however, the build fails! Checking the output of the Clone section makes the problem obvious: stages/01_clone.sh: line 6: GITHUB_URL: parameter null or not set. We never set the GITHUB_URL environment variable!
Figure 4-28

The first build of the sample .NET Core project

To fix this, you’ll need to navigate to the build settings for the project and set a few environment variables (Figure 4-29). Go to the settings page by clicking the button with the gear-shaped icon at the top right. Then click the “Environment Variables” link under the “Build Settings” section (Figure 4-30). Next use the “Add Variable” button to add three new environment variables: GITHUB_URL, DOCKER_USERNAME, and DOCKER_PASSWORD. Unlike Travis CI, the values you set here cannot be exposed in plain text.
Figure 4-29

Projects page in Circle CI

Figure 4-30

Adding environment variables

Now that we’ve set our environment variables, go back to the list of builds by clicking the “Builds” button at the top of the navigation bar on the left side. You should see a single row giving a summary of our failed build (Figure 4-31). Find the “rebuild” link on this row and click it.
Figure 4-31

Rebuilding the project

After you click the “rebuild” link, you’ll be taken back to the build details page (Figure 4-32). This time the build should complete successfully! If you examine the build actions list, the output should look familiar. All of our stages are listed, and the output is the same as we saw when running the pipeline locally!
Figure 4-32

Running the job a second time

A Look at the CircleCI Configuration File

So how did CircleCI know how to execute our stages? Take a look at one of our sample projects and you’ll find a hidden directory named .circleci. Inside this directory you’ll find a single file named config.yml. This is the CircleCI configuration file, and it tells CircleCI everything it needs to know. In the sample .NET Core app, this YAML file looks like this:
version: 2
jobs:
  build:
    docker:
      - image: edwardsdl/netcore-pipeline:latest
    steps:
      - setup_remote_docker:
          docker_layer_caching: true
      - run:
          name: Clone
          command: /stages/01_clone.sh
      - run:
          name: Build
          command: /stages/02_build.sh
      - run:
          name: Test
          command: /stages/03_test.sh
      - run:
          name: Archive
          command: /stages/04_archive.sh
Listing 4-14

The sample-netcore config.yml File

The file starts by specifying the Docker image to use when executing the pipeline steps. Those steps are defined in the next section. Most of this should look familiar—the clone stage is run first, followed by the build, test, and archive stages. The setup_remote_docker step is new though. This is what allows us to run Docker commands inside our container.

Note

You can find more information about CircleCI’s config.yml file at https://circleci.com/docs/2.0/configuration-reference/ .

Overview

In this chapter we took a deep dive into all the components necessary to build a simple, albeit fully functional, generic pipeline. We gave examples of several critical pipeline stages: clone, build, test, archive, and deploy. We then took a look at the Dockerfiles we use to create our build environments.

We then put it all together and used our pipeline to build and deploy several sample applications on our local machines. Taking it one step further, we showed you how easy it is to migrate the pipeline from your local machine to Travis CI and then to CircleCI.

In the next chapter, we cover more advanced topics and show you how to tackle some of the problems that tend to show up in real-world implementations.
