© Brandon Atkinson, Dallas Edwards 2018
Brandon Atkinson and Dallas Edwards, Generic Pipelines Using Docker, https://doi.org/10.1007/978-1-4842-3655-0_3

3. Getting it Right with Docker and Scripts

Brandon Atkinson1  and Dallas Edwards2
(1)
North Chesterfield, VA, USA
(2)
Midlothian, VA, USA
 

Up until now we've looked at pipelines that work with monoliths and microservices. We've explored the challenges that come with both and seen how microservices can ease your pipeline workload. In the previous chapter we saw how you can build language-specific pipelines. These implementations allow multiple teams to take advantage of a single pipeline. They also push you to set and enforce development standards, which in turn allows for code reuse across your pipelines. We'll explore this concept much more deeply in later chapters with code examples.

Language-specific pipelines are great; they allow the DevOps team to focus on fewer pipelines by giving feature teams a shared implementation. They lower maintenance effort, allow all teams to share in upgrades and improvements, and set standards for how applications are deployed. While all this is great, we can do better. Imagine if you could build a single pipeline that could deploy any application regardless of the language it was written in. How much time could your DevOps team get back if they only had to support one implementation?

In this chapter we’ll begin laying the foundations of the generic pipeline using Docker. You’ll learn the pattern and process that allows a truly generic pipeline to work. We’ll explore how this pattern is driven by Docker and frees you from the underlying platform the pipeline runs on. At the end of this chapter you’ll be ready to start coding an implementation.

One Pipeline to Rule Them All

Language-specific pipelines are a great first step. They allow you to break free of application-specific implementations and begin to serve a larger base of teams simultaneously. However, as your feature teams begin to fully embrace microservices, it stands to reason they will begin to embrace other languages as well. You may start off as a 100% .NET or Java shop and slowly turn into a true polyglot shop. This may not be a huge concern if you're supporting a few languages, but what about five, six, or seven? This may seem like a lot, but it's not. I have worked at organizations that had to support the following languages:
  • User Interface: Angular and React

  • Server Side: .NET Core, Java, Node.js, Python, Golang

Microservices give feature teams the flexibility to try out new languages and patterns. They allow for choosing the right language for the job as well. Take .NET and SQL Server for instance. Let’s imagine you’ve been writing back-end services in Node.js and now must write a new API that connects to SQL Server. You could use Node.js for this task; however, .NET has built-in functionality to do just that. You can very quickly spin up a .NET Core microservice that handles CRUD operations against your SQL Server. It would be faster to write and less error prone, since it’s inside the .NET ecosystem.

Another scenario involves hiring new talent. Suppose you hire an amazing developer who, upon joining, talks about writing microservices in Golang, since it's easy to write, easy to learn, and compiles and deploys very fast. You try it out and like it. Suddenly Golang services are springing up all over the place. The moral of the story is that with microservices, developers feel free to try out new languages or stray from the standard when another language provides benefits.

As more and more languages come on the scene, you’ll find yourself writing more and more pipeline implementations. If you’re in an organization that supports seven languages, that would be seven pipelines. Even in a simple scenario, you’d have two languages to support, one for the UI and one for the server side. That’s one too many! A better solution is a single pipeline that is built to handle any language that your teams work with.

In the previous chapter we looked at shared steps of a pipeline, as shown in Figure 3-1.
Figure 3-1

Shared steps of a CI/CD pipeline

A typical pipeline may consist of the following steps:
  • Build

  • Unit Test

  • Static Code Scan/Security Scan

  • Packaging/Publishing of Artifacts

  • Deploying

  • End to End Tests

  • Performance Tests

How mature your organization and CI/CD efforts are will determine how many steps you have, but these are a good target. Upon close examination, most of these steps are not specific to a language. For instance, if you're using commercial software for static code analysis, it will usually work with a wide variety of languages. Packaging of artifacts usually involves zipping up binaries and shouldn't be any different across applications. The main differences between applications occur during the first two steps: build and unit test. Given this, we can reuse most of our code across steps and consolidate the remaining ones using logic. For instance, Figure 3-2 shows a fully shared code base across all steps.
Figure 3-2

Pipeline stages using shared code

In this scenario, the entire pipeline is now shared across all languages. Logic is included in the language-specific steps to perform the appropriate commands. This can be accomplished via if/else blocks, switch/case statements, etc. At first glance this may seem like a lot of logic code, especially if you're dealing with many languages. However, earlier in the book we discussed implementing development standards in your shared pipeline. In this scenario you'd have established a common set of build commands for each language. For instance, in .NET you'd use the standard "dotnet build" command. These will generally be the built-in, out-of-the-box commands for each language. The goal is to set a standard that all teams can follow without being overly complicated. Listing 3-1 shows an example build step.
if [ "$language" = "dotnetcore" ]
then
    dotnet build -c Release -o /output
elif [ "$language" = "java" ]
then
    mvn package -s settings.xml -f /
elif [ "$language" = "angular" ]
then
    npm run build
else
    echo "Error: No valid language provided."
fi
Listing 3-1

Sample Build Step in a Generic Pipeline

This example shows how you can pass in a simple variable, “language” in this case, to inform the step which language it is working with. An if/else statement then allows you to perform the build command for that language. Listing 3-2 shows the same example using a switch/case statement.
case $language in
"dotnetcore")
      dotnet build -c Release -o /output
      ;;
"java")
      mvn package -s settings.xml -f /
      ;;
"angular")
      npm run build
      ;;
*)
      echo "Error: No valid language provided."
      ;;
esac
Listing 3-2

Sample Build Step Using a Switch/Case Statement

In this example we’re covering three languages in the “build” step: Angular, .NET Core, and Java. The commands in each section are admittedly short and probably not a full set of the commands you may run. For instance, a proper Angular build section may look more like Listing 3-3.
"angular")
      npm set progress=false
npm install --registry https://registry.npmjs.org/
      npm run build
      ;;
Listing 3-3

A More Complete Angular Build Step

This case statement has three times the amount of code in it, but it’s still small for all intents and purposes. Even if you need to do more complex things for a specific language, you will most likely not end up with so many lines that it’s unmanageable. This is where setting standards for languages becomes important. To build a truly language-agnostic pipeline that all teams can use, you need high-quality standards. We’ll continue to explore this throughout the book; however, in this case your Angular build standard would be:
  • Run “npm set progress=false” to turn off progress bars.

  • Run “npm install” to restore packages.

  • Run “npm run build” to build the project.

  • No other commands will be run.

With this build standard in place, all teams would need to conform to it in order to use your pipeline. This may seem harsh, but there are a lot of benefits to it. Teams that share common standards for how they build and test can debug problems more rapidly and assist other teams, and it simplifies your work in the pipeline. Now of course, there will always be outliers who need to do things their own way. In those cases it's important not to deviate from the standard. You could explore things like a custom pipeline for that application, or provide hooks into the pipeline to allow teams to override your steps. In most cases you will need to hold strong on the standards and say no to a lot of requests.
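If you do decide to offer hooks, they can be as simple as a convention. The following is a minimal sketch, not something the pipeline requires: it assumes a team may commit a hypothetical custom_build.sh to the root of their repo, and that the standard build.sh (shown later in this chapter) takes the language as its first argument.
#!/usr/bin/env sh
# Hypothetical hook: if the cloned repo contains a custom_build.sh, run it;
# otherwise fall back to the standard, language-based build step.
if [ -f "./custom_build.sh" ]
then
    echo "Using team-provided custom_build.sh instead of the standard build"
    sh ./custom_build.sh
else
    sh ./build.sh "$1"
fi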

Shell Scripts

For your pipeline to be truly generic and run anywhere, it needs to be written in a way that is portable. So far we’ve explored writing shared steps. In the previous section we saw how to combine commands from different languages into a single step. In case you haven’t noticed, those commands were all written in Shell. For this book we’ve chosen Shell scripts to execute all our commands.

Shell scripts were an easy choice for a variety of reasons. All our examples are written in languages that run in a Linux environment, including .NET Core, Angular, and Java. This fits naturally with a book about writing pipelines with Docker. While Docker can run on Windows, I would argue most organizations don't utilize this option, especially those running on Amazon Web Services using Elastic Container Service (ECS) or on Google Kubernetes Engine in Google Cloud Platform. A Shell environment will be present in all our containers without any additional installations, which is nice!

No matter which container we're working in, we're confident a Shell environment is available. However, this is not the case with every shell. Bash, for instance, is a very popular shell, but if you're new to Docker you'll quickly find out that it's not always available. Alpine containers, for example, which are based on a lightweight Linux distribution built around "musl libc" and "busybox", have a Shell environment but do not come with Bash installed.
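If you have Docker installed locally, you can verify this for yourself. The following one-liner, using the public alpine image as an example, shows that sh is available while bash is not:
# Quick check: sh exists in the alpine image, bash does not
docker run --rm alpine sh -c 'command -v sh; command -v bash || echo "bash: not found"'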

While we have chosen Shell scripts for this book, there is no reason you cannot deviate from this and use another scripting language. For instance, you can use Bash or Python just as easily. The only caveat is you will need to ensure those runtimes are installed in your Docker image. In fact, if you happen to be using Windows containers, there is no reason you can’t follow along with us. You’ll just be using PowerShell in your containers, most likely.

Let’s take the example from the previous section and reimagine it as a Shell script for the build step. Listing 3-4 shows what this might look like.
#!/usr/bin/env sh
case $1 in
"dotnetcore")
      dotnet build -c Release -o /output
      ;;
"java")
      mvn package -s settings.xml -f /
      ;;
"angular")
      npm run build
      ;;
*)
      echo "Error: No valid language provided."
      ;;
esac
Listing 3-4

Build Step as a Shell Script

This example looks almost identical to Listing 3-2, with a couple of small but important changes. First, we've included a shebang as the first line of the file to indicate this is a Shell script. This would change based on the language you're using; for instance, if you wanted to use Python 3 your shebang would be #!/usr/bin/python3. Next, we've replaced the "$language" variable with "$1", indicating we're passing the language in as an argument. While this is still a very simple implementation of a build step, these two changes make it a fully functional step. Just save it as "build.sh" and you're ready to use it in your pipeline.
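As a quick usage sketch, assuming the file has been saved as build.sh in the current directory, invoking the step might look like this:
chmod +x build.sh        # make the script executable (only needed once)
./build.sh dotnetcore    # runs the dotnet build command
./build.sh angular       # runs npm run build
./build.sh ruby          # prints "Error: No valid language provided."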

This also opens the door for making your pipeline logic much more modular. Take Listing 3-3 as an example. This was a much more detailed build command for Angular. Granted it is all of three lines, but imagine you have a language that requires 30 lines or more. This is probably more code than you care to have in a switch/case statement. Now that we’re using Shell scripts, we can reimagine that code as a separate script as shown in Listing 3-5.
#!/usr/bin/env sh
npm set progress=false
npm install --registry https://registry.npmjs.org/
npm run build
Listing 3-5

Angular Build Commands in Their Own Shell Script

Now, this script can be saved as "angular_build.sh". All your build logic is now consolidated into a separate script. If you needed to go crazy and have dozens of lines of code, it’s isolated here. This makes writing, maintenance, and debugging much easier. It also begins to open the door for sharing code across multiple pipelines. If we take all our build commands and put them into separate Shell scripts, our build step could be simplified as shown in Listing 3-6.
#!/usr/bin/env sh
case $1 in
"dotnetcore")
      dotnet_build.sh
      ;;
"java")
      maven_build.sh
      ;;
"angular")
      angular_build.sh
      ;;
*)
      echo "Error: No valid language provided."
      ;;
esac
Listing 3-6

Simplified Build Step Shell Script

While we haven't reduced the line count of the file, we've greatly simplified it, making it easier to read and follow. The script is no longer cluttered with code from the various languages. If we need to add another language, we simply write the appropriate build script and add another case to the build step script.
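For example, if a team adopts Golang, the change to the build step could be as small as the case below (golang_build.sh is a hypothetical script that would wrap the standard go build commands):
"golang")
      golang_build.sh
      ;;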

This method introduces shared scripts that can be executed from a step on your CI/CD platform. For instance, let’s imagine that you have multiple lines of business in your organization, each with their own DevOps team. Each LOB has development teams building microservices, and each runs their own CI/CD platform. We’ll also say that all those teams are writing microservices in Node.js. It’s not hard to imagine that each DevOps group has its own pipeline that can build, test, and deploy Node.js services. Each pipeline is essentially doing the same thing, and most likely using almost identical code to do it!

If both teams adopted Shell scripts to build their Node.js services, they could easily share code. In fact, the code in Listing 3-5 could simply be renamed "npm_build.sh" and used for all Node.js applications! Even if each team was using a different platform for their pipelines, running Shell scripts is supported on every major platform.

Configuration Files

For Shell scripts to properly handle multiple languages, you must have some way to inform your pipeline about the application you want to build. You want to be explicit about what you are doing. A configuration file can solve this issue for you. Development teams can place this file in their repo and it would be cloned along with the application when the pipeline executes. It would contain all the information the pipeline would need to execute. Listing 3-7 shows what a simple configuration file may look like.

Note

This book focuses on building applications that are deployed via Docker containers to an orchestration service like Amazon ECS or Kubernetes. As such, the configuration file shown in this chapter is specific to that. A configuration file for your applications may look drastically different.

{
  "application": {
    "name": "Hello App",
    "language": "dotnetcore"
  },
  "build": {
    "path": "",
    "outputPath": "HelloApp/bin/Release/netcoreapp2.0"
  },
  "test": {
    "enabled": true,
    "path": "HelloTests/"
  },
  "archive": {
    "registry": "docker.io",
    "namespace": "YOUR-NAMESPACE",
    "repository": "YOUR-REPO"
  },
  "deploy": {
    "containerPort": 5000
  }
}
Listing 3-7

A Sample Configuration File

Let’s break this down by each section to better understand its makeup:
  • Application: Contains basic information about the application
    • Name: This is a friendly name for the application. This may be its identifier in the UI of the platform or used for reporting.

    • Language: This is the language the application is written in. This is the most important, if not only, variable the pipeline may care about.

  • Build: Contains information about how to build the application
    • Path: This would be the directory path in the cloned repo, in case your application is located somewhere other than root.

    • OutputPath: This tells the pipeline where the built binaries should be placed. It is useful if other stages require the binaries to be placed in specific locations.

  • Test: Contains information about how to execute unit tests
    • Enabled: Would allow the application to bypass a stage

    • Path: Used if the unit tests are not located in the same directory as the application

  • Archive: Contains information about how to archive the built application
    • Registry: The URL to the Docker registry where the application will be pushed

    • Namespace: The namespace in the registry

    • Repository: The repository name under the namespace

  • Deploy: Contains information pertaining to the deployment
    • ContainerPort: The container port number

As you can see, even a simple configuration file can get complex very fast. However, this file contains just enough information that our pipeline can execute its stages and deploy our application. A lot of thought needs to go into these files to make them flexible for future changes. Additional sections may include things like:
  • Security Scans

  • Static Code Analysis

  • Performance Tests

  • ATDD Tests

This could go on and on. The main takeaway here is that for a generic pipeline that uses shared code to function properly, you need a way to instruct it on which paths to take while executing. We will explore configuration files in more detail, as well as use them in later chapters.
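To make that concrete, here is a minimal sketch of how a build step might read the configuration file, assuming the file is saved as pipeline.json in the root of the cloned repo and that the jq utility is installed in the build container (both of which are assumptions, not requirements of the pattern):
#!/usr/bin/env sh
# Hypothetical example: pull values out of the config and hand them to the build script.
CONFIG_FILE="pipeline.json"

language=$(jq -r '.application.language' "$CONFIG_FILE")
build_path=$(jq -r '.build.path' "$CONFIG_FILE")

cd "./$build_path" || exit 1
./build.sh "$language"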

Docker at the Core

At the core of the generic pipeline is Docker. It is the glue that holds everything together, as well as the magic that makes it all possible. Docker provides a mechanism to isolate our pipeline from the underlying platform. It also allows us to create an environment that is specific to the needs of the application being deployed. For instance, if we're building a .NET Core application, we don't need to worry about having the Java runtime installed. It also allows us to easily target a specific runtime version.

If you’re in a larger organization, it’s not uncommon to have an enterprise CI/CD platform that you must use. In these scenarios you are often forced to use the runtimes that are installed on the platform, or face long lead times to get new ones installed. Continuing with our .NET Core example, imagine we have an enterprise platform with various runtimes installed:
  • Java Runtime Environment 8

  • .NET Core 1.0

  • Python 2.6

  • .NET Framework 4.5

  • Sonar Scanner 2.0

If we’re using .NET Core 1.0 we’re in good shape. It’s installed on the platform, and we can build and deploy to our heart’s content. Figure 3-3 shows what this platform may look like.
Figure 3-3

Enterprise CI/CD platform

Things are humming along just fine; however, our development team has begun working on a new version of their application and they are using .NET Core 2.0. Well that’s not going to work; we need to get .NET Core upgraded on the platform. In most cases this is not a quick process. For an enterprise to run a tight ship, they need to vet installations in lower environments first. Only after proper testing has taken place can the upgraded version of the runtime be scheduled for a Production deployment. That also takes time, as you need to secure a change order and downtime window. Figure 3-4 shows the updated platform after the installation.
Figure 3-4

Enterprise CI/CD platform with new runtimes installed

If your team happens to be agile, they are most likely putting you on their impediment list! At this point the enterprise platform has become a bottleneck slowing down the development teams. In a worst-case scenario, you’re stifling progress and innovation because teams cannot move as fast as they need to. Imagine that an early access release of .NET Core comes out and a team would like to use it for their application. This would be even more of a challenge given it’s not a release candidate!

In addition to the slowdowns that this can present, there are also a ton of runtimes installed that most developers don’t need. To put it another way, the Java developers don’t need .NET Core and vice versa. Now most of the time this is not a problem, as multiple runtimes can be installed side by side without issue. But larger platforms, like Jenkins, also come with a lot of plugins that sometimes don’t play nice with each other. As the platform’s popularity grows inside the organization, more and more requests for plugins flow in. At some point there may be a conflict between plugins, and someone will have to lose out on functionality they were counting on.

These scenarios can continue to play out in many different forms. For instance, given our platform and the runtimes we have installed, if a team wanted to use Golang for their project they’d have to wait for that to be installed. Luckily all these problems can be solved with Docker. The only requirement is that it is installed on the enterprise platform. From there we have total control over what our applications need in the pipeline. Figure 3-5 shows the ideal platform in its purest form, with only Docker installed.
Figure 3-5

Enterprise CI/CD platform with only Docker installed

Now at this point you may be thinking to yourself, "this is insane." Why would I have an entire platform and only put Docker on it? Well, you're still going to have a lot of other things installed and configured for your enterprise. However, you can begin to break free of installing individual runtimes and tooling to support all the applications your platform serves. Docker allows teams to control their environments and install only what their applications need.

In the previous figures we had a platform with many runtimes installed to support many different development teams. Let’s see what that looks like with Docker in the mix. Figure 3-6 shows the platform once we begin to utilize Docker.
Figure 3-6

Enterprise CI/CD platform utilizing Docker during builds

In this scenario, each application is built, tested, and deployed inside a container that is isolated from other containers. The container is responsible for what components and runtimes are installed, thus relieving the platform team of being responsible for installing and maintaining multiple runtimes on the platform. With this approach, the previous scenario of a team switching to a new runtime version becomes trivial. In fact, a team wanting to be bleeding edge and use an Early Access release is not a concern for the enterprise platform team anymore. Figure 3-7 shows an updated .NET Core 2.0 container and an Early Access .NET Core container being utilized.
Figure 3-7

Enterprise CI/CD platform with updated containers

In this pattern everything the application needs to be built, tested, scanned, and deployed is inside the “build” container. It’s called a build container, as all the work in the pipeline will be performed inside it. We can illustrate this with a simple Dockerfile example. Listing 3-8 shows a sample Dockerfile for a .NET Core 1.0 application.
FROM microsoft/aspnetcore-build:1.0
RUN apt-get update && apt-get install -y \
          unzip
RUN wget http://repo1.maven.org/.../sonar-runner-dist-2.4.zip && \
          unzip sonar-runner-dist-2.4.zip -d /opt
ENV PATH $PATH:/opt/sonar-runner-dist-2.4/bin
Listing 3-8

Sample Dockerfile for a .NET Core 1.0 Build Container

In this example our build container is based on a .NET Core base image. We update the apt-get package lists and install "unzip". After that we download Sonar Runner, unzip it, and update the "PATH" so we can easily run it. At this point we have a build container that can build, test, and scan .NET Core 1.0 applications. Now if the development team decides they want to move to .NET Core 2.0 (and why wouldn't they?), we simply create another build container using the Dockerfile shown in Listing 3-9.
FROM microsoft/aspnetcore-build:2.0
RUN apt-get update && apt-get install -y \
          unzip
RUN wget http://repo1.maven.org/.../sonar-runner-dist-2.4.zip && \
          unzip sonar-runner-dist-2.4.zip -d /opt
ENV PATH $PATH:/opt/sonar-runner-dist-2.4/bin
Listing 3-9

Sample Dockerfile for a .NET Core 2.0 Build Container

As you can see, the only thing that changed was the base image of the Dockerfile. In this scenario we now have two build containers, all from changing a single line of code. Contrast that with what it would take to install a new runtime in the platform, fully test it in lower environments, and then promote it to Production. With Docker, we can simply make a change to a Dockerfile and test it. If it works, great! If not, you’ve literally wasted about ten minutes of work.
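Testing that change is just as quick. A sketch of the local workflow might look like the following; the image name and tag are made up for the example:
# Build an image from the updated Dockerfile in the current directory
docker build -t my-team/netcore-build:2.0 .

# Sanity check: confirm the runtime inside the new build container
docker run --rm my-team/netcore-build:2.0 dotnet --version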

Earlier in the chapter we saw an example architecture where the pipeline was made up of shared components. Figure 3-8 shows us this concept again.
Figure 3-8

Pipeline stages using shared code

In this scenario we used Shell scripts to encapsulate shared commands, which can be used across all of the application deployments. Now, we can envision this taking place inside Docker containers, as shown in Figure 3-9.
Figure 3-9

Pipeline stages using shared code inside build containers

Now we have common shared pipeline code loaded into build containers that are isolated from each other. The same code is copied into each container, but changes to a container no longer have any side effects on the others. This is drastically different than if the CI/CD platform is responsible for all the runtimes and plugins. As illustrated earlier, a team can quickly and safely jump to a new version of a runtime without fear of affecting other teams on the previous runtime. Loading the shared shell scripts is extremely easy. All we need to do is add a COPY command to our build container Dockerfile, as shown in Listing 3-10.
FROM microsoft/aspnetcore-build:2.0
COPY stages stages
RUN apt-get update && apt-get install -y \
          unzip
RUN wget http://repo1.maven.org/.../sonar-runner-dist-2.4.zip && \
          unzip sonar-runner-dist-2.4.zip -d /opt
ENV PATH $PATH:/opt/sonar-runner-dist-2.4/bin
Listing 3-10

Adding Shell Scripts to Our Build Container

In this example we assume there is a directory called "stages" that holds all the Shell scripts for our pipeline. By copying the scripts into each build container, we have further isolated changes that might otherwise impact other teams. A change to a script will only take effect once the build container is rebuilt. This is certainly not the only way to get your Shell scripts into your build container. You could copy them in as part of an application clone, or, if using a language like Python, you could package them as modules and perform a pip installation. The main takeaway is that the scripts are part of the build container and isolated from other code.
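Once the scripts are in the image, running a stage locally is a one-liner. As a sketch, assuming an image built from Listing 3-10 (YOUR-CUSTOM-IMAGE is a placeholder) and a build script that takes the language as its first argument like the earlier build.sh:
# Mount the cloned application into the container and run the build stage
docker run --rm -v "$(pwd)":/workspace -w /workspace \
      YOUR-CUSTOM-IMAGE /stages/02_build.sh dotnetcore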

Platform Agnostic

Running a CI/CD pipeline inside a Docker container provides you with so many benefits. We’ve already seen how it can provide your development teams isolation from other runtimes, and give them total flexibility on what is installed in the container. Combine this with scripts loaded into the container to execute your pipeline code, and now you’re decoupled from the underlying platform. This is a very big deal.

Many enterprise grade CI/CD platforms already allow you to share code across pipelines; this is not a new concept. However, in most cases the mechanisms in place to do so are clunky and not ideal. Take for instance Jenkins, which is a very large player in this space. Jenkins allows you to configure libraries that contain shared code for use in your pipeline. These can be configured at the local (folder) level or globally. Figure 3-10 shows a sample configuration.
Figure 3-10

Folder level configuration of a shared library

In this scenario, we have a GitHub repo called “my-library” with some shared code in it. We reference this library in our Jenkinsfile for the pipeline to load it. Jenkins has a very specific directory structure to allow these files to be shared. The “my-library” repo would be structured like Listing 3-11.
(root – "my-library")
+- vars
|   +- foo.groovy
|   +- bar.groovy
|   +- baz.groovy
Listing 3-11

Directory Structure of a Shared Library

There may be many more files in this directory, but there must be a folder named "vars," which contains only Groovy files. While this is a fine method for sharing code across pipelines, it leaves a lot to be desired. This pattern also makes proper unit testing of the code more difficult, albeit still possible. It also locks us into using Groovy for our shared code, or at least using it as an entry point and then calling something else under the covers.

By using more generic scripts like Shell, we can already begin to break free of the platform even without Docker in the equation. If you're on Windows you can use PowerShell, or, now that Bash is available on Windows, even Bash, which would decouple you not only from the platform but also from the operating system. Putting Docker into the mix gives you the same capability; the only requirement now is that Docker be installed. You could also explore a language like Python, which runs pretty much anywhere.

If we were to move all the code in "my-library" into Shell scripts, our Jenkins configuration would be much simpler. In fact, all the configuration steps from earlier would be gone. We'd have a Docker container loaded with our scripts, which we would simply run on Jenkins. Listing 3-12 shows a sample Docker image being run in Jenkins. This example is in Groovy and assumes the Jenkins platform has the Docker plugin installed.
docker.withRegistry('YOUR-DOCKER-REGISTRY') {
       docker.image('YOUR-CUSTOM-IMAGE').inside('ARGS') {
             stage('Clone') {
                   sh("/stages/01_clone.sh")
             }
             stage('Build') {
                   sh("/stages/02_build.sh")
             }
             stage("Test") {
                   sh("/stages/03_test.sh")
             }
             stage("Archive") {
                   sh("/stages/04_archive.sh")
             }
       }
}
Listing 3-12

Running a Docker Container in Jenkins

Note

The preceding stages are the actual stages we’ll be building in upcoming chapters. This example would assume that you have cloned or copied your scripts into the root of the container that is mapped to the Jenkins workspace.

Now that we have all our shared code in scripts in a GitHub repo, we can copy it in via our Dockerfile and use the preceding commands to execute the container. You have the option to either "bake" your scripts into the container when it's built, or do a git clone to bring the scripts down when the container runs. The point here is that you have options.
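A sketch of the second option might be an entrypoint script like the one below. It assumes git is installed in the image, and SCRIPTS_REPO_URL is just an illustrative environment variable name:
#!/usr/bin/env sh
# Hypothetical entrypoint: fetch the latest pipeline scripts when the container starts
git clone "$SCRIPTS_REPO_URL" /stages
chmod +x /stages/*.sh
exec "$@"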

Now let’s consider that you have your pipeline built in a Docker container using Shell scripts, and you’re up and running on Jenkins. Everything is going great, and then your boss comes to you and proclaims that Jenkins is out and Circle CI is in! In most organizations around the world this would be a major event. All your pipelines need to be rewritten on Circle CI. It will take months to migrate everything over, not to mention the testing and deployments and, wait a minute! We have everything isolated in Docker containers. This won’t be so bad.

We can take the same Docker image we’ve used in Jenkins and easily run it on another platform. Listing 3-13 shows what the same setup may look like in Circle CI.
version: 2
jobs:
  build:
    docker:
      - image: YOUR-DOCKER-REGISTRY/YOUR-CUSTOM-IMAGE
    steps:
      - setup_remote_docker:
          docker_layer_caching: true
      - run:
          name: Clone
          command: /stages/01_clone.sh
      - run:
          name: Build
          command: /stages/02_build.sh
      - run:
          name: Test
          command: /stages/03_test.sh
      - run:
          name: Archive
          command: /stages/04_archive.sh
Listing 3-13

Running the Docker Container in Circle CI

At this point the beauty of the pattern should be clear. While we are just looking at configuration files, you should readily see the similarities. On either platform, we use our Docker image as the basis for running the pipeline. Stages or Steps are defined and given names, like Clone, Build, Test, etc. Next, a Shell script is executed in each section. Since everything runs inside a container, there are no surprises when we execute the pipeline. It will run the same on each platform!

Note

At this point you may be thinking this is way too easy. While this pattern makes moving the code and executing it on another platform crazy easy, you still need to keep in mind the other aspects of running a CI/CD platform. Each will have its own ways of being configured, dealing with networking, etc. However, not having to worry about how to port a pipeline will provide a lot of breathing room if you need to make a switch.

To further illustrate the point of being agnostic, let’s look at what this pipeline looks like on another platform. Listing 3-14 shows what the same setup may look like in Travis CI.
services:
  - docker
before_script:
  - |
    docker run -it -d \
      -v /var/run/docker.sock:/var/run/docker.sock \
      -e DOCKER_USERNAME=${DOCKER_USERNAME} \
      -e DOCKER_PASSWORD=${DOCKER_PASSWORD} \
      -e GITHUB_URL=${GITHUB_URL} \
      --name MY-PIPELINE \
      YOUR-DOCKER-REGISTRY/YOUR-CUSTOM-IMAGE
script:
  - docker exec MY-PIPELINE /stages/01_clone.sh
  - docker exec MY-PIPELINE /stages/02_build.sh
  - docker exec MY-PIPELINE /stages/03_test.sh
  - docker exec MY-PIPELINE /stages/04_archive.sh
Listing 3-14

Running the Docker Container in Travis CI

In later chapters we'll be using Circle CI and Travis CI to build a very simple pipeline. We'll use the same Docker container and Shell scripts to build, test, and archive an application on both platforms. In fact, the YAML files shown here will be used in those examples!

Overview

We covered a lot of material and patterns in this chapter! Let’s take a quick moment to review what we discussed.

Shell Scripts

Shell scripts provide us a way to centralize our pipeline logic into smaller, easy-to-share chunks of code. We can write scripts for very specific functionality and then stitch them together inside "stage" or "step" scripts. By doing so, we create scripts that are easier to test and to share across multiple pipeline implementations. Also, by using Shell we can be confident they will run on virtually any platform. This pattern also opens us up to using other scripting languages like Python or PowerShell.

Docker

Docker provides the isolation needed to simplify what the platform must provide for development teams. In the past, the platform would need to have all the runtimes and plugins installed globally for anyone to use them. With Docker, the only thing the platform needs installed is Docker itself.

Build Containers

Build containers provide a level of isolation on the platform. Each team can install their own runtimes and dependencies in a container, thus isolating their needs from others. This also provides them with the flexibility to use anything they want, without fear of it affecting the platform. If a team wants to use a bleeding-edge release of a runtime, they have the power to do it. The team (or a DevOps team) owns the build container and is responsible for what goes in it. This allows teams to move at a much more rapid pace than if they were dependent on the platform team.

In the next chapters we’ll move out of theory and discussion and into actual implementation. We’ll create a simple demo application that we can use to deploy. We’ll look at building out our build container and how to implement it in Circle CI and Travis CI. You’ll take everything you have learned so far and apply it in a practical way. So let’s get started!
