Chapter 2. Continuous Delivery of Java Applications with Docker

“Integrating containers into a continuous-delivery pipeline is far from easy. Along with the benefits Docker brings, there are also challenges both technological and process-related.”

Viktor Farcic, author of The DevOps 2.0 Toolkit

This chapter examines the impact of introducing Docker into a Java application build pipeline. In addition to looking at the theoretical issues, practical examples are provided, with the goal of enabling a Java developer already familiar with the basic concepts of Docker to start creating an appropriate pipeline. For Java developers looking for an introductory explanation of Docker, Arun Gupta’s Docker for Java Developers is an excellent primer.

Adding Docker to the Build Pipeline

Introducing Docker into a typical Java application build pipeline primarily impacts four locations within the pipeline. Figure 2-1 shows an extended version of the earlier Java application build pipeline with the locations of change highlighted.

Figure 2-1. A Dockerized Java application continuous delivery pipeline

Location 1, the developer’s machine, will now include packaging and running the Java application within a Docker container. Typically, testing will occur here in two phases: the first ensures that the Java build artifact (JAR, WAR, etc.) is thoroughly tested, and the second asserts that the container image wrapping the artifact has been assembled correctly and functions as expected.

The CI-driven packaging of the application within a container is highlighted in Location 2. Notice now that instead of pushing a Java build artifact (JAR, WAR, etc.) to an artifact repository, a Docker image is being pushed to a centralized container registry, such as Docker Hub. This Docker image now becomes the single source of truth, as opposed to the WAR previously, and this is the artifact promoted through the pipeline. It is worth noting that some organizations may want to store code artifacts in addition to Docker images, and accordingly JFrog’s Artifactory can store both.

Location 3 pinpoints where tooling for defining and running multi-container Docker applications (such as Docker Compose) could be used to drive acceptance and performance testing. Other existing options for the orchestration and execution of build artifacts can also be used, including Maven/Gradle scripts, configuration management tooling (e.g., Ansible, Chef, Puppet), and deployment tooling like Capistrano.

Location 4 highlights that all application environments from QA to production must now change in order to support the execution of Docker containers, with the goal of testing an application within a production-like (containerized) environment as early in the pipeline as possible. Typically, the creation of a container deployment environment would be implemented by the use of a container orchestration framework like Docker Swarm, Kubernetes, or Apache Mesos, but discussing this is outside of the scope of this book.

The easiest way to understand these changes is to explore them with an example Java application and associated build pipeline. This is the focus of the next section.

Introducing the Docker Java Shopping Application

Throughout the remainder of this chapter, we will be working with an example project that is a simplified representation of an e-commerce application.

Obtaining the Example Project Code and Pipeline

For the examples in this chapter, we will use a simple e-commerce-style Java application named Docker Java Shopping, which is available via GitHub. The source repository contains three applications (with Docker build templates), a Docker Compose file for orchestrating the deployment of these applications, and a fully functional Jenkins build pipeline Vagrant virtual machine (VM) configuration. If you would like to follow along with the examples in this chapter, please clone the repository locally (↵ indicates where a code line has been broken to fit the page):

git clone ↵
https://github.com/danielbryantuk/oreilly-docker-java-shopping/

The GitHub repository root project directory structure can be seen in Figure 2-2.

Figure 2-2. The Docker Java Shopping project on GitHub

Docker Java Shopping Application Architecture

The application follows a microservices-style architecture and consists of three applications, or services. Don’t be too concerned with the use of the microservices pattern at the moment, as this will be discussed in Chapter 3; for now, we will deal with each of the three services as applications in their own right. The shopfront service provides the customer-facing UI, which is available at http://localhost:8010/ when the application is running.

Figure 2-3. Docker Java Shopping application architecture

Figure 2-3 shows the architecture of this system, with the shopfront Spring Boot–based application acting as the main customer entry point. The shopfront application is mainly concerned with displaying and aggregating information from two other services that expose data via REST-like endpoints: the productcatalogue Dropwizard/Java EE–based application provides data on the products in the system; and the stockmanager Spring Boot–based application provides stock (SKU and quantity) data on the associated products.

In the default configuration (running via Docker Compose), the productcatalogue runs on port 8020, and the products can be viewed as JSON as shown in the following code:

$ curl localhost:8020/products | jq
[
  {
    "id": "1",
    "name": "Widget",
    "description": "Premium ACME Widgets",
    "price": 1.2
  },
  {
    "id": "2",
    "name": "Sprocket",
    "description": "Grade B sprockets",
    "price": 4.1
  },
  {
    "id": "3",
    "name": "Anvil",
    "description": "Large Anvils",
    "price": 45.5
  },
  {
    "id": "4",
    "name": "Cogs",
    "description": "Grade Y cogs",
    "price": 1.8
  },
  {
    "id": "5",
    "name": "Multitool",
    "description": "Multitools",
    "price": 154.1
  }
]

Also in the default configuration, the stockmanager runs on port 8030, and the product stock data can be viewed as JSON as shown in the following code:

$ curl localhost:8030/stocks | jq
[
  {
    "productId": "1",
    "sku": "12345678",
    "amountAvailable": 5
  },
  {
    "productId": "2",
    "sku": "34567890",
    "amountAvailable": 2
  },
  {
    "productId": "3",
    "sku": "54326745",
    "amountAvailable": 999
  },
  {
    "productId": "4",
    "sku": "93847614",
    "amountAvailable": 0
  },
  {
    "productId": "5",
    "sku": "11856388",
    "amountAvailable": 1
  }
]

Local Development Environment Configuration

If you plan to follow along with the examples in this book, the remainder of this chapter assumes that your local development system has Git, a Java 8 JDK, Maven, and Docker (including Docker Compose) installed.

For developers keen to explore the example Docker Java Shopping application, first check out the project repository and build the applications locally. All three of the applications must be built via Maven (there is a convenient build_all.sh script for Mac and Linux users), and then run via the docker-compose up --build command, which should be executed in the root of the project. The shopfront application can be found at http://localhost:8010/.
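The repository’s docker-compose.yml wires the three services together. For orientation, the general shape of such a Compose file can be sketched as follows; the service names and ports are taken from the text, but everything else here is illustrative, and the docker-compose.yml in the repository remains the source of truth:

```yaml
# Illustrative sketch only — see the repository's docker-compose.yml for the real definition
version: "3"
services:
  shopfront:
    build: ./shopfront        # build from the Dockerfile in this directory
    ports:
      - "8010:8010"           # customer-facing UI
  productcatalogue:
    build: ./productcatalogue
    ports:
      - "8020:8020"           # product data as JSON
  stockmanager:
    build: ./stockmanager
    ports:
      - "8030:8030"           # stock (SKU/quantity) data as JSON
```

Running docker-compose up --build against a file of this shape builds each service image and starts the three containers on the ports described in the previous section.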

Let’s look at the project in more detail, and see how the Java applications have been deployed via Docker.

Building a Docker Image Locally

First we will look at the shopfront application, located in the shopfront directory at the root of the project. This is a simple Spring Boot application that acts as an ecommerce “shop front.” The application’s build and dependencies are managed by Maven, and it can be built with the mvn clean install command. Doing this triggers compilation, unit (Maven Surefire) and integration (Maven Failsafe) testing, and the creation of a Fat JAR deployment artifact in the project’s target directory. For reference, the layout of the project root directory can be seen in Example 2-1.

Example 2-1. Typical output
(master) oreilly-docker-java-shopping $ ls
README.md          docker-compose.yml resttests
build_all.sh       guitests           shopfront
ci-vagrant         productcatalogue   stockmanager

So far, this is nothing different from a typical Java project. However, there is also a Dockerfile file in the shopfront directory. The contents are as follows:

FROM openjdk:8-jre
ADD target/shopfront-0.0.1-SNAPSHOT.jar app.jar
EXPOSE 8010
ENTRYPOINT ["java", ↵
"-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]

Assuming basic Docker knowledge, we can see that this Dockerfile builds from the default OpenJDK Java image with the 8-jre tag (ensuring that an OpenJDK 8 JRE is installed within the base image), adds the shopfront application Fat JAR to the image (renaming it to app.jar), and sets the entrypoint to execute the application via the java -jar <jar_file> command. Due to the scope of this book, the Dockerfile syntax won’t be discussed in further detail in this chapter, as Arun’s Docker for Java Developers contains the majority of the commands that a Java developer will need.

Now that the Java application JAR has been built, we can use the following Docker command in the shopfront directory to build the associated Docker image:

docker build -t danielbryantuk/djshopfront .

This builds a Docker image using the Dockerfile in the current directory (the specified build context of .) and tags (-t) the image as danielbryantuk/djshopfront. We now have a Docker image that can be run as a container via the following command:

docker run -p 8010:8010 danielbryantuk/djshopfront 

After the application has initialized (visible via the logs shown after executing the docker run command), visit http://localhost:8010/health in a browser to confirm the successful startup (it’s worth noting that attempting to visit the shopfront UI will result in an error response, as the dependent productcatalogue and stockmanager applications have not been started). The docker run command can be stopped by issuing a SIGINT via the key combination Ctrl-C.

You should now have a basic understanding of how to build, package, and run Java applications locally via Docker. Let’s look at a few important questions that using Docker with Java imposes, before we move to creating a Docker-based continuous delivery pipeline.

Packaging Java Within Docker: Which OS, Which Java?

Packaging a Java application within a Docker image may be the first time many Java developers have been exposed to Linux (especially if you develop Java applications on MS Windows), and as the resultant Docker container will run as-is within a production environment, this warrants a brief discussion.

A Java developer who works in an organization that implements a similar build pipeline to that demonstrated in Figure 2-1 may not have been exposed to the runtime operational aspects of deployment; this developer may simply develop, test, and build a JAR or WAR file on their local machine, and push the resulting code to a version control system. However, when an application is running in production, it runs on a deeper operational stack: a JAR or WAR file is typically run within an application container (e.g., Tomcat, Websphere, etc.) that runs on a Java JDK or JRE (Oracle, OpenJDK, etc.), which in turn runs on an operating system (Ubuntu, CentOS, etc.) that runs on the underlying infrastructure (e.g., bare metal, VMs, cloud, etc.). When a developer packages an application within a Docker container, they are implicitly taking more responsibility further down this operational stack.

Accordingly, a developer looking to incorporate the packaging of Java applications within a Docker container will have to discuss several choices with their operations/sysadmin team (or take individual responsibility for the deployment). The first decision is which operating system (OS) to use as the base image. For reasons of size, performance, and security, Docker developers generally prefer base images built on smaller minimal Linux distributions (e.g., Debian Jessie or Alpine) over traditional full-featured distributions (Ubuntu, RHEL). However, many organizations already have licenses or policies that dictate that a certain distribution must be used. The same can often be said of the version of Java being used to run applications. Many organizations prefer to run the Oracle JDK or JRE, but the default Java Docker images available from the Docker Hub public container registry only offer OpenJDK for licensing reasons.

Regardless of which OS and Java are chosen, it is essential that the usage is consistent throughout the pipeline; otherwise, unexpected issues may show up in production (e.g., if you are building and testing on Ubuntu running the OpenJDK but running in production on CentOS with Oracle’s JRE, then your application hasn’t been tested within a realistic production environment).

Packaging Java Within Docker: JDK or JRE?

Although most Java development occurs alongside a full Java Development Kit (JDK), it is often beneficial to run an application using only a Java Runtime Environment (JRE). A JRE installation takes up less disk space, and due to the exclusion of many build and debugging tools has a smaller security attack surface. However, when developing and testing, it is often convenient to run your application on a JDK, as this includes additional tooling for easy debugging. It should be stated again, though, that whatever you build and test on must be identical to what is used in the production environment.
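In Dockerfile terms this choice comes down to the base-image line. The following sketch contrasts the two options using the OpenJDK image tags discussed earlier; the tags shown are illustrative of the pattern rather than a recommendation:

```dockerfile
# A JDK-based image includes the compiler and debugging tooling,
# which is convenient during development and testing:
FROM openjdk:8-jdk

# ...whereas a JRE-based image (as used by the shopfront Dockerfile)
# is smaller on disk and presents a reduced attack surface:
# FROM openjdk:8-jre
```

Whichever base image is chosen, the same tag should be used at every stage of the pipeline, for the consistency reasons stated above.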

Pushing Images to a Repository and Testing

Traditional Java application deployment involved creating a JAR or WAR and pushing this to an artifact repository, such as Nexus, using a build pipeline tool like Jenkins. Jenkins is an open source automation server written in Java that enables automation of the nonhuman (build pipeline) parts of the software development process. Alternatives to Jenkins include Bamboo, TeamCity, and SaaS-based offerings like Travis CI and CircleCI.

Builds can be triggered in Jenkins by a variety of methods, such as code commit or a cron schedule, and resulting artifacts can be exercised via scripts and external tooling like SonarQube. As shown in Location 2 in Figure 2-1, with Docker-based Java deployment a Docker image artifact is now pushed to a container registry, such as Docker Hub or JFrog’s Artifactory, rather than a traditional Java artifact repository.

In order to explore this next stage of the Java/Docker CD process, we need a Jenkins installation to illustrate the changes. Using HashiCorp’s Vagrant tool, the oreilly-docker-java-shopping project repository enables the construction of a Docker-enabled Jenkins server running on Ubuntu 16.04. To follow the instructions here, you must have locally installed the latest version of Vagrant and Oracle’s VirtualBox virtual machine hypervisor.

Create a Jenkins Server via Vagrant

From the oreilly-docker-java-shopping project root, navigate into the ci-vagrant folder that contains a single Vagrant file. The Jenkins VM can be created by issuing the vagrant up command within this folder. The creation of the VM may take some time, especially on the first run, which downloads the Ubuntu 16.04 base image. While the VM is being created, the Vagrantfile can be explored to learn how the machine has been provisioned.

The simple shell provisioning option has been used to install the necessary dependencies on to a basic Ubuntu 16.04 base image. If you are familiar with Ubuntu operations, none of the commands should come as any surprise; if you aren’t, don’t be overly concerned as this book focuses on the developer perspective on CD. It is worth noting that for a production-grade deployment of a Jenkins build server, it would be typical to provision a machine using a configuration management tool like Puppet, Chef, or Ansible.

Once the Jenkins Vagrant box is provisioned, you can navigate to http://localhost:8080, supply the Jenkins key that is displayed in the terminal window at the end of the Vagrant provisioning process, create a Jenkins account, and install the recommended Jenkins plugins. One of the biggest technical strengths of Jenkins is that it is extremely extensible through the use of plugins, which can be installed to provide additional build customization and functionality.

Now navigate to the Manage Jenkins menu on the left of the Jenkins home page, and select Manage Plugins from the resulting main menu. Select the Available tab and search for docker. Approximately a quarter of the way down the page is a plugin named CloudBees Docker Build and Publish Plugin, which provides the ability to build projects with a Dockerfile and publish the resultant tagged image (repo) to a Docker registry. Install this plugin without a restart and navigate back to the Jenkins home page.

Building a Docker Image for a Java Application

Now create a new job by selecting New Item from the menu on the left. Enter the item name of djshopfront and choose “Freestyle project” before pressing the OK button. On the item configuration page that is displayed, scroll down to the Source Code Management section and select the Git radio button. Enter the oreilly-docker-java-shopping GitHub details in this section of the configuration, as shown in Figure 2-4.

Figure 2-4. Configuring git in the djshopfront Jenkins build item

Now add an “Invoke top-level Maven targets” in the Build section of the configuration, shown in Figure 2-5. Enter the Goals as specified, and don’t forget to specify the correct POM, as this Git project repository contains applications within subdirectories from the root.

Figure 2-5. Configuring Maven in the djshopfront Jenkins build item

Next add a “Docker Build and Publish” build step, and enter the information as specified in Figure 2-6, but be sure to replace the tag in Repository Name with your Docker Hub account details (e.g., janesmith/djshopfront). Docker Hub is free to use for the public hosting of images, but other commercial container registries are available. If you wish to follow along with the example, then you will need to sign up for a free account and add your credentials to the “Registry credentials” in the Docker Build and Publish section of each build job.

Typically, when building and pushing Docker images locally, the docker build -t <image_name:tag> . and docker push <image_name:tag> commands are used; the CloudBees Jenkins Docker Build and Publish plugin instead allows this to be managed via the Jenkins job configuration under Build.

Figure 2-6. Configuring the CloudBees Docker Build and Publish data in the Jenkins build item

The djshopfront job item can now be saved and run. By clicking the Console Output menu item for the job, the build initialization shown in Figure 2-7 should be displayed:

Figure 2-7. Jenkins build log from the start of the djshopfront build item

Upon completion of the job, the resultant Docker image should have been pushed to the Docker Hub (if there are any errors displayed here, please check that you have entered your Docker Hub registry credentials correctly in the job configuration):

Figure 2-8. Jenkins build logs from a successful build of the djshopfront item

Once this job is run successfully, the resultant Docker image that contains the Java application JAR will be available for download by any client that has access to the registry (or, as in this case, the registry is publicly accessible) via docker pull danielbryantuk/djshopfront. This Docker image can now be run in any of the additional build pipeline test steps instead of the previous process of running the Fat JAR or deploying the WAR to an application container.

Now repeat the above process for the productcatalogue and stockmanager applications under the project root. Once complete, your Jenkins home page should look similar to Figure 2-9.

Figure 2-9. The Jenkins UI showing all three service build items

If all three Java application images have been successfully pushed to Docker Hub, you should be able to log in to the Docker Hub UI with your credentials and view the image details, as shown in Figure 2-10.

Figure 2-10. DockerHub UI showing the successful push of the three service Docker images

Potential Improvements to the Build Pipeline

The previous section of this book provided an introduction to building Java applications in Docker using Jenkins. It is worth noting that many of the techniques used with traditional Java Jenkins build pipelines can still be applied here:

  • The Java artifact build process (and any unit, integration, and code quality tests) can be conducted separately from a job that packages the resulting JAR or WAR within a Docker image. This can save time and compute resources if a Docker image is not required upon every build.
  • Java artifacts can still be stored in a Java artifact repository, such as Nexus, and pulled from this location into a Docker image within an additional build job.
  • Docker labels can be used to indicate artifact versions and metadata in much the same fashion as the Maven groupId, artifactId, and version (GAV) format.
  • Jenkins jobs can be parameterized, and users or upstream jobs can pass parameters into jobs to indicate the (label) version or other configuration options.
  • The Jenkins Promoted Build plugin can be used to “gate” build stages, and indicate when an artifact is ready to be promoted from one stage of the pipeline to the next.
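As a concrete illustration of the Docker-labels point above, GAV-style metadata can be attached to an image directly in its Dockerfile. The label keys and values below are hypothetical, chosen only to show the pattern; real projects should pick keys matching their own naming conventions:

```dockerfile
# Hypothetical GAV-style labels mirroring Maven coordinates
LABEL maven.groupId="uk.co.example.djshopping" \
      maven.artifactId="shopfront" \
      maven.version="0.0.1-SNAPSHOT"
```

These labels travel with the image through the pipeline and can be inspected on any host via docker inspect, giving downstream jobs the same provenance information a Maven repository would provide for a JAR.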

Running Docker as a Deployment Fabric

The changes at Location 4 in Figure 2-1 indicate that any environment that previously ran Java applications as Fat JARs or deployed WARs must now be modified to instead run Docker containers.

Component Testing with Docker

Creation of component test environments and the triggering of associated tests (Location 2 in Figure 2-1) can be achieved via the Jenkins Pipeline job type and the CloudBees Docker Pipeline plugin, which can be installed in a similar manner to the previously mentioned CloudBees plugin. After this has been installed, create a new item named djshopfront-component-test of type Pipeline and click OK.

Scroll the resulting item configuration page to the Pipeline section, and enter the following:

node {
    stage ('Successful startup check') {
        docker.image('danielbryantuk/djshopfront')
        .withRun('-p 8010:8010') {
            timeout(time: 30, unit: 'SECONDS') {
                waitUntil {
                    def r = sh script: ↵
                    'curl http://localhost:8010/health | grep ↵
                    "UP"', returnStatus: true
                    return (r == 0);
                }
            }
        }
    }
}

This pipeline command uses the CloudBees docker.image library to run the danielbryantuk/djshopfront image, sets a 30-second timeout (an upper limit on waiting for the Spring Boot application to initialize), and then waits until the application health endpoint returns UP from a curl.

The curl/grep test used above can be replaced with any command or execution of any application that returns an exit code to indicate success or failure (e.g., a test Docker container or JUnit-powered test suite obtained from a Git repo).
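To make the exit-code contract concrete, the following standalone shell sketch simulates the health check so the logic can be run without the service; the simulated payload and the function name check_health are illustrative only (the real pipeline runs curl against http://localhost:8010/health as shown above):

```shell
# A minimal sketch of the exit-code contract described above.
check_health() {
  # Simulate the /health response; the real check is:
  #   curl http://localhost:8010/health | grep "UP"
  echo '{"status":"UP"}' | grep -q '"UP"'
}

# Any command used in the waitUntil step works the same way:
# exit code 0 signals success, anything else signals failure.
if check_health; then
  echo "health check passed (exit code 0)"
else
  echo "health check failed (non-zero exit code)"
fi
```

Because grep -q finds the "UP" string in the simulated payload, this prints the “passed” branch; substituting any test suite or container run that honors the same exit-code convention plugs directly into the waitUntil block above.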

Figure 2-11 shows the example pipeline code within the Jenkins configuration page. Save this item and run the associated job via the Build Now option in the left menu.

Figure 2-11.  Jenkins pipeline build configuration for djshopfront-component-test

Figure 2-12 shows the output from a typical execution of this test.

Figure 2-12. Jenkins build logs from a successful run of djshopfront-component-test 

Using Docker Compose for E2E Testing

The acceptance and performance testing stage of the pipeline, Location 3 in Figure 2-1, can be further modified with the introduction of Docker by utilizing Docker Compose, a tool for defining and running multicontainer Docker applications. Using the same Jenkins Pipeline approach as above, we can write jobs that use Docker Compose, as follows:

node {    
    stage ('build') {
        git url: 'https://github.com/ ↵
        danielbryantuk/oreilly-docker-java-shopping.git'
        // conduct other build tasks
    }
    
    stage ('end-to-end tests') {
        timeout(time: 60, unit: 'SECONDS') {
            try {
                sh 'docker-compose up -d'

                waitUntil { // application is up
                    def r = sh script: ↵
                    'curl http://localhost:8010/health | ↵
                    grep "UP"', returnStatus: true
                    return (r == 0);
                }
                
                // conduct main test here
                sh 'curl http://localhost:8010 | ↵
                grep "Docker Java"'
                
            } finally {
                sh 'docker-compose stop'
            }
        }
    }    
    
    stage ('deploy') {
        // deploy the containers/application here
    }
}

The resulting job output can be seen in Figure 2-13.

Figure 2-13. Jenkins build logs for a successful run of end-to-end tests

Navigating back to the dj-end-to-end-tests build page shows the multiple pipeline stages, and whether each stage has succeeded or not. Figure 2-14 shows an example of this with several builds succeeding.

Adrian Mouat discusses using Docker Compose in this fashion in more detail in his book Using Docker.

Figure 2-14. An example dj-end-to-end-tests build item page showing pipeline stages

Running Docker in Production

Discussing the running of Docker in production is out of scope in a book of this size, but we strongly recommend researching technologies such as Docker Swarm, Kubernetes, and Mesos, as these platforms make for excellent Docker-centric deployment fabrics that can be deployed on top of bare metal, private VMs, or the public cloud.

It’s a Brave New (Containerized) World

There is no denying the flexibility afforded by deploying Java applications in Docker containers. However, deploying applications in containers adds several new constraints, and a series of best practices that can affect CD is also emerging. Skills must be learned and new techniques mastered across architecture, testing, and operations. We will look at these topics over the next three chapters.
