Chapter 15. Infrastructure provisioning and deployment

This chapter covers

  • Driving infrastructure provisioning from Gradle
  • Automated deployment to different target environments
  • Verifying the outcome of a deployment with smoke and acceptance tests
  • Deployment as part of the build pipeline

Software deployments need to be repeatable and reliable. Any server outage inflicted by a faulty deployment—with the biggest hit on production systems—results in money lost for your organization. Automation is the next logical and necessary step toward formulating and streamlining the deployment process. In this chapter, we’ll talk about how to automate the deployment process with Gradle by the example of your To Do application.

Before any deployment can be conducted, the target environment needs to be preconfigured with the required software infrastructure. Historically, this has been the task of a system administrator, who would manually provision the physical server machine and install the software components before use. This setup can be defined as real code with tools like Puppet and Chef, checked into version control, and tested like an ordinary piece of software. Using this infrastructure-as-code approach helps prevent human error and minimizes the cycle time for spinning up a new environment based on the same software stack. While Gradle doesn’t provide native tooling for this task, you can bootstrap other tools to do the job for you.

Deploying software to a target environment means more than just copying a file to a server. In your build, you’ll need to be able to configure and target different environments. Automating the full deployment lifecycle often requires cleaning out previously deployed artifacts, as well as restarting remote runtime environments like web containers. This chapter covers one viable approach to achieving this.

Once you deploy a new version of your software, you need to verify the outcome. Automated smoke and acceptance tests help verify that the software functions correctly. You’ll set up such a suite of tests and execute them with Gradle.

All of these processes—deployment to different environments and the verification of a successful deployment—need to be part of your build pipeline. After setting up the supporting tasks in Gradle, you can invoke them from corresponding jobs in Jenkins. Deploying software for authorized stakeholders within an organization should be as easy as pushing a button. You’ll extend your build pipeline with deployment jobs for different environments. Before we can dive into the details of deploying software, let’s review the tools that are helpful for provisioning an infrastructure.

15.1. Infrastructure provisioning

Before any application can be deployed, the hosting infrastructure needs to be provisioned. When I talk about infrastructure provisioning in the traditional sense, I mean setting up the hardware as well as installing and configuring the required operating system and software components.

Nowadays, we see a paradigm shift toward cloud provisioning of infrastructure. Unlike the traditional approach, a cloud provider often allocates preconfigured hardware in the form of virtual servers. Server virtualization is the partitioning of a physical server into smaller virtual servers to help maximize the server resources. Depending on the service offer, the operating system and software components are managed by the cloud provider.

In this section, we’ll talk about automating the creation of virtual servers and infrastructure software components with the help of third-party open source tools. These tools will help you set up and configure streamlined target environments for your To Do application. Later, you’ll learn how Gradle can integrate with these tools.

15.1.1. Infrastructure as code

Developers usually work in a self-contained environment—their development machine. Software infrastructure that’s needed to run an application has to be set up by hand. If you think back to your To Do application, this includes the Java runtime, a web container, and a database. What might sound unproblematic for a single developer can transform into a huge issue the moment the team grows in size. Now, each developer needs to make sure that they install the same version of the same software packages with the same configuration (optimally on the same operating system).

A similar process has to be followed for setting up the hardware and software infrastructure for other environments (for example, UAT and production) that are part of the deployment pipeline. In larger organizations, the responsibility for performing this task traditionally falls on the shoulders of the operations team. Without proper communication and documentation, getting these environments ready ends up becoming a lengthy and nerve-wracking procedure. Even worse, if any of the configuration settings need to be changed, they have to be propagated across all environments manually.

While shell scripting is a good first step to mitigate this pain, it doesn’t fully automate the infrastructure provisioning end-to-end across environments. The paradigm of infrastructure as code aims to bridge the gap between software development and system administration. With sufficient tooling, it’s possible to describe a machine’s configuration as executable code, which is then checked into version control and shared among different stakeholders. Any time you need to create a new machine, you can build a new one based on the instructions of your infrastructure code. Ultimately, this allows you to treat infrastructure code like any other software development project that can be versioned, tested, and checked for potential syntactical issues.

In the past couple of years, several commercial and open source tools have emerged to automate infrastructure provisioning. We’ll focus on two of the most popular open source infrastructure automation tools: Vagrant and Puppet. The next section will give you an architectural overview of how both tools can be used together to build a virtual machine from scratch. The end result will be a runtime environment equipped to serve your To Do application.

15.1.2. Creating a virtual machine with Vagrant and Puppet

Vagrant (http://www.vagrantup.com/) is an infrastructure tool for configuring and creating virtual environments. A machine can be managed with the help of the Vagrant executable. For example, you can start and stop a machine with a simple, one-line shell command. Even better, you can SSH directly into it and control it like any other remote *nix server.

The software configuration of a machine is described through shell scripts or provisioning tools such as Chef and Puppet. The provisioning provider you use often boils down to personal preference and knowledge of the tool. We’ll concentrate on Puppet.

Puppet (https://puppetlabs.com/puppet/) provides a Ruby-based DSL for declaring the software components and their required state on a target machine. If you think back to the runtime environment required for your To Do application, you can identify the following software packages and their configuration:

  • A Java runtime (JRE) installation. You’ll use version 6.
  • A Servlet container to host the web application. You’ll use Apache Tomcat with version 7.
  • An H2 database to manage your application data. To function properly, the database schema needs to be set up.

It’s beyond the scope of this book to fully describe the configuration needed to set up such a scenario. However, you can find a working example in the source code of the book. Let’s examine the basic structure of your Vagrant project to get a high-level understanding.
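
A minimal layout could look like the following sketch; the manifests directory name reflects Vagrant’s default location for Puppet manifests and may differ from the book’s example project:

.
├── Vagrantfile
└── manifests
    └── tomcat.pp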

Figure 15.1 illustrates how the individual components of a Vagrant project play together. At a minimum, every Vagrant project needs to contain a Vagrantfile. Based on this file, virtual machines are configured and created. You’re going to go with Puppet as the configuration provider. The configuration you want to apply to the virtual machine is set up in a Puppet manifest file, which is referenced in the Vagrantfile. In this case, the name of the manifest file is tomcat.pp. To be able to version and share the Vagrant project with other developers, you need to check it into version control like regular source code.

Figure 15.1. Creating test environments with Vagrant and Puppet

Before any virtual machine can be initiated from the infrastructure definition, you’ll need to install the Vagrant executable and VirtualBox (https://www.virtualbox.org/). Please refer to the Vagrant documentation for more information. After a successful installation, you can invoke the Vagrant executable from the command line.

Let’s explore some of the commonly used commands. To bring up your Vagrant machine, navigate to the Vagrant project in your shell and execute the command vagrant up. Vagrant’s output is fairly verbose, so we won’t show it here. After a few moments, you should see the notice that the virtual machine was brought up successfully.

In the Vagrantfile, you configured the virtual machine to be accessible by the IP address 192.168.1.33. As part of your provisioning code, you defined Tomcat to run on port 8080. To verify a successful installation of the web container, open the browser of your choice and enter the URL http://192.168.1.33:8080/. You should see the Tomcat 7 dashboard. To shut down the virtual machine and discard its resources, use the command vagrant destroy. In the next section, you’ll learn how to bootstrap Vagrant commands from Gradle.

15.1.3. Executing Vagrant from Gradle

At this point, you may be thinking, “Why would I want to execute Vagrant commands from Gradle?” The short answer is automation. Any workflow that incorporates interacting with a virtual machine provided by Vagrant needs to be able to call the corresponding command from Gradle. To show you a simple workflow, let’s assume you want to execute functional tests on a virtual machine that mimics the infrastructure setup of a production server. The following steps are involved:

1.  Start the virtual machine via the command vagrant up.

2.  Deploy the web application to the Tomcat server.

3.  Execute a suite of functional tests.

4.  Shut down the virtual machine via the command vagrant destroy.

This use case is fairly advanced and requires some complex setup. For now, you’ll start simple and enable your build to wrap Vagrant command calls with Gradle tasks. You’re going to write a custom task. The task defines the Vagrant commands you want to execute as an input parameter. Additionally, you need to point the task to the Vagrant project you’d like to target. The following listing demonstrates a reusable task that allows executing Vagrant commands.

Listing 15.1. Custom task for executing Vagrant commands
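
A condensed sketch of what such a custom task could look like, placed under buildSrc; the class name Vagrant and the property names commands and dir are illustrative:

import org.gradle.api.DefaultTask
import org.gradle.api.GradleException
import org.gradle.api.tasks.Input
import org.gradle.api.tasks.TaskAction

class Vagrant extends DefaultTask {
    @Input List<String> commands = []    // e.g. ['vagrant', 'up']
    @Input File dir                      // directory of the Vagrant project

    @TaskAction
    void runCommand() {
        logger.quiet "Executing Vagrant command '${commands.join(' ')}'."
        Process process = new ProcessBuilder(commands)
                .directory(dir)
                .redirectErrorStream(true)
                .start()
        // Render Vagrant's output and block until the command returns an exit value.
        process.inputStream.eachLine { logger.quiet it }
        int exitValue = process.waitFor()

        if (exitValue != 0) {
            throw new GradleException("Vagrant command failed with exit value $exitValue.")
        }
    }
}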

Depending on the complexity of your configuration, some Vagrant commands (especially vagrant up) may need a few minutes to finish. If you have a chain of tasks that build on each other, you need to make sure that task execution is delayed until Vagrant completes the actual work. Your task implementation takes care of this requirement by letting the current thread wait until the Vagrant command responds with an exit value. Next, you’ll put your Vagrant task to the test. The following listing demonstrates the use of the custom task to expose important Vagrant commands to a Gradle build.

Listing 15.2. Enhanced tasks for important Vagrant commands
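
A sketch of how these enhanced tasks might be declared in the build script; the location of the Vagrant project directory is an assumption:

task vagrantUp(type: Vagrant) {
    description = 'Starts the Vagrant virtual machine.'
    commands = ['vagrant', 'up']
    dir = file("$rootDir/vagrant")
}

task vagrantStatus(type: Vagrant) {
    description = 'Prints the status of the Vagrant virtual machine.'
    commands = ['vagrant', 'status']
    dir = file("$rootDir/vagrant")
}

task vagrantDestroy(type: Vagrant) {
    description = 'Destroys the Vagrant virtual machine.'
    commands = ['vagrant', 'destroy', '--force']
    dir = file("$rootDir/vagrant")
}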

Congratulations, you just implemented a way to integrate Vagrant into your build! Running Vagrant on a local machine is great for simulating production-like environments. When it comes to interacting with existing environments other than your local machine, your build needs to have a way of configuring the connection information. In the next section, we’ll explore a flexible way of storing and reading environment-specific configuration.

15.2. Targeting a deployment environment

The main maxim of continuous delivery is to get the software from the developer’s machine into the hands of the end users as quickly and frequently as possible. However, that doesn’t mean that you assemble your deliverable and deploy it in the production environment right away. In between these steps, a build pipeline usually verifies functional and nonfunctional requirements in other environments, as shown in figure 15.2.

Figure 15.2. Software propagation through different environments

At the beginning of this chapter, you created a virtual machine on your developer machine. Though the virtual machine has a production-like setup, you use this environment solely for testing purposes. The test environment brings together code changes from multiple developers of the team. Therefore, it can be seen as the first integration point of running code. On the deployed application in the test environment, you can run automated acceptance tests to verify functional and nonfunctional requirements. The user acceptance test (UAT) environment typically exists for the purpose of exploratory, manual testing. Once the QA team considers the current state of the software code to be satisfactory, it’s ready to be shipped to production. The production environment directly serves the end users and makes new features available to the world.

If you want to use the same code for deploying to all of these environments, you’ll need to be able to dynamically target each one of them at build time. Naturally, the test, UAT, and production environments run on different servers with potentially different ports and credentials. You could store the configuration as extra properties in your build script, but that would quickly convolute the logic of the file. Alternatively, you could store this information in a gradle.properties file. In both cases, you’d end up with a fairly unstructured list of properties. At build time, you’d have to pick a set of properties based on a naming convention. Doesn’t sound very flexible, does it? There’s a better way of storing and reading this configuration with the help of a standard Groovy feature.

15.2.1. Defining configuration in a Groovy script

Configuration, especially if you have a lot of it, should be as readable and structured as possible. One of Groovy’s language features allows for defining properties with the help of closures within a Groovy script. The following listing shows an example of composing an environment-specific configuration in the form of a mini DSL.

Listing 15.3. Groovy-based, environment-specific configuration
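
A sketch of what buildConfig.groovy could contain. The property names (server.hostname, sshPort, username) and the values for the remote environments are illustrative; the local values match the Vagrant box used in this chapter:

environments {
    local {
        server {
            hostname = 'localhost'
            sshPort = 2222
            username = 'vagrant'
        }
    }

    test {
        server {
            hostname = 'test.todo.example.com'
            sshPort = 22
            username = 'deploy'
        }
    }

    uat {
        server {
            hostname = 'uat.todo.example.com'
            sshPort = 22
            username = 'deploy'
        }
    }

    prod {
        server {
            hostname = 'prod.todo.example.com'
            sshPort = 22
            username = 'deploy'
        }
    }
}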

Each of the environments that you want to define properties for is enclosed in the environments configuration block. For each environment, you assign a closure with a descriptive name. For example, you can define the server hostname, SSH port, and username to log into the server. Save this configuration data to a Groovy script at gradle/config/buildConfig.groovy, as shown in your project directory tree:
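
.
├── build.gradle
├── settings.gradle
└── gradle
    └── config
        └── buildConfig.groovy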

You now have a Groovy-based configuration file in place, but how do you read its content from the Gradle build? Groovy provides a handy API class named groovy.util.ConfigSlurper that’s designed to parse a treelike data structure. Let’s take a closer look at its functionality.

15.2.2. Reading the configuration with Groovy’s ConfigSlurper

ConfigSlurper is a utility class for reading configuration in the form of Groovy scripts. Configuration can either be defined as properties on the root level of the script or as environment-specific properties wrapped by the environments closure. Once this configuration is parsed, the property graph can be navigated by dot notation. We’ll see how this looks in practice.
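
As a quick illustration, parsing the script from listing 15.3 for the local environment and navigating the result with dot notation might look like the following sketch (property names as assumed above):

def configFile = new File('gradle/config/buildConfig.groovy')
def config = new ConfigSlurper('local').parse(configFile.toURI().toURL())
assert config.server.hostname == 'localhost'
assert config.server.sshPort == 2222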

Reading this configuration file from Gradle requires some thinking. You need to make sure that the configuration is read before any task action is executed. Remember when we discussed Gradle’s build lifecycle phases in chapter 4? This is best done during the configuration phase, as shown in figure 15.3.

Figure 15.3. Reading Groovy script during Gradle’s configuration phase

The task that reads the Groovy script doesn’t need to contain a task action. Instead, you’ll create an instance of the class ConfigSlurper in the configuration block of the task and provide the specific environment you want to read in its constructor. The method parse points to the location of the configuration file. The following listing demonstrates how to parse the Groovy script based on the provided property env.

Listing 15.4. Reading configuration during configuration phase
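
A sketch of such a task in the root project’s build script; defaulting to the local environment when no env property is provided is an assumption:

task loadConfiguration {
    def env = project.hasProperty('env') ? project.getProperty('env') : 'local'
    logger.quiet "Loading configuration for environment '$env'."
    // Parsing happens in the configuration block, not in a task action,
    // so the settings are available during the configuration phase.
    def configFile = file('gradle/config/buildConfig.groovy')
    def parsedConfig = new ConfigSlurper(env).parse(configFile.toURI().toURL())
    allprojects {
        ext.config = parsedConfig
    }
}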

The parsed configuration is made available to all projects of your build through the extra property config. Next, you’ll actually use the property in another task that requires the parsed configuration.

15.2.3. Using the configuration throughout the build

A typical usage pattern for the configuration shown in listing 15.3 is the deployment of the To Do web application to a specific environment. Key to targeting a specific environment is to provide a value for the property env on the command line. For example, if you want to target the UAT environment, you’d provide the project property -Penv=uat. The value of this project property directly corresponds to the closure named uat in the Groovy configuration script. Using this simple mechanism, you enable your build to run the same task logic—for example, deployment of code via environment-specific configuration, as shown in figure 15.4.

Figure 15.4. Targeting specific environments by providing a project property

You’re going to emulate a deployment task to see if your mechanism works. In the web project, you’ll create a new task named deployWar, as shown in listing 15.5. For now, you won’t bother with actually implementing deployment logic. To verify that the settings appropriate to the targeted environment are parsed correctly, the task’s doLast action will use Gradle’s logger to print the read hostname and port.

Listing 15.5. Using the configuration extra property
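
In the build script of the web project, the emulated deployment task could look like this sketch:

task deployWar {
    description = 'Deploys the WAR file to the targeted environment.'
    doLast {
        logger.quiet "Deploying WAR file to '${config.server.hostname}' " +
                     "via SSH on port ${config.server.sshPort}."
    }
}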

When you run the task deployWar for the environment local, you can see that the proper settings are parsed and rendered on the command line:

$ gradle deployWar -Penv=local
Loading configuration for environment 'local'.
:todo-web:deployWar
Deploying WAR file to 'localhost' via SSH on port 2222.

Just printing out the configuration is pretty boring. In the next section, you’ll actually use these settings to deploy the To Do application WAR file to a server.

15.3. Automated deployments

The end game of every build pipeline is to deploy the software to a production environment once it passes all automated and manual testing phases. The deployment process should be repeatable and reliable. Under all circumstances, you want to avoid human error when interacting with a production environment to install a new version. Failure to deploy the software properly will lead to unexpected side effects or downtime and actual money lost for your organization.

I think we can agree that the task of deploying software to production should be a nonevent. Deployment automation is an important and necessary step toward this goal. The code used to automate the deployment process shouldn’t be developed and exercised against the production environment right away to reduce the risk of breakages. Instead, start testing it with a production-like environment on your local machine, or a test environment. You already set up such an environment with Vagrant. It uses infrastructure definitions that are fairly close to your production environment. Mimicking a production-like environment using a virtual machine for developing deployment code is cheap, easy to manage, and doesn’t disturb any other environment participating in your build pipeline. Once you’re happy with a working solution, the code should be used for deploying to the least-critical environment in your build pipeline. After gaining more confidence that the code is working as expected, you can deploy it to more mission-critical environments like UAT and production.

Writing deployment code is not a cookie-cutter job. It’s dependent on the type of software you write and the target environment you’re planning to deploy to. For example, a deployment of a web application to a Linux machine has different requirements than client-installed software running on Windows. At the time of writing, Gradle doesn’t offer a unified approach for deploying software. The approach we’ll discuss in this chapter is geared toward deploying your web application to a Tomcat container.

15.3.1. Retrieving the artifact from the binary repository

In the last chapter, you learned how to upload an artifact to a binary repository. You’re going to retrieve it for deployment purposes. Now that you have the Groovy configuration file in place, you can also add the Artifactory repository URL. In this example, you only use a single repository that isn’t specific to an environment. ConfigSlurper also reads any properties declared outside of the environments closure independent of the provided env property. The following listing demonstrates how to declare common configuration—in this case, the binary repository.

Listing 15.6. Adding binary repository settings to buildConfig.groovy
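
A sketch of the additional, environment-independent settings; the property names and the repository URL are illustrative:

// Declared above the environments block in buildConfig.groovy
binaryRepository {
    baseUrl = 'http://localhost:8081/artifactory'
    repoName = 'libs-release-local'
}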

In listing 15.7, you use the settings from the file buildConfig.groovy to download the WAR file with the current version of your project from Artifactory. In this example, Gradle’s dependency management does the heavy lifting of downloading the file from the repository. Executing the task fetchToDoWar will put the artifact into the directory build/download/artifacts. Please note that it’s not mandatory to use Gradle’s dependency management for retrieving the file. You could also use the Ant Get task or write a lower-level implementation in Groovy.

Listing 15.7. Downloading the WAR file from a remote repository
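
A sketch of how this could be wired up with a dedicated configuration and a Copy task; the repository layout, the group ID, and the configuration property names are assumptions:

repositories {
    maven { url "${config.binaryRepository.baseUrl}/${config.binaryRepository.repoName}" }
}

configurations {
    warDownload
}

dependencies {
    // Resolve only the WAR artifact of the current project version.
    warDownload "com.manning.gia:todo-web:${version}@war"
}

task fetchToDoWar(type: Copy) {
    from configurations.warDownload
    into "$buildDir/download/artifacts"
}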

Try out the task. Assume your project version is 1.0.42. After executing the task, you’ll find the expected file in the download directory:

.
└── build
    └── download
        └── artifacts
            └── todo-web-1.0.42.war

Of course, it makes sense to download the artifact just once, even though you’ll deploy it to different environments. The task fetchToDoWar automatically benefits from Gradle’s incremental build functionality. Executing the task a second time will mark it UP-TO-DATE, as shown in the following command-line output:

$ gradle fetchToDoWar
:fetchToDoWar UP-TO-DATE

Now things are getting exciting. You downloaded the artifact with the correct version. Before you can take care of business by deploying the web application to Tomcat, you should plan out the necessary steps to implement the process.

15.3.2. Identifying necessary deployment steps

The deployment process for a web application to a remote server needs to follow a workflow to ensure a smooth transition from the current version to a new one. What kind of aspects should be considered?

First, you need to make sure that all artifacts of the old version, like the exploded WAR file, are properly removed. Under all circumstances, you need to avoid mixing up old artifacts with the new ones.

Some deployment solutions like Cargo (http://cargo.codehaus.org/) allow for deploying a web application while the container is running, a technique also known as hot deployment. While it might sound attractive at first, because you don’t have to restart the server, hot deployment isn’t a viable solution for production systems. Over time, long-running JVM processes will run into an OutOfMemoryError for their PermGen space, which will cause it to freeze up. The reason is that the JVM will not garbage-collect class instances from previous deploys even if they’re now unused. Therefore, it’s highly recommended to fully stop the web container JVM process before a new version is deployed.

An efficient deployment process can look like the following steps:

1.  Push new artifact to server.

2.  Stop the web container process.

3.  Delete the old artifact and its extracted files.

4.  Deploy the new artifact.

5.  Start the web container process.

In the following section, you’ll implement this process with the help of Gradle. The previously created Vagrant instance will act as a test bed for your deployment script.

15.3.3. Deployment through SSH commands

We didn’t go into any specifics about the operating system of the virtual machine you set up before. Assume that the box is based on the Linux distribution Ubuntu. You may know that transferring a file to a remote machine running an SSH daemon can be achieved with Secure Copy (SCP). SCP uses the same security measures as SSH. For authentication purposes, SCP will ask for a password or a passphrase. Alternatively, a private key file can be provided to authenticate the user. Vagrant automatically puts this identity file into the directory <USER_HOME>/.vagrant.d.

You could model the whole deployment process in a shell script and call it from Gradle by creating an enhanced task of type Exec. That’s certainly a valid way of implementing the necessary steps. However, in this section we’ll discuss how to model each step with a corresponding Gradle task.

File transfers with SCP

You’ll start by implementing the file transfer via SCP. If you’re familiar with Ant, you may have used the SCP task before. The Ant task provides a nice abstraction on top of a pure Java SSH implementation named JSch (http://www.jcraft.com/jsch/). The next listing shows how to wrap the Ant SCP task with a custom Gradle task declared in the buildSrc project.

Listing 15.8. Custom task wrapping the optional Ant SCP task
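
A condensed sketch of such a custom task in buildSrc; the class and property names are illustrative:

import org.gradle.api.DefaultTask
import org.gradle.api.file.FileCollection
import org.gradle.api.tasks.*

class Scp extends DefaultTask {
    @InputFiles FileCollection sshAntClasspath   // JSch and the optional Ant SSH tasks
    @Input String host
    @Input Integer port
    @Input String username
    @InputFile File keyFile
    @InputFile File sourceFile
    @Input String destinationDir

    @TaskAction
    void copyToServer() {
        logger.quiet "Copying file '${sourceFile.name}' to server."
        // Register the optional Ant SCP task with the required libraries on its classpath.
        ant.taskdef(name: 'scp',
                    classname: 'org.apache.tools.ant.taskdefs.optional.ssh.Scp',
                    classpath: sshAntClasspath.asPath)
        ant.scp(file: sourceFile.canonicalPath,
                todir: "$username@$host:$destinationDir",
                port: port,
                keyfile: keyFile.canonicalPath,
                trust: true)
    }
}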

You’ll use this SCP abstraction in the build script of your web project to copy the WAR file to the remote location. In listing 15.9, you declare the JSch Ant task dependency with the help of a custom configuration named jsch. This dependency is passed on to the classpath property of the enhanced task responsible for transferring the WAR file to the remote server. You also incorporate the server settings read during the configuration phase of your build.

Listing 15.9. Transferring the WAR file to the server via SCP
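
A sketch of the wiring in the web project’s build script. The dependency notations and versions, the key file location, and the /tmp target directory follow the command-line output shown later in this section, but remain assumptions:

configurations {
    jsch
}

dependencies {
    jsch 'org.apache.ant:ant-jsch:1.9.4'
    jsch 'com.jcraft:jsch:0.1.50'
}

task copyWarToServer(type: Scp, dependsOn: fetchToDoWar) {
    sshAntClasspath = configurations.jsch
    host = config.server.hostname
    port = config.server.sshPort
    username = config.server.username
    keyFile = file("${System.getProperty('user.home')}/.vagrant.d/insecure_private_key")
    sourceFile = file("$buildDir/download/artifacts/todo-web-${version}.war")
    destinationDir = '/tmp'
}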

This listing only implements step one of the deployment process. You have four more to go. All of the other steps need to execute shell commands on the remote server itself.

Executing remote commands with SSH

The SSH command makes it very simple to achieve such an operation. Instead of running an interactive shell, SSH can run a command on the remote machine and render the output. The following listing shows the custom task SshExec that internally wraps the SSH Ant task.

Listing 15.10. Custom task wrapping the optional Ant SSH task
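
A sketch of the SshExec task, analogous to the Scp task shown earlier; names are again illustrative:

import org.gradle.api.DefaultTask
import org.gradle.api.file.FileCollection
import org.gradle.api.tasks.*

class SshExec extends DefaultTask {
    @InputFiles FileCollection sshAntClasspath
    @Input String host
    @Input Integer port
    @Input String username
    @InputFile File keyFile
    @Input String command

    @TaskAction
    void runCommand() {
        logger.quiet "Executing SSH command '$command'."
        // Register the optional Ant SSHExec task and run the command on the remote machine.
        ant.taskdef(name: 'sshexec',
                    classname: 'org.apache.tools.ant.taskdefs.optional.ssh.SSHExec',
                    classpath: sshAntClasspath.asPath)
        ant.sshexec(host: host, port: port, username: username,
                    keyfile: keyFile.canonicalPath, command: command, trust: true)
    }
}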

In the next listing, you use this custom SSH task to run various shell commands on the Vagrant virtual box. As a whole, this script implements the full deployment workflow we discussed earlier.

Listing 15.11. SSH commands for managing Tomcat and deploying WAR file
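
A sketch of the full workflow. The task names and shell commands mirror the command-line output shown below; the Tomcat location, the shared configuration block, and the replacement of the emulated deployWar task from listing 15.5 are assumptions:

def tomcatHome = '/opt/apache-tomcat-7.0.42'

// Shared connection settings for all SSH tasks
tasks.withType(SshExec) {
    sshAntClasspath = configurations.jsch
    host = config.server.hostname
    port = config.server.sshPort
    username = config.server.username
    keyFile = file("${System.getProperty('user.home')}/.vagrant.d/insecure_private_key")
}

task shutdownTomcat(type: SshExec, dependsOn: copyWarToServer) {
    doFirst { logger.quiet 'Shutting down remote Tomcat.' }
    command = "sudo -u tomcat $tomcatHome/bin/shutdown.sh"
}

task deleteTomcatWebappsDir(type: SshExec, dependsOn: shutdownTomcat) {
    command = "sudo -u tomcat rm -rf $tomcatHome/webapps/todo"
}

task deleteTomcatWorkDir(type: SshExec, dependsOn: shutdownTomcat) {
    command = "sudo -u tomcat rm -rf $tomcatHome/work"
}

task deleteOldArtifacts(dependsOn: [deleteTomcatWebappsDir, deleteTomcatWorkDir]) {
    doLast { logger.quiet 'Deleting old WAR artifacts.' }
}

task copyWarToWebappsDir(type: SshExec, dependsOn: deleteOldArtifacts) {
    doFirst { logger.quiet 'Deploying WAR file to Tomcat.' }
    command = "sudo -u tomcat cp /tmp/todo-web-${version}.war $tomcatHome/webapps/todo.war"
}

task startupTomcat(type: SshExec, dependsOn: copyWarToWebappsDir) {
    doFirst { logger.quiet 'Starting up remote Tomcat.' }
    command = "sudo -u tomcat $tomcatHome/bin/startup.sh"
}

task deployWar(dependsOn: startupTomcat) {
    description = 'Deploys the WAR file to the targeted environment.'
}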

That’s all you need to implement a full deployment process. Bring up the Vagrant box and give it a try. The following command-line output shows the execution steps in action:

$ gradle deployWar -Penv=local
...
Loading configuration for environment 'local'.
:todo-web:fetchToDoWar
:todo-web:copyWarToServer
Copying file 'todo-web-1.0.42.war' to server.
:todo-web:shutdownTomcat
Shutting down remote Tomcat.
Executing SSH command 'sudo -u tomcat /opt/apache-tomcat-7.0.42/bin/shutdown.sh'.
:todo-web:deleteTomcatWebappsDir
Executing SSH command 'sudo -u tomcat rm -rf /opt/apache-tomcat-7.0.42/webapps/todo'.
:todo-web:deleteTomcatWorkDir
Executing SSH command 'sudo -u tomcat rm -rf /opt/apache-tomcat-7.0.42/work'.
:todo-web:deleteOldArtifacts
Deleting old WAR artifacts.
:todo-web:copyWarToWebappsDir
Deploying WAR file to Tomcat.
Executing SSH command 'sudo -u tomcat cp /tmp/todo-web-1.0.42.war /opt/apache-tomcat-7.0.42/webapps/todo.war'.
:todo-web:startupTomcat
Starting up remote Tomcat.
Executing SSH command 'sudo -u tomcat /opt/apache-tomcat-7.0.42/bin/startup.sh'.
:todo-web:deployWar

After restarting Tomcat, it may take a few seconds until your web application is up and running. After giving Tomcat enough time to explode the WAR file and start its services, navigating to the URL http://192.168.1.33:8080/todo in a browser will present you with a To Do list ready to be filled with new tasks.

Running SSH commands isn’t the only approach for tackling deployment automation. There are many other ways to achieve this goal. Due to the diversity of this topic, we won’t present them in this chapter. Don’t feel discouraged from trying out new, automated ways of getting your software deployed in the target environment. As long as the process you choose is repeatable, reliable, and matches your organization’s needs, you’re on the right path.

15.4. Deployment tests

Every deployment of an application should be followed by rudimentary tests that verify that the operation was successful and the system is in an expected, working state. These types of tests are often referred to as deployment tests.

If for whatever reason a deployment failed, you want to know about it—fast. In the worst-case scenario, a failed deployment to the production environment, the customer shouldn’t be the first to tell you that the application is down. You absolutely need to avoid this situation because it destroys credibility and equates to money lost for your organization.

The harsh reality is that deployments can fail even with the best preparation. Knowing about it as soon as possible is worth a mint. As a result, you can take measures to bring back the system into an operational state; for example, by rolling back the application to the last “good” version.

In addition to these fail-fast tests, automated acceptance tests verify that important features or use cases of the deployed application are correctly implemented. For this purpose, you can use the functional tests you wrote in chapter 7. Instead of running them on your developer machine against an embedded Jetty container, you’ll configure them to target other environments. Let’s first look at how to implement the most basic types of deployment tests: smoke tests.

15.4.1. Verifying a successful deployment with smoke tests

Your deployment automation code should incorporate tests that check that your system is in a basic, functional state after the deployment is completed. These tests are called smoke tests. Why this name, you ask? The term draws an analogy to hardware installation, such as electronic circuits. If you turn on the power and see smoke coming out of the electrical parts, you know that the installation went wrong. The same conclusion can be drawn for software.

After a deployment, the target environment may need time to reach its fully functional state. For example, if the deployment process restarts a web container, it’s obvious that it won’t be able to serve incoming requests right away. If that’s the case for your environment setup, make sure to give some leeway before executing your suite of smoke tests.

How do smoke tests look for a web application like your To Do application? Simple—you can fire basic HTTP requests to see if the Tomcat server is up and running. You also want to find out if the application’s homepage URL responds with the HTTP status code 200.

Making HTTP requests from Gradle tasks is easily achieved with Java’s standard API classes (java.net.HttpURLConnection) or third-party libraries like Apache HttpComponents (http://hc.apache.org/). You’ll make your life even easier by using a Groovy library named HTTPBuilder (http://groovy.codehaus.org/modules/http-builder/). HTTPBuilder wraps the functionality provided by HttpComponents with a DSL-style configuration mechanism, which significantly reduces the code you need to write. You’ll use HTTPBuilder from a custom task named HttpSmokeTest that acts as an abstraction layer for making HTTP calls. To make this task available to all projects of your build, the implementing class becomes part of the buildSrc project, as shown in the following directory tree:
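
.
└── buildSrc
    ├── build.gradle
    └── src
        └── main
            └── groovy
                └── com
                    └── manning
                        └── gia
                            └── test
                                └── smoke
                                    └── HttpSmokeTest.groovy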

Before any class under buildSrc can use the HTTPBuilder library, you need to declare it in a build script for that project. The following listing references the version 0.5.2.

Listing 15.12. Build script for buildSrc project
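
A sketch of buildSrc/build.gradle; the dependency notation is the commonly used coordinate for HTTPBuilder, but treat it as an assumption:

repositories {
    mavenCentral()
}

dependencies {
    compile localGroovy()
    compile gradleApi()
    compile 'org.codehaus.groovy.modules.http-builder:http-builder:0.5.2'
}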

As you can imagine, you may implement other types of smoke tests later; for example, for testing the functionality of the database. For this reason, you’ll group smoke tests in the package com.manning.gia.test.smoke. Let’s take a closer look at the implementation of a smoke test that fires an HTTP request, shown in the next listing.

Listing 15.13. Custom task for executing HTTP smoke tests
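
A condensed sketch of the task class; the property names and the failure handling are illustrative:

package com.manning.gia.test.smoke

import groovyx.net.http.HTTPBuilder
import groovyx.net.http.Method
import org.gradle.api.DefaultTask
import org.gradle.api.GradleException
import org.gradle.api.tasks.Input
import org.gradle.api.tasks.TaskAction

class HttpSmokeTest extends DefaultTask {
    @Input String url
    @Input String errorMessage

    @TaskAction
    void verifyUrl() {
        def http = new HTTPBuilder(url)
        http.request(Method.GET) {
            // Pass if the endpoint responds successfully, fail the build otherwise.
            response.success = { resp ->
                logger.quiet "Received HTTP status code ${resp.status} for URL '$url'."
            }
            response.failure = { resp ->
                throw new GradleException("$errorMessage (HTTP status code ${resp.status})")
            }
        }
    }
}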

In your build script, you can set up as many smoke tests as you need. For the URL, provide the HTTP endpoint of your web application, read from the Groovy configuration file earlier. Figure 15.5 shows how to use the env project property to target a particular environment.

Figure 15.5. Running smoke tests against different environments

Let’s look at some exemplary smoke test implementations. The following listing shows two different smoke tests: one for verifying that Tomcat is up and running and another for checking if your web application was deployed successfully.

Listing 15.14. Smoke tests for verifying HTTP URLs
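
A sketch of two such smoke tests plus an aggregate task. The context path /todo follows the URLs used earlier in this chapter; the tomcatPort property is assumed to be added to the server configuration in buildConfig.groovy:

import com.manning.gia.test.smoke.HttpSmokeTest

task tomcatSmokeTest(type: HttpSmokeTest) {
    url = "http://${config.server.hostname}:${config.server.tomcatPort}"
    errorMessage = 'Tomcat server is not available.'
}

task todoAppSmokeTest(type: HttpSmokeTest) {
    url = "http://${config.server.hostname}:${config.server.tomcatPort}/todo"
    errorMessage = 'To Do application is not available.'
}

task smokeTests(dependsOn: [tomcatSmokeTest, todoAppSmokeTest]) {
    description = 'Runs all HTTP smoke tests against the targeted environment.'
}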

I’m sure you can imagine a whole range of smoke tests for your real-world applications. It’s certainly worth experimenting with options, as long as they’re cheap to write and fast to execute.

If all smoke tests pass, you can rightfully assume that your application was deployed successfully. But does it work functionally? As a next step, you should determine whether the provided application functionality works as expected.

15.4.2. Verifying application functionality with acceptance tests

Functional tests, also called acceptance tests, focus on verifying whether the end user requirements are met. In chapter 7, you learned how to implement a suite of functional tests for your web application with the help of the browser automation tool Geb.

Of course, you want to be able to run these tests against a deployed application in other environments. Acceptance tests are usually run during the automated acceptance test phase of the continuous delivery build pipeline. This is the first time in the pipeline that you bring together the work of the development team, deploy it to a test server, and verify whether the functionality meets the needs of the business in an automated fashion. In later phases of the build pipeline, acceptance tests can be run to get quick feedback about the success of a deployment on a functional level. The better the quality of your tests, the more confident you can be about the determined result.

Next, you’ll add a new task of type Test for running functional tests against remote servers. Geb allows for pointing to an HTTP endpoint by setting the system property geb.build.baseUrl. The value you assign to this system property is derived from the read environment configuration, as shown in the following listing.

Listing 15.15. Task for exercising functional tests against remote servers
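
A sketch of such a task. The functionalTest source set is assumed to exist from the Geb setup in chapter 7, and the URL properties mirror the smoke-test assumptions above:

task remoteFunctionalTest(type: Test) {
    testClassesDir = sourceSets.functionalTest.output.classesDir
    classpath = sourceSets.functionalTest.runtimeClasspath
    // Point Geb at the deployed application instead of an embedded Jetty container.
    systemProperty 'geb.build.baseUrl',
            "http://${config.server.hostname}:${config.server.tomcatPort}/todo/"
}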

15.5. Deployment as part of the build pipeline

In the previous chapters, we discussed the purpose and practical application of phases during the commit stage. We compiled the code, ran automated unit and integration tests, produced code quality metrics, assembled the binary artifact, and pushed it to a repository for later consumption. For a quick refresher on previously configured Jenkins jobs, please refer to earlier chapters.

While the commit stage asserts that the software works at the technical level, the acceptance stage verifies that it fulfills functional and nonfunctional requirements. To make this determination, you’ll need to retrieve the artifact from the binary repository and deploy it to a production-like test environment. You use smoke tests to make sure that the deployment was successful before a suite of automated acceptance tests is run to verify the application’s end user functionality. In later stages, you reuse the already downloaded artifact and deploy it to other environments: UAT for manual testing, and production environment to get the software into the hands of the end users.

Let’s look at these stages with Jenkins. As a template for these stages, you’ll duplicate an existing Jenkins job. Don’t worry about their configuration at the moment. You’ll modify their settings in a bit. The end result will be the following list of Jenkins jobs:

  • todo-acceptance-deploy
  • todo-acceptance-test
  • todo-uat-deploy
  • todo-uat-smoke-test
  • todo-production-deploy
  • todo-production-smoke-test

Together these jobs model the outstanding stages of your build pipeline, as shown in figure 15.6.

Figure 15.6. Acceptance, UAT, and production stages as part of the build pipeline

In the next sections, we’ll examine each of these stages one by one. Let’s start with the acceptance stage.

15.5.1. Automatic deployment to test environment

You’ll start by configuring the Jenkins job todo-acceptance-deploy for automatically deploying the WAR file to a test server. Figure 15.7 illustrates this stage in the context of later deployment stages of the build pipeline.

Figure 15.7. Deploying the WAR file to a test server for acceptance testing

Go through the following checklist to set up the job correctly:

  • In the Source Code Management configuration section, choose the option Clone Workspace and the parent project todo-distribution.
  • For the build step, you want to download the WAR file from Artifactory and deploy it to the test environment. Add a build step for invoking your Gradle script using the wrapper and enter the task deployWar. In the field Switches, you’ll provide the appropriate environment property: -Penv=test.
  • Define the build name by incorporating the upstream build number parameter: todo#${ENV,var="SOURCE_BUILD_NUMBER"}.
  • Add a parameterized build action that defines a build trigger on the job running your deployment tests named todo-acceptance-test. As far as parameters go, you’ll reuse the existing ones by selecting the option Current Build Parameters.

15.5.2. Deployment tests

Deployment testing should follow directly after deploying the application to the test environment (figure 15.8).

Figure 15.8. Deployment tests against deployed WAR file

There are two important points you need to consider when configuring the corresponding Jenkins job. The execution of the job has to be slightly delayed to allow the test environment to come up properly. Also, the downstream project (the deployment to UAT) shouldn’t be executed automatically. The following list explains the necessary configuration steps:

  • In the Source Code Management configuration section, choose the option Clone Workspace and the parent project todo-acceptance-test.
  • In the Advanced Project Options configuration section, tick the checkbox Quiet Period and enter the value 60 into the input field. This option will delay the execution of the job for one minute to ensure that the Tomcat server has been properly started. Because this method can be kind of brittle, you may want to implement a more sophisticated mechanism to check whether the server is up and running.
  • For the build step, you want to run smoke and acceptance tests against the test environment. Add a build step for invoking your Gradle script using the wrapper and enter the tasks smokeTests remoteFunctionalTest. In the field Switches, you’ll provide the appropriate environment property: -Penv=test.
  • Define the build name by incorporating the upstream build number parameter: todo#${ENV,var="SOURCE_BUILD_NUMBER"}.
  • Add a parameterized build action that defines a manual build trigger on the job deploying the WAR file to the UAT environment named todo-uat-deploy. To define the manual trigger, choose the option Build Pipeline Trigger → Manually Execute Downstream Project from the Add Post-build Action drop-down menu. The build pipeline view will indicate the manual trigger by displaying a Play button for this job.

When executing the full pipeline in Jenkins, you’ll notice that the job for deploying to UAT requires manual intervention. Only if you actively initiate the deployment will the pipeline execution resume—that is, until it hits another push-button, downstream job.

15.5.3. On-demand deployment to UAT and production environment

You already configured a push-button trigger for the UAT deployment job in the previous section. The same configuration needs to apply to the job that deploys the artifact to the production environment, as shown in figure 15.9.

Figure 15.9. Performing push-button releases to UAT and production environments

We won’t go into too much detail about the configuration of these jobs. In fact, they look very similar to the jobs that you set up to implement the acceptance stage. The big differentiator is the environment they target. In the Gradle build step, the UAT deployment job needs to set the -Penv=uat switch. The deployment job to the production environment applies the setting -Penv=prod.

The build pipeline view in Jenkins can be configured to keep a history of previously executed builds. This is a handy option if you want to get a quick overview of failed and successful builds. This view also enables the stakeholders of your build to deploy artifacts with a specific version. Typical scenarios for this use case could be one of the following:

  • The product team decides to launch a new feature included in a specific version of your application.
  • Rolling back the application version in production to a known “good” state due to a failed deployment or broken feature.
  • Deploying a given feature set for manual testing by the QA team into the UAT environment.

Jenkins needs to know which version should be deployed when you hit the release button. Thankfully, the parameterized build plugin helps you to provide the appropriate version to the job. For each of the deployment jobs, make the following configuration. Tick the checkbox This Build Is Parameterized. From the drop-down menu Add Parameter choose String Parameter. In the Name input box, enter the value SOURCE_BUILD_NUMBER.

15.6. Summary

Software deployments need to be repeatable and reliable. Any server outage inflicted by a faulty deployment—with the biggest hit on production systems—results in money lost for your organization. Automation is the next logical and necessary step toward formulating and streamlining the deployment process.

Deployable artifacts often look different by nature, follow custom project requirements, and demand distinct runtime environments. While there’s no overarching recipe for deploying software, Gradle proves to be a flexible tool for implementing your desired deployment strategy.

A configured target environment is a prerequisite for any software deployment. At the beginning of this chapter, we discussed the importance of infrastructure as code for setting up and configuring an environment and its services in an automated fashion. Vagrant can play an instrumental role in creating and testing infrastructure templates. You learned how to bootstrap a virtual machine by wrapping Vagrant management commands with Gradle tasks. Later, you implemented an exemplary deployment process using SSH commands and exercised the functionality on a running Vagrant box.

To ensure repeatability for your deployments, the same code should be used across all environments. This means that the automation logic needs to use dynamic property values to target a particular environment. Environment-specific configuration becomes very readable when structured with closures and stored in a Groovy script. Groovy’s API class ConfigSlurper provides an easy-to-use mechanism for parsing these settings. To have the property values available for consumption across all projects of your build, you coded a task that reads the Groovy script during Gradle’s configuration lifecycle phase.

The outcome of every deployment needs to be verified. Automated deployment tests, invoked after a deployment, can provide fast feedback. Smoke tests are easy to implement and quickly reveal breakages. Functional tests, also called acceptance tests, are the natural extension of smoke tests. This type of test assesses whether functional and nonfunctional requirements are met.

By the end of this chapter, you extended your build pipeline with automated and push-button deployment capabilities. In Jenkins, you set up three deployment jobs for targeting a test, UAT, and production environment, including their corresponding deployment tests. With these last steps completed, you built a fully functional, end-to-end build pipeline. Together, we explored the necessary tooling and methods that will enable you to implement your own build pipeline using Gradle and Jenkins.
