Chapter 13. Continuous integration

This chapter covers

  • The benefits of continuous integration
  • Using Jenkins to build a Gradle project
  • Exploring cloud-based CI solutions
  • Modeling a build pipeline with Jenkins

If you’re working as a member of a software development team, you will inevitably have to interface with code written by your peers. Before working on a change, a developer retrieves a copy of the code from a central source code repository, but the local copy of this code on the developer’s machine can quickly diverge from the version in the repository. While working on the code, other developers may commit changes to existing code or add new artifacts like resource files and dependencies. The longer you wait to commit your source code into the shared code repository, the harder it becomes to avoid merge conflicts and integration issues.

Continuous integration (CI) is a software development practice where source code is integrated frequently, optimally multiple times a day. With each change, the source code is compiled and tested by an automated build, which leads to significantly fewer integration headaches and gives you immediate feedback about the health status of your project.

In this chapter, we’ll discuss the principles and architecture of continuous integration. We’ll also explore the tooling that enables continuous integration, called CI servers or platforms. CI servers automatically schedule and execute a build after a code change is made to the central repository. After learning the basic mechanics of a CI server, you’ll put your knowledge into practice. We’ll discuss how to install and use the popular open source CI server Jenkins to build your To Do application. As with many development tools, CI server products have been moved to the cloud. We’ll explore various offerings and compare their feature sets.

Breaking up a big, monolithic build job into smaller, executable steps leads to faster feedback time and increases flexibility. A build pipeline orchestrates individual build steps by defining their order and the conditions under which they’re supposed to run. Jenkins provides a wide range of helpful extensions to model such a pipeline. This chapter will build the foundation for configuring the pipeline steps we’ve touched on so far in the book. With each of the following chapters, you’ll add new steps to the pipeline until you reach deployment into production. Let’s first get a basic understanding of how continuous integration ties into the development process.

13.1. Benefits of continuous integration

Integrating source code committed to a central VCS by different developers should be a nonevent. Continuous integration is the process of verifying these integrations by building the project in well-defined intervals (for example, every five minutes) or each time a commit is pushed to the VCS. Your Gradle build is perfectly suited to perform this verification. With every commit, you can compile the code, run various types of tests, and even determine whether the code quality of your project improved or degraded. What exactly do you gain? Apart from the initial time investment of setting up and configuring a CI server, continuous integration provides many benefits:

  • Reduced risk: Code is built with every commit to the VCS. Therefore, the code is frequently integrated. This practice reduces the risk of discovering integration issues late in the project’s lifecycle; for example, every two to four weeks for a new release. As a side effect, you can also be confident that your build process works because it’s constantly exercised.
  • Avoiding environment-specific errors: Developers usually build software on a single operating system. While you can rule out general build tool runtime issues by using the Gradle Wrapper, you still have a dependency on the machine’s setup. On a CI server, you can exercise the build independent of a particular machine setup or configuration.
  • Improved productivity: While developers run their builds many times a day, it’s reasonable for them to concentrate on executing tasks that are essential to their work: compiling the code and running selected tests. Long-running tasks, like generating code quality reports, would reduce their productivity and are better off being run on a CI server.
  • Fast feedback: If a build fails because of an integration issue, you’ll want to know about it as soon as possible so you can fix the root cause. CI servers offer a wide variety of notification methods. A common notification would be an email containing the link to the failed build, the error message, and a list of recent commits.
  • Project visibility: Continuous integration will give you a good idea of the current health status of your project. Many CI servers come with a web-based dashboard that renders successful and failed builds, aggregates metrics, and provides central reporting.

Despite all of these benefits, introducing continuous integration to a team or organization requires an attitude of transparency, and in extreme cases may even require a complete culture shift. The health status of a project is always visible through a dashboard or notifications. This means that a broken build won’t be a secret anymore. To improve project quality, try to foster a culture of intolerance for defects. You’ll see that it pays off in the long run. With these benefits in mind, let’s see how continuous integration plays out in practice by playing through a typical scenario.

Three components are essential to a CI environment: a central VCS that all developers commit changes to, the CI server, and an executable build script. Figure 13.1 illustrates the interaction between those components.

Figure 13.1. Anatomy of a CI environment

Let’s go over a typical scenario of integrating code changes in a team of three developers:

1.  Committing code: One or more developers commit a code change to the VCS within a certain timeframe.

2.  Triggering the build: A CI server can identify code changes in the VCS in one of two modes. It can either be scheduled to check the VCS for changes in predefined time intervals (pull mode), or it can be configured to listen for a callback from the VCS (push mode). If a change is identified, the build is automatically initiated. Alternatively, the server can simply run the build at fixed, scheduled times, regardless of whether anything changed.

3.  Executing the build: Once a build is triggered, it executes a specific action. An action could be anything from invoking a shell script, to executing a code snippet, to running a build script. In our discussions, this will usually be a Gradle build.

4.  Sending a notification: A CI server can be configured to send out notifications about the outcome of a build, whether it was successful or failed. Notifications can include emails, IMs, IRC messages, SMS, and many more.

Depending on the configuration of your CI server, these steps are performed for a single code change or for multiple code changes at once. The longer the polling interval, the more changes are usually picked up by a single build.

Over the past 10 years, many open source and commercial CI server products have sprung up. Many of them are downloadable products that are installed and hosted within your company’s network. Recently, there’s been a lot of hype about CI servers available in the cloud. Cloud-based solutions relieve you from the burden of having to provision infrastructure and lower the barrier of entry. They’re usually a good fit for your own open source project. Among the most popular CI servers are Hudson/Jenkins, JetBrains TeamCity, and Atlassian Bamboo. In this chapter, you’ll mainly use Jenkins to implement continuous integration for your To Do application because it has the biggest market share. Before you can emulate a typical CI workflow on your local machine, you’ll have to set up your components.

13.2. Setting up Git

Continuous integration is best demonstrated by seeing it in action. All you need is a CI server installed on your local system, access to a central VCS repository, and a project you can build with Gradle. This section assumes that you’ve already installed a Java version on your machine.

Jenkins is the perfect candidate for getting started quickly. Its distribution can be downloaded and started in literally a minute. For your convenience, I uploaded the sample To Do application to GitHub, an online hosting service for projects. GitHub is backed by the free and open source VCS named Git. Don’t be intimidated by this suite of tools if you haven’t used them yet. You’ll install and configure each of them step by step. You’ll start by signing up on GitHub if you don’t yet have an account.

13.2.1. Creating a GitHub account

Creating a free account on GitHub (https://github.com/) is as easy as entering your username, email address, and a password in the signup form on the homepage, as shown in figure 13.2.

Figure 13.2. Signing up for a free GitHub account

That’s it; you don’t even need to confirm your account. A successful signup will bring you to your account’s dashboard. Feel free to explore the functionality or update your profile settings. To establish a secure SSH connection between your computer and GitHub, you’ll need to generate SSH keys and add the public key to your GitHub account. GitHub offers a comprehensive guide (https://help.github.com/articles/generating-ssh-keys) that explains the nitty-gritty details of achieving this.

13.2.2. Forking the GitHub repository

The sample To Do application is available as a public GitHub repository at https://github.com/bmuschko/todo. Because you’re not the owner of this repository, you won’t be able to commit changes to it. The easiest way to get push permission is to fork the repository into your own account. A fork is a server-side copy of the original repository, owned by you, that you can modify at will without affecting the original. To fork a repository, navigate to the sample repository URL and click the Fork button in the navigation bar shown in figure 13.3.

Figure 13.3. Forking the sample repository

After a few seconds, the project will be ready for use. To interact with your remote GitHub repository you’ll need to install and configure the Git client.

13.2.3. Installing and configuring Git

You can download the client distribution from the Git homepage (http://git-scm.com/). The page offers you installers for the most common operating systems. Follow the instructions to install Git onto your system. After a successful installation, you should be able to execute Git on the command line. You can verify the installed version with the following command:

$ git --version
git version 1.8.2

Commits to a remote repository can be directly mapped to your GitHub account. By setting your client’s username and email address, GitHub will automatically link the change to your account. The following two commands show how to set both configuration values:

$ git config --global user.name "<username>"
$ git config --global user.email "<email>"

You’re all set; you’ve configured Git and the sample repository. Next, you’ll install Jenkins and configure a build job to run the build for your To Do application.

13.3. Building a project with Jenkins

Jenkins (http://jenkins-ci.org/) originated as a project called Hudson (http://hudson-ci.org/). Hudson started out as an open source project in 2004 at Sun Microsystems. Over the years, it became one of the most popular CI servers, with a huge market share. After Oracle acquired Sun in 2010, disagreements over the project’s governance led the community to fork the project on GitHub in 2011 and rename it Jenkins. While Hudson still exists today, most projects switched to Jenkins because it provides the best support for bug fixes and extensions. Jenkins, which is written entirely in Java, is easy to install and upgrade, is highly scriptable, and offers over 600 plugins. You’re going to install Jenkins on your machine.

13.3.1. Starting Jenkins

On the Jenkins webpage, you can find native installation packages for Windows, Mac OS X, and various Linux distributions. Alternatively, you can download the Jenkins WAR file and either drop it into your favorite Servlet container or directly start it using the Java command. Download the WAR file and start up the embedded container with the Java command:

$ java -jar jenkins.war
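By default, the embedded container listens on port 8080. If that port is already taken on your machine, you can start Jenkins on a different one; the --httpPort option is accepted by the embedded container:

```shell
$ java -jar jenkins.war --httpPort=9090
```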

After it starts up successfully, open the browser and enter the URL http://localhost:8080/. You should see the Jenkins dashboard. You’re ready to install plugins and configure build jobs.

13.3.2. Installing the Git and Gradle plugins

Jenkins comes with a minimal set of features. For example, out of the box you can only configure a build job that pulls the source code of a project hosted on CVS or Subversion and invokes an Ant script. If you want to build a Gradle project hosted in a Git repository, you’ll need to install the relevant plugins. These plugins can be installed through the plugin manager. To access the plugin manager, click Manage Jenkins on the main dashboard page. Then, on the following page, click Manage Plugins. You’ll end up on the Plugin Manager page, shown in figure 13.4.

Figure 13.4. Jenkins’ Plugin Manager page

The Plugin Manager page shows four tabs: Updates, Available, Installed, and Advanced. To install new plugins, navigate to the Available tab. In the upper-right corner, you’ll find a search input box called Filter. Enter the search criteria “git plugin” and tick the checkbox next to the plugin named Git Plugin, as shown in figure 13.5.

Figure 13.5. Installing the Git plugin

After pressing the button Install Without Restart, the plugin is downloaded and installed. Using this technique, you’ll also search for the Gradle plugin. Enter “gradle plugin” into the search box, as shown in figure 13.6.

Figure 13.6. Installing the Gradle plugin

After ticking the plugin’s checkbox, press the button Download Now and Install After Restart. You’ll see a screen similar to figure 13.7 that shows the downloaded and installed plugins. To use the plugins, Jenkins needs to be restarted. Ticking the checkbox Restart Jenkins When Installation Is Complete and No Jobs Are Running will take care of the restart.

Figure 13.7. Restarting Jenkins through the browser

After a few moments, Jenkins is restarted and the plugins are fully functional. You’re ready to define your first build job.

13.3.3. Defining the build job

Jenkins defines the actual work steps or tasks in a build job. A build job usually defines the origin of source code that you want to build, how it should be retrieved, and what action should be executed when the job is run. For example, a build job can be as simple as compiling the source code and running the unit tests. You’ll create a build job that does exactly that for your To Do application.

On the Jenkins main dashboard, click the link New Job. This opens a screen that lets you enter the job name and select the type of project you want to build. For the job name, enter “todo” and press the radio button Build a Free-style Software Project. A free-style project allows you to control all aspects of a build job; for example, the VCS and build tool you want to use. Figure 13.8 shows the selected values.

Figure 13.8. Creating the free-style build job

When you’re done, click OK. The build job is created and you’ll be presented with the job configuration page.

Configuring the repository

First, you’ll configure the GitHub repository for your build job. By configuring the repository, you ensure that Jenkins will know where to find the source code of your project when the job is executed. If you scroll down a little in the configuration screen, you’ll find a section named Source Code Management.

You want to build your project stored in a Git repository. Click the Git radio button and enter the repository URL, which is the SSH URL you’ll find in the forked repository of your GitHub account. It usually has the following form: git@github.com:<username>/todo.git. Figure 13.9 illustrates the filled-out Source Code Management section.

Figure 13.9. Configuring the Git repository

Now that you’ve told Jenkins where to retrieve the sources from, you’ll also want to define when to pull them. In the next section, you’ll set up a build trigger.

Configuring the build trigger

A build trigger is a standard feature of Jenkins. It determines when a build should be executed or triggered. Let’s say you want to poll your repository on GitHub in certain time intervals, such as every minute. Scroll to the configuration section named Build Triggers, tick the checkbox Poll SCM, and enter the Unix cron expression “* * * * *” into the input box, as shown in figure 13.10.

Figure 13.10. Polling the repository for changes minute by minute

The expression “* * * * *” means that the repository should be polled every single minute. Polling serves your purpose of periodically checking for changes. On the flip side, this method is fairly inefficient. Not only does it create unnecessary load for your VCS and Jenkins server, it also delays the build after a change is pushed to the repository by the timeframe you defined in your cron expression (in your case this is one minute).
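For reference, the five fields of the expression map to minute, hour, day of month, month, and day of week. A sketch of the schedule field, including a gentler alternative (the H token, which spreads polling load by hashing the job name, is a Jenkins-specific extension to standard cron syntax):

```
# minute  hour  day-of-month  month  day-of-week
* * * * *         # poll every minute (used in this chapter)
H/15 * * * *      # poll roughly every 15 minutes, offset per job
```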

A better way is to configure your Jenkins job to listen for push notifications from the repository. Every time a change is committed to the repository, the VCS will make a call to Jenkins to trigger a build. Therefore, a build is only executed if an actual change occurs. You’ll find many examples online that describe the necessary setup for your VCS. Next, we’ll define what a build means if it’s triggered.

Configuring the build step

Whenever a build is triggered, you want to execute your Gradle build script. Each task that should be executed is called a build step. Build steps can be added in the configuration section Build. Under Build, click the dropdown box Add Build Step and select Invoke Gradle Script. The options you see in figure 13.11 are provided by the Gradle plugin you installed earlier. Choose the radio button Use Gradle Wrapper and enter the tasks “clean test” into the Tasks input box.

Figure 13.11. Configuring the Gradle build invocation

This is one of the scenarios where the Gradle Wrapper really shines. You didn’t have to install the Gradle runtime. Your build provides the runtime and clearly expresses which version of Gradle should be used.

If you’re building the project on your developer machine, you’ll want to make good use of Gradle’s incremental build feature to save time and improve the performance of your build. In a CI setting, the build should be run from a clean slate to make sure all tests are rerun and recorded appropriately. That’s why you added “clean test” to the list of tasks. Next, we’ll touch on configuring build notifications.
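To mirror what the CI job does on your own machine, you can run the same tasks through the wrapper script checked into the repository root:

```shell
$ ./gradlew clean test
```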

Configuring email notification

Email notifications are set up as a post-build action. Scroll down to the section Post-build Actions, click the dropdown box Add Post-build Action, and choose the option E-mail Notification. The only thing you need to do to receive emails on a failed build is to enter your email address into the Recipients input box, as shown in figure 13.12.

Figure 13.12. Setting up a post-build email notification action

After adding this entire configuration, make sure you save the settings by pressing Save on the bottom of the screen. That’s all you need to execute your build.

13.3.4. Executing the build job

After saving the build job, you can find it listed on Jenkins’ dashboard. The gray ball on the left side of the job indicates that it hasn’t been built yet. A successful build will turn it blue, and a failed build is indicated by a red ball. You can either wait a minute until the job is triggered automatically or you can manually initiate the build by pressing the clock icon, which schedules the build. Figure 13.13 shows your first build in progress.

Figure 13.13. Build executing in progress

After a few minutes, the build is finished. You should see the ball turn blue and a sun icon will appear, which indicates the health status of your project. The job also reports on the last duration of the build and displays a timestamp that tells you the last time the build was successfully run. Figure 13.14 shows the successful build in the dashboard.

Figure 13.14. Build job executed successfully

To get more information about the specifics of a build, you can click on the job name, which brings you to the project’s homepage. The page lets you reconfigure the job, trigger the build manually, and inspect the build history. You’ll find your first build at #1 in the build history.

Click on it to examine what happened under the hood when the job was executed. One of the menu items on the left side is the console output. The console output recorded the steps that were executed during the build. First, the Git repository was checked out from the master branch. After pulling down the source code, the Gradle build was initiated for the tasks you defined. If you look closer, you can also see that the Gradle Wrapper and the dependencies were downloaded before the tasks were executed. Figure 13.15 shows an excerpt of the console output.

Figure 13.15. Job execution console output

The console output is rendered in real time while a build is executing. This feature provides invaluable information if you want to track down the root cause of a failed build.

Congratulations, you set up a CI job for your project! To trigger a subsequent build, you can either push a code change to your repository or manually initiate it on the project’s dashboard. Next, you’ll improve on your project’s reporting capabilities.

13.3.5. Adding test reporting

Jenkins provides extensive reporting capabilities. With minimal effort, you can configure your project to process the XML test results produced by testing frameworks like JUnit, TestNG, and Spock. In turn, Jenkins generates a graphical test result trend over time and lets you drill into the details of successfully executed and failed tests. Though limited in functionality, it can serve as an easy-to-set-up alternative to reporting provided by Sonar.

Publishing unit test results

You may remember that the XML test results produced by Gradle sit in the directory build/test-results for each of your subprojects. To create a clean separation between unit and integration test results, you reconfigured the project on GitHub to put the results for unit tests in the subdirectory unit and integration test results into the subdirectory integration.
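That reconfiguration can be sketched as follows. The snippet assumes a Test task named integrationTest already exists in the build; the actual task names in the GitHub project may differ.

```groovy
// Write unit test XML results to build/test-results/unit
test {
    reports.junitXml.destination = file("$buildDir/test-results/unit")
}

// Write integration test XML results to build/test-results/integration
integrationTest {
    reports.junitXml.destination = file("$buildDir/test-results/integration")
}
```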

After navigating back to the project configuration page, scroll down to the section Post-build Actions, click the dropdown box Add Post-build Action, and choose Publish JUnit Test Result Report. You can tell Jenkins to parse the test results of all subprojects by entering the expression “**/build/test-results/unit/*.xml” into the input field, as shown in figure 13.16.

Figure 13.16. Configuring test reporting for all subprojects
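The double asterisk in the Ant-style pattern crosses directory boundaries, so a single expression covers all subprojects. Bash’s globstar option behaves similarly (though not identically) and can be used to sanity-check which files such a pattern picks up; the subproject names below are made up for illustration:

```shell
# Create a layout resembling two subprojects with unit test results
mkdir -p demo/model/build/test-results/unit demo/web/build/test-results/unit
touch demo/model/build/test-results/unit/TEST-ModelTest.xml
touch demo/web/build/test-results/unit/TEST-WebTest.xml

cd demo
shopt -s globstar                       # enable ** in bash
ls **/build/test-results/unit/*.xml     # lists the result files of both subprojects
```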

For test results to be rendered on the project dashboard, you’ll need to execute the build at least once, so trigger a build manually. Afterward, you’ll find a new link called Latest Test Results; click it to view statistics on your executed test suite. The test result trend charts the historical development of your test results over multiple data points; after the job has executed at least twice, a graph is rendered. Successful tests are displayed in blue and failed tests in red. Figure 13.17 shows the test result trend on the project’s dashboard.

Figure 13.17. Test result trend graph

Your unit test task is configured to produce code coverage metrics with JaCoCo. Next, you’ll show the test coverage trend alongside the unit test results.

Publishing code coverage results

Rendering JaCoCo code coverage results is provided through a third-party plugin. You already know how to install a plugin for Jenkins. Go to the plugin manager page, search for jacoco plugin, and install the plugin. After restarting Jenkins, you can add a new post-build action named Record JaCoCo Coverage Report. Figure 13.18 shows how to configure the plugin to point to the correct exec file, as well as directories that hold the class and source files.

Figure 13.18. Configuring code coverage reporting

Another useful feature of the plugin is its ability to act as a quality gate. Let’s assume you want your unit tests to cover at least 70% of all classes and methods. If your project’s code coverage falls below that threshold, Jenkins reflects it as a poor health status.

In figure 13.19 you can see the code coverage trend below the test result trend after creating at least two data points. You can directly drill into the coverage result by clicking on the graph or the menu item Coverage Trend on the left side of the project dashboard.

Figure 13.19. Code coverage trend graph

This concludes our discussion of setting up a basic build job with Jenkins. You’ll now be able to use your knowledge to build your own projects. Later in this chapter, you’ll expand your knowledge by chaining multiple build jobs to form a build pipeline. Before you do that, we’ll discuss some cloud-based CI solutions.

13.4. Exploring cloud-based solutions

Cloud-hosted CI servers deliver immediate benefits. First and foremost, you don’t need to provision your infrastructure and maintain the software. Depending on the purpose of your build, the demand for hardware resources can be high. Continuous integration in the cloud promises to provide a scalability solution when you need it. Need more CPU power to satisfy the demand of multiple, concurrent compilation build jobs? Simply scale up by purchasing a plan that provides more hardware resources. Many cloud-based CI servers directly integrate with your online repository account like GitHub. Log into your account, select a project, and start building. The following list gives an overview of some of the popular CI server solutions in the cloud with Gradle support:

  • CloudBees DEV@cloud : The DEV@cloud service (http://www.cloudbees.com/dev.cb) is a standard Jenkins server. The free version comes with limited server resources and plugin support. The paid plan gives you full access to all standard Jenkins plugins. DEV@cloud also allows you to limit the visibility of project reports and access to configuration options.
  • CloudBees BuildHive: BuildHive (https://buildhive.cloudbees.com/) is a free service that lets you build projects hosted on GitHub. The service is backed by Jenkins with a limited feature set; for example, you can’t add more Jenkins plugins or build repositories hosted outside of GitHub. Build jobs are easy to set up and provide support for verifying pull requests before you merge them. BuildHive is a good choice if you need basic compilation and testing support for open source projects.
  • Travis CI: Travis CI (https://travis-ci.org/) is a CI service suitable for open source, small-business, and enterprise projects. The service provides its own homegrown CI product that lets you build projects hosted on GitHub. Projects need to provide a configuration file checked in with the source code to indicate the language and the command you want to execute.
  • drone.io: Drone.io (https://drone.io/) lets you link your GitHub, Bitbucket, or Google Code accounts to CI build projects. In the free version, you can only build public repositories. Paid plans offer build support for private repositories as well. While reporting is limited, drone.io allows you to automatically deploy your application to environments like Heroku or AppEngine.

Choosing a hosted CI server might sound like a no-brainer. However, there are some drawbacks. Continuous integration can consume a lot of hardware resources, especially if you have to build a whole suite of applications and want quick feedback. The costs may easily spiral out of control. If you’re playing with the idea of using a cloud-based CI solution, it’s a good idea to try out the free tier first and diligently evaluate the pros and cons.

You already learned how to use Jenkins to build tasks for your To Do application. If you want to build a full pipeline that separates individual tasks into phases, you’ll need to create multiple build jobs and connect them. In the following section, you’ll learn how to achieve that with Jenkins.

13.5. Modeling a build pipeline with Jenkins

While it may be convenient to run all possible tasks of your Gradle build in a single build job, it’s hard to find the root cause of a failed build. It’s much easier to break up the build process into smaller steps with their own technical responsibility. This leads to clear separation of concerns and faster, more specific feedback. For example, if you create a step for exclusively executing integration tests and that step fails, you know two things. On the one hand, you can be certain that the source code is compilable and the unit tests ran successfully. On the other hand, the root cause for a failed integration test is either an unsuccessful test assertion or a misbehaving integration with other components of the system. In this section, you’ll model the first steps of your build pipeline, as shown in figure 13.20.

Figure 13.20. Modeling the first phases of a build pipeline

A build pipeline defines quality gates between each of the steps. Only if the result of a build step fulfills the requirements of its quality gate does the pipeline proceed to the next step. What does this mean for your example? If the suite of integration tests fails to run successfully, the pipeline won’t trigger the next build step, which performs code analysis.

13.5.1. Challenges of building a pipeline

When modeling a build pipeline, you face certain challenges that call for adequate solutions. The following list names a few very important points:

  • Every build pipeline starts with a single initial build job. During the job’s execution, the project’s source code is checked out or updated from the VCS repository. Subsequent steps will work on the same revision of the code base to avoid pulling in additional, unwanted changes.
  • A unique build number or identifier is used to clearly identify a build. This build number should be assigned by the first job of the pipeline and carried across all steps of the pipeline. Produced artifacts (for example, JAR files, reports, and documentation) incorporate the build number to clearly identify their version.
  • A deliverable artifact should only be created once. If later steps require it (for example, for deployment), it should be reused and not rebuilt. The build number is used to retrieve the artifact from a shared binary repository.
  • While many build steps are triggered automatically (for example, on a code change committed to VCS or when a previous step passed the quality gate), some of the steps need to be initiated manually. A typical example would be the deployment of an artifact to a target environment. Manual triggers are especially useful if you want to provide push-button release functionality to nontechnical stakeholders. In such a scenario, the product owner could decide when to release functionality to the end user.

At the time of writing, Jenkins doesn’t provide a standardized and easy-to-use solution to implement those needs. The good news is that you can model a full-fledged build pipeline with the help of community plugins. The next section will give you a high-level overview of their features and use cases before you use them to configure your pipeline jobs.

13.5.2. Exploring essential Jenkins plugins

Wading through the feature lists of more than 600 Jenkins plugins is no fun if you need particular functionality. The following four plugins provide you with the most essential functionality to get started with a build pipeline. Please install every one of them while following along.

Parameterized trigger plugin

Jenkins provides out-of-the-box functionality for chaining individual build jobs. All you need to do is add a new post-build action called Build Other Projects. This action allows you to define the build job name that should automatically be triggered when the current build job completes. The problem with this approach is that you can't pass parameters from one job to another, a feature you need in order to identify every build of a pipeline by its initial build number.

The Parameterized Trigger plugin extends the functionality of chaining build jobs with the ability to declare parameters for the triggered job. After installing the plugin, you can add a new post-build action named Trigger Parameterized Build on Other Projects. In the configuration section you can name the project to build, under what condition it should be triggered, and the parameters you want to pass along. Keep in mind that you can also trigger multiple jobs by declaring a comma-separated list of job names.

Let’s say you want to define a parameter named SOURCE_BUILD_NUMBER in the first step of your pipeline that indicates the initial number of a build. As the value for this parameter, you can use the built-in Jenkins parameter BUILD_NUMBER. BUILD_NUMBER is a unique number assigned to every Jenkins build job at runtime. Figure 13.21 demonstrates how to define a build trigger on the build job running your integration tests from the job definition responsible for compilation/unit tests execution.

Figure 13.21. Passing a parameter from one build job to another when triggered

In the triggered build, you can now use the parameter SOURCE_BUILD_NUMBER as an environment variable in either the build job definition or the invoked Gradle build. For example, in your Gradle build script you can directly access the value of the parameter by using the expression System.env.SOURCE_BUILD_NUMBER.
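A minimal sketch of reading the parameter in a Gradle build script, assuming the variable may be absent when the build runs outside of Jenkins:

```groovy
// Read the build number handed down by the Parameterized Trigger
// plugin; fall back to 'local' when the build runs outside Jenkins.
def sourceBuildNumber = System.env.SOURCE_BUILD_NUMBER ?: 'local'

task printSourceBuildNumber << {
    println "Pipeline build number: $sourceBuildNumber"
}
```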

If you're unsure about which parameters have been passed to a build job, install the Show Build Parameters plugin. It helps you verify parameters and their values by displaying them on the project page for a specific build.

Build name setter plugin

By default, every Jenkins job uses the expression #${BUILD_NUMBER} to display the number for a particular build. On your project page, the expression renders similar to this: Build #8 (Apr 2, 2013 6:08:44 AM). If you're dealing with multiple pipeline definitions, you may want a more expressive build name to clearly identify which pipeline a build belongs to. The Build Name Setter plugin allows you to adjust the build name expression. Figure 13.22 shows how you can add the prefix todo to the build name expression for the initial compilation/unit tests job.

Figure 13.22. Build name expression for initial job

After running a build, the name is displayed as follows: Build todo#8 (Apr 2, 2013 6:08:44 AM). We’ll expand on using the plugin’s functionality later when you model the full pipeline.

Clone workspace SCM plugin

As discussed earlier, you only want to check out the source code from the VCS repository once during the initial build job execution. Subsequent build jobs should work on the same change set. The Clone Workspace SCM plugin lets you reuse a project’s workspace in other jobs. To achieve this, you’ll need to configure the initial build job to archive the checked-out change set, as shown in figure 13.23.

Figure 13.23. Archiving the initial job workspace

In subsequent jobs, you can now select the new option Clone Workspace in the Source Code Management configuration section. Figure 13.24 demonstrates how to reuse the workspace of the parent project todo-initial in one of the subsequent build jobs.

Figure 13.24. Cloning the archived workspace in subsequent jobs

Instead of checking out the source code again, you can now build on top of the already existing workspace. This gives you access to previously created artifacts like compiled class files and project reports.

Build pipeline plugin

After chaining multiple build jobs, it’s easy to lose track of their exact order if you don’t name them appropriately. The Build Pipeline plugin provides two types of functionality. On the one hand, it offers a visualization of your whole pipeline in one single view. On the other hand, it allows you to configure a downstream build job to only execute if the user initiates it manually. This is especially useful for push-button deployment tasks. We’ll explore this functionality in chapter 15 when discussing artifact deployments.

Creating a build pipeline view of the chained tasks is simple. After installing the plugin, click the + tab in the main Jenkins dashboard to add a new view. In the rendered page, select the radio button Build Pipeline View and enter an appropriate view name, as shown in figure 13.25.

Figure 13.25. Creating a new build pipeline view

After pressing OK, you’re presented with just one more page. Select the initial build job and you’re ready to create the build pipeline view. Figure 13.26 shows an exemplary view produced by the plugin.

Figure 13.26. Build pipeline view

As shown in the figure, the pipeline consists of three build jobs. The arrows indicate the order of execution. The status of a build is indicated by the color.

Another option for creating a graphical representation of the pipeline is the Downstream Buildview plugin. Starting from the initial project, it renders a hierarchical view of downstream projects. The plugin you choose for your project is mostly a matter of taste. In this book, we’ll stick to the Build Pipeline plugin. After getting to know these plugins, you’re well equipped to build the first three steps of your pipeline.

13.5.3. Configuring the pipeline jobs

Modeling a build pipeline for your To Do application doesn't require any additional Gradle tasks. With the help of Jenkins, you'll orchestrate a sequence of build jobs that invoke your existing tasks. The full pipeline consists of three build jobs, executed in the following order:

1.  todo-initial: Compiles the source code and runs the unit tests

2.  todo-integ-tests: Runs the integration tests

3.  todo-code-quality: Performs static code analysis using Sonar

Earlier in this chapter, you set up a build job for compiling your source code and running the unit tests. With minor modifications, this job will serve as the initial step of your build pipeline. To indicate that the job is the entry point for your pipeline, you'll rename it todo-initial. Go ahead and create new free-style build jobs for steps 2 and 3 with the names mentioned above. You'll flesh them out in a moment.

Declaring Jenkins build jobs can be a repetitive and tedious task. To keep it short, I’ll stick to the most important points when explaining the configuration for each of the build steps.

Quick setup of Jenkins jobs

In its default configuration, Jenkins stores the definition of a build job in the directory ~/.jenkins/jobs on your local disk. Don't worry if you feel lost at any point while configuring your pipeline. The book's source code contains the job definition for each of the steps. All you need to do is copy the job definitions to the jobs directory and restart the server.

Step 1: Compilation and unit tests

You’ll start by making some additional tweaks to the initial build job:

  • To be able to use the same workspace in downstream projects, make sure to add the post-build action Archive for Clone Workspace SCM with the expression “**/*”.
  • Define the build name using the expression todo#${BUILD_NUMBER}.
  • Add a parameterized build action that defines a build trigger on the job running your integration tests named todo-integ-tests. You’ll also declare the downstream parameter SOURCE_BUILD_NUMBER=${BUILD_NUMBER}.

Step 2: Integration tests

The integration test build step is only triggered if step 1 is completed successfully. This is only the case if there were no compilation errors and all unit tests passed. Make the following modifications to the default job configuration:

  • In the Source Code Management configuration section, choose the option Clone Workspace and then the parent project todo-initial.
  • As the build step, you want to trigger the execution of your integration tests. Add a build step for invoking your Gradle script using the wrapper and enter the task databaseIntegrationTest.
  • You separated the test results between unit and integration tests by writing them to different directories. For publishing the test reports, use the expression **/build/test-results/integration/*.xml. For code coverage reporting, select the file **/build/jacoco/integrationTest.exec.
  • Define the build name by incorporating the upstream build number parameter: todo#${ENV,var="SOURCE_BUILD_NUMBER"}.
  • Add a parameterized build action that defines a build trigger on the job running your static code analysis named todo-code-quality. As far as parameters go, you’ll reuse the existing ones by choosing the option Current Build Parameters.
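The separation of unit and integration test results mentioned above might look like this in the Gradle build. This is a sketch assuming an integrationTest source set was configured in an earlier chapter; the names are illustrative, not prescriptive.

```groovy
task databaseIntegrationTest(type: Test) {
    // run only the integration test classes
    testClassesDir = sourceSets.integrationTest.output.classesDir
    classpath = sourceSets.integrationTest.runtimeClasspath
    // write XML results where the Jenkins job expects to find them
    testResultsDir = file("$buildDir/test-results/integration")
}
```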

Step 3: Code quality

The code quality build step forms the last step in your pipeline for now. Therefore, you don’t need to define a downstream project. You’ll expand on your pipeline in the next two chapters by adding jobs for publishing the WAR file to a repository and deploying the artifact to different runtime environments. Most of the configuration for this job looks similar to the previous job definition:

  • In the Source Code Management configuration section, choose the option Clone Workspace and then the parent project todo-initial.
  • As the build step, you want to trigger the execution of Sonar Runner to produce code quality metrics. Add a build step for invoking your Gradle script using the wrapper and enter the task sonarRunner.
  • Define the build name by incorporating the upstream build number parameter: todo#${ENV,var="SOURCE_BUILD_NUMBER"}.
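If you followed along with the code quality chapter, your build script already applies the Sonar tooling. As a reminder, a minimal configuration for the Gradle sonar-runner plugin might look like the following sketch; the host URL is an assumption for a locally running Sonar instance.

```groovy
apply plugin: 'sonar-runner'

sonarRunner {
    sonarProperties {
        // assumed local Sonar server; adjust for your environment
        property 'sonar.host.url', 'http://localhost:9000'
    }
}
```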

Perfect: you've built your first build pipeline with Jenkins by setting up a chain of build jobs. Make sure to configure at least one of the pipeline visualization plugins. It's exciting to see the job execution travel down the swim lane.

13.6. Summary

Continuous integration is a software development practice that delivers an instant payoff for your team and the project. By automatically integrating shared source code multiple times a day, you make sure that defects are discovered at the time they’re introduced. As a result, the risk of delivering low-quality software is reduced.

In this chapter, you experienced firsthand how easy it is to set up continuous integration for a project. You installed the open source CI server Jenkins on your machine and created a build job for the To Do application. In a first step, you learned how to periodically retrieve the source code from a GitHub repository and trigger a Gradle build. One of Jenkins’s strong suits is reporting. You configured your build job to display the unit test results and code coverage metrics. Hosting a Jenkins instance on a server requires hardware resources and qualified personnel to maintain it. We explored popular, cloud-hosted CI solutions and compared their advantages and disadvantages. While hosting a CI server in the cloud is convenient, it may become costly with an increasing number of build jobs and features.

A CI server is more than just a platform for compiling and testing your code. It can be used to orchestrate full-fledged build pipelines. You learned how to model such a build pipeline with Jenkins. Though Jenkins doesn’t provide a standardized pipeline implementation out of the box, you can combine the features of various community plugins to implement a viable solution. We discussed how to set up build jobs for the first three stages of a continuous delivery commit phase and tied them together.

In the next chapter, you’ll learn how to build the distribution for your project and how to publish it to private and public artifact repositories. In later chapters, you’ll extend this pipeline by creating jobs for publishing the WAR file and deploying it to a target environment.
