9 Deploying Spring Boot applications

This chapter covers

  • Running Spring Boot applications as a JAR file or deploying as a WAR file
  • Deploying Spring Boot applications to Cloud Foundry and Heroku
  • Running Spring Boot applications as Docker containers
  • Developing Spring Boot applications for Kubernetes clusters and the Red Hat OpenShift platform

Once you are done with application development and testing, you need to deploy the application to a production environment to serve its users. Spring Boot applications can be deployed through an array of deployment strategies, and you can choose one based on the application's scalability, availability, and resilience requirements.

In this chapter, we’ll introduce you to various approaches to deploying Spring Boot applications. You’ll learn traditional deployment techniques, such as running the application as an executable JAR or deploying it to an application server as a WAR. We’ll then explore deploying to Pivotal Cloud Foundry and Heroku. Later, we’ll learn how to run Spring Boot applications as Docker containers and deploy them to a Kubernetes cluster. Finally, we’ll show how to deploy the application to Red Hat OpenShift. Let’s get started.

Developing various types of applications with the Spring Boot framework is a popular choice among developers and organizations. Due to the framework’s flexibility, ease of use, and popularity, it is often used to develop a diverse range of applications, such as web applications, REST APIs, and microservices. Some of these applications are small and target a limited number of users, whereas others are complex and serve a broad range of users across multiple geographies. The deployment strategy for the first category is straightforward; the latter category requires a more sophisticated and thoughtful deployment model.

To meet the needs of all these categories, Spring Boot supports a wide range of deployment techniques. You can package your Spring Boot application as an executable JAR and run it without any application server, because Spring Boot provides built-in support for several embedded web servers. Similarly, if you need to package your application as a WAR file and deploy it to an application server, Spring Boot has built-in support to prepare the WAR file. As you’ll explore shortly, it is straightforward to package your Spring Boot application as a WAR file without defining a web.xml or other configuration.

Deploying an application through a JAR or WAR file has a prerequisite: you need to build a package for your application. Pivotal Cloud Foundry (PCF; https://www.cloudfoundry.org/) offers an alternative approach in which you supply your source code directly, and PCF performs the required steps to deploy the application. Similarly, if you don’t have on-premises infrastructure, you can leverage cloud providers, such as AWS, Azure, Google Cloud Platform (GCP), and Heroku, to deploy your packaged application. In this chapter, we’ll demonstrate how to deploy your application on Heroku.

Further, if you need to run your application as a container, Spring Boot provides built-in support to generate a container image for your application. You can then use the image to run your application locally or deploy it to cloud environments. If you need scalable, highly available, and fault-tolerant applications, you can deploy your application to Kubernetes. In this chapter, we’ll demonstrate how to deploy a Spring Boot application to Kubernetes and Red Hat OpenShift.

Note How an application is deployed and served to end users is a business decision driven by multiple factors, such as application performance, availability, scalability, resilience, compliance needs, and so on. Thus, there are plenty of deployment techniques and strategies available, and many toolkits and platforms exist to facilitate these diverse deployment needs. In this book, we focus on deploying Spring Boot applications to popular, commonly used platforms. Due to the vastness of this subject, it is beyond the scope of this text to provide an in-depth discussion of each technology and platform. However, we’ll provide additional references for the specific technology or platform wherever possible and cover the setup steps (if any) in the book’s companion GitHub wiki.

9.1 Running Spring Boot applications as executable JAR files

Previously, you’ve seen that you can package a Spring Boot application as an executable JAR file and execute it on a local machine or server. In this section, we’ll explore this step in detail.

9.1.1 Technique: Packaging and executing a Spring Boot application as an executable JAR file

In this technique, we’ll demonstrate how to package and execute a Spring Boot application as an executable JAR file.

Problem

You have developed a Spring Boot application and need to execute it as an executable JAR file.

Solution

Once you are done with application development, you need to run the application to see it in action. Spring Boot provides several options to deploy and run the application. In this technique, we’ll explore Spring Boot’s built-in approach to packaging the application as an executable JAR file and running it. This is one of the most popular ways to package and run a Spring Boot application.

To demonstrate how to package the application components and run the application as an executable JAR file, we’ll use the Course Tracker Spring Boot application we’ve developed in the earlier chapters.

Source code

The final version of the Spring Boot project is available at http://mng.bz/oa7Z.

To package the application as an executable JAR file, you need to ensure the following two things:

  1. The packaging type in the pom.xml file needs to be set to jar. This ensures the application components are packaged as a JAR.

  2. Configure the spring-boot-maven-plugin in the plugins section of the pom.xml file, as shown in the following listing.

Listing 9.1 The Spring Boot Maven plugin

<plugin>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-maven-plugin</artifactId>
</plugin>

The spring-boot-maven-plugin prepares the executable JAR file when the Maven package goal is executed. We’ll discuss more on this in the discussion section.
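If your project does not inherit from spring-boot-starter-parent, the repackage goal is not bound automatically, and you need to declare the execution yourself. A minimal sketch of that explicit configuration (using the 2.6.3 version referenced elsewhere in this chapter) looks like this:

<plugin>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-maven-plugin</artifactId>
    <version>2.6.3</version>
    <executions>
        <execution>
            <goals>
                <!-- Binds to the Maven package phase and produces the executable JAR -->
                <goal>repackage</goal>
            </goals>
        </execution>
    </executions>
</plugin>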

Open a terminal window, and browse to the location of the pom.xml file. Next, execute the mvn package command to build and package the application components. This compiles the application and packages it as a JAR file. The following listing shows the output.

Listing 9.2 The mvn package command

$course-tracker-app\target>mvn package
[INFO] Scanning for projects...
[INFO]
[INFO] ------------< com.manning.sbip.ch09:course-tracker-app-jar >------------
[INFO] Building course-tracker-app-jar 1.0.0
[INFO] --------------------------------[ jar ]---------------------------------
[INFO]
[INFO] --- maven-resources-plugin:3.2.0:resources (default-resources) @
 course-tracker-app-jar ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Using 'UTF-8' encoding to copy filtered properties files.
[INFO] Copying 1 resource
[INFO] Copying 7 resources
[INFO]
[INFO] --- maven-compiler-plugin:3.8.1:compile (default-compile) @
 course-tracker-app-jar ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 6 source files to C:\sbip\repo\ch09\course-tracker-app-
 jar\target\classes
[INFO]
[INFO] --- maven-resources-plugin:3.2.0:testResources (default-
 testResources) @ course-tracker-app-jar ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Using 'UTF-8' encoding to copy filtered properties files.
[INFO] skip non existing resourceDirectory C:\sbip\repo\ch09\course-
 tracker-app-jar\src\test\resources
[INFO]
[INFO] --- maven-compiler-plugin:3.8.1:testCompile (default-testCompile) @
 course-tracker-app-jar ---
[INFO] Changes detected - recompiling the module!
[INFO]
[INFO] --- maven-surefire-plugin:2.22.2:test (default-test) @ course-
 tracker-app-jar ---
[INFO]
[INFO] --- maven-jar-plugin:3.2.0:jar (default-jar) @ course-tracker-app-
 jar ---
[INFO] Building jar: C:\sbip\repo\ch09\course-tracker-app-
 jar\target\course-tracker-app-jar-1.0.0.jar
[INFO]
[INFO] --- spring-boot-maven-plugin:2.5.3:repackage (repackage) @ course-
 tracker-app-jar ---
[INFO] Replacing main artifact with repackaged archive
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------

After successful packaging, you’ll find a target directory created in the same location as the pom.xml file. This target directory contains the executable JAR file. By default, the name of the JAR file is <artifactId>-<version>.jar. In our example, the JAR file name is course-tracker-app-jar-1.0.0.jar. You can execute this JAR file using the java -jar <jarName> command from the target directory in your terminal, as shown in the following listing.

Listing 9.3 Executing the Spring Boot executable JAR file

$course-tracker-app\target>java -jar course-tracker-app-jar-1.0.0.jar

You’ll notice the application starts up and initializes successfully. In this example, the application starts on the default HTTP port 8080. Open a Web browser and access http://localhost:8080, and you’ll see the Course Tracker application index page.

Discussion

In this section, we discussed how to create and run an executable JAR file from your Spring Boot application. In chapter 1, sections 1.3.3 and 1.3.4, we briefly discussed how the JAR file is created and explored the structure of the JAR file. We discussed that the repackage goal of spring-boot-maven-plugin hooks in at the Maven package phase and prepares the executable JAR file. Previously, we discussed that Spring Boot projects have a parent POM called spring-boot-starter-parent. This POM file includes the necessary configuration to define the repackage goal. Further, in the same target directory, you’ll notice that there is another JAR file with naming format <artifactId>-<version>.jar.original. In our example, this JAR name is course-tracker-app-jar-1.0.0.jar.original. This is the original JAR file prepared by Maven. Note that this is not an executable JAR. The contents of this JAR file are subsequently packaged by the spring-boot-maven-plugin to create the executable JAR file. The following listing shows the structure of the Spring Boot-packaged JAR file.

Listing 9.4 Spring Boot-packaged JAR file structure

course-tracker-app-jar-1.0.0.jar
  |
  +-META-INF
  |  +-MANIFEST.MF
  +-org
  |  +-springframework
  |    +-boot
  |      +-loader
  |        +-<spring boot loader classes>     
  +-BOOT-INF
    +-classes
    |  +-com
    |    +-manning
    |      +-sbip
    |        +-ch09
    |          +-CourseTrackerSpringBootApplication.class
    +-lib                                    
    | +-dependency1.jar
    | +-dependency2.jar
    +-classpath.idx
    +-layers.idx

These loader classes are used to launch a Spring Boot application.

Third-party libraries required for the Spring Boot application to run (e.g., Spring JARs, logging JARs, etc.)

The META-INF folder contains the MANIFEST.MF manifest file. A manifest is a special file that contains meta-information about the files packaged in the JAR file. The following listing shows the sample contents of a manifest file.

Listing 9.5 The MANIFEST.MF file for the Course Tracker JAR file

Manifest-Version: 1.0
Created-By: Maven Jar Plugin 3.2.0
Build-Jdk-Spec: 17
Implementation-Title: course-tracker-app-jar
Implementation-Version: 1.0.0
Main-Class: org.springframework.boot.loader.JarLauncher
Start-Class: com.manning.sbip.ch09.CourseTrackerSpringBootApplication
Spring-Boot-Version: 2.6.3
Spring-Boot-Classes: BOOT-INF/classes/
Spring-Boot-Lib: BOOT-INF/lib/
Spring-Boot-Classpath-Index: BOOT-INF/classpath.idx
Spring-Boot-Layers-Index: BOOT-INF/layers.idx

Listing 9.5 contains various meta-information about the JAR file. The Main-Class property contains the org.springframework.boot.loader.JarLauncher class, which is the entry point for the execution of the JAR. The Start-Class property contains the actual Spring Boot application class that begins the initialization of the Spring Boot application. The JarLauncher class launches the class specified in the Start-Class property.

The application-specific class files are packaged inside the BOOT-INF/classes folder, and the dependencies are packaged inside the BOOT-INF/lib folder. The latter contains the third-party libraries required by the Spring Boot application to function.

In addition, the JAR includes two index files: classpath.idx and layers.idx. The classpath.idx file contains a list of JAR names (including the directories) in the order they should be added to the classpath.
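If you want to inspect the manifest or these index files without unpacking the whole archive, one option (assuming a standard unzip utility is available on your machine) is to print them directly from the JAR:

unzip -p target/course-tracker-app-jar-1.0.0.jar META-INF/MANIFEST.MF
unzip -p target/course-tracker-app-jar-1.0.0.jar BOOT-INF/classpath.idx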

The layers.idx file contains a list of layers and the parts of the JAR that belong to each of them. The layers play a crucial role if you need to build a Docker image from the contents of the JAR file: when such an image is created, these layers can be written into separate layers of the Docker image. We’ll discuss this in greater depth when we cover creating a Docker image of a Spring Boot application.

By default, Spring Boot defines the following layers:

  • dependencies—Contains all dependencies with a version that does not contain SNAPSHOT.

  • spring-boot-loader—Spring Boot loader classes. For instance, the JarLauncher class is part of this layer.

  • snapshot-dependencies—Contains all dependencies with a version that contains SNAPSHOT.

  • application—Contains application classes and resources.
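To make this concrete, the layers.idx file maps each layer name to the parts of the archive that belong to it. For an application like Course Tracker, with no SNAPSHOT dependencies, it would look roughly like the following (the exact entries depend on your build):

- "dependencies":
  - "BOOT-INF/lib/"
- "spring-boot-loader":
  - "org/"
- "snapshot-dependencies":
- "application":
  - "BOOT-INF/classes/"
  - "BOOT-INF/classpath.idx"
  - "BOOT-INF/layers.idx"
  - "META-INF/"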

The last thing we’ll discuss in this section is how to view and extract the aforementioned layers using the layertools JAR mode. Previously, you saw that you can execute the executable JAR via the java -jar <jarName> command. You can add the -Djarmode=layertools flag to view the layertools options. The following listing shows the use of layertools.

Listing 9.6 Using layertools JAR mode

$course-tracker-app\target>java -Djarmode=layertools -jar course-tracker-
 app-jar-1.0.0.jar
Usage:
  java -Djarmode=layertools -jar course-tracker-app-jar-1.0.0.jar
 
Available commands:
  list     List layers from the jar that can be extracted
  extract  Extracts layers from the jar for image creation
  help     Help about any command

The layertools mode provides three commands: list, extract, and help, with help being the default. When you execute the command, the JarLauncher class is invoked, as it is the entry point of the JAR execution. However, because the jarmode flag is set, instead of starting the application it executes one of the available layertools commands. These commands are provided by another launcher, JarModeLauncher, which is used whenever we invoke java -jar with -Djarmode=layertools.

Further, by default Spring Boot packages the layers.idx file. When an executable JAR with this file is created, Spring Boot automatically packages the spring-boot-jarmode-layertools JAR as well. This JAR includes the LayerToolsJarMode class, which provides the necessary support for the layertools jarmode feature. Let’s now discuss the use of the list and extract commands with layertools jarmode. The following listing shows the use of the list command.

Listing 9.7 Use of list command in jarmode layertools to view the layers

$course-tracker-app\target> java -Djarmode=layertools -jar course-tracker-
 app-jar-1.0.0.jar list
dependencies
spring-boot-loader
snapshot-dependencies
application

Listing 9.7 shows the layers present inside the course-tracker-app-jar-1.0.0.jar file. You can extract these layers into the file system using the extract command, as shown in the following listing.

Listing 9.8 Use of extract command in jarmode layertools to extract the layers in the file system

$course-tracker-app\target>java -Djarmode=layertools -jar course-tracker-
 app-jar-1.0.0.jar extract --destination layers
 
C:\sbip\repo\ch09\course-tracker-app-jar\target>dir layers
 Volume in drive C is OS
 Volume Serial Number is 8EF3-F5B9
 
 Directory of C:\sbip\repo\ch09\course-tracker-app-jar\target\layers
 
04/03/2022  01:20 PM    <DIR>          .
04/03/2022  01:20 PM    <DIR>          ..
04/03/2022  01:20 PM    <DIR>          application
04/03/2022  01:20 PM    <DIR>          dependencies
04/03/2022  01:20 PM    <DIR>          snapshot-dependencies
04/03/2022  01:20 PM    <DIR>          spring-boot-loader

In listing 9.8, we first used the extract command and specified a destination folder called layers to extract the layers into. We then used the dir command to show the created directories. If you browse these directories, you’ll notice that the contents of the course-tracker-app-jar-1.0.0.jar JAR are extracted into these folders.

If you are wondering why these layers are needed and why we are discussing them in this section, wait until we demonstrate creating Docker images for Spring Boot applications: you’ll see that these layers help us build an optimized Docker image. Because we discussed the creation and structure of the executable JAR in this section, we covered the layers here for continuity.

9.2 Deploying Spring Boot applications as WAR in the WildFly application server

In the previous section, we explored how to package Spring Boot application components into an executable JAR and run it. Although that works fine, at times you need to package your application components into a WAR file and deploy it to a web or application server.

Before containerization and Kubernetes, deploying applications to web or application servers was the de facto standard. Application servers offer many enterprise features that developers and application architects can leverage when planning deployment strategies. For instance, most application servers provide support for database connections, session replication, sticky sessions, clustering, and more. In application server-based deployments, it is common to deploy the same application to multiple server instances and use a load balancer to distribute incoming requests among them.

Figure 9.1 shows a high-level diagram of application server clustering used to deploy Spring Boot applications. This cluster deployment provides capabilities such as load balancing and high availability. Note that we’ve provided this design to give you a high-level understanding and help you visualize how typical application server-based production deployments work.

Figure 9.1 Deploying a Spring Boot application in an application server cluster. The user request is received by a load balancer that sits in front of the application servers. Based on the load balancer configuration, the request is routed to one of the application server instances, and a response is provided back to the user.

In the following section, you’ll learn how to package your application as a WAR file and deploy it into a standalone WildFly server (https://www.wildfly.org/). WildFly is the community edition of the popular Red Hat JBoss Enterprise Application Platform server and is available free of cost.

9.2.1 Technique: Packaging and deploying a Spring Boot application as WAR in the WildFly application server

In this technique, we’ll discuss how to package a Spring Boot application as a WAR file and deploy it to the WildFly application server.

Problem

You have developed a Spring Boot application and need to package it as a WAR file and deploy it in the WildFly application server.

Solution

In this section, we’ll demonstrate how to package a Spring Boot application and deploy it in the WildFly server (https://www.wildfly.org/). You can refer to the version-specific installation document available at https://docs.wildfly.org/. To demonstrate how to package the application components as a WAR file and deploy it in the WildFly application server, we’ll use the Course Tracker Spring Boot application we developed in the earlier chapters.

Source code

The final version of the Spring Boot project is available at http://mng.bz/nY75.

To package the application components as a WAR file, you need to make two changes:

  1. In the pom.xml file, the packaging type should be war, as shown in the following listing.

    Listing 9.9 Package type as WAR type in pom.xml file

    ...
    <groupId>com.manning.sbip.ch09</groupId>
    <artifactId>course-tracker-app-war</artifactId>
    <version>1.0.0</version>
    <packaging>war</packaging>
    <name>course-tracker-app-war</name>
    ...
  2. Define an instance of a WebApplicationInitializer to run the application from a WAR deployment. The WebApplicationInitializer allows us to configure the ServletContext programmatically in a Servlet 3.0+ environment. If you create your Spring Boot application through Spring Initializr (available at https://start.spring.io) with the packaging type set to war, Spring Boot provides a class called ServletInitializer by default. This class extends the SpringBootServletInitializer class, which is an implementation of WebApplicationInitializer. The SpringBootServletInitializer class is an opinionated WebApplicationInitializer implementation provided by Spring Boot to run a Spring Boot application from a WAR deployment. If you are not creating your Spring Boot application from Spring Initializr, you have to perform this step manually.

The following listing shows the ServletInitializer class.

Listing 9.10 The ServletInitializer class

package com.manning.sbip.ch09;
 
import org.springframework.boot.builder.SpringApplicationBuilder;
import
 org.springframework.boot.web.servlet.support.SpringBootServletInitial
 izer;
 
public class ServletInitializer extends SpringBootServletInitializer {
 
  @Override
  protected SpringApplicationBuilder configure(SpringApplicationBuilder
 application) {
    return application.sources(CourseTrackerSpringBootApplication.class);
  }
 
}

In listing 9.10, we added the CourseTrackerSpringBootApplication class as a source to the SpringApplicationBuilder. Later on, this SpringApplicationBuilder is used to build an instance of SpringApplication, which is run to start the Spring Boot application.
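For comparison, the application class that the builder points to is the standard Spring Boot main class used for executable JAR runs. A minimal sketch of it (the actual class in the repository may contain additional configuration) looks like this:

package com.manning.sbip.ch09;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class CourseTrackerSpringBootApplication {

    public static void main(String[] args) {
        // Used when running the application as an executable JAR; the
        // ServletInitializer path is used instead for WAR deployments.
        SpringApplication.run(CourseTrackerSpringBootApplication.class, args);
    }
}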

Next, let’s exclude the logback-classic dependency from the spring-boot-starter-web dependency in the pom.xml file, as shown in the following listing.

Listing 9.11 Excluding the logback-classic dependency from spring-boot-starter-web

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
    <exclusions>
        <exclusion>
            <groupId>ch.qos.logback</groupId>
            <artifactId>logback-classic</artifactId>
        </exclusion>
    </exclusions>
</dependency>

We excluded this dependency because it conflicts with the slf4j-jboss-logmanager-1.1.0.Final.jar shipped with the WildFly server. Next, let’s set the context root of the application to “/”. The following listing shows the associated configuration in the jboss-web.xml file, located in the src/main/webapp/WEB-INF folder.

Listing 9.12 The jboss-web.xml file

<?xml version="1.0" encoding="UTF-8"?>
<jboss-web>
    <context-root>/</context-root>
</jboss-web>

We are done with all the configurations. Let’s now package the application and deploy it to the WildFly server. To package the application, execute the mvn package command from a terminal in the directory where the application’s pom.xml file is located. After a successful build, you’ll notice that the application is packaged as a WAR file. You can deploy this WAR file to the WildFly server.

Before starting the deployment, you need to ensure an instance of the WildFly application server is running. You can then open a browser window and access the http://localhost:9990 URL; you’ll see the WildFly server management console. Click the Deployments menu and then the Upload Deployment button, as shown in figure 9.2.

Figure 9.2 WildFly server Upload Deployment screen to upload a deployment

In the next window, upload the previously generated WAR file (e.g., course-tracker-app-war-1.0.0.war) from the target directory, click the Next button, and then click the Finish button on the next screen. After successful deployment, you’ll see a success message, as shown in figure 9.3.

Figure 9.3 The Course Tracker WAR file uploaded successfully into the server. This indicates the application deployed successfully and can be accessed.

Click on the Close button, and the Course Tracker application is ready to be accessed. Let’s open a browser window and access the http://localhost:8080 URL. You’ll notice the index page of the Course Tracker application, as shown in figure 9.4.

Figure 9.4 The Course Tracker application index page. This page is served by the WildFly server.

If you are performing frequent deployments and need to automate the deployment process, you can use the wildfly-maven-plugin Maven plugin to automatically deploy the generated WAR file.

Source code

The final version of the Spring Boot project with wildfly-maven-plugin is available at http://mng.bz/44JV.

To use the wildfly-maven-plugin, you need to add the associated configuration to the Course Tracker pom.xml file. The following listing shows the updated pom.xml file.

Listing 9.13 Updated pom.xml file with wildfly-maven-plugin configuration

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
 xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
 https://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>2.6.3</version>
        <relativePath/>
        <!-- lookup parent from repository -->
    </parent>
    <groupId>com.manning.sbip.ch09</groupId>
    <artifactId>course-tracker-app-war-mvn-plugin</artifactId>
    <version>1.0.0</version>
    <packaging>war</packaging>
    <name>course-tracker-app-war-mvn-plugin</name>
    <description>Spring Boot application for Chapter 09</description>
    <properties>
        <java.version>17</java.version>
     <wildfly.deploy.user>${ct.deploy.user}</wildfly.deploy.user>
     <wildfly.deploy.pass>${ct.deploy.pass}</wildfly.deploy.pass>
     <plugin.war.warName>${project.build.finalName}</plugin.war.warName>         
    </properties>
    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
            <exclusions>
                <exclusion>
                    <groupId>ch.qos.logback</groupId>
                    <artifactId>logback-classic</artifactId>
                </exclusion>
            </exclusions>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-tomcat</artifactId>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-data-jpa</artifactId>
        </dependency>
        <dependency>
            <groupId>com.h2database</groupId>
            <artifactId>h2</artifactId>
            <scope>runtime</scope>
        </dependency>
        <dependency>
            <groupId>org.projectlombok</groupId>
            <artifactId>lombok</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-validation</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-thymeleaf</artifactId>
        </dependency>
        <dependency>
            <groupId>org.webjars</groupId>
            <artifactId>bootstrap</artifactId>
            <version>4.4.1</version>
        </dependency>
        <dependency>
            <groupId>org.webjars</groupId>
            <artifactId>jquery</artifactId>
            <version>3.4.1</version>
        </dependency>
        <dependency>
            <groupId>org.webjars</groupId>
            <artifactId>webjars-locator</artifactId>
            <version>0.38</version>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
            <exclusions>
                <exclusion>
                    <groupId>org.junit.vintage</groupId>
                    <artifactId>junit-vintage-engine</artifactId>
                </exclusion>
            </exclusions>
        </dependency>
    </dependencies>
    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
            <plugin>
                <groupId>org.wildfly.plugins</groupId>
                <artifactId>wildfly-maven-plugin</artifactId>
                <version>2.1.0.Beta1</version>
                <configuration>
                    <hostname>localhost</hostname>
                    <port>9990</port>      
                    <username>${wildfly.deploy.user}</username>
                    <password>${wildfly.deploy.pass}</password>                  
                    <name>${project.build.finalName}.${project.packaging}</name>
                </configuration>
                <executions>
                    <execution>
                        <id>undeploy</id>
                        <phase>clean</phase>
                        <goals>
                            <goal>undeploy</goal>
                        </goals>
                        <configuration>
                            <ignoreMissingDeployment>true</ignoreMissingDeployment>
                        </configuration>
                    </execution>
                    <execution>
                        <id>deploy</id>
                        <phase>install</phase>
                        <goals>
                            <goal>deploy</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>
</project>

Properties defined in the properties section of this pom.xml

To deploy the Spring Boot application WAR file through the plugin, you need to configure the server hostname, port, username, password, and the name of the WAR file to deploy. We’ve also defined two execution configurations: one that deploys the WAR in the Maven install phase and one that undeploys it in the Maven clean phase. For security reasons, we haven’t configured the username and password in the pom.xml file; instead, we refer to them from the Maven settings.xml file. The following listing shows the Maven settings.xml profile configuration.

Listing 9.14 Maven settings.xml profile configuration inside profiles section

...
<profile>
      <id>course-tracker-prod</id>
          <activation>
            <activeByDefault>true</activeByDefault>
          </activation>
          <properties>
            <ct.deploy.user>user</ct.deploy.user>
            <ct.deploy.pass>password</ct.deploy.pass>
          </properties>
    </profile>
...

We refer to the ct.deploy.user and ct.deploy.pass properties in the pom.xml properties configuration in listing 9.13 so that the username and password can be used by the wildfly-maven-plugin to perform the deploy and undeploy operations.

Open a terminal window, browse to the directory containing the course-tracker-app-war-mvn-plugin application’s pom.xml file, and execute the mvn install command. You’ll notice that the application is deployed successfully. The following listing shows the mvn install command’s output.

Listing 9.15 The mvn install command output for successful deployment

...
...
[INFO] --- spring-boot-maven-plugin:2.5.3:repackage (repackage) @ course-
 tracker-app-war-mvn-plugin ---
[INFO] Replacing main artifact with repackaged archive
[INFO]
[INFO] <<< wildfly-maven-plugin:2.1.0.Beta1:deploy (deploy) < package @
 course-tracker-app-war-mvn-plugin <<<
[INFO]
[INFO]
[INFO] --- wildfly-maven-plugin:2.1.0.Beta1:deploy (deploy) @ course-
 tracker-app-war-mvn-plugin ---
[INFO] JBoss Threads version 2.3.3.Final
[INFO] JBoss Remoting version 5.0.12.Final
[INFO] XNIO version 3.7.2.Final
[INFO] XNIO NIO Implementation Version 3.7.2.Final
[INFO] ELY00001: WildFly Elytron version 1.9.1.Final
[INFO] --------------------------------------------------------------------
[INFO] BUILD SUCCESS

You can now open a browser window and access the http://localhost:8080/ URL to use the Course Tracker application. You’ll notice the Course Tracker application index page. If you need to undeploy the application, you can execute the mvn clean command, and the application will be undeployed, as shown in the following listing.

Listing 9.16 The mvn clean command to undeploy the deployed WAR file

$course-tracker-app\target>mvn clean
[INFO] Scanning for projects...
[INFO]
[INFO] ------< com.manning.sbip.ch09:course-tracker-app-war-mvn-plugin >---
[INFO] Building course-tracker-app-war-mvn-plugin 1.0.0
[INFO] --------------------------------[ war ]-----------------------------
[INFO]
[INFO] --- maven-clean-plugin:3.1.0:clean (default-clean) @ course-tracker-
 app-war-m
[INFO] Deleting C:\sbip\repo\ch09\course-tracker-app-war-mvn-plugin\target
[INFO]
[INFO] --- wildfly-maven-plugin:2.1.0.Beta1:undeploy (undeploy) @ course-
 tracker-app
[INFO] JBoss Threads version 2.3.3.Final
[INFO] JBoss Remoting version 5.0.12.Final
[INFO] XNIO version 3.7.2.Final
[INFO] XNIO NIO Implementation Version 3.7.2.Final
[INFO] ELY00001: WildFly Elytron version 1.9.1.Final
[INFO] --------------------------------------------------------------------
[INFO] BUILD SUCCESS

Discussion

With this technique, you’ve learned to deploy a Spring Boot application to an application server. We’ve discussed two approaches to achieve this. In the first approach, you built the WAR file with the mvn package command and then manually deployed the WAR file via the application server’s Web interface. With the second approach, you used the wildfly-maven-plugin to automatically deploy the generated WAR file to the application server.

Now that you’ve explored both approaches, you may wonder which one is better. We recommend the wildfly-maven-plugin-based approach, as it enables a more automated deployment and requires less manual intervention.

9.3 Deploying Spring Boot applications in Cloud Foundry

In the previous sections, we discussed two traditional approaches to packaging and deploying a Spring Boot application: as a JAR file and as a WAR file. In this section, we’ll look into an alternative deployment approach using Cloud Foundry.

Note Cloud Foundry provides a much more straightforward model to build, test, and deploy applications. As you’ll notice shortly, Cloud Foundry allows you to push your source code to the Cloud Foundry server, which performs the build and deployment and makes the application available to the end users. Cloud Foundry is a large topic and offers several features, so it is beyond the scope of this text to provide in-depth coverage of it. Refer to the Cloud Foundry documentation available at https://docs.cloudfoundry.org/ for more information.

These days, cloud platforms allow us to deploy applications and make them available across the globe in a short period of time. Cloud platforms also allow us to scale an application on demand without worrying much about the underlying infrastructure and its scalability. Figure 9.5 shows the various layers of technology stacks used in an application.

Figure 9.5 Layers of technology stacks required by an application. In traditional IT, all layers of infrastructure are managed by you. In the IaaS model, the core infrastructure is delivered as a service. In the PaaS model, only the application and data need to be managed by you, and the rest of all layers are delivered as a service. In the SaaS model, all layers are delivered as a service. We’ve highlighted the PaaS model, as Cloud Foundry belongs to this model.

Cloud Foundry belongs to the platform-as-a-service model, where only the data and application are managed by you, and all remaining layers are managed by Cloud Foundry. But what is Cloud Foundry in the first place? It is an open-source cloud application platform that lets you select the underlying cloud infrastructure you want to use, supports several developer frameworks, and provides additional application services. One of the major benefits of Cloud Foundry over traditional deployments is that it makes application building, testing, deployment, and scaling faster and easier. In the next technique, we’ll explore how to deploy a Spring Boot application to Cloud Foundry.

9.3.1 Technique: Deploying a Spring Boot application to Cloud Foundry

In this technique, we’ll discuss how to deploy a Spring Boot application to Cloud Foundry.

Problem

Your Spring Boot application is currently running as a standalone JAR file on a Unix server. You need to deploy it to a cloud platform through Cloud Foundry.

Solution

Using this technique, we’ll explore how to deploy a Spring Boot application to the Cloud Foundry cloud platform. To deploy your application to Cloud Foundry, you need a Cloud Foundry instance. You can run Cloud Foundry yourself, use a company-provided Cloud Foundry instance, or use a hosted solution. Several hosted solutions are available, such as anynines (https://paas.anynines.com/) and SAP (http://mng.bz/vo7p), which provide trial versions of a Cloud Foundry instance. In this technique, we’ll use the SAP Cloud Foundry instance. You can browse to the SAP link and follow the steps to set up your trial account.

Source code

The final version of the Spring Boot project is available at http://mng.bz/4jNR.

Once you are done with the Cloud Foundry instance setup, you need to install the Cloud Foundry command-line interface (CLI). You’ll use this CLI tool to interact with the Cloud Foundry instance; it runs in a terminal window and makes REST calls to the Cloud Foundry API. Browse to https://github.com/cloudfoundry/cli#downloads to install the CLI on your computer. Once the installation completes successfully, run the cf version command from your terminal; it should return the installed Cloud Foundry CLI version.

The next step is to log in to the Cloud Foundry instance, which you can do using the cf login command. The following listing shows the complete login command.

Listing 9.17 Cloud Foundry login

cf login -a <CLOUDFOUNDRY_API_ENDPOINT> -u <USERNAME>

The CLOUDFOUNDRY_API_ENDPOINT is the Cloud Foundry instance URL. If you are using SAP, you’ll find this on the SAP account page. The USERNAME is your login ID. For SAP, this is the email ID of the SAP account you just created.

Invoking the command in listing 9.17 with the API endpoint and the username will prompt you to enter the password. Enter your SAP account login password. The following listing shows a sample command and the associated output.

Listing 9.18 Login to Cloud Foundry

cf login -a https://api.cf.eu10.hana.ondemand.com/ -u ****@gmail.com
API endpoint: https://api.cf.eu10.hana.ondemand.com/
 
Password:
 
Authenticating...
OK
 
Targeted org 6****986trial.
 
Targeted space dev.
 
API endpoint:   https://api.cf.eu10.hana.ondemand.com
API version:    3.102.0
user:           ****@gmail.com
org:            6****86trial
space:          dev

Next, let’s build the Course Tracker Spring Boot application using the mvn clean install command. We’ll push the generated JAR file to the Cloud Foundry instance. Instead of pushing the raw JAR file directly, we’ll define a manifest.yml file in the application root directory so the Cloud Foundry CLI can read it and perform the deployment. The following listing shows the manifest.yml file.

Listing 9.19 The manifest.yml file to deploy into Cloud Foundry

applications:
- name: course-tracker-app-cf
  instances: 1
  memory: 1024M
  path: target/course-tracker-app-cf-1.0.0.jar
  random-route: true
  buildpacks:
  - java_buildpack

This is a relatively simple configuration file with minimal details. We’ve specified the application name, the number of instances required, the memory to allocate, and the path to the application executable. The random-route setting instructs Cloud Foundry to assign a random route to the deployed application. The buildpacks configuration tells Cloud Foundry to use the Java buildpack to run the application. You can now run the cf push command from the application root directory (where the manifest.yml file is located) to start the deployment, as shown in the following listing.

Listing 9.20 Cloud Foundry push command to start deployment

cf push

The command takes a while to upload the artifacts and begin the deployment. Once the command returns, you can execute the cf apps command to find the running application and its associated URL. The following listing shows a sample output of the cf apps command.

Listing 9.21 Sample output of the cf apps command

> cf apps
Getting apps in org 6****986trial / space dev as ****@gmail.com...
 
name                    requested state   processes           routes
course-tracker-app-cf   started           web:1/1, task:0/0   course-
 tracker-app-cf-active-genet-qh.cfapps.eu10.hana.ondemand.com

In the preceding example, course-tracker-app-cf-active-genet-qh.cfapps.eu10.hana.ondemand.com is the application route (URL). In your case, you might see a different route name. You can copy the route and access the URL in a browser window. You’ll notice that you are redirected to the Course Tracker application index page.

Discussion

With this technique, we’ve demonstrated how to deploy your Spring Boot application to Cloud Foundry. To keep things simple, we’ve used the Course Tracker application with an in-memory database. In a production application, you’ll also have other application components, such as databases, messaging, caching, and others.

Depending on your Cloud Foundry service provider, you can use the offerings it provides. To find the list of offerings, execute the cf marketplace command; it returns the available services and their details. Based on your needs, you can enable one or more services. To learn more about a service offering, execute the cf marketplace -e <SERVICE_OFFERING> command, replacing the SERVICE_OFFERING placeholder with the actual service name.

To create a new service instance, you can use the cf create-service <SERVICE> <SERVICE_PLAN> <SERVICE_INSTANCE> command. Further, you can find the list of created service instances by invoking the cf services command, and you can bind a service to your application using the cf bind-service <APP_NAME> <SERVICE_INSTANCE> command; a sample sequence follows.
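For example, a hypothetical sequence that provisions a PostgreSQL service instance and binds it to the Course Tracker application might look like the following. The offering name postgresql, the plan name trial, and the instance name ct-postgres are placeholders; the actual values depend on your provider’s marketplace:

# Browse the available service offerings and the plans of one offering
cf marketplace
cf marketplace -e postgresql

# Create a service instance named ct-postgres and bind it to the application
cf create-service postgresql trial ct-postgres
cf bind-service course-tracker-app-cf ct-postgres

# Restage the application so it picks up the new binding
cf restage course-tracker-app-cf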

Lastly, once you have the services defined, you may need to access service-specific environment variables. For instance, if you’ve created a database, you need the database URL, username, password, and so on to connect to and access it. Spring provides the CloudFoundryVcapEnvironmentPostProcessor (http://mng.bz/QWO6) class, which takes all the Cloud Foundry environment variables and exposes them through the Spring Environment. If you have configured spring-boot-starter-actuator and enabled the env actuator endpoint, you’ll find the Cloud Foundry properties through the /actuator/env endpoint. You can also refer to the java-cfenv library (https://github.com/pivotal-cf/java-cfenv) for more information on using Cloud Foundry environment variables.
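If you go the Actuator route, keep in mind that the env endpoint is not exposed over HTTP by default. A minimal application.properties sketch to expose it (be mindful of the sensitive values it can reveal) would be:

# Expose the env actuator endpoint over HTTP in addition to the defaults
management.endpoints.web.exposure.include=health,env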

9.4 Deploying Spring Boot applications in Heroku

In the previous section, you saw how to deploy an application to Cloud Foundry. In this section, we’ll discuss deploying a Spring Boot application to Heroku (https://www.heroku.com/). Heroku is another PaaS solution that allows you to build, run, and operate applications in the cloud. It can run applications written in Ruby, Node.js, Java, Python, Clojure, Scala, Go, and PHP.

Heroku takes the application source code along with the dependencies the application requires and prepares an artifact that can be executed. For a Spring Boot application, for instance, it takes the application source code and resolves the required dependencies from the pom.xml file. Heroku uses the Git distributed version control system for deploying the application. Lastly, Heroku uses dynos (https://devcenter.heroku.com/articles/dynos), lightweight Linux containers, to run the application. In the next technique, let’s explore how to deploy a Spring Boot application to Heroku.

9.4.1 Technique: Deploying a Spring Boot application in Heroku

In this technique, we’ll discuss how to deploy a Spring Boot application in Heroku.

Problem

You need to deploy the application in the Heroku cloud platform.

Solution

Heroku is a PaaS solution that allows you to deploy a Spring Boot application to the Heroku cloud platform in a few steps. To demonstrate this, we’ll again deploy the Course Tracker Spring Boot application used previously.

Source code

The final version of the Spring Boot project is available at http://mng.bz/XWj9.

To begin, you need to create a user account in Heroku. Navigate to https://signup.heroku.com/ and sign up for a new account. Next, you need to install the Heroku Command Line Interface (CLI) tool on your machine. This CLI provides a set of commands to interact with the Heroku cloud platform and also allows you to deploy the application. Refer to https://devcenter.heroku.com/articles/heroku-cli for more information on installing the CLI on your machine. You are now ready to start deploying your application.

First, log in to Heroku from your terminal, so that you can execute the next set of commands to proceed with your deployment. Open a terminal and type heroku login. This command provides an option to authenticate yourself through a browser-based login. Once authenticated, you will find output similar to the following listing.

Listing 9.22 Login to Heroku

heroku login
heroku: Press any key to open up the browser to login or q to exit:
 Opening browser to https://cli-
 auth.heroku.com/auth/cli/browser/d4da08df-3725-44b6-bf28-
 c0a78fbe54e3?requestor=SFMyNTY.g2gDbQAAAA8xMDMuMjE1LjIyNC4xNTFuBgDw-
 iCkewFiAAFRgA.6fS4ju_OBxvr9_YQNkSn5Z7UK68CQNULUhh9VEzCVxQ
Logging in... done
Logged in as *****@gmail.com

Next, as mentioned earlier, Heroku uses the Git distributed version control system for deployment, so we need to create a Git repository for the Course Tracker application. Browse to the root directory of the Course Tracker application and execute the commands shown in the following listing.

Listing 9.23 Creating a Git repository for the Course Tracker application

git init                                         
git add .                                        
git commit -am "Course Tracker first commit"     

Initializes an empty local Git repository

Adds all the files to the repository

Commits the changes to the local Git repository

Next, to deploy the application in Heroku, we need to provision a new Heroku application. We will do that by executing the heroku create command, as shown in the following listing.

Listing 9.24 Provisioning the Heroku application

heroku create
Creating app... done,  secure-journey-03985
 https://secure-journey-03985.herokuapp.com/ |
 https://git.heroku.com/secure-journey-03985.git

The command in the listing also creates a Git remote called heroku and adds it to your local Git repository. Heroku generates a random name (in this case, secure-journey-03985) for your Spring Boot application.
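If you want to confirm that the remote was added, you can list the configured Git remotes; the output should resemble the following, with your own repository URL:

git remote -v
# heroku  https://git.heroku.com/secure-journey-03985.git (fetch)
# heroku  https://git.heroku.com/secure-journey-03985.git (push)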

In the Course Tracker application, we’ve used the H2 in-memory database so far to keep the example simple and easy to execute. However, that is seldom the case in a production application, so to demonstrate how to use a mainstream database, we use PostgreSQL in this application; refer to the application’s pom.xml file for the related configuration. Before we proceed with the deployment, let’s attach a PostgreSQL database to the application. Execute the heroku addons:create heroku-postgresql command from your terminal to create a PostgreSQL database add-on. Once the add-on is created, Heroku automatically populates the environment variables SPRING_DATASOURCE_URL, SPRING_DATASOURCE_USERNAME, and SPRING_DATASOURCE_PASSWORD. These environment variables allow the Course Tracker application to connect to the database; refer to the application.properties file of the Course Tracker application.
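For illustration, one way application.properties can reference these variables explicitly is sketched below. Spring Boot’s relaxed binding would also map the SPRING_DATASOURCE_* variables to the corresponding properties automatically, so treat this as an illustrative sketch rather than the exact file in the repository:

# Resolved from the environment variables populated by the Heroku Postgres add-on
spring.datasource.url=${SPRING_DATASOURCE_URL}
spring.datasource.username=${SPRING_DATASOURCE_USERNAME}
spring.datasource.password=${SPRING_DATASOURCE_PASSWORD}

With the database add-on in place, we’ll deploy the code by pushing the changes to the remote heroku master branch, as shown in the following listing.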

Listing 9.25 Deploying the Spring Boot application in Heroku

c:\sbip\repo\ch09\course-tracker-app-heroku>git push heroku master
Enumerating objects: 41, done.
Counting objects: 100% (41/41), done.
Delta compression using up to 8 threads
Compressing objects: 100% (30/30), done.
Writing objects: 100% (41/41), 64.32 KiB | 5.85 MiB/s, done.
Total 41 (delta 3), reused 0 (delta 0)
remote: Compressing source files... done.
remote: Building source:
remote:
remote: -----> Building on the Heroku-20 stack
remote: -----> Determining which buildpack to use for this app
remote: -----> Java app detected
remote: -----> Installing JDK 11... done
remote: -----> Executing Maven
remote:        $ ./mvnw -DskipTests clean dependency:list install
...
...
remote:        https://secure-journey-03985.herokuapp.com/ deployed to Heroku
remote:
remote: Verifying deploys... done.
To https://git.heroku.com/secure-journey-03985.git
 * [new branch]      master -> master

In listing 9.25, you may notice that Heroku uses the Maven wrapper (mvnw) to build and deploy the application. Once the application is successfully built and deployed, it is accessible via https://secure-journey-03985.herokuapp.com. For you, this URL will be different, as Heroku assigns a random name to the application. You can also run the heroku open command to automatically open the application URL in a browser window, and you can check the Spring Boot startup logs by running the heroku logs command.

Discussion

In this technique, you deployed a Spring Boot application to the Heroku cloud platform. As you’ve noticed, it is extremely easy to build and deploy a Spring Boot application in Heroku: with a few commands, you get a running application with an HTTPS URL from your source code, and the complexity of building, packaging, and deploying is taken care of by the platform. To simplify things further for Maven projects, Heroku provides the heroku-maven-plugin (https://github.com/heroku/heroku-maven-plugin). This plugin allows you to deploy the application without using a Git repository. You can find a detailed discussion on how to use the plugin at http://mng.bz/y47p. You can also refer to the Heroku documentation available at https://devcenter.heroku.com/ for a detailed discussion of the various offerings and configurations.

9.5 Running Spring Boot applications as Docker containers

In the previous sections, we covered several deployment techniques: traditional deployments, in which you package the application and deploy it to a server yourself; Cloud Foundry-based deployment, in which you push the executable to the platform and it takes care of the deployment; and the Heroku cloud platform, to which you provide your source code and it handles the build, deployment, and execution.

In this section, we’ll shift our attention to containers and use Docker, the most popular container implementation, to run the Course Tracker application as a containerized application. Before we containerize the Course Tracker application, though, let’s understand what a container is and why you should care about it.

A container image is a lightweight, standalone, executable software package that includes everything the application requires to run: application components, a runtime, system tools, settings, and libraries. A container image becomes a container at runtime, as shown in figure 9.6.

Figure 9.6 A container image can be used to create one or more containers.

The various components to run a container are shown in figure 9.7.

Figure 9.7 Various components to run a container. The infrastructure is at the bottom, and host operating systems run on top of it. A container runtime environment, such as Docker, runs on top of the host operating systems. The containers are run by the container runtime.

One of the most important reasons to use a container in the first place is its promise of reliable execution from one environment to another. In a typical infrastructure, applications can behave differently across environments; for instance, an application that works perfectly in the development environment may have issues when running in UAT. Containers remove this problem, because a container is a standalone package that contains everything the application requires to run. Thus, if the same image is used to run the application in Dev or UAT, it is expected to run uniformly.

Docker is the most popular and dominant container technology platform and can be used to work with containers and container images. Docker is so popular that it is almost synonymous with containers and container technology. However, there are container platforms other than Docker, such as rkt (pronounced rocket) from Red Hat and LXD (pronounced lexdi). In this section, we’ll focus on Docker, discuss creating a Docker image, and run the image as a container.

9.5.1 Technique: Creating a container image and running a Spring Boot application as a container

In this technique, we’ll demonstrate how to generate a container image and run a Spring Boot application as a container.

Problem

You are running the Course Tracker application on your Unix server through the WildFly application server. However, you’ve heard a lot of good things about containers and want to run the application as a container.

Solution

To proceed with this technique, you need to install and configure Docker. You can refer to the Docker documentation available at https://www.docker.com/get-started for a detailed discussion on installing and configuring Docker. You can also refer to Docker in Practice (http://mng.bz/M2aQ) by Ian Miell and Aidan Hobson Sayers from Manning Publications for an in-depth understanding of Docker.

In this section, we’ll explore the following approaches to Dockerize the Course Tracker application:

  1. Use Dockerfile to create the container image and then run the image to create the container.

  2. Use Spring Boot’s built-in containerization support (requires Spring Boot 2.3 or later), which uses Paketo buildpacks (https://paketo.io/) to build the image.

In both approaches, we’ll use the H2 in-memory database with the application to keep the examples simple.

Source code

The final version of the Spring Boot project is available at http://mng.bz/aDrj.

Let’s begin with the first approach. We’ll use a Dockerfile to create the Docker image for the Course Tracker application. Before we define the Dockerfile, let’s execute the mvn clean install command to generate the JAR file of the Course Tracker application.

Let’s now define the Dockerfile for the Course Tracker application. A Dockerfile is a text file that contains all the commands needed to assemble and create the image. You can refer to https://docs.docker.com/engine/reference/builder/ for further details on Dockerfile. The following listing shows the sample Dockerfile we’ve created for the Course Tracker application. This file is located under the root directory of the application.

Listing 9.26 Dockerfile to create the Docker image for Course Tracker

FROM adoptopenjdk:11-jre-hotspot
ADD target/*.jar application.jar
ENTRYPOINT ["java", "-jar","application.jar"]
EXPOSE 8080

In listing 9.26, the Dockerfile contains the following:

  • FROM—We use adoptopenjdk:11-jre-hotspot as the base image for our image. A base image is the image upon which your application's Docker image is built.

  • ADD—We add the application JAR from the target directory into the image as application.jar.

  • ENTRYPOINT—This specifies the command the container executes when it starts, in this case java -jar application.jar.

  • EXPOSE—This documents that the container listens on HTTP port 8080. Note that EXPOSE does not publish the port; we do that with the -p option when running the container.

We can now build an image for the Course Tracker application.

Next, let’s execute the command, as shown in listing 9.27 to create the image. You need to execute the command from the location where the Dockerfile is located.

Listing 9.27 Building a Docker image for Course Tracker application

docker build --tag course-tracker:v1 .

In listing 9.27, note the period (.) at the end of the command. It indicates that the Dockerfile is available in the current directory. We also tag the image with the name course-tracker:v1 so we can refer to it when creating a container from the image. Once you execute the command, it takes a while to build the image. Once the image is successfully built, you can list it using the command shown in the following listing.

Listing 9.28 Listing the Docker image

docker image ls

You can now run the image, and a Docker container will be created. The following listing shows the command to run the image.

Listing 9.29 Docker run command to run the course-tracker image

docker run -p 8080:8080 course-tracker:v1

We’ve used the docker run command to run the container image. We’ve also used a port mapping of local machine HTTP port 8080 to the container’s HTTP port 8080. This ensures the HTTP request to the port 8080 in the local machine is forwarded to the container’s port 8080.

Once the command runs successfully, you'll see the console log of the Course Tracker application. Open a browser window and access the http://localhost:8080 URL; you'll be redirected to the Course Tracker index page.
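The docker run command in listing 9.29 keeps the container attached to your terminal. If you prefer to run it in the background, you can use detached mode and inspect the logs separately; the container name used here (course-tracker) is our own choice:

docker run -d --name course-tracker -p 8080:8080 course-tracker:v1
docker logs -f course-tracker
docker stop course-tracker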

Let’s now briefly discuss the container image structure we’ve created in listing 9.26. Your Docker container image consists of multiple layers. If you recall, we started with the base image (adoptopenjdk:11-jre-hotspot). In our Dockerfile, we performed additional activities, such as adding the JAR file from the target location to the image. This has created an additional layer on top of the base image. Figure 9.8 shows the notion of layers in a Docker image.

Figure 9.8 Various layers in a container image. These layers are added on top of the base image as per the instructions specified in the Dockerfile. In the example, the adoptopenjdk:11-jre-hotspot is the base image, and the Spring Boot application JAR is added on top of the base image as a new layer.

If you are interested in seeing the various layers of the Docker image, you can use the dive tool (https://github.com/wagoodman/dive). To view the layers, install dive and execute dive course-tracker:v1. Figure 9.9 shows the layers.

Figure 9.9 Using the dive tool to view the layers inside a Docker image. In the top-left corner is the list of layers. The first few layers are from the OpenJDK base image, and the last layer is formed by adding the application JAR from the target directory.
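If you'd rather not install a separate tool, Docker's built-in history command also lists the layers of an image, along with the Dockerfile instruction that created each one:

docker history course-tracker:v1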

In the Dockerfile, we’ve added the fat JAR inside the image. However, we could write a better Dockerfile for Spring Boot applications. Instead of adding the complete JAR, we could add the layers from the generated JAR file. Recall from section 9.1 that Spring Boot provides a means to layer the JAR file through the layers.xml file. It also provides the jarmode option to view and extract the layers. Let’s add the JAR layers in the Docker image instead of adding the complete JAR file. The following listing shows the updated Dockerfile.

Listing 9.30 Dockerfile to create a better Docker image

FROM adoptopenjdk:11-jre-hotspot as builder
WORKDIR application
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} application.jar
RUN java -Djarmode=layertools -jar application.jar extract
 
FROM adoptopenjdk:11-jre-hotspot
WORKDIR application
COPY --from=builder application/dependencies/ ./
COPY --from=builder application/spring-boot-loader/ ./
COPY --from=builder application/snapshot-dependencies/ ./
COPY --from=builder application/application/ ./
ENTRYPOINT ["java", "org.springframework.boot.loader.JarLauncher"]

Listing 9.30 contains a multi-stage Dockerfile. The builder stage (the first part of the Dockerfile) extracts the layer directories that are used later. Each of the COPY commands copies one of the layers extracted by jarmode. Finally, we've used org.springframework.boot.loader.JarLauncher as the entry point for the application. You can build the image with the same docker build command shown in listing 9.27, using a new tag such as course-tracker:v2 (see the example that follows). Figure 9.10 shows the resulting image layers.
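For example, assuming you keep the first image around, you can build and run the layered image under the new tag like this:

docker build --tag course-tracker:v2 .
docker run -p 8080:8080 course-tracker:v2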

Figure 9.10 Layers of the course-tracker:v2 Docker image. Instead of the fat JAR, the directories are added as layers.

Now that you’ve seen how to create an image using Dockerfile, let’s move on to building the Docker image, using Spring Boot’s built-in approach. Previously, you noticed the deployment using Heroku or Cloud Foundry. With Heroku, you just provided the source code, and the platform does the rest to build the code, add a runtime, and make the application available for the end users. Similarly, Spring Boot provides support to directly build a Docker image from the source code through Spring Boot Maven (and also Gradle) plugins. Spring Boot uses Cloud Native buildpacks (https://buildpacks.io/) to achieve this.

Buildpacks are the part of a platform (e.g., Cloud Foundry) that takes the application code and converts it into something the platform can run. For instance, in the Cloud Foundry example, the Java buildpack noticed that you were pushing a JAR file and automatically added a relevant JRE. Buildpacks allow us to build a Docker-compatible image that we can run anywhere. Let's see this in action. Run the command shown in the following listing to generate the image.

Listing 9.31 Building a Docker image with Spring Boot Maven plugin

mvn spring-boot:build-image -Dspring-boot.build-image.imageName=course-tracker:v3

The command in listing 9.31 builds a Docker image with the name course-tracker:v3. By default, Spring Boot uses artifactId:version as the image name; we've used -Dspring-boot.build-image.imageName=course-tracker:v3 to customize it. You can run the image in the same manner as the earlier images, as shown in the example that follows.
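For instance, the following command runs the buildpack-generated image with the same port mapping used earlier:

docker run -p 8080:8080 course-tracker:v3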

Discussion

In this technique, we’ve learned how to build a Docker image from a Spring Boot application and run the image as a Docker container. Containers provide excellent portability support, as the container images can be run anywhere reliably. In this section, we’ve executed the Docker images manually using the docker run command. Although this approach works well, it does not scale. Imagine if you need to run hundreds of containers for your applications. It becomes quite tedious to run, update, and manage them. For instance, in a production system, if a container gets terminated for any reason, you need to ensure that you can bring up a new container. It will be excellent if there is a tool that could orchestrate the container management process. Thankfully, Kubernetes is there to address these concerns. Let’s discuss Kubernetes in the next section.

9.6 Deploying Spring Boot applications in a Kubernetes cluster

These days, there is a trend toward using containers to package and deploy applications. In particular, containers are an excellent choice for packaging microservices along with their dependencies and configurations. As demand for a microservice grows, you can increase the number of its containers. However, as applications grow into multiple containers spanning multiple servers, it becomes quite difficult to manage them.

Kubernetes provides an open source API to manage how and where containers run. It orchestrates a set of machines (physical or virtual), known as a Kubernetes cluster, on which it schedules and runs the containers. In Kubernetes, containers are packaged inside a pod, which is the fundamental deployable unit.

Note In this section, we'll use a single-node Kubernetes cluster created on the local machine and focus on how to deploy a Spring Boot application into it. If you are not familiar with Kubernetes, refer to the Kubernetes documentation at https://kubernetes.io/ for an overview and installation instructions.

9.6.1 Technique: Deploying a Spring Boot application in a Kubernetes cluster

In this technique, we’ll demonstrate how to deploy a Spring Boot application in a Kubernetes cluster.

Problem

You’ve explored containerization and are fascinated by the way it works. However, you understand that manually managing containers for a large application is a tedious task, as there will be so many containers. You heard that Kubernetes is a container orchestration tool that can orchestrate the containers automatically and want to try it out.

Solution

Using the previous technique, we created a Docker container image for the Spring Boot application. We'll use the same course-tracker:v3 image in this technique. However, before proceeding with the Kubernetes deployment, let's tag the image. The following listing shows the command to tag the image.

Listing 9.32 Docker tag command to tag the image

docker tag course-tracker:v3 musibs/course-tracker

In listing 9.32, we used the docker tag command to tag the image. The first part of the command (course-tracker:v3) specifies the existing image, and the latter part (musibs/course-tracker) is the new tag in the format repository/image. We haven't specified a version here, so Docker uses the default tag, latest.

Source code

The final version of the Spring Boot project is available at http://mng.bz/xvpW.

Once you are done with tagging, you can push the image to a Docker registry. A Docker registry is a storage and distribution system for Docker images: you can pull images from the registry to your local machine or push images from your local machine to the registry.

In this example, we'll use Docker Hub (https://hub.docker.com/) as the Docker registry to store the image. Kubernetes nodes pull the image from the registry through the kubelet (the agent that runs the containers of a Kubernetes pod on each node); the nodes are not usually connected to your local Docker daemon. In this example, though, as we are running the Kubernetes cluster on the local machine, you can skip this step. For completeness, be aware that you can use the docker push command (e.g., docker push musibs/course-tracker) to push the image to Docker Hub, as shown below.

Now that the Docker image of the application is ready, we can run the application in Kubernetes. We need the following two things:

  1. The Kubernetes CLI (kubectl)

  2. A Kubernetes cluster to deploy the application

To interact with Kubernetes, you use the kubectl command-line tool to run commands against the Kubernetes cluster. Refer to https://kubernetes.io/docs/tasks/tools/ to install kubectl. For the Kubernetes cluster, we'll use Kind (https://kind.sigs.k8s.io/) to create a local cluster. Once Kind is installed, run the command shown in the following listing to create a Kubernetes cluster.

Listing 9.33 Create a local Kubernetes cluster with Kind

kind create cluster
 
Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.20.2)
 ✓ Preparing nodes
 ✓ Writing configuration
 ✓ Starting control-plane
 ✓ Installing CNI
 ✓ Installing StorageClass
Set kubectl context to "kind-kind"
You can now use your cluster with:
 
kubectl cluster-info --context kind-kind
 
Thanks for using kind!

Once the cluster is successfully created, Kind automatically configures the Kubernetes CLI to point to the newly created cluster. To verify that everything is set up as expected, execute the command shown in the following listing.

Listing 9.34 Kubernetes cluster information

kubectl cluster-info
    
Kubernetes control plane is running at https://127.0.0.1:49672
KubeDNS is running at https://127.0.0.1:49672/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
 
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

To deploy an application to Kubernetes, we specify the configuration in a YAML file. However, instead of writing the configuration manually, let's use the kubectl command to generate it for us. Create a new directory called k8s anywhere on your machine and run the command shown in the following listing from the k8s directory.
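For example:

mkdir k8s
cd k8s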

Listing 9.35 Generate the deployment YAML file

kubectl create deployment course-tracker --image musibs/course-tracker --dry-run=client -o yaml > deployment.yaml

The command in listing 9.35 creates the deployment.yaml configuration file in the k8s directory. The --dry-run=client option lets us preview the deployment object that the kubectl create deployment command would create, without sending it to the cluster. The -o yaml option specifies that the command output is written in YAML format. Listing 9.36 shows the contents of the generated file.

Listing 9.36 The generated deployment.yaml file

apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: course-tracker
  name: course-tracker
spec:
  replicas: 1
  selector:
    matchLabels:
      app: course-tracker
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: course-tracker
    spec:
      containers:
      - image: musibs/course-tracker
        name: course-tracker
        resources: {}
status: {}

The deployment.yaml file contains the specifications, such as the image to be used, how many pod replicas to run, and more. Refer to the Kubernetes documentation for a detailed discussion of the various fields.
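Because the replica count is part of this deployment specification, scaling the application later is straightforward. Once the deployment has been applied (listing 9.39), you could, for example, run three replicas with the following command:

kubectl scale deployment course-tracker --replicas=3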

The deployment.yaml file tells Kubernetes how to deploy and manage the application, but it does not expose the application as a network service to other applications. To do that, we need a Kubernetes Service resource. Execute the command shown in the following listing in the k8s directory to generate the YAML for the service resource.

Listing 9.37 The Kubectl command to create a service

kubectl create service clusterip course-tracker-service --tcp 80:8080 -o yaml --dry-run=client > service.yaml

Listing 9.38 shows the generated YAML configuration for the service.

Listing 9.38 The generated service.yaml file

apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: course-tracker-service
  name: course-tracker-service
spec:
  ports:
  - name: 80-8080
    port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: course-tracker-service
  type: ClusterIP
status:
  loadBalancer: {}

Let’s now apply the YAML files (from the k8s directory) to Kubernetes, as shown in the following listing.

Listing 9.39 Apply the configuration in a Kubernetes cluster through kubectl

kubectl apply -f .

The command in listing 9.39 creates a new deployment and a new service. Execute the command in listing 9.40 to get the status of the created Deployment and Service.

Listing 9.40 Get the status of all Kubernetes components

kubectl get all

You’ll notice an output similar to listing 9.41.

Listing 9.41 Status of all Kubernetes components

NAME                                  READY   STATUS    RESTARTS   AGE
pod/course-tracker-84f4d94d5d-gbw99   1/1     Running   0          25m
 
NAME                             TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
service/course-tracker-service   ClusterIP   10.96.54.100   <none>        80/TCP    25m
service/kubernetes               ClusterIP   10.96.0.1      <none>        443/TCP   3h36m
 
NAME                             READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/course-tracker   1/1     1            1           25m
 
NAME                                        DESIRED   CURRENT   READY   AGE
replicaset.apps/course-tracker-84f4d94d5d   1         1         1       25m

The last change we need to make is to set up port forwarding so that we can make HTTP requests to the application. This is needed because the service we've defined is accessible only inside the Kubernetes cluster network, not from outside. Let's execute the port-forward command shown in listing 9.42. Note that this command runs in the foreground and does not return, so open a new terminal window to run any subsequent commands.

Listing 9.42 Port forwarding to enable HTTP requests to the application

kubectl port-forward pod/course-tracker-84f4d94d5d-gbw99 8080:8080

In your case, the pod name will likely be different; you can find it in the output shown in listing 9.41. Once the command runs successfully, you'll see the output shown in listing 9.43.

Listing 9.43 Successful port forward output

Forwarding from 127.0.0.1:8080 -> 8080
Forwarding from [::1]:8080 -> 8080
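If you don't want to look up the generated pod name, you can list the pods with kubectl get pods, or you can port-forward to the deployment instead and let Kubernetes pick a pod for you:

kubectl get pods
kubectl port-forward deployment/course-tracker 8080:8080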

That’s all. You can now open a browser window and access the http://localhost:8080 URL. You’ll notice that you are redirected to the application index page.

Discussion

Using this technique, we’ve explored how to run a container image in a Kubernetes cluster. We created a local Kubernetes cluster with the use of Kind. We then defined a deployment and service using the kubectl command. After that, we applied the configurations, so the resources could be created by Kubernetes. Lastly, we applied port forwarding to the Kubernetes pod, so the application would be accessible outside of the Kubernetes cluster.

9.7 Deploying Spring Boot applications in Red Hat OpenShift

Red Hat OpenShift is an enterprise Kubernetes platform with support for several cloud providers. Previously, you’ve explored how to deploy a Docker container in a local Kubernetes cluster. Red Hat OpenShift provides the managed Kubernetes platform, on which you can deploy your application. You can find more details about various Red Hat OpenShift offerings at https://cloud.redhat.com/learn/what-is-openshift. In this section, we’ll demonstrate how to deploy a Spring Boot application in the Red Hat OpenShift platform via the Red Hat OpenShift developer console.

9.7.1 Technique: Deploying a Spring Boot application in the Red Hat OpenShift platform

In this technique, we’ll discuss how to deploy a Spring Boot application in the Red Hat OpenShift platform.

Problem

OpenShift provides a self-service platform to create, modify, and deploy applications, enabling faster development and release cycles. You need to deploy the Course Tracker application to the Red Hat OpenShift platform.

Solution

In this technique, you’ll learn how to deploy a Spring Boot application in the Red Hat OpenShift platform. There are several ways a Spring Boot application can be deployed in OpenShift, including Dockerfile, container image, Git, and others. In this section, we’ll demonstrate how to deploy an application through GitHub.

Source code

The final version of the Spring Boot project is available at http://mng.bz/g42e.

To begin with, you need a Red Hat account to access the OpenShift platform. Visit http://mng.bz/enZ9 for a developer sandbox account. If you don't have an existing Red Hat account, create one with the required details; if you already have an account, log in with your credentials. Once you are logged in, you can access the OpenShift Developer Sandbox. You'll find a page similar to that in figure 9.11.

Figure 9.11 Red Hat Developer sandbox home page with administrator views. By default, Red Hat creates two projects, dev and stage, for us.

In the top left corner, switch to the Developer View from Administrator View, and you’ll find a screen similar to that in figure 9.12.

Figure 9.12 Red Hat sandbox Developer View. From this screen, you can select your application configuration for deployment. For instance, you can select the From Git option and provide your Git repository path.

Using this technique, we’ll show you how to deploy a Spring Boot application using the From Git option. We’ve already created a GitHub repository for the Course Tracker application, and we’ll use the same. You can access this repository at http://mng.bz/p275. Click on the From Git option in the Developer Sandbox page, and you’ll be redirected to the next page, as shown in figure 9.13.

Figure 9.13 The Import from Git page to create a deployment from Git

Provide the GitHub repository URL for the Course Tracker application, and click Create. After successful deployment, you’ll find a page similar to that in figure 9.14.

Figure 9.14 Course Tracker application deployed successfully

You can find the application URL in the bottom right corner in the Routes section. Click on the link, and you’ll be redirected to the index page of the Course Tracker application.

Discussion

With this technique, you’ve explored how to deploy a Spring Boot application in the Red Hat OpenShift platform. OpenShift supports a variety of approaches for deploying an application. For instance, in this example, you’ve provided the application source code from the GitHub repository, and OpenShift does the heavy lifting for us. It has taken the source code, built it, deployed it into a Kubernetes Pod, and made the application available to the external world.

OpenShift provides many features and configurations you can use in your application. For instance, you can add various health checks, such as startup, readiness, and liveness probes. These probes allow you to verify your application's status. For example, the liveness probe checks whether the application container is still responsive; if the liveness probe fails, the container is killed and restarted. To learn more about OpenShift, you can experiment with the OpenShift Developer Sandbox available at https://developers.redhat.com/developer-sandbox.

Summary

  • We discussed running a Spring Boot application as an executable JAR file and deploying it as a WAR file to the WildFly application server.

  • We introduced deploying Spring Boot applications to Cloud Foundry and Heroku.

  • We covered running Spring Boot applications as Docker containers and deploying them into Kubernetes clusters.

  • We introduced deploying a Spring Boot application as a container in the Red Hat OpenShift platform.
