Chapter 5. Continuous Delivery

Entire books have been written about Continuous Delivery and DevOps in general, so we will not repeat here why Continuous Delivery is important. Instead, let’s review how container images and Kubernetes support the following DevOps principles:

Small batch changes

All changes should be incremental and finite. When failures occur, small batch changes are typically easier to recover than large disruptive changes.

Source control all the things

A history of all changes is helpful to understand what changes have been made and to identify the cause of regressions in the code base or configuration.

Production-like environments

Developers should have access to environments and tools that are representative of production. Production environments typically operate at larger scales than development or quality assurance (QA) and with more complex configuration. The variance can mean that features that work fine in early stages do not work correctly in production—which is the only place it matters.

Shift-left of operational practices

We should expose behaviors for health management, log collection, change management, and so on earlier in the development process.

Continuous integration of changes

All changes should be built and deployed together on an ongoing basis to identify when the intermingling of various changes leads to an unforeseen issue or application programming interface (API) incompatibility.

Highly automated testing with continuous feedback

To manage velocity, you need to automate your testing and validation work so that you can always be testing (ABT).

Image Build

Containers are the ideal unit of delivery because they encapsulate all aspects of your application, middleware, and operating system (OS) packages into a single package. You can create container images in several ways, but the most popular is the Dockerfile, which describes all of the relevant details necessary to package the image. These instructions are just plain text until converted by the image build, and you should always manage them, like the other declarative Kubernetes resources we have covered, in a source control repository such as Git.

In Chapter 4, we built a container image. The act of building used our Dockerfile as a set of instructions to construct the image. Let’s take a closer look inside the Dockerfile:

FROM openliberty/open-liberty:microProfile1

RUN groupadd -g 999 adminusr && \
    useradd -r -u 999 -g adminusr adminusr
RUN chown adminusr:adminusr -R /opt/ol /logs /config
USER 999

COPY --chown=adminusr db2jcc4.jar /config/db2jcc4.jar
ADD --chown=adminusr \
    http://repo1.maven.org/maven2/com/ibm/mq/wmq.jmsra/9.1.0.0/wmq.jmsra-9.1.0.0.rar \
    /config/wmq.jmsra.rar
COPY --chown=adminusr key.jks /config/resources/security/key.jks
COPY --chown=adminusr keystore.xml /config/configDropins/defaults/keystore.xml
COPY --chown=adminusr server.xml /config/server.xml
COPY --chown=adminusr target/portfolio-1.0-SNAPSHOT.war /config/apps/Portfolio.war

FROM statements

They declare the foundation for your container. Generally, base images are application runtimes (e.g., openliberty/open-liberty:microProfile1 or node:latest) or operating systems (e.g., ubuntu:16.04, rhel:7, or alpine:latest).

RUN statements

They execute commands and save the resulting changes to the filesystem as a distinct layer in the container image. To optimize build time, move commands that need to adjust the container image more frequently (e.g., adding application binaries) toward the end of the file.

USER statements

They allow you to specify under what identity on the OS the container should be launched. Building your container to run as nonroot is considered best practice. Whenever specifying a user for Kubernetes, we recommend that you use the UID (numerical user ID from the OS) and create an alias via groupadd/useradd commands or equivalents for your OS.
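The same numeric UID can be enforced at the Kubernetes level through the Pod’s securityContext. The following fragment is a minimal sketch (the Pod and container names are illustrative), matching the USER 999 declared in the Dockerfile above:

```yaml
# Hypothetical Pod spec fragment: enforce at the Kubernetes level the
# same numeric UID that the Dockerfile's USER statement declares.
apiVersion: v1
kind: Pod
metadata:
  name: portfolio
spec:
  securityContext:
    runAsUser: 999       # numeric UID, matching USER 999 in the Dockerfile
    runAsNonRoot: true   # refuse to start containers that resolve to UID 0
  containers:
  - name: portfolio
    image: repository:tag
```

Using the numeric UID (rather than the user name) matters here: runAsNonRoot can only be verified by the kubelet when the identity is numeric.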

COPY and ADD statements

They move files from your local workspace into the container file system. COPY should always be your default; ADD is useful when pulling directly from URLs, which is not supported by the COPY directive.

Each line in the Dockerfile creates a unique layer that is reused for subsequent builds where no changes have occurred. Reusing layers creates a very efficient process that you can use for continuous integration builds.

As you saw in Chapter 4, you build container images as follows:

$ docker build -t repository:tag .

The last period is significant because it denotes the current directory as the build context. All files in the directory are shipped off to the Docker runtime to build the image layers. You can limit files from being delivered by adding a .dockerignore file. The .dockerignore file specifies files by name pattern to either be excluded (the default) or included (by beginning the line with an exclamation mark [!]).
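As a sketch, a .dockerignore for the Portfolio project above might exclude everything the Dockerfile never copies, then re-include the one build artifact it does (the patterns here are illustrative):

```
# Exclude source-control metadata and documentation from the build context.
.git
*.md
# Exclude the Maven build directory...
target
# ...but re-include the one artifact that the Dockerfile COPYs:
!target/portfolio-1.0-SNAPSHOT.war
```

A smaller build context means faster uploads to the Docker daemon on every continuous integration build.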

Programmability of Kubernetes

The declarative model of Kubernetes objects makes them ideal for source control, meaning that you have a record (throughout the history of your source control system) of all changes that were made and by whom.

Using the command kubectl apply, you can push updates easily into your cluster. Some resource types do not easily support rolling or uninterrupted updates (like DaemonSets), but a majority do. You don’t need to stop at just one object, though; you can apply entire directories:

$ kubectl apply -Rf manifests/

In this snippet, we use the -R option, which indicates to recursively process all files in the manifests/ directory.
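For example, given a hypothetical manifests/ directory split by component, the recursive apply walks every subdirectory and processes each manifest it finds:

```
$ tree manifests/
manifests/
├── db2
│   └── statefulset.yaml
└── portfolio
    ├── deployment.yaml
    └── service.yaml

$ kubectl apply -Rf manifests/
statefulset.apps/db2 configured
deployment.apps/portfolio configured
service/portfolio unchanged
```

Because apply is declarative, rerunning the same command is safe: unchanged objects are left alone, and only drifted ones are updated.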

Kubernetes makes it easy to create consistent clusters in development, QA, and production. Consistency means that developers and testers can develop and test against production-like environments in earlier stages. In addition, Helm charts make deploying supporting services like databases, messaging, and caching much easier and more accessible to developers and testers.
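For instance, a developer could stand up a disposable database alongside the application under test with a single chart install (the release name and namespace here are illustrative):

```
$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm install dev-db bitnami/postgresql --namespace dev --create-namespace
```

When the test run is over, helm uninstall dev-db --namespace dev tears the whole dependency down again.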

General Flow of Changes

The only constant is change. Many tools exist to help you on your Continuous Delivery journey. The general flow for all of these will be as follows:

  1. Register a Git post-commit hook to trigger a build for all commits delivered into the source control repository.

  2. The build will prepare the container image associated with the Git repository and publish it to an image repository.

  3. The build service will create a Kubernetes cluster or a Kubernetes Namespace within a cluster (or reuse an existing cluster or Namespace, based on convention) and deploy into it the Kubernetes objects described in the manifests.

  4. Optionally, the build can package a Helm chart and push it into a Helm repository and deploy it from there, with references to the images published earlier.

  5. Run automated tests against the deployed system to verify the changes have not created any regressions in the source code or supporting configuration files.

  6. Optionally, drive an automated rolling continuous update to the next stage in the pipeline to validate the change for release to production.
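The steps above can be sketched as a minimal post-commit build script. Everything here is an assumption for illustration: the registry, repository, Namespace convention, and test script are hypothetical, and a real pipeline adds credentials, error handling, and rollback:

```
$ cat ci-build.sh
#!/bin/sh
set -e

# Step 2: tag the image uniquely with the commit that triggered the build.
GIT_SHA=$(git rev-parse --short HEAD)
IMAGE="registry.example.com/stock-trader/portfolio:${GIT_SHA}"
docker build -t "${IMAGE}" .
docker push "${IMAGE}"

# Step 3: deploy into a Namespace derived by convention from the branch name.
NS="ci-$(git rev-parse --abbrev-ref HEAD)"
kubectl create namespace "${NS}" --dry-run=client -o yaml | kubectl apply -f -
kubectl apply -Rf manifests/ --namespace "${NS}"
kubectl set image deployment/portfolio portfolio="${IMAGE}" --namespace "${NS}"

# Step 5: run automated tests against the deployed system.
./run-integration-tests.sh "${NS}"
```

Tagging with the Git SHA rather than latest ties every running container back to the exact commit that produced it, which is what makes the source control history useful during an incident.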

For more detailed instruction on using automation tools to facilitate Continuous Delivery, the IBM Garage Method website includes courses such as “Use Jenkins in a Kubernetes cluster to continuously integrate and deliver on IBM Cloud Private” that walk you through the aforementioned steps by providing an easy-to-follow tutorial. After you become comfortable with Continuous Delivery, the next step is to focus on operating your enterprise application. In Chapter 6, we provide an overview of several tools that reduce the complexity of operating enterprise applications in a Kubernetes cluster.
