Java EE in the container

As it turns out, the approach of a layered file system matches Java EE's approach of separating the application from the runtime. Thin deployment artifacts only contain the actual business logic, the part which changes and which is rebuilt every time. These artifacts are deployed onto an enterprise container which does not change that often. Docker container images are built step by step, layer by layer. Building an enterprise application image includes an operating system base image, a Java runtime, an application server, and finally the application. If only the application layer changes, only this step has to be re-executed and retransmitted; all the other layers are touched only once and then cached.
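
To illustrate, the following is a minimal sketch of such a layered build, assuming the WildFly distribution archive has been downloaded next to the Dockerfile; the jboss/wildfly base image used later in this section already bundles the operating system, Java, and WildFly layers. Every instruction results in one cached layer.

# illustrative layered build: OS base, Java runtime, application server, application
FROM centos:7
RUN yum -y install java-1.8.0-openjdk-headless && yum clean all
ADD wildfly-10.0.0.Final.tar.gz /opt/
COPY target/hello-cloud.war /opt/wildfly-10.0.0.Final/standalone/deployments/

If only the WAR file changes, the first three layers are served from the build cache and only the last step is re-executed.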

Thin deployment artifacts leverage the advantages of layers, since only a matter of kilobytes has to be rebuilt and redistributed. Therefore, zero-dependency applications are the advisable way of using containers.

As seen in the previous chapter, it makes sense to deploy one application per application server. Containers execute a single process, which in this case is the application server containing the application. The application server therefore runs in a dedicated container, with the application included in that container as well. Both the application server and the application are added at image build time. Potential configuration, for example regarding datasources, pooling, or server modules, is also applied at build time, usually by adding custom configuration files. Since the container is owned by a single application, these components are configured without affecting anything else.
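
As a sketch, such build-time configuration could be added with a further instruction in the Dockerfile shown later in this section, assuming a customized standalone.xml, for example one containing a datasource definition, resides next to the Dockerfile:

# copy a customized server configuration into the image at build time
COPY standalone.xml /opt/jboss/wildfly/standalone/configuration/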

Once a container is started from the image, it should already contain everything that is required to fulfill its job. The application as well as all required configuration must already be present. Therefore, applications are no longer deployed to an already running container but added at image build time, so that they are present at container runtime. This is usually achieved by placing the deployment artifact into the container's auto-deployment directory. As soon as the configured application server starts, the application is deployed.

The container image is built only once and then executed in all environments. Following the idea of reproducible artifacts described earlier, the same artifacts that run in production have to be tested upfront. Therefore, the same Docker image that has been verified will be published to production.
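
For example, an image that has passed verification could be tagged and pushed to a registry; the image name, registry host, and version tag here are placeholders:

# publish the verified image unchanged
docker tag hello-cloud registry.example.com/hello-cloud:1.0.1
docker push registry.example.com/hello-cloud:1.0.1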

But what if applications are configured differently in the various environments? What if they need to communicate with different external systems or databases? In order for the environments not to interfere with each other, at least the database instances used will differ. Applications shipped in containers are started from the same image but sometimes still need some variations.

Docker offers the possibility of changing several aspects of running containers. This includes networking, adding volumes, that is, injecting files and directories that reside on the Docker host, and setting Unix environment variables. The environment differences are added by the container orchestration from outside of the container. The images are built only once for a specific version and then used, and potentially modified, in the different environments. This brings the big advantage that these configuration differences are not modeled into the application but rather managed from the outside. The same is true for networking and connecting applications and external systems, as we will see in the coming sections.
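
A sketch of how such a difference could be injected at container start; the image name, the DB_HOST variable, and the mounted directory are hypothetical and only assume that the application resolves its database host from an environment variable and reads further configuration from a mounted path:

# inject environment-specific values from outside the image
docker run -d -p 8080:8080 \
    -e DB_HOST=test-db.example.com \
    -v /opt/config/test:/opt/config \
    hello-cloud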

Linux containers, by the way, solve the business-politically motivated issue of shipping the application together with its runtime implementation in a single package for the sake of flexibility. Since containers include the runtime and all required dependencies, including the Java runtime, the infrastructure only has to provide a Docker runtime. All of the technology used, including its versions, is the responsibility of the development team.

The following code snippet shows the definition of a Dockerfile that builds the hello-cloud enterprise application onto a WildFly base image.

FROM jboss/wildfly:10.0.0.Final

COPY target/hello-cloud.war /opt/jboss/wildfly/standalone/deployments/

The Dockerfile specifies the jboss/wildfly base image in a specific version, which already contains a Java 8 runtime and the WildFly application server. The Dockerfile resides in the application's project directory and points to the hello-cloud.war archive, which was previously built by a Maven build. The WAR file is copied to WildFly's auto-deployment directory and will be available at that location at container runtime. The jboss/wildfly base image already specifies a run command that starts the application server, which is inherited by the Dockerfile; therefore, the Dockerfile does not have to specify a command itself. After a Docker build, the resulting image will contain everything from the jboss/wildfly base image plus the hello-cloud application. This matches the approach of installing a WildFly application server from scratch and adding the WAR file to the auto-deployment directory. When distributing the built image, only the added layer containing the thin WAR file needs to be transmitted.
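
Building and running the image locally could look as follows; the image name and port mapping are arbitrary choices for this sketch:

# build the image from the project directory and start a container from it
docker build -t hello-cloud .
docker run -d -p 8080:8080 hello-cloud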

The deployment model of the Java EE platform fits the container world. Separating the application from the enterprise container leverages the use of copy-on-write file systems, minimizing the time spent on builds, distribution, and deployments.
