© Kinnary Jangla 2018
Kinnary Jangla, Accelerating Development Velocity Using Docker, https://doi.org/10.1007/978-1-4842-3936-0_8

8. Advanced Docker Use Cases

Kinnary Jangla
San Francisco, CA, USA

In the previous chapter, you learned the advantages and challenges of distributed environments, such as heterogeneity, concurrency, scalability, transparency, and failure handling, to name just a few.

Later, I walked you through a sample end-to-end application called FunFeed, which, based mainly on a user’s given interests, renders a list of related images on the user’s feed. We saw the different services that sit behind the application, got them running in their respective Docker containers, and then got all of these services up and running, using the Docker Compose tool. Finally, we made a request to the application and viewed the resulting output in the browser.

Toward the end of the chapter, I covered some hurdles you could face when setting up services in Docker and running the application end-to-end with the help of Docker Compose.

Now that you’ve seen most of the basic use cases of Docker, the basic commands for getting acquainted with it, and how to get an end-to-end application running and debugged, it’s time to look at some advanced Docker use cases.

In this chapter, we’ll look at how Docker operates in a production environment, orchestration using Docker, some advanced use cases, and, ultimately, some tips and tricks for Docker.

In the last chapter, you gained some practical knowledge about running applications based on a microservices architecture on Docker. That in itself is one of the basic use cases of Docker.

Let’s look at what could have been done differently if that application were run in a production environment.

Docker in Production Environments

Now that we’ve got our application built and even running on Docker from our local machines, it might be time to ship it. Let’s deploy it in our production environment, so that the world can start using it.

But wait, is our application really ready to be shipped? The answer is, not so fast!

There are many critical decisions to be made before we decide to ship our application. Let’s look at some of them.

Managing Docker Images

We’ve seen in previous chapters that Docker Hub is the public registry from which you retrieve Docker images and to which you publish them, so that the images are made available to the world. However, when you want to make images available only to a smaller subset of people, such as the employees of a certain company, publishing them to the world won’t really work.

Even though writing images seemed quite straightforward in our development environments, you might want to set certain standards for writing them, both for consistency and to avoid random local environment configurations. Creating consistent standards for images will also help avoid dependencies on your development environment.

Given that we prefer to publish our images to a smaller subset and not the entire world, you’ll have to set up a private Docker image registry. And, last, you’ll want to make this private registry secure and available to your continuous deployment system.
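
For example, Docker ships an official registry image that you can run yourself. The following is a minimal sketch; the image name funfeed-api is a hypothetical stand-in for one of your own services, and a production setup would add TLS and authentication on top of this.

         # Run a private registry on port 5000, using the official registry:2 image
         docker run -d -p 5000:5000 --name registry registry:2
         # Tag a local image (hypothetical name: funfeed-api) for the private registry
         docker tag funfeed-api localhost:5000/funfeed-api
         # Push to the private registry instead of Docker Hub
         docker push localhost:5000/funfeed-api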

Docker in Cloud

Now that you have your Docker image published in the right location, you’ll have to deploy it to the Docker hosts. Today, most cloud providers, such as Amazon Web Services (AWS), Google Cloud, etc., provide support for deploying Docker containers. These cloud providers charge for the resources they supply, so the bill can add up quickly, and you might be in for sticker shock.

Planning strategically how to host Docker in the cloud might be your best option. Besides, the deployment process for Docker containers can vary from one cloud provider to another, making the ramp-up curve steep and time-consuming.

Security and Network

When working on a single development machine, you don’t really have to worry about security or network access. There is no network intrusion as such, because you’re dealing with only a single host. For the same reason, troubleshooting is pretty simple too.

Now take that scenario and apply it to multiple hosts across a network in a production environment, for scalability reasons. Your network settings will require a lot more thought. To begin with, only authorized people should have access to your Docker containers. Public traffic should not be able to touch certain containers. Network tapping, brute-force login attempts, hacks, etc., must be watched for.

Security patches, whenever available, will have to be applied to all your Docker hosts. Using containers makes this much easier.
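
One building block here is Docker’s user-defined networks, which let you keep back-end containers off the public network entirely. The following is a minimal sketch; the image and container names are hypothetical, and a real setup would layer firewalls and access control on top.

         # Create an internal network: containers on it get no route to the outside world
         docker network create --internal backend
         # The database joins only the internal network (hypothetical image name)
         docker run -d --network backend --name funfeed-db funfeed-db-image
         # Only the public-facing service publishes a port on the host
         docker network create frontend
         docker run -d --network frontend -p 80:8080 --name funfeed-api funfeed-api-image
         # Connect the API container to the backend network, so it can reach the database
         docker network connect backend funfeed-api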

Load Balancing

Now that we’re aware that we’ll require multiple hosts for scalability reasons, balancing the load across those hosts is important. There are multiple load balancers readily available today, such as NGINX.

Even though you could use one of these readily available load balancers, with Docker, creating and destroying containers is common. This means that the load balancer’s configuration will have to be updated every time a Docker container is created or destroyed.

Every time you deploy a new version of your application, your load balancer will have to take care not to drop traffic or route it to the older version of your application.
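
As a minimal sketch of the wiring involved, assuming a hypothetical funfeed-api-image and an nginx.conf that lists the two replicas as upstream servers, you could front two containers with the official nginx image like this:

         # Put the replicas and the load balancer on a shared network
         docker network create web
         docker run -d --network web --name funfeed1 funfeed-api-image
         docker run -d --network web --name funfeed2 funfeed-api-image
         # nginx.conf (not shown) names funfeed1 and funfeed2 as upstream servers
         docker run -d --network web -p 80:80 \
             -v "$(pwd)/nginx.conf":/etc/nginx/nginx.conf:ro nginx

Every container created or destroyed after this still means editing nginx.conf and reloading, which is exactly the churn described above.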

Deployment

In a development environment, deploying and getting the services up and running is as simple as running docker-compose up . In a production environment, however, it might not be so simple. You will have to plan deployments in advance.

In a production environment, Docker Compose configurations will vary significantly from those in a development environment. In addition, as traffic to your application increases and your application matures, you’ll have continuous upgrades, hotfixes, and settings that must be kept consistent, resulting in a steady stream of related issues to deal with.
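
One way Docker Compose itself supports this split is through multiple configuration files. The following is a minimal sketch; docker-compose.prod.yml is a hypothetical override file holding production-only settings, such as restart policies and published ports.

         # Base file defines the services; the override file adjusts them for production
         docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d

Values in the later file override or extend values in the earlier one, so development defaults never leak into production.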

Service Discovery

Having an application with a growing number of microservices will require you to register these services. You’ll have to find efficient ways of managing your service registries. There are multiple tools to do this, such as ZooKeeper.

Regardless of which tool you select to manage your service registry, one thing to be very sure of is to keep your service registrations in sync with your Docker container instances. Doing so will ensure that any new service registered is also recognizable by its Docker container instance.
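
As a starting point, ZooKeeper itself runs nicely in a container. A minimal sketch, using the official zookeeper image, might look like the following; keeping registrations in sync with container lifecycles is additional work on top of this.

         # Run a single ZooKeeper node for the service registry
         docker run -d -p 2181:2181 --name zookeeper zookeeper
         # Quick liveness probe: 'ruok' should come back as 'imok'
         echo ruok | nc localhost 2181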

Log Management

On a single development machine, we used docker logs <container id> to view the logs of an instance of a container. With multiple Docker hosts and services spread across these Docker hosts, troubleshooting becomes tedious. Distributed logging will have to be put in place to enable viewing of logs across containers, to troubleshoot issues.

Needless to say, logs will be long and numerous. You’ll have to find a way to view and search these logs.
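
Docker’s logging drivers are the usual hook for this. A minimal sketch follows; the image name and the syslog endpoint are assumptions, and the right driver depends on your log aggregation stack.

         # Cap local json-file logs, so a chatty container can't fill the disk
         docker run -d --log-driver=json-file \
             --log-opt max-size=10m --log-opt max-file=3 funfeed-api-image
         # Or ship logs to a central collector (hypothetical syslog endpoint)
         docker run -d --log-driver=syslog \
             --log-opt syslog-address=udp://logs.example.com:514 funfeed-api-image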

Monitoring Docker Containers

You’ll have to watch the hosts and containers, to make sure they’re healthy and not running out of space. You’ll have to know the health of the entire system and each individual service as well.

You’ll need to have certain monitoring strategies in place for this. Tools such as Grafana can help you achieve this.
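
At the single-host level, Docker already gives you some of this. A minimal sketch follows; the /health endpoint is an assumption about your service, and tools such as Grafana would sit on top of metrics like these.

         # One-shot snapshot of CPU, memory, and I/O for all running containers
         docker stats --no-stream
         # Attach a health check at run time (the /health endpoint is hypothetical)
         docker run -d --name funfeed-api \
             --health-cmd "curl -f http://localhost:8080/health || exit 1" \
             --health-interval 30s funfeed-api-image
         docker inspect --format '{{.State.Health.Status}}' funfeed-api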

Managing Databases

In development environments, databases can be hosted in a single container, without having to worry about input/output (I/O) performance. This changes in a production environment. I/O performance becomes essential, especially if you care to provide a good consumer experience. Your database will have to scale and be highly available, in order to maintain good I/O performance.
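
At a minimum, the data files should live on a named volume, so that they survive container replacement. The following is a minimal sketch, using the official postgres image; scaling and high availability require database-specific tooling beyond it.

         # Keep database files on a named volume that outlives the container
         docker volume create funfeed-pgdata
         docker run -d --name funfeed-db \
             -v funfeed-pgdata:/var/lib/postgresql/data \
             -e POSTGRES_PASSWORD=changeme postgres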

These are only some of the challenges that you might encounter when you decide to take your application to production. Docker provides some amazing capabilities, but in spite of that, certain other tools are required to make scaling efficient, because Docker is not a full-blown architecture service. It’s a tool, and that’s all.

Orchestration Using Docker

What is container orchestration, after all? Put simply, container orchestration is the process of deploying multi-container applications on multiple machines. Or, even more essentially, it’s the process of transitioning from individual containers on a single host to multi-container applications on multiple machines.

Needless to say, in order to achieve this, one would require a distributed platform that can stay online through the entire lifetime of an application, surviving hardware and software failures and upgrades.

In order to enable orchestration, Docker came up with a solution known as “Docker in swarm mode.”

Basically, it consists of a group of Docker Engines on which applications can be deployed using the Docker API. API objects such as Service and Node can be used to do this.
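
A minimal sketch of those objects in action follows; the service name and image are hypothetical, and a real cluster would join additional worker nodes with docker swarm join.

         # Turn this engine into a swarm manager
         docker swarm init
         # Create a replicated Service; swarm schedules its tasks across nodes
         docker service create --name funfeed-api --replicas 3 \
             -p 80:8080 funfeed-api-image
         # Inspect the Node and Service objects
         docker node ls
         docker service ls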

There are multiple tools that can be used for orchestration, for example, Kubernetes. But one way to orchestrate Docker is with Docker itself! Docker orchestration is built in as part of the core Docker Engine, and it relies on some fundamental principles, such as simplicity, reliability, security, and backward compatibility.

Modern distributed applications that serve heavy traffic are mostly all going to run on multiple hosts and multiple machines and, therefore, will require orchestration as a critical element. More often than not, a new tool comes on the market, and developers must ramp up on it quickly. Before you know it, some other tool supersedes it, and it’s time to ramp up on that.

Simplicity makes it easier for developers to start using a tool quickly. At the same time, making a tool more powerful allows developers to use it for longer periods of time, thus providing more flexibility. Docker in swarm mode takes advantage of this fundamental principle: it’s built with simplicity in mind, yet it’s one of the most powerful tools available. It also focuses on resilience. Computers fail all the time, and systems should expect that and be able to adapt to failures effortlessly.

Needless to say, applications built on distributed systems must be highly secure. Security should be an assumed principle. Continuous certificate rotation, privacy updates, protection against network tapping, etc., are incorporated effortlessly in swarm mode.

Docker has had multiple versions, and millions of users use these different versions. For this reason, maintaining backward compatibility is essential for Docker, and that’s exactly what Docker in swarm mode provides.

Advanced Use Cases

Let’s look at where else Docker containers have left their mark and where they’re currently being put to advanced uses.
  • Land Information System (LIS): LIS is owned by NASA and has been extremely difficult to install, owing to its complexity and its dependencies on other complex libraries. With Docker, scaling LIS has become relatively simple and, hence, available to a larger group of users. Docker has also made LIS installation simpler. So, in this case, NASA uses Docker to simplify its installation process and improve its scalability, rather than to achieve continuous delivery.

  • Local area network (LAN) caches: An interesting example of an obscure use case is using Docker to set up a LAN cache. This saves you the grungy work that comes with setting up a LAN party. Even though this might not be a typical Docker use case, it’s definitely one that’s very interesting.

  • Government software: Docker has been quietly helping federal government software, which is a universe all its own. Docker has proven helpful in achieving the security and privacy needed in complex government software.

  • Bioinformatics: Many bioinformatics programs have been using Docker to build their own registries for bioinformatics tools and software. BioShaDock, for example, is a repository exclusively for bioinformatics programs, which differentiates it from a public Docker registry.

  • Internet of Things (IoT): Not surprisingly, Docker has entered the IoT realm as well. Resin.io leverages Docker for its deployment of IoT devices.

Tips and Tricks

Now that we’ve looked at some obscure but interesting use cases of Docker, let’s quickly take a look at some tips and tricks that can come in handy when debugging your Docker application.
  • HTTP proxy: A typical Dockerfile starts with a FROM instruction, with which you pull a public image from the Docker registry. This means the image will have to be pulled from the Internet. Note the following code snippet:

         FROM tifayuki/java:8
         MAINTAINER . . .
         RUN apt-get update && \
             wget download.java.net/glassfish/4.0/release/glassfish-4.0.zip
         . . .
You might run into an issue if you’re behind a proxy. In this case, you can set up your proxy using the ENV command in your Dockerfile. So, your Dockerfile will look like the following snippet:
         FROM tifayuki/java:8
         MAINTAINER . . .
         ENV http_proxy http://server:port
         ENV https_proxy http://server:port
         #. . . some other online commands
  • Listing all existing containers: You can use docker container ps -a to list all your containers, including those that have stopped running.

  • Stopping all running containers : Using docker container stop $(docker container ps -a -q) will stop all running containers.

  • Deleting all existing containers: docker container rm $(docker container ps -a -q) will delete all your existing containers. To remove containers that are still running, add the -f flag, so the command becomes docker container rm -f $(docker container ps -a -q).

  • Deleting all existing images: docker image rm $(docker image ls -aq) will let you delete all your existing images.

  • Using the CMD command in a Dockerfile: CMD and RUN are two commands that can become confusing when you’re trying to determine what runs when. RUN runs a command and commits the result at build time. The CMD command mainly provides the default for a running container: it should be used inside a Dockerfile only once, and it runs the software in your image at runtime, as illustrated in the sketch below.
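
A minimal sketch of the distinction, using a hypothetical image based on Ubuntu:

         FROM ubuntu:16.04
         # RUN executes at build time; the result is committed into the image
         RUN apt-get update && apt-get install -y curl
         # CMD only sets the default command executed when a container starts
         CMD ["curl", "--version"]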

Summary

In this chapter, I reviewed the decisions that you’ll have to make, in order to take your Docker application to production. You saw how network access and security, deployment of multiple Docker containers and multiple Docker hosts, etc., can be quite challenging.

You then saw how Docker has a swarm mode to help with orchestration, which is managing complex multi-container applications on multiple machines. You also learned some tips and tricks that can be very useful when building applications with Docker.

This concludes this book. All the knowledge you’ve gained, if put into practice, can tremendously increase the velocity of your software engineering.
