Networking and orchestration

We began the chapter by saying that containers are self-contained and isolated from each other by default, even when they're running on the same host. But to run real applications, we need containers to communicate. Fortunately, there is a way to do this: the Docker network.

Connecting containers

A Docker network is like a private chat room for containers: all the containers inside the network can talk to each other, but they can't talk to containers outside it, and vice versa. All you need to do is have Docker create a named network, then start containers inside that network, and they will be able to talk to each other.
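To see this isolation in action, here's a hypothetical sketch using the docker CLI directly (the network and container names are illustrative, and a running Docker daemon is assumed):

```shell
# Create two separate networks:
docker network create net-a
docker network create net-b

# Start two containers on net-a and one on net-b:
docker run -d --name alice --net net-a alpine sleep 300
docker run -d --name bob   --net net-a alpine sleep 300
docker run -d --name carol --net net-b alpine sleep 300

# Containers on the same network can reach each other by name:
docker exec alice ping -c1 bob

# But a container on a different network cannot be reached:
docker exec alice ping -c1 carol   # fails: carol is on net-b
```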

Let's develop an example to try this out. Suppose we want to run the Redis database inside a container and send data to it from another container. This is a common pattern for many applications.

In our example, we're going to create a Docker network and start two containers inside it. The first container is a public Docker Hub image that will run the Redis database server. The second container will install the Redis client tool and write some data to the Redis server container. Then, to check it worked, we can try to read the data back from the server.

Run the following command to apply the Docker network example manifest:

sudo puppet apply /vagrant/examples/docker_network.pp

If everything worked as it should, our Redis database should now contain a piece of data named message, containing a friendly greeting, proving that we've passed data from one container to another over the Docker network.

Run the following command to connect to the client container and check that this is the case:

sudo docker exec -it pbg-redis redis-cli get message
"Hello, world"

So how does it all work? Let's take a look at the example manifest. First of all, we create the network for the two containers to run in, using the docker_network resource in Puppet (docker_network.pp):

docker_network { 'pbg-net':
  ensure => present,
}
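Under the hood, this resource is roughly equivalent to creating the network with the docker CLI yourself (a sketch, using the pbg-net name from the manifest):

```shell
# Roughly what the docker_network resource does:
docker network create pbg-net

# Verify the network exists, and see which containers are attached to it:
docker network inspect pbg-net
```

The advantage of the Puppet resource is that it's idempotent: Puppet only creates the network if it doesn't already exist.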

Now, we run the Redis server container, using the public redis:alpine image:

docker::run { 'pbg-redis':
  image => 'redis:alpine',
  net   => 'pbg-net',
}
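For comparison, the approximate docker CLI equivalent of this resource would be the following sketch (note that docker::run also sets up a service to keep the container running, which the raw command doesn't):

```shell
# Approximate CLI equivalent of the docker::run resource above:
docker run -d --name pbg-redis --net pbg-net redis:alpine

# Confirm the container is attached to the pbg-net network:
docker inspect -f '{{json .NetworkSettings.Networks}}' pbg-redis
```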

Tip

Did you notice that we supplied the net attribute to the docker::run resource? This specifies the Docker network that the container should run in.

Next, we build a container which has the Redis client (redis-cli) installed, so that we can use it to write some data to the Redis container.

Here's the Dockerfile for the client container (Dockerfile.pbg-demo):

FROM nginx:1.13.0-alpine
RUN apk update \
  && apk add redis

LABEL org.label-schema.vendor="Bitfield Consulting" \
  org.label-schema.url="http://bitfieldconsulting.com" \
  org.label-schema.name="Redis Demo" \
  org.label-schema.version="1.0.0" \
  org.label-schema.vcs-url="github.com:bitfield/puppet-beginners-guide.git" \
  org.label-schema.docker.schema-version="1.0"

We build this container in the usual way using docker::image:

docker::image { 'pbg-demo':
  docker_file => '/vagrant/examples/Dockerfile.pbg-demo',
  ensure      => latest,
}
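If you wanted to build the same image by hand, the docker CLI equivalent would be roughly the following (a sketch; the build context directory is assumed to be the examples directory):

```shell
# Roughly what docker::image does with this Dockerfile:
docker build -t pbg-demo -f /vagrant/examples/Dockerfile.pbg-demo /vagrant/examples

# Check that the image was built:
docker images pbg-demo
```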

Finally, we run an instance of the client container with docker::run, passing in a command to redis-cli to write some data to the other container:

docker::run { 'pbg-demo':
  image   => 'pbg-demo',
  net     => 'pbg-net',
  command => '/bin/sh -c "redis-cli -h pbg-redis set message \'Hello, world\'"',
}

As you can see, this container also has the attribute net => 'pbg-net'. It will, therefore, run in the same Docker network as the pbg-redis container, and so the two containers will be able to talk to each other.

When the container starts, the command attribute calls redis-cli with the following command:

redis-cli -h pbg-redis set message "Hello, world"

The -h pbg-redis argument tells redis-cli to connect to the Redis server running on the host named pbg-redis.

Tip

How does using the pbg-redis name connect to the right container? When you start a container inside a network, Docker automatically configures DNS lookups within the container to find the other containers in the network by name. When you reference a container name (the title of the container's docker::run resource, which in our example is pbg-redis), Docker will route the network connection to the right place.
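You can see this embedded DNS at work for yourself. Assuming the example containers are still running, the following sketch resolves a peer container's name from inside pbg-redis (busybox's nslookup is available in the Alpine-based image):

```shell
# Resolve a peer container's name using Docker's embedded DNS:
docker exec pbg-redis nslookup pbg-demo

# The embedded DNS server itself lives at 127.0.0.11 inside
# containers on user-defined networks:
docker exec pbg-redis cat /etc/resolv.conf
```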

The command set message "Hello, world" creates a Redis key named message and gives it the value "Hello, world".
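Note that the quoting here has two layers: the outer double quotes are consumed by /bin/sh -c, while the inner quotes keep Hello, world together as a single argument. You can see the effect with printf standing in for redis-cli, since printf '%s\n' prints each argument it receives on its own line:

```shell
# printf stands in for redis-cli here; each output line is one argument
# as the shell delivers it. "Hello, world" arrives as a single argument.
/bin/sh -c "printf '%s\n' set message 'Hello, world'"
# prints:
#   set
#   message
#   Hello, world
```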

We now have all the necessary techniques to containerize a real application: using Puppet to manage multiple containers, built from dynamic data, pushed to a registry, updated on demand, communicating over the network, listening on ports to the outside world, and persisting and sharing data via volumes.

Container orchestration

We've seen a number of ways to manage individual containers in this chapter, but the question of how to provision and manage containers at scale and across multiple hosts (what we call container orchestration) remains.

For example, if your app runs in a container, you probably won't be running just one instance of the container; you need to run multiple instances, and route and load-balance traffic to them. You also need to be able to distribute your containers across multiple hosts, so that the application is resilient against the failure of any individual container host.

What is orchestration?

When running containers across a distributed cluster, you also need to be able to deal with issues such as networking between containers and hosts, failover, health monitoring, rolling out updates, service discovery, and sharing configuration data between containers via a key-value database.

Although container orchestration is a broad task, and different tools and frameworks focus on different aspects of it, the core requirements of orchestration include:

  • Scheduling: Running a container on the cluster and deciding which containers to run on which hosts to provide a given service
  • Cluster management: Monitoring and marshalling the activity of containers and hosts across the cluster, and adding or removing hosts
  • Service discovery: Giving containers the ability to find and connect to the services and data they need to operate
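As a taste of what these requirements look like in practice, Docker's built-in Swarm mode covers all three with a few commands. The following is a sketch, not part of the chapter's example code, and assumes a host where you're free to enable Swarm mode:

```shell
# Turn this host into a single-node Swarm cluster (cluster management):
docker swarm init

# Run a replicated service; the Swarm scheduler decides which nodes
# run the three container instances (scheduling):
docker service create --name web --replicas 3 -p 80:80 nginx:alpine

# Other services on the same overlay network can reach this one simply
# by its name, 'web' (service discovery), and Swarm load-balances
# requests across the replicas.
docker service ls
```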

What orchestration tools are available?

Google's Kubernetes and Docker's Swarm are both designed to orchestrate containers. Another product, Apache Mesos, is a cluster management framework which can operate on different kinds of resources, including containers.

Most containers in production today are running under one of these three orchestration systems. Kubernetes has the biggest user base and the most mature ecosystem, but Swarm, though a relatively new arrival, is part of the official Docker stack, so is being rapidly adopted.

Because all these products are necessarily rather complicated to set up and operate, there is also the option of Platform-as-a-Service (PaaS) orchestration; essentially, running your containers on a managed cloud platform. Google Container Engine (GKE) is Kubernetes as a service, while Amazon's EC2 Container Service (ECS) is a proprietary orchestration system that fills a similar role.

As yet, Puppet integration with container orchestrators is limited and at an early stage, though, given the popularity of containers, this is likely to advance rapidly. There is some elementary support for generating Kubernetes configuration from Puppet resources, and some for managing Amazon ECS resources, but it's fair to say that automating container orchestration at scale with Puppet is still in its infancy. Watch this space, however.
