Docker Swarm

Docker, Inc. provides a native clustering solution called Docker Swarm. A Swarm cluster is a pool of Docker nodes that can be managed as if they were a single machine. Swarm uses the standard Docker API, which means existing tools, including the Docker client, remain fully compatible. Docker Swarm uses a monolithic scheduler: a single Swarm Master, aware of the entire cluster state, is responsible for all scheduling decisions.

As shown in Figure 5.2, a Docker Swarm cluster will contain one or more Swarm Masters, a number of nodes running the Docker daemon, and a discovery backend, not to be confused with container discovery services covered later in the networking section of this chapter.


FIGURE 5.2: Docker Swarm cluster overview

Master Nodes

A Swarm cluster contains one or more master nodes. Only one master performs cluster scheduling work at a time, but additional masters can run to provide high availability. The masters elect a leader; if the elected leader becomes unavailable for any reason, another master is elected and takes over handling client requests and scheduling work. Leader election requires a supported external service: services such as Consul, ZooKeeper, and etcd are commonly used both for this purpose and as the discovery backend. The master uses the list of nodes in the discovery backend to manage containers on the nodes, communicating with them over the standard Docker protocol, the same one the client uses.
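As a sketch, a highly available pair of masters might be started like this against a Consul backend (the addresses are placeholders, and the standalone `swarm` image syntax is assumed):

```shell
# Run this on each master host. The --replication flag enables leader
# election among the masters through the Consul discovery backend;
# --advertise tells the other masters how to reach this one.
# <node-ip> and <consul-ip> are placeholders for real addresses.
docker run -d -p 4000:4000 swarm manage \
    --replication \
    --advertise <node-ip>:4000 \
    consul://<consul-ip>:8500
```

With two masters started this way, the Docker client can point at either one; requests sent to a non-leader are forwarded to the elected leader.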

Discovery Backend

The Docker Swarm discovery backend is a pluggable service that provides cluster discovery and maintains cluster state. When a node starts, it uses this service to register itself and join the cluster. Because the backend is pluggable, a cloud-hosted option is available, as well as many others, such as the Consul, ZooKeeper, and etcd services mentioned earlier.
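A node's registration with the backend can be sketched as follows, again using Consul and placeholder addresses with the standalone `swarm` image:

```shell
# Run on each worker node. The swarm join agent registers the node's
# Docker daemon endpoint with the discovery backend so the master can
# find it. <node-ip> and <consul-ip> are placeholders.
docker run -d swarm join \
    --advertise=<node-ip>:2375 \
    consul://<consul-ip>:8500
```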

Swarm Strategies

The Docker Swarm scheduler supports multiple strategies that determine how Swarm computes a ranking for container placement. When we run a new container, Docker Swarm schedules it on the node with the highest computed ranking for the selected strategy. Swarm currently supports three strategies: spread, binpack, and random. The spread and binpack strategies consider each node's available CPU, RAM, and number of running containers. The random strategy simply selects a node at random and is primarily used for debugging. The spread strategy favors nodes with the fewest containers, keeping the distribution across the cluster even. The binpack strategy favors nodes with the most containers, filling each node before moving to the next.
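The strategy is selected when the manager starts. A minimal sketch, assuming the standalone `swarm` image and a placeholder Consul address:

```shell
# Start the manager with the binpack strategy instead of the default
# spread strategy; valid values are spread, binpack, and random.
docker run -d -p 4000:4000 swarm manage \
    --strategy binpack \
    consul://<consul-ip>:8500
```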

An API is available for creating new Swarm strategies, so if a strategy with a needed algorithm does not exist, we can create one.

Swarm Filters

Docker Swarm comes with multiple filters that can be used to schedule containers on a subset of nodes. Constraint filters match key/value pairs associated with nodes and can be used to select specific nodes. Affinity filters can be used to schedule containers that need to run close to other containers. Port filters handle host port conflicts, ensuring that a given published port is used only once per node. A health filter prevents the scheduling of containers on an unhealthy node.
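Constraint and affinity filters are expressed as environment variables on `docker run`. A sketch, assuming a node whose daemon was started with the label `storage=ssd` and hypothetical container names:

```shell
# Constraint filter: only schedule on nodes whose Docker daemon was
# started with --label storage=ssd.
docker run -d -e constraint:storage==ssd --name db mysql

# Affinity filter: place this container on the same node as the
# container named db.
docker run -d -e affinity:container==db --name app nginx
```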

Creating a Swarm on Azure

A Docker Swarm cluster can be created on Microsoft Azure using an ARM template, the Azure Container Service (ACS), a Docker-hosted service called Tutum (http://tutum.co), or even Docker Machine. Docker Machine is a tool that provisions Docker hosts on local machines or on cloud providers such as Microsoft Azure.
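Provisioning an Azure host with Docker Machine might look like the following; the exact flags accepted by the `azure` driver vary across Docker Machine versions, so treat this as illustrative:

```shell
# Create a Docker host in Azure. <subscription-id> is a placeholder,
# and swarm-node-01 is an arbitrary machine name.
docker-machine create --driver azure \
    --azure-subscription-id <subscription-id> \
    swarm-node-01
```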

An ARM template is currently available on GitHub to simplify the process of creating a Docker Swarm deployment on Azure (https://github.com/Azure/azure-quickstart-templates/tree/master/docker-swarm-cluster). This template can be copied into the source control management system of the project it will be used in, then extended to meet the project's infrastructure requirements.

The Docker Swarm Cluster Template automatically builds and configures three Swarm Manager nodes to ensure maximum availability, and we can specify the number of application nodes via a template parameter. If we need to modify the template, it can easily be forked and changed to meet the needs of the project. For example, we might want to install additional monitoring services on the nodes, change how the network is set up, or use a different cluster discovery backend.

Connecting to the Swarm

Once the cluster deployment has completed we can connect to Docker Swarm in the same way we connect to a single instance of the Docker daemon. We can use the Docker client to execute standard Docker API commands such as info, run, and ps.

For example, the following command lists the running containers on an Azure Swarm:

$ docker -H tcp://<manager DNS name>-manage.<location>.cloudapp.azure.com:2375 ps


Secure Communications

For situations that require the Docker client to connect to a daemon over a public network, it is recommended that Transport Layer Security (TLS) is configured. More information on configuring TLS in Docker can be found here: https://docs.docker.com/engine/security/https/. Alternatively, an SSH session can be established to the Swarm Manager and the local Docker client can be used.
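When TLS has been configured on the daemon, the client passes its certificates explicitly. A sketch with placeholder certificate paths and the conventional TLS port 2376:

```shell
# Connect to the Swarm Manager with TLS verification enabled.
# ca.pem, cert.pem, and key.pem are the files generated when TLS
# was configured, per the Docker documentation linked above.
docker --tlsverify \
    --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem \
    -H tcp://<manager DNS name>:2376 ps
```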


One of the nice things about working with Docker Swarm is its simplicity: the Docker Swarm API behaves like a single instance of Docker, only with an entire cluster behind it.
