Deploying a swarm stack from a Docker Compose file

I can deploy the application with Docker Compose on a development laptop by specifying multiple Compose files—the core file and the local override. In swarm mode, you use the standard docker command rather than docker-compose to deploy a stack. The Docker CLI doesn't support multiple files for stack deployment, but I can generate a single stack file by using Docker Compose to join the source files together. This command generates a single Compose file called docker-stack.yml from the two Compose files for the stack deployment:

docker-compose -f docker-compose.yml -f docker-compose.swarm.yml config > docker-stack.yml

Docker Compose joins the input files and checks whether the output configuration is valid. I capture the output in a file called docker-stack.yml. This is an extra step that would easily fit into your deployment pipeline. Now I can deploy my stack on the swarm, using the stack file that contains the core service descriptions, plus the secrets and deployment configuration.
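The override file itself isn't listed here, but as a sketch of the pattern (the service name and values are illustrative, not the actual NerdDinner configuration), the swarm override layers deployment settings on top of the core service definitions:

```yaml
# docker-compose.swarm.yml -- illustrative sketch, not the real NerdDinner file
version: "3.7"

services:
  nerd-dinner-save-handler:
    deploy:                  # swarm-only section, ignored by docker-compose up
      replicas: 3            # example replica count
      restart_policy:
        condition: on-failure
```

docker-compose config merges this on top of the matching service in docker-compose.yml, so the generated docker-stack.yml contains both the core service definition and the deployment settings.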

You deploy a stack from a Compose file with a single command, docker stack deploy. You need to pass the location of the Compose file and a name for the stack, and then Docker creates all the resources in the Compose file:

> docker stack deploy --compose-file docker-stack.yml nerd-dinner
Creating service nerd-dinner_message-queue
Creating service nerd-dinner_elasticsearch
Creating service nerd-dinner_nerd-dinner-api
Creating service nerd-dinner_kibana
Creating service nerd-dinner_nerd-dinner-index-handler
Creating service nerd-dinner_nerd-dinner-save-handler
Creating service nerd-dinner_reverse-proxy
Creating service nerd-dinner_nerd-dinner-web
Creating service nerd-dinner_nerd-dinner-homepage
Creating service nerd-dinner_nerd-dinner-db

The result is a set of resources that are logically grouped together to form the stack. Unlike Docker Compose, which relies on naming conventions and labels to identify the grouping, the stack is a first-class citizen in Docker. I can list all stacks to see the basic details: the name of each stack and the number of services it contains:

> docker stack ls
NAME          SERVICES   ORCHESTRATOR
nerd-dinner   10         Swarm

There are 10 services in my stack, deployed from a single Docker Compose file that is 137 lines of YAML. That's a tiny amount of configuration for such a complex system: two databases, a reverse proxy, multiple front ends, a RESTful API, a message queue, and multiple message handlers. A system of that size would typically be described in a Word deployment document running to hundreds of pages, and it would take a weekend of manual work to run through all the steps. I deployed this with one command.

I can also drill down into the containers running the stack to see the status and the node they're running on with docker stack ps, or get a higher-level view of the services in the stack with docker stack services:

> docker stack services nerd-dinner
ID             NAME                                    MODE         REPLICAS   IMAGE
3qc43h4djaau   nerd-dinner_nerd-dinner-homepage        replicated   2/2        dockeronwindows/ch03...
51xrosstjd79   nerd-dinner_message-queue               replicated   1/1        dockeronwindows/ch05...
820a4quahjlk   nerd-dinner_elasticsearch               replicated   1/1        sixeyed/elasticsearch...
eeuxydk6y8vp   nerd-dinner_nerd-dinner-web             replicated   2/2        dockeronwindows/ch07...
jlr7n6minp1v   nerd-dinner_nerd-dinner-index-handler   replicated   2/2        dockeronwindows/ch05...
lr8u7uoqx3f8   nerd-dinner_nerd-dinner-save-handler    replicated   3/3        dockeronwindows/ch05...
pv0f37xbmz7h   nerd-dinner_reverse-proxy               replicated   1/1        sixeyed/traefik...
qjg0262j8hwl   nerd-dinner_nerd-dinner-db              replicated   1/1        dockeronwindows/ch07...
va4bom13tp71   nerd-dinner_kibana                      replicated   1/1        sixeyed/kibana...
vqdaxm6rag96   nerd-dinner_nerd-dinner-api             replicated   2/2        dockeronwindows/ch07...

The output here shows that I have multiple replicas running the frontend containers and the message handlers. In total, there are 15 containers running on my two-node swarm: two VMs with a combined total of four CPU cores and 8 GB of RAM. At idle, the containers use very little compute, so I have plenty of capacity to run extra stacks here. I could even deploy a copy of the same stack, using a different port for the proxy, and then I would have two completely separate test environments running on the same set of hardware.
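For that second environment, the only change needed in the stack file would be the proxy's published port (the port value here is just an assumption for illustration):

```yaml
# illustrative fragment for a second test environment: publish the proxy
# on a different host port so the two stacks don't collide
services:
  reverse-proxy:
    ports:
      - "8080:80"   # assumption: second environment published on port 8080
```

Deploying that modified file under a different stack name, with a hypothetical command like docker stack deploy --compose-file docker-stack-2.yml nerd-dinner-2, would create a fully separate set of services alongside the first stack.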

Grouping services into stacks makes it much easier to manage your application, especially when you have multiple apps running with multiple services in each. The stack is an abstraction over a set of Docker resources, but you can still manage the individual resources directly. If I run docker service rm, it will remove a service, even if the service is part of a stack. When I run docker stack deploy again, Docker will see that a service is missing from the stack and will recreate it.

When it comes to updating your application with new image versions or changes to service attributes, you can take the imperative approach and modify the services directly, or stay declarative by modifying the stack file and deploying it again. Docker doesn't force a process on you, but it's better to stay declarative and use the Compose files as the single source of truth.

I can scale up the message handlers in my solution either by adding replicas: 2 to the deploy section of the stack file and deploying it again, or by running docker service update --replicas=2 nerd-dinner_nerd-dinner-save-handler. If I update the service and don't change the stack file as well, then the next time I deploy the stack, my handler will go back down to one replica. The stack file is treated as the desired final state, and if the current state has deviated from it, it will be corrected when you deploy again.
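The declarative version of that scale change is a small edit to the service's entry in the stack file, something like:

```yaml
# stack-file fragment: declare the desired replica count for the handler
services:
  nerd-dinner-save-handler:
    deploy:
      replicas: 2   # swarm converges the service to this count on redeploy
```

Running docker stack deploy again with the updated file brings the service to two replicas, whether it currently has more or fewer.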

Using the declarative approach means you always make these sorts of changes in the Docker Compose file(s), and update your app by deploying the stack again. The Compose files live in source control alongside your Dockerfiles and the application source code, so they can be versioned, compared, and labelled. That means when you pull the source code for any particular version of your app, you'll have everything you need to build and deploy it.

Secrets and configurations are the exception: you keep them in a more secure location than the central source repository, and only admin users have access to the plain text. The Compose files just reference external secrets, so you get the benefit of a single source of truth for your app manifest in source control, with the sensitive data kept outside it.
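The pattern for referencing a secret without storing its value looks like this (the secret name and target path are illustrative, not the actual NerdDinner names):

```yaml
# compose-file fragment: the manifest references the secret by name only
services:
  nerd-dinner-db:
    secrets:
      - source: nerd-dinner-db.connection-string    # hypothetical secret name
        target: C:\secrets\connection-string.txt    # path surfaced inside the container

secrets:
  nerd-dinner-db.connection-string:
    external: true    # value was created on the swarm; it never lives in source control
```

Because the secret is marked external, docker stack deploy fails fast if the secret hasn't been created on the swarm, and the plain-text value never appears in the Compose files.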

Running a single node or a two-node swarm is fine for development and test environments. I can run the full NerdDinner suite as a stack, verifying that the stack file is correctly defined, and I can scale up and down to check the behavior of the app. This doesn't give me high availability, because the swarm has a single manager node, so if the manager goes offline, then I can't administer the stack. In the datacenter you can run a swarm with many hundreds of nodes, and get full high availability with three managers.

You can build a swarm with greater elasticity for high availability and scale by running it in the cloud. All the major cloud operators support Docker in their IaaS services, so you can easily spin up Linux and Windows VMs with Docker pre-installed, and join them to a swarm with the simple commands you've seen in this chapter.

Docker Swarm isn't just about running applications at scale across a cluster. Running across multiple nodes gives me high availability, so my application keeps running in the case of failure, and I can take advantage of that to support the application life cycle, with zero-downtime rolling updates and automated rollbacks.
