Running services across many containers

Replicated services are how you scale in swarm mode, and you can update running services to add or remove containers. Unlike Docker Compose, you don't need a Compose file that defines the desired state of each service; that detail is already stored in the swarm from the docker service create command. To add more message handlers, I use docker service scale, passing the name of one or more services and the desired replica level:

> docker service scale nerd-dinner-save-handler=3
nerd-dinner-save-handler scaled to 3
overall progress: 1 out of 3 tasks
1/3: starting [============================================> ]
2/3: starting [============================================> ]
3/3: running [==================================================>]

The message handler service was created with the default of a single replica, so this adds two more containers to share the work of the SQL Server handler service. In a multi-node swarm, the manager can schedule the containers to run on any node with capacity. I don't need to know or care which server is actually running the containers, but I can drill down into the service list with docker service ps to see where the containers are running:

> docker service ps nerd-dinner-save-handler
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE

sbt4c2jof0h2 nerd-dinner-save-handler.1 dockeronwindows/ch05-nerd-dinner-save-handler:2e win2019-dev-02 Running Running 23 minutes ago
bibmh984gdr9 nerd-dinner-save-handler.2 dockeronwindows/ch05-nerd-dinner-save-handler:2e win2019-dev-02 Running Running 3 minutes ago
3lkz3if1vf8d nerd-dinner-save-handler.3 dockeronwindows/ch05-nerd-dinner-save-handler:2e win2019-02 Running Running 3 minutes ago

In this case, I'm running a two-node swarm, and the replicas are split between the nodes win2019-dev-02 and win2019-02. Swarm mode refers to service processes as replicas, but they're actually just containers. You can log on to the nodes of the swarm and administer service containers with the same docker ps, docker logs and docker top commands, as usual.
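As a sketch of what that looks like, the commands below are run directly on one of the nodes; the task container name is illustrative (real names follow the pattern service-name.slot.task-id, as shown in the docker service ps output):

```shell
# on the node itself, find the service's task containers -
# they are named <service>.<slot>.<task-id>
docker ps --filter "name=nerd-dinner-save-handler"

# the standard container commands work on a task container
# just like any other container
docker logs nerd-dinner-save-handler.1.sbt4c2jof0h2
docker top nerd-dinner-save-handler.1.sbt4c2jof0h2
```
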

Typically, you won't do that. The nodes running replicas are just black boxes that are managed for you by the swarm; you work with your services through the manager node. Just as Docker Compose presents a consolidated view of logs for a service, you can get the same from the Docker CLI connected to a swarm manager:

PS> docker service logs nerd-dinner-save-handler
nerd-dinner-save-handler.1.sbt4c2jof0h2@win2019-dev-02
| Connecting to message queue url: nats://message-queue:4222
nerd-dinner-save-handler.1.sbt4c2jof0h2@win2019-dev-02
| Listening on subject: events.dinner.created, queue: save-dinner-handler
nerd-dinner-save-handler.2.bibmh984gdr9@win2019-dev-02
| Connecting to message queue url: nats://message-queue:4222
nerd-dinner-save-handler.2.bibmh984gdr9@win2019-dev-02
| Listening on subject: events.dinner.created, queue: save-dinner-handler
...
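The service logs command supports the same refinements you'd expect from docker logs. A couple of useful variations, assuming the same service name:

```shell
# show only the most recent lines from each replica,
# and keep streaming new output as it arrives
docker service logs --tail 10 --follow nerd-dinner-save-handler

# timestamps help correlate output across replicas
docker service logs --timestamps nerd-dinner-save-handler
```
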

Replicas are how the swarm provides fault tolerance to services. When you specify the replica level for a service with the docker service create, docker service update, or docker service scale command, the value is recorded in the swarm. The manager node monitors all the tasks for the service. If containers stop and the number of running containers falls below the desired replica level, new tasks are scheduled to replace the stopped containers. Later in the chapter, I'll run the same solution on a multi-node swarm and show that I can take a node out of the swarm without causing any loss of service.
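You can see that self-healing behavior on a small scale by forcibly removing one of the task containers; the container name here is illustrative, taken from the docker service ps output earlier:

```shell
# on the node running the replica, kill the task container
docker rm -f nerd-dinner-save-handler.2.bibmh984gdr9

# back on the manager - the old task shows a Shutdown/Failed state,
# and a new task has been scheduled to restore the replica level
docker service ps nerd-dinner-save-handler
```

The manager doesn't restart the container you removed; it creates a fresh task, which may land on a different node, because the swarm only cares about keeping the desired number of replicas running.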
