Global services

An alternative to replicated services is global services. In some cases, you may want the same service running on every node of the swarm, as a single container on each server. To do that, you can run a service in global mode: Docker schedules exactly one task on each node, and any new node that joins the swarm also gets a task scheduled on it.

Global services can be useful for high availability with components that are used by many services, but, again, you don't get a clustered application just by running many instances of it. The NATS message queue can run as a cluster across several servers, which would make it a good candidate for a global service. To run NATS as a cluster, though, each instance needs to know the addresses of the other instances, and that doesn't work well with the dynamic virtual IP addresses allocated by the Docker Engine.
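To see why, it helps to look at how a NATS cluster is normally wired up. This is a hedged sketch, not from the original text: the hostnames `nats-1` and `nats-2` are hypothetical, and the point is that each server's `--routes` list names its peers explicitly, which a single swarm virtual IP cannot express:

```shell
# Hypothetical NATS cluster member, configured outside swarm mode.
# Each instance listens for cluster traffic on port 6222 and must list
# its peers explicitly in --routes - a dynamic virtual IP from the
# Docker Engine can't stand in for these fixed peer addresses.
nats-server --cluster nats://0.0.0.0:6222 `
  --routes nats://nats-1:6222,nats://nats-2:6222
```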

Instead, I can run my Elasticsearch message handler as a global service, so every node will have an instance of the message handler running. You can't change the mode of a running service, so first I need to remove the original service:

> docker service rm nerd-dinner-index-handler
nerd-dinner-index-handler

Then, I can create a new global service:

> docker service create `
>> --mode=global `
>> --network nd-swarm `
>> --name nerd-dinner-index-handler `
>> dockeronwindows/ch05-nerd-dinner-index-handler:2e
q0c20sx5y25xxf0xqu5khylh7
overall progress: 2 out of 2 tasks
h2ripnp8hvty: running [==================================================>]
jea4p57ajjal: running [==================================================>]
verify: Service converged

Now I have one task running on each node in the swarm. The total number of tasks will grow if nodes are added to the cluster, and shrink if nodes are removed. This can be useful for services that you want to distribute for fault tolerance, where you want the total capacity of the service to be proportional to the size of the cluster.

Global services are also useful in monitoring and auditing functions. If you have a centralized monitoring system such as Splunk, or you're using Elasticsearch Beats for infrastructure data capture, you could run an agent on each node as a global service.
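As a sketch of that pattern, a monitoring agent can be deployed once per node with a global service. The image name and service name below are assumptions for illustration, not from the original text:

```shell
# Hypothetical example: run a monitoring agent on every node as a global
# service. Each node in the swarm gets exactly one agent container, and
# any node that joins later automatically gets its own agent instance.
docker service create `
  --mode=global `
  --name metrics-agent `
  my-registry/metricbeat-agent:latest
```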

With global and replicated services, Docker Swarm provides the infrastructure to scale your application and maintain specified service levels. This works well for on-premises deployments if you have a fixed-size swarm but variable workloads. You can scale application components up and down to meet the demand, provided they don't all require peak processing at the same time. You have more flexibility in the cloud, where you can increase the total capacity of your cluster, just by adding new nodes to the swarm, allowing you to scale your application services more widely.

Running applications at scale across many instances typically adds complexity: you need a way of registering all the active instances, a way of sharing load between them, and a way of monitoring all the instances, so that if any fail, no load is sent to them. This is all built-in functionality in Docker Swarm, which transparently provides service discovery, load balancing, fault tolerance, and the infrastructure for self-healing applications.
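You can see the service discovery part at work from inside any container attached to the same overlay network, where Docker's embedded DNS server resolves the service name. The container name here is a hypothetical stand-in for any task on the `nd-swarm` network:

```shell
# Hypothetical check of swarm service discovery: from a container on the
# nd-swarm network, the service name resolves through Docker's DNS server
# to the service's virtual IP, which load-balances across its tasks.
docker exec -it my-app-container nslookup nerd-dinner-index-handler
```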
