Deploying services

Most of the commands we will initially need are accessible through the docker service command:

$ docker service
<snip>
Commands:
  create      Create a new service
  inspect     Display detailed information on one or more services
  logs        Fetch the logs of a service or task
  ls          List services
  ps          List the tasks of one or more services
  rm          Remove one or more services
  scale       Scale one or multiple replicated services
  update      Update a service
As you might suspect, given how similar these commands are to the ones for managing containers, once you move to an orchestration platform, the ideal way to manage your services is through the orchestration layer itself rather than by fiddling with containers directly. I would go as far as to say that if you find yourself working with containers too much while running an orchestration platform, something has either not been set up, or not been set up correctly.

We will now try to get a service running on our Swarm, but since we are just exploring how all this works, we can use a very slimmed-down (and very insecure) version of our Python web server from Chapter 2, Rolling Up the Sleeves. Create a new folder and add the following to a new Dockerfile:

FROM python:3

ENV SRV_PATH=/srv/www/html

EXPOSE 8000

RUN mkdir -p $SRV_PATH && \
    groupadd -r -g 350 pythonsrv && \
    useradd -r -m -u 350 -g 350 pythonsrv && \
    echo "Test file content" > $SRV_PATH/test.txt && \
    chown -R pythonsrv:pythonsrv $SRV_PATH

WORKDIR $SRV_PATH

CMD [ "python3", "-m", "http.server" ]
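
The CMD line simply starts Python's built-in static file server on port 8000, serving whatever is in the working directory. As a quick sanity check of that behavior outside of Docker, the same serving logic can be sketched with nothing but the standard library (the temporary directory and file below are illustrative stand-ins for $SRV_PATH):

```python
import functools
import http.server
import os
import tempfile
import threading
import urllib.request

# Stand-in for $SRV_PATH with the same test file our Dockerfile creates
tmp = tempfile.mkdtemp()
with open(os.path.join(tmp, "test.txt"), "w") as f:
    f.write("Test file content")

# SimpleHTTPRequestHandler is what `python3 -m http.server` uses internally;
# port 0 asks the OS for any free port so the sketch never collides
handler = functools.partial(http.server.SimpleHTTPRequestHandler, directory=tmp)
server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

port = server.server_address[1]
body = urllib.request.urlopen(f"http://127.0.0.1:{port}/test.txt").read().decode()
print(body)  # Test file content
server.shutdown()
```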

Let's build it so that our local registry has an image to pull from when we define our service:

$ docker build -t simple_server .

With the image in place, let's deploy it on our swarm:

$ docker service create --detach=true \
         --name simple-server \
         -p 8000:8000 \
         simple_server
image simple_server could not be accessed on a registry to record
its digest. Each node will access simple_server independently,
possibly leading to different nodes running different
versions of the image.

z0z90wgylcpf11xxbm8knks9m

$ docker service ls
ID            NAME           MODE        REPLICAS  IMAGE          PORTS
z0z90wgylcpf  simple-server  replicated  1/1       simple_server  *:8000->8000/tcp

The warning shown is actually very important: the image only exists in the local Docker image cache on the machine where we built it, so a Swarm service spread between multiple nodes will have issues, since the other machines will not be able to pull the same image. For this reason, having an image registry available to all of the nodes from a single source is mandatory for cluster deployments. We will cover this issue in more detail as we progress through this and the following chapters.
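
As a rough sketch of how that usually gets solved, you can run a registry that every node can reach and reference the image through it. The address and port below are illustrative (a throwaway registry on the manager node itself); a real cluster would use a properly secured, shared registry:

```shell
# Run a throwaway registry (the official registry:2 image) on this node
docker run -d -p 5000:5000 --name registry registry:2

# Tag and push our image so every node can pull it from one place
docker tag simple_server 127.0.0.1:5000/simple_server
docker push 127.0.0.1:5000/simple_server

# Create the service from the registry-hosted image instead of the local cache
docker service create --detach=true \
       --name simple-server \
       -p 8000:8000 \
       127.0.0.1:5000/simple_server
```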

If we check out http://127.0.0.1:8000, we can see that our service is running and serving the directory listing.

If we scale this service to three instances, we can see how our orchestration tool is handling the state transitions:

$ docker service scale simple-server=3

image simple_server could not be accessed on a registry to record
its digest. Each node will access simple_server independently,
possibly leading to different nodes running different
versions of the image.

simple-server scaled to 3

$ docker service ls
ID            NAME           MODE        REPLICAS  IMAGE          PORTS
z0z90wgylcpf  simple-server  replicated  2/3       simple_server  *:8000->8000/tcp

$ # After waiting a bit, let's see if we have 3 instances now
$ docker service ls
ID            NAME           MODE        REPLICAS  IMAGE          PORTS
z0z90wgylcpf  simple-server  replicated  3/3       simple_server  *:8000->8000/tcp

$ # You can even use regular container commands to see it
$ docker ps --format 'table {{.ID}} {{.Image}} {{.Ports}}'
CONTAINER ID  IMAGE                 PORTS
0c9fdf88634f  simple_server:latest  8000/tcp
98d158f82132  simple_server:latest  8000/tcp
9242a969632f  simple_server:latest  8000/tcp

You can see how the orchestrator adjusts the container instances to fit our specified parameters. What if we now throw something into the mix that will happen in real life: a container dying?

$ docker ps --format 'table {{.ID}} {{.Image}} {{.Ports}}'
CONTAINER ID  IMAGE                 PORTS
0c9fdf88634f  simple_server:latest  8000/tcp
98d158f82132  simple_server:latest  8000/tcp
9242a969632f  simple_server:latest  8000/tcp

$ docker kill 0c9fdf88634f
0c9fdf88634f

$ # We should now only have 2 containers
$ docker ps --format 'table {{.ID}} {{.Image}} {{.Ports}}'
CONTAINER ID  IMAGE                 PORTS
98d158f82132  simple_server:latest  8000/tcp
9242a969632f  simple_server:latest  8000/tcp

$ # Wait a few seconds and try again
$ docker ps --format 'table {{.ID}} {{.Image}} {{.Ports}}'
CONTAINER ID  IMAGE                 PORTS
d98622eaabe5  simple_server:latest  8000/tcp
98d158f82132  simple_server:latest  8000/tcp
9242a969632f  simple_server:latest  8000/tcp

$ docker service ls
ID            NAME           MODE        REPLICAS  IMAGE          PORTS
z0z90wgylcpf  simple-server  replicated  3/3       simple_server  *:8000->8000/tcp

As you can see, the swarm bounces back up as if nothing happened, and this is exactly why containerization is so powerful: not only can we spread processing tasks among many machines and flexibly scale the throughput, but with identical services we don't care very much if some (hopefully small) percentage of them dies, since the framework makes recovery completely seamless for the client. With Docker Swarm's built-in service discovery, the load balancer shifts connections to whichever containers are running and available, so anyone trying to connect to our server should not see much of a difference.
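
The mechanism behind this recovery is a reconciliation loop: the manager continuously compares the desired replica count against the tasks actually running and converges the two. A toy model of that idea, with hypothetical task names standing in for real container IDs:

```python
def reconcile(running, desired):
    """Converge a task list toward the desired replica count,
    the way an orchestrator's reconciliation loop does."""
    running = list(running)
    counter = 0
    while len(running) < desired:      # too few: schedule replacements
        counter += 1
        running.append(f"task-new-{counter}")
    while len(running) > desired:      # too many: tear extras down
        running.pop()
    return running

tasks = ["task-a", "task-b", "task-c"]
tasks.remove("task-b")                 # a container dies unexpectedly
tasks = reconcile(tasks, 3)            # the manager converges back to spec
print(len(tasks))                      # 3
```

The real scheduler also decides *where* each replacement task lands, but the converge-to-desired-state loop is the core of why the kill above healed itself.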
