Docker Container Basics

Now that we are connected to our Azure VM we can start exploring Docker.

Docker Info

To make sure that the extension has successfully installed Docker on the machine, type

docker info

as shown in Figure 2.12.

Image

FIGURE 2.12: Docker info output


Image Note

The following commands are executed on the Docker host via SSH. Therefore, they are exactly the same whether you are connecting to the Docker host from a Windows or Mac OS X machine.


The first two lines indicate that we do not have any containers or images yet, as they show Containers: 0 and Images: 0.

Figure 2.13 shows a logical view of the current state and components of our Azure VM after the successful installation of Docker.

Image

FIGURE 2.13: Azure VM state after provisioning

Now we can create our first container. As in many other examples, we start with a simple scenario. We want to create a container that hosts a simple web site. As a first step, we need a Docker image. We can think of an image as a template for containers, which could contain an operating system such as Ubuntu, web servers, databases, and applications. Later in this chapter, we will learn how to create our own images, but for now we will start with an existing image.

Docker has the notion of image repositories. The public place for Docker repositories is Docker Hub (https://hub.docker.com). Docker Hub can host public and private repositories, as well as official repositories which are always public. The official repositories contain certified images from vendors such as Microsoft or Canonical, and can be consumed by everyone. Private repositories are only accessible to authenticated users of those repositories. Typical scenarios for private repositories are companies that do not want to share their images with others for security reasons, or companies who are working on new services or applications that should not be publicly known.

Docker Pull and Docker Search

To host a simple web application in a Docker container we need a web server. For this exercise, we’ll use NGINX. To create a container that is running NGINX we need a Docker image that contains NGINX.

Get the NGINX base image from Docker Hub by entering the following command (you might have to log in to Docker Hub from the CLI of the Docker host before you can pull the image):

docker pull nginx

The Docker command line also supports searching Docker Hub. The following command returns all images whose name or description matches the term nginx.

docker search nginx

Docker Images

During the pull, there is a lot of output in our command window. Running this command caused Docker to check locally for an image called nginx. If Docker cannot find the image locally, it pulls it from Docker Hub and stores it in the local image cache on the VM. We can now run

docker images

to check all the images that are on our VM. Figure 2.14 shows the shell after pulling down the image and running the “docker images” command.

Image

FIGURE 2.14: NGINX image on dockerhost

Figure 2.15 illustrates the state of the Azure VM after downloading the image from Docker Hub.

Image

FIGURE 2.15: Logical diagram of NGINX image on dockerhost

Docker Run and Docker PS

Now that we have our image locally, we can start our first container, by entering

docker run --name webcontainer -p 80:80 -d nginx

This command starts a container called “webcontainer” based on the “nginx” image that you downloaded in the previous section. The “-p” parameter maps a host port to a container port. Ports 80 and 443 are exposed by the NGINX image, and by using “-p 80:80” we map port 80 on the Docker host to port 80 of the running container. We could have also used the “-P” parameter, which dynamically maps the ports of a container to ports of the host. In Chapter 5, “Service Orchestration and Connectivity,” we’ll see how to use static port mapping because it makes our lives easier when running and exposing multiple containers on virtual machines in a cluster. Finally, the “-d” parameter tells Docker to run the container in the background.


Image Note

Docker pull is not required to download an image. Docker run will download an image automatically if the image is not found locally.


We can now run the docker ps command to check our running container. Figure 2.16 shows the output of the command.

Image

FIGURE 2.16: Output of docker ps

From a topology perspective on the Azure VM, we now have an image (NGINX) and a container (webcontainer) based on that image. Figure 2.17 provides a logical view of our Azure VM running the container “webcontainer.”

Image

FIGURE 2.17: Logical view of the container running on Azure VM

We can now access the default NGINX web site in our container directly from within the host, for example by executing the following command:

curl http://localhost

Figure 2.18 shows the welcome web page for NGINX, telling us that the web service is working.

Image

FIGURE 2.18: Working NGINX web server


Image How to Make the Web Site Accessible Through the Internet

To access the website over the internet, we need to add an endpoint to the virtual machine. As with all Azure virtual machine operations, we can use the portal, PowerShell, or the CLI to add the endpoint.


We just created our first container based on an image that we pulled from Docker Hub and familiarized ourselves with some basic Docker commands.

Next, we’ll look a bit deeper into the Docker basics and learn more about volumes and images, which are important Docker concepts.

Adding Content to a Container Using Volumes

In our scenario, we currently have a running container with the default installation of NGINX in it. This is great, but we want to host our own web site inside the container. So the question is, how do we handle situations where we need to place custom files inside a container?

There are a couple of ways of dealing with this. One is to have the container pull in content dynamically, and the other is to make the application and its components part of an image.

If we think about a container as an immutable object, we might think that it would make the most sense for all the content to be bundled as part of an image (and therefore part of any containers based on that image). However, several scenarios exist where it makes sense to have a container pull in content dynamically, share data between containers, and have the data independent from the container life cycle.

One very obvious example of this type of scenario is databases, where the data needs to persist in a way that is independent from the container life cycle. Think about a scenario where we need to update a container with MySQL to apply security patches. If the data were part of the container image, it would be deleted along with the container, and that would be a bad thing. Another scenario is development. A common approach is to have a local copy of our application source code on the Azure VM. When we create or start a container, we want to pull in the source code, but we do not want to create a new image every time we make some changes to our source code. Chapter 4, “Setting Up Your Development Environment,” covers the development scenario in detail. The solution to both the database and the development scenarios is Docker data volumes.

Docker data volumes are directories that exist within the filesystem of the Docker host that are mounted into running containers. The biggest advantage is that the data persisted in a data volume is independent from the container life cycle, meaning it is not deleted when the container is deleted. From a container life cycle perspective, it is important to understand that data volumes can be shared among multiple containers. The best way to understand data volumes is to create one. Let’s recreate the container “webcontainer,” using a data volume for our web site content. Before we can create a new one we need to stop and delete the webcontainer that is already running. The following commands will first stop the container and then delete it.

docker stop webcontainer
docker rm webcontainer


Image Use the Container ID to Delete a Container

We can also use the first four digits of the container id to delete a container. For this example, the commands for stopping and deleting a container are shown below:


docker stop cca0
docker rm cca0

We will store the sources for our custom web page in the directory /home/src on the Docker host.

We can use the following commands to create the directory, assuming we are already in the /home directory.

mkdir src
cd src

Next, let’s create a simple HTML page called index.html in the src directory. We can use the nano editor to create that file. Type nano index.html and hit “Enter” to open the editor.

The content of the HTML page is very simple and is shown below:

<html>
    <head>
    </head>
    <body>
           This is a Hello from a website in a container!
    </body>
</html>

Save the file as “index.html.”
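If you prefer to skip the interactive editor, the same file can be created non-interactively with a heredoc. This is just a sketch of an alternative to nano; it assumes you run it from the /home directory so the file lands in /home/src:

```shell
# Create the src directory (if it does not exist yet) and write the page in one step
mkdir -p src
cat > src/index.html <<'EOF'
<html>
    <head>
    </head>
    <body>
           This is a Hello from a website in a container!
    </body>
</html>
EOF
```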

Let’s look at how to mount directories from the Docker host into containers. To mount the host directory we need to execute the following command:

docker run --name webcontainer -v /home/src:/usr/share/nginx/html:ro -p 80:80 -d nginx

We have already learned that this command will create a container called webcontainer based on the NGINX image and map port 80 on the Docker host to port 80 in the container.

The new part is

-v /home/src:/usr/share/nginx/html:ro

The “-v” parameter mounts the “/home/src” directory created earlier to the “/usr/share/nginx/html” mount point in the container.


Image Note

The NGINX image uses the default NGINX configuration, so the root directory for the container is /usr/share/nginx/html.


Once the container is up and running, we can check our changes by entering curl http://localhost. The output should now look like Figure 2.19.

Image

FIGURE 2.19: Web site with custom content in simple webcontainer

Updating and Committing an Image

Another option is to update a container and commit the changes to an image. The first step would be to create a container with a standard input (stdin) stream. The following command creates a container and allocates a pseudo-tty by using the “-t” parameter, and opens a standard input (stdin) stream using the “-i” parameter:

docker run -t -i nginx /bin/bash

This will create a new container based on the “NGINX” image and drop you into the container’s shell, which will look similar to

root@67337e2dbcbb:/#

Now we can go ahead and install software and make other changes within the container. In our example, we resynchronize the package index files using apt-get update.

Once the container is in the state we want it to be, we can exit the container by entering

root@67337e2dbcbb:/# exit

Finally, we can commit a copy of that container to a new image using the following command from the Docker host:

docker commit -m "updates applied" -a "Boris Scholl" 67337e2dbcbb bscholl/nginx:v1

Figure 2.20 shows the entire flow of dynamically creating a new image.

Image

FIGURE 2.20: Create a new Docker image using docker commit

Adding Content to an Image Using a Dockerfile

We can also copy content into a container using a Dockerfile. A Dockerfile is a text file that contains instructions about how to build a Docker image, and as we shall see, it is the preferred approach.

Let’s start looking at the basic Dockerfile structure and syntax by using our NGINX example. The code below shows a Dockerfile that copies the contents of the “web” directory on the Docker host into the directory “/usr/share/nginx/html” in the container.

#Simple WebSite
FROM nginx
MAINTAINER Boris Scholl <[email protected]>
COPY web /usr/share/nginx/html
EXPOSE 80

The first line is a comment, as it is prefixed with the “#” sign. The “FROM” instruction tells Docker which image we want to base our new image on. In our case, it is the NGINX image that we pulled down earlier in the chapter. The “MAINTAINER” instruction specifies who maintains the image. As we will see in a later chapter, that is quite important information when we deal with several teams and many images. The “COPY” instruction tells Docker to copy the contents of the “web” directory on the Azure VM (Docker host) to the directory “/usr/share/nginx/html” in the container. Below is the folder structure on the Azure VM:

|-/src
  |-Dockerfile
  |-web
    |-index.html


Image Note

When copying files in the Dockerfile, the path to the local directory is relative to the build context where the Dockerfile is located. For this example, the content to copy is in the “src” directory. The Dockerfile is in the same directory.


Finally, we use the “EXPOSE” instruction to expose port 80. Now that we have the Dockerfile, we can build our image. The following command builds the image, and by using the “-t” parameter, we tag the image with the “customnginx” repository name.

docker build -t customnginx .

The period (“.”) at the end of the command tells Docker that the build context is the current directory. The build context is where the Dockerfile is located, and all “COPY” instructions are relative to the build context.

Once the build has been successful, we can run the docker images command again to see what images we now have in our local repository. Figure 2.21 shows the output of the docker build and docker images commands:

Image

FIGURE 2.21: Docker build and docker images output

As we can see, there are now three images on the Azure VM.

nginx: This is the official NGINX image we pulled from Docker Hub.

bscholl/nginx: This is the image we created using docker commit.

customnginx: This is the image we created from the Dockerfile using docker build.

This was just a small example to demonstrate how to create a Docker image using a Dockerfile. Table 2.1 provides a list of the most common instructions to use with Dockerfiles to build an image.

Image
Image

TABLE 2.1: Common commands
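To illustrate how several of these common instructions combine, below is a sketch of a slightly richer Dockerfile. It is a hypothetical example, not part of our NGINX exercise; the base image, package, and paths are chosen for illustration only:

```dockerfile
# Hypothetical Dockerfile combining several common instructions
FROM ubuntu:14.04
MAINTAINER Boris Scholl <[email protected]>
# RUN executes a command during the build and commits the result as a new layer
RUN apt-get update && apt-get install -y nginx
# ENV sets an environment variable available at build time and at run time
ENV SITE_ROOT /usr/share/nginx/html
# WORKDIR sets the working directory for the instructions that follow
WORKDIR /usr/share/nginx/html
# COPY copies files from the build context into the image
COPY web .
EXPOSE 80
# CMD defines the default command executed when a container starts
CMD ["nginx", "-g", "daemon off;"]
```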

To finish our Dockerfile exercise, we should test if we can create a new container based on our new image customnginx.

First, we should delete the webcontainer that we have created in our mounting exercise by executing:

docker stop webcontainer
docker rm webcontainer

Next, we create a new container by executing:

docker run --name webcontainer -d -p 80:80 customnginx

Executing curl http://localhost should return the same page as previously shown in Figure 2.19.

In this chapter, we have pulled down the NGINX image from Docker Hub and created two new Docker images. Figure 2.22 shows the logical view of the Azure VM.

Image

FIGURE 2.22: Logical view of Azure VM

Image Layering

If we look closer at Figure 2.22, we can see another great advantage of Docker. Docker “layers” images on top of each other. In fact, a Docker image is made of filesystems layered on top of each other.

So what does that mean and why is this a good thing? Let’s look at the layers of our recently created image by using the

docker history customnginx

command.

Figure 2.23 shows that the image has 15 layers.

Image

FIGURE 2.23: Layers of the image customnginx

Let’s look at the history of the NGINX image by executing

docker history nginx

Figure 2.24 shows that the image has 12 layers.

Image

FIGURE 2.24: Layers of image NGINX

If we compare the layers (or intermediate images) of the NGINX image with the layers of customnginx, we can see that Docker incrementally commits changes to the filesystem, with each change creating a new image layer. The customnginx image has exactly three more layers than the base image. If we look at the description of the three additional layers, we will find the instructions used in the Dockerfile:

MAINTAINER Boris Scholl <[email protected]>
COPY web /usr/share/nginx/html
EXPOSE 80

This means that Docker adds a layer for each Dockerfile instruction executed. This comes with many benefits, such as faster builds (unchanged layers are cached and reused) and rollback capabilities. As every image contains all its building steps, we can easily go back to a previous step by tagging a certain layer. To tag a layer we can simply use the

docker tag <imageid> <repository>:<tag>

command.

For the purpose of this book, we do not need to go into the details of how the various Linux filesystems work and how Docker takes advantage of them. Chapter 4 covers image layers from a development perspective.

If you are interested in advanced reading on that topic, you can check out the Docker layers chapter on https://docs.docker.com/engine/userguide/storagedriver/imagesandcontainers/.

Viewing Container Logs

Chapter 7 covers monitoring in detail, but there are situations where a container won’t start, or where you want to check whether the container was accessed by another container or service. In those cases, we can view the logs of a container by executing

docker logs webcontainer

The output is shown below:

172.17.0.1 - - [12/Dec/2015:17:16:11 +0000] "GET / HTTP/1.1" 200 95 "-" "curl/7.38.0" "-"
172.17.0.1 - - [12/Dec/2015:17:51:55 +0000] "GET / HTTP/1.1" 200 95 "-" "curl/7.38.0" "-"

If we wanted to continue to see live updates to the logs as they happen, we can add the “--follow” option to the docker logs command as shown below.

docker logs --follow webcontainer

The docker logs command also offers the parameters --since, --timestamps, and --tail to filter the logs.
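Since the NGINX image writes standard access-log lines to the container's stdout, the output of docker logs can also be post-processed with ordinary text tools. Here is a sketch using the sample output shown above; on a real host you would pipe docker logs webcontainer into awk instead of the sample file:

```shell
# Save the sample docker logs output to a file for illustration
cat > access.sample <<'EOF'
172.17.0.1 - - [12/Dec/2015:17:16:11 +0000] "GET / HTTP/1.1" 200 95 "-" "curl/7.38.0" "-"
172.17.0.1 - - [12/Dec/2015:17:51:55 +0000] "GET / HTTP/1.1" 200 95 "-" "curl/7.38.0" "-"
EOF

# Count requests per HTTP status code (field 9 in this log format)
awk '{print $9}' access.sample | sort | uniq -c
```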

Container Networking

Docker offers rich networking features that provide complete isolation of containers. Docker creates three networks by default when it is installed.

Bridge: This is the default network that all containers are attached to. It is usually called docker0. If we create a container without the --net flag, the Docker daemon connects the container to this network. We can see the network by executing ifconfig on the Docker host.

None: This instructs the Docker daemon not to attach the container to any part of the Docker host’s network stack. In this case, we can create our own networking configuration.

Host: This adds the container to the Azure VM’s network stack. The network configuration inside the container is identical to that of the Azure VM.

To choose a network other than “bridge,” for example “host,” we need to execute the command below:

docker run --name webcontainer --net=host -d -p 80:80 customnginx

In addition to the default networks, the --net parameter also supports the following options:

'container:<name|id>': reuses another container’s network stack.

'NETWORK': connects the container to a user-created network, built with the 'docker network create' command. Docker provides default network drivers for creating a new bridge network or overlay network. We can also write a network plugin or remote network to our own specifications, but this is beyond the scope of this chapter.


Image Overlay Network

An overlay network is a network that is built on top of another network. Overlay networks massively simplify container networking and are the way to deal with container networking going forward. In Chapter 5, “Service Orchestration and Connectivity,” we discuss clusters, which are collections of multiple Azure VMs. The cluster uses an Azure virtual network (VNET) to connect all the Azure VMs, and an overlay network would be built on top of that VNET. The overlay network requires a valid key-value store service, such as ZooKeeper, Consul, or etcd. Chapter 5 also covers key-value stores and how to set up an overlay network for our sample application.


Let’s have a closer look at the bridge network as it enables us to link containers, which is a basic concept that we should be aware of. By linking containers, we provide a secure channel for Docker containers to communicate with each other.

Start the first container.

docker run --name webcontainer -d -p 80:80 customnginx

Now we can start a second container and link it to the first one.

docker run --name webcontainer2 --link webcontainer:weblink -d -p 85:80 customnginx

The --link flag uses the format sourcecontainername:linkaliasname. In this case, the source container is webcontainer and we call the link alias weblink.

Next, we enter our running container webcontainer2 to see how Docker set up the link between the containers. We can use the exec command as shown below:

docker exec -it fcb9 bash

fcb9 are the first four digits of the container id for webcontainer2.

Once we are inside the container, we can issue a ping command to the webcontainer. As we can see in Figure 2.25 we can ping the webcontainer by its name.

Image

FIGURE 2.25: Pinging the webcontainer

Note that the IP address for webcontainer is 172.17.0.2. During startup, Docker created a host entry in the /etc/hosts file of webcontainer2 with the IP address for webcontainer as shown in Figure 2.26. We can get the host entries by executing

more /etc/hosts

Image

FIGURE 2.26: Linked container entry in /etc/hosts of webcontainer2
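Because the entry Docker writes is an ordinary hosts-file line, it can be inspected with standard text tools. Below is a sketch against a simulated hosts file; the IP addresses and container IDs are illustrative, and inside the real container you would read /etc/hosts instead:

```shell
# Simulated /etc/hosts content as Docker might generate it for webcontainer2
# (values are illustrative, not real output)
cat > hosts.sample <<'EOF'
127.0.0.1       localhost
172.17.0.2      weblink 0df31cb7c39f webcontainer
172.17.0.3      fcb9a8c4d210
EOF

# Resolve the link alias to the IP address of webcontainer
grep -w weblink hosts.sample | awk '{print $1}'
```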

In addition to the host entry, Docker also set environment variables during the start of webcontainer2 that hold information about the linked container. If we execute printenv we get the output shown in Figure 2.27. The environment variables that start with WEBLINK are the ones containing information about the linked containers.

Image

FIGURE 2.27: Environment variables of webcontainer2

For more advanced networking scenarios, check out https://docs.docker.com/v1.8/articles/networking/.

Environment Variables

Environment variables are critical when we start thinking about abstracting services in containers. Good examples are configuration and connection string information. Environment variables can be set by using the “-e” flag in docker run. Below is an example that creates an environment variable “SQL_CONNECTION” and sets the value to staging.

docker run --name webcontainer2 --link webcontainer:weblink -d -p 85:80 -e SQL_CONNECTION='staging' customnginx
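Inside the container, the value is visible to any process as an ordinary environment variable. The sketch below simulates the pattern an application or entrypoint script might use; the variable name comes from the example above, while the branching logic is hypothetical:

```shell
# Simulate the environment that docker run -e would create inside the container
export SQL_CONNECTION='staging'

# An application or entrypoint script can branch on the value
if [ "$SQL_CONNECTION" = "staging" ]; then
    echo "Using the staging connection string"
fi
```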

Chapters 4 and 6 cover the usage of environment variables in more detail.
