In this chapter, we are going to be looking at launching more than just a simple web server using our local Docker installation. We will look at the following topics:
build
command

We will then look at using all of the above techniques to launch a WordPress and Drupal application stack.
Before we start learning how to launch containers, we should quickly discuss some of the more common terminology we are going to be using in this chapter.
Please note that the Docker commands in this chapter have been written for use with Docker 1.13 and later; trying to run commands such as docker image pull nginx in older versions will fail with an error. Please refer to Chapter 1, Installing Docker Locally, for details on how to install the latest version of Docker.
A Docker image is a collection of all the files that make up an executable software application: the application itself plus all the libraries, binaries, and other dependencies, such as deployment descriptors, needed to run the application anywhere without a hitch. The files in a Docker image are read-only, so the content of the image cannot be altered. If you choose to alter the content of your image, the only option Docker allows is to add another layer containing the changes. In other words, a Docker image is made up of layers, which you can review using the docker image history subcommand.
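The layers of any local image can be listed from the command line. A minimal session sketch (the nginx image is just an example; this assumes a working Docker 1.13+ installation):

```shell
# Download an image from the default registry, then list the
# layers it is built from; each line of the history output
# corresponds to one read-only layer of the image.
docker image pull nginx
docker image history nginx
```

The topmost lines of the history are the most recently added layers, with the base image's layers at the bottom.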
The Docker image architecture leverages this layering concept to seamlessly add capabilities to existing images, meeting varying business requirements and increasing the reuse of images. In other words, capabilities can be added to an existing image by stacking additional layers on top of it and deriving a new image. Docker images thus have a parent/child relationship, and the bottom-most image is called the base image. The base image is a special image that doesn't have any parent:
In the previous diagram, Ubuntu is a base image and it does not have any parent image. Everything starts with this base image; the wget capability is then added to it as a layer, and the resulting wget image references the Ubuntu image as its parent. In the next layer, an instance of the Tomcat application server is added, and it refers to the wget image as its parent. Each addition made to the original base image is stored in a separate layer, building up a hierarchy that retains the original identity of each image.
Precisely speaking, any Docker image has to originate from a base image, and an image is continuously enriched in functionality by adding fresh modules, each as a new layer on top of the existing image, as illustrated in the previous diagram.
The Docker platform provides a simple way of building new images or extending existing ones. You can also download Docker images that other people have already created and deposited in Docker image repositories (private or public).
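In practice, the usual way to build a new image by stacking layers on a base image is a Dockerfile: each instruction adds one layer on top of its parent. A minimal sketch mirroring the Ubuntu/wget/Tomcat example above (package names are illustrative):

```dockerfile
# Base image: the bottom-most layer with no parent.
FROM ubuntu:16.04

# Adds a new layer containing the wget binary.
RUN apt-get update && apt-get install -y wget

# Adds another layer with the Tomcat application server.
RUN apt-get install -y tomcat8
```

You would then build the image with a command such as docker image build -t myimage . and inspect the resulting layers with docker image history myimage.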
A Docker Registry is a place where Docker images can be stored so that they can be publicly or privately found, accessed, and used by software developers worldwide to quickly craft fresh, composite applications. Because all the stored images will have gone through multiple validations, verifications, and refinements, the quality of those images is very high.
Using the docker image push subcommand, you can dispatch your Docker image to the registry so that it is registered and deposited. Using the docker image pull subcommand, you can download a Docker image from the registry.
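A minimal push/pull session might look like the following sketch (the user ID and image names are illustrative; pushing requires an account on the registry and a prior docker login):

```shell
# Authenticate against the registry (the Docker Hub by default).
docker login

# Tag a local image with your user ID so the registry knows
# which repository it belongs to.
docker image tag myapp myusername/myapp:1.0

# Upload the image to the registry...
docker image push myusername/myapp:1.0

# ...and later download it again on any machine.
docker image pull myusername/myapp:1.0
```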
A Docker Registry could be hosted by a third party as a public or private registry, such as one of the following registries:
Every institution, innovator, and individual can have their own Docker Registry to store their images for internal and/or external access and usage.
In the previous chapter, when you ran the docker image pull subcommand, the nginx image was downloaded seemingly by magic. In this section, let's unravel the mystery around the docker image pull subcommand and see how the Docker Hub makes this effortless result possible.
The good folks in the Docker community have built a repository of images and made it publicly available at a default location, index.docker.io. This default location is called the Docker Hub. The docker image pull subcommand is programmed to look for images at this location; thus, when you pull an nginx image, it is effortlessly downloaded from the default registry. This mechanism helps to speed up the spinning up of Docker containers.
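One way to see the default registry at work is to note that a bare image name is shorthand for a fully qualified one. A sketch, assuming Docker 1.13 or later:

```shell
# These two commands pull the same image: a bare name is resolved
# against the default registry (docker.io, i.e. the Docker Hub),
# the default "library" namespace used for official images, and
# the default "latest" tag.
docker image pull nginx
docker image pull docker.io/library/nginx:latest
```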
The Docker Hub is the official repository that contains all the painstakingly curated images created and deposited by the worldwide Docker development community. This curation ensures that all the images stored in the Docker Hub are secure and safe, through a host of quarantine tasks. There are additional mechanisms, such as image digests and content trust, which give you the ability to verify both the integrity and the publisher of all the data received from a registry over any channel.
There are proven verification and validation methods for cleaning up any knowingly or unknowingly introduced malware, adware, viruses, and so on, from these Docker images.
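Both digests and content trust can be exercised directly from the command line. A sketch (the nginx image is illustrative; content trust only succeeds for images that the publisher has signed):

```shell
# With content trust enabled, docker verifies image signatures on
# pull and refuses unsigned or tampered images.
export DOCKER_CONTENT_TRUST=1
docker image pull nginx

# docker image pull also prints the image digest, an immutable
# content hash; you can reuse it to pin an exact image version:
#   docker image pull nginx@sha256:<digest>
```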
In addition to the official repositories, the Docker Hub Registry also provides a platform for third-party developers and providers to share their images for general consumption. Third-party images are prefixed by the user ID of their developers or depositors.
For example, russmckendrick/cluster is a third-party image, wherein russmckendrick is the user ID and cluster is the image repository name. You can download any third-party image by using the docker image pull subcommand, as shown here:
docker image pull russmckendrick/cluster
Apart from the preceding repository, the Docker ecosystem also provides a mechanism for leveraging images from third-party repository hubs other than the Docker Hub Registry, as well as images hosted by local repository hubs. As mentioned earlier, the Docker engine is programmed to look for images at index.docker.io by default, whereas in the case of a third-party or local repository hub, we must manually specify the path from which the image should be pulled.
A manual repository path is similar to a URL without a protocol specifier, such as https://, http://, or ftp://.
Following is an example of pulling an image from a third-party repository hub:
docker image pull registry.domain.com/myapp
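Pushing to a third-party or local registry works the same way: tag the image with the registry's path, then push it. A sketch (registry.domain.com and the myapp image name are illustrative):

```shell
# Tag a local image with the full registry path...
docker image tag myapp registry.domain.com/myapp

# ...then push it to that registry instead of the Docker Hub.
docker image push registry.domain.com/myapp
```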