© Kinnary Jangla 2018
Kinnary Jangla, Accelerating Development Velocity Using Docker, https://doi.org/10.1007/978-1-4842-3936-0_2

2. Docker

Kinnary Jangla, San Francisco, CA, USA

Docker is another term for longshoreman. Longshoreman: a person employed by a port to load and unload ships.

https://www.collinsdictionary.com/us/dictionary/english/docker

In the last chapter, you saw what containers are and the differences between them and virtual machines (VMs). You also read about some of the advantages of containers and the challenges of using them.

Docker provides a solution to some of the problems posed by containers. But why did Docker become so successful only in recent years? Let’s look into that a little.

In this chapter, you will learn about the evolution of Docker and the reasons for its wide adoption by the software industry. You will learn some basics of Docker, some basic use cases for it, and some of its main components. We’ll dive deeper into these in future chapters.

History

As new as containerization and Docker might sound to you, the intriguing wrinkle is that they’re really not new. The idea of containers has been around since the early days of Unix, with the chroot command. Ring a bell? Docker was originally built on Linux Containers (LXC), which were introduced in 2008.

As you should know from having read Chapter 1, containerized applications share a common operating system (OS) kernel, eliminating the need for each instance to run its own separate system. An application can be deployed in seconds and uses far fewer resources than hypervisor-based virtualization. However, because applications rely on a common OS kernel, this approach works only for applications that can share the same kernel version. Docker found a way to address this limitation.

Docker was released as an open source project by dotCloud, Inc., in 2013. dotCloud was a San Francisco–based technology startup founded by the French-born American developer and entrepreneur Solomon Hykes. Docker relies heavily on namespaces and cgroups, both Linux kernel features, to ensure resource isolation and to package an application along with its dependencies. It is this bundling of dependencies into a single package that lets an application run across different platforms and still support a level of portability. This also allows developers to develop in the language of their choice, on a platform of their choice. This flexibility is what attracted so much interest in recent years.

Docker became extremely popular with many fast-growing companies trying to build test and development environments that could replicate their production systems. Today, Docker is used by some well-known companies, including PayPal, Spotify, Yelp, and Pinterest, all of which are finding value in the software.

Let’s look at a time line of Docker milestones, according to the Container Journal. Docker’s source code was released as open source software in March 2013; needless to say, everyone had access to it after that. About a year later, Docker built the libcontainer framework and switched to it from LXC. Around the same time, as Docker kept growing in popularity, demand for orchestration tools increased, because orchestration frameworks are key to scaling Docker containers. In June 2014, Google introduced Kubernetes, which helped Docker scale. Later that year, Amazon announced the EC2 Container Service, a cloud-based containers-as-a-service offering. In June 2015, the Open Container Initiative, which promotes open standards related to containers, was launched. In early 2016, Docker acquired Unikernel Systems, a small company working on unikernel technology. By June 2016, Docker had become central to the container ecosystem, and it included the Swarm orchestrator in its platform, even though Swarm remained replaceable. Later that year, Docker added native support for Microsoft Windows. By the end of 2016, Docker was extremely successful, and major companies had begun using it extensively for their most important use cases.

Now that we’ve reviewed how Docker became a success in the industry, let’s dive deeper into what Docker is and what use cases it solves.

What Is Docker?

Docker is the name of the company that produces the software called Docker, the name of the software itself, and the open source project behind that software, which is now called Moby. When someone refers to Docker, he or she can be referring to any of these three things. Let’s try to understand a bit about each of them.

Docker is software that runs on Linux and Windows. It is a tool designed to make it easier to create, deploy, and run applications by using containers. The software is developed in the open, as part of the Moby open source project on GitHub.

Docker is a tool designed mainly for developers, so that they can focus on developing on their platform of choice, without having to worry about the OS the application will eventually run on. It allows them to run end-to-end workflows without having to dig into services they don’t understand. In other words, it helps them obtain a clearer view of the entire stack fairly easily. Additionally, running Docker containers adds very little memory overhead, so multiple Docker containers running multiple services remain inexpensive.

Understanding the different parts of Docker will help us get a good overview of everything Docker is made of before we dive deeper into any of it. The Docker architecture is explained in detail in Chapter 4.

The Docker Runtime and Orchestration Engine

The Docker Engine is the infrastructure plumbing software that runs and orchestrates containers. This means that Docker, Inc., and third-party products plug into the Docker Engine and build around it. The engine is combined with a workflow for building and managing your application stacks, and it is this underlying client-server technology that builds and runs containers using Docker’s components and services. It is made up of the Docker daemon, a server that runs as a long-lived process; a REST API, which specifies the interfaces programs can use to talk to the daemon and tell it what to do; and the CLI, the command-line interface that talks to the Docker daemon through the API. Many Docker applications use the underlying API and CLI.

In other words, the Docker Engine is the program that creates and runs a Docker container from a Docker image. So, next, let’s take a quick look at what a Docker image is.
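
To see these pieces in action, here is a minimal sketch, assuming Docker is installed and the daemon is running; the curl call additionally assumes a curl build with Unix socket support and the daemon listening on its default socket.

# The CLI talks to the Docker daemon over its REST API.
docker version   # reports both the client (CLI) and the server (daemon) versions
docker info      # asks the daemon for system-wide information

# The daemon can also be reached directly through the REST API.
curl --unix-socket /var/run/docker.sock http://localhost/version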

Docker Images

A Docker image is not just a file; it is more of a file system. This file system is composed of multiple layers, and each layer contains the files for that layer and cannot be changed once built. In other words, each layer is immutable. An image is essentially a snapshot of a Docker container.

Docker images are created with the build command, are stored in a Docker registry, and produce a container when run. Images can become fairly large quite quickly. Therefore, they are designed to be composed of layers of other images, so that only a minimal amount of data has to be sent when transferring images over a network.

To explain this more clearly with a programming metaphor, if an image is a class, then a container is an instance of a class—a runtime object. Containers are lightweight and portable encapsulations of an environment in which you can run applications.
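
As a minimal sketch of that metaphor using the Docker CLI (the image name myapp is hypothetical, and the commands assume a Dockerfile in the current directory):

docker build -t myapp:1.0 .              # build an image (the "class") from a Dockerfile
docker run --name instance-1 myapp:1.0   # start a container (an "instance") from it
docker run --name instance-2 myapp:1.0   # a second, independent instance of the same image
docker images                            # list local images
docker ps -a                             # list containers, running or stopped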

An image is created using a Dockerfile. We’ll learn how to build a Docker image from a Dockerfile in detail in Chapter 5. For now, let’s take a quick look at what Dockerfiles are all about.

Dockerfiles

Everything starts with a Dockerfile. It is a text document that contains a set of instructions, understood by the build engine, for assembling an image.

The Dockerfile defines what goes into the environment inside your container: access to resources, volume mappings, arguments to pass, files that must be copied into the container, and so on. After creating the Dockerfile, you build it to create the image of the container. The image is just the snapshot of all the executed instructions in the Dockerfile. Once you have this application image built, you can expect it to run on any machine using the same kernel.
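
Here is a minimal, hedged example of what a Dockerfile can look like, assuming a simple Python application consisting of a hypothetical app.py and a requirements.txt:

# Start from an existing base image; every following instruction adds a layer.
FROM python:3.9-slim

# Set the working directory inside the container.
WORKDIR /app

# Copy the dependency list first, so this layer is reused when only code changes.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code into the image.
COPY app.py .

# The command to run when a container starts from this image.
CMD ["python", "app.py"]

Running docker build -t myapp . in the directory containing this file produces the image; Chapter 5 walks through this process in detail.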

Why Should You Use Docker?

Docker provides application isolation with little overhead. Thanks to its low memory footprint, it brings some powerful advantages.

Primarily, you can benefit from the extra layer of abstraction (in which code and its dependencies are packaged together) offered by Docker. Another significant advantage is that you can have many more containers running on a single machine than you can with virtualization alone, owing to Docker’s lightweight nature.

Another significant advantage is that containers can be spun up and shut down within seconds. The Docker FAQ has a good overview of what Docker adds to traditional containers.

Let’s look at some of the key uses.

Docker’s Key Use Cases

Here are some of the key use cases Docker supports, all of which promote consistency of environments.

Configuration Management

Simplifying configuration is one of the primary use cases of Docker. One of the features Docker provides is the ability to run any application, with its own configuration, on any OS or infrastructure. It gives you the capability of bundling your environment and your configuration into code, packaging it, and deploying it.
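
As a hedged sketch of this idea, a default configuration can be baked into the image at build time and then overridden per environment at run time (all file and image names here are hypothetical):

# In the Dockerfile, bake a default configuration file into the image:
#   COPY config/app.conf /etc/myapp/app.conf
# At run time, supply or override configuration per environment.
docker run --env-file ./staging.env myapp:1.0
docker run -e LOG_LEVEL=debug myapp:1.0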

Code Pipeline Management

When you have simplified your application configuration, code management becomes a lot simpler as a result. Code lives in many different environments before it reaches a point at which it can be shipped. It first lives on the developer’s machine, where it is tested; then it goes to test environments, where it might be deployed on test machines. Only after that does it reach the production servers.

All these environments vary in infrastructure, settings, configuration, etc. Docker provides a consistent environment across these different phases, which in turn eases the development and deployment process. The ease with which Docker images can be spun up helps you maintain consistency across runtime environments.
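
One hedged sketch of how this plays out in practice: build an image once, then promote that exact artifact, rather than rebuilding it, through each environment by retagging (the registry and image names are hypothetical):

# Build and publish once, on the developer machine or a build server.
docker build -t registry.example.com/myapp:42 .
docker push registry.example.com/myapp:42

# Promote the very same image to the next environment by retagging it.
docker tag registry.example.com/myapp:42 registry.example.com/myapp:staging
docker push registry.example.com/myapp:staging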

Developer Productivity

As mentioned earlier, the life cycle of shipping an application goes through numerous phases, starting from the developer machine all the way to the production servers. At all points, we strive to ensure consistency between test and production environments.

To achieve this, every service must be tested the way it will run in the production environment. For that to be possible, test environments must run all the dependent services, which can end up consuming huge amounts of resources.

Docker comes in handy here by allowing a larger number of services to run simultaneously, without adding much to the memory footprint. Docker’s shared volumes make application code on the container’s host OS available inside the container, which helps keep memory usage low.

This works amazingly well for developers, because they can use the code editor of their choice on the platform of their choice to develop the application, without worrying about the OS the application will run on in production. It also helps developers avoid getting into the nitty-gritty of services they don’t really understand, while still enabling them to test their end-to-end scenarios, which implicitly helps them understand the full stack better.
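
A minimal sketch of this workflow, assuming your application code lives in the current directory and the hypothetical myapp image from earlier exists:

# Mount the current directory into the container, so edits made on the host
# are visible inside the container immediately, without rebuilding the image.
docker run -v "$(pwd)":/app -w /app -p 8000:8000 myapp:1.0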

Faster Deployment

Prior to the existence of VMs, spinning up new hardware was a very cumbersome and time-consuming process. With VMs, that process became slightly easier, and with Docker, it became exponentially easier.

Creating new containers, destroying them, and bringing up replacements are extremely simple and inexpensive operations with Docker, which in turn allows for better resource allocation.
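
As a hedged illustration, the full create-use-destroy cycle is only a few commands; the public nginx image is used here purely as an example:

docker run -d --name web -p 8080:80 nginx   # a new container is up in seconds
curl http://localhost:8080                  # the service is already answering
docker rm -f web                            # stop and remove it just as quickly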

Application Isolation

When multiple microservices power an application, it is very likely that these services depend on common libraries and packages, but possibly on different versions of them. If you were to start such an application on a single machine, getting all these services up and running would be practically impossible, owing to version conflicts among the various dependencies.

For that reason, isolating these microservices in their own environments, each with only its own dependencies and configuration, lets each service run independently, without conflicting with the others. Setting up all these microservices in their own Docker containers and having the containers communicate with one another is an ideal way to get an application up and running seamlessly.
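
As a hedged sketch, two services that need conflicting versions of the same runtime can still share one machine, each in its own container (the container and network names are illustrative, and sleep infinity simply stands in for a real service process):

# Each service gets its own isolated environment and dependency versions.
docker network create app-net
docker run -d --name service-a --network app-net python:3.9-slim sleep infinity
docker run -d --name service-b --network app-net python:2.7-slim sleep infinity
# Containers on the same user-defined network can reach each other by name,
# e.g., service-a can connect to the host name service-b.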

Continuous Integration and Continuous Deployment

Docker has the ability to do image versioning. This means that you can set up your build pipeline to pull new code from your code repository, build it, package it in a Docker image, and push this new image to your image repository. Your deployment tool can then pull the newest image from your image repository, deploy it to your test environments, and, finally, promote it to your production environments. You could do this either every time there is new code in your repository or at a certain frequency, depending on how often you require your code to be deployed.
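
A minimal, hedged sketch of such a pipeline as plain shell steps (the repository URL, registry, and image names are hypothetical; a real setup would run these inside a CI system):

# 1. Pull the latest code.
git clone https://github.com/example/myapp.git && cd myapp

# 2. Build the image, versioned here with the commit hash.
docker build -t registry.example.com/myapp:"$(git rev-parse --short HEAD)" .

# 3. Push the image so that deployment tooling can pull and promote it.
docker push registry.example.com/myapp:"$(git rev-parse --short HEAD)"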

Consistent Environments Across Machines

How often have you observed that something works on your coworkers’ machines but not on yours? Docker helps you prevent this situation entirely, by setting consistent environment variables and configuration in the image itself, so that your machine and your coworkers’ machines look the same, with no stray variables that can affect how an application or service runs.
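
For example, here is a hedged Dockerfile excerpt that pins the parts of an environment that most commonly drift between machines (the versions and variable names are illustrative):

# Pin the exact base image version rather than relying on "latest".
FROM python:3.9-slim

# Fix environment variables in the image, so every machine sees the same values.
ENV APP_ENV=production \
    TZ=UTC

# Install dependencies from a checked-in requirements file for reproducibility.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt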

Summary

In this chapter, you learned how Docker evolved: how it went from being an open source project in 2013, to acquiring Unikernel Systems, to running natively on Windows. You saw which requirements of the software industry gave rise to the wide adoption of Docker. You also learned some basics of Docker and its components. We’ll dive deeper into these in future chapters.

Finally, you learned some of the key use cases of Docker, ranging from code pipeline management to faster deployments to increasing developer productivity. These are just some of the use cases of Docker that are widely applied across the software industry.

In the next chapter, you will learn about the differences between monoliths and microservices and when and why to use one vs. the other. You will see how to use Docker with microservices, as well.
