Chapter 31

Containers and Ubuntu

The latest trend in cloud computing is containers. The easiest way to define containers is to begin with a comparison. Virtualization, which is described in Chapter 30, “Virtualization on Ubuntu,” ended our dependence on physical hardware when we need a new server; thanks to virtualization, we can instead create servers as virtual machines that run on all or part of a physical machine and can be moved between physical machines. Similarly, containers allow us to replicate just the software components needed to run a specific process or program. This is a much smaller set of software than virtualization requires, and it starts much faster.

A container packages an application and the application’s entire runtime environment—all the files and dependencies needed for the application to run. Containers can be large or small, depending on the use case and the software involved. Processes that run in a container are isolated from the rest of the system on which the container is running. This provides a level of security as well as extreme portability.

A trend in enterprise computing today that must not be ignored is the move to a microservice architecture. Microservice architecture attempts to modularize software system code into discrete chunks that can be easily replaced, updated, and replicated. The overall system is a collection of loosely coupled small (micro) services. The microservices are independently deployable and communicate with each other using a clearly defined mechanism, which may be private and internal to the system or publicly accessible (preferably via an API). Containers make it easier to create systems using this architecture.

Chapter 12, “Command-Line Master Class, Part 2,” has a section that describes a way to confine a script to a directory, which is often called running in a chroot jail. The chroot facility itself dates back to early Unix, and the “jail” terminology was popularized by FreeBSD; it can be used to prevent processes from accessing files or resources outside the directory (the chroot jail) in which the process is confined. You can think of containers as a stronger, more powerful, and better-isolated version of a chroot jail.
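As a minimal sketch of the idea (the /srv/jail path and the choice of bash are arbitrary examples), the following shell commands confine a process to a directory with chroot:

# Build a minimal directory tree to serve as the jail; a real
# jail needs every binary and library the confined process uses.
sudo mkdir -p /srv/jail/bin /srv/jail/lib /srv/jail/lib64

# Copy in a shell, then use ldd to see which shared libraries
# it needs so you can copy those into the jail as well.
sudo cp /bin/bash /srv/jail/bin/
ldd /bin/bash

# Start a shell whose root directory is the jail; from inside,
# nothing outside /srv/jail is visible or reachable.
sudo chroot /srv/jail /bin/bash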

To run containers, you first need infrastructure, which could be a local machine such as a developer’s laptop, a set of physical servers in a data center, or a set of cloud-based servers running as virtual machines in one of the public clouds mentioned in Chapter 32, “Ubuntu and Cloud Computing.”

On top of the infrastructure you run an operating system—in the case of this book and this chapter, we assume Ubuntu. Over the past several years, changes to the Linux kernel (specifically the addition of control groups, or cgroups) and the development of a new initialization system called systemd (which uses cgroups) have expanded the ability to control and isolate user processes. Combined with kernel work on user namespaces, which allow user and group IDs to be mapped on a per-namespace basis, this provides the foundation for containers. User namespaces allow a process to have root privileges within a defined namespace while remaining a normal, unprivileged process outside it. Because it is now possible to confine user and group privileges to a certain subset of a system, the idea became known as running in a container.
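You can see user namespaces in action with the unshare tool from util-linux; this is a small illustrative sketch of the kernel feature itself, not part of any container platform:

# Start a shell in a new user namespace, mapping your normal
# user ID to root inside that namespace.
unshare --user --map-root-user bash

# Inside the new shell you appear to be root...
id    # reports uid=0(root)

# ...but the privilege exists only within the namespace; to the
# rest of the system, this process is still your ordinary
# unprivileged user.
exit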

The following sections of this chapter list and describe specific technologies. They are presented so that each section builds on the preceding one. There are “brands” and options out there besides the ones listed here, but this chapter presents prime examples of breakthrough technologies that have made the idea of containers progressively easier or more practical. As with some of the other overview-style chapters in this book, entire books could be and have been written about each of these, so we must content ourselves here with a high-level overview that provides enough information to give you a basic understanding and a few guideposts to help you discover where you might want to learn more.

LXC and LXD

Once user namespaces existed, it was possible to contain processes, but it still was not terribly easy or even practical to do this at any scale. The Linux Containers Project (https://linuxcontainers.org) created a set of tools, templates, and library and language bindings called LXC (sometimes pronounced “lexie”) to make it easier to use the containment features of the Linux kernel and manage containers. The goal was to create an environment that comes as close as possible to what one would have in a virtual machine, but without the overhead of running a separate kernel and simulating all the hardware. The project is community based and completely open source, and it has been highly successful.
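To make this concrete, here is a brief sketch of the classic LXC command-line tools on Ubuntu (the container name demo and the choice of the Ubuntu jammy image are arbitrary examples):

# Install the LXC userspace tools.
sudo apt install lxc

# Create a container from the "download" template, choosing a
# distribution, release, and architecture.
sudo lxc-create -t download -n demo -- -d ubuntu -r jammy -a amd64

# Start it, open a shell inside it, then stop and remove it.
sudo lxc-start -n demo
sudo lxc-attach -n demo
sudo lxc-stop -n demo
sudo lxc-destroy -n demo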

The next step was to create a command-line tool to make it easier to manage containers both locally and over a network. The Linux Containers Project created a next-generation container manager, which it called LXD. LXD builds on LXC, with an aim to improve the user experience.

In LXD, everything is image based, meaning an entire container is created, configured, and then stored as an image. You can then deploy one or more copies of that image, called instances, wherever you want, knowing that all the instances are internally identical.
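A brief sketch of that workflow on current Ubuntu releases, where LXD is distributed as a snap and, somewhat confusingly, its client command is lxc (the instance name web1 is an arbitrary example):

# Install and initialize LXD with sensible defaults.
sudo snap install lxd
sudo lxd init --auto

# Launch an instance from the official Ubuntu 22.04 image.
lxc launch ubuntu:22.04 web1

# List instances and open a shell inside the new one.
lxc list
lxc exec web1 -- bash

# Stop and remove the instance when finished.
lxc stop web1
lxc delete web1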

LXD containers are system containers, meaning each LXD container includes a full Linux system, exactly as it would if you were running it directly on a physical system or from within a virtual machine. This is a boon for cloud computing, especially when you want to automate the deployment of a large set of virtual servers across a data center or in a public cloud.

While LXD was founded by Canonical, which continues to lead its development, it is open source, integrates with and runs on other Linux distributions, and has images available for deploying instances of CentOS, RHEL, SUSE, Debian, and others, along with Ubuntu instances.

We are not yet at the granular scale of “just the application and its runtime environment.” That comes next.

Docker

The Docker platform confusingly shares its name with the company created around it, Docker, Inc. The company supports and leads the development of the platform, with open source projects that feed into the free Docker community edition and a for-payment enterprise edition. From here on, assume that we are referring to the Docker platform when we speak of Docker.

Docker is a container system with tools for building, deploying, and running application containers. Docker containers deliver the granular scale promised earlier: just the application and its runtime environment.

With Docker, you create a container image that includes only an application and the runtime software and dependencies needed to run that application. Once you have a container image created, you can deploy one or multiple instances of it to any machine or set of machines (physical or virtual) that are running the Docker Engine.
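As an illustrative sketch of that build-and-run cycle (the Dockerfile contents, the app.py file, and the hello-app tag are all hypothetical examples, written as a shell session so the Dockerfile appears in a heredoc):

# Write a minimal Dockerfile describing the image: a base
# runtime plus the single application file it needs.
cat > Dockerfile <<'EOF'
FROM python:3.12-slim
COPY app.py /app.py
CMD ["python", "/app.py"]
EOF

echo 'print("hello from a container")' > app.py

# Build the image and run one instance of it; --rm removes the
# container again once the process exits.
docker build -t hello-app .
docker run --rm hello-app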

One of the greatest advantages of Docker containers is the portability of container instances. Each container instance is identical to the original image. The Docker Engine can be run locally on a developer’s machine, in an isolated development or testing environment, and in production environments. The application in a container instance should work identically in each location because details such as networking, storage, and the underlying operating system are abstracted away.

Typically, a Docker container is created for each separate component in a larger application or system. This gives you the freedom to update or replace components without taking the entire system offline. In fact, you could deploy a new component instance while an older version is still running, run them side by side for a while to compare and test, and then switch over with little to no downtime—at least in theory. You could also deploy additional instances of a container image when you discover that your current load is greater than your system can handle easily, notify your load balancer to add the new instances to the pool of resources, and recover from an overload quickly. Much depends on your design and deployment details.
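A sketch of that side-by-side pattern, assuming a hypothetical web image tagged mysite:1.0 and a newer build tagged mysite:2.0:

# Run the current and the new build side by side on different
# host ports.
docker run -d --name web-v1 -p 8080:80 mysite:1.0
docker run -d --name web-v2 -p 8081:80 mysite:2.0

# Compare the two before pointing the load balancer at v2.
curl http://localhost:8080/
curl http://localhost:8081/

# Retire the old version once the new one checks out.
docker stop web-v1
docker rm web-v1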

Kubernetes

As you can imagine, all the convenience of containers comes with a price: complexity. You must find a way to keep track of each containerized application, where it is deployed, and so on. To make containers useful without going insane, you will want to use a container platform that helps you develop, deploy, and manage applications (or services) across physical, virtual, and cloud-based servers. Enter Kubernetes.

Kubernetes is based on ideas rooted in years of experience running production workloads in containers at Google. Google has billions of containers running applications across data centers all over the world. Kubernetes is an open source system for automating deployment and management of containerized applications at whatever scale may be needed, up to Google-sized complexity. Kubernetes controls scaling, service discovery, load balancing, and a ton more. It is fully open source and built through cooperation between many large partners and a community of developers.

Kubernetes has quickly become the de facto standard for container management, and you can run Kubernetes on myriad different platforms. It is generally used in a cloud-type environment, whether that means within your private data center or a public cloud or a hybrid of the two.
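As a small taste of the kubectl command-line client against a running cluster (using the stock nginx image; the deployment name web is an arbitrary example):

# Create a deployment from a container image and expose it
# inside the cluster on port 80.
kubectl create deployment web --image=nginx
kubectl expose deployment web --port=80

# Scale out when load grows; Kubernetes schedules the extra
# replicas across the cluster's nodes.
kubectl scale deployment web --replicas=5

# Watch the replicas come up.
kubectl get pods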

At this point, the complexity involved in getting things set up and running is such that most companies hire experts to come in and help them get running.

References

https://linuxcontainers.org/lxd/ - The main website for LXD

www.docker.com - The main website for Docker

https://kubernetes.io - The main website for Kubernetes

www.ubuntu.com/kubernetes - The main web page for Kubernetes via Canonical
