17

Running Containers

The IT industry never ceases to amaze me. Back when the concept of virtualization came about, it revolutionized the data center. Virtualization allowed us to run many small Virtual Machines (VMs) on one server, effectively allowing us to consolidate the equipment in our server racks. And just when we thought it couldn’t get any better, the concept of containerization took the IT world by storm, allowing us to build portable instances of our software that not only improved how we deploy applications but also changed the way we develop them. In this chapter, we will cover the exciting world of containerization. This exploration will include:

  • What is containerization?
  • Understanding the differences between Docker and LXD
  • Installing Docker
  • Managing Docker containers
  • Automating Docker image creation with Dockerfiles
  • Managing LXD containers

To begin, let’s explore what containerization is, how it differs from virtualization, and some considerations around how this technology might be implemented.

What is containerization?

In the last chapter, we covered virtualization. Virtualization allows us to run multiple virtual servers on one physical piece of hardware. We allocate CPU, RAM, and disk space to these VMs, and they run as if they were real servers. In fact, for all intents and purposes, a VM is a real server.

However, there are also weaknesses with VMs. Perhaps the most glaringly obvious is the fact that at least some of the resources you allocate to a VM are likely being wasted. For example, perhaps you’ve allocated 512 MB of RAM to a VM. What if the application only rarely uses more than 100 MB of RAM? That means most of the time, 412 MB of RAM that could otherwise be used for a useful purpose is just sitting idle. The same can be said of CPU usage. Nowadays, VM solutions do have ways of sharing unused resources, but effectively, resource efficiency is a natural weakness of the platform.

Containers, unlike VMs, are not actual servers. At least, not in the way you typically think about them (in terms of hardware). While VMs typically have one or more virtualized CPUs, containers share the CPU with the host. VMs also have their own kernel, but containers share the kernel of the host. Containers are still segregated, though. Just as a VM cannot access the host filesystem, a container can’t either (unless you explicitly set it up to do so).

What is a container, then? It’s probably best to think of a container as a filesystem rather than a VM. The container itself contains a file structure that matches that of the distribution it’s based on. A container based on Ubuntu Server, for example, will have the same filesystem layout as a real Ubuntu Server installation on a VM or physical hardware. Imagine copying all the files and folders from an Ubuntu installation, putting them all in a single segregated directory, and having the binary contents of the filesystem executed as a program, without an actual operating system running.

To be fair, that description was an oversimplification of how containers actually run on an Ubuntu server, as the technology utilizes the functionality of the Linux kernel to isolate various components of a container from the rest of the system. However, a full discussion of those technologies is beyond the scope of this book. But understanding that such isolation exists within containers is something you should keep in mind, as keeping processes that are running within a container separate from other processes running on the host server is an important benefit.

Portability is another strength of containerization. With a container, you can literally pass it around to various members of your development team, and then push the container into production when everyone agrees that it’s ready. The container itself will run exactly the same on each workstation, regardless of which operating system the workstation uses. To be fair, you can export and import VMs on any number of hosts, but containers make this process extremely easy. In fact, portability is at the core of the design of this technology.

The concept of containerization is not necessarily new. When Docker hit the scene, it took the IT world by storm, but it was by no means the first solution to offer containerization. LXC, and other technologies, predate it. It was, however, a clever marketing tactic with a cool-sounding brand that launched containerization into mainstream popularity. By no means am I saying that Docker is all hype, though. It’s an awesome technology with many benefits. It’s definitely worth using, and you may even find yourself preferring it to VMs.

The main difference with containerization is that each container generally does one thing. For example, perhaps a container holds a hosted website or contains a single application. VMs are often created to do many tasks, such as a web server that hosts ten websites. Containers, on the other hand, are generally used for one task each, though depending on the implementation you may see others going against this norm.

When should you use containers? I recommend you consider containers any time you’re running a web app or some sort of service and you’d benefit from saving resources. The truth is, not all applications will run well in a container, but it’s at least something to consider. Any time you’re running an application that is typically accessed via a web browser, it’s probably better off in a container rather than a VM. As an administrator, you’ll most likely experiment with the different tools available to you and decide on the best tool for the job based on your findings.

Now that we understand the core concepts surrounding containers, let’s explore the differences between two container technologies.

Understanding the differences between Docker and LXD

In this chapter, we’re going to explore both Docker and LXD and see examples of containers running in both. Before we start working on that though, it’s a good idea to understand some of the things that set each solution apart from the other.

Docker is probably the technology most of my readers have heard of. It seems as though you can’t visit a single IT conference nowadays without it at least being mentioned. Docker is everywhere, and it runs on pretty much any platform. There’s lots of documentation available for Docker and various resources you can utilize to deploy it. Docker utilizes a layered approach to containerization. Every change you make to the container creates a new layer, and these layers can form the base of other containers, thus saving disk space. More on that later.
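
If you'd like to see this layering for yourself once Docker is installed and you've pulled an image (both of which we'll do later in this chapter), Docker can list the layers that make up an image. As a quick illustration (the exact output will differ depending on the image), the following command shows each layer of the ubuntu image along with the instruction that created it:

docker history ubuntu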

LXD (pronounced Lex-D) finds its roots in LXC, so it’s important to understand that first before we talk about LXD. LXC (pronounced Lex-C) is short for Linux Containers and is another implementation of containerization, similar to Docker. This technology, like similar solutions, uses the control groups (cgroups) feature of the Linux kernel, which isolates processes and is able to segregate them from one another. This enhances security, as processes should not be able to read data from other processes unless there’s a good reason to.

LXC takes the concept of segregation even further, by creating an implementation of virtualization based solely on running applications in an isolated environment that matches the environment of an operating system. You can run LXC containers on just about every distribution of Linux available today.

LXD is also available for many Linux distributions, but it’s treated as a first-class citizen in Ubuntu. This is because Canonical (the company behind Ubuntu) had a major hand in its development, and also offers commercial support for it. Since the software that makes LXD itself work is distributed via snap packages, that essentially means that any distribution of Linux that is able to install snap packages should be able to install LXD.

LXD takes LXC and gives it additional features that it otherwise wouldn’t have, such as snapshots, ZFS support, and migration. LXD doesn’t replace LXC; it actually utilizes it to provide its base technology. Perhaps the best way to think of LXD is as LXC with a management layer on top that adds these extra features.

How does LXD/LXC differ from Docker? The main difference is that while both are container solutions that address the same goal in very similar ways, LXD behaves more like an actual VM, while Docker tries harder to differentiate itself from that model. In comparison, Docker containers are layered and transactional (as mentioned earlier), and you generally have an ENTRYPOINT command that is run inside the container when you launch it. LXD, on the other hand, provides a filesystem that you can directly access from the host operating system and takes a simpler approach to containerization. You can think of an LXC container as a machine container that closely emulates a VM, while a Docker container is an application container that provides just the foundation needed to run an application. Despite these differences, the two technologies can often be used in similar ways and support many of the same use cases.

When should you use Docker and when should you use LXD? I actually recommend you practice both since they’re not overly difficult to learn. We will go over the basics of these technologies in this chapter. But to answer the question at hand, there are a few use cases where one technology may make more sense than the other. Docker is more of a general-purpose tool. You can run Docker containers on Linux, macOS, and even Windows. It’s, therefore, a good choice if you want to create a container that runs everywhere. LXD is generally best for Linux environments, though Docker runs great in Linux too. The operating system you’re running your container solution on is of little importance nowadays, since most people use a container service to run containers rather than an actual server that you manage yourself. In the future, if you get heavily into containerization, you may find yourself forgoing the operating system altogether and just running them in a service such as Amazon’s Elastic Container Service (ECS), which is one of a handful of cloud services that allow you to run containers without having to manage the underlying server.

Another benefit of Docker is Docker Hub, which you can use to download containers others have made or even upload your own for others to use. The benefit here is that if someone has already solved the problem you’re trying to solve, you can build on their work rather than starting from scratch, and they can benefit from your work in turn. This saves time and is often better than creating a solution by hand.

Always make sure to audit third-party resources before you put them to use in your organization. This includes (but isn’t limited to) containers developed by a third party. You should understand how the container image was built, how secure the settings are, and whether or not there’s anything built-in that might pose a security risk. Basically, some administrators will happily accept a container image as-is, but that practice can be very risky. Never deploy a container image that hasn’t been audited for security.

Now that we understand not only the core concepts but also the differences between the two containerization technologies, let’s take a look at Docker.

Installing Docker

Installing Docker is very fast and easy, so much so that it barely constitutes its own section. In the last chapter, we had to install several packages in order to get a Kernel-based Virtual Machine (KVM) virtualization server up and running as well as tweak some configuration files. In comparison, installing Docker is effortless, as you only need to install the docker.io package:

sudo apt install docker.io

Yes, that’s all there is to it. Installing Docker was definitely much easier than setting up KVM, as we did in the previous chapter. Ubuntu includes Docker in its default repositories, so it’s only a matter of installing this one package and its dependencies. You’ll now have a new service installed on your machine, simply titled docker. In order to be useful, the service needs to be running. You can check to see whether or not it’s already running with the following command:

systemctl status docker

Check the output of the previous command to see if the docker service is running. You should also check to see if the service is enabled. If not, you can start the service and also enable it at the same time with the following command:

sudo systemctl enable --now docker

We also have the docker command available to us now, which allows us to manage our containers. By default, it does require root privileges, so you’ll need to use sudo to use it. To make this easier, I recommend that you add your user account to the docker group before going any further. This will eliminate the need to use sudo every time you run a docker command. The following command will add your user account to the appropriate group:

sudo usermod -aG docker <yourusername>

After you log out and then log in again, you’ll be able to manage Docker much more easily.

You can verify your group membership by simply running the groups command with no options, which should now show your user as a member of the docker group.

Well, that’s it. Docker is installed, and your user account is a member of the docker group, so you’re good to go. Wow, that was easy!
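
If you'd like a quick sanity check before moving on, a common (though entirely optional) smoke test is Docker's hello-world image, which pulls a tiny test image from Docker Hub and prints a confirmation message if everything is wired up correctly:

docker run hello-world

If that command prints a greeting rather than an error, your installation and group membership are working as expected.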

Now that we have Docker installed, let’s start using it. It’s a fun technology to learn, and in the next section, we’ll explore a few examples.

Managing Docker containers

Now that Docker is installed and running, let’s take it for a test drive. After installing Docker, we have the docker command available to use now, which has various sub-commands to perform different functions with containers. First, let’s try out docker search:

docker search ubuntu

With Docker, containers are created from images. There are many pre-existing container images we can use, or we can build our own. The docker search command allows us to search for a container image that already exists and has been made available to us. Once we’ve chosen an image, we can download it locally and create container instances from it.

The ability of administrators to search for (and download) an existing container is just one of many great features Docker offers us. Although we can definitely build our own container images (and we will do so, right here in this chapter), sometimes it might make sense to use a pre-existing container image, rather than create a new one from scratch.

For example, you can install an NGINX container, simply named nginx. This is actually an official container image, so it should be trustworthy. We can tell that a container image is an official one by the DOCKER OFFICIAL IMAGE label that appears if you look up the image on the Docker Hub website at https://hub.docker.com. If we wanted to deploy a container running NGINX, doing so via the official image would save us a lot of time, especially compared to creating one from scratch. After all, why reinvent the wheel if you don’t have to?

However, even if the container image comes from a trustworthy source, you should still audit it. With the NGINX example, we can be fairly confident that the image is safe and doesn’t contain any unwanted objects, such as malware. However, there’s no such thing as 100% trustworthy when it comes to security, so we should audit them anyway.

But how does this work? The docker search command will search Docker Hub, which is an online repository that hosts containers for others to download and utilize. You could search for containers based on other applications, or even other distributions such as Fedora or AlmaLinux, if you wanted to experiment. The command will return a list of Docker images available that meet your search criteria.
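
As an aside, docker search can also narrow the results for you. For example, recent versions of Docker support a filter for official images (the exact flags available can vary slightly between Docker releases), which ties in nicely with the earlier point about preferring official images:

docker search --filter is-official=true ubuntu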

So what do we do with these images? An image in Docker is its closest equivalent to a VM or hardware image. It’s a snapshot that contains the filesystem of a particular operating system or Linux distribution, along with some changes the author included to make it perform a specific task. This image can then be downloaded and customized to suit your purposes. You can choose to upload your customized image back to Docker Hub if you would like to contribute upstream. Every image you download will be stored on your machine so that you won’t have to re-download it every time you wish to create a new container.

To pull down a Docker image for our use, we can use the docker pull command, along with one of the image names we saw in the output of our search command:

docker pull ubuntu

With the preceding command, we’re pulling down the latest Ubuntu container image available on Docker Hub. The image will now be stored locally, and we’ll be able to create new containers from it. The process will look similar to the following screenshot:

Figure 17.1: Downloading an Ubuntu container image

If you’re curious as to which images you have saved locally, you can execute docker images to get a list of the Docker container images you have stored on your server:

docker images

The output will look similar to this:

Figure 17.2: Listing installed Docker images

Notice the IMAGE ID in the output. If for some reason you want to remove an image, you can do so with the docker rmi command, and you’ll need to use the ID as an argument to tell the command what to delete. The syntax would look similar to this if I was removing the image with the ID shown in the screenshot:

docker rmi d2e4e1f51132

Once you have a container image downloaded to your server, you can create a new container from it by running the docker run command, followed by the name of your image and an application within the image to run. An application run from within a Docker container is known as an ENTRYPOINT, which is just a fancy term to describe an application a particular container is configured to run. You’re not limited to the ENTRYPOINT though, and not all containers actually have an ENTRYPOINT. You can use any command in the container that you would normally be able to run in that distribution. In the case of the Ubuntu container image we downloaded earlier, we can run bash with the following command so that we can get a prompt and enter any command(s) we wish:

docker run -it ubuntu /bin/bash

Once you run that command, you’re now interacting with a shell prompt from within your container. From here, you can run commands you would normally run within a real Ubuntu machine, such as installing new packages, changing configuration files, and more. Go ahead and play around with the container, and then we’ll continue with a bit more theory on how this is actually working.

There are some potentially confusing aspects of Docker we should get out of the way first before we continue with additional examples. The thing that’s most likely to confuse newcomers to Docker is how containers are created and destroyed. When you execute the docker run command against an image you’ve downloaded, you’re actually creating a container. Therefore, the image you downloaded with the docker pull command wasn’t an actual container itself, but it becomes a container when you run an instance of it. When the command that’s being run inside the container finishes, the container stops. If you were to run /bin/bash in a container and install a bunch of packages, those changes live only in that one container; they aren’t saved back to the image, so any new container you create from the same image won’t include them.

You can think of a Docker image as a “blueprint” for a container that can be used to create running containers. Every container you run has a container ID that differentiates it from others. If you want to remove a persistent container, for example, you would need to reference this ID with the docker rm command. This is very similar to the docker rmi command that’s used to remove container images.
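
A minimal example of removing a stopped container would look like the following; the ID shown here is just a placeholder, so substitute one from your own docker ps -a output:

docker rm <Container ID>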

To see the container ID for yourself, you’ll first need to exit the container if you’re currently running one. There are two ways of doing so. First, you could press Ctrl + d to disconnect, or even type exit and press Enter. When you exit the container this way, its main process ends and the container stops (Docker containers typically only exist in a running state while their command is running). When you run the docker ps command (which is the command you’ll use any time you want a list of containers on your system), you won’t see it listed. Instead, you can add the -a option to see all containers listed, even those that have been stopped.

You’re probably wondering, then, how to exit a container and not have it go away. To do so, while you’re attached to a container, press Ctrl + p and then press q (don’t let go of the Ctrl key while you press these two letters). This will drop you out of the container, and when you run the docker ps command (even without the -a option), you’ll see that it’s still running.

The docker ps command deserves some attention. The output will give you some very useful information about the containers on your server, including the CONTAINER ID that was mentioned earlier. In addition, the output will contain the IMAGE it was created from, the COMMAND being run when the container was CREATED, and its STATUS, as well as any PORTS you may have forwarded. The output will also display randomly generated names for each container, which are usually quite comical. As I was going through the process of creating containers while writing this section, the code names for my containers were tender_cori, serene_mcnulty, and high_goldwasser. This is just one of the many quirks of Docker, and some of these can be quite humorous.

The important output of the docker ps -a command is the CONTAINER ID, the COMMAND, and the STATUS. The ID, which we already discussed, allows you to reference a specific container to enable you to run commands against it. COMMAND lets you know what command was being run. In our example, we executed /bin/bash when we started our containers.

If we have any containers that were stopped, we can resume a container with the docker start command, giving it a container ID as an argument. Your command will end up looking similar to this:

docker start d2e4e1f51132

The output will simply return the ID of the container, and then drop you back to your shell prompt—not the shell prompt of your container, but that of your server. You might be wondering at this point, how do I get back to the shell prompt for the container? We can use docker attach for that:

docker attach d2e4e1f51132

The docker attach command is useful because it allows you to attach your shell to a container that is already running. Most of the time, containers are started automatically instead of starting with /bin/bash as we have done. If something were to go wrong, we may want to use something like docker attach to browse through the running container to look for error messages. It’s very useful.
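
Another handy command in the same vein, and one you'll see used frequently, is docker exec. Rather than attaching to the container's existing process, it starts an additional process (such as a shell) inside a running container, and exiting that shell won't stop the container. A quick example, again using a placeholder ID:

docker exec -it <Container ID> /bin/bash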

Speaking of useful, another great command is docker info. This command will give you information about your implementation of Docker, such as letting you know how many containers you have on your system, which should be the number of times you’ve run the docker run command unless you cleaned up the previously run containers with docker rm. Feel free to take a look at its output and see what you can learn from it.

Getting deeper into the subject of containers, it’s important to understand what a Docker container is and what it isn’t. A container is not a service running in the background, at least not inherently. A container is a collection of namespaces, such as a namespace for its filesystem or users. As discussed earlier in this chapter, containers are isolated from the rest of the server by utilizing technology within the Linux kernel. When you disconnect without a process running within the container, there’s no reason for it to run, since its namespace is empty. Thus, it stops. If you’d like to run a container in a way that is similar to a service (it keeps running in the background), you would want to run the container in detached mode. Basically, this is a way of telling your container to run this process and to not stop running it until you tell it to. Here’s an example of creating a container and running it in detached mode:

docker run -dit ubuntu /bin/bash

After running the previous command, Docker will print a container ID, and then drop back to your command prompt. You can then see that the container is running with the docker ps command, so use docker attach along with the container ID to connect to it and run commands.

Normally, we use the -it options to create a container. This is what we used a few examples ago. The -i option triggers interactive mode, while the -t option gives us a pseudo-TTY. At the end of the command, we tell the container to run the Bash shell. The -d option runs the container in the background.

It may seem relatively useless to have another Bash shell running in the background that isn’t actually performing a task. But these are just simple examples to help you get the hang of Docker. A more common use case may be to run a specific application. In fact, you can even serve a website from a Docker container by installing and configuring Apache within the container, including a virtual host. The question then becomes: how do you access the container’s instance of Apache within a web browser? The answer is port redirection, which Docker also supports. Let’s give this a try.

First, let’s create a new container in detached mode. Let’s also redirect port 80 within the container to port 8080 on the host:

docker run -dit -p 8080:80 ubuntu /bin/bash

The command will output a container ID. This ID will be much longer than you’re accustomed to seeing. This is because when we run docker ps -a, it only shows shortened container IDs. You don’t need to use the entire container ID when you attach; you can simply use part of it as long as it’s long enough to be different from other IDs:

docker attach dfb3e

Here, I’ve attached to a container with an ID that begins with dfb3e. This will connect my shell to a Bash shell within the container.

Let’s install Apache. We’ve done this before, but there are a few differences that you’ll see. First, if you simply run the following command to install the apache2 package as we would normally do, it may fail for one or two reasons:

sudo apt install apache2

The two problems here are first that sudo isn’t included by default in the Ubuntu container, so it won’t even recognize the sudo part of the command. When you run docker attach, you’re actually attaching to the container as the root user, so the lack of sudo won’t be an issue anyway. Second, the repository index in the container may be out of date, if it’s even present at all. This means that apt within the container won’t even find the apache2 package. To solve this, we’ll first update the repository index:

apt update

Then, install apache2 using the following command:

apt install apache2

You may be asked to set your time zone or geographic location during the installation of packages. If so, go ahead and enter each prompt accordingly.

Now we have Apache installed in our container. We don’t need to worry about configuring the default sample web page or making it look nice. We just want to verify that it works. Let’s start the service:

/etc/init.d/apache2 start

After running that command, Apache should be running within the container.

The previous command is definitely not our normal way of starting services. Typically, we’d use a command like systemctl start apache2, but there’s no actual init system inside a container, so running systemctl commands will not work as they normally would. Always refer to any documentation that may exist for a container you’re attempting to run, regarding how to start an application it may contain.

Apache should be running within the container. Now, press Ctrl + p and then press q (don’t let go of the Ctrl key while you press these two letters) to exit the container, but allow it to keep running in the background. You should be able to visit the sample Apache web page for the container by navigating to localhost:8080 in your web browser. You should see the default It works! page of Apache.
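
If the machine running Docker doesn't have a graphical browser available (which is common on servers), you can perform the same check from the command line instead. Assuming the port mapping from earlier is in place, something like this should return the HTML of the default page:

curl http://localhost:8080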

Congratulations, you’re officially running an application within a container:

Figure 17.3: The default Apache start page, running from within a container

As your Docker knowledge grows, you’ll want to look deeper into the concept of an ENTRYPOINT. An ENTRYPOINT is a preferred way of starting applications in a Docker container. In our examples so far, we’ve used an ENTRYPOINT of /bin/bash. While that’s perfectly valid, an ENTRYPOINT is generally a Bash script that is configured to run the desired application and is launched by the container.

Our Apache container is running happily in the background, responding to HTTP requests over port 8080 on the host. But what should we do with it at this point? We can create our own image from it so that we can simplify deploying it later. To be fair, we’ve only installed Apache inside the container, so it’s not saving us that much work. In a real production environment, you may have a container running that needed quite a few commands to set it up. With an image, we can have all of that work baked into the image, so we won’t have to run any setup commands we may have each time we want to create a container. To create a container image, let’s grab the container ID of a running container by running the docker ps command. Once we have that, we can now create a new image of the container with the docker commit command:

docker commit <Container ID> ubuntu/apache-server:1.0

That command will return us the ID of our new image. To view all the Docker images available on our machine, we can run the docker images command to have Docker return a list. You should see the original Ubuntu image we downloaded, along with the one we just created. We’ll first see a column for the repository the image came from; in our case, it is Ubuntu. Next, we see the tag. Our original Ubuntu image (the one we used docker pull to download) has a tag of latest. We didn’t specify that when we first downloaded it; it just defaulted to latest. In addition, we see an image ID for both, as well as the size.
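
If you'd like to share an image like this on Docker Hub (uploading your work was mentioned earlier in the chapter), the general workflow is to log in, tag the image under your own account, and push it. The account name below is just a placeholder for illustration; you'd substitute your actual Docker Hub username:

docker login
docker tag ubuntu/apache-server:1.0 <your_dockerhub_username>/apache-server:1.0
docker push <your_dockerhub_username>/apache-server:1.0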

To create a new container from our new image, we just need to use docker run, but specify the tag and name of our new image. Note that we may already have a container listening on port 8080, so this command may fail if that container hasn’t been stopped:

docker run -dit -p 8080:80 ubuntu/apache-server:1.0 /bin/bash

Speaking of stopping a container, I should probably show you how to do that as well. As you can probably guess, the command is docker stop followed by a container ID:

docker stop <Container ID>

This will send the SIGTERM signal to the container, followed by SIGKILL if it doesn’t stop on its own after a delay.
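
By default, that delay is around ten seconds. If an application inside a container needs more time to shut down gracefully, you can lengthen the grace period; for example, the following would wait 30 seconds before resorting to SIGKILL:

docker stop -t 30 <Container ID>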

Admittedly, the Apache container example was fairly simplistic, but it does the job as far as showing you a working example of a container that is actually somewhat useful. Before continuing on, think for a moment of all the use cases you can use Docker for in your organization. It may seem like a very simple concept (and it is), but it allows you to do some very powerful things. Perhaps you’ll want to try to containerize your organization’s intranet page or some sort of application. The concept of Docker sure is simple, but it can go a long way with the right imagination.

Before I close out this section, I’ll give you a personal example of how I implemented a container at a previous job. At this organization, I worked with some Embedded Linux software engineers who each had their own personal favorite Linux distribution. Some preferred Ubuntu, others preferred Debian, and a few even ran Gentoo. This in and of itself wasn’t necessarily an issue—sometimes it’s fun to try out other distributions. But for developers, a platform change can introduce inconsistency, and that’s not good for a software project. The included build tools are different in each distribution of Linux because they all ship different versions of all the development packages and libraries. The application this particular organization developed was only known to compile properly in Debian, and newer versions of the compiler posed a problem for the application. My solution was to provide each developer with a Docker container based on Debian, with all the build tools that they needed to perform their job baked in. At this point, it no longer mattered which distribution they ran on their workstations. The container was the same no matter what they were running. Regardless of what their underlying operating system was, they all had the same tools. This gave each developer the freedom to run their preferred distribution of Linux (or even macOS), and it didn’t impact their ability to do their job. I’m sure there are some clever use cases you can come up with for implementing containerization.

Now that we understand the basics of Docker, let’s take a look at automating the process of building containers.

Automating Docker image creation with Dockerfiles

I’ve mentioned previously in this book that anything worth having a server do more than once should be automated, and building a Docker container is no exception. A Dockerfile is a neat way of automating the building of Docker images by creating a text file with a set of instructions for their creation. Docker is able to take this file, execute the commands it contains, and build a container. It’s magic.

The easiest way to set up a Dockerfile is to create a directory, preferably with a descriptive name for the image you’d like to create (you can name it whatever you wish, though), and inside it create a text file named Dockerfile. For a quick example, copy this text into your Dockerfile and I’ll explain how it works:

FROM ubuntu
MAINTAINER Jay <[email protected]>
# Avoid confirmation messages
ARG DEBIAN_FRONTEND=noninteractive
# Update the container's packages
RUN apt update; apt dist-upgrade -y
# Install apache2 and vim
RUN apt install -y apache2 vim-nox
# Start Apache
ENTRYPOINT apache2ctl -D FOREGROUND

Let’s go through this Dockerfile line by line to get a better understanding of what it’s doing:

FROM ubuntu

We need an image to base our new image on, so we’re using Ubuntu as a starting point.

This will cause Docker to download the ubuntu:latest image from Docker Hub, if we haven’t already downloaded it locally. If we do have it locally, it will just use the locally cached version.

MAINTAINER Jay <[email protected]>

Here, we’re setting the maintainer of the image. Basically, we’re declaring its author. This is optional (and in newer versions of Docker, the MAINTAINER instruction is considered deprecated in favor of a LABEL), so you don’t need to include it if you don’t want to.

# Avoid confirmation messages

Lines beginning with a hash symbol (#) are ignored, so we are able to create comments within the Dockerfile. This is recommended to give others a good idea of what your Dockerfile is doing.

ARG DEBIAN_FRONTEND=noninteractive

Here, we’re setting an environment variable that, in this case, sets the environment to noninteractive. The reason we do this is that the process of installing a package while building a Docker container should be automatic; if a prompt comes up while a package is being installed and asks you a question, your input will not pass through and the process will hang.

With this environment variable, we’re clarifying that we want to be in noninteractive mode so that the default answers to any questions that come up will be used and we won’t be prompted.

RUN apt update; apt dist-upgrade -y

With the RUN command, we’re telling Docker to run a specific command while the image is being created. In this case, we’re updating the image’s repository index and performing a full package update to ensure the resulting image is as fresh as can be. The -y option is provided to suppress any requests for confirmation while installing the packages. Despite the fact that we set noninteractive mode earlier, apt will still try to confirm changes interactively, and the -y option suppresses that.

RUN apt install -y apache2 vim-nox

Next, we’re installing both apache2 and vim-nox. The vim-nox package isn’t required, but I personally like to make sure all of my servers and containers have it installed. I mainly included it here to show you that you can install multiple packages in one line.

ENTRYPOINT apache2ctl -D FOREGROUND

I mentioned the concept of an ENTRYPOINT earlier, which again is where we clarify which application should run when the container starts. The apache2ctl command is a wrapper command for Apache that allows administrators to control the finer points of running the Apache daemon. A full walk-through of this command is beyond the scope of this chapter, but we’re using it here because we want Apache to automatically start with the container, and apache2ctl is one method of doing that without relying on systemctl (which the container doesn’t have).
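
As a side note, Dockerfiles also accept an exec form of ENTRYPOINT written as a JSON array, which runs the program directly rather than wrapping it in a shell. If you prefer that style, an equivalent line would look similar to this:

ENTRYPOINT ["apache2ctl", "-D", "FOREGROUND"]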

Great, now we have a Dockerfile. So what do we do with it? Well, turn it into an image of course! To do so, we can use the docker build command, which can be executed from within the directory that contains the Dockerfile. Here’s an example of using the docker build command to create an image tagged learnlinuxtv/apache-server:1.0:

docker build -t learnlinuxtv/apache-server:1.0 .

Once you run that command, you’ll see Docker create the image for you, running each of the commands you asked it to. The image will be set up just the way you like. Basically, we just automated the entire creation of the Apache container we used as an example in this section. If anything goes wrong, Docker will print an error to your shell. You can then fix the error in your Dockerfile and run it again, and it will continue where it left off.

Once complete, we can create a container from our new image:

docker run -dit -p 8080:80 learnlinuxtv/apache-server:1.0

Almost immediately after running the container, the sample Apache site will be available on localhost:8080 on the host. With a Dockerfile, you’ll be able to automate the creation of your Docker images. That was easy, wasn’t it? There’s much more you can do with Dockerfiles; feel free to peruse Docker’s official documentation to learn more. Exploration is key, so give it a try and experiment with it.

Managing LXD containers

With Docker out of the way, let’s take a look at how to run containers with LXD. Let’s dive right in and install the required package:

sudo snap install lxd

As you can see, installing LXD is just as easy as installing Docker. In fact, managing containers with LXD is very straightforward as well, as you’ll soon see. Installing LXD gives us the lxc command, which is the command we’ll use to manage LXD containers. Before we get going though, we should add our user account to the lxd group:

sudo usermod -aG lxd <yourusername>

Make sure you log out and log in for the changes to take effect. Just like with the docker group with Docker, the lxd group will allow our user account to manage LXD containers.

Next, we need to initialize our new LXD installation. We’ll do that with the lxd init command:

lxd init

The process will look similar to the following screenshot:

Figure 17.4: Setting up LXD with the lxd init command

The lxd init command will ask us a series of questions regarding how we’d like to set up LXD. The defaults are mostly fine for everything, and for the size of the pool, I just used the default of 30 GB, but you can use whatever size you want to. I set ipv6 to none during the setup since my network doesn’t utilize that, and I also decided to make lxd available over the network.

Even though we chose the defaults for most of the questions, they’ll give you a general idea of some of the different options that LXD makes available to us. For example, we can see that LXD supports the concept of a storage pool, which is one of its neater features.

Here, we’re creating a default storage pool with a filesystem format of zfs, an advanced filesystem with built-in support for features such as snapshots. During the setup process, LXD sets up the storage pool, network bridge, IP address scheme, and basically everything we need to get started.

Now that LXD is installed and set up, we can configure our first container:

lxc launch ubuntu:22.04 mycontainer

With that simple command, LXD will now download the root filesystem for this container and set it up for us. Once done, we’ll have an LXD container based on Ubuntu 22.04 running and available for use. This is different from Docker, where pulling an image only downloads it and we still have to run a container from that image ourselves. During this process, we gave the container the name of mycontainer. The process should be fairly easy to follow so far.

You might be wondering why we used an lxc command to create a container since we’re learning about LXD here. As I mentioned earlier, LXD is an improvement over LXC, and as such, it uses lxc commands for management. Commands that are specific to the LXD layer itself (such as lxd init) use the lxd command, and anything specific to container management is done with lxc.

When it comes to managing containers, there are several types of operations you will want to perform, such as listing containers, starting a container, stopping a container, deleting a container, and so on. The lxc command suite is very easy and straightforward. Here is a table listing some of the most common commands you can use, and I’m sure you’ll agree that the command syntax is very logical. For each example, you substitute <container> with the name of the container you created:

Goal                            Command
List the containers             lxc list
Start a container               lxc start <container>
Stop a container                lxc stop <container>
Remove a container              lxc delete <container>
List the downloaded images      lxc image list
Remove an image                 lxc image delete <image_name>

With all the basics out of the way, let’s jump into our container and play around with it. To open a shell to the container we just created, we would run the following:

lxc exec mycontainer bash

In the preceding command, exec tells the container we want to execute a command, mycontainer is the name of the container that we want to execute something against, and the specific command we want to execute is bash. After you execute that command, it immediately runs bash from the container as root. From here, you can configure the container as you need to by installing packages, setting up services, or whatever else you may need to do in order to make the container conform to the purpose you have for it. In fact, the process of customizing the container for redeployment is actually easier than it is with Docker.

Unlike with Docker, changes are not wiped when you exit a container, and you don’t have to exit it in a particular way to avoid losing your changes. We also don’t have layers to deal with in LXD, which you may or may not be happy about (layers in Docker containers can make deployments faster, but when previously run containers aren’t cleaned up, the number of layers can look messy).
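
Because changes persist, the snapshot support mentioned earlier becomes a convenient safety net while you experiment. As a brief sketch (the snapshot name fresh here is arbitrary), you can take a snapshot of a container and roll back to it later:

lxc snapshot mycontainer fresh
lxc restore mycontainer fresh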

The Ubuntu image we used to create our container includes a default user account, ubuntu. This is similar to some VPS providers, which also include an ubuntu user account by default (Amazon EC2 is an example of this). If you prefer to log in as this user rather than root, you can do that with this command:

lxc exec mycontainer -- su --login ubuntu

The ubuntu user has access to sudo, so you’ll be able to run privileged tasks with no issue.

To exit the container, you can press Ctrl + d on your keyboard, or simply type exit. Feel free to log in to the container and make some changes and experiment. Once you have the container set up the way you like it, you may want the container to automatically start up when you boot your server. This is actually very easy to do:

lxc config set mycontainer boot.autostart 1

With the preceding command, we’re setting boot.autostart to 1, which turns on that particular feature. Similar to a Boolean variable for those that are familiar with programming, 1 means “on” and 0 means “off.” After setting this config value, your newly created container will now start up with the server anytime it’s booted.
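
If you'd like to confirm that the setting took effect, you can print the container's configuration and look for the boot.autostart key in the output:

lxc config show mycontainer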

Now, let’s have a bit of fun. Feel free to install the apache2 package in your container. Similar to Docker, I’ve found that you will probably want to run apt update to update your package listings first, as I’ve seen failures installing packages on a fresh container solely because the indexes were stale. So just run this to be safe:

sudo apt update && sudo apt install apache2

Now, you should have Apache installed and running in the container. Next, we need to grab the IP address of the container. Yes, you read that right, LXD has its own IP address space for its containers, which is very neat. Simply run ip addr show inside the container (the same command you’d run on a normal server), and it will display the IP address information. On the same machine that’s running the container, you can visit this IP address to see the default Apache web page. If you’re running the container on a server with no graphical user interface, you can use the curl command to verify that it’s working:

curl <container_ip_address>
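
As an alternative, you don't even need to enter the container to find its address; from the host, the lxc list command we saw earlier includes an IPV4 column for each running container:

lxc list mycontainer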

Although we have Apache running in our container, we can see that it’s not very useful yet. The web page is only available from the machine that’s hosting the container. This doesn’t help us much if we want users in our local network or even from the outside internet to be able to reach our site. We could set up firewall rules to route traffic to it, but there’s an easier way—creating a profile for external access.

I mentioned earlier that even though LXD is a containerization technology, it shares some of its feature set with VMs, basically giving you VM-like features in a non-VM environment. With LXD, we can create a profile to allow it to get an IP address from your DHCP server and route traffic directly through your LAN, just as with a physical device that you connect to your network.

Before continuing, you’ll need a bridge connection set up on your server. This is done in software via Netplan and was discussed as part of the previous chapter. If you list your network interfaces (ip addr show), you should see a br0 connection. If you don’t have this configured, refer back to Chapter 16, Virtualization, and refer to the Bridging the VM network section there. Once you’ve created this connection, you can continue on.

Some network cards do not support bridging, especially with some Wi-Fi cards. If you’re unable to create a bridge on your hardware, the following section may not work for you. Consult the documentation for your hardware to ensure your network card supports bridging.

To create the profile we’ll need in order to enable external access to our containers, we’ll use the following command:

lxc profile create external

We should see output similar to the following:

Profile external created

Next, we’ll need to edit the profile we just created. The following command will open the profile in a text editor so that you can edit it:

lxc profile edit external

Inside the profile, we’ll replace its text with this:

description: External access profile
devices:
  eth0:
  name: eth0
  nictype: bridged
  parent: br0
  type: nic

From this point forward, we can launch new containers with this profile with the following command:

lxc launch ubuntu:22.04 mynewcontainer -p default -p external

Notice how we’re applying two profiles, default and then external. We do this so that the values in default can be loaded first, followed by the second profile so that it overrides any conflicting parameters that may be present.

We already have a container, though, so you may be curious how we can edit the existing one to take advantage of our new profile. That’s simple:

lxc profile add mycontainer external

From this point forward, assuming the host bridge on your server has been configured properly, the container should be accessible via your local LAN. You should be able to host a resource, such as a website, and have others be able to access it. This resource could be a local intranet site or even an internet-facing website.

As far as getting started with LXD is concerned, that’s essentially it. LXD is very simple to use, and its command structure is very logical and easy to understand. With just a few simple commands, we can create a container, and even make it externally accessible. Canonical has many examples and tutorials available online to help you push your knowledge even further, but with what you’ve learned so far, you should have enough practical knowledge to roll out this solution in your organization.

Summary

Containers are a wonderful method of hosting applications. You can spin up more containers on your hardware than you’d be able to with VMs, which will definitely save resources. While not all applications can be run inside containers, it’s a very useful tool to have available. In this chapter, we looked at both Docker and LXD. While Docker is better for cross-platform applications, LXD is simpler to use but is very flexible. We started out by discussing the differences between these two solutions, then we experimented with both creating containers and looking at how to manage them.

In the next chapter, we will expand our knowledge of containers even further and take a look at orchestration, which allows us to manage multiple containers more efficiently. This will be the chapter where all of the concepts relating to containers come together.

Join our community on Discord

Join our community’s Discord space for discussions with the author and other readers:

https://packt.link/LWaZ0
