If a container can contain a whole operating system, such as Ubuntu, you might be wondering: Can't I just run Puppet inside the container?
You can, and some people do take this approach to managing containers; it has a number of advantages, but also a few disadvantages.
There are also some hybrid options, such as running Puppet in the container during the build stage, and then removing Puppet and its dependencies, plus any intermediate build artifacts, before saving the final image.
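As a rough sketch of this hybrid approach, the following hypothetical Dockerfile installs Puppet, applies a manifest, and then purges Puppet again, all within a single `RUN` instruction so that neither Puppet nor its packages persist in any image layer (the manifest path and base image are illustrative, not from a real project):

```dockerfile
FROM ubuntu:16.04

# Copy the Puppet manifests used only during the build.
COPY manifests/ /tmp/manifests/

# Install Puppet, apply the manifest, then remove Puppet and all
# build artifacts in the same layer, so the final image stays small.
RUN apt-get update \
 && apt-get install -y puppet \
 && puppet apply /tmp/manifests/app.pp \
 && apt-get purge -y puppet \
 && apt-get autoremove -y \
 && rm -rf /var/lib/apt/lists/* /tmp/manifests
```

Doing everything in one `RUN` matters: if the install and the purge were separate instructions, the intermediate layer containing Puppet would still be shipped as part of the image.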
Puppet's image_build module is a promising new way of building containers directly from Puppet manifests, and I expect to see rapid progress in this space in the near future.
Which option you favor probably depends on your basic approach to containers. Do you see them as mini-virtual machines, not too different from the servers you're already managing? Or do you see them as transient, lightweight, single-process wrappers?
If you treat containers as mini-VMs, you'll probably want to run Puppet in your containers, in the same way as you do on your physical and virtual servers. On the other hand, if you think a container should just run a single process, it doesn't seem appropriate to run Puppet in it. With single-process containers, there's very little to configure.
I can see arguments in favor of the mini-VM approach. For one thing, it makes it much easier to transition your existing applications and services to containers; instead of running them in a VM, you just move the whole thing (application, support services, and database) into a container, along with all your current management and monitoring tools.
However, while this is a valid approach, it doesn't really make the most of the inherent advantages of containers: small image sizes, quick deployment, efficient rebuilding, and portability.
Personally, I'm a container minimalist: I think the container should contain only what it needs to do the job. Therefore, I prefer to use Puppet to manage, configure, and build my containers from the outside, rather than from the inside, and that's why I've used that approach in this chapter.
That means generating Dockerfiles from templates and Hiera data, as we've seen in the examples, as well as templating the config files which the container needs. You can have the Dockerfile copy these files into the container during the build, or mount individual files and directories from the host onto the container.
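To make this concrete, here is a minimal sketch of generating a Dockerfile from the outside, assuming a hypothetical `profile::app` class with an EPP template and Hiera keys of my own invention:

```puppet
# Hypothetical sketch: render a Dockerfile from an EPP template,
# filling in values looked up from Hiera. The template path and
# Hiera keys are illustrative, not from a real module.
file { '/var/docker/app/Dockerfile':
  ensure  => file,
  content => epp('profile/app/Dockerfile.epp', {
    'base_image' => lookup('app::base_image'),
    'app_port'   => lookup('app::port'),
  }),
}
```

The same pattern applies to the application's own config files: template them with Puppet on the host, then either `COPY` them in at build time or mount them at run time.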
As we've seen, a good way to handle shared data is to have Puppet write it into a Docker volume or a file on the host, which is then mounted (usually read-only) by all running containers.
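A minimal sketch of that pattern, using the `docker::run` defined type from the puppetlabs/docker module (the image name, config path, and template are hypothetical):

```puppet
# Hypothetical sketch: Puppet writes shared config to a directory on
# the host, which is then mounted read-only into the container.
file { '/etc/shared-config/app.conf':
  ensure  => file,
  content => epp('profile/app.conf.epp'),
}

docker::run { 'app':
  image   => 'myorg/app:latest',
  volumes => ['/etc/shared-config:/config:ro'],
}
```

Because the container sees the host directory directly, updating the file on the host updates it inside every container that mounts it, with no rebuild required.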
The advantage of this is that you don't need to rebuild all your containers following a config change. You can simply have Puppet write the changes to the config volume and trigger each container to reload its configuration using a docker::exec resource, which executes a specified command on a running container.
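A sketch of that trigger, again with hypothetical names: the resource subscribes to a config file on the host, and when Puppet changes that file, it runs a reload command inside the running container (I'm assuming here an nginx-style process that reloads on `-s reload`):

```puppet
# Hypothetical sketch: when the shared config file changes, execute a
# reload command inside the running 'app' container via docker::exec.
docker::exec { 'app-reload':
  container   => 'app',
  command     => 'nginx -s reload',
  refreshonly => true,
  subscribe   => File['/etc/shared-config/app.conf'],
}
```

With `refreshonly => true`, the command runs only when the subscribed file actually changes, so routine Puppet runs don't needlessly reload the service.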
At the risk of laboring a point, containerization is not an alternative to using configuration management tools such as Puppet. In fact, the need for configuration management is even greater, because you not only have to build and configure the containers themselves, but also store, deploy, and run them, all of which requires infrastructure.
As usual, Puppet makes this sort of task easier, more pleasant, and—most importantly—more scalable.