Creating nodes for building container images

We already discussed that mounting a Docker socket is a bad idea due to security risks. Running Docker in Docker would require privileged access, which is almost as unsafe as mounting the socket. On top of that, both options have other downsides. Mounting the Docker socket would introduce processes unknown to Kubernetes and could interfere with its scheduling. Running Docker in Docker could mess up networking. There are other reasons why neither option is good, so we need to look for an alternative.

Recently, new projects have sprung up attempting to help with building container images. Good examples are img (https://github.com/genuinetools/img), orca-build (https://github.com/cyphar/orca-build), umoci (https://github.com/openSUSE/umoci), buildah (https://github.com/containers/buildah), FTL (https://github.com/GoogleCloudPlatform/runtimes-common/tree/master/ftl), and Bazel rules_docker (https://github.com/bazelbuild/rules_docker). They all have serious downsides. While they might help, none of them is a solution I'd recommend as a replacement for building container images with Docker.

kaniko (https://github.com/GoogleContainerTools/kaniko) is a shining star that has the potential to become the preferred way of building container images. It requires neither Docker nor any other node dependency. It can run as a container, and it is likely to become a valid alternative one day. However, that day is not today (June 2018). It is still green, unstable, and unproven.
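To give a sense of what makes kaniko attractive, here is a hedged sketch of running a build as a Pod inside the cluster. The repository, image reference, and instance name are placeholder assumptions, not taken from this text, and pushing would also require registry credentials mounted into the Pod, which is omitted here.

```shell
# Placeholder image reference; replace with your own Docker Hub repository.
DESTINATION="my-docker-hub-user/my-app:1.0"

# Run the kaniko executor as a Pod. No Docker daemon is involved;
# kaniko clones the context, builds the image, and pushes it itself.
kubectl run kaniko \
    --restart=Never \
    --image=gcr.io/kaniko-project/executor \
    -- --dockerfile=Dockerfile \
       --context=git://github.com/my-org/my-app.git \
       --destination="$DESTINATION"
```

Since the whole build runs as an ordinary unprivileged Pod, Kubernetes stays aware of the resources it consumes, which is precisely what the socket-mounting and Docker-in-Docker approaches lack.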

All in all, Docker is still our best option for building container images, but not inside a Kubernetes cluster. That means that we need to build our images in a VM outside Kubernetes.
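One hedged way to use such an external VM is to keep the Docker client where your builds run and point it at the daemon on the VM. The address and image tag below are placeholders, and the remote daemon would have to be configured to listen on a TCP socket (ideally protected with TLS).

```shell
# Point the Docker client at the daemon running on the build VM.
# The IP and port are illustrative assumptions.
export DOCKER_HOST="tcp://10.100.198.200:2375"

# From here on, build and push happen on the remote VM,
# outside the Kubernetes cluster.
docker image build -t my-docker-hub-user/my-app:1.0 .
docker image push my-docker-hub-user/my-app:1.0
```

In the sections that follow we'll take a different route and connect the VM to Jenkins as an agent, but the principle is the same: the Docker daemon lives outside the cluster.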

How are we going to create a VM for building container images? Are we going to have a static VM that will be wasting our resources when at rest?

The answer to those questions depends on the hosting provider you're using. If it allows dynamic creation of VMs, we can create them when we need them, and destroy them when we don't. If that's not an option, we need to fall back to a dedicated machine for building images.

I cannot describe all the methods for creating VMs, so I limited the scope to three combinations. We'll explore how to create a static VM when dynamic provisioning is not an option. If you're using Docker for Mac or Windows, minikube, or minishift, that is your best bet. We'll use Vagrant, but the same principles can be applied to any other, often on-premise, virtualization technology.

On the other hand, if you're using a hosting provider that does support dynamic provisioning of VMs, you should leverage that to create them when needed and destroy them when not. I'll show you examples for Amazon's Elastic Compute Cloud (EC2) and Google Compute Engine (GCE). If you use something else (for example, Azure or DigitalOcean), the principle will be the same, even though the implementation might vary significantly.
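The create-use-destroy cycle can be sketched as follows. This is a hedged illustration using the gcloud CLI; the instance name, zone, and machine type are assumptions, and the same flow applies to EC2 with the aws CLI.

```shell
# Zone and machine type are illustrative assumptions.
ZONE="us-east1-b"

# Create the build VM only when a build needs it...
gcloud compute instances create docker-build \
    --zone "$ZONE" \
    --machine-type n1-standard-2

# ...connect it to Jenkins as an agent and run the build, then
# destroy it so it does not waste resources at rest.
gcloud compute instances delete docker-build \
    --zone "$ZONE" \
    --quiet
```

With a provider plugin, Jenkins performs both steps for you; without one, your Pipeline would issue equivalent API calls itself.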

The primary question is whether Jenkins supports your provider. If it does, you can use a plugin that will take care of creating and destroying nodes. Otherwise, you might need to extend your Pipeline scripts to use the provider's API to spin up new nodes. In that case, you might want to evaluate whether such an option is worth the trouble. Remember, if everything else fails, having a static VM dedicated to building container images will always work.

Even if you choose to build your container images differently, it is still a good idea to know how to connect external VMs to Jenkins. There's often a use case that cannot (or shouldn't) be accomplished inside a Kubernetes cluster. You might need to execute some of the steps on Windows nodes. There might be processes that shouldn't run inside containers. Or maybe you need to connect Android devices to your Pipelines. No matter the use case, knowing how to connect external agents to Jenkins is essential. So, building container images is not necessarily the only reason for having external agents (nodes), and I strongly suggest exploring the sections that follow, even if you don't think they're useful at this moment.

Before we jump into different ways to create VMs for building and pushing container images, we need to create one thing common to all of them. We'll create a set of credentials that will allow us to log in to Docker Hub.

open "http://$JENKINS_ADDR/credentials/store/system/domain/_/newCredentials"

Please type your Docker Hub Username and Password. Both the ID and the Description should be set to docker since that is the reference we'll use later. Don't forget to click the OK button.
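If you prefer to script this step instead of clicking through the UI, Jenkins also exposes the credentials store through its REST API. What follows is a hedged sketch: the admin user, API token, and Docker Hub values are placeholders, and depending on your Jenkins configuration you may also need to pass a CSRF crumb.

```shell
# Placeholder Docker Hub credentials; replace with your own.
DOCKER_USER="my-docker-hub-user"
DOCKER_PASS="my-docker-hub-pass"

# Create the same username/password credentials (ID and description
# both set to "docker") through the Jenkins REST API.
curl -X POST \
    "http://$JENKINS_ADDR/credentials/store/system/domain/_/createCredentials" \
    --user "admin:$JENKINS_TOKEN" \
    --data-urlencode 'json={
      "credentials": {
        "scope": "GLOBAL",
        "id": "docker",
        "description": "docker",
        "username": "'"$DOCKER_USER"'",
        "password": "'"$DOCKER_PASS"'",
        "$class": "com.cloudbees.plugins.credentials.impl.UsernamePasswordCredentialsImpl"
      }
    }'
```

Either way, the result is the same: a credentials entry with the ID docker that we'll reference from our Pipelines later.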

Now we are ready to create some VMs. Please choose the section that best fits your use case. Or, even better, try all three of them.
