Defining a Pod with the tools

Every application is different, and the tools we need for a continuous deployment pipeline vary from one case to another. For now, we'll focus on those we'll need for our go-demo-3 application.

Since the application is written in Go, we'll need the golang image to download the dependencies and run the tests. We'll have to build Docker images, so we should probably add a docker container as well. Finally, we'll have to execute quite a few kubectl commands. For those of you using OpenShift, we'll need oc as well. All in all, we need a Pod with golang, docker, kubectl, and (for some of you) oc.

The go-demo-3 repository already contains a definition of a Pod with all those containers, so let's take a closer look at it.

 1  cat k8s/cd.yml

The output is as follows.

apiVersion: v1
kind: Pod
metadata:
  name: cd
  namespace: go-demo-3-build
spec:
  containers:
  - name: docker
    image: docker:18.03-git
    command: ["sleep"]
    args: ["100000"]
    volumeMounts:
    - name: workspace
      mountPath: /workspace
    - name: docker-socket
      mountPath: /var/run/docker.sock
    workingDir: /workspace
  - name: kubectl
    image: vfarcic/kubectl
    command: ["sleep"]
    args: ["100000"]
    volumeMounts:
    - name: workspace
      mountPath: /workspace
    workingDir: /workspace
  - name: oc
    image: vfarcic/openshift-client
    command: ["sleep"]
    args: ["100000"]
    volumeMounts:
    - name: workspace
      mountPath: /workspace
    workingDir: /workspace
  - name: golang
    image: golang:1.9
    command: ["sleep"]
    args: ["100000"]
    volumeMounts:
    - name: workspace
      mountPath: /workspace
    workingDir: /workspace
  serviceAccount: build
  volumes:
  - name: docker-socket
    hostPath:
      path: /var/run/docker.sock
      type: Socket
  - name: workspace
    emptyDir: {}

Most of the YAML defines the containers based on images that contain the tools we need. What makes it special is that all the containers mount the same volume called workspace. It maps to the /workspace directory inside the containers, and it uses the emptyDir volume type.

We'll accomplish two things with that volume. On the one hand, all the containers will share the same space, so the artifacts generated by the actions we perform in one container will be available in the others. On the other hand, since an emptyDir volume exists only as long as the Pod is running, it'll be deleted when we remove the Pod. As a result, we won't leave unnecessary garbage on our nodes or external drives.

To simplify things and save us from typing cd /workspace, we set workingDir to /workspace in all the containers.

Unlike most of the other Pods we usually run in our clusters, those dedicated to CDP processes are short-lived. They are not supposed to exist for long, nor should they leave any trace of their existence once they finish executing the steps we are about to define.

The ability to run multiple containers on the same node, with a shared file system and networking, will be invaluable in our quest to define continuous deployment processes. If you ever wondered why Pods exist as entities that envelop multiple containers, the steps we are about to explore will hopefully provide a perfect use case.
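Once the Pod is running (we'll create it in a moment), a quick way to see that sharing in action could be to write a file from one container and read it from another. Treat the commands that follow as an illustration rather than a step of the process; hello.txt is an arbitrary file name.

 1  kubectl -n go-demo-3-build \
 2      exec cd -c golang -- \
 3      sh -c "echo hi > /workspace/hello.txt"

 4  kubectl -n go-demo-3-build \
 5      exec cd -c kubectl -- \
 6      cat /workspace/hello.txt

The second command should output hi, confirming that a file written to /workspace in the golang container is visible from the kubectl container.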

Let's create the Pod.

 1  kubectl apply -f k8s/cd.yml --record

Please confirm that all the containers of the Pod are up and running by executing kubectl -n go-demo-3-build get pods. You should see that 4/4 containers are ready.
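For reference, the command is as follows.

 1  kubectl -n go-demo-3-build get pods

The output should be similar to the one that follows (RESTARTS and AGE will vary).

NAME READY STATUS  RESTARTS AGE
cd   4/4   Running 0        1m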

Now we can start working on our continuous deployment pipeline steps.
