Chapter 12. Container Orchestration: Kubernetes

If you are experimenting with Docker, or if running a set of Docker containers on a single machine is all you need, then Docker and Docker Compose would be sufficient for your needs. However, as soon as you move from the number 1 (single machine) to the number 2 (multiple machines), you need to start worrying about orchestrating the containers across the network. For production scenarios, this is a given. You need at least two machines to achieve fault tolerance/high availability.

In our age of cloud computing, the recommended way of scaling an infrastructure is “out” (also referred to as “horizontal scalability”), by adding more instances to your overall system, as opposed to the older way of scaling “up” (or “vertical scalability”), by adding more CPUs and memory to a single instance. A Docker orchestration platform uses these instances, or nodes, as sources of raw resources (CPU, memory, network) that it then allocates to individual containers running within the platform. This ties into what we mentioned in Chapter 11 with regard to the advantages of using containers over classic virtual machines (VMs): the raw resources at your disposal will be better utilized, because containers get these resources allocated to them on a much more granular basis than VMs, and you will get more bang for your infrastructure buck.

There has also been a shift from provisioning servers for specific purposes and running specific software packages on each instance (such as web server software, cache software, database software) to provisioning them as generic units of resource allocation and running Docker containers on them, coordinated by a Docker orchestration platform. You may be familiar with the distinction between looking at servers as “pets” versus looking at them as “cattle.” In the early days of infrastructure design, each server had a definite function (such as the mail server), and many times there was only one server for each specific function. There were naming schemes for such servers (Grig remembers using a planetary system naming scheme in the dot-com days), and a lot of time was spent on their care and feeding, hence the pet designation. When configuration management tools such as Puppet, Chef, and Ansible burst onto the scene, it became easier to provision multiple servers of the same type (for example, a web server farm) at the same time, by using an identical installation procedure on each server. This coincided with the rise of cloud computing, with the concept of horizontal scalability mentioned previously, and also with more concern for fault tolerance and high availability as critical properties of well-designed system infrastructure. The servers or cloud instances were considered cattle, disposable units that have value in their aggregate.

The age of containers and serverless computing also brought about another designation, “insects.” Indeed, one can look at the coming and going of containers as a potentially short existence, like an ephemeral insect. Functions-as-a-service are even more fleeting than Docker containers, with a short but intense life coinciding with the duration of their call.

In the case of containers, their ephemerality makes their orchestration and interoperability hard to achieve at a large scale. This is exactly the need that has been filled by container orchestration platforms. There used to be multiple Docker orchestration platforms to choose from, such as Mesosphere and Docker Swarm, but these days we can safely say that Kubernetes has won that game. The rest of the chapter is dedicated to a short overview of Kubernetes, followed by an example of running the same application described in Chapter 11 and porting it from docker-compose to Kubernetes. We will also show how to use Helm, a Kubernetes package manager, to install packages called charts for the monitoring and dashboarding tools Prometheus and Grafana, and how to customize these charts.

Short Overview of Kubernetes Concepts

The best starting point for understanding the many parts comprising a Kubernetes cluster is the official Kubernetes documentation.

At a high level, a Kubernetes cluster consists of nodes that can be equated to servers, be they bare-metal or virtual machines running in a cloud. Nodes run pods, which are collections of Docker containers. A pod is the unit of deployment in Kubernetes. All containers in a pod share the same network and can refer to each other as if they were running on the same host. There are many situations in which it is advantageous to run more than one container in a pod. Typically, your application container runs as the main container in the pod, and if needed you will run one or more so-called “sidecar” containers for functionality, such as logging or monitoring. One particular case of sidecar containers is an “init container,” which is guaranteed to run first and can be used for housekeeping tasks, such as running database migrations. We’ll explore this later in this chapter.
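
As a preview, here is a minimal, hypothetical pod manifest sketch showing the pattern; the image and commands are the ones used by the example application later in this chapter, and app-with-init is a made-up pod name:

apiVersion: v1
kind: Pod
metadata:
  name: app-with-init
spec:
  initContainers:
  - name: migrations              # guaranteed to run and exit successfully first
    image: griggheo/flask-by-example:v1
    args: ["manage.py", "db", "upgrade"]
  containers:
  - name: app                     # main application container starts afterward
    image: griggheo/flask-by-example:v1
    args: ["manage.py", "runserver", "--host=0.0.0.0"]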

An application will typically use more than one pod for fault tolerance and performance purposes. The Kubernetes object responsible for launching and maintaining the desired number of pods is called a deployment. For pods to communicate with other pods, Kubernetes provides another kind of object called a service. Services are tied to deployments through selectors. Services are also exposed to external clients, either by exposing a NodePort as a static port on each Kubernetes node, or by creating a LoadBalancer object that corresponds to an actual load balancer, if it is supported by the cloud provider running the Kubernetes cluster.

For managing sensitive information such as passwords, API keys, and other credentials, Kubernetes offers the Secret object. We will see an example of using a Secret for storing a database password.
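
As a quick illustration (with a hypothetical Secret name and a placeholder value), a Secret can also be created directly from the command line, without a manifest file:

$ kubectl create secret generic example-secret --from-literal=dbpass=MYPASS

Later in the chapter, we define the Secret in a manifest file instead, so that an encrypted copy of it can be kept in version control.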

Using Kompose to Create Kubernetes Manifests from docker-compose.yaml

Let’s take another look at the docker-compose.yaml file for the Flask example application discussed in Chapter 11:

$ cat docker-compose.yaml
version: "3"
services:
  app:
    image: "griggheo/flask-by-example:v1"
    command: "manage.py runserver --host=0.0.0.0"
    ports:
      - "5000:5000"
    environment:
      APP_SETTINGS: config.ProductionConfig
      DATABASE_URL: postgresql://wordcount_dbadmin:$DBPASS@db/wordcount
      REDISTOGO_URL: redis://redis:6379
    depends_on:
      - db
      - redis
  worker:
    image: "griggheo/flask-by-example:v1"
    command: "worker.py"
    environment:
      APP_SETTINGS: config.ProductionConfig
      DATABASE_URL: postgresql://wordcount_dbadmin:$DBPASS@db/wordcount
      REDISTOGO_URL: redis://redis:6379
    depends_on:
      - db
      - redis
  migrations:
    image: "griggheo/flask-by-example:v1"
    command: "manage.py db upgrade"
    environment:
      APP_SETTINGS: config.ProductionConfig
      DATABASE_URL: postgresql://wordcount_dbadmin:$DBPASS@db/wordcount
    depends_on:
      - db
  db:
    image: "postgres:11"
    container_name: "postgres"
    ports:
      - "5432:5432"
    volumes:
      - dbdata:/var/lib/postgresql/data
  redis:
    image: "redis:alpine"
    ports:
      - "6379:6379"
volumes:
  dbdata:

We will use a tool called Kompose to translate this YAML file into a set of Kubernetes manifests.

To get a current version of Kompose on a macOS machine, first download it from its GitHub releases page, then move it to /usr/local/bin/kompose and make it executable. Note that if you rely on your operating system’s package management system (for example, apt on Ubuntu or yum on Red Hat) to install Kompose, you may get a much older version that may not be compatible with these instructions.
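
A sketch of that manual installation, assuming release v1.18.0 (substitute the latest version number shown on the releases page):

$ curl -L https://github.com/kubernetes/kompose/releases/download/v1.18.0/kompose-darwin-amd64 -o kompose
$ chmod +x kompose
$ sudo mv kompose /usr/local/bin/kompose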

Run the kompose convert command to create the Kubernetes manifest files from the existing docker-compose.yaml file:

$ kompose convert
INFO Kubernetes file "app-service.yaml" created
INFO Kubernetes file "db-service.yaml" created
INFO Kubernetes file "redis-service.yaml" created
INFO Kubernetes file "app-deployment.yaml" created
INFO Kubernetes file "db-deployment.yaml" created
INFO Kubernetes file "dbdata-persistentvolumeclaim.yaml" created
INFO Kubernetes file "migrations-deployment.yaml" created
INFO Kubernetes file "redis-deployment.yaml" created
INFO Kubernetes file "worker-deployment.yaml" created

At this point, remove the docker-compose.yaml file:

$ rm docker-compose.yaml

Deploying Kubernetes Manifests to a Local Kubernetes Cluster Based on minikube

Our next step is to deploy the Kubernetes manifests to a local Kubernetes cluster based on minikube.

A prerequisite to running minikube on macOS is to install VirtualBox. Download the VirtualBox package for macOS from its download page and install it. Then download the latest minikube binary, move it to /usr/local/bin/minikube, and make it executable. Note that at the time of this writing, minikube installed a Kubernetes cluster with version 1.15. If you want to follow along with these examples, specify the version of Kubernetes you want to install with minikube:

$ minikube start --kubernetes-version v1.15.0
 minikube v1.2.0 on darwin (amd64)
 Creating virtualbox VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
 Configuring environment for Kubernetes v1.15.0 on Docker 18.09.6
 Downloading kubeadm v1.15.0
 Downloading kubelet v1.15.0
 Pulling images ...
 Launching Kubernetes ...
 Verifying: apiserver proxy etcd scheduler controller dns
 Done! kubectl is now configured to use "minikube"

The main command for interacting with a Kubernetes cluster is kubectl.

Install kubectl on a macOS machine by downloading it from the release page, then moving it to /usr/local/bin/kubectl and making it executable.

One of the main concepts you will use when running kubectl commands is context, which signifies a Kubernetes cluster that you want to interact with. The installation process for minikube already created a context for us called minikube. One way to point kubectl to a specific context is with the following command:

$ kubectl config use-context minikube
Switched to context "minikube".

A different, and handier, way is to install the kubectx utility from its GitHub repository, then run:

$ kubectx minikube
Switched to context "minikube".
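
To list all the contexts kubectl knows about, together with the one currently in use, run:

$ kubectl config get-contexts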
Tip

Another handy client utility for your Kubernetes work is kube-ps1. For a macOS setup based on Zsh, add this snippet to the file ~/.zshrc:

source "/usr/local/opt/kube-ps1/share/kube-ps1.sh"
PS1='$(kube_ps1)'$PS1

These lines change the shell prompt to show the current Kubernetes context and namespace. As you start interacting with multiple Kubernetes clusters, this will be a lifesaver for distinguishing between a production and a staging cluster.

Now run kubectl commands against the local minikube cluster. For example, the kubectl get nodes command shows the nodes that are part of the cluster. In this case, there is only one node with the role of master:

$ kubectl get nodes
NAME       STATUS   ROLES    AGE     VERSION
minikube   Ready    master   2m14s   v1.15.0

Start by creating the PersistentVolumeClaim (PVC) object from the file dbdata-persistentvolumeclaim.yaml that was generated by Kompose, and which corresponds to the local volume allocated for the PostgreSQL database container when it was run with docker-compose:

$ cat dbdata-persistentvolumeclaim.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: dbdata
  name: dbdata
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
status: {}

To create this object in Kubernetes, use the kubectl create command and specify the file name of the manifest with the -f flag:

$ kubectl create -f dbdata-persistentvolumeclaim.yaml
persistentvolumeclaim/dbdata created

List all the PVCs with the kubectl get pvc command to verify that our PVC is there:

$ kubectl get pvc
NAME     STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
dbdata   Bound    pvc-39914723-4455-439b-a0f5-82a5f7421475   100Mi      RWO            standard       1m

The next step is to create the Deployment object for PostgreSQL. Use the manifest file db-deployment.yaml created previously by the Kompose utility:

$ cat db-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.16.0 (0c01309)
  creationTimestamp: null
  labels:
    io.kompose.service: db
  name: db
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      creationTimestamp: null
      labels:
        io.kompose.service: db
    spec:
      containers:
      - image: postgres:11
        name: postgres
        ports:
        - containerPort: 5432
        resources: {}
        volumeMounts:
        - mountPath: /var/lib/postgresql/data
          name: dbdata
      restartPolicy: Always
      volumes:
      - name: dbdata
        persistentVolumeClaim:
          claimName: dbdata
status: {}

To create the deployment, use the kubectl create -f command and point it to the manifest file:

$ kubectl create -f db-deployment.yaml
deployment.extensions/db created

To verify that the deployment was created, list all deployments in the cluster and list the pods that were created as part of the deployment:

$ kubectl get deployments
NAME     READY   UP-TO-DATE   AVAILABLE   AGE
db       1/1     1            1           1m

$ kubectl get pods
NAME                  READY   STATUS    RESTARTS   AGE
db-67659d85bf-vrnw7   1/1     Running   0          1m

Next, create the database for the example Flask application. Use a command similar to docker exec to run psql inside the running PostgreSQL container; in a Kubernetes cluster, the equivalent command is kubectl exec:

$ kubectl exec -it db-67659d85bf-vrnw7 -- psql -U postgres
psql (11.4 (Debian 11.4-1.pgdg90+1))
Type "help" for help.

postgres=# create database wordcount;
CREATE DATABASE
postgres=# \q

$ kubectl exec -it db-67659d85bf-vrnw7 -- psql -U postgres wordcount
psql (11.4 (Debian 11.4-1.pgdg90+1))
Type "help" for help.

wordcount=# CREATE ROLE wordcount_dbadmin;
CREATE ROLE
wordcount=# ALTER ROLE wordcount_dbadmin LOGIN;
ALTER ROLE
wordcount=# ALTER USER wordcount_dbadmin PASSWORD 'MYPASS';
ALTER ROLE
wordcount=# \q

The next step is to create the Service object corresponding to the db deployment, which will expose it to the other parts of the application running inside the cluster, such as the worker pods and the main application pods. Here is the manifest file for the db service:

$ cat db-service.yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.16.0 (0c01309)
  creationTimestamp: null
  labels:
    io.kompose.service: db
  name: db
spec:
  ports:
  - name: "5432"
    port: 5432
    targetPort: 5432
  selector:
    io.kompose.service: db
status:
  loadBalancer: {}

One thing to note is the following section:

  labels:
    io.kompose.service: db

This section appears in both the deployment manifest and the service manifest, and it is indeed what ties the two together: the service’s selector matches the pods that carry this label, which in this case are the pods launched by the db deployment.

Create the Service object with the kubectl create -f command:

$ kubectl create -f db-service.yaml
service/db created

List all services and notice that the db service was created:

$ kubectl get services
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
db           ClusterIP   10.110.108.96   <none>        5432/TCP   6s
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP    4h45m
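
A quick way to confirm that the service’s selector actually matched the db pod is to look at the Endpoints object Kubernetes maintains for the service; it should list the pod’s IP and port, and it will be empty if the labels and selector do not line up:

$ kubectl get endpoints db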

The next service to deploy is Redis. Create the Deployment and Service objects based on the manifest files generated by Kompose:

$ cat redis-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.16.0 (0c01309)
  creationTimestamp: null
  labels:
    io.kompose.service: redis
  name: redis
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        io.kompose.service: redis
    spec:
      containers:
      - image: redis:alpine
        name: redis
        ports:
        - containerPort: 6379
        resources: {}
      restartPolicy: Always
status: {}

$ kubectl create -f redis-deployment.yaml
deployment.extensions/redis created

$ kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
db-67659d85bf-vrnw7     1/1     Running   0          37m
redis-c6476fbff-8kpqz   1/1     Running   0          11s

$ kubectl create -f redis-service.yaml
service/redis created

$ cat redis-service.yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.16.0 (0c01309)
  creationTimestamp: null
  labels:
    io.kompose.service: redis
  name: redis
spec:
  ports:
  - name: "6379"
    port: 6379
    targetPort: 6379
  selector:
    io.kompose.service: redis
status:
  loadBalancer: {}

$ kubectl get services
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
db           ClusterIP   10.110.108.96   <none>        5432/TCP   84s
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP    4h46m
redis        ClusterIP   10.106.44.183   <none>        6379/TCP   10s

So far, the two services that have been deployed, db and redis, are independent of each other. The next part of the application is the worker process, which needs to talk to both PostgreSQL and Redis. This is where the advantage of using Kubernetes services comes into play. The worker deployment can refer to the endpoints for PostgreSQL and Redis by using the service names. Kubernetes knows how to route the requests from the client (the containers running as part of the pods in the worker deployment) to the servers (the PostgreSQL and Redis containers running as part of the pods in the db and redis deployments, respectively).
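
One way to see this service discovery in action is to resolve a service name from a throwaway pod (dnstest is just an arbitrary name; the --rm flag deletes the pod on exit):

$ kubectl run -it --rm dnstest --image=busybox --restart=Never -- nslookup db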

One of the environment variables used in the worker deployment is DATABASE_URL. It contains the database password used by the application. The password should not be exposed in clear text in the deployment manifest file, because this file needs to be checked into version control. Instead, create a Kubernetes Secret object.

First, encode the password string in base64 (note that echo appends a newline; use echo -n if you want to encode the exact string):

$ echo MYPASS | base64
MYPASSBASE64

Then, create a manifest file describing the Kubernetes Secret object that you want to create. Since the base64 encoding of the password is not secure, use sops to edit and save an encrypted manifest file secrets.yaml.enc:

$ sops --pgp E14104A0890994B9AC9C9F6782C1FF5E679EFF32 secrets.yaml.enc

Inside the editor, add these lines:

apiVersion: v1
kind: Secret
metadata:
  name: fbe-secret
type: Opaque
data:
  dbpass: MYPASSBASE64

The secrets.yaml.enc file can now be checked in because it contains the encrypted version of the base64 value of the password.

To decrypt the encrypted file, use the sops -d command:

$ sops -d secrets.yaml.enc
apiVersion: v1
kind: Secret
metadata:
  name: fbe-secret
type: Opaque
data:
  dbpass: MYPASSBASE64

Pipe the output of sops -d to kubectl create -f to create the Kubernetes Secret object:

$ sops -d secrets.yaml.enc | kubectl create -f -
secret/fbe-secret created

Inspect the Kubernetes Secrets and describe the Secret that was created:

$ kubectl get secrets
NAME                  TYPE                                  DATA   AGE
default-token-k7652   kubernetes.io/service-account-token   3      3h19m
fbe-secret            Opaque                                1      45s

$ kubectl describe secret fbe-secret
Name:         fbe-secret
Namespace:    default
Labels:       <none>
Annotations:  <none>

Type:  Opaque

Data
====
dbpass:  12 bytes

To get the base64-encoded Secret back, use:

$ kubectl get secrets fbe-secret -ojson | jq -r ".data.dbpass"
MYPASSBASE64

To get the plain-text password back, use the following command on a macOS machine:

$ kubectl get secrets fbe-secret -ojson | jq -r ".data.dbpass" | base64 -D
MYPASS

On a Linux machine, the proper flag for base64 decoding is -d, so the correct command would be:

$ kubectl get secrets fbe-secret -ojson | jq -r ".data.dbpass" | base64 -d
MYPASS

The secret can now be used in the deployment manifest of the worker. Modify the worker-deployment.yaml file generated by the Kompose utility and add two environment variables:

  • DBPASS is the database password that will be retrieved from the fbe-secret Secret object.

  • DATABASE_URL is the full database connection string for PostgreSQL, which includes the database password and references it as $(DBPASS), the syntax Kubernetes uses to expand one environment variable inside another.

This is the modified version of worker-deployment.yaml:

$ cat worker-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.16.0 (0c01309)
  creationTimestamp: null
  labels:
    io.kompose.service: worker
  name: worker
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        io.kompose.service: worker
    spec:
      containers:
      - args:
        - worker.py
        env:
        - name: APP_SETTINGS
          value: config.ProductionConfig
        - name: DBPASS
          valueFrom:
            secretKeyRef:
              name: fbe-secret
              key: dbpass
        - name: DATABASE_URL
          value: postgresql://wordcount_dbadmin:$(DBPASS)@db/wordcount
        - name: REDISTOGO_URL
          value: redis://redis:6379
        image: griggheo/flask-by-example:v1
        name: worker
        resources: {}
      restartPolicy: Always
status: {}

Create the worker Deployment object in the same way as for the other deployments, by calling kubectl create -f:

$ kubectl create -f worker-deployment.yaml
deployment.extensions/worker created

List the pods:

$ kubectl get pods
NAME                      READY   STATUS              RESTARTS   AGE
db-67659d85bf-vrnw7       1/1     Running             1          21h
redis-c6476fbff-8kpqz     1/1     Running             1          21h
worker-7dbf5ff56c-vgs42   0/1     Init:ErrImagePull   0          7s

Note that the worker pod is shown with status Init:ErrImagePull. To see details about this status, run kubectl describe:

$ kubectl describe pod worker-7dbf5ff56c-vgs42 | tail -10
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  2m51s                default-scheduler
  Successfully assigned default/worker-7dbf5ff56c-vgs42 to minikube

  Normal   Pulling    76s (x4 over 2m50s)  kubelet, minikube
  Pulling image "griggheo/flask-by-example:v1"

  Warning  Failed     75s (x4 over 2m49s)  kubelet, minikube
  Failed to pull image "griggheo/flask-by-example:v1": rpc error:
  code = Unknown desc = Error response from daemon: pull access denied for
  griggheo/flask-by-example, repository does not exist or may require
  'docker login'

  Warning  Failed     75s (x4 over 2m49s)  kubelet, minikube
  Error: ErrImagePull

  Warning  Failed     62s (x6 over 2m48s)  kubelet, minikube
  Error: ImagePullBackOff

  Normal   BackOff    51s (x7 over 2m48s)  kubelet, minikube
  Back-off pulling image "griggheo/flask-by-example:v1"

The deployment tried to pull the private Docker image griggheo/flask-by-example:v1 from Docker Hub, and it lacked the credentials needed to access the private Docker registry. Kubernetes handles this scenario with a Secret of type docker-registry, which is referenced from the pod specification through the imagePullSecrets field.

Create an encrypted file with sops containing the Docker Hub credentials and the call to kubectl create secret:

$ sops --pgp E14104A0890994B9AC9C9F6782C1FF5E679EFF32 \
  create_docker_credentials_secret.sh.enc

The contents of the file are:

DOCKER_REGISTRY_SERVER=docker.io
DOCKER_USER=Type your dockerhub username, same as when you `docker login`
DOCKER_EMAIL=Type your dockerhub email, same as when you `docker login`
DOCKER_PASSWORD=Type your dockerhub pw, same as when you `docker login`

kubectl create secret docker-registry myregistrykey \
  --docker-server=$DOCKER_REGISTRY_SERVER \
  --docker-username=$DOCKER_USER \
  --docker-password=$DOCKER_PASSWORD \
  --docker-email=$DOCKER_EMAIL

Decode the encrypted file with sops and run it through bash:

$ sops -d create_docker_credentials_secret.sh.enc | bash -
secret/myregistrykey created

Inspect the Secret:

$ kubectl get secrets myregistrykey -oyaml
apiVersion: v1
data:
  .dockerconfigjson: eyJhdXRocyI6eyJkb2NrZXIuaW8iO
kind: Secret
metadata:
  creationTimestamp: "2019-07-17T22:11:56Z"
  name: myregistrykey
  namespace: default
  resourceVersion: "16062"
  selfLink: /api/v1/namespaces/default/secrets/myregistrykey
  uid: 47d29ffc-69e4-41df-a237-1138cd9e8971
type: kubernetes.io/dockerconfigjson

The only change to the worker deployment manifest is to add these lines:

      imagePullSecrets:
      - name: myregistrykey

Include it right after this line:

      restartPolicy: Always
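
After the edit, the tail of the pod spec in worker-deployment.yaml should look like this:

      restartPolicy: Always
      imagePullSecrets:
      - name: myregistrykey
status: {}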

Delete the worker deployment and recreate it:

$ kubectl delete -f worker-deployment.yaml
deployment.extensions "worker" deleted

$ kubectl create -f worker-deployment.yaml
deployment.extensions/worker created

Now the worker pod is in a Running state, with no errors:

$ kubectl get pods
NAME                      READY   STATUS    RESTARTS   AGE
db-67659d85bf-vrnw7       1/1     Running   1          22h
redis-c6476fbff-8kpqz     1/1     Running   1          21h
worker-7dbf5ff56c-hga37   1/1     Running   0          4m53s

Inspect the worker pod’s logs with the kubectl logs command:

$ kubectl logs worker-7dbf5ff56c-hga37
20:43:13 RQ worker 'rq:worker:040640781edd4055a990b798ac2eb52d'
started, version 1.0
20:43:13 *** Listening on default...
20:43:13 Cleaning registries for queue: default

The next step is to tackle the application deployment. When the application was deployed in a docker-compose setup in Chapter 11, a separate Docker container was employed to run the migrations necessary to update the Flask database. This type of task is a good candidate for running as a sidecar container in the same pod as the main application container. The sidecar will be defined as a Kubernetes initContainer inside the application deployment manifest. This type of container is guaranteed to run inside the pod it belongs to before the start of the other containers included in the pod.

Add this section to the app-deployment.yaml manifest file that was generated by the Kompose utility, and delete the migrations-deployment.yaml file:

      initContainers:
      - args:
        - manage.py
        - db
        - upgrade
        env:
        - name: APP_SETTINGS
          value: config.ProductionConfig
        - name: DATABASE_URL
          value: postgresql://wordcount_dbadmin:@db/wordcount
        image: griggheo/flask-by-example:v1
        name: migrations
        resources: {}

$ rm migrations-deployment.yaml

Reuse the fbe-secret Secret object created for the worker deployment in the application deployment manifest:

$ cat app-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.16.0 (0c01309)
  creationTimestamp: null
  labels:
    io.kompose.service: app
  name: app
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        io.kompose.service: app
    spec:
      initContainers:
      - args:
        - manage.py
        - db
        - upgrade
        env:
        - name: APP_SETTINGS
          value: config.ProductionConfig
        - name: DBPASS
          valueFrom:
            secretKeyRef:
              name: fbe-secret
              key: dbpass
        - name: DATABASE_URL
          value: postgresql://wordcount_dbadmin:$(DBPASS)@db/wordcount
        image: griggheo/flask-by-example:v1
        name: migrations
        resources: {}
      containers:
      - args:
        - manage.py
        - runserver
        - --host=0.0.0.0
        env:
        - name: APP_SETTINGS
          value: config.ProductionConfig
        - name: DBPASS
          valueFrom:
            secretKeyRef:
              name: fbe-secret
              key: dbpass
        - name: DATABASE_URL
          value: postgresql://wordcount_dbadmin:$(DBPASS)@db/wordcount
        - name: REDISTOGO_URL
          value: redis://redis:6379
        image: griggheo/flask-by-example:v1
        name: app
        ports:
        - containerPort: 5000
        resources: {}
      restartPolicy: Always
status: {}

Create the application deployment with kubectl create -f, then list the pods and describe the application pod:

$ kubectl create -f app-deployment.yaml
deployment.extensions/app created

$ kubectl get pods
NAME                      READY   STATUS    RESTARTS   AGE
app-c845d8969-l8nhg       1/1     Running   0          7s
db-67659d85bf-vrnw7       1/1     Running   1          22h
redis-c6476fbff-8kpqz     1/1     Running   1          21h
worker-7dbf5ff56c-vgs42   1/1     Running   0          4m53s

The last piece in the deployment of the application to minikube is to ensure that a Kubernetes service is created for the application and that it is declared as type LoadBalancer, so it can be accessed from outside the cluster:

$ cat app-service.yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.16.0 (0c01309)
  creationTimestamp: null
  labels:
    io.kompose.service: app
  name: app
spec:
  ports:
  - name: "5000"
    port: 5000
    targetPort: 5000
  type: LoadBalancer
  selector:
    io.kompose.service: app
status:
  loadBalancer: {}
Note

Similar to the db service, the app service is tied to the app deployment through a label declaration that exists in both the deployment and the service manifest for the application:

  labels:
    io.kompose.service: app

Create the service with kubectl create:

$ kubectl create -f app-service.yaml
service/app created

$ kubectl get services
NAME         TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
app          LoadBalancer   10.99.55.191    <pending>     5000:30097/TCP   2s
db           ClusterIP      10.110.108.96   <none>        5432/TCP         21h
kubernetes   ClusterIP      10.96.0.1       <none>        443/TCP          26h
redis        ClusterIP      10.106.44.183   <none>        6379/TCP         21h

Next, run:

$ minikube service app

This command opens the default browser with the URL http://192.168.99.100:30097/ and shows the home page of the Flask site.
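
If you prefer to print the URL instead of opening a browser, the --url flag does that:

$ minikube service app --url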

In the next section, we will take the same Kubernetes manifest files for our application and deploy them to a Kubernetes cluster provisioned with Pulumi in Google Cloud Platform (GCP).

Launching a GKE Kubernetes Cluster in GCP with Pulumi

In this section, we’ll make use of the Pulumi GKE example and of the GCP setup documentation, so consult both before proceeding.

Start by creating a new directory:

$ mkdir pulumi_gke
$ cd pulumi_gke

Set up the Google Cloud SDK using the macOS instructions.

Initialize the GCP environment using the gcloud init command. Create a new configuration and a new project named pythonfordevops-gke-pulumi:

$ gcloud init
Welcome! This command will take you through the configuration of gcloud.

Settings from your current configuration [default] are:
core:
  account: [email protected]
  disable_usage_reporting: 'True'
  project: pulumi-gke-testing

Pick configuration to use:
 [1] Re-initialize this configuration [default] with new settings
 [2] Create a new configuration
Please enter your numeric choice:  2

Enter configuration name. Names start with a lower case letter and
contain only lower case letters a-z, digits 0-9, and hyphens '-':
pythonfordevops-gke-pulumi
Your current configuration has been set to: [pythonfordevops-gke-pulumi]

Pick cloud project to use:
 [1] pulumi-gke-testing
 [2] Create a new project
Please enter numeric choice or text value (must exactly match list
item):  2

Enter a Project ID. pythonfordevops-gke-pulumi
Your current project has been set to: [pythonfordevops-gke-pulumi].

Log in to the GCP account:

$ gcloud auth login

Log in to the default application, which is pythonfordevops-gke-pulumi:

$ gcloud auth application-default login

Create a new Pulumi project by running the pulumi new command, specifying gcp-python as your template and pythonfordevops-gke-pulumi as the name of the project:

$ pulumi new
Please choose a template: gcp-python
A minimal Google Cloud Python Pulumi program
This command will walk you through creating a new Pulumi project.

Enter a value or leave blank to accept the (default), and press <ENTER>.
Press ^C at any time to quit.

project name: (pulumi_gke_py) pythonfordevops-gke-pulumi
project description: (A minimal Google Cloud Python Pulumi program)
Created project 'pythonfordevops-gke-pulumi'

stack name: (dev)
Created stack 'dev'

gcp:project: The Google Cloud project to deploy into: pythonfordevops-gke-pulumi
Saved config

Your new project is ready to go! 

To perform an initial deployment, run the following commands:

   1. virtualenv -p python3 venv
   2. source venv/bin/activate
   3. pip3 install -r requirements.txt

Then, run 'pulumi up'.

The following files were created by the pulumi new command:

$ ls -la
total 40
drwxr-xr-x  7 ggheo  staff  224 Jul 16 15:08 .
drwxr-xr-x  6 ggheo  staff  192 Jul 16 15:06 ..
-rw-------  1 ggheo  staff   12 Jul 16 15:07 .gitignore
-rw-r--r--  1 ggheo  staff   50 Jul 16 15:08 Pulumi.dev.yaml
-rw-------  1 ggheo  staff  107 Jul 16 15:07 Pulumi.yaml
-rw-------  1 ggheo  staff  203 Jul 16 15:07 __main__.py
-rw-------  1 ggheo  staff   34 Jul 16 15:07 requirements.txt

We are going to make use of the gcp-py-gke example from the Pulumi examples GitHub repository.

Copy *.py and requirements.txt from examples/gcp-py-gke to our current directory:

$ cp ~/pulumi-examples/gcp-py-gke/*.py .
$ cp ~/pulumi-examples/gcp-py-gke/requirements.txt .

Configure GCP-related variables needed for Pulumi to operate in GCP:

$ pulumi config set gcp:project pythonfordevops-gke-pulumi
$ pulumi config set gcp:zone us-west1-a
$ pulumi config set password --secret PASS_FOR_KUBE_CLUSTER

Create and use a Python virtualenv, install the dependencies declared in requirements.txt, and then bring up the GKE cluster defined in __main__.py by running the pulumi up command:

$ virtualenv -p python3 venv
$ source venv/bin/activate
$ pip3 install -r requirements.txt
$ pulumi up
Tip

Make sure you enable the Kubernetes Engine API by associating it with a Google billing account in the GCP web console.

The GKE cluster can now be seen in the GCP console.

To interact with the newly provisioned GKE cluster, generate the proper kubectl configuration and use it. Handily, the kubectl configuration is being exported as output by the Pulumi program:

$ pulumi stack output kubeconfig > kubeconfig.yaml
$ export KUBECONFIG=./kubeconfig.yaml

List the nodes comprising the GKE cluster:

$ kubectl get nodes
NAME                                                 STATUS   ROLES    AGE     VERSION
gke-gke-cluster-ea17e87-default-pool-fd130152-30p3   Ready    <none>   4m29s   v1.13.7-gke.8
gke-gke-cluster-ea17e87-default-pool-fd130152-kf9k   Ready    <none>   4m29s   v1.13.7-gke.8
gke-gke-cluster-ea17e87-default-pool-fd130152-x9dx   Ready    <none>   4m27s   v1.13.7-gke.8

Deploying the Flask Example Application to GKE

Take the same Kubernetes manifests used in the minikube example and deploy them to the Kubernetes cluster in GKE, via the kubectl command. Start by creating the redis deployment and service:

$ kubectl create -f redis-deployment.yaml
deployment.extensions/redis created

$ kubectl get pods
NAME                             READY   STATUS    RESTARTS   AGE
canary-aqw8jtfo-f54b9749-q5wqj   1/1     Running   0          5m57s
redis-9946db5cc-8g6zz            1/1     Running   0          20s

$ kubectl create -f redis-service.yaml
service/redis created

$ kubectl get service redis
NAME    TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
redis   ClusterIP   10.59.245.221   <none>        6379/TCP   18s

Create a PersistentVolumeClaim to be used as the data volume for the PostgreSQL database:

$ kubectl create -f dbdata-persistentvolumeclaim.yaml
persistentvolumeclaim/dbdata created

$ kubectl get pvc
NAME     STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
dbdata   Bound    pvc-00c8156c-b618-11e9-9e84-42010a8a006f   1Gi        RWO            standard       12s

Create the db deployment:

$ kubectl create -f db-deployment.yaml
deployment.extensions/db created

$ kubectl get pods
NAME                             READY   STATUS             RESTARTS  AGE
canary-aqw8jtfo-f54b9749-q5wqj   1/1     Running            0         8m52s
db-6b4fbb57d9-cjjxx              0/1     CrashLoopBackOff   1         38s
redis-9946db5cc-8g6zz            1/1     Running            0         3m15s

$ kubectl logs db-6b4fbb57d9-cjjxx

initdb: directory "/var/lib/postgresql/data" exists but is not empty
It contains a lost+found directory, perhaps due to it being a mount point.
Using a mount point directly as the data directory is not recommended.
Create a subdirectory under the mount point.

We hit a snag when trying to create the db deployment. GKE provisioned a persistent volume that was mounted as /var/lib/postgresql/data, and, according to the error message above, it was not empty.

Delete the failed db deployment:

$ kubectl delete -f db-deployment.yaml
deployment.extensions "db" deleted

Create a temporary pod that mounts the same dbdata PersistentVolumeClaim as /data, so that its filesystem can be inspected. Launching this type of temporary pod for troubleshooting purposes is a useful technique to know about:

$ cat pvc-inspect.yaml
kind: Pod
apiVersion: v1
metadata:
  name: pvc-inspect
spec:
  volumes:
    - name: dbdata
      persistentVolumeClaim:
        claimName: dbdata
  containers:
    - name: debugger
      image: busybox
      command: ['sleep', '3600']
      volumeMounts:
        - mountPath: "/data"
          name: dbdata

$ kubectl create -f pvc-inspect.yaml
pod/pvc-inspect created

$ kubectl get pods
NAME                             READY   STATUS    RESTARTS   AGE
canary-aqw8jtfo-f54b9749-q5wqj   1/1     Running   0          20m
pvc-inspect                      1/1     Running   0          35s
redis-9946db5cc-8g6zz            1/1     Running   0          14m

Use kubectl exec to open a shell inside the pod so /data can be inspected:

$ kubectl exec -it pvc-inspect -- sh
/ # cd /data
/data # ls -la
total 24
drwx------    3 999      root          4096 Aug  3 17:57 .
drwxr-xr-x    1 root     root          4096 Aug  3 18:08 ..
drwx------    2 999      root         16384 Aug  3 17:57 lost+found
/data # rm -rf lost+found/
/data # exit

Note how /data contained a directory called lost+found that needed to be removed.
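
An alternative to removing lost+found by hand is to point PostgreSQL at a subdirectory of the mount, which the official postgres image supports through the PGDATA environment variable. A sketch of that change to the container section of db-deployment.yaml (the pgdata subdirectory name is arbitrary):

      containers:
      - image: postgres:11
        name: postgres
        env:
        - name: PGDATA                     # initdb creates and uses this subdirectory
          value: /var/lib/postgresql/data/pgdata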

Delete the temporary pod:

$ kubectl delete pod pvc-inspect
pod "pvc-inspect" deleted

Create the db deployment again, which completes successfully this time:

$ kubectl create -f db-deployment.yaml
deployment.extensions/db created

$ kubectl get pods
NAME                             READY   STATUS    RESTARTS   AGE
canary-aqw8jtfo-f54b9749-q5wqj   1/1     Running   0          23m
db-6b4fbb57d9-8h978              1/1     Running   0          19s
redis-9946db5cc-8g6zz            1/1     Running   0          17m

$ kubectl logs db-6b4fbb57d9-8h978
PostgreSQL init process complete; ready for start up.

2019-08-03 18:12:01.108 UTC [1]
LOG:  listening on IPv4 address "0.0.0.0", port 5432
2019-08-03 18:12:01.108 UTC [1]
LOG:  listening on IPv6 address "::", port 5432
2019-08-03 18:12:01.114 UTC [1]
LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2019-08-03 18:12:01.135 UTC [50]
LOG:  database system was shut down at 2019-08-03 18:12:01 UTC
2019-08-03 18:12:01.141 UTC [1]
LOG:  database system is ready to accept connections

Create the wordcount database and role:

$ kubectl exec -it db-6b4fbb57d9-8h978 -- psql -U postgres
psql (11.4 (Debian 11.4-1.pgdg90+1))
Type "help" for help.

postgres=# create database wordcount;
CREATE DATABASE
postgres=# \q

$ kubectl exec -it db-6b4fbb57d9-8h978 -- psql -U postgres wordcount
psql (11.4 (Debian 11.4-1.pgdg90+1))
Type "help" for help.

wordcount=# CREATE ROLE wordcount_dbadmin;
CREATE ROLE
wordcount=# ALTER ROLE wordcount_dbadmin LOGIN;
ALTER ROLE
wordcount=# ALTER USER wordcount_dbadmin PASSWORD 'MYNEWPASS';
ALTER ROLE
wordcount=# \q

Create the db service:

$ kubectl create -f db-service.yaml
service/db created
$ kubectl describe service db
Name:              db
Namespace:         default
Labels:            io.kompose.service=db
Annotations:       kompose.cmd: kompose convert
                   kompose.version: 1.16.0 (0c01309)
Selector:          io.kompose.service=db
Type:              ClusterIP
IP:                10.59.241.181
Port:              5432  5432/TCP
TargetPort:        5432/TCP
Endpoints:         10.56.2.5:5432
Session Affinity:  None
Events:            <none>

Create the Secret object based on the base64 value of the database password. The plain-text value for the password is stored in a file encrypted with sops:

$ echo MYNEWPASS | base64
MYNEWPASSBASE64

$ sops secrets.yaml.enc

apiVersion: v1
kind: Secret
metadata:
  name: fbe-secret
type: Opaque
data:
  dbpass: MYNEWPASSBASE64

$ sops -d secrets.yaml.enc | kubectl create -f -
secret/fbe-secret created

$ kubectl describe secret fbe-secret
Name:         fbe-secret
Namespace:    default
Labels:       <none>
Annotations:  <none>

Type:  Opaque

Data
====
dbpass:  21 bytes

Create another Secret object representing the Docker Hub credentials:

$ sops -d create_docker_credentials_secret.sh.enc | bash -
secret/myregistrykey created

Since the scenario under consideration is a production-type deployment of the application to GKE, set replicas to 3 in worker-deployment.yaml to ensure that three worker pods are running at all times.
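
The only change relative to the manifest used with minikube is the replicas field:

spec:
  replicas: 3

Then create the worker deployment: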

$ kubectl create -f worker-deployment.yaml
deployment.extensions/worker created

Make sure that three worker pods are running:

$ kubectl get pods
NAME                             READY   STATUS    RESTARTS   AGE
canary-aqw8jtfo-f54b9749-q5wqj   1/1     Running   0          39m
db-6b4fbb57d9-8h978              1/1     Running   0          16m
redis-9946db5cc-8g6zz            1/1     Running   0          34m
worker-8cf5dc699-98z99           1/1     Running   0          35s
worker-8cf5dc699-9s26v           1/1     Running   0          35s
worker-8cf5dc699-v6ckr           1/1     Running   0          35s

$ kubectl logs worker-8cf5dc699-98z99
18:28:08 RQ worker 'rq:worker:1355d2cad49646e4953c6b4d978571f1' started,
 version 1.0
18:28:08 *** Listening on default...

Similarly, set replicas to 2 in app-deployment.yaml:

$ kubectl create -f app-deployment.yaml
deployment.extensions/app created

Make sure that two app pods are running:

$ kubectl get pods
NAME                             READY   STATUS    RESTARTS   AGE
app-7964cff98f-5bx4s             1/1     Running   0          54s
app-7964cff98f-8n8hk             1/1     Running   0          54s
canary-aqw8jtfo-f54b9749-q5wqj   1/1     Running   0          41m
db-6b4fbb57d9-8h978              1/1     Running   0          19m
redis-9946db5cc-8g6zz            1/1     Running   0          36m
worker-8cf5dc699-98z99           1/1     Running   0          2m44s
worker-8cf5dc699-9s26v           1/1     Running   0          2m44s
worker-8cf5dc699-v6ckr           1/1     Running   0          2m44s

Create the app service:

$ kubectl create -f app-service.yaml
service/app created

Note that a service of type LoadBalancer was created:

$ kubectl describe service app
Name:                     app
Namespace:                default
Labels:                   io.kompose.service=app
Annotations:              kompose.cmd: kompose convert
                          kompose.version: 1.16.0 (0c01309)
Selector:                 io.kompose.service=app
Type:                     LoadBalancer
IP:                       10.59.255.31
LoadBalancer Ingress:     34.83.242.171
Port:                     5000  5000/TCP
TargetPort:               5000/TCP
NodePort:                 5000  31305/TCP
Endpoints:                10.56.1.6:5000,10.56.2.12:5000
Session Affinity:         None
External Traffic Policy:  Cluster
Events:
Type    Reason                Age   From                Message
----    ------                ----  ----                -------
Normal  EnsuringLoadBalancer  72s   service-controller  Ensuring load balancer
Normal  EnsuredLoadBalancer   33s   service-controller  Ensured load balancer

Test the application by accessing the endpoint URL based on the IP address corresponding to LoadBalancer Ingress: http://34.83.242.171:5000.
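
You can run the same check from the command line, using the LoadBalancer Ingress address shown above:

$ curl http://34.83.242.171:5000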

We have demonstrated how to create Kubernetes objects such as Deployments, Services, and Secrets from raw Kubernetes manifest files. As your application becomes more complicated, this approach will start showing its limitations, because it will get harder to customize these files per environment (for example, for staging versus integration versus production). Each environment will have its own set of environment values and secrets that you will need to keep track of. In general, it will become more and more complicated to keep track of which manifests have been installed at a given time. Many solutions to this problem exist in the Kubernetes ecosystem, and one of the most common ones is to use the Helm package manager. Think of Helm as the Kubernetes equivalent of the yum and apt package managers.

The next section shows how to use Helm to install and customize Prometheus and Grafana inside the GKE cluster.

Installing Prometheus and Grafana Helm Charts

In its current version (v2 as of this writing), Helm has a server-side component called Tiller that needs certain permissions inside the Kubernetes cluster.

Create a new Kubernetes Service Account for Tiller and give it the proper permissions:

$ kubectl -n kube-system create sa tiller

$ kubectl create clusterrolebinding tiller \
  --clusterrole cluster-admin \
  --serviceaccount=kube-system:tiller

$ kubectl patch deploy --namespace kube-system \
  tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'

Download and install the Helm binary for your operating system from the official Helm release page, and then install Tiller with the helm init command:

$ helm init

Create a namespace called monitoring:

$ kubectl create namespace monitoring
namespace/monitoring created

Install the Prometheus Helm chart in the monitoring namespace:

$ helm install --name prometheus --namespace monitoring stable/prometheus
NAME:   prometheus
LAST DEPLOYED: Tue Aug 27 12:59:40 2019
NAMESPACE: monitoring
STATUS: DEPLOYED

List pods, services, and configmaps in the monitoring namespace:

$ kubectl get pods -nmonitoring
NAME                                             READY   STATUS    RESTARTS AGE
prometheus-alertmanager-df57f6df6-4b8lv          2/2     Running   0        3m
prometheus-kube-state-metrics-564564f799-t6qdm   1/1     Running   0        3m
prometheus-node-exporter-b4sb9                   1/1     Running   0        3m
prometheus-node-exporter-n4z2g                   1/1     Running   0        3m
prometheus-node-exporter-w7hn7                   1/1     Running   0        3m
prometheus-pushgateway-56b65bcf5f-whx5t          1/1     Running   0        3m
prometheus-server-7555945646-d86gn               2/2     Running   0        3m

$ kubectl get services -nmonitoring
NAME                            TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)    AGE
prometheus-alertmanager         ClusterIP   10.0.6.98     <none>        80/TCP     3m51s
prometheus-kube-state-metrics   ClusterIP   None          <none>        80/TCP     3m51s
prometheus-node-exporter        ClusterIP   None          <none>        9100/TCP   3m51s
prometheus-pushgateway          ClusterIP   10.0.13.216   <none>        9091/TCP   3m51s
prometheus-server               ClusterIP   10.0.4.74     <none>        80/TCP     3m51s

$ kubectl get configmaps -nmonitoring
NAME                      DATA   AGE
prometheus-alertmanager   1      3m58s
prometheus-server         3      3m58s

Connect to Prometheus UI via the kubectl port-forward command:

$ export PROMETHEUS_POD_NAME=$(kubectl get pods --namespace monitoring \
  -l "app=prometheus,component=server" -o jsonpath="{.items[0].metadata.name}")

$ echo $PROMETHEUS_POD_NAME
prometheus-server-7555945646-d86gn

$ kubectl --namespace monitoring port-forward $PROMETHEUS_POD_NAME 9090
Forwarding from 127.0.0.1:9090 -> 9090
Forwarding from [::1]:9090 -> 9090
Handling connection for 9090

Go to localhost:9090 in a browser and see the Prometheus UI.
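
From the Graph tab of the Prometheus UI, you can query the metrics scraped from the chart’s default targets. For example, assuming the default kubelet/cAdvisor scrape jobs set up by the stable/prometheus chart, a query along these lines shows CPU usage per namespace:

sum(rate(container_cpu_usage_seconds_total[5m])) by (namespace)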

Install the Grafana Helm chart in the monitoring namespace:

$ helm install --name grafana --namespace monitoring stable/grafana
NAME:   grafana
LAST DEPLOYED: Tue Aug 27 13:10:02 2019
NAMESPACE: monitoring
STATUS: DEPLOYED

List Grafana-related pods, services, configmaps, and secrets in the monitoring namespace:

$ kubectl get pods -nmonitoring | grep grafana
grafana-84b887cf4d-wplcr                         1/1     Running   0

$ kubectl get services -nmonitoring | grep grafana
grafana                         ClusterIP   10.0.5.154    <none>        80/TCP

$ kubectl get configmaps -nmonitoring | grep grafana
grafana                   1      99s
grafana-test              1      99s

$ kubectl get secrets -nmonitoring | grep grafana
grafana                                     Opaque
grafana-test-token-85x4x                    kubernetes.io/service-account-token
grafana-token-jw2qg                         kubernetes.io/service-account-token

Retrieve the password for the admin user for the Grafana web UI:

$ kubectl get secret --namespace monitoring grafana \
  -o jsonpath="{.data.admin-password}" | base64 --decode ; echo

SOMESECRETTEXT

Connect to the Grafana UI using the kubectl port-forward command:

$ export GRAFANA_POD_NAME=$(kubectl get pods --namespace monitoring \
  -l "app=grafana,release=grafana" -o jsonpath="{.items[0].metadata.name}")

$ kubectl --namespace monitoring port-forward $GRAFANA_POD_NAME 3000
Forwarding from 127.0.0.1:3000 -> 3000
Forwarding from [::1]:3000 -> 3000

Go to localhost:3000 in a browser and see the Grafana UI. Log in as user admin with the password retrieved above.
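
To graph the Prometheus metrics from Grafana, add a data source of type Prometheus in the Grafana UI. Because both Helm releases run in the same cluster, the in-cluster DNS name of the prometheus-server service (which listens on port 80, as shown in the earlier service listing) should work as the data source URL:

http://prometheus-server.monitoring.svc.cluster.local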

List the charts currently installed with helm list. When a chart is installed, the current installation is called a “Helm release”:

$ helm list
NAME        REVISION  UPDATED                   STATUS    CHART             APP VERSION  NAMESPACE
grafana     1         Tue Aug 27 13:10:02 2019  DEPLOYED  grafana-3.8.3     6.2.5        monitoring
prometheus  1         Tue Aug 27 12:59:40 2019  DEPLOYED  prometheus-9.1.0  2.11.1       monitoring

Most of the time you will need to customize a Helm chart. It is easier to do that if you download the chart and install it from the local filesystem with helm.

Get the latest stable Prometheus and Grafana Helm charts with the helm fetch command, which will download tgz archives of the charts:

$ mkdir charts
$ cd charts
$ helm fetch stable/prometheus
$ helm fetch stable/grafana
$ ls -la
total 80
drwxr-xr-x   4 ggheo  staff    128 Aug 27 13:59 .
drwxr-xr-x  15 ggheo  staff    480 Aug 27 13:55 ..
-rw-r--r--   1 ggheo  staff  16195 Aug 27 13:55 grafana-3.8.3.tgz
-rw-r--r--   1 ggheo  staff  23481 Aug 27 13:54 prometheus-9.1.0.tgz

Unarchive the tgz files, then remove them:

$ tar xfz prometheus-9.1.0.tgz; rm prometheus-9.1.0.tgz
$ tar xfz grafana-3.8.3.tgz; rm grafana-3.8.3.tgz

The templatized Kubernetes manifests are stored by default in a directory called templates under the chart directory, so in this case these locations would be prometheus/templates and grafana/templates. The configuration values for a given chart are declared in the values.yaml file in the chart directory.

As an example of a Helm chart customization, let’s add a persistent volume to Grafana, so we don’t lose the data when we restart the Grafana pods.

Modify the file grafana/values.yaml and set the value of the enabled subkey under the persistence parent key to true (by default it is false) in this section:

## Enable persistence using Persistent Volume Claims
## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
##
persistence:
  enabled: true
  # storageClassName: default
  accessModes:
    - ReadWriteOnce
  size: 10Gi
  # annotations: {}
  finalizers:
    - kubernetes.io/pvc-protection
  # subPath: ""
  # existingClaim:

Upgrade the existing grafana Helm release with the helm upgrade command. The last argument of the command is the name of the local directory containing the chart. Run this command in the parent directory of the grafana chart directory:

$ helm upgrade grafana grafana/
Release "grafana" has been upgraded. Happy Helming!

Verify that a PVC has been created for Grafana in the monitoring namespace:

$ kubectl describe pvc grafana -nmonitoring
Name:        grafana
Namespace:   monitoring
StorageClass: standard
Status:      Bound
Volume:      pvc-31d47393-c910-11e9-87c5-42010a8a0021
Labels:      app=grafana
             chart=grafana-3.8.3
             heritage=Tiller
             release=grafana
Annotations: pv.kubernetes.io/bind-completed: yes
             pv.kubernetes.io/bound-by-controller: yes
             volume.beta.kubernetes.io/storage-provisioner:kubernetes.io/gce-pd
Finalizers:  [kubernetes.io/pvc-protection]
Capacity:    10Gi
Access Modes: RWO
Mounted By:  grafana-84f79d5c45-zlqz8
Events:
Type    Reason                 Age   From                         Message
----    ------                 ----  ----                         -------
Normal  ProvisioningSucceeded  88s   persistentvolume-controller  Successfully
provisioned volume pvc-31d47393-c910-11e9-87c5-42010a8a0021
using kubernetes.io/gce-pd

Another example of a Helm chart customization, this time for Prometheus, is modifying the default retention period of 15 days for the data stored in Prometheus.

Change the retention value in prometheus/values.yaml to 30 days:

  ## Prometheus data retention period (default if not specified is 15 days)
  ##
  retention: "30d"

Upgrade the existing Prometheus Helm release by running helm upgrade. Run this command in the parent directory of the prometheus chart directory:

$ helm upgrade prometheus prometheus
Release "prometheus" has been upgraded. Happy Helming!

Verify that the retention period was changed to 30 days. Run kubectl describe against the running Prometheus pod in the monitoring namespace and look at the Args section of the output:

$ kubectl get pods -nmonitoring
NAME                                            READY   STATUS   RESTARTS   AGE
grafana-84f79d5c45-zlqz8                        1/1     Running  0          9m
prometheus-alertmanager-df57f6df6-4b8lv         2/2     Running  0          87m
prometheus-kube-state-metrics-564564f799-t6qdm  1/1     Running  0          87m
prometheus-node-exporter-b4sb9                  1/1     Running  0          87m
prometheus-node-exporter-n4z2g                  1/1     Running  0          87m
prometheus-node-exporter-w7hn7                  1/1     Running  0          87m
prometheus-pushgateway-56b65bcf5f-whx5t         1/1     Running  0          87m
prometheus-server-779ffd445f-4llqr              2/2     Running  0          3m

$ kubectl describe pod prometheus-server-779ffd445f-4llqr -nmonitoring
OUTPUT OMITTED
      Args:
      --storage.tsdb.retention.time=30d
      --config.file=/etc/config/prometheus.yml
      --storage.tsdb.path=/data
      --web.console.libraries=/etc/prometheus/console_libraries
      --web.console.templates=/etc/prometheus/consoles
      --web.enable-lifecycle

Destroying the GKE Cluster

It pays (literally) to remember to delete any cloud resources you’ve been using for testing purposes if you do not need them anymore. Otherwise, you may have an unpleasant surprise when you receive the billing statement from your cloud provider at the end of the month.

Destroy the GKE cluster via pulumi destroy:

$ pulumi destroy

Previewing destroy (dev):

     Type                            Name                            Plan
 -   pulumi:pulumi:Stack             pythonfordevops-gke-pulumi-dev  delete
 -   ├─ kubernetes:core:Service      ingress                         delete
 -   ├─ kubernetes:apps:Deployment   canary                          delete
 -   ├─ pulumi:providers:kubernetes  gke_k8s                         delete
 -   ├─ gcp:container:Cluster        gke-cluster                     delete
 -   └─ random:index:RandomString    password                        delete

Resources:
    - 6 to delete

Do you want to perform this destroy? yes
Destroying (dev):

     Type                            Name                            Status
 -   pulumi:pulumi:Stack             pythonfordevops-gke-pulumi-dev  deleted
 -   ├─ kubernetes:core:Service      ingress                         deleted
 -   ├─ kubernetes:apps:Deployment   canary                          deleted
 -   ├─ pulumi:providers:kubernetes  gke_k8s                         deleted
 -   ├─ gcp:container:Cluster        gke-cluster                     deleted
 -   └─ random:index:RandomString    password                        deleted

Resources:
    - 6 deleted

Duration: 3m18s

Exercises

  • Use Google Cloud SQL for PostgreSQL, instead of running PostgreSQL in a Docker container in GKE.

  • Use the AWS Cloud Development Kit to launch an Amazon EKS cluster, and deploy the example application to that cluster.

  • Use Amazon RDS PostgreSQL instead of running PostgreSQL in a Docker container in EKS.

  • Experiment with Kustomize as an alternative to Helm for managing Kubernetes manifest YAML files.
