Chapter 5. Google Cloud Kubernetes Engine

Google Cloud Kubernetes Engine is a fully managed, secure platform that provides you the ability to run your containerized workloads. This chapter contains recipes for creating and managing your clusters and containers, including methods to automate deployments, plus deploying real-world applications such as MongoDB and Java services.

All code for this chapter is located at: https://github.com/ruiscosta/google-cloud-cookbook/06-kubernetes

5.1 Create a Zonal Cluster

Problem

You want to run an application on Kubernetes, but only within a single zone of a region. You want to be able to create and upgrade your Kubernetes cluster quickly, and you are less concerned about high availability or placing clusters closer to geographically dispersed users.

Solution

Run your application on a zonal Kubernetes cluster. With a single control plane managing your Kubernetes cluster, it’s very easy to get started quickly.

Prerequisites

Ensure the following API is enabled:

  • Kubernetes Engine API

  1. Sign in to Google Cloud Console.

  2. In the main menu, navigate to Compute and click on Kubernetes Engine

  3. Click the Create button at the top of the screen

  4. Click Configure next to the Standard option.

  5. In the Cluster Basics section:

    • Choose a Name for your cluster

    • In Location Type, select Zonal

    • In Zone, select any zone of your choice

    • Leave the remaining settings to default

  6. The left navigation pane offers many other options that could be set; however, for the purposes of this recipe we will leave them at their defaults.

  7. Click Create at the bottom of the screen

  8. You will be navigated back to the Clusters screen where you will see your cluster spinning up. This process can take more than a minute to complete.

  9. Once complete, you will see a green checkmark icon next to the name of your cluster. Your cluster is now ready for the deployment of applications.
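
If you prefer the command line, you can create an equivalent zonal cluster from Cloud Shell with gcloud. This is a minimal sketch; the cluster name and zone below are placeholders, so substitute your own:

    gcloud container clusters create my-zonal-cluster \
        --zone us-central1-a \
        --num-nodes 3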

Discussion

In this recipe, you used the Google Cloud Console to create a zonal Kubernetes cluster with GKE. Creating a zonal GKE cluster is a quick and easy way to get going with Kubernetes versus trying to launch a self-managed Kubernetes cluster. The Kubernetes control plane is managed by GCP, so you don’t need to worry about the operational overhead that comes with managing it. Beyond this recipe, you should look through the configuration options you have with GKE, which afford you tons of flexibility and additional configuration around node pools, automation, networking, security, metadata, and more.

5.2 Create a Regional Cluster

Problem

You want to run an application on Kubernetes, but want to run that application across two or more zones within a region. You value the availability of your application over the simplicity and speed that come with a zonal Kubernetes cluster.

Solution

Run your application on a regional Kubernetes cluster. Regional clusters provide higher availability, fault tolerance, and zero-downtime upgrades by spreading your application across multiple zones within a single region, making it more resilient.

  1. Sign in to Google Cloud Console.

  2. In the main menu, navigate to Compute and click on Kubernetes Engine

  3. Click the Create button at the top of the screen

  4. Click Configure next to the Standard option.

  5. In the Cluster Basics section:

    • Choose a Name for your cluster

    • In Location Type, select Regional

    • In Region, select any region of your choice

    • Leave the remaining settings to default

  6. The left navigation pane offers many other options that could be set; however, for the purposes of this recipe we will leave them at their defaults.

  7. Click Create at the bottom of the screen

  8. You will be navigated back to the Clusters screen where you will see your cluster spinning up. This process can take more than a minute to complete.

  9. Once complete, you will see a green checkmark icon next to the name of your cluster.
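
If you prefer the command line, a minimal sketch of the equivalent gcloud command is shown below; the cluster name and region are placeholders. Note that --num-nodes is the node count per zone, so a regional cluster spanning three zones will have three times that many nodes in total:

    gcloud container clusters create my-regional-cluster \
        --region us-east1 \
        --num-nodes 3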

Discussion

In this recipe, you used the Google Cloud Console to create a regional Kubernetes cluster with GKE. Creating a regional GKE cluster is a quick and easy way to get going with Kubernetes versus trying to self-manage a Kubernetes cluster of your own. With a regional cluster, you have nodes deployed across the zones within that region, so expect the Number of Nodes, Total vCPUs, and Total Memory to be larger than a zonal GKE deployment with the same configuration. The control plane (managed by GCP) is also spread across the zones within the region, so you don’t need to worry about configuring it beyond deploying the cluster itself. Beyond this recipe, you should look through the configuration options you have with GKE, which afford you tons of flexibility and configurability around node pools, automation, networking, security, metadata, and more.

5.3 Deploy a MongoDB Database with StatefulSets

Problem

You want to run MongoDB on Google Cloud Kubernetes Engine, which requires access to a persistent disk.

Solution

Because MongoDB requires a persistent disk, you deploy it as a stateful application using the StatefulSet controller. A StatefulSet gives each pod a stable, persistent identity, so MongoDB keeps access to the same volume whenever its pods are rescheduled or restarted.

  1. Sign in to Google Cloud Console.

  2. Launch the Google Cloud Shell.

  3. Run the following command to instantiate a hello-world cluster:

    gcloud container clusters create hello-world \
        --region us-central1

  4. It will take a few minutes for the cluster to start up.

  5. Use the StatefulSet example directory of the cloned mongo-k8s-sidecar repository as the working directory for the remaining steps:

    cd ./mongo-k8s-sidecar/example/StatefulSet/

  6. In your working directory, run the following command to create your StorageClass, which tells Kubernetes what disk type you want for your MongoDB database:

    cat >> mongodb_storage_class_ssd.yaml <<EOL
    kind: StorageClass
    apiVersion: storage.k8s.io/v1beta1
    metadata:
      name: fast
    provisioner: kubernetes.io/gce-pd
    parameters:
      type: pd-ssd
    EOL

  7. Run the following command to apply the StorageClass to Kubernetes:

    kubectl apply -f mongodb_storage_class_ssd.yaml

  8. In Kubernetes terms, a service describes policies or rules for accessing specific pods. In brief, a headless service is one that doesn’t prescribe load balancing. When combined with StatefulSets, this will give you individual DNSs to access your pods, and in turn a way to connect to all of your MongoDB nodes individually.

  9. Run the following command to create the mongo-statefulset.yaml file. You can also access the code for this recipe at: https://github.com/ruiscosta/google-cloud-cookbook/06-kubernetes

    cat >> mongo-statefulset.yaml <<EOL
    apiVersion: v1
    kind: Service
    metadata:
      name: mongo
      labels:
        name: mongo
    spec:
      ports:
      - port: 27017
        targetPort: 27017
      clusterIP: None
      selector:
        role: mongo
    ---
    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: mongo
    spec:
      serviceName: "mongo"
      replicas: 3
      selector:
        matchLabels:
          role: mongo
          environment: test
      template:
        metadata:
          labels:
            role: mongo
            environment: test
        spec:
          terminationGracePeriodSeconds: 10
          containers:
            - name: mongo
              image: mongo
              command:
                - mongod
                - "--replSet"
                - rs0
              ports:
                - containerPort: 27017
              volumeMounts:
                - name: mongo-persistent-storage
                  mountPath: /data/db
            - name: mongo-sidecar
              image: cvallance/mongo-k8s-sidecar
              env:
                - name: MONGO_SIDECAR_POD_LABELS
                  value: "role=mongo,environment=test"
      volumeClaimTemplates:
      - metadata:
          name: mongo-persistent-storage
          annotations:
            volume.beta.kubernetes.io/storage-class: "fast"
        spec:
          accessModes: [ "ReadWriteOnce" ]
          resources:
            requests:
              storage: 100Gi
    EOL
    
  10. To deploy the Headless Service and the StatefulSet run the following command:

    kubectl apply -f mongo-statefulset.yaml

  11. Before connecting to the MongoDB replica set, validate it’s running by running the following command:

    kubectl get statefulset
    
  12. You should receive output similar to the following:
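
    The output below is illustrative; the exact columns and ages will vary with your kubectl version and cluster:

    NAME      READY     AGE
    mongo     3/3       2m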

  13. To list the pods in your cluster run the following command:

    kubectl get pods
    
  14. Connect to the first replica set member:

    kubectl exec -ti mongo-0 -- mongo

  15. To instantiate the replica set run the following command:

    rs.initiate()
    exit

  16. If you need to scale the replica set, run the following command to increase the replica set from 3 to 5:

    kubectl scale --replicas=5 statefulset mongo

  17. If you need to scale down the replica set, run the following command to decrease the replica set from 5 to 3:

    kubectl scale --replicas=3 statefulset mongo
    
  18. You can now connect to your MongoDB replica set using a URI of the following form:

    "mongodb://mongo-0.mongo,mongo-1.mongo,mongo-2.mongo:27017/dbname_?"

  19. Each pod in a StatefulSet backed by a headless service has a stable DNS name that follows the format <pod-name>.<service-name>.
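
To see these stable DNS names in action, you can open a mongo shell on one pod and connect to another pod by its per-pod name. This is an optional check; it assumes the StatefulSet runs in the default namespace and uses the container name mongo from the manifest above:

    kubectl exec -ti mongo-1 -c mongo -- mongo --host mongo-0.mongo

The shell connects to mongo-0 through the headless service, which is exactly how the members of the replica set address each other.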

Discussion

In this recipe, you deployed MongoDB as a stateful application. A service describes policies or rules for accessing specific pods; put simply, a headless service is one that doesn’t prescribe load balancing. When combined with a StatefulSet, it gives each pod an individual DNS name, which lets you connect to each of your MongoDB nodes individually. The StatefulSet defines the workload that runs MongoDB and orchestrates the underlying Kubernetes resources. The terminationGracePeriodSeconds setting is used to gracefully shut down pods when you scale down the number of replicas. Finally, the volumeClaimTemplates section references the StorageClass you created earlier to provision a volume for each replica. With this, you have successfully deployed MongoDB on Kubernetes.

5.4 Resizing a Cluster

Problem

You are running a Kubernetes cluster that either has too few nodes and is unable to meet spiky demand for your application, or has too many nodes and is overprovisioned for the level of traffic it is receiving.

Solution

Resize the number of nodes your Kubernetes cluster is running to ensure the cluster is sized appropriately for your application’s traffic patterns.

Prerequisites

Ensure the following APIs are enabled:

  • Kubernetes Engine API

Ensure you have a Zonal or Regional cluster running that you can resize (see Create a Zonal Cluster or Create a Regional Cluster above)

  1. Sign in to Google Cloud Console.

  2. In the main menu, navigate to Compute and click on Kubernetes Engine

  3. If you completed the recipes Create a Zonal Cluster or Create a Regional Cluster, you should have at least one Kubernetes cluster running. In the screenshot below, you will see we have two regional clusters running, one in us-east1 and another in us-west1. For this recipe, we will increase the number of nodes in cluster-1. (Note: The process is the same for resizing a cluster’s nodes, whether it’s regional or zonal).

  4. Click on the name of your cluster, here it is cluster-1

  5. Now, click on Nodes, underneath the name of your cluster

  6. You should now see one node pool, called default-pool. Click default-pool (or whatever the name of your particular node pool is).

  7. Click Edit at the top of the node pool screen.

  8. Now we are able to increase/decrease the default size of our node pool to any number of nodes that we prefer. In this example, we will increase the node pool size from 3 to 5. (Note: If you want your Kubernetes cluster to autoscale up based on node utilization, you can check the Enable autoscaling box. Checking the box will give you the option to set minimum and maximum node thresholds.).

  9. Click the Save button at the bottom of the screen. You should see this screen:

  10. This may take a minute or two, but once completed, you have effectively resized your default node pool from running 3 nodes to running 5 nodes in the default-pool nodepool.
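
You can also resize a node pool from the command line. The sketch below uses the example cluster-1 regional cluster in us-east1 from this recipe; substitute your own cluster name, node pool, and location (use --zone instead of --region for a zonal cluster). For regional clusters, --num-nodes is the node count per zone:

    gcloud container clusters resize cluster-1 \
        --node-pool default-pool \
        --num-nodes 5 \
        --region us-east1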

Discussion

Resizing the number of nodes in a nodepool is a relatively simple process. The number of nodes you run, as well as the number of node pools you run, should be thought out carefully in order to meet the needs of your particular application. Enabling autoscaling in your node pools is a major value add that Kubernetes brings to the table, allowing your node pools to increase and decrease based on the utilization of that node and within a range you specify.
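
If you would rather enable node pool autoscaling from the command line, a minimal sketch looks like the following, again using the example cluster-1 in us-east1; for regional clusters the minimum and maximum apply per zone:

    gcloud container clusters update cluster-1 \
        --node-pool default-pool \
        --enable-autoscaling \
        --min-nodes 1 \
        --max-nodes 5 \
        --region us-east1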

5.5 Load Testing with Locust

Problem

You want to run a load test against your Kubernetes cluster(s). You want the load test to be executed on one or more machines separate from your application, you want to write the load test scenarios in Python, and you want the flexibility to configure the test using a web-based UI. This distributed load test should let you understand and validate how your application holds up under real-life scenarios of users using your website and/or web application.

Solution

Use Locust to run a distributed load test against your application, and watch in real time how your deployment endures through the load test. Using Locust, you are able to test your application regardless of what infrastructure it runs on top of.

Prerequisites

Ensure the following APIs are enabled:

  • Cloud Build API

  • Kubernetes Engine

  • Google App Engine Admin API

  • Cloud Storage

  1. Sign in to Google Cloud Console.

  2. In the main menu, navigate to Compute and click on Kubernetes Engine

  3. Click the Create button at the top of the screen

  4. Click Configure next to the Standard option.

  5. In the Cluster Basics section:

    • In Name, set your cluster name to locust-cluster

    • In Location Type, select Zonal

    • In Zone, select us-central1-b

  6. Now click default-pool under Node Pools in the left hand menu

    • Check the box next to Enable autoscaling

    • Set the value of Minimum number of nodes to 3

    • Set the value of Maximum number of nodes to 10

    • The rest can remain on default settings

  7. Click Create at the bottom of the screen

  8. After a minute or so you should see this in the Clusters view in your console

  9. Now let’s get the credentials for the locust-cluster so we can execute kubectl commands on it from the cloud shell.

  10. Open up the Cloud Shell by clicking this button in the top right corner of your screen

  11. First, let’s set a variable for our Project ID so we don’t need to retype it every single time. Enter the following in your Cloud Shell and press enter:

                    PROJECT=$(gcloud info --format='value(config.project)')
                  
  12. Then, type the following into the cloud shell and press enter:

    gcloud container clusters get-credentials locust-cluster \
        --zone us-central1-b

  13. Now that we have the credentials to use this cluster from the command line, let’s pull the git repository with the sample code for our web app, as well as our locust configuration. Enter the following to your cloud shell:

    git clone \
        https://github.com/GoogleCloudPlatform/distributed-load-testing-using-kubernetes.git && \
        cd distributed-load-testing-using-kubernetes

  14. Let’s build the Docker image for the Locust application and store it in your project’s Container Registry. This could take a minute:

    gcloud builds submit --tag \
        gcr.io/$PROJECT/locust-tasks:latest docker-image/.

  15. We will now launch the web application to AppEngine. To do this, run the following command in your cloud shell:

    gcloud app deploy sample-webapp/app.yaml \
        --project=$PROJECT

  16. You may be prompted to select a region in which to deploy your App Engine application. You can select any region, but for the purposes of this tutorial you can simply choose us-central (option 14 in the prompt).

  17. You may also be prompted with a summary of what you are deploying and given a Y/n option to proceed with deployment, in which case you should enter Y. Note that the target url gives you the address of the AppEngine application’s endpoint that you can open in a web browser.

  18. Once deployed, enter the target url in a new browser tab and confirm you are able to load the website. It should, very simply, display “Welcome to the ‘Distributed Load Testing Using Kubernetes’ sample web app.”

  19. Now that we’ve deployed our web application, let’s configure and deploy our locust master and worker nodes to our locust-cluster.

  20. First, let’s add another variable to the cloud shell. Enter the following:

    TARGET="$PROJECT.appspot.com"

  21. Now, we need to set the variables for our target web application and our project ID in the locust config files. You can do this quickly by running the following commands in your cloud shell:

    sed -i -e "s/\[TARGET_HOST\]/$TARGET/g" \
        kubernetes-config/locust-master-controller.yaml
    sed -i -e "s/\[TARGET_HOST\]/$TARGET/g" \
        kubernetes-config/locust-worker-controller.yaml
    sed -i -e "s/\[PROJECT_ID\]/$PROJECT/g" \
        kubernetes-config/locust-master-controller.yaml
    sed -i -e "s/\[PROJECT_ID\]/$PROJECT/g" \
        kubernetes-config/locust-worker-controller.yaml

  22. Now, we will deploy Locust to our locust-cluster. To do so, run the following in your cloud shell:

    kubectl apply -f \
        kubernetes-config/locust-master-controller.yaml
    kubectl apply -f \
        kubernetes-config/locust-master-service.yaml
    kubectl apply -f \
        kubernetes-config/locust-worker-controller.yaml

  23. Once completed, enter kubectl get services in the cloud shell and ensure your services are running. You should see something like this:

  24. Open a new browser tab, and enter the EXTERNAL-IP for the locust-master service and append “:8089” into the URL bar (in our example, that means 34.67.174.219:8089) and press enter. You should now see the Locust testing UI:

  25. Ensure that the HOST field in the top right corner of the UI accurately has the web address of your web application that we deployed to AppEngine earlier. If it does not, please return to Step #20 and verify the commands were run correctly.

  26. We can now run a simple load test by entering numbers for Number of users to simulate, and Hatch rate. For simple testing purposes, you can enter 50 for Number of users to simulate and 1 for the Hatch rate. Once entered, press Start swarming.

  27. Congratulations! You are running a distributed load test on your sample web application. Once you’re done observing how your application performs in the Locust UI, you can press Stop to end the test.
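
If you want to generate more load than a single worker pod can produce, you can scale out the Locust workers before or during a test. The command below assumes the workers are defined as a Deployment named locust-worker, as in the sample repository's locust-worker-controller.yaml; if your version of the sample uses a different workload type or name, adjust the resource accordingly:

    kubectl scale deployment locust-worker --replicas=10

Because you enabled autoscaling when creating locust-cluster, GKE can add nodes (up to your maximum of 10) if the extra workers don't fit on the existing ones.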

Discussion

In this example, you deployed a simple web application on AppEngine and then launched a Locust deployment to your Kubernetes cluster. The Locust deployment was able to mimic real-life traffic against your AppEngine application. Locust is a powerful tool that can be configured (using Python and the web UI) to load test almost any application. You can write your own test cases in Python to ensure all parts of your application are properly tested. More information about Locust can be found at www.locust.io.

5.6 Multi-Cluster Ingress

Problem

You have an application that runs on multiple Kubernetes clusters located in different regions, and you want to automatically route user traffic to the cluster nearest to each user using a single HTTP(S) load balancer.

Solution

Use Multicluster Ingress for Anthos in order to run your application across as many Kubernetes clusters as you’d like, and route traffic to the nearest cluster based on the origin of the request.

Prerequisites

Ensure the following APIs are enabled:

  • Kubernetes Engine API

  • GKE Hub

  • Anthos

  • Multi Cluster Ingress API

First we will create two regional clusters, in two different regions (us-east1 and us-west1)

  1. Sign in to Google Cloud Console.

  2. In the main menu, navigate to Compute and click on Kubernetes Engine

  3. Click the Create button at the top of the screen

  4. Click Configure next to the Standard option.

  5. In the Cluster Basics section:

    • In Name, set your cluster name to cluster-1

    • In Location Type, select Regional

    • In Region, select us-east1

    • Leave the remaining settings to default

  6. Click Create at the bottom of the screen

You will be navigated back to the Clusters screen where you will see your cluster spinning up. This process can take more than a minute to complete.

  1. We will repeat this process and create another Regional cluster in a different region to the one we just created.

  2. Click the Create button at the top of the screen

  3. Click Configure next to the Standard option.

  4. In the Cluster Basics section:

    • In Name, set your cluster name to cluster-2

    • In Location Type, select Regional

    • In Region, select us-west1

    • Leave the remaining settings to default

  5. Click Create at the bottom of the screen

  6. You will be navigated back to the Clusters screen where you will see your cluster spinning up. This process can take more than a minute to complete.

  7. Now we will register the clusters to the same environment.

  8. In the main menu, navigate to Anthos and click on Clusters in the submenu

  9. Now click Register Existing Cluster

  10. You will now see both the clusters you created are ready to be registered

  11. Next to cluster-1, hit REGISTER

  12. You’ll be asked for a service account to register to the environ; choose Workload Identity

  13. Click Submit

  14. Repeat steps 11 - 13 for the second cluster, cluster-2

  15. Now, we will set up Ingress for Anthos.

  16. Next, click Features within the Anthos screen

  17. Click Enable next to Ingress, then click Enable Ingress

  18. In the Config Membership dropdown, select the first cluster you spun up (cluster-1) and click Install. After a minute or so, refresh the screen and you should see the following

  19. Open up the cloud shell by clicking this button in the top right corner of your screen

  20. Type the following into your cloud shell to make a directory that will hold the .yaml files we will need for the remainder of this tutorial:

    mkdir multicluster-ingress-demo && cd multicluster-ingress-demo
  21. Before we can work with our clusters via kubectl in the cloud shell, we need to configure our cluster access by generating a kubeconfig entry. You can do this by running the following command for both of your clusters in the cloud shell:

    gcloud container clusters get-credentials cluster-1 \
        --region us-east1
    gcloud container clusters get-credentials cluster-2 \
        --region us-west1
  22. Ensure you received a confirmation for each cluster:

  23. Now we can work with the clusters from the cloud shell command line. Let’s create the namespace for our application to run. You can do this by typing nano namespace.yaml into the cloud shell.

  24. Paste this into the yaml file:

    apiVersion: v1
    kind: Namespace
    metadata:
      name: zoneprinter
  25. Save the file. Before we proceed, let’s set the shell variable for our Project ID. Enter the following in the cloud shell:

    PROJECT=$(gcloud info --format='value(config.project)')
  26. Now let’s apply namespace.yaml to both of our clusters, cluster-1 and cluster-2. You can do this by running the following:

    kubectl config use-context \
        gke_$(echo $PROJECT)_us-east1_cluster-1
    kubectl apply -f namespace.yaml
    kubectl config use-context \
        gke_$(echo $PROJECT)_us-west1_cluster-2
    kubectl apply -f namespace.yaml
  27. We will now deploy a sample app that shows the location of the datacenter you are reaching to both clusters, from an image called zone-printer.

  28. Create a new yaml file by typing nano app.yaml in the gcloud terminal and paste the following into the yaml:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: zone-ingress
      namespace: zoneprinter
      labels:
        app: zoneprinter
    spec:
      selector:
        matchLabels:
          app: zoneprinter
      template:
        metadata:
          labels:
            app: zoneprinter
        spec:
          containers:
          - name: frontend
            image: gcr.io/google-samples/zone-printer:0.2
            ports:
            - containerPort: 8080
  29. Save the file. Then let’s apply app.yaml to both of our clusters, cluster-1 and cluster-2. You can do this by running the following in the cloud shell:

    kubectl config use-context \
        gke_$(echo $PROJECT)_us-east1_cluster-1
    kubectl apply -f app.yaml
    kubectl config use-context \
        gke_$(echo $PROJECT)_us-west1_cluster-2
    kubectl apply -f app.yaml
  30. Now the app is running in both clusters, in the same namespace. Let’s wrap up by creating the MultiClusterService and MultiClusterIngress objects.

  31. Let’s first create the MultiClusterService. Create a new yaml file by typing nano mcs.yaml in the cloud shell, and paste the following into the yaml:

    apiVersion: networking.gke.io/v1
    kind: MultiClusterService
    metadata:
      name: zone-mcs
      namespace: zoneprinter
    spec:
      template:
        spec:
          selector:
            app: zoneprinter
          ports:
          - name: web
            protocol: TCP
            port: 8080
            targetPort: 8080
  32. Save the file. Now, let’s apply this file to cluster-1:

    kubectl config use-context \
        gke_$(echo $PROJECT)_us-east1_cluster-1
    kubectl apply -f mcs.yaml
  33. Let’s now create the MultiClusterIngress. Create a new yaml file by typing nano mci.yaml in the cloud shell, and paste the following into the yaml:

    apiVersion: networking.gke.io/v1
    kind: MultiClusterIngress
    metadata:
      name: zone-ingress
      namespace: zoneprinter
    spec:
      template:
        spec:
          backend:
            serviceName: zone-mcs
            servicePort: 8080
  34. Save the file. Now, let’s apply this file to cluster-1:

    kubectl apply -f mci.yaml
  35. Finally, let’s pull the Virtual IP (VIP) to access our application from the MultiCluster Ingress. Run this command in your cloud shell:

    kubectl describe mci zone-ingress -n zoneprinter
  36. In the output, under the Status heading you will find an entry that says VIP: <ip address>. If you don’t see VIP: <ip address> immediately, that’s ok, the Ingress may take a few minutes to spin up. Keep running the describe command until you see the IP appear. The desired output should look similar to this:

  37. Once you get the VIP, open a new tab, paste the VIP to the URL bar, and hit enter. You should see a webpage similar to this:

  38. Congratulations! You’ve set up a Multicluster Ingress on your Kubernetes clusters running in different regions.
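
If you prefer to verify from the command line, you can hit the VIP with curl from Cloud Shell or your workstation; replace VIP_ADDRESS with the address from the describe output. The zone-printer sample returns an HTML page naming the Google Cloud zone that served the request, so requests originating from different locations should be answered by different clusters:

    curl -s http://VIP_ADDRESS/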

Discussion

In summary, and in order: you created two GKE clusters in us-east1 and us-west1, registered the clusters to an environ, and enabled the Ingress for Anthos feature. You then created the proper namespace and deployed the zone-printer application to both clusters, used cluster-1 as your config cluster, and deployed a MultiClusterService and MultiClusterIngress to that cluster. Requests now route through an L7 HTTP(S) load balancer to the cluster nearest the origin of the request that is running the application. A multicluster ingress using Ingress for Anthos allows you to route requests to your Kubernetes clusters running anywhere in the world.

5.7 Continuous Delivery with Spinnaker and Kubernetes

Problem

When your application code changes, you need a way to automatically rebuild, retest, and redeploy the new application version to Kubernetes.

Solution

In this recipe you will use Google Cloud Source Repositories, Google Cloud Container Builder, and Spinnaker so that when your application code changes, the change triggers the continuous delivery pipeline to automatically rebuild, retest, and redeploy the new version.

  1. Sign in to Google Cloud Console.

  2. Launch the Google Cloud Shell.

  3. Run the following command to instantiate a Kubernetes cluster:

    gcloud container clusters create spinnaker-cluster \
        --machine-type=n1-standard-2
  4. Create a Cloud Identity Access Management service account to delegate permissions to Spinnaker, allowing it to store data in Cloud Storage:

    gcloud iam service-accounts create spinnaker-account \
        --display-name spinnaker-account
  5. Store the service account email address and your current project ID in the Cloud Shell environment variables:

    export SA_EMAIL=$(gcloud iam service-accounts list \
        --filter="displayName:spinnaker-account" \
        --format='value(email)')
    export PROJECT=$(gcloud info \
        --format='value(config.project)')
  6. Bind the storage.admin role to the newly created service account:

    gcloud projects add-iam-policy-binding $PROJECT \
        --role roles/storage.admin \
        --member serviceAccount:$SA_EMAIL
  7. Download the service account key, you will use this key when you install Spinnaker:

    gcloud iam service-accounts keys create spinnaker-sa.json \
        --iam-account $SA_EMAIL
  8. Create the Cloud Pub/Sub topic for notifications from Container Registry:

    gcloud pubsub topics create projects/$PROJECT/topics/gcr
  9. Create a subscription so Spinnaker can receive notifications of images being pushed:

    gcloud pubsub subscriptions create gcr-triggers \
        --topic projects/${PROJECT}/topics/gcr
  10. Give the Spinnaker service account access to the Pub/Sub subscription:

    export SA_EMAIL=$(gcloud iam service-accounts list \
        --filter="displayName:spinnaker-account" \
        --format='value(email)')
    gcloud beta pubsub subscriptions \
        add-iam-policy-binding gcr-triggers \
        --role roles/pubsub.subscriber \
        --member serviceAccount:$SA_EMAIL
  11. To install Spinnaker you will be using Helm. Helm is a package manager you can use to deploy Kubernetes applications. Download and install Helm:

    wget https://get.helm.sh/helm-v3.1.1-linux-amd64.tar.gz
  12. Extract the files from the downloaded archive file

    tar zxfv helm-v3.1.1-linux-amd64.tar.gz
    cp linux-amd64/helm .
  13. Run the following command to grant Helm access to your cluster:

    kubectl create clusterrolebinding user-admin-binding \
        --clusterrole=cluster-admin \
        --user=$(gcloud config get-value account)
  14. Run the following command to grant Spinnaker the cluster-admin role so it can deploy resources across all namespaces:

    kubectl create clusterrolebinding \
        --clusterrole=cluster-admin \
        --serviceaccount=default:default spinnaker-admin
  15. Add the stable charts deployments to Helm’s usable repositories

    ./helm repo add stable https://charts.helm.sh/stable
    ./helm repo update
  16. Create a bucket for Spinnaker to store its pipeline configuration:

    export PROJECT=$(gcloud info \
        --format='value(config.project)')
    export BUCKET=$PROJECT-spinnaker-config
    gsutil mb -c regional -l us-central1 gs://$BUCKET
  17. Run the following command to create a spinnaker-config.yaml file, which describes how Helm should install Spinnaker:

    export SA_JSON=$(cat spinnaker-sa.json)
    export PROJECT=$(gcloud info --format='value(config.project)')
    export BUCKET=$PROJECT-spinnaker-config
    cat > spinnaker-config.yaml <<EOF
    gcs:
      enabled: true
      bucket: $BUCKET
      project: $PROJECT
      jsonKey: '$SA_JSON'
    dockerRegistries:
    - name: gcr
      address: https://gcr.io
      username: _json_key
      password: '$SA_JSON'
      email: [email protected]
    # Disable minio as the default storage backend
    minio:
      enabled: false
    # Configure Spinnaker to enable GCP services
    halyard:
      spinnakerVersion: 1.19.4
      image:
        repository: us-docker.pkg.dev/spinnaker-community/docker/halyard
        tag: 1.32.0
        pullSecrets: []
      additionalScripts:
        create: true
        data:
          enable_gcs_artifacts.sh: |-
            $HAL_COMMAND config artifact gcs account add gcs-$PROJECT --json-path /opt/gcs/key.json
            $HAL_COMMAND config artifact gcs enable
          enable_pubsub_triggers.sh: |-
            $HAL_COMMAND config pubsub google enable
            $HAL_COMMAND config pubsub google subscription add gcr-triggers \
              --subscription-name gcr-triggers \
              --json-path /opt/gcs/key.json \
              --project $PROJECT \
              --message-format GCR
    EOF
  18. Using the Helm command-line interface, run the following command to deploy the chart with your configuration:

    ./helm install -n default cd stable/spinnaker \
        -f spinnaker-config.yaml \
        --version 2.0.0-rc9 --timeout 10m0s --wait
  19. The command on step 18 will take a few minutes to complete. Once completed you can continue.

  20. Run the following commands to set up port forwarding to Spinnaker:

    export DECK_POD=$(kubectl get pods --namespace default \
        -l "cluster=spin-deck" \
        -o jsonpath="{.items[0].metadata.name}")
    kubectl port-forward --namespace default \
        $DECK_POD 8080:9000 >> /dev/null &
  21. To open the newly deployed Spinnaker interface, click the Web Preview icon in the Cloud Shell and select Preview on Port 8080

  22. You should see the Spinnaker interface

  23. To allow Cloud Build to monitor changes to your source code, you will need to build a Docker image and push it to the Container registry.

  24. In your Cloud Shell download the sample source code provided by Google:

    cd ~
    wget https://gke-spinnaker.storage.googleapis.com/sample-app-v4.tgz
  25. Unpack the source code:

    tar xzfv sample-app-v4.tgz
  26. Change directories to the source code:

    cd sample-app
  27. Make the first commit to your source code repository:

    git init
    git add .
    git commit -m "First commit"
  28. Create a source repository to store your source code:

    gcloud source repos create sample-app
    git config credential.helper gcloud.sh
  29. Add your newly created repository as remote:

    export PROJECT=$(gcloud info \
        --format='value(config.project)')
    git remote add origin \
        https://source.developers.google.com/p/$PROJECT/r/sample-app
  30. Push your code to the new repository’s master branch:

    git push origin master
  31. In the next steps you will configure Container Builder to build and push Docker images every time you make changes to your source code.

  32. In the Cloud Platform Console, click Navigation menu > Cloud Build > Triggers.

  33. Click Create trigger.

  34. Set the following trigger settings:

    • Name: sample-app-tags

    • Event: Push new tag

  35. Select your newly created sample-app repository.

    • Tag: v1.*

    • Build configuration: Cloud Build configuration file (yaml or json)

    • Cloud Build configuration file location: /cloudbuild.yaml

  36. Click Create

  37. Spinnaker needs access to your Kubernetes manifests. Run the following commands to create a bucket to store them:

    export PROJECT=$(gcloud info --format='value(config.project)')
    gsutil mb -l us-central1 gs://$PROJECT-kubernetes-manifests
  38. Enable versioning on the bucket so that you have a history of your manifests:

    gsutil versioning set on gs://$PROJECT-kubernetes-manifests
  39. Set your project ID in your Kubernetes deployment manifests:

    sed -i s/PROJECT/$PROJECT/g k8s/deployments/*
  40. Commit the changes to the repository:

    git commit -a -m "Set project ID"
  41. Tag your first release:

    git tag v1.0.0
  42. Push the tag:

    git push --tags
  43. Go to the Cloud Console. In Cloud Build, click History in the left pane to check that the build has been triggered.

  44. You will now set up your container to deploy to the Kubernetes cluster automatically.

  45. Download the 1.14.0 version of spin CLI:

    curl -LO https://storage.googleapis.com/spinnaker-artifacts/spin/1.14.0/linux/amd64/spin
  46. Make spin executable:

    chmod +x spin
  47. Use the spin CLI to create an app called sample in Spinnaker.

    ./spin application save --application-name sample \
        --owner-email "$(gcloud config get-value core/account)" \
        --cloud-providers kubernetes \
        --gate-endpoint http://localhost:8080/gate
  48. From your sample-app source code directory, run the following command to upload an example pipeline to your Spinnaker instance:

    export PROJECT=$(gcloud info --format='value(config.project)')
    sed s/PROJECT/$PROJECT/g spinnaker/pipeline-deploy.json > pipeline.json
    ./spin pipeline save --gate-endpoint http://localhost:8080/gate -f pipeline.json
  49. By pushing a Git tag that starts with “v”, you trigger Container Builder to build a new Docker image and push it to Container Registry. Spinnaker detects that the new image tag begins with “v” and triggers a pipeline to deploy the image to all pods in the deployment.

  50. From your sample-app directory, change the source code of your application:

    sed -i 's/orange/blue/g' cmd/gke-info/common-service.go
  51. Tag your change and push it to the source code repository:

    git commit -a -m "Change color to blue"
    git tag v1.0.1
    git push --tags
  52. Congratulations, you have completed the automated deployment of your sample application to Kubernetes Engine.
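
To confirm from the command line that the tag push kicked off a build, you can list recent Cloud Build runs; the most recent entry should correspond to the v1.0.1 tag:

    gcloud builds list --limit=3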

5.8 Deploy a Spring Boot Java Application

Problem

You need to deploy a Java Spring Boot REST service to Kubernetes.

Solution

In this recipe you will use Google Cloud Source Repositories, Google Cloud Container Builder, and Jib to containerize and deploy the Spring Boot REST service to a Kubernetes cluster.

For this recipe you will need to git clone the following repository to your local workstation: https://github.com/ruiscosta/google-cloud-cookbook/06-kubernetes

  1. On your local workstation go to the working directory for this recipe from the cloned repository.

  2. cd google-cloud-cookbook/06-kubernetes/6-8-java

  3. Test the sample java application locally by running the following command:

    ./mvnw -DskipTests spring-boot:run
  4. In your browser, go to http://localhost:8080; you should see a screen similar to the following:

  5. Run the following command to enable Google Cloud Container Registry API to store the container image:

  6. gcloud services enable containerregistry.googleapis.com

  7. Use Jib to create the container image and push it to the Container Registry; replace $GOOGLE_CLOUD_PROJECT with your Google Cloud project ID:

    mvn compile \
        com.google.cloud.tools:jib-maven-plugin:2.0.0:build \
        -Dimage=gcr.io/$GOOGLE_CLOUD_PROJECT/hello-java:v1
  8. Build and push the image to the Container Registry (replace ruicosta-blog with your project ID):

    mvn compile jib:build \
        -Dimage=gcr.io/ruicosta-blog/hello-java:v1
  9. To build the image to your local Docker daemon so you can test it locally, run the following command:

    mvn compile jib:dockerBuild \
        -Dimage=gcr.io/ruicosta-blog/hello-java:v1
  10. To list the Docker images run the following command:

    docker images
  11. You should see an output similar to the following:
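
    The listing below is illustrative only; your repository name, image ID, creation time, and size will differ:

    REPOSITORY                          TAG   IMAGE ID       CREATED          SIZE
    gcr.io/ruicosta-blog/hello-java     v1    0a1b2c3d4e5f   2 minutes ago    170MB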

  12. Run the following command to run the Docker container locally on your machine; replace IMAGE_ID with an image ID from the docker images output in step 10:

    docker run -p 8080:8080 -t IMAGE_ID
  13. In your browser go to http://localhost:8080 and you should see a screen similar to the one in step 4. You have now tested the containerized Spring Boot application locally.

  14. Create a two-node Kubernetes cluster:

    gcloud container clusters create hello-java-cluster \
        --num-nodes 2 \
        --machine-type n1-standard-1 \
        --zone us-central1-c
  15. A Kubernetes deployment can create, manage, and scale multiple instances of your application using the container image that you created. Deploy one instance of your application to the Kubernetes cluster; replace $GOOGLE_CLOUD_PROJECT with your Google Cloud project ID:

    kubectl create deployment hello-java \
        --image=gcr.io/$GOOGLE_CLOUD_PROJECT/hello-java:v1
  16. In order to make the hello-java container accessible from outside the Kubernetes cluster, you have to expose the Pod as a Kubernetes service:

    kubectl create service loadbalancer hello-java --tcp=8080:8080
  17. To find the publicly accessible IP address of the service, run the following command:

    kubectl get services
  18. You should see output similar to the following:
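
    The addresses below are illustrative only; your cluster IP and external IP will differ, and the external IP may show <pending> for a minute or two while the load balancer is provisioned:

    NAME         TYPE           CLUSTER-IP    EXTERNAL-IP    PORT(S)          AGE
    hello-java   LoadBalancer   10.3.251.12   203.0.113.20   8080:32614/TCP   2m
    kubernetes   ClusterIP      10.3.240.1    <none>         443/TCP          15m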

  19. In your web browser, visit http://EXTERNAL-IP:8080; you should see a similar page:

  20. You have successfully deployed a Spring Boot Java application to a Kubernetes Cluster.

Discussion

In this recipe, you deployed a Java Spring Boot application to Google Cloud Kubernetes Engine. You leveraged Jib, which builds container images without requiring you to write a Dockerfile. Jib was developed by Google to simplify the process of building Java containers: it handles all of the steps required to build and push your Java container image.

5.9 Skaffold

Problem

You need a method to develop, build, push and deploy your Java application quickly to Kubernetes.

Solution

In this recipe you will use Skaffold and the Cloud Code plugin for IntelliJ to develop, build, push, and deploy your application to Kubernetes, all from the IntelliJ IDE.

  1. Install Cloud Code for IntelliJ by following this guide: https://cloud.google.com/code/docs/intellij/install

  2. Create a new IntelliJ project.

  3. Choose Cloud Code: Kubernetes > Java: Hello World

  4. Enter the location of your Container repository.

  5. Choose a project name and location for your project files.

  6. Navigate to the Kubernetes Explorer from the right side panel, or by going to Tools > Cloud Code > Kubernetes > View Cluster Explorer

  7. Select Add a new GKE Cluster and click Create a New GKE Cluster

  8. This will open the Google Cloud Console in a Web Browser to the cluster wizard page. In the Google Cloud Console create a new cluster.

  9. Once your cluster is created, your screen should update with the cluster name, similar to the image below:

  10. Click OK

  11. Click Run Develop on Kubernetes

  12. Once the process is complete, you should see the Workload created in the Google Cloud Console.

  13. Besides making it easy to deploy your application from IntelliJ to Google Cloud, Cloud Code also tunnels traffic from your local workstation to Kubernetes. In your web browser go to http://localhost and you should see a screen similar to the image below. The application is now running on a Google Cloud Kubernetes cluster in your Google Cloud project.

5.10 GKE Autopilot

Problem

You want to run your application on Kubernetes, but don’t want to have to manage nodes, nodepools, images, networking, and the other operational components of running a Kubernetes cluster. Effectively, you want to run your application(s) on Kubernetes while not having to worry about the management or operation of the cluster itself.

Solution

Run your application on GKE Autopilot. With GKE Autopilot, many operational aspects of Kubernetes are abstracted away, and you are left with a Kubernetes infrastructure that is largely configured to Google’s GKE best practices.

  1. Sign in to Google Cloud Console.

  2. In the main menu, navigate to Compute and click on Kubernetes Engine

  3. Click the Create button at the top of the screen

  4. Click Configure next to the Autopilot option.

  5. In the Cluster Basics section:

    • In Name, give your autopilot cluster any name of your choice

    • In Region, you can pick any region of your choice

  6. The remaining options can be left as default. Click the Create button at the bottom of the screen

  7. The cluster should take a minute or so to spin up and be ready for usage
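
You can also create an Autopilot cluster from the command line. The sketch below uses a placeholder name and region; substitute your own. Autopilot clusters are always regional, so you specify a region rather than a zone:

    gcloud container clusters create-auto my-autopilot-cluster \
        --region us-central1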

Discussion

At the time of writing, GKE Autopilot is a new GCP offering that lets users run Kubernetes clusters in a more managed form than GKE Standard mode. The managed nature of GKE Autopilot trades some of the user’s control over the cluster for a significant reduction in operational overhead. GKE Autopilot is an exciting mode of operation that enables users to get started much more quickly with deploying their production workloads on GCP.
