7

Deploying a Cloud-Native Architecture Using Cloud Run

In the previous chapter, we applied concepts we covered in this book’s first part to create a traditional three-tier architecture. In this chapter, we will deploy a cloud-native architecture using only managed Google Cloud services. The central component of this architecture is Cloud Run (https://cloud.google.com/run), Google Cloud’s managed service to deploy containers in a serverless fashion.

Again, we will use a layered approach while reusing some of the code from the previous chapter. Then, we will show how to provision a flexible load balancer that serves static content from Cloud Storage and can accommodate any Cloud Run service without any changes to the load balancer.

We will also show two contrasting methods to deploy a Cloud Run service – one using Terraform, and a second one using a gcloud command – so that you can decide which method works better in your environment.

In this chapter, we’re going to cover the following main topics:

  • Provisioning Redis and connecting it via a VPC connector
  • Using Terraform to configure a flexible load balancer for Cloud Run
  • Using Terraform to provision Cloud Run services
  • To Terraform or not to Terraform

Technical requirements

If you ran terraform destroy in the project from the previous chapter, you can reuse that project, or you can create a new one. The code in this book’s GitHub repository (https://github.com/PacktPublishing/Terraform-for-Google-Cloud-Essential-Guide/tree/main/chap07) works either way. You should destroy the project afterwards to save costs. If you use a service account for Terraform, ensure you have the appropriate IAM permissions, including the Secret Manager Admin and Cloud Run Admin roles.

This chapter assumes you have a basic knowledge of Docker containers and Cloud Run (https://cloud.google.com/run). If you are new to Cloud Run, we suggest that you complete one of the many examples in the Cloud Run Quickstarts (https://cloud.google.com/run/docs/quickstarts).

Overview

Managed Cloud Run is a highly efficient and scalable way to run stateless containers in Google Cloud. When combined with other managed services, such as the global load balancer and Google Cloud’s managed database services (whether Cloud SQL or Memorystore), Cloud Run lets you deploy a scalable and very cost-effective architecture in minutes.

Figure 7.1 is a graphical representation of the architecture that we will provision. Traffic enters through a global load balancer. We configure the load balancer to serve static content, including the home page from Cloud Storage, but Cloud Run serves the dynamic content. The global load balancer is a layer 7 load balancer that supports path-based routing. That means we can configure it so that any URL with the /api prefix is directed to Cloud Run. Furthermore, we can configure the load balancer to dynamically map the URL to the name of the Cloud Run service. The URL with the /api/<service> prefix automatically maps to the Cloud Run service with that name. For example, http://www.example.com/api/hello is served by the Cloud Run service called hello, whereas http://www.example.com/api/view is served by the Cloud Run view service. This makes it very flexible as we can add new Cloud Run services and have them served immediately by the load balancer without any configuration changes.

In this example, we will use Redis as a managed database to store any persistent data, though we could utilize other database services, such as Cloud SQL or Firestore:

Figure 7.1 – Cloud Run architecture

Again, we are using a layered approach to provision this architecture. In the first layer, we build the VPC and the database. We create the load balancer and the Cloud Run services in the second layer.

Provisioning Redis and connecting it via a VPC connector

Note

The code for this section can be found in the chap07/foundation directory in this book’s GitHub repository.

Similar to what we did in the previous chapter, we will start by building the foundation. This architecture requires several project services, a service account, and a VPC with one subnet. As we used the same setup in the previous chapter, we can simply reuse the code from the previous chapter by copying the following files:

  • project-services.tf
  • sa.tf
  • vpc.tf

We only need to change our variable assignments in terraform.tfvars.
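
As a sketch, the terraform.tfvars for this chapter might look as follows. The exact variable names depend on the code copied from the previous chapter; the project ID and the subnet attributes shown here are placeholders:

```hcl
# terraform.tfvars – placeholder values; adjust to your project
project_id         = "<YOUR_PROJECT_ID>"
region             = "us-central1"
vpc_connector_name = "vpccon"

# The subnet object shape is an assumption based on the vpc.tf
# from the previous chapter
subnets = [
  {
    name          = "subnet-01"
    ip_cidr_range = "10.0.0.0/24"
  },
]
```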

Note

Of course, it would be even more efficient to create modules and then use them.

Provisioning a basic Redis instance in Google Cloud using Terraform only requires a few lines of code. However, for a Cloud Run service to connect to Redis, we need to create a VPC connector (https://cloud.google.com/vpc/docs/configure-serverless-vpc-access).

Here is one of the cases where we want to use the Google Beta provider. At the time of writing this book, the subnet argument was only available in the Beta version. Hence, we are explicitly specifying the google-beta provider:

chap07/foundation/redis.tf

resource "google_vpc_access_connector" "this" {
  provider = google-beta
  name     = var.vpc_connector_name
  region   = var.region
  subnet {
    name = var.subnets[0].name
  }
}
resource "google_redis_instance" "this" {
  name               = "redis"
  memory_size_gb     = 1
  tier               = "BASIC"
  region             = var.region
  authorized_network = google_compute_network.this.self_link
}
resource "google_secret_manager_secret" "redis_ip" {
  depends_on = [google_project_service.this["secretmanager"]]
  secret_id  = "redis-ip"
  replication {
    automatic = true
  }
}
resource "google_secret_manager_secret_version" "redis_ip" {
  secret      = google_secret_manager_secret.redis_ip.id
  secret_data = google_redis_instance.this.host
}

As we did previously, we store the connection information – in this case, the IP address of the Redis instance – in the secret manager so that we can easily pass the value to Cloud Run.
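
Since the main layer later reads these values through terraform_remote_state, the foundation layer needs matching outputs. A minimal sketch follows; the output names mirror those referenced in the main layer, while the filename and the sa.tf resource name (google_service_account.this) are assumptions:

```hcl
# chap07/foundation/outputs.tf (assumed filename)
output "redis_ip_secret_id" {
  value = google_secret_manager_secret.redis_ip.secret_id
}

output "service_account_email" {
  # Assumes the service account in sa.tf is labeled "this"
  value = google_service_account.this.email
}
```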

Using Terraform to configure a flexible load balancer for Cloud Run

Note

The code for this section can be found in the chap07/main directory in this book’s GitHub repository.

In the previous chapter, we provisioned a global load balancer using Terraform. In this architecture, we will expand on it by making it more flexible – all static assets will be served from Cloud Storage, whereas all dynamic traffic will be served from Cloud Run.

Google Cloud Storage is a very cost-effective way to host static assets for web applications, such as images, CSS files, and static HTML pages. Two configurations are required to use a bucket as a website. First, we need to assign specialty pages, such as the main page and the not found (404) page (https://cloud.google.com/storage/docs/hosting-static-website#specialty-pages).

Second, as our website is open to the public, we need to make the bucket publicly accessible by using the google_storage_bucket_iam_binding resource. We will also use Terraform to upload the actual specialty pages and a sample image, as shown in the following code:

chap07/main/website.tf

resource "google_storage_bucket" "static" {
  name                        = "${var.project_id}-static"
  location                    = var.region
  uniform_bucket_level_access = true
  website {
    main_page_suffix = "index.html"
    not_found_page   = "404.html"
  }
}
resource "google_storage_bucket_iam_binding" "binding" {
  bucket  = google_storage_bucket.static.name
  role    = "roles/storage.objectViewer"
  members = ["allUsers", ]
}
resource "google_storage_bucket_object" "index" {
  name          = "index.html"
  source        = "../static/index.html"
  bucket        = google_storage_bucket.static.name
  cache_control = "no-store"
}
…

The setup of the global load balancer is similar to the one in the previous chapter, with a few additions. First, we must configure a serverless Network Endpoint Group (NEG) (https://cloud.google.com/load-balancing/docs/negs/serverless-neg-concepts) to use a serverless service such as Cloud Run. Then, we must set up a backend service that points to that NEG. A particularly useful feature of a serverless NEG is the concept of a URL mask (https://cloud.google.com/load-balancing/docs/negs/serverless-neg-concepts#url_masks).

A URL mask maps the URL path to the service without specifying the actual service name. For example, we can define a URL mask of /api/<service>. Thus, any call on our website with the /api prefix will map to the Cloud Run service of that name. For example, http://example.com/api/hello maps to the Cloud Run hello service, whereas http://example.com/api/login maps to the Cloud Run login service. This feature is handy, particularly for an application that utilizes microservices, as shown here:

chap07/main/load-balancer.tf

resource "google_compute_region_network_endpoint_group" "api" {
  name                  = "cloud-run"
  network_endpoint_type = "SERVERLESS"
  region                = var.region
  cloud_run {
    url_mask = "/api/<service>"
  }
}
resource "google_compute_backend_service" "api" {
  name                  = "cloud-run"
  load_balancing_scheme = "EXTERNAL"
  port_name             = "http"
  backend {
    group = google_compute_region_network_endpoint_group.api.self_link
  }
}

At this point, we have configured the load balancer, and we can test it by accessing static assets.
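
The routing between the static bucket and the Cloud Run backend is defined by a URL map, which the preceding listing omits. Here is a hedged sketch of what it might look like; the backend bucket and all resource names are assumptions:

```hcl
resource "google_compute_backend_bucket" "static" {
  name        = "static"
  bucket_name = google_storage_bucket.static.name
}

resource "google_compute_url_map" "this" {
  name = "web-url-map"
  # By default, static assets – including the home page – are
  # served from the Cloud Storage bucket
  default_service = google_compute_backend_bucket.static.self_link

  host_rule {
    hosts        = ["*"]
    path_matcher = "allpaths"
  }

  path_matcher {
    name            = "allpaths"
    default_service = google_compute_backend_bucket.static.self_link
    # Anything under /api is handed to the serverless NEG backend,
    # which resolves the service name via the URL mask
    path_rule {
      paths   = ["/api/*"]
      service = google_compute_backend_service.api.self_link
    }
  }
}
```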

Note

The load balancer takes a few minutes to become fully operational after being provisioned.

However, we have not deployed any Cloud Run services yet. Remember that we discussed the overlap between Infrastructure as Code (IaC) and configuration management in Chapter 1, Getting Started with Terraform on Google Cloud. Cloud Run is an interesting managed service: it deploys application code in the form of containers in seconds. So, is Cloud Run infrastructure, or is it application code? Should we use Terraform to deploy Cloud Run, or is it easier to use another tool to deploy Cloud Run services? The answer, of course, is: it depends. First, we will use Terraform to provision two Cloud Run services, and then we will use the CLI (gcloud) to deploy the same services. Thus, you can compare the two methods and decide which will work better in your environment.

Using Terraform to provision Cloud Run services

In our example, we will deploy two Cloud Run services. The first one is a simple Hello example. The second service accesses the Redis database. We will include the code for both services for your reference so that you can build your own container images. The sample code uses container images from a public container repository.

We want the Cloud Run services to be publicly accessible – that is, not require any authentication. For enhanced security, we will allow ingress to the Cloud Run services only via the load balancer and use the service account we provisioned in the foundation layer.

First, let’s have a look at how we deploy the hello service. We must provision the Cloud Run service using the google_cloud_run_service resource, specifying the container image and the service account name. To restrict ingress, we need to add an annotation, as follows:

chap07/main/cloudrun.tf

resource "google_cloud_run_service" "hello" {
  name     = "hello"
  location = var.region
  metadata {
    annotations = {
      "run.googleapis.com/ingress" = "internal-and-cloud-load-balancing"
    }
  }
  template {
    spec {
      containers {
        image = var.container_images.hello
      }
      service_account_name = data.terraform_remote_state.foundation.outputs.service_account_email
    }
  }
}
resource "google_cloud_run_service_iam_binding" "hello" {
  location = google_cloud_run_service.hello.location
  service  = google_cloud_run_service.hello.name
  role     = "roles/run.invoker"
  members = [
    "allUsers",
  ]
}

Now, by default, Cloud Run services require authentication. We need to set the appropriate IAM policy to allow public access. There are several ways to set or update the IAM policy for a Cloud Run service (https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/cloud_run_service_iam). We will choose the authoritative method of using the google_cloud_run_service_iam_binding resource. Authoritative means that it replaces the current IAM policy for that role if one exists. We must set the roles/run.invoker role, which grants permission to invoke a Cloud Run service, and use allUsers as a member. allUsers is a special identifier in Google Cloud that denotes any user – that is, the Cloud Run service is open to the public.
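
For comparison, the non-authoritative alternative uses the google_cloud_run_service_iam_member resource, which adds a single member without replacing the rest of the policy. A sketch (the resource label is an assumption):

```hcl
# Non-authoritative: grants allUsers the invoker role on the hello
# service while leaving any other members in the policy untouched
resource "google_cloud_run_service_iam_member" "hello_invoker" {
  location = google_cloud_run_service.hello.location
  service  = google_cloud_run_service.hello.name
  role     = "roles/run.invoker"
  member   = "allUsers"
}
```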

Now that we have provisioned the simple hello service, let’s focus on the Cloud Run service that accesses the Redis instance we created in the foundation. The Redis service requires two additions. We want to pass the Redis IP that we stored in the secret manager, and we need to specify that the Cloud Run service uses the VPC connector.

To specify the VPC connector, we need to add two annotations to our template. To retrieve the secret from the secret manager, we can use a special block called value_from, which is nested in the env block. The value_from block tells Cloud Run to retrieve the value of the stored secret dynamically when the service is provisioned and pass it as an environment variable to the container. Please note that Terraform only passes the secret’s name, not the secret’s actual value. The secret’s value – the actual Redis IP address – is retrieved dynamically when the container is created, as can be seen in the following code:

chap07/main/cloudrun.tf

resource "google_cloud_run_service" "redis" {
  name = "redis"
  location = var.region
  metadata {
    annotations = {
      "run.googleapis.com/ingress" = "internal-and-cloud-load-balancing"
    }
  }
  template {
    metadata {
      annotations = {
        "run.googleapis.com/vpc-access-connector" = var.vpc_connector_name
        "run.googleapis.com/vpc-access-egress"    = "private-ranges-only"
      }
    }
    spec {
      containers {
        image = var.container_images.redis
        env {
          name = "REDIS_IP"
          value_from {
            secret_key_ref {
              name = data.terraform_remote_state.foundation.outputs.redis_ip_secret_id
              key  = "latest"
            }
          }
        }
      }
      service_account_name = data.terraform_remote_state.foundation.outputs.service_account_email
    }
  }
}
resource "google_cloud_run_service_iam_binding" "redis" {
  location = google_cloud_run_service.redis.location
  service  = google_cloud_run_service.redis.name
  role     = "roles/run.invoker"
  members = [
    "allUsers",
  ]
}

Note

If you get an IAM policy error, please ensure that your Terraform service account has the Cloud Run Admin role.

As you can see, the preceding code is not trivial and requires a lot of data to be passed as metadata. So, the question is – should we use Terraform in this case, or are there other, better alternatives?

To Terraform or not to Terraform

As we mentioned earlier, Cloud Run is a fully managed Google Cloud service that blurs the line between infrastructure and application code. Terraform is designed to provision cloud infrastructure. For example, if a developer updates the code of the Cloud Run service container image and a new image is built, the infrastructure – that is, the Cloud Run service – has not changed. Thus, upon rerunning Terraform, it does not detect any difference and does not deploy a new version of the Cloud Run service. We could explicitly update the version of the container image every time we update the code, then specify the version as part of the container image, or we can use something else to deploy Cloud Run services.
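
One way to make Terraform notice code changes is to pin the image by an explicit tag (or digest) that changes with every build. The following sketch illustrates the idea; the image_tag variable and the assumption that your build pipeline sets it (for example, to the Git SHA) are hypothetical, and this is a variation of the hello resource rather than an addition to the existing file:

```hcl
variable "image_tag" {
  description = "Image tag set by the build pipeline, e.g. the Git SHA"
  type        = string
}

resource "google_cloud_run_service" "hello" {
  name     = "hello"
  location = var.region
  template {
    spec {
      containers {
        # A new tag produces a plan diff, so Terraform deploys
        # a new revision on the next apply
        image = "gcr.io/terraform-for-gcp/helloworld:${var.image_tag}"
      }
    }
  }
}
```

Pinning by digest (image@sha256:…) works the same way and is even more precise, at the cost of wiring the digest into your pipeline.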

For example, we can use the Google Cloud CLI. (You can read more about it here: https://cloud.google.com/sdk/gcloud/reference/run/deploy.) The following gcloud commands are equivalent to the Terraform code in the cloudrun.tf file:

$ export SERVICE_ACCOUNT=$(gcloud iam service-accounts list \
    --format="value(email)" --filter=name:cloudrun)
$ gcloud run deploy hello --region us-central1 \
    --image gcr.io/terraform-for-gcp/helloworld:latest \
    --platform managed --allow-unauthenticated \
    --service-account $SERVICE_ACCOUNT \
    --ingress internal-and-cloud-load-balancing
$ gcloud run deploy redis --region us-central1 \
    --image gcr.io/terraform-for-gcp/redis:latest \
    --platform managed --allow-unauthenticated \
    --service-account $SERVICE_ACCOUNT \
    --ingress internal-and-cloud-load-balancing \
    --vpc-connector vpccon \
    --update-secrets=REDIS_IP=redis-ip:latest

There is a one-to-one relationship between the gcloud command and the Terraform resources. You can decide which method works best for you: Terraform or gcloud.

We mentioned earlier that we configured the load balancer to be flexible. Thus, we can add a Cloud Run service and have it immediately accessible through the load balancer. For example, the following gcloud command deploys the sample Cloud Run container:

$ gcloud run deploy my-hello --region us-central1 \
    --image us-docker.pkg.dev/cloudrun/container/hello \
    --platform managed --allow-unauthenticated \
    --service-account $SERVICE_ACCOUNT \
    --ingress internal-and-cloud-load-balancing

Once it has been deployed, it is accessible through http://<ip-address>/api/my-hello.

Thus, in this architecture, you can easily use Cloud Run to deploy additional microservices, which the load balancer can service without any changes. Hence, developers can add microservices by simply developing code, deploying it into a container image, and then using a single gcloud command to add it to the application.

Summary

In this chapter, we deployed a modern, serverless architecture using Terraform. We reused existing code from the previous chapter so that we could build a foundation with new variable assignments and a few lines of additional Terraform code to provision the Redis database and the VPC connector.

Then, we set up a static website and a flexible global load balancer to map a URL to the corresponding Cloud Run service. Lastly, we demonstrated two contrasting methods to deploy Cloud Run services – Terraform and the Google Cloud CLI.

In the next chapter, we will continue our journey by deploying a GKE architecture using only public Terraform modules.
