Chapter 2. Understanding Serverless Services with Cloud Run

After the introductory first chapter, we are now ready to write and deploy our first serverless service. We will do this with Cloud Run, a container-based platform. We will write an example service in Go, package it in a container, and send it off to Cloud Run to deploy. If you have never written a serverless service before, this will be a very gentle introduction.

When we are done with that first step, we will spend some time developing a deeper understanding of how Cloud Run works. A primary design goal of Cloud Run is elastic scalability. I will explain what this means, and from this angle we will explore the platform. I think this is time well spent; when you know why and how something works, you can build better solutions as an engineer.

When you have a better understanding of the inner workings of Cloud Run, we will discuss other options for running serverless application logic on Google Cloud. Next to Cloud Run, you can also consider Cloud Functions and App Engine. In this book I am biased toward Cloud Run, and I want to give you insight into that decision. I will describe the other products and the key trade-offs, so that you can make the right choice in your specific situation.

First Service with Cloud Run

Serverless services provide a way to run user-supplied code in response to HTTP requests. This means that they are not 'always on' like a server. We will now build a serverless service with Cloud Run, a platform that lets you supply your code packaged in a Docker container. When the container starts, it is expected to start listening for HTTP requests. You can use any programming language and runtime, as long as it runs in a container.

In this section, we will do three things. First, we will write some Go code containing an HTTP server that responds with "Hello Reader". Second, we will put it into a Docker container. Finally, we will push the container to Cloud Run and watch the platform run it for us on an HTTPS endpoint. The platform transparently handles HTTPS for us; we don't need to worry about setting up and renewing certificates.

Step 1: Write Code

We will start with a straightforward main.go file that will render an index.html page. Table 2-1 shows what our directory will look like when we are done with this chapter.

Table 2-1. The files that make up our example service
serverless
├── Dockerfile      <- This specifies the container
├── main.go         <- Here is our server entrypoint
└── web
    └── index.html  <- The HTML template

In Chapter 1, we already set up our Google Cloud project and installed the tooling necessary to build our service. When you type go version, it should report go1.12 or higher.

Building an HTTP Server with Go

Below you can find the contents of the main.go file. Let's take a look at what is happening in this file. If you are reading this online, you can copy and paste it right into your editor.

package main

import (
    "html/template"
    "log"
    "net/http"
    "os"
)

func main() {
    // Setting up net/http to handle the main page with
    // function 'index'
    http.HandleFunc("/", index)
    // Grab the PORT from the environment
    port := os.Getenv("PORT")
    if port == "" {
        port = "8080"
    }
    log.Println("Listening on port " + port)
    // Start the HTTP server on PORT
    srv := &http.Server{
        Addr: ":" + port,
    }
    log.Fatal(srv.ListenAndServe())
}

// IndexPage contains the data for the page
type IndexPage struct {
    Title string
    Name  string
}

func index(w http.ResponseWriter, r *http.Request) {
    // Data for the template
    page := IndexPage{
        Title: "Index Page",
        Name:  "Reader",
    }
    // Create a template, pass the data, and let
    // it render to the response
    tpl := template.Must(
        template.ParseFiles("web/index.html"))
    if err := tpl.Execute(w, page); err != nil {
        log.Println(err)
    }
}

First, we declare our imports. We need net/http to create an HTTP server. One of the strong points of Go is its well-designed and stable standard library; for most tasks you don't need to reach for a dependency. In my opinion, this really sets Go apart from other programming languages, where you often spend a considerable amount of time finding dependencies that are worth depending on.

We grab the port number for the server to listen on from the environment variable PORT; this is how Cloud Run tells us which port to use. We default the port number to 8080, so when you start this service on your localhost, you can browse to http://localhost:8080 to check if it works.

Looking at the final part of main.go, we first define the data model of our index page. For now, it just has a title and a name. You can change "Reader" to your own name; it will make you feel more welcome. In the index function, we first create some dummy data and then pass it to the template file index.html.

Setting Up the HTML Template

We want to output an HTML file to the browser for display. To put the data into this HTML file, we use the built-in html/template library. This is a templating library; it offers control structures to add conditionals, iterate over collections, and include subtemplates.
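
As a quick taste of that syntax, here is a hypothetical fragment; the fields .LoggedIn and .Items are made up for this example and do not appear in our service:

   {{if .LoggedIn}}<p>Welcome back!</p>{{end}}
   <ul>
     {{range .Items}}<li>{{.}}</li>{{end}}
   </ul>
   {{template "footer"}}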

When you use the template library to render HTML that displays user-supplied input, take care to use html/template and not text/template. The latter offers exactly the same interface but does not escape unsafe input, setting you up for script injection.

   <!DOCTYPE html>
   <html lang="en">
     <head>
       <meta charset="UTF-8" />
       <title>{{.Title}}</title>
     </head>
     <body>
       <h1>{{.Title}}</h1>
       <p>Hello {{.Name}}</p>
     </body>
   </html>

This very bare-bones index.html demonstrates how to access data from the object we passed to it.
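
To make the warning about text/template concrete, here is a small, self-contained sketch you can run on your own machine; note that only html/template escapes the script tag:

package main

import (
    htmltpl "html/template"
    "os"
    texttpl "text/template"
)

func main() {
    input := "<script>alert('hi')</script>"
    // text/template writes the input verbatim; the script tag survives
    texttpl.Must(texttpl.New("t").Parse("Hello {{.}}\n")).Execute(os.Stdout, input)
    // html/template escapes the input to &lt;script&gt;..., defusing it
    htmltpl.Must(htmltpl.New("h").Parse("Hello {{.}}\n")).Execute(os.Stdout, input)
}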

Viewing the Page in a Browser

We can now run the main.go file and check if everything looks the way we expect. Go to the main directory in your terminal and execute go run main.go; you should see Listening on port 8080. You can now open http://localhost:8080. You should see something that looks like this:

Table 2-2. Hello Reader page
Index Page
Hello Reader

Step 2: Package It into a Container

In this section, we will learn how to package our little example page in a container. Today, containers are the standard way to package applications for deployment. We will put our page in a container, taking care to optimize the container for speed and security.

The What and Why of Containers

Docker was introduced in 2013, and it sparked the mainstream adoption of containers. Instead of taking your software and installing it on a specific server with an install script or manual, you can build your software, get it running in a container, and start that container anywhere.

Virtual machines are another way to create 'packages' that you can take to another host. Conceptually, the process is very similar: you take your software, install it on a virtual machine, and create an image, which you can then boot somewhere else. However, containers are smaller, faster, and use fewer resources than virtual machines. This is because a virtual machine is just that: virtual hardware. You get a disk, a CPU, and some memory from the hypervisor, and you need to bring and boot an entire operating system to do something useful with them. With containers, you share the operating system kernel with your host and only need to bring your own system libraries.

Once you have a container image with your software, you can boot it anywhere Docker runs. It will run the same on Windows Server as it will on a Linux OS, no changes required. It goes without saying that this is a very powerful abstraction. It has never been easier to run an exact copy of the production server on your local machine.

Containers are now so widely adopted that they are here to stay. This all but guarantees that software you package today will still be runnable tomorrow, or even years from now.

A Container Definition for Our Service

Below you can find the Dockerfile for our container. You can view a Dockerfile as the recipe for a container image: it contains the instructions to build the image. A Dockerfile for any Go application will probably look a lot like this one.

Dockerfile
# Stage for building
FROM golang:1.12 AS gobuilder

## Copy source code
WORKDIR /app
COPY . .

## Build the service
ENV CGO_ENABLED 0
ENV GOOS linux
ENV GOARCH amd64
RUN go build -o main ./main.go

# Stage for running
FROM gcr.io/distroless/base

## Copy binary and assets
WORKDIR /app
COPY --from=gobuilder /app/web /app/web
COPY --from=gobuilder /app/main /app/main

## Run the service
ENTRYPOINT ["./main"]

As you can see, the steps outlined in the Dockerfile address two concerns: building and running the service. You might be surprised by this approach, and I want to reassure you: there are multiple ways to design the process that takes you from source code to a final production container. Another approach you might be more familiar with is to build the application binaries on a separate build server, such as Jenkins. In Chapter 8 we will take a look at more complex build pipelines.

In the Dockerfile in this example, I use a multi-stage Docker build. The first stage runs on the golang image, the second stage on gcr.io/distroless/base, an image made especially for running applications. I like this approach because it is simple, predictable, and requires no extra infrastructure for me to set up.

What a Multi-Stage Dockerfile Brings Us

When you consider that one Dockerfile handles two different tasks, it is easy to see why it makes sense to use a different base image for each. For compiling an application, you need an entire compiler toolchain. For running an application, however, you want the container to be as small as possible.

Small is better for running applications. First, when a container is just a few megabytes in size, it will load and boot faster. Shorter boot times mean better scalability and fewer wasted resources. Second, smaller containers are more secure because of their smaller attack surface. Remember that a container contains your application, maybe a runtime for your application, and the system libraries that belong to the OS base image you use. In a 2019 study, researchers at Snyk found that the ten most popular images each contained 30 or more system libraries with known vulnerabilities. This included popular images such as ubuntu, node, and nginx.

There are several base images out there that are tailor-made to carry the least amount of system libraries. The alpine image is a popular choice. We could even use scratch, the built-in image that contains no (as in zero) system libraries. In this book we use the distroless images from the Google Container Tools project: gcr.io/distroless. These images contain only the system libraries and supporting files that are strictly necessary to run an application.

When you run your binary on a different base image than the one you used to build it, it is generally smart to statically link your application against the system libraries you use. This means these libraries are put inside the resulting binary. The default behavior is dynamic linking, which means the OS provides the libraries at runtime. This can give surprising or wrong results when the two base images carry different versions of a system library. Setting the flag CGO_ENABLED to 0 causes our app to be statically compiled.
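
If you want to check this on your own machine, you can build the binary with the same settings and inspect the result; on Linux, the file utility should report it as statically linked:

CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o main ./main.go
file main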

Cloud Run requires the binaries in the container to be compiled for 64-bit Linux, which is why we also set the environment variables GOOS and GOARCH in the Dockerfile.

Finally, we can verify that everything works as expected. Make sure you are in the same directory as the Dockerfile and run the commands demonstrated below. Then open your browser and go to http://localhost:8080. When you want to stop the server again, just hit Ctrl-C in your terminal.

Building the docker image and running it:

user@host# docker build . -t serverless
Sending build context to Docker daemon 7.916MB
Step 1/11 : FROM golang:1.12 AS gobuilder
...skipping for brevity...
Successfully built 2c0b0efd58e6
Successfully tagged serverless:latest
user@host# docker run -p 8080:8080 serverless
Listening on port 8080

Step 3: Build with Cloud Build, Run with Cloud Run

Packaging our app was most of the work; we're almost there! In just two commands, you will be running the "Hello Reader" service in a production environment using Cloud Run. First, you will submit the source to Cloud Build, which builds an image and publishes it to your own private Container Registry, inside your Google Cloud project. Next, you will submit this container image to Cloud Run, which will create a public HTTPS endpoint serving the container. In Chapter 1, we already set up the gcloud CLI, so we can get started right away.

First, set an environment variable PROJECT that contains the ID of your Google Cloud project, like this:

export PROJECT=$(gcloud config get-value project)
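
To verify that the variable contains the project you expect, print it before you continue:

echo $PROJECT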

If you often work with multiple accounts or projects on your machine, make sure to double-check the value of $PROJECT; if you are like me, you will inevitably select the wrong project sometimes. Then, from our working directory with the Dockerfile, submit a build to Cloud Build. The next command will upload the entire working directory and run docker build for us on Cloud Build. If you run this for the first time, you might be asked to enable a few APIs.

gcloud builds submit --tag gcr.io/$PROJECT/app

With the --tag argument, we name the image and publish it to Container Registry, the private Docker container registry included with your Google Cloud project. If all went well, you should see "STATUS" reported as "SUCCESS". Now that you have published the image privately, you are ready to deploy the service with Cloud Run:

gcloud beta run deploy --image gcr.io/$PROJECT/app --platform managed --allow-unauthenticated

This command will first ask you which region to deploy to. Pick the one closest to your geographic location. If it asks you to provide a service name, you can accept the default. Finally, you will get the URL where the service is running, in a line that looks like this:

Service [app] revision [app-qtr75] has been deployed and is serving 100 percent of traffic at https://app-rwlmxisqmq-ew.a.run.app

See if your app works when you request the URL that was returned. You should see “Hello Reader”.
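
You can also check from your terminal; substitute the URL of your own service:

curl https://<your-service-url>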

By passing --allow-unauthenticated as an argument, we made sure that the endpoint allows public access. The alternative is to protect the endpoint and require authentication. We will see an example of that in Chapter 4, but first, let's learn more about Cloud Run.

Understanding Cloud Run

I believe that building a good understanding of a platform helps you build better solutions with it. Let's look at the fully managed Cloud Run platform: it offers great reliability, availability, and elastic scalability, and every architectural decision was made to support these key system qualities. Keep this in mind while reading the rest of this section, because we will keep coming back to it.

Elastic Scalability

Scalability describes the ability of a system to handle a rapid increase in demand. Elastic scalability means that the system can also quickly scale back down when demand decreases. When a system has very good elastic scalability, its capacity grows and shrinks to closely track demand.

When your system has a very stable and constant demand, you might think that there is nothing to gain from elastic scalability. However, think about what happens when you deploy a new version. This version will then have to scale up to meet demand, and the old version has to scale down. With good elastic scalability, this happens quickly. This allows you to push new versions to production fast, even when your system is under load.

Next, we will look at the key design decisions in Cloud Run. We will explore the lack of session affinity, ephemeral container instances, the down-throttling of idle instances, and ephemeral disk storage, and finally we will look at long-lived TCP connections. Don't worry if some of this doesn't make sense to you right now; we will get to it soon.

No Session Affinity

A serverless service in Cloud Run runs on actual container instances. First, let's look at how requests get to an instance. Incoming HTTP requests are assigned by a load balancer to a container instance in a pool of instances. The load balancer makes sure to only send requests to instances that are ready to handle them: instances that, for example, have not crashed and have enough capacity remaining. Figure 2-1 shows, schematically, what this looks like.

Figure 2-1. HTTP requests getting to a Cloud Run service

A key fact to realize is that the Cloud Run load balancers do not offer session affinity. You will not be able to route requests to a specific instance based on, for instance, the client IP or a session cookie.

Session affinity hurts scalability because the distribution of requests over instances can easily become imbalanced. When you remove an instance, you also terminate the sessions running on it; if you want to avoid terminating sessions, that limits the speed of scaling down. The other direction is limited too: a fresh instance added to the pool will only receive sessions that just started, reducing the speed of scaling up. And when sessions are pinned to specific instances, some sessions might experience performance problems because the instance they are assigned to is out of capacity.

Session affinity is a key feature of most traditional load balancers, and a lot of serverful applications depend on it to work correctly. If you are looking to migrate an existing application that relies on session affinity to Cloud Run, you will need to restructure your application. We will look into how you can do user session management without session affinity on Cloud Run in Chapter 10.

Ephemeral Container Instances

The container instances in the pool are managed by Cloud Run: it makes container instances appear and disappear, depending on the demand for your service. This is why we call these instances ephemeral, or transient. There is no way at all to influence when an instance is added or removed.

The consequence of this is that we need to find other ways to run background tasks. With serverful applications, it is common practice to run jobs that take longer to complete in the background, instead of while the browser waits for the response. Examples are image resizing or database updates.

We will learn how to do that in Chapter 4.

Down-Throttling Idle Instances

When a new instance is needed to serve current demand, a few steps need to happen: the container image needs to be retrieved, the container needs to start, and it takes a while before your app is ready to receive requests. This can take some time, limiting the speed of scaling up. As a workaround, Cloud Run keeps instances in 'standby' mode when they are not actively handling an HTTP request. The wake-up time of a standby instance is an order of magnitude shorter than the time it takes to boot and become ready for requests. While in standby mode, the instance has no, or only severely throttled, access to the CPU. This reduces the resource costs of the entire pool of instances, which allows Cloud Run to maintain a bigger pool to handle demand variations.

This adds another important limitation: you can only count on having CPU available while you are handling an HTTP request. If ephemeral instances were not enough, this pretty much rules out doing meaningful background processing on the instance. In Chapter 4 we will look at the 'serverless way' to handle background tasks.
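
To make this concrete, here is a sketch of a handler that works on a traditional server but is unreliable on Cloud Run; resizeImage is a hypothetical helper, not part of our example service:

package main

import (
    "log"
    "net/http"
)

// resizeImage stands in for a slow background job.
func resizeImage(name string) {
    log.Println("resizing", name)
    // ... heavy processing ...
}

func upload(w http.ResponseWriter, r *http.Request) {
    // Anti-pattern on Cloud Run: this goroutine keeps running after
    // the response is sent, but by then the CPU of the instance is
    // throttled, and the instance itself can disappear at any time.
    go resizeImage(r.FormValue("image"))
    w.WriteHeader(http.StatusAccepted)
}

func main() {
    http.HandleFunc("/upload", upload)
    log.Fatal(http.ListenAndServe(":8080", nil))
}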

Ephemeral Disk Storage

Serverless services in Cloud Run do have a small disk attached; you can write to it to save temporary results. However, this filesystem lives in memory and has a limited size, and its contents disappear when the container instance goes away.

This is why some people refer to serverless as being 'stateless'. I think that is a confusing term. If you have a background in functional programming, you will interpret stateless as "no side effects", which is certainly not the case here: from a container instance in Cloud Run, you can connect to databases or other external systems to create side effects.

When you do want to save data that needs to be shared with other instances, it is common to write it to Cloud Storage. Here's a concrete example: say you have a service that processes images users upload. First, you write the upload to the temporary storage on the instance itself. Next, you save some metadata to Firestore, do some more processing, and when you are done, you write the end result to Cloud Storage for persistent storage. This is a very common way to work with persistent data that needs to be shared across multiple instances.
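
Here is a minimal sketch of that last step, assuming the cloud.google.com/go/storage client library; the paths, bucket, and object names are placeholders you would replace with your own:

package main

import (
    "context"
    "io"
    "log"
    "os"

    "cloud.google.com/go/storage"
)

// saveResult copies a temporary local file to a Cloud Storage object.
func saveResult(ctx context.Context, localPath, bucket, object string) error {
    f, err := os.Open(localPath)
    if err != nil {
        return err
    }
    defer f.Close()

    client, err := storage.NewClient(ctx)
    if err != nil {
        return err
    }
    defer client.Close()

    // NewWriter streams the data to the object; Close finalizes the upload.
    w := client.Bucket(bucket).Object(object).NewWriter(ctx)
    if _, err := io.Copy(w, f); err != nil {
        return err
    }
    return w.Close()
}

func main() {
    ctx := context.Background()
    if err := saveResult(ctx, "/tmp/result.png", "my-bucket", "results/result.png"); err != nil {
        log.Fatal(err)
    }
}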

Long-Lived TCP Connections

Cloud Run only guarantees the availability of an instance, and its access to system resources, while handling an HTTP request-response cycle. It should not come as a surprise that long-lived TCP connections to the same instance are also not supported.

Figure 2-2. TCP connection from the browser connects to the Google Frontend

When your browser requests a Cloud Run endpoint, it builds a TCP connection to the Google Frontend servers, which in turn build their own connections to the Cloud Run instances and forward the HTTP requests. Figure 2-2 illustrates this.

Inbound WebSockets and gRPC streaming require longer-lived TCP connections to a single instance and are not possible in this model. Depending on your use case, there are ways to work around this. However, be prepared: if you really need long-lived TCP connections, you probably need an always-on server at the edge of your infrastructure to manage those connections yourself.

Cloud Run Key Points

Looking at Cloud Run, these are the key points to remember:

Serverless services provide a way to run user-supplied code on ephemeral container instances, in response to HTTP requests.

Session affinity is not supported, which means that you cannot route requests to specific container instances.

You can only perform meaningful work while handling an HTTP request: background tasks that run on the instance after the HTTP response is returned will fail due to down-throttling and the ephemeral nature of instances.

While you can save a limited amount of data on disk and in memory, you should only use it for transient data that can easily be rebuilt.

Reading this might have put a damper on your initial enthusiasm about serverless. Some things that are very easy to do in a serverful environment are different with serverless services, like handling large file uploads or running long background tasks. That can feel restrictive, especially if you have a substantial background in serverful programming. Give yourself some time to adjust; in the end, it is worth it.

I am very excited about the fully managed Cloud Run platform. Its characteristics provide an environment that gently pushes you toward building applications that will not go down under load or become slow and unattractive to your users. The limitations help you build applications that are highly scalable, available, and reliable, just like the platform itself.

Choosing a Serverless Compute Product on Google Cloud

This book teaches you how to build services with Cloud Run, because it is container-based and offers unparalleled portability, on top of a cost model that is friendly to concurrent workloads. However, there are other options: you can also pick Cloud Functions or App Engine. I want to highlight the trade-offs involved in deciding which option is good for your use case, to make sure you are not using the proverbial hammer to drive a screw.

Source-based Functions with Cloud Functions

Cloud Functions is very similar to Cloud Run, but it offers source-based deploys instead. You write your code, declare your dependencies, and use a gcloud command to deploy. All the packaging is handled by the product itself.

At runtime, your code plugs into the language-specific Cloud Functions runtime framework. Essentially, you export a method or function that this framework calls.
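
As a sketch in Go (the exact details differ per language and runtime version), the whole deployable unit can be as small as this, deployed with something like gcloud functions deploy HelloReader --runtime go111 --trigger-http:

// Package hello contains an HTTP-triggered Cloud Function.
package hello

import (
    "fmt"
    "net/http"
)

// HelloReader is the exported function that the Cloud Functions
// runtime framework calls for each request; note that there is no
// main function and no HTTP server of our own.
func HelloReader(w http.ResponseWriter, r *http.Request) {
    fmt.Fprintln(w, "Hello Reader")
}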

Cloud Functions is very well suited to handling internal cloud events. The product excels at being the glue that connects and extends existing APIs. It is less suited to hosting a more complete workload, such as a web application or an API: for example, Cloud Functions does not support traffic splitting or the easy revision management you need to roll back versions. If you want to explore Cloud Functions, we will look at an example in Chapter 7.

Container-based Services with Cloud Run

We have already built our first service with Cloud Run, so you should have a pretty good idea of how it works. There is one thing I want to highlight, and that is portability. Cloud Run is container-based, which makes it easy to take your services and run them somewhere else. On top of that, the platform itself is API-compatible with Knative, an open source add-on you can install on top of Kubernetes. This guarantees that you can take a Cloud Run workload and run it on a Kubernetes cluster you maintain yourself.

On Google Cloud, there are two compatible but entirely different implementations of Cloud Run. First, there is the managed platform we used at the start of this chapter; it runs directly on top of the same platform that powers Cloud Functions and App Engine Standard. When I talk about Cloud Run in this book, I refer to the managed platform unless I clearly state otherwise. The other implementation is Cloud Run for Anthos, which runs on top of Google Kubernetes Engine with Istio and Knative installed, or even in your own datacenter.

App Engine: Platform as a Service

This overview would not be complete without App Engine, the application platform that predates Google Cloud itself. App Engine can be considered serverless to a large extent: if you use one of the standard runtimes, you are actually using the same underlying platform that powers Cloud Functions and Cloud Run. If you want to prototype an application quickly using one of the mature web frameworks out there, App Engine will get you to production fast. Some notable implementations include Snapchat and Pokémon Go.

One thing that makes App Engine less attractive is that it carries the weight of a decade of product development. Some features are in there because they have been there for a long time, not necessarily because they are still relevant today. This is particularly apparent when you dive into the different runtimes: there are three distinctly different classes to choose from. There is a Standard and a Flexible version, and Standard offers a first- and a second-generation runtime. The Standard version runs on the same platform as Cloud Run and Cloud Functions, whereas Flexible runs on top of Compute Engine.

With App Engine Flexible, the underlying server instances are visible to you, and you can even use SSH to access them. App Engine Flexible is a popular option if you need features that are only available on Compute Engine but still want App Engine to manage your servers.

Key Differences

Now that we have learned about Cloud Run, Cloud Functions, and App Engine, how do you actually decide between them? Let's look at the key differences.

The use case for Cloud Functions is very well defined: Cloud Functions is the glue you use to connect and extend existing services. A concrete example is processing images that are uploaded to a Cloud Storage bucket: you use a function to create a thumbnail and call the Vision API to figure out what the picture is about. A function typically handles only one endpoint (URI) and does not require you to write and run an HTTP server yourself.

If you want to build something that looks more like a service, like an API or a web application, you can use either Cloud Run or App Engine. This choice can be harder to make. In my opinion, the crucial differences are these: source-based versus container-based, request-oriented versus instance-oriented, and finally the scaling characteristics. Let's look at them one by one.

Source- versus container-based

App Engine is source-based: your code runs inside a standard, language-specific runtime. You upload your source code and list your dependencies (binary dependencies excluded). Cloud Run takes a container-based approach: you supply a container image that is supposed to start listening on a port and handle HTTP requests.

I favor a container-based platform over a source-based one. A container provides a very clear division of responsibilities between me, the developer, and the platform it runs on: my responsibility is to start a binary that opens a port to receive HTTP requests on; the platform runs my container. The container runtime contract is very stable and unlikely to change. Using a container-based platform does require me to write a Dockerfile and start my own HTTP server, but that is a small price to pay for long-term stability and portability.

A very real benefit of the source-based approach is that the base layers of the underlying instances are automatically patched. The runtime receives minor version updates on deployment, say from Go 1.12.1 to Go 1.12.2, without your intervention, and the OS gets patches that are automatically installed. This offers peace of mind and reduces your maintenance responsibilities.

Request- versus instance-oriented

Cloud Run is request-oriented: the lifetime of the underlying instances is only guaranteed while they handle a request-response cycle. App Engine is instance-oriented and closer to the traditional serverful model. To give a few examples: App Engine Flexible supports long-lived TCP connections to an instance, which means WebSockets are possible; with App Engine Standard, you can still use background threads in a meaningful way; and the App Engine load balancers support session affinity.

An instance-oriented product might be better for you if you are looking for an easy entry into serverless and do not want to re-architect your application too much.

Table 2-3. Drawing a path from serverful to serverless
Serverful                                Serverless
Virtual Servers      App Engine          Cloud Run and Cloud Functions
Instance-oriented    Instance-oriented   Request-oriented

Scaling

If the ability to react quickly to increases in demand is important to you, or if you don't want to wait long for new versions to deploy during peak traffic, scaling is a factor you should take into account. Within the Google Cloud product portfolio, there are two outliers. App Engine Flexible runs on top of Compute Engine virtual machines, which are much slower to provision; that means gradual scaling is all you can get. Cloud Run has the potential to scale the fastest, with the caveat that your container image needs to start quickly and be ready to serve requests. If your image takes 10 minutes to get ready, that will hurt scaling performance.

What Will the Future Look Like?

I am fairly certain that Cloud Run is the future of serverless compute on Google Cloud. Google is traditionally very secretive about its product roadmaps and product lifecycles; as an outsider, all I can do is speculate. Nevertheless, I want to share my theories with you.

After its introduction, Cloud Run was rapidly rolled out to multiple regions, which shows that Google put significant resources behind the development of the platform. Apparently, Google is eager to have people adopt the product.

The pricing model of Cloud Run seems to be designed to be cheaper than both App Engine and Cloud Functions for most workloads, which nudges customers to adopt Cloud Run.

With the introduction of the second-generation runtimes on App Engine, the proprietary App Engine SDK was removed and replaced with open source client libraries. Cloud Tasks and Cloud Scheduler, once part of App Engine, are now separate services. This increases the portability of code that runs on App Engine and provides an upgrade path to Cloud Run. The story for Cloud Functions is similar: starting with the Node.js 10 runtime, the supporting functions framework has been open sourced, also providing an easy upgrade path to Cloud Run.

I don’t think existing customers of App Engine have to worry that their favorite platform will suddenly be discontinued. The proposition caters to unique use-cases and the installed base is simply too big. I recently built a new service on top of App Engine Flexible. I needed session affinity, websockets and tight integration with Compute Engine, which made Cloud Run an incompatible choice.

Summary

In this chapter, you started by building a "Hello World" service on the fully managed Google Cloud Run platform. Cloud Run is built on open standards and has a container-based runtime environment. It offers unparalleled scalability and portability.

We learned about the three different products on Google Cloud that can run serverless application logic: Cloud Run, Cloud Functions, and App Engine. We learned that Cloud Functions is very suitable as the glue that connects and extends existing APIs. The choice between Cloud Run and App Engine is less obvious.

When choosing between Cloud Run and App Engine, you need to decide whether a source-based or container-based platform is better for you. You also need to consider whether you want to fully embrace request-oriented serverless or stay with the more instance-oriented App Engine. Every use case is unique, and sometimes App Engine is the better fit.

I think Cloud Run is the future of serverless computing on Google Cloud. The characteristics of the platform help you, as a developer, to build systems that are highly scalable, resilient and reliably perform well, regardless of load. To make the most of Cloud Run, you need to change how you architect your system. In the next chapter, we will learn how to work with background tasks.
