Chapter 1. Our Use Case and Framework

When I first told my son I was writing a book, he asked me, "Dad, what is the book about?" As I went into the details of running a cloud-native application on Google Cloud, he paused me with a question: "Do you have an outline?" I said sure, and the first chapter is the use case. He looked at me and asked, "What is a use case?" I struggled to explain it; I could not find the words. The term came so naturally to me that I had never put thought into its meaning. From a quick search on Google, a use case is a specific situation in which a product or service could potentially be used. As we take the journey through this book, I thought it would be helpful to develop a fictional use case to apply what we learn to.

This book, as explained in the preface, is a journey; each chapter builds on the last. You can skip to the chapter of most interest, but to fully grasp cloud-native applications running on Google Cloud, it's best to understand the use case and follow along through each chapter. I have designed the framework for you, and now we will learn the Google Cloud services of the framework as we apply them to our use case. I implore you to revisit this chapter often and re-read the use case, as it will help you grasp why certain services were chosen.

Fictional Use Case

Pigeon Travel is an online travel agency. Pigeon Travel's website and mobile application are used to book airline tickets, hotel reservations, and car rentals. The company also runs a large call center to support its customers, where agents assist customers via live chat as well as phone.

To measure customer satisfaction and calculate their Net Promoter Score, they present the user with a survey at the end of the call or chat. A Net Promoter Score is a measurement used to answer the question: Will the customer recommend our product or service to a friend? Customers who respond 9 or 10 are called promoters, those who respond 7 or 8 are called passives, and those who respond 0 to 6 are called detractors. Only a small percentage of users leave feedback, and that feedback points to overall low customer satisfaction and a low Net Promoter Score. Issues customers have raised are:

  • Call Availability: Long hold times before speaking or chatting with a customer representative

  • High Handling Rates

Pigeon Travel also noticed a high abandonment rate, which correlates with the call availability issue their customers raised.

The business has asked you, as the lead developer, to put together a plan and a prototype showing how Pigeon Travel can increase its Net Promoter Score, increase customer satisfaction, and provide the following metrics to the business:

  • Customer Sentiment

  • Call Topics (Why are users calling?)

  • Understanding Trending Topics

  • Understanding call quality including Silence Scores, Call Duration and Call Escalation Paths.

Business Objectives

We could meet some of the business objectives by taking a potentially biased and inefficient manual approach, which would require analysts to randomly select calls to collect the key performance indicators. However, the framework we will cover in a few pages allows us to use Google Cloud services to meet the goals and analyze all recorded calls for insights in near real time. These insights can include:

  • Overall call sentiment.

  • Sentence-by-sentence sentiment.

  • Insight into which agent quality metrics to track (such as call silence, call duration, agent speaking time, user speaking time, and sentence heatmaps).

  • Insights on how to reduce call center volume by analyzing keywords in transcripts.

Now that we understand the business objectives, we need to define the framework. How can we define a framework without knowing the services available from Google Cloud? The following chapters of the book will do just that. But I have gone ahead and defined a framework for you that will guide you through the book; Figure 1-1 shows the architecture we will work through.

Framework Diagram
Figure 1-1. Framework Diagram

The Framework

The framework presented in this book uses Google Cloud AI services such as Speech-to-Text and the Cloud Natural Language API. The framework also employs Pub/Sub, a messaging system, and Dataflow, a data processing service, for data streaming and transformation. The result is a set of cloud-native, serverless jobs running in Google Cloud. To implement the framework, you don't need any machine learning experience, and all data infrastructure needs, such as storage, scaling, and security, are managed by Google Cloud. While I have chosen Google Cloud as the platform, you have other choices, such as Amazon Web Services and Azure. Both have offerings similar to Google Cloud's, so you could translate the framework components to their services and arrive at a working framework. However, I wanted a wide partner ecosystem of business intelligence tools, as well as Google-scale querying over the large amount of data created from transcribing the audio files, so Google Cloud is the best fit for our requirements.

Let me clearly define the term cloud-native application, as it is used loosely throughout the book. Cloud native is an approach to building applications that take advantage of cloud computing resources. It is often associated with public clouds, but it can also be applied to on-prem clouds. Cloud-native applications are defined by how they are deployed rather than where they are deployed. A cloud-native application leverages the perceived limitless resources available to it. On Google Cloud, a cloud-native application can scale up and down as needed; the application is built to use and release resources based on the rules you define.

A cloud-native application leverages the framework of the cloud computing platform of choice. The methodology of designing and building the application needs to take into account the services provided by the chosen cloud provider. For our application, we have chosen Google Cloud. We need to take into account the following considerations for our framework:

  1. Our application needs to be abstracted from the cloud infrastructure.

  2. We need a method of Continuous Integration and Continuous Delivery.

  3. Our application has to be able to scale up and scale down as needed.

  4. How will our application manage failures?

  5. The application needs to be globally accessible as our fictional business Pigeon Travel operates multiple call centers globally.

  6. Our framework needs to be based on a microservice architecture.

  7. How will our application be secured?

As you think about these considerations, review Figure 1-1, and try to map each consideration to a section of the diagram. Let’s begin with the first step shown in Figure 1-1: storing audio recordings.

Storing Audio Recordings

The first step is getting the files to Google Cloud. For the upload we will use a tool called gsutil, a Python application that lets you access Cloud Storage from the command line. The basic idea is that when an audio file finishes recording, a job running on-prem uploads the file to Cloud Storage. When the audio file is uploaded to the bucket, we attach custom metadata that identifies the recording, with categories like caller ID, customer ID, and other metrics collected from the contact center. To add custom metadata to a Cloud Storage object, you pass a header of the form x-goog-meta-<key>:<value> with the -h flag. The following command shows an example:

gsutil -h "x-goog-meta-agentid:55551234" cp [FILENAME].flac gs://[BUCKET]

The figure below illustrates a sample of the metadata written with the gsutil command.

Custom metadata written with gsutil

You can retrieve the metadata programmatically, which allows you to associate a file with multiple key:value pairs. As an example, here is how we can extract the custom metadata value for the agentid key:

let agentId = object.metadata.agentid === undefined ?
    'undefined' : object.metadata.agentid;

Hopefully, you're seeing how practical this can be. Programmatically, we can list the files in a bucket and extract the custom metadata to know which agent or customer an audio file is associated with. For our framework, we will include the caller ID, customer ID, and other metrics so that we can perform joins in BigQuery. We will cover BigQuery in detail in a later chapter; for now, know that BigQuery is a fully managed, serverless data warehouse provided by Google Cloud. Our data is analytical in nature, and we need to query large amounts of it quickly to provide insights to our users, a requirement BigQuery fills. A join will allow our framework to combine two datasets using the customer ID as the key field, so we can show the metrics for a recorded call alongside the customer information.
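As a preview, here is a minimal sketch of such a join using the BigQuery Python client library. The project, dataset, and column names (call_analytics, calls, customers, customerid) are illustrative placeholders, not the schema we will build later:

from google.cloud import bigquery  # pip install google-cloud-bigquery

client = bigquery.Client()

# Join the call metrics with the customer records on the customer ID key field.
query = """
    SELECT c.fileid, c.sentimentscore, cust.name
    FROM `my-project.call_analytics.calls` AS c
    JOIN `my-project.call_analytics.customers` AS cust
      ON c.customerid = cust.customerid
"""

for row in client.query(query).result():
    print(row.fileid, row.sentimentscore, row.name)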

Processing Audio Recordings

In the framework design, you configure a Cloud Function to be triggered when you upload an audio file to the bucket. Cloud Functions is a service for creating single-purpose functions that respond to events without requiring you to manage the underlying infrastructure. Google Cloud Functions supports the following event methods:

  • HTTP

    You can invoke Cloud Functions with an HTTP request using the POST, PUT, GET, DELETE, and OPTIONS HTTP methods.

  • Cloud Storage

    Cloud Functions can respond to change notifications from Google Cloud Storage. These events can include object creation, deletion, archiving and metadata updates.

  • Cloud Pub/Sub

    Cloud Functions can be triggered by messages published to Pub/Sub topics.

  • Cloud Firestore

    Cloud Functions can handle events in Cloud Firestore such as create, update, delete, and write.

Figure 1-2 provides an example of the usefulness of a Cloud Firestore event method. In the image, we have a write event to a Firestore document that triggers a user-defined cloud function. This user-defined function leverages the Firebase SDK to push a message to a device using Firebase Cloud Messaging. This notification can be used to notify a user that they have a new follower.

Firestore Write Trigger Event
Figure 1-2. Firestore Write Trigger Event

In our framework we will leverage the Cloud Storage event method. Within this event method we have multiple trigger type values:

  • google.storage.object.finalize

  • google.storage.object.delete

  • google.storage.object.archive

  • google.storage.object.metadataUpdate

We will leverage the google.storage.object.finalize trigger: once the audio file is created within a specified Cloud Storage bucket, the function is triggered. The function then sends the audio file to the Google Cloud Speech-to-Text API to be processed. The Speech-to-Text API response will be a job name that we capture and send to Cloud Pub/Sub. The message will wait in Pub/Sub until "something" pulls it to be processed.

Framework Workflow
Figure 1-3. Framework Workflow

Our function does a little more than send the audio file to Pub/Sub, and you will get to see the code in an example. If you are curious, here is the repo: https://github.com/GoogleCloudPlatform/dataflow-contact-center-speech-analysis
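To make the flow concrete, here is a minimal sketch of such a function in Python. It is not the repo's exact code; it assumes the google-cloud-speech and google-cloud-pubsub client libraries and an illustrative topic name (stt-jobs):

import json

from google.cloud import pubsub_v1, speech

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-project", "stt-jobs")  # illustrative names

def process_audio(event, context):
    """Triggered by google.storage.object.finalize on the audio bucket."""
    uri = f"gs://{event['bucket']}/{event['name']}"

    client = speech.SpeechClient()
    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.FLAC,
        language_code="en-US",
    )
    audio = speech.RecognitionAudio(uri=uri)

    # Asynchronous (long-running) recognition returns an operation, not a transcript.
    operation = client.long_running_recognize(config=config, audio=audio)

    # Publish the job name plus the custom metadata so the pipeline can pick it up later.
    message = {
        "sttnameid": operation.operation.name,
        "agentid": (event.get("metadata") or {}).get("agentid", "undefined"),
    }
    publisher.publish(topic_path, data=json.dumps(message).encode("utf-8"))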

Before we move on, it would be helpful to understand what Google Cloud Pub/Sub is. Cloud Pub/Sub is a fully managed real-time messaging service that allows you to send and receive messages between independent applications. Imagine a bucket between two people (Figure 1-4): one person puts a note in the bucket and the other person retrieves the note from it. You have now passed a message between two people, and the bucket was the transport mechanism.

Cloud Pub Sub
Figure 1-4. Cloud Pub/Sub
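In code, the "bucket" is a topic and the "note" is a message. A minimal sketch with the Python client library (the topic and subscription names are illustrative) looks like this:

from google.cloud import pubsub_v1  # pip install google-cloud-pubsub

project = "my-project"

# One side drops a note in the bucket...
publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path(project, "stt-jobs")
publisher.publish(topic_path, data=b'{"sttnameid": "operations/12345"}').result()

# ...and the other side takes it out and acknowledges it.
subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path(project, "stt-jobs-sub")
response = subscriber.pull(request={"subscription": subscription_path, "max_messages": 1})
for received in response.received_messages:
    print(received.message.data)
    subscriber.acknowledge(request={"subscription": subscription_path,
                                    "ack_ids": [received.ack_id]})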

Gather Data from the Audio File

At this point, we have an uploaded audio file, a function that was triggered, and a message sitting in Pub/Sub. Cloud Pub/Sub is an asynchronous messaging service that allows applications to share data. Our message contains a few key:value pairs, and one of them is the Speech-to-Text job name. To transcribe long audio files we use the asynchronous speech recognition method, which does not return the result of the transcription but rather a job name, allowing us to check the status of the transcription until it completes.

How do we take the note from the bucket and read its content? Cloud Dataflow will allow us to build a pipeline that reads the message from Pub/Sub and enriches or transforms the data as defined by the developer. Cloud Dataflow is a fully managed service for executing Apache Beam pipelines within Google Cloud. Apache Beam is an open-source, unified model for defining batch and streaming pipelines. Figure 1-5 illustrates the pipeline that will be used in our framework.

Cloud Dataflow Pipeline
Figure 1-5. Cloud Dataflow Pipeline

Cloud Dataflow is a powerful tool for performing ETL operations on our data (Cloud Dataflow is covered in more detail in a later chapter). It is a core component of the framework: it pulls the messages from Pub/Sub, enriches the data by calling additional APIs within the pipeline, and finally writes the data to BigQuery, allowing us to visualize the metrics collected from the audio file.
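A minimal skeleton of such a pipeline, written with the Apache Beam Python SDK, is sketched below. The subscription, table, and DoFn names are illustrative and not the repo's exact code:

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

class EnrichTranscript(beam.DoFn):
    """Placeholder for the enrichment steps: poll Speech-to-Text, call the
    Natural Language API, and shape the row to match the BigQuery schema."""
    def process(self, element):
        yield {"fileid": "example", "transcript": element.decode("utf-8")}

options = PipelineOptions(streaming=True)  # pass --runner=DataflowRunner to run on Dataflow

with beam.Pipeline(options=options) as pipeline:
    (
        pipeline
        | "ReadFromPubSub" >> beam.io.ReadFromPubSub(
            subscription="projects/my-project/subscriptions/stt-jobs-sub")
        | "Enrich" >> beam.ParDo(EnrichTranscript())
        | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
            "my-project:call_analytics.transcripts",
            # Assumes the destination table already exists with a matching schema.
            create_disposition=beam.io.BigQueryDisposition.CREATE_NEVER,
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND)
    )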

Cloud Dataflow pipelines can be written in Python or Java. For our framework, I have chosen Python as I feel it is easier to read and understand even if you don’t have a programming background. As an example, review the code snippet below and try to determine what those lines of code are doing.

import json
import logging
import time

def stt_output_response(data):
    from oauth2client.client import GoogleCredentials
    from googleapiclient import discovery
    credentials = GoogleCredentials.get_application_default()
    pub_sub_data = json.loads(data)
    speech_service = discovery.build('speech', 'v1p1beta1', credentials=credentials)
    # Look up the long-running Speech-to-Text operation by the job name
    # carried in the Pub/Sub message.
    get_operation = speech_service.operations().get(
        name=pub_sub_data['sttnameid'])
    response = get_operation.execute()
    # Handle polling of STT: wait roughly half the audio duration before the
    # first status check, then retry on a fixed interval.
    if pub_sub_data['duration'] != 'NA':
        sleep_duration = round(int(float(pub_sub_data['duration'])) / 2)
    else:
        sleep_duration = 5
    logging.info('Sleeping for: %s', sleep_duration)
    time.sleep(sleep_duration)
    retry_count = 10
    while retry_count > 0 and not response.get('done', False):
        retry_count -= 1
        time.sleep(120)
        response = get_operation.execute()

If you think it’s polling the Speech-To-Text API you’re right!

Cloud Dataflow will perform the following tasks in this order:

  1. Pulls the message from Pub/Sub

  2. Polls Speech-To-Text to determine if the job is completed

  3. Sends the completed transcript to Cloud Natural Language

  4. Receives the response from Cloud Natural Language

  5. Enriches the data with custom-defined functions

  6. Matches the data values to the BigQuery schema

  7. Finally writes the data to BigQuery

Once Cloud Dataflow completes its pipeline successfully you will have your data available in BigQuery! Figure 1-6 illustrates how the data will be represented in BigQuery.

BigQuery Nested Repeated Fields Preview
Figure 1-6. BigQuery Nested Repeated Fields Preview

In Figure 1-6 you will notice that rows are nested within rows; we will cover this in detail in the BigQuery chapter.

Visualization of the Enriched Data

Part of the Cloud Dataflow pipeline, as described, is to leverage the Speech-to-Text and Natural Language APIs to enrich the data. Enriching the data is the process of adding the data points our use case calls for, for example, determining call silence time. We can use Cloud Dataflow to run custom algorithms to calculate the call silence per audio file.
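One simple way to approximate call silence, sketched below, is to subtract the time covered by spoken words (using the Speech-to-Text word time offsets) from the total call duration. The words, startSecs, and endSecs field names mirror the nested fields you will see in BigQuery later, but this helper is illustrative rather than the pipeline's exact code:

def call_silence_seconds(words, call_duration_secs):
    """Rough call-silence estimate: total call time minus time covered by words."""
    spoken = sum(float(w["endSecs"]) - float(w["startSecs"]) for w in words)
    return max(call_duration_secs - spoken, 0.0)

# Example: a 30-second call where only 18 seconds contain speech.
words = [
    {"startSecs": 0.0, "endSecs": 10.0},
    {"startSecs": 22.0, "endSecs": 30.0},
]
print(call_silence_seconds(words, 30))  # -> 12.0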

Determining the speaker, talk time, and silence time

The following code snippet is an example of how you can process long audio files. It employs speaker diarization, a feature that detects when speakers change and adds a numbered label to each distinct voice detected in the audio.

You can use the speaker diarization and word timestamp features to determine the speaker, speaker talk time, and call silence. You can also create a sentiment heatmap for more detail.

// spclient is assumed to be an instance of the @google-cloud/speech SpeechClient.
const audioConfig = {
   encoding:"FLAC",
   sampleRateHertz: 44100,
   languageCode: `en-US`,
   enableSpeakerDiarization: true,
   diarizationSpeakerCount: 2,
   enableAutomaticPunctuation: true,
   enableWordTimeOffsets: false,
   useEnhanced: true,
   model: 'phone_call'
 };
 const audioPath = {
   uri: `gs://${object.bucket}/${object.name}`
 };
 const audioRequest = {
   audio: audioPath,
   config: audioConfig,
 };
 return spclient
   .longRunningRecognize(audioRequest)
   .then(data => {
     const operation = data[0];
     return operation.promise();
   })
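Once the long-running operation completes, each recognized word carries a speaker tag, and with word time offsets enabled it also carries start and end times. Here is a minimal, illustrative Python sketch (not the repo's code) of turning that output into per-speaker talk time:

from collections import defaultdict

def talk_time_by_speaker(words):
    """Aggregate per-speaker talk time from diarized, time-stamped words.

    `words` is assumed to be a list of dicts such as
    {"speakerTag": 1, "startSecs": 1.2, "endSecs": 1.8}.
    """
    totals = defaultdict(float)
    for w in words:
        totals[w["speakerTag"]] += float(w["endSecs"]) - float(w["startSecs"])
    return dict(totals)

# Example: speaker 1 (the agent) talks for 1.5 seconds, speaker 2 (the caller) for 0.6.
words = [
    {"speakerTag": 1, "startSecs": 0.0, "endSecs": 1.5},
    {"speakerTag": 2, "startSecs": 2.0, "endSecs": 2.6},
]
print(talk_time_by_speaker(words))  # -> roughly {1: 1.5, 2: 0.6}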

Call center leads can see the progression of the call, including how the call started and ended. In addition to the visual progression, they can also drill into each square to view the sentence sentiment.

Framework screenshot from our Frontend built with React.js
Figure 1-7. Framework screenshot from our Frontend built with React.js

Extracting sentiment from the conversation

We will use the Natural Language API to extract overall transcription sentiment, sentence sentiment, and entities. With this data available for the transcribed audio file, you can create heatmaps and sentiment timelines. You can also build word clouds.

The following example shows a code snippet to capture sentence sentiment:

client
  .analyzeSentiment({document: document})
  .then(results => {
    const sentences = results[0].sentences;
    sentences.forEach(sentence => {
      pubSubObj.sentences.push({
        'sentence': sentence.text.content,
        'score': sentence.sentiment.score,
        'magnitude': sentence.sentiment.magnitude
      });
    });
  });

Figure 1-7 is a sample of what we will build to address the use case: we give business users access to a sentence heatmap that allows them to drill into each audio file and understand the progression of the call.

For the user visualization and APIs, the application will be deployed on Google Kubernetes Engine (we’ll cover this in more detail later). The GitHub repo for this book includes an API built with Express.js that leverages the BigQuery Node.js SDK to run SQL statements to retrieve data. The SQL commands are invoked in response to a user clicking on the visualization. This Express.js application will be deployed on Google Kubernetes Engine alongside the user interface.

Remember that our source of truth will be BigQuery. We will visualize this data to the user through a web browser, and our application for visualization will be developed using the React Framework and Node.js.

The following sample query looks for all the words in the transcript, which are stored as a nested repeated field. The query is executed using the BigQuery SDK, which retrieves all the words for the relevant record. This is a sample from our Express.js API; its response is received by the web application.

const sqlQueryCallLogsWords =
  "SELECT ARRAY(SELECT AS STRUCT word, startSecs, endSecs FROM UNNEST(words)) words " +
  "FROM `" + bigqueryDatasetTable + "` " +
  "WHERE fileid = '" + queryFileId + "'";

Securing the Application

We have covered a wealth of information; stay with me a bit longer.

For the final component of the framework, we need to implement authorization, allowing access to the application only to the users we deem appropriate. Keep in mind that authorization is only one component of security; we also have to take into consideration items such as encryption and protecting sensitive data. For now, let's focus on authentication and authorization. Google Cloud provides a service called Identity-Aware Proxy (IAP). IAP works by verifying the identity of the user and the context of the request to either allow or deny access to the application. I like to think of it as a proxy managed by Google Cloud that sits in front of our application, intercepting requests and authorizing users on our behalf.

Why is IAP revolutionary? IAP abstracts the developer from having to build a backend authentication system, allowing the developer to focus on the core functionality of the application. Basically, IAP allows us to get to market quicker. Our application will have no authorization code. That's right: no code to authorize users, no third-party SDK for validating tokens, and no other complex code to obtain and validate a JSON Web Token. All our application will do is focus on its core role of providing insights to users on the audio files we have transcribed and enriched.
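The one piece of identity the application might still read is the header IAP forwards with each authorized request, for example to personalize the UI. A minimal sketch, assuming a Flask backend (IAP passes the signed-in user in the X-Goog-Authenticated-User-Email header):

from flask import Flask, request

app = Flask(__name__)

@app.route("/")
def index():
    # IAP has already authenticated and authorized the request before it reaches us.
    user = request.headers.get("X-Goog-Authenticated-User-Email", "unknown")
    return f"Hello {user}, here are your call insights."

Our framework's frontend is actually built with React and Express rather than Flask; the snippet simply shows that reading a header is all that remains once IAP fronts the application.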

IAP also supports a range of provider types:

  • Email and password

  • OAuth (such as Google, Facebook, Twitter, and more)

  • SAML

  • OIDC

  • Phone number

  • Custom

  • Anonymous

We can secure the application while getting to market quicker, support a wide range of authentication providers, and simplify access for users by not requiring a VPN client to reach the application.

Cloud Native Checkpoint

Now that you have a full understanding of the use case and the framework we will use to meet its requirements, let's revisit the considerations we listed for our cloud native architecture:

  1. Our application needs to be abstracted from the cloud infrastructure.

  2. We need a method of Continuous Integration and Continuous Delivery.

  3. Our application has to be able to scale up and scale down as needed.

  4. How will our application manage failures?

  5. The application needs to be globally accessible.

  6. Our framework needs to be based on a microservice architecture.

  7. How will our application be secured?

Abstracted from the Cloud Infrastructure

Abstraction from the cloud infrastructure can be a misleading term. When I first started learning about cloud native frameworks, I always considered it a method of being cloud-agnostic: build an application and deploy it to any cloud provider. However, abstraction from the cloud infrastructure really means decoupling the application from the underlying infrastructure. But if the application is decoupled, can we not run it on any cloud provider, or even on-prem? This is where cloud native can be interpreted differently by different people. We build cloud native applications as a method to provision resources, automate the application lifecycle, and be scalable and secure. However, there will always be a dependency on the cloud provider and its requirements for building cloud-native applications. We need to take these into account when building our cloud-native application, which we will do throughout this book. But keep in mind that this book is about running cloud-native applications on Google Cloud, and for that we will focus on building a scalable and secure application with Google Cloud services as much as possible.

Continuous Integration and Continuous Delivery

One core component of cloud native applications is a Continuous Integration (CI) and Continuous Delivery (CD) architecture. In our framework, we will use GitLab CI/CD to create our deployment pipeline. We have ventured outside of Google Cloud for this service, but we are still within the cloud native application framework: we still have a pipeline to build, test, and validate new code before the changes are merged into the GitLab repository, and an application deployment pipeline that runs once the CI pipeline has successfully passed all tests.

CI CD Pipeline
Figure 1-8. CI/CD Pipeline

Scalability

Another core component of cloud native applications is the ability to scale up and scale down. We accomplish this with microservices whose state is stored externally to the container. This allows instances to come up and down as required, and it allows each instance to work independently, with any instance able to service a request.

Note

If you want to learn more about Cloud Native Computing or get engaged in the community I suggest you visit The Cloud Native Computing Foundation (CNCF) website https://www.cncf.io/.

This sounds amazing: creating microservices that are stateless, can come and go, and can service requests independently of one another. The downside is the management of such an approach. How can we manage these services, secure them, set declarative statements for their state, and orchestrate this workflow? Kubernetes (k8s) fills this requirement. Kubernetes is an open-source container-orchestration system that manages deployment, scaling, security, and much more. Here are some key features of Kubernetes:

  1. Horizontal scaling

  2. Self-healing

  3. Automated rollouts and rollbacks

  4. Service discovery and load balancing

  5. Secret and configuration management

Kubernetes was originally developed at Google and released as open source in 2014. I will say Kubernetes is a difficult concept to initially grasp; when I first started learning about it and heard terms such as pods, I was ready to scream. However, Google Cloud provides another solution: Google Kubernetes Engine, which abstracts much of the complexity of managing a Kubernetes cluster. It provides Kubernetes as a managed service, with no clustering software to install and no worries about configuring monitoring, basically a fully managed cluster from the company that created Kubernetes.

I will cover Google Kubernetes Engine in greater detail in future chapters. But for now, it's worth covering a few high-level concepts, as they will allow you to understand why Kubernetes Engine fits nicely with cloud native applications.

Application Reliability

Even though Google Cloud provides service-level agreements (SLAs) and has mechanisms in place to provide reliability, things do fail. We need to consider how our application will handle failures in these situations. In the scalability section, I touched briefly on Kubernetes, which alone provides some fault tolerance, as our cluster would have at least three nodes. But if a Google Cloud region fails, our cluster would fail as well. Some Google Cloud services, such as BigQuery, provide for regional failures; others do not. We have to be mindful of each service and its respective fault-tolerance features.

I can't cover all the components in a single sitting, but each chapter will include a cloud native checkpoint section that makes sure we address the cloud native framework, which includes application reliability. Besides large failures, such as a zonal or regional outage, we also have to consider failures such as bugs in our code and human error when building our cloud native application.

It is also a good idea to understand Google Cloud regions and zones, to start us thinking about how to manage those failures.

  • Google Cloud Zone

    Zones have high-bandwidth, low-latency connections to other zones in the same region. A zone is an isolated location within a region

    Zones should be considered a single failure domain within a region

  • Google Cloud Region

    Regions are collections of zones. It is recommended to deploy applications across regions for high availability

    Regions are independent geographic locations

    Zones within a region typically have round-trip network latencies of under 1 ms at the 95th percentile

  • Zonal Resources

    Zonal resources operate within a single zone

    If a Google Cloud zone becomes unavailable, all of the zonal resources in that zone are unavailable

  • Regional Resources

    Regional resources are redundantly deployed across the zones within a region

    Certain Google Cloud services, such as App Engine, are regional resources

  • Multiregional Resources

    A few managed services provided by Google Cloud are distributed across regions. The following services have multiregional deployment options:

    • Datastore

    • Cloud Key Management Service

    • Cloud Storage

    • BigQuery

    • Cloud Spanner

    • Cloud Bigtable

    • Cloud Healthcare API

To build applications that can withstand zone failures, use regional resources, or build your application with resiliency across zones. To build applications that can withstand the loss of an entire region, use multiregional resources, or build resiliency into your applications.

Globally Accessible

We did not cover this in the framework, but our application needs to be globally accessible, as we will have users on the platform from different global regions. We need users to ingress into Google Cloud as close as possible to their location and access the application nearest their region. We are in luck: Google Cloud provides a managed load balancing service with a global IP address. As stated on Google's website, it offers "worldwide autoscaling and load balancing."

Global Cloud Load Balancing
Figure 1-9. Global Cloud Load Balancing

Security

We covered IAP at a high level; it allows us to authenticate and authorize users. That is only one element of what we need to consider when designing a cloud native application. Other elements we need to consider are:

  1. Key Management

  2. Audit Logs

  3. Identity Access Management

Each component of our framework will need to take these key elements into consideration, as each component might provide different methods of securing it. You should also note that security is a topic that deserves a dedicated book. While I cover as much as I can in this book (for each service the framework uses), I recommend reading the documentation for the respective services for additional information.

Key Management

Google Cloud encrypts data at rest and manages the keys, including rotation. Most customers leverage the default setting, but others want a little more flexibility. Google Cloud offers the following options for key management:

  1. Google Managed Keys

  2. Customer Managed Keys with Cloud Key Management Service

  3. Customer Managed keys with a third-party key management system

The easiest option is to let Google Cloud manage the keys. However, if you choose to manage your own keys, here is a simple process to create and use keys with the Cloud Key Management Service:

Create a keyring named pigeonkeyring, and a key named pigeonkey.

gcloud kms keyrings create "pigeonkeyring" \
    --location "global"
gcloud kms keys create "pigeonkey" \
    --location "global" \
    --keyring "pigeonkeyring" \
    --purpose "encryption"

You can use the list option to view the name and metadata for the key that you just created.

gcloud kms keys list \
    --location "global" \
    --keyring "pigeonkeyring"

Let’s store some text to be encrypted in a file called “pigeonsecret.txt”.

echo -n "My super-secret message is encrypted" > pigeonsecret.txt

Now let's encrypt the data: provide the key information, specify the name of the plaintext file to encrypt, and specify the name of the file that will contain the encrypted content:

gcloud kms encrypt \
    --location "global" \
    --keyring "pigeonkeyring" \
    --key "pigeonkey" \
    --plaintext-file ./pigeonsecret.txt \
    --ciphertext-file ./pigeonsecret.txt.encrypted

You now have your message encrypted in the file, with your key and Cloud KMS. So how do I read the contents of the encrypted file? To decrypt the data with gcloud kms decrypt, provide your key information, specify the name of the encrypted file to decrypt, and specify the name of the file that will contain the decrypted content:

gcloud kms decrypt \
    --location "global" \
    --keyring "pigeonkeyring" \
    --key "pigeonkey" \
    --ciphertext-file ./pigeonsecret.txt.encrypted \
    --plaintext-file ./pigeonsecret.txt.decrypted

You have successfully encrypted and decrypted the contents of the sample file!
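Applications can use the same key programmatically through the Cloud KMS client library. Here is a minimal sketch, assuming the pigeonkeyring and pigeonkey created above and a recent google-cloud-kms release:

from google.cloud import kms  # pip install google-cloud-kms

client = kms.KeyManagementServiceClient()
key_name = client.crypto_key_path("my-project", "global", "pigeonkeyring", "pigeonkey")

# Encrypt a secret with pigeonkey, then decrypt it again.
encrypted = client.encrypt(
    request={"name": key_name, "plaintext": b"My super-secret message is encrypted"})
decrypted = client.decrypt(
    request={"name": key_name, "ciphertext": encrypted.ciphertext})
print(decrypted.plaintext.decode("utf-8"))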

Audit Logs

An important component of the framework is logging. In a security context, we want to know who did what and when. Google Cloud Audit Logs records administrative activities. The logs are stored in protected storage, resulting in an immutable and highly durable audit trail.

Cloud Audit Logs maintains three audit logs for each Google Cloud project, folder, and organization:

  • Admin Activity

    Admin Activity audit logs contain log entries for API calls or other administrative actions that modify the configuration or metadata of cloud resources.

  • Data Access

    Data Access audit logs contain API calls that read the configuration or metadata of resources, as well as user-driven API calls that create, modify, or read user-provided resource data.

  • System Event

    System Event audit logs contain log entries for administrative actions that modify the configuration of resources.

Identity Access Management

Identity Access Management (IAM) is a framework for implementing policies that ensure users and services have the proper access to resources. Google Cloud provides a service to manage this called Cloud IAM, a simple and to-the-point name. Cloud IAM has three main components:

  1. Member. A member can be a Google Account user, a service account, a Google group, or a G Suite or Cloud Identity domain that can access a resource.

  2. Role. A role is a collection of permissions; permissions determine what is allowed on a resource.

  3. Policy. A policy binds members to a role.

Let's touch on permissions to give you a little background on why IAM is important; I will cover security for our framework in detail in later chapters. In Google Cloud IAM, permissions are represented in the form service.resource.verb. This can become complicated and confusing when we build our application with least privilege. To simplify the process of assigning permissions, we use roles, which are collections of permissions. Google Cloud offers lots of flexibility here: you can use custom roles or predefined roles. To keep permission management simple, I try to stay within the predefined roles Google Cloud provides. For example, the predefined role Pub/Sub Publisher (roles/pubsub.publisher) grants access only to publish messages to a Pub/Sub topic.

Besides IAM, I will also cover user authentication in a future chapter; we will leverage IAM, user authentication, logging, and encryption to build a secure cloud native application.

Summary

At this point, you should have a good understanding of the use case, our framework, and how we will leverage a cloud native approach to meet the business requirements. We covered many topics in this chapter, and I promise it will get easier, as each chapter builds on the last.

I also want to point out that the use case in this book is a means to apply what you learn to a practical application. However, you can apply these lessons to any cloud native application, as all will require methods similar to those discussed in this chapter. As we move on, there will be a cloud native checkpoint at the end of each chapter to validate that we are meeting the requirements.
