This chapter introduces the new AWS Proton service and the need for it within the developer community. You will understand how AWS Proton helps both developers and DevOps/infrastructure engineers with their work in the Software Development Life Cycle (SDLC). Then, we will look at the basic building blocks of the Proton service, the environment template and the service template. We will learn how to use environment templates to spin up multiple infrastructure environments, and how to deploy container instances on those environments. This chapter will also walk you through the code review process when a pull request is raised, as well as how to scan the source code for vulnerabilities and secret leaks. We will use Amazon CodeGuru Reviewer to review and perform static code analysis.
In this chapter, we are going to cover the following main topics:
To get started, you will need an AWS account and the source code contained in the chapter-03-aws-proton-template, chapter-03-aws-proton, and chapter-03-codeguru-sample folders:
The AWS Proton service was developed by AWS after considering lots of customer feedback, where the main issue was how to maintain infrastructure, build pipelines, and deploy applications at scale. Initially, when the service became generally available, it was difficult to understand the components of AWS Proton and how it differs from AWS Service Catalog and other developer tools. This service has a couple of components that can be confusing if you only read the documentation without looking at the template code. So, in this section, we will dive deep into how AWS Proton solves the problem of maintaining infrastructure, as well as how it helps you build pipelines and deploy applications at scale.
AWS Proton is a two-pronged automation framework that does the following:
The following diagram simplifies these two points:
The target users for the Proton service are administrators/DevOps engineers and developers. Admins/DevOps engineers are responsible for creating the environment template code (using CloudFormation). Creating the service template code (also using CloudFormation) requires DevOps engineers and developers to work together, because developers know the build steps that will be embedded into the service template code. The workflow for AWS Proton is as follows:
Now, suppose the developer needs to deploy the application in the staging environment. Here, the code should be from the staging branch. Then, the developer just needs to create another service using the same service template and configure it with the staging environment. This way, you don't need to create another pipeline manually; the service template will create another staging CI/CD resource for you, and then deploy it to the staging environment.
The following diagram shows a flow representation of the preceding steps.
We will see the preceding stages in action in the next section. These points should give you an idea of how Proton resolves the issue of maintaining infrastructure, building pipelines, and deploying applications at scale.
There are some additional features in AWS Proton that make it more robust to use in terms of its capabilities, such as version management and cross-account support. You can manage multiple versions of the environment template and update all the environments with the latest version with a single click. AWS Proton also supports cross-account access, which means that if an admin wants to, they can use the environment template of account A (Management account) and create an environment infrastructure in account B (Environment account). Similarly, a developer can also deploy the services from account A to the environment infrastructure of account B:
Apart from writing templates in CloudFormation and template versioning, there are some new features on the AWS Proton roadmap that add more capabilities, such as the following:
Now that we have an idea of what AWS Proton is and its components, the environment template and the service template, let's look at the Proton environment template.
In this section, we will learn how to create an environment template bundle for a standard environment and the tips we should use while writing an effective template. After that, we will register an environment template in AWS Proton and create multiple environments using an environment template. We will be using the aws-proton-template repository, which was mentioned in the Technical requirements section.
As we mentioned previously, in AWS Proton, the environment template defines the shared infrastructure that's used by multiple resources. With an environment template, we can create multiple environment infrastructures. An environment template typically includes resources related to compute, storage, and network. In our case, the environment template that we will be registering contains the following resources:
To register an environment template, we need to create an environment template bundle. The environment template directory structure looks like this:
/infrastructure
  cloudformation.yaml
  manifest.yaml
/schema
  schema.yaml
As you can see, the infrastructure directory includes two files – cloudformation.yaml and manifest.yaml. The cloudformation.yaml file defines the compute, storage, and network resources. If you go to the chapter-03-aws-proton-template folder, you will see an environment folder; its infrastructure/cloudformation.yaml file defines the infrastructure resources, as shown here:
AWSTemplateFormatVersion: '2010-09-09'
Description: AWS Fargate cluster running containers in a public subnet. Only supports
             public facing load balancer, and public service discovery namespaces.
Mappings:
  # The VPC and subnet configuration is passed in via the environment spec.
  SubnetConfig:
    VPC:
      CIDR: '{{environment.inputs.vpc_cidr}}'
    PublicOne:
      CIDR: '{{environment.inputs.subnet_one_cidr}}'
    PublicTwo:
      CIDR: '{{environment.inputs.subnet_two_cidr}}'
Resources:
  VPC:
    Type: AWS::EC2::VPC
    Properties:
      EnableDnsSupport: true
      EnableDnsHostnames: true
      CidrBlock: !FindInMap ['SubnetConfig', 'VPC', 'CIDR']
There are certain processes you must follow to create an environment template bundle that defines infrastructure resources. These processes are as follows:
The following code explains how to use a customization parameter. The necessary mappings are as follows:
# The VPC and subnet configuration is passed in via the environment spec.
SubnetConfig:
  VPC:
    CIDR: '{{environment.inputs.vpc_cidr}}' # customization parameter
  PublicOne:
    CIDR: '{{environment.inputs.subnet_one_cidr}}'
  PublicTwo:
    CIDR: '{{environment.inputs.subnet_two_cidr}}'
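To build intuition for what happens to these placeholders, the substitution that Proton performs can be illustrated with a minimal sketch. This is a deliberately simplified stand-in for the real Jinja engine that Proton uses, not Proton's actual implementation; the render function and its regular expression are purely illustrative:

```python
import re

def render(template: str, inputs: dict) -> str:
    # Replace each {{environment.inputs.NAME}} placeholder with the
    # corresponding value, mimicking (in a very simplified way) what
    # AWS Proton's Jinja rendering does with the schema inputs.
    return re.sub(
        r"\{\{\s*environment\.inputs\.(\w+)\s*\}\}",
        lambda m: str(inputs[m.group(1)]),
        template,
    )

snippet = "CIDR: '{{environment.inputs.vpc_cidr}}'"
print(render(snippet, {"vpc_cidr": "10.0.0.0/16"}))  # CIDR: '10.0.0.0/16'
```

In the real service, the values come from the spec the administrator supplies when creating an environment, validated against the schema file we will look at shortly.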
Now, we need to identify the resource-based parameters. Resource-based parameters are parameters that reference the output values of other infrastructure template files. For example, the output values of the infrastructure template can be used in the service template as resource parameters. The following snippet explains this further:
Outputs:
  ClusterName:
    Description: The name of the ECS cluster
    Value: !Ref 'ECSCluster'
  ECSTaskExecutionRole:
    Description: The ARN of the ECS role
    Value: !GetAtt 'ECSTaskExecutionRole.Arn'
The preceding snippet contains the output values of the infrastructure template. These outputs (for example, ClusterName) can be used in the following service template as resource parameters:
Service:
  Type: AWS::ECS::Service
  DependsOn: LoadBalancerRule
  Properties:
    Cluster: '{{service_instance.environment.outputs.ClusterName}}' # imported resource parameter
    LaunchType: FARGATE
    DeploymentConfiguration:
      MaximumPercent: 200
      MinimumHealthyPercent: 75
Once you have identified the resources and parameters, you can define a schema, which serves as the customization parameter interface between AWS Proton and the infrastructure template files. AWS Proton uses the Jinja templating engine to handle parameter values in the schema file and the CloudFormation file. The following diagram explains how the AWS Proton backend works. After this, we will have a look at the relationship between the schema file and the cloudformation file:
The schema file, which is shown on the left, shows one input property, vpc_cidr, which is used in the cloudformation file in the Mappings section:
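For reference, a minimal schema file along these lines might look like the following. This is a sketch modeled on AWS Proton's public sample templates; the input type name, descriptions, and default value are illustrative, so check them against the actual schema in the repository:

```yaml
schema:
  format:
    openapi: "3.0.0"
  # The input type whose properties become {{environment.inputs.*}} values.
  environment_input_type: "PublicEnvironmentInput"
  types:
    PublicEnvironmentInput:
      type: object
      description: "Input properties for the environment"
      properties:
        vpc_cidr:
          type: string
          description: "CIDR range for the VPC"
          default: "10.0.0.0/16"
```

When an administrator creates an environment from this template, AWS Proton prompts for these properties (or reads them from a spec file) and renders them into the CloudFormation file.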
Once you have your infrastructure CloudFormation and schema files, you must organize them into directories. You also need to create a manifest file that lists the infrastructure files and adheres to the format and content shown in the following snippet:
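As a sketch (based on AWS Proton's public examples; verify the exact fields against the manifest.yaml in the chapter's repository), the manifest declares each template file along with its rendering engine and template language:

```yaml
infrastructure:
  templates:
    - file: "cloudformation.yaml"      # the infrastructure template to render
      rendering_engine: jinja          # Proton renders the file with Jinja
      template_language: cloudformation
```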
The preceding points will help you create an environment template bundle that includes the cloudformation.yaml, manifest.yaml, and schema.yaml files. To use the environment template bundle in AWS Proton, we need to perform the following steps:
$ git clone https://github.com/PacktPublishing/Accelerating-DevSecOps-on-AWS.git
# Assuming you already have awscli configured
$ aws s3api create-bucket --bucket "proton-cli-templates-${account_id}"
$ cd Accelerating-DevSecOps-on-AWS/chapter-03-aws-proton-template
$ tar -zcvf env-template.tar.gz environment/
$ aws s3 cp env-template.tar.gz s3://proton-cli-templates-${account_id}/env-template.tar.gz
# Creating the IAM role
$ aws iam create-role --role-name aws_proton_svc_admin --assume-role-policy-document file://policy/proton-service-assume-policy.json
# Attaching the policy to the role
$ aws iam attach-role-policy --role-name aws_proton_svc_admin --policy-arn arn:aws:iam::aws:policy/AdministratorAccess
# Allowing Proton to use this role
$ aws proton update-account-settings --pipeline-service-role-arn "arn:aws:iam::${account_id}:role/aws_proton_svc_admin"
With that, we have just created two environments using the same environment template via AWS Proton. In the next section, we will learn how to create service templates and deploy services in these environments.
So far, we have learned how to create an environment template bundle. In this section, we will learn how to create a service template bundle and register it with AWS Proton. After that, we will learn how to create a service and service instance that will be deployed in both the staging and dev environments.
At the beginning of this chapter, we provided a brief overview of the service template. The service template includes two sub-templates – one is an application-service related file that contains information regarding the task definition, load balancer target group, alarms, and so on, while the other sub-template is a build and deploy pipeline template that includes definitions related to developer tools such as CodeBuild, CodeDeploy, and CodePipeline. Using a service template, we can create multiple Proton services (that refer to their respective application branches), which will build the application and deploy it to a certain infrastructure environment. Application services that are deployed to an environment are known as service instances.
A service template bundle consists of cloudformation.yaml and manifest.yaml files in both the instance_infrastructure and pipeline_infrastructure folders. It also includes schema files:
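Following the same conventions as the environment template bundle shown earlier, the service template bundle's directory structure looks like this:

```text
/instance_infrastructure
  cloudformation.yaml
  manifest.yaml
/pipeline_infrastructure
  cloudformation.yaml
  manifest.yaml
/schema
  schema.yaml
```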
The tips and recommendations for writing service templates are the same as those for writing environment templates, such as using customization and resource parameters. The service template is also in this book's GitHub repository. To use the service template bundle with AWS Proton, perform the following steps:
$ cd chapter-03-aws-proton-template
$ tar -zcvf svc-template.tar.gz service/
$ aws s3 cp svc-template.tar.gz s3://proton-cli-templates-${account_id}/svc-template.tar.gz
Once the service template has been published, the developer can use this service template to create a service instance, which helps deploy the application to the environment. We will deploy the containerized application in the next section.
In this section, we will create a service instance to deploy the containerized application on both environments. First, we will create a source connection to the application repository so that it can be used by AWS Proton (you need to create this repository in GitHub and push the files from the chapter-03-aws-proton folder to the master branch; in my case, I created a repo called aws-proton in GitHub). You also need to create a dev branch out of the master branch and edit line 93 of the index.html file, replacing the string Staging with Dev. We will deploy the application from the dev branch to stark-env-dev and then deploy the master branch to stark-env-staging.
To create a source connection with your VCS (GitHub, though you can use Bitbucket or GitLab as well), perform the following steps:
Now that we've created the source connection, let's deploy the application by creating a service instance.
To deploy the application on the environment, perform the following steps:
Deploying the application from the dev branch to the dev environment was straightforward; we didn't even have to write a separate task definition file for ECS. Instead, we leveraged the service template. This is the power of templating and the AWS Proton service, where we can spin up multiple infrastructure environments or deploy multiple instances of the application on an environment at scale. We can also make this environment more secure by making sure that the template we are using passes the CloudFormation Guard checks. In the next section, we will learn how to scan application code using Amazon CodeGuru.
In the software development life cycle, the code review process takes place when developers have written their code and raise a pull request to merge it into an upstream branch. The code review is generally done by the team leader of the project, but eyeballing the entire code can be a slow process. The code review process is important, but it shouldn't increase the workload for reviewers and become a bottleneck in development. By using code review tools, we can automate the process of reviewing code. Some well-known tools on the market, such as SonarQube, do this for us. Recently, Amazon launched a new service called Amazon CodeGuru, which can perform code reviews as well as provide application performance recommendations. This not only helps improve the reliability of the software but also lets us dig deep and cut down on the time spent finding difficult issues, such as sensitive data exposure, race conditions, undefined functions, and slow resource leaks.
CodeGuru is powered by machine learning, best practices, and a large code base. It has learned from millions of code reviews performed in open source projects, as well as internally at Amazon.
CodeGuru provides the following two functionalities:
The following diagram shows the capabilities of Amazon CodeGuru:
At the time of writing, CodeGuru supports two languages: Java and Python. It works with the following VCSes:
We will use CodeGuru Reviewer to review the code in the CodeCommit repository in the next section.
In this section, we will create a CodeCommit repository and push the code to it. We will then associate the CodeCommit repository with CodeGuru, create another branch, modify the code in the new branch, and raise a pull request. Finally, we will look at the recommendations provided by CodeGuru on the pull request.
To get the recommendation from CodeGuru in the CodeCommit repository, perform the following steps:
$ git clone https://git-codecommit.us-east-1.amazonaws.com/v1/repos/codeguru-sample-app
$ git clone https://github.com/PacktPublishing/Modern-CI-CD-on-AWS.git
$ cd Modern-CI-CD-on-AWS/chapter-03-codeguru-sample
$ cp -rpf * ../../codeguru-sample-app
$ cd ../../codeguru-sample-app
$ git add .
$ git commit -m "initial push"
$ git push origin master
Based on the recommendations provided by CodeGuru, the reviewer can easily ask the author to fix the code and then raise the pull request again. This saves lots of time, as well as manual work. We will learn more about CodeGuru in Chapter 9, DevSecOps Pipeline with AWS Services and Tools Popular Industry-Wide, where we will implement a full CI/CD pipeline with security in place.
AWS Proton is an amazing service when it comes to automating the process of codifying your infrastructure and application deployment at scale. We learned how to create an environment and service template bundle and covered various writing tips. We also spun up multiple environments using a single environment template and deployed the containerized application from a different branch in the respective environment using a service instance. When it came to reviewing the code, we learned how Amazon CodeGuru can give amazing recommendations, even at the time of raising pull requests.
The next chapter will cover how we can implement a service mesh in an EKS cluster and restrict network and API communication between services and pods.