THE AWS CERTIFIED SOLUTIONS ARCHITECT ASSOCIATE EXAM OBJECTIVES COVERED IN THIS CHAPTER MAY INCLUDE, BUT ARE NOT LIMITED TO, THE FOLLOWING:
Operations include the ongoing activities required to keep your applications running smoothly and securely on AWS. Achieving operational excellence requires automating some or all of the tasks that support your organization's workloads, goals, and compliance requirements.
The key prerequisite to automating is to define your infrastructure and operations as annotated code. Unlike manual tasks, which can be time‐consuming and are prone to human error, tasks embodied in code are repeatable and can be carried out rapidly. It's also easier to track code changes over time and repurpose it for testing and experimentation. For us humans, annotated code also functions as de facto documentation of your infrastructure configurations and operational workflows. In this chapter, you'll learn how to leverage the following AWS services to automate the deployment and operation of your AWS resources:
As you learned in Chapter 11, “The Performance Efficiency Pillar,” CloudFormation uses templates to let you simultaneously deploy, configure, and document your infrastructure as code. Because CloudFormation templates are code, you can store them in a version‐controlled repository just as you would any other codebase. The resources defined by a template compose a stack. When you create a stack, you must give it a name that's unique within the account.
One advantage of CloudFormation is that you can use the same template to build multiple identical environments. For example, you can use a single template to define and deploy two different stacks for an application, one for production and another for development. This approach would ensure both environments are as alike as possible. In this section, we'll consider two templates: network‐stack.json and web‐stack.json. To download these templates, visit awscsa.github.io.
In a CloudFormation template, you define your resources in the Resources section of the template. You give each resource an identifier called a logical ID. For example, the following snippet of the network‐stack.json template defines a VPC with the logical ID of PublicVPC:
"Resources": {
  "PublicVPC": {
    "Type": "AWS::EC2::VPC",
    "Properties": {
      "EnableDnsSupport": "true",
      "EnableDnsHostnames": "true",
      "CidrBlock": "10.0.0.0/16"
    }
  }
}
The logical ID, also sometimes called the logical name, must be unique within a template. The VPC created by CloudFormation will have a VPC ID, such as vpc‐0380494054677f4b8, also known as the physical ID.
To create a stack named Network using a template stored locally, you can issue the following AWS command‐line interface (CLI) command:

aws cloudformation create-stack --stack-name Network --template-body file://network-stack.json
CloudFormation can also read templates from an S3 bucket. To create a stack named Network using a template stored in an S3 bucket, you can issue a CLI command in the following format:

aws cloudformation create-stack --stack-name Network --template-url https://s3.amazonaws.com/cf-templates-c23z8b2vpmbb-us-east-1/network-stack.json
You can optionally define parameters in a template. A parameter lets you pass custom values to your stack when you create it, as opposed to hard‐coding values into the template. For instance, instead of hard‐coding the Classless Interdomain Routing (CIDR) block into a template that creates a VPC, you can define a parameter that will prompt for a CIDR when creating the stack.
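As a rough sketch of how this might look, the following hypothetical snippet declares a parameter named VpcCidr (the name is illustrative, not part of the downloadable templates) and references it with Ref instead of hard‐coding the CIDR:

```json
"Parameters": {
  "VpcCidr": {
    "Type": "String",
    "Default": "10.0.0.0/16",
    "Description": "CIDR block for the VPC"
  }
},
"Resources": {
  "PublicVPC": {
    "Type": "AWS::EC2::VPC",
    "Properties": {
      "CidrBlock": { "Ref": "VpcCidr" }
    }
  }
}
```

When creating the stack, you could then supply a custom value with --parameters ParameterKey=VpcCidr,ParameterValue=10.1.0.0/16, or accept the default.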
You can delete a stack from the web console or the AWS CLI. For instance, to delete the stack named Network, issue the following command:
aws cloudformation delete-stack --stack-name Network
If termination protection is not enabled on the stack, CloudFormation will immediately delete the stack and all resources that were created by it.
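To guard against accidental deletion, you can enable termination protection on a stack from the CLI. A command along the following lines should work, assuming the Network stack from the earlier example:

```sh
# Prevent the stack from being deleted until protection is explicitly disabled
aws cloudformation update-termination-protection \
    --enable-termination-protection \
    --stack-name Network
```

Attempts to delete a protected stack will fail until you rerun the command with --no-enable-termination-protection.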
You don't have to define all of your AWS infrastructure in a single stack. Instead, you can break your infrastructure across different stacks. A best practice is to organize stacks by lifecycle and ownership. For example, the network team may create a stack named Network to define the networking infrastructure for a web‐based application. The network infrastructure would include a virtual private cloud (VPC), subnets, Internet gateway, and a route table. The development team may create a stack named Web to define the runtime environment, including a launch template, Auto Scaling group, application load balancer, IAM roles, instance profile, and a security group. Each of these teams can maintain their own stack.
When using multiple stacks with related resources, it's common to need to pass values from one stack to another. For example, an application load balancer in the Web stack needs the ID of the VPC in the Network stack (the VPC's physical ID, not its logical ID). You therefore need a way to pass the VPC ID from the Network stack to the Web stack. There are two ways to accomplish this: using nested stacks and exporting stack output values.
Because CloudFormation stacks are AWS resources, you can configure a template to create additional stacks. These additional stacks are called nested stacks, and the stack that creates them is called the parent stack. To see how this works, we'll consider the network‐stack.json and web‐stack.json templates.
In the template for the Web stack (web‐stack.json), you'd define the template for the Network stack as a resource, as shown by the following snippet:
"Resources": {
  "NetworkStack" : {
    "Type" : "AWS::CloudFormation::Stack",
    "Properties" : {
      "TemplateURL" : "https://s3.amazonaws.com/cf-templates-c23z8b2vpmbb-us-east-1/network-stack.json"
    }
  },
The logical ID of the stack is NetworkStack, and the TemplateURL indicates the location of the template in S3. When you create the Web stack, CloudFormation will automatically create the Network stack first.
The templates for your nested stacks can contain an Outputs section where you define which values you want to pass back to the parent stack. You can then reference these values in the template for the parent stack. For example, the network‐stack.json template defines an output with the logical ID VPCID and the value of the VPC's physical ID, as shown in the following snippet:
"Outputs": {
  "VPCID": {
    "Description": "VPC ID",
    "Value": {
      "Ref": "PublicVPC"
    }
  },
The Ref intrinsic function returns the physical ID of the PublicVPC resource. The parent web‐stack.json template can then reference this value using the Fn::GetAtt intrinsic function, as shown in the following snippet from the Resources section:
"ALBTargetGroup": {
  "Type": "AWS::ElasticLoadBalancingV2::TargetGroup",
  "Properties": {
    "VpcId": { "Fn::GetAtt" : [ "NetworkStack", "Outputs.VPCID" ] },
Follow the steps in Exercise 14.1 to create a nested stack.
If you want to share information with stacks outside of a nested hierarchy, you can selectively export a stack output value by defining it in the Export field of the Outputs section, as follows:
"Outputs": {
  "VPCID": {
    "Description": "VPC ID",
    "Value": {
      "Ref": "PublicVPC"
    },
    "Export": {
      "Name": {
        "Fn::Sub": "${AWS::StackName}-VPCID"
      }
    }
  },
Any other template in the same account and region can then import the value using the Fn::ImportValue intrinsic function, as follows:
"ALBTargetGroup": {
  "Type": "AWS::ElasticLoadBalancingV2::TargetGroup",
  "Properties": {
    "VpcId": { "Fn::ImportValue" : {"Fn::Sub": "${NetworkStackName}-VPCID"} },
Keep in mind that you can't delete a stack if another stack is referencing any of its outputs.
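Before attempting to delete a stack that exports values, you can check whether any other stacks import them. Assuming the stack is named Network, so that the Fn::Sub expression above yields the export name Network-VPCID, a check might look like this:

```sh
# List the stacks that import the Network stack's VPC ID export
aws cloudformation list-imports --export-name Network-VPCID
```

If the command returns any stack names, the export is in use and the deletion will fail.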
If you need to reconfigure a resource in a stack, the best way is to modify the resource configuration in the source template and then either perform a direct update or create a change set.
To perform a direct update, upload the updated template. CloudFormation will ask you to enter any parameters the template requires. CloudFormation will deploy the changes immediately, modifying only the resources that changed in the template.
If you want to understand exactly what changes CloudFormation will make, create a change set instead. You submit the updated template, and once you create the change set, CloudFormation will display a list of every resource it will add, remove, or modify. You can then choose to execute the change set to make the changes immediately, delete the change set, or do nothing. You can create multiple change sets using different templates, compare them, and then choose which one to execute. This approach is useful if you want to compare several different configurations without having to create a new stack each time.
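As a sketch, the change set workflow from the CLI might look like the following; the change set name cidr-update is illustrative:

```sh
# Submit the updated template as a change set
aws cloudformation create-change-set \
    --stack-name Network \
    --change-set-name cidr-update \
    --template-body file://network-stack.json

# Review the resources CloudFormation would add, remove, or modify
aws cloudformation describe-change-set \
    --stack-name Network \
    --change-set-name cidr-update

# Apply the changes (or use delete-change-set to discard them)
aws cloudformation execute-change-set \
    --stack-name Network \
    --change-set-name cidr-update
```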
How CloudFormation updates a resource depends on the update behavior of the resource's property that you're updating. Update behaviors can be one of the following:
To prevent specific resources in a stack from being modified by a stack update, you can create a stack policy when you create the stack. If you want to modify a stack policy or apply one to an existing stack, you can do so using the AWS CLI. You can't remove a stack policy.
A stack policy follows the same format as other resource policies and consists of the same Effect, Action, Principal, Resource, and Condition elements. The Effect element functions the same as in other resource policies, but the Action, Principal, Resource, and Condition elements differ as follows:
Action: The action must be one of the following:

- Update:Modify: Allows updates to the specific resource only if the update will modify and not replace or delete the resource
- Update:Replace: Allows updates to the specific resource only if the update will replace the resource
- Update:Delete: Allows updates to the specific resource only if the update will delete the resource
- Update:*: Allows all update actions

Principal: The principal must always be the wildcard (*). You can't specify any other principal.

Resource: This element specifies the logical ID of the resource to which the policy applies. You must prefix it with the text LogicalResourceId/.

Condition: This element specifies the resource types, such as AWS::EC2::VPC. You can use wildcards to specify multiple resource types within a service, such as AWS::EC2::* to match all EC2 resources. If you use a wildcard, you must use the StringLike condition. Otherwise, you can use the StringEquals condition.

The following stack policy document, named stackpolicy.json, allows all stack updates except for those that would replace the PublicVPC resource. Such an update would include changing the VPC's CIDR.
{
  "Statement" : [
    {
      "Effect" : "Allow",
      "Action" : "Update:*",
      "Principal": "*",
      "Resource" : "*"
    },
    {
      "Effect" : "Deny",
      "Action" : "Update:Replace",
      "Principal": "*",
      "Resource" : "LogicalResourceId/PublicVPC",
      "Condition" : {
        "StringLike" : {
          "ResourceType" : ["AWS::EC2::VPC"]
        }
      }
    }
  ]
}
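To apply this policy document to an existing stack from the CLI, a command along these lines should work:

```sh
# Attach the stack policy to the Network stack
aws cloudformation set-stack-policy \
    --stack-name Network \
    --stack-policy-body file://stackpolicy.json
```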
CloudFormation doesn't preemptively check whether an update will violate a stack policy. If you attempt to update a stack in such a way that's prevented by the stack policy, CloudFormation will still attempt to update the stack. The update will fail only when CloudFormation attempts to perform an update prohibited by the stack policy. Therefore, when updating a stack, you must verify that the update succeeded; don't just start the update and walk away.
You can temporarily override a stack policy when doing a direct update. When you perform a direct update, you can specify a stack policy that overrides the existing one. CloudFormation will apply the updated policy during the update. After the update is complete, CloudFormation will revert to the original policy.
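A sketch of such an override, assuming a hypothetical override-policy.json that temporarily allows the otherwise-denied action:

```sh
# The override policy applies only for the duration of this update
aws cloudformation update-stack \
    --stack-name Network \
    --template-body file://network-stack.json \
    --stack-policy-during-update-body file://override-policy.json
```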
Git (git-scm.com) is a free and open source version control system invented by Linus Torvalds to facilitate collaborative software development projects. However, people often use Git to store and track a variety of file‐based assets, including source code, scripts, documents, and binary files. Git stores these files in a repository, colloquially referred to as a repo. The AWS CodeCommit service hosts private Git‐based repositories for version‐controlled files, including code, documents, and binary files. CodeCommit provides advantages over other private Git repositories, including the following:
A Git repository and S3 have some things in common. They both store files and provide automatic versioning, allowing you to revert to a previous version of a deleted or overwritten file. But Git tracks changes to individual files using a process called differencing, allowing you to see what changed in each file, who changed it, and when. It also allows you to create additional branches, which are essentially snapshots of an entire repository. For example, you could take an existing source code repository, create a new branch, and experiment on that branch without causing any problems in the original or master branch.
You create a repository in CodeCommit using the AWS Management Console or the AWS CLI. When you create a new repository in CodeCommit, it is always empty. You can add files to the repository in three ways:
CodeCommit uses IAM policies to control access to repositories. To help you control access to your repositories, AWS provides three managed policies. Each of the following policies allows access to all CodeCommit repositories using both the AWS Management Console and Git.
AWSCodeCommitFullAccess: This policy provides unrestricted access to CodeCommit. This is the policy you'd generally assign to repository administrators.

AWSCodeCommitPowerUser: This policy provides near full access to CodeCommit repositories but doesn't allow the principal to delete repositories. This is the policy you'd assign to users who need both read and write access to repositories.

AWSCodeCommitReadOnly: This policy grants read‐only access to CodeCommit repositories.

If you want to restrict access to a specific repository, you can copy one of these policies to your own customer‐managed policy and specify the ARN of the repository.
Most users will interact with a CodeCommit repository via the Git command‐line interface, which you can download from git-scm.com. Many integrated development environments (IDEs) such as Eclipse, IntelliJ, Visual Studio, and Xcode provide their own user‐friendly Git‐based tools.
Only IAM principals can interact with a CodeCommit repository. AWS recommends generating a Git username and password for each IAM user from the IAM Management Console. You can generate up to two Git credentials per user. Since CodeCommit doesn't use resource‐based policies, it doesn't allow anonymous access to repositories.
If you don't want to configure IAM users or if you need to grant repository access to a federated user or application, you can grant repository access to a role. You can't assign Git credentials to a role, so instead you must configure Git to use the AWS CLI Credential Helper to obtain temporary credentials that Git can use.
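Setting up the credential helper might look like the following; the repository name MyRepo and the region are placeholders:

```sh
# Tell Git to obtain temporary credentials from the AWS CLI
git config --global credential.helper '!aws codecommit credential-helper $@'
git config --global credential.UseHttpPath true

# Clone over HTTPS; the helper supplies credentials from the assumed role
git clone https://git-codecommit.us-east-1.amazonaws.com/v1/repos/MyRepo
```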
Git‐based connections to CodeCommit must be encrypted in transit using either HTTPS or SSH. The easiest option is HTTPS, since it requires inputting only your Git credentials. However, if you can't use Git credentials—perhaps because of security requirements—you can use SSH. If you go this route, you must generate public and private SSH keys for each IAM user and upload the public key to AWS. The user must also have permissions granted by the IAMUserSSHKeys AWS‐managed policy. Follow the steps in Exercise 14.2 to create your own CodeCommit repository.
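As a sketch, generating and registering an SSH key for an IAM user (the username alice and the key filename are placeholders) might look like this:

```sh
# Generate a key pair for CodeCommit access
ssh-keygen -t rsa -b 2048 -f ~/.ssh/codecommit_rsa

# Upload the public half to the IAM user
aws iam upload-ssh-public-key \
    --user-name alice \
    --ssh-public-key-body file://~/.ssh/codecommit_rsa.pub
```

The command returns an SSH key ID, which the user then supplies as the SSH username when connecting to CodeCommit.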
CodeDeploy is a service that can deploy applications to EC2 or on‐premises instances. You can use CodeDeploy to deploy binary executables, scripts, web assets, images, and anything else you can store in an S3 bucket or GitHub repository. You can also use CodeDeploy to deploy Lambda functions. CodeDeploy offers a number of advantages over manual deployments, including the following:
The CodeDeploy agent is a service that runs on your Linux or Windows instances and performs the hands‐on work of deploying the application onto an instance. You can install the CodeDeploy agent on an EC2 instance at launch using a user data script or you can bake it into an AMI. You can also use AWS Systems Manager to install it automatically. You'll learn about Systems Manager later in this chapter.
To deploy an application using CodeDeploy, you must create a deployment that defines the compute platform, which can be EC2/on‐premises or Lambda, and the location of the application source files. CodeDeploy currently supports only deployments from S3 or GitHub. CodeDeploy doesn't automatically perform deployments. If you want to automate deployments, you can use CodePipeline, which we'll cover later in this chapter.
Prior to creating a deployment, you must create a deployment group to define which instances CodeDeploy will deploy your application to. A deployment group can be based on an EC2 Auto Scaling group, EC2 instance tags, or on‐premises instance tags.
When creating a deployment group for EC2 or on‐premises instances, you must specify a deployment type. CodeDeploy gives you the following two deployment types to give you control over how you deploy your applications.
With an in‐place deployment, you deploy the application to existing instances. In‐place deployments are useful for initial deployments to instances that don't already have the application. These instances can be stand‐alone or, in the case of EC2 instances, part of an existing Auto Scaling group.
On each instance, the application is stopped and upgraded (if applicable) and then restarted. If the instance is behind an elastic load balancer, the instance is deregistered before the application is stopped and then reregistered to the load balancer after the deployment to the instance succeeds. Although an elastic load balancer isn't required to perform an in‐place deployment, having one already set up allows CodeDeploy to prevent traffic from going to instances that are in the middle of a deployment.
A blue/green deployment is used to upgrade an existing application with minimal interruption. With Lambda deployments, CodeDeploy deploys a new version of a Lambda function and automatically shifts traffic to the new version. Lambda deployments always use the blue/green deployment type.
In a blue/green deployment against EC2 instances, the existing instances in the deployment group are left untouched. A new set of instances is created, to which CodeDeploy deploys the application.
Blue/green deployments require an existing Application, Classic, or Network Load balancer. CodeDeploy registers the instances to the load balancer's target group after a successful deployment. At the same time, instances in the original environment are deregistered from the target group.
Note that if you're using an Auto Scaling group, CodeDeploy will create a new Auto Scaling group with the same configuration as the original. CodeDeploy will not modify the minimum, maximum, or desired capacity settings for an Auto Scaling group. You can choose to terminate the original instances or keep them running. You may choose to keep them running if you need to keep them available for testing or forensic analysis.
When creating your deployment group, you must also select a deployment configuration. The deployment configuration defines the number of instances CodeDeploy simultaneously deploys to, as well as how many instances the deployment must succeed on for the entire deployment to be considered successful. The effect of a deployment configuration differs based on the deployment type. There are three preconfigured deployment configurations you can choose from: OneAtATime, HalfAtATime, and AllAtOnce.
OneAtATime: For both in‐place and blue/green deployments, if the deployment group has more than one instance, CodeDeploy must successfully deploy the application to one instance before moving on to the next one. The overall deployment succeeds if the application is deployed successfully to all but the last instance. For example, if the deployment succeeds on the first two instances in a group of three, the entire deployment will succeed. If the deployment fails on any instance but the last one, the entire deployment fails. If the deployment group has only one instance, the overall deployment succeeds only if the deployment to that instance succeeds. For blue/green deployments, CodeDeploy reroutes traffic to each instance as deployment succeeds on the instance. If CodeDeploy is unable to reroute traffic to any instance except the last one, the entire deployment fails.
HalfAtATime: For in‐place and blue/green deployments, CodeDeploy will deploy to up to half of the instances in the deployment group before moving on to the remaining instances. The entire deployment succeeds only if deployment to at least half of the instances succeeds. For blue/green deployments, CodeDeploy must be able to reroute traffic to at least half of the new instances for the entire deployment to succeed.
AllAtOnce: For in‐place and blue/green deployments, CodeDeploy simultaneously deploys the application to as many instances as possible. If the application is deployed to at least one instance, the entire deployment succeeds. For blue/green deployments, the entire deployment succeeds if CodeDeploy reroutes traffic to at least one new instance.
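To make this concrete, a deployment that uses the HalfAtATime configuration might be created from the CLI like this; the application name, deployment group name, and bucket are placeholders:

```sh
aws deploy create-deployment \
    --application-name MyApp \
    --deployment-group-name MyGroup \
    --deployment-config-name CodeDeployDefault.HalfAtATime \
    --s3-location bucket=my-bucket,key=app.zip,bundleType=zip
```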
You can also create custom deployment configurations. This approach is useful if you want to customize how many instances CodeDeploy attempts to deploy to simultaneously. The deployment must complete successfully on these instances before CodeDeploy moves onto the remaining instances in the deployment group. Hence, the value you must specify when creating a custom deployment configuration is called the number of healthy instances. The number of healthy instances can be a percentage of all instances in the group or a number of instances.
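Creating such a custom configuration from the CLI might look like the following; the name ThreeQuartersHealthy is illustrative:

```sh
# Require at least 75 percent of instances to remain healthy during deployment
aws deploy create-deployment-config \
    --deployment-config-name ThreeQuartersHealthy \
    --minimum-healthy-hosts type=FLEET_PERCENT,value=75
```

You could instead pass type=HOST_COUNT to express the healthy-instance requirement as an absolute number rather than a percentage.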
An instance deployment is divided into lifecycle events, which include stopping the application (if applicable), installing prerequisites, installing the application, and validating the application. During some of these lifecycle events, you can have the agent execute a lifecycle event hook, which is a script of your choosing. The following are all the lifecycle events during which you can have the agent automatically run a script:
ApplicationStop: You can use this hook to stop an application gracefully prior to an in‐place deployment. You can also use it to perform cleanup tasks. This event occurs prior to the agent copying any application files from your repository. This event doesn't occur on original instances in a blue/green deployment, nor does it occur the first time you deploy to an instance.

BeforeInstall: This hook occurs after the agent copies the application files to a temporary location on the instance but before it copies them to their final location. If your application files require some manipulation, such as decryption or the insertion of a unique identifier, this would be the hook to use.

AfterInstall: After the agent copies your application files to their final destination, this hook performs further needed tasks, such as setting file permissions.

ApplicationStart: You use this hook to start your application. For example, on a Linux instance running an Apache web server, this may be as simple as running a script that executes the systemctl start httpd command.

ValidateService: With this event, you can check that the application is working as expected. For instance, you may check that it's generating log files or that it's established a connection to a backend database. This is the final hook for in‐place deployments that don't use an elastic load balancer.

BeforeBlockTraffic: With an in‐place deployment using an elastic load balancer, this hook occurs first, before the instance is unregistered.

AfterBlockTraffic: This hook occurs after the instance is unregistered from an elastic load balancer. You can use this hook to wait for user sessions or in‐process transfers to complete.

BeforeAllowTraffic: For deployments using an elastic load balancer, this event occurs after the application is installed and validated. You can use this hook to perform any tasks needed to warm up the application or otherwise prepare it to accept traffic.

AfterAllowTraffic: This is the final event for deployments using an elastic load balancer.

Notice that not all of these lifecycle events can occur on all instances in a blue/green deployment. The BeforeBlockTraffic event, for example, wouldn't occur on a replacement instance since it makes no sense for CodeDeploy to unregister a replacement instance from a load balancer during a deployment.
Each script run during a lifecycle event must complete successfully before CodeDeploy will allow the deployment to advance to the next event. By default, the agent will wait one hour for the script to complete before it considers the instance deployment failed. You can optionally set the timeout to a lower value, as shown in the following section.
The application specification (AppSpec) file defines where the agent should copy the application files onto your instance and what scripts it should run during the deployment process. You must place the file in the root of your application repository and name it appspec.yml. It consists of the following five sections:
Version: Currently the only allowed version of the AppSpec file is 0.0.

OS: Because the CodeDeploy agent works only on Linux and Windows, you must specify one of these as the operating system.

Files: This section specifies one or more source and destination pairs identifying the files or directories to copy from your repository to the instance.

Permissions: This section optionally sets ownership, group membership, file permissions, and Security‐Enhanced Linux (SELinux) context labels for the files after they're copied to the instance. This applies to Amazon Linux, Ubuntu, and Red Hat Enterprise Linux instances only.

Hooks: This section is where you define the scripts the agent should run at each lifecycle event. You must specify the name of the lifecycle event followed by a tuple containing the following:
Note that you can specify multiple locations, timeouts, and script tuples under a single lifecycle event. Keep in mind that the total timeouts for a single lifecycle event can't exceed one hour. The following is a sample appspec.yml file:
version: 0.0
os: linux
files:
  - source: /index.html
    destination: /var/www/html/
hooks:
  BeforeInstall:
    - location: scripts/install_dependencies
      timeout: 300
      runas: root
You can optionally set up triggers to generate an SNS notification for certain deployment and instance events, such as when a deployment succeeds or fails. You can also configure your deployment group to monitor up to 10 CloudWatch alarms. If an alarm exceeds or falls below a threshold you define, the deployment will stop.
You can optionally have CodeDeploy roll back, or revert, to the last successful revision of an application if the deployment fails or if a CloudWatch alarm is triggered during deployment. Despite the name, rollbacks are actually new deployments.
CodePipeline lets you automate the different stages of your software development and release process. These stages are often implemented using continuous integration (CI) and continuous delivery (CD) workflows, or pipelines. CI and CD are different but related concepts.
Continuous integration is a method whereby developers use a version control system such as Git to regularly submit or check in their changes to a common repository. This first stage of the pipeline is called the source stage.
Depending on the application, a build system may compile the code or build it into a binary file, such as an executable, AMI, or container image. This is called the build stage. One goal of CI is to ensure that the code developers are adding to the repository works as expected and meets the requirements of the application. Thus, the build stage may also include unit tests, such as verifying that a function given a certain input returns the correct output. This way, if a change to an application causes something to break, the developer can learn of the error and fix it early. Not all applications require a build stage. For example, a web‐based application using an interpreted language like PHP doesn't need to be compiled.
Continuous delivery incorporates elements of the CI process but also deploys the application to production. A goal of CD is to allow frequent updates to an application while minimizing the risk of failure. To do this, CD pipelines usually include a test stage. As with the build stage, the actions performed in the test stage depend on the application. For example, testing a web application may include deploying it to a test web server and verifying that the web pages display the correct content. On the other hand, testing a Linux executable that you plan to release publicly may involve deploying it to test servers running a variety of Linux distributions and versions. Of course, you always want to run such tests in a separate, nonproduction VPC.
The final stage is deployment, in which the application is deployed to production. Although CD can be fully automated without requiring human intervention, it's common to require manual approval before releasing an application to production. You can also schedule releases to occur regularly or during opportune times such as maintenance windows.
Because continuous integration and continuous delivery pipelines overlap, you'll often see them combined as the term CI/CD pipeline. Keep in mind that even though a CI/CD pipeline includes every stage from source to deployment, that doesn't mean you have to deploy to production every time you make a change. You can add logic to require a manual approval before deployment. Or you can disable transitions from one stage to the next. For instance, you may disable the transition from the test stage to the deployment stage until you're actually ready to deploy.
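Disabling a stage transition from the CLI might look like this; the pipeline and stage names are placeholders:

```sh
# Block artifacts from entering the Deploy stage until re-enabled
aws codepipeline disable-stage-transition \
    --pipeline-name MyPipeline \
    --stage-name Deploy \
    --transition-type Inbound \
    --reason "Hold deployment until release window"
```

The corresponding enable-stage-transition command lifts the hold when you're ready to deploy.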
Every CodePipeline pipeline must include at least two stages and can have up to 10. Within each stage, you must define at least one task or action to occur during the stage. An action can be one of the following types:
CodePipeline integrates with other AWS and third‐party providers to perform the actions. You can have up to 20 actions in the same stage, and they can run sequentially or in parallel. For example, during your testing stage you can have two separate test actions that execute concurrently. Note that different action types can occur in the same stage. For instance, you can perform build and test actions in the same stage.
The source action type specifies the source of your application files. The first stage of a pipeline must include at least one source action and can't include any other types of actions. Valid providers for the source type are CodeCommit, S3, or GitHub.
If you specify CodeCommit or S3, you must also specify the ARN of a repository or bucket. AWS can use CloudWatch events to detect when a change is made to the repository or bucket. Alternatively, you can have CodePipeline periodically poll for changes.
To add a GitHub repository as a source, you'll have to grant CodePipeline permission to access your repositories. Whenever you update the repository, GitHub creates a webhook that notifies CodePipeline of the change.
Not all applications require build actions. An interpreted language such as those used in shell scripts and declarative code such as CloudFormation templates doesn't require compiling. However, even noncompiled languages may benefit from a build stage that analyzes the code for syntax errors and style conventions.
The build action type can use AWS CodeBuild as well as third‐party providers CloudBees, Jenkins, Solano CI, and TeamCity. AWS CodeBuild is a managed build service that lets you compile source code and perform unit tests. CodeBuild offers on‐demand build environments for a variety of programming languages, saving you from having to create and manage your own build servers.
The test action type can also use AWS CodeBuild as a provider. For testing against smartphone platforms, AWS Device Farm offers testing services for Android, iOS, and web applications. Other supported providers are BlazeMeter, Ghost Inspector, HPE StormRunner Load, Nouvola, and Runscope.
The approval action type includes only one action: manual approval. When pipeline execution reaches this action, it awaits manual approval before continuing to the next stage. If there's no manual approval within seven days, the action is denied and pipeline execution halts. You can optionally send an SNS notification, which includes a link to approve or deny the request and may include a URL for the approver to review.
For deployment, CodePipeline offers integrations with CodeDeploy, CloudFormation, Elastic Container Service, Elastic Beanstalk, OpsWorks Stacks, Service Catalog, and XebiaLabs. Recall that CodeDeploy doesn't let you specify a CodeCommit repository as a source for your application files. But you can specify CodeCommit as the provider for the source action and CodeDeploy as the provider for the deploy action.
If you want to run a custom Lambda function as part of your pipeline, you can invoke it by using the invoke action type. For example, you can write a function to create an EBS snapshot, perform application testing, clean up unused resources, and so on.
When you create a pipeline, you must specify an S3 bucket to store the files used during different stages of the pipeline. CodePipeline compresses these files into a zip file called an artifact. Different actions in the pipeline can take an artifact as an input, generate it as an output, or both.
The first stage in your pipeline must include a source action specifying the location of your application files. When your pipeline runs, CodePipeline compresses the files to create a source artifact.
If the second stage of your pipeline is a build stage, CodePipeline then unzips the source artifact and passes the contents along to the build provider. The build provider uses this as an input artifact. The build provider yields its output; let's say it's a binary file. CodePipeline takes that file and compresses it into another zip file, called an output artifact.
This process continues throughout the pipeline. When creating a pipeline, you must specify an IAM service role for CodePipeline to assume. It uses this role to obtain permissions to the S3 bucket. The bucket must exist in the same region as the pipeline. You can use the same bucket for multiple pipelines, but each pipeline can use only one bucket for artifact storage.
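The chain described above works because the output artifact name of one action matches the input artifact name of the next. The following sketch shows that wiring for a source and a build action; the artifact and action names are hypothetical.

```python
# Sketch of artifact wiring between pipeline actions: the build action
# consumes the artifact the source action produces, by name.
source_action = {
    "name": "FetchSource",
    "outputArtifacts": [{"name": "SourceOutput"}],  # zipped source files in S3
}
build_action = {
    "name": "Build",
    "inputArtifacts": [{"name": "SourceOutput"}],   # consumes the source artifact
    "outputArtifacts": [{"name": "BuildOutput"}],   # e.g., a compiled binary, rezipped
}
```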
AWS Systems Manager, formerly known as EC2 Systems Manager and Simple Systems Manager (SSM), lets you automatically or manually perform actions against your AWS resources and on‐premises servers.
From an operational perspective, Systems Manager can handle many of the maintenance tasks that often require manual intervention or writing scripts. For on‐premises and EC2 instances, these tasks include upgrading installed packages, taking an inventory of installed software, and installing a new application. For your AWS resources, such tasks may include creating an AMI golden image from an EBS snapshot, attaching IAM instance profiles, or disabling public read access to S3 buckets. Systems Manager provides the following two capabilities:
Actions let you automatically or manually perform actions against your AWS resources, either individually or in bulk. These actions must be defined in documents, which are divided into three types:
Automation enables you to perform actions against your AWS resources in bulk. For example, you can restart multiple EC2 instances, update CloudFormation stacks, and patch AMIs. Automation provides granular control over how it carries out its individual actions. It can perform the entire automation task in one fell swoop, or it can perform one step at a time, enabling you to control precisely what happens and when. Automation also offers rate control, so you can specify as a number or a percentage how many resources to target at once.
While automation enables you to automate tasks against your AWS resources, run commands let you execute tasks on your managed instances that would otherwise require logging in or using a third‐party tool to execute a custom script.
Systems Manager accomplishes this via an agent installed on your EC2 and on‐premises managed instances. The Systems Manager agent comes preinstalled on all Windows Server and Amazon Linux AMIs.
AWS offers a variety of preconfigured command documents for Linux and Windows instances; for example, the AWS‐InstallApplication document installs software on Windows, and the AWS‐RunShellScript document allows you to execute arbitrary shell scripts against Linux instances. Other documents include tasks such as restarting a Windows service or installing the CodeDeploy agent.
You can target instances by tag or select them individually. As with automation, you optionally may use rate limiting to control how many instances you target at once.
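A Run Command invocation that targets by tag with rate limiting might be built like this; in practice you'd pass these parameters to boto3.client("ssm").send_command(**kwargs). The tag key and values are hypothetical.

```python
# Sketch: Run Command targeting instances by tag, with rate control.
kwargs = {
    "DocumentName": "AWS-RunShellScript",
    "Targets": [{"Key": "tag:Role", "Values": ["webserver"]}],
    "Parameters": {"commands": ["yum -y update"]},  # script lines to execute
    "MaxConcurrency": "10",  # at most ten instances at a time
    "MaxErrors": "2",        # stop once two instances fail
}
```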
Session Manager gives you interactive Bash and PowerShell access to your Linux and Windows instances, respectively, without opening inbound ports on a security group or a network ACL, and without placing your instances in a public subnet. You don't need to set up a bastion host or worry about SSH keys. All Linux versions and Windows Server 2008 R2 through 2016 are supported.
You open a session using the web console or AWS CLI. You must first install the Session Manager plug‐in on your local machine to use the AWS CLI to start a session. The Session Manager SDK has libraries for developers to create custom applications that connect to instances. This is useful if you want to integrate an existing configuration management system with your instances without opening ports in a security group or network ACL.
Connections made via Session Manager are secured using TLS 1.2. Session Manager can keep a log of all logins in CloudTrail and store a record of commands run within a session in an S3 bucket.
Patch Manager helps you automate the patching of your Linux and Windows instances. You can individually choose instances to patch, patch according to tags, or create a patch group. A patch group is a collection of instances with the tag key Patch Group. For example, if you wanted to include some instances in the Webservers patch group, you'd assign each instance a tag with the key Patch Group and the value Webservers. Keep in mind that the tag key is case‐sensitive.
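Placing instances in the Webservers patch group therefore comes down to tagging. A sketch of the request, which would be passed to boto3.client("ec2").create_tags(**kwargs); the instance IDs are hypothetical.

```python
# Sketch: adding two instances to the Webservers patch group by tagging them.
# Note the tag key "Patch Group" is case-sensitive.
kwargs = {
    "Resources": ["i-0abc1234de567890f", "i-0123456789abcdef0"],
    "Tags": [{"Key": "Patch Group", "Value": "Webservers"}],
}
```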
Patch Manager uses patch baselines to define which available patches to install, as well as whether the patches will be installed automatically or require approval. AWS offers default baselines that differ according to operating system but include patches that are classified as security‐related, critical, important, or required. The patch baselines for all operating systems except Ubuntu automatically approve these patches after seven days. This is called an auto‐approval delay.
For more control over which patches get installed, you can create your own custom baselines. Each custom baseline contains one or more approval rules that define the operating system, the classification and severity level of patches to install, and an auto‐approval delay.
You can also specify approved patches in a custom baseline configuration. For Windows baselines, you can specify knowledgebase and security bulletin IDs. For Linux baselines, you can specify CVE IDs or full package names. If a patch is approved, it will be installed during a maintenance window that you specify. Alternatively, you can forgo a maintenance window and patch your instances immediately. Patch Manager executes the AWS‐RunPatchBaseline document to perform patching.
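A custom baseline with one approval rule might be built as follows; the parameters would go to boto3.client("ssm").create_patch_baseline(**kwargs). The baseline name is hypothetical.

```python
# Sketch: a custom patch baseline that auto-approves security patches of
# critical or important severity seven days after release.
kwargs = {
    "Name": "WebserverBaseline",
    "OperatingSystem": "AMAZON_LINUX_2",
    "ApprovalRules": {
        "PatchRules": [
            {
                "PatchFilterGroup": {
                    "PatchFilters": [
                        {"Key": "CLASSIFICATION", "Values": ["Security"]},
                        {"Key": "SEVERITY", "Values": ["Critical", "Important"]},
                    ]
                },
                "ApproveAfterDays": 7,  # the auto-approval delay
            }
        ]
    },
}
```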
Whereas Patch Manager helps ensure your instances are all at the same patch level, State Manager is a configuration management tool that ensures your instances have the software you want them to have and are configured in the way you define. More generally, State Manager can automatically run command and policy documents against your instances, either one time only or on a schedule. For example, you may want to install antivirus software on your instances and then take a software inventory.
To use State Manager, you must create an association that defines the command document to run, any parameters you want to pass to it, the target instances, and the schedule. Once you create an association, State Manager will immediately execute it against the target instances that are online. Thereafter, it will follow the schedule.
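An association bundles exactly those pieces: document, targets, and schedule. Here is a sketch of the request you'd pass to boto3.client("ssm").create_association(**kwargs); the target tag is hypothetical.

```python
# Sketch: a State Manager association that runs the software-inventory
# policy document against tagged instances every 30 minutes.
kwargs = {
    "Name": "AWS-GatherSoftwareInventory",
    "Targets": [{"Key": "tag:Inventory", "Values": ["true"]}],
    "ScheduleExpression": "rate(30 minutes)",
}
```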
There is currently only one policy document you can use with State Manager: AWS‐GatherSoftwareInventory. This document defines what specific metadata to collect from your instances. Despite the name, in addition to collecting software inventory, you can also have it collect network configurations; file information; CPU information; and, for Windows, registry values.
Insights aggregate health, compliance, and operational details about your AWS resources into a single area of AWS Systems Manager. Some insights are categorized according to AWS resource groups, which are collections of resources in an AWS region. You define a resource group based on one or more tag keys and optionally tag values. For example, you can apply the same tag key to all resources related to a particular application—EC2 instances, S3 buckets, EBS volumes, security groups, and so on. Insights are categorized, as we'll cover next.
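A tag-based resource group of this kind could be created with boto3.client("resource-groups").create_group(**kwargs), sketched below; the group name and tag values are hypothetical.

```python
import json

# Sketch: a resource group covering every supported resource type tagged
# Application=inventory-app in the region.
query = {
    "ResourceTypeFilters": ["AWS::AllSupported"],
    "TagFilters": [{"Key": "Application", "Values": ["inventory-app"]}],
}
kwargs = {
    "Name": "inventory-app-resources",
    "ResourceQuery": {
        "Type": "TAG_FILTERS_1_0",
        "Query": json.dumps(query),  # the query is passed as a JSON string
    },
}
```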
Built‐in insights are monitoring views that Systems Manager makes available to you by default. Built‐in insights include the following:
Business and Enterprise support customers get access to all Trusted Advisor checks. All AWS customers get the following security checks for free:
The Inventory Manager collects data from your instances, including operating system and application versions. Inventory Manager can collect data for the following:
You choose which instances to collect data from by creating a regionwide inventory association that executes the AWS‐GatherSoftwareInventory policy document. You can choose all instances in your account or select instances manually or by tag. When you choose all instances in your account, it's called a global inventory association, and new instances you create in the region are automatically added to it. Inventory collection occurs at least every 30 minutes.
When you configure the Systems Manager agent on an on‐premises server, you specify a region for inventory purposes. To aggregate metadata for instances from different regions and accounts, you may configure Resource Data Sync in each region to store all inventory data in a single S3 bucket.
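A per-region Resource Data Sync pointing at the central bucket might be requested as follows, via boto3.client("ssm").create_resource_data_sync(**kwargs); the sync and bucket names are hypothetical.

```python
# Sketch: a Resource Data Sync that aggregates this region's inventory
# data into a central S3 bucket.
kwargs = {
    "SyncName": "org-wide-inventory",
    "S3Destination": {
        "BucketName": "example-inventory-bucket",
        "Region": "us-east-1",          # region of the destination bucket
        "SyncFormat": "JsonSerDe",      # inventory data is written as JSON
    },
}
```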
Compliance insights show how the patch and association status of your instances stacks up against the rules you've configured. Patch compliance shows the number of instances that have the patches in their configured baseline, as well as details of the specific patches installed. Association compliance shows the number of instances that have had an association successfully executed against them.
Deploying your resources to different AWS accounts helps you improve the security and scalability of your AWS environment. Recall that the root user for an AWS account has full access to the account. By having your resources spread out across multiple accounts, you reduce your risk footprint. AWS Landing Zone can automatically set up a multiaccount environment according to AWS best practices. Landing Zone uses the concept of an Account Vending Machine (AVM) that allows users to create new AWS accounts that are preconfigured with a network and a security baseline. Landing Zone begins with four core accounts, each containing different services and performing different roles:
Automation is certainly nothing new. System administrators have been writing scripts for decades to automate common (and often mundane) tasks. But in the cloud, automation can be extended to infrastructure deployments as well. Because of the flexibility of cloud infrastructure, the line between infrastructure—which has traditionally been static—and operations is becoming more blurred. In the cloud, deploying infrastructure and operating it are essentially the same thing.
Despite this, there are some differences to keep in mind. Because the infrastructure belongs to AWS, you have to use the AWS web console, CLI, or SDKs to configure it. You can automate this task by using CloudFormation, or you can write your own scripts. Conversely, when it comes to configuring the operating system and applications running on instances, you can write scripts and use your own tooling, or you can use the services AWS provides, namely, the AWS Developer Tools and Systems Manager. The combination of scripts, tools, and services you use is up to you, but as an AWS architect, you need to understand all options available to you and how they all fit together.