8
Automating AWS Infrastructure

There have been many systems put in place over the years to help successfully manage and deploy complicated software applications on complicated hardware stacks. We are living through the next iteration of potential promise in the IT world: the public cloud. Move your applications to the cloud, go "all in," and everything is supposed to be great. It certainly is great for public cloud providers like AWS. Hundreds of thousands of customers are experimenting with and using AWS cloud services, and many companies have been using these services for years.

This book has focused on explaining how the AWS services work and how the services are integrated. Looking at the AWS cloud as a complete entity—an operating system hosted on the Internet—the one characteristic of AWS that stands above all others is the level of integrated automation used to deploy, manage, and recover AWS services. There is not a single AWS service offered that is not heavily automated for deployment and in its overall operation; when you order a virtual private cloud (VPC), it's created and available in seconds. Order an Elastic Compute Cloud (EC2) instance, either through the Management console or by using the AWS command-line interface (CLI) tools, and it's created and available in minutes. Automated processes provide the just-in-time response we demand when we order cloud services.

Automating with AWS

It wasn’t that long ago when you ordered a virtual machine from a cloud provider and waited several days until an email arrived telling you that your service was ready to go.

AWS services are being changed, enhanced, and updated 24/7, with new features and changes appearing almost every day. AWS as a whole is deployed and maintained using a combination of developer agility and automated processes, matching the agile goal of being able to move quickly and easily, with developers, system operations, project managers, network engineers, and security professionals working together from the initial design stages, through the development process, to production and continual updates.

AWS wasn't always this automated and regimented; in the early days, Amazon was a burgeoning online e-commerce bookseller. The increasing popularity of the Amazon store introduced problems with scaling its online resources to match its customers' needs. Over time, rules were defined at Amazon mandating that each underlying service supporting the Amazon store must be accessible through a core set of application programming interfaces (APIs) shared with all developers, and that each service be built and maintained on a common core of compute and storage resources.

These days it may seem that Jeff Bezos, the founder, chairman, CEO, and president of Amazon, and Andy Jassy, the CEO of Amazon Web Services, knew exactly what they were doing, but they say they really had no idea that they were building a scalable cloud-hosted operating system utilizing a combination of custom software and custom hardware components. But that’s what they have built. Today, the AWS cloud allows any company or developer to build, host, and run applications on AWS’s infrastructure platform using the many custom tools developed internally at AWS to help run the Amazon.com store effectively.

Amazon built and continues to build its hosting environment using mandated internal processes, which I define as a mixture of Information Technology Infrastructure Library (ITIL); Scrum (a framework for managing product development as a team); Agile (a software development cycle for teams of developers working with the overall tenets of planning, designing, developing, testing, and evaluating as a team with open communications); and, currently, DevOps (a continuation of the agile framework with full collaboration between the development and operations teams). You may not agree with these definitions as written, and that's okay; there are many different definitions of these terms. The reality is that the work at AWS is being done collaboratively and effectively; the result is hundreds of changes being made to the AWS hardware and software environment every month. In addition, all AWS services are being monitored, scaled, rebuilt, and logged through completely automated processes. Amazon doesn't use manual processes to do anything, and your long-term goal should be that your company doesn't either.

This book is not going to turn into a Bible of how to operate with a DevOps or Agile mind-set because there are already many excellent books on these topics.

However, as your experience with AWS grows, you're going to want to start using automation to help run and manage your day-to-day operations at AWS and to help you solve problems when they occur. There are numerous services available that don't cost anything additional to use other than the time it takes to become competent in using them. This might sound too good to be true, but most of Amazon's automation services are indeed free to use; what you are charged for are the AWS compute, storage, and data transfer resources that each service consumes.

Automation services will always manage your resources more effectively than you can manually. At AWS, the automation of infrastructure is typically called infrastructure as code, and the definition makes sense: even when we create resources using the AWS Management console, in the background AWS uses automated processes running scripts to finish the creation and management of those resources.

Regardless of how you define your own deployment or development process, there are a number of powerful tools in the AWS toolbox that can help you automate your procedures. This chapter is primarily focused on automating AWS infrastructure rather than exploring software development in detail, but we will do some exploration of the available software development tools at AWS. If you're a developer who's been told that your applications are now going to be developed in the AWS cloud, you first need to know the infrastructure components available at AWS and how they work together. Then you must understand the tools that can be used to automate AWS infrastructure and the tools that are also available for coding. We are going to look at some of the software deployment tools AWS offers, such as Elastic Beanstalk, and additional management tools that will help you deploy applications using processes called continuous integration (CI) and continuous deployment (CD). The topics for this chapter include these:

  • Automating deployment options at AWS

  • CloudFormation stack-based architecture deployments

  • Service Catalog to help secure CloudFormation templates

  • Exploring the 12-Factor App guidelines for building and deploying cloud-hosted software applications

  • Elastic Beanstalk for deploying applications and infrastructure together

  • Continuous integration and deployment with CodeDeploy, CodeBuild, and CodePipeline

  • Serverless computing with Lambda functions

  • The API Gateway

Terra Firma wants to do the following:

  • Learn how to automate the creation of its AWS infrastructure

  • Move toward a DevOps mind-set utilizing its programmers to manage AWS system operations through automation processes

  • Deploy a self-serve portal of its AWS infrastructure stacks allowing developers to build test environments quickly and properly the first time

  • Create a hosted application using serverless computing components for conference registrations

  • Explore using an AWS hosted code repository rather than GitHub

From Manual to Automated Infrastructure with CloudFormation

In this book, we’ve looked at deploying a variety of AWS resources, such as EC2 instances, elastic block storage (EBS) and Simple Storage Service (S3) storage, virtual private cloud (VPC) networks, elastic load balancers (ELB), and EC2 auto scaling groups. For those beginning to learn AWS, we’ve focused on using the Management console as the starting point, which is a great place to commence deploying and managing AWS services such as EC2 instances.

Yet, the second and third time you deploy an EC2 instance using the Management console, you will probably not perform the steps the same way as the first time. Even if you do manage to complete a manual task with the same steps, by the tenth installation you will have made changes, or decided to make additional changes, because your needs have changed or a better option became available. The point is, a manual process rarely stays the same over time. Even the simplest manual process can be automated at AWS.

Peering under the hood at any management service running at AWS, you'll find processes driven by JavaScript Object Notation (JSON) scripts. At the GUI level, using the Management console, we fill in the blanks; once we click Create, JSON scripts are executed in the background, carrying out our requests.

CloudFormation is the AWS-hosted orchestration engine that works with JSON templates to deploy AWS resources, as shown in Figure 8-1. AWS uses CloudFormation extensively, and so can you. More than 300,000 AWS customers use CloudFormation to manage deployment of just about everything, including all infrastructure stack deployments. Each CloudFormation template declares the desired infrastructure stack to be created, and the CF engine automatically deploys and links the resources. Additional control variables can be added to each CF template to manage and control the precise order of the installation of resources.

Of course, there are third-party solutions, such as Chef, Puppet, and Ansible, that perform automated deployments of compute infrastructure. CloudFormation is not going to replace these third-party products, but it can be a useful tool for building automated solutions for your AWS infrastructure if you don't use one of these third-party orchestration tools. AWS also has a managed service called OpsWorks, offered in three flavors, that might be useful to your deployments at AWS if your company currently uses one of the following Chef or Puppet products:

  • OpsWorks Stacks—Manage applications and services that are hosted at AWS and on-premise using Chef recipes, Bash scripts, or PowerShell scripts.

  • OpsWorks for Chef Automate—Build a fully managed Chef Automate server that supports the latest versions of Chef server and Chef Automate, any community-based tools or cookbooks, and native Chef tools.

  • OpsWorks for Puppet Enterprise—A fully managed Puppet Enterprise environment that patches, updates, and backs up your existing Puppet environment and allows you to manage and administrate both Linux and Windows server nodes hosted on EC2 instances and on-premise.

    Figure 8-1 The CloudFormation console

JSON's extensive use at AWS follows the same concept as Microsoft Azure and its reliance on PowerShell; at AWS, JSON scripts are used internally for many tasks and processes. Creating security policies with IAM and working with CloudFormation are the two most common examples that you will come across.

If you use Windows EC2 instances at AWS, you can also use PowerShell scripting. Both Microsoft and AWS heavily rely on automation tools. Let’s compare the manual deployment process at AWS against the automated process starting with CloudFormation.

Time spent—Over time, running manual processes at AWS becomes a big waste of time for the human carrying out the process. In the past, maintaining manual processes such as building computer systems and stacks would have been your job security; these days it's just not a prudent way to deploy production resources. For one thing, there are just too many steps in a manual process to keep track of. Every CloudFormation deployment does take time, but much less time than a manual process, because each step in a CloudFormation script is essential; there are no wasted steps, and all steps are carried out in the proper order. Over time, executing an automated process to build EC2 instances will save you hours if not weeks of time, and the CloudFormation process runs in the background, allowing you to do something else. CloudFormation can also perform updates and deletions of existing AWS resources.

Security issues—Humans make mistakes, and manual changes can end up being huge security mistakes due to the lack of oversight. CloudFormation templates can be secured and controlled for usage by specific IAM users and groups. Templates also carry out the same steps every time they are executed, which helps eliminate the fat-finger mistakes we humans make. Service Catalog, another AWS service, integrates with CloudFormation, mandating which users or accounts can use CloudFormation templates to build infrastructure stacks.

Documentation—It's difficult to document manual processes if they constantly change, and who has the time to create documentation anyway? CloudFormation templates are readable, and once you get used to the format, they are actually self-documenting. Again, there are no wasted steps in a CloudFormation script; what is described is exactly what is deployed. If mistakes are found in a CloudFormation script during deployment, all changes that have been carried out are reversed.

Repeatability—If you’re lucky, you can repeat your manual steps the same way every time. However, you’re just wasting valuable time in the long run. With a CloudFormation script, you can deploy and redeploy the listed AWS resources in multiple environments, such as separate development, staging, and production environments. Every time a CloudFormation template is executed, it is repeating the same steps.

Cost savings—CloudFormation automation carries out stack deployments and updates much faster than manual processes ever could. In addition, CloudFormation automation can be locked down using the companion Service Catalog service, discussed later in this chapter, ensuring that only specific IAM users and groups can access and execute specific CloudFormation deployment tasks.

CloudFormation Components

CloudFormation works with templates, stacks, and change sets. A CloudFormation template is an AWS resource blueprint that can create a complete application stack or a single stack component, such as a VPC network complete with multiple subnets, Internet gateways, and NAT services, all automatically deployed and configured. A change set can be created to help you visualize how proposed changes will affect the AWS resources deployed by an existing CloudFormation stack.

CloudFormation Templates

Each CloudFormation template is a text file that follows either JSON or YAML formatting standards; CloudFormation responds to files saved with the .json, .yaml, or .txt extension. Each template can deploy or update multiple AWS resources, or a single resource such as a VPC or an EC2 instance. Figure 8-2 shows a simple template in JSON format; for comparison, Figure 8-3 displays the same template in YAML format. Which format you use is really a matter of personal preference. When creating CloudFormation templates, you might find YAML easier to read, which also makes templates more self-documenting over the long term.

Figure 8-2 CloudFormation template in JSON format
  AWSTemplateFormatVersion: '2010-09-09'
  Description: EC2 instance
  Resources:
    EC2Instance:
      Type: AWS::EC2::Instance
      Properties:
        ImageId: ami-0ff8a91497e77f667
Figure 8-3 CloudFormation template in YAML format

CloudFormation templates can utilize multiple sections, as shown in Figure 8-4; however, the only section that must be present is Resources. Like any template or script, the better the internal documentation, the easier it is for someone who didn't write the template to understand it. It is highly recommended to use the Metadata section for comments to ensure that the template is understood while it is being written, and much later when somebody is trying to remember just what the template is supposed to do.

Figure 8-4 Valid sections in CloudFormation template
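
As a rough sketch (the contents here are illustrative, not copied from Figure 8-4), a template using several of the valid sections might look like the following; the parameter and output names are examples only.

  AWSTemplateFormatVersion: '2010-09-09'
  Description: Example template layout showing commonly used sections
  Metadata:
    Comment: Deploys a single EC2 instance; update the AMI ID for your region
  Parameters:
    InstanceTypeParameter:
      Type: String
      Default: t2.micro
  Resources:                         # the only mandatory section
    EC2Instance:
      Type: AWS::EC2::Instance
      Properties:
        ImageId: ami-0ff8a91497e77f667
        InstanceType: !Ref InstanceTypeParameter
  Outputs:
    InstanceId:
      Value: !Ref EC2Instance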


Stacks

AWS has many sample CloudFormation templates that you can download from the online CloudFormation documentation, as shown in Figure 8-5; the samples are available to deploy in many AWS regions. A CloudFormation stack can be as simple as a single VPC or as complex as a complete three-tier application, complete with the required network infrastructure and associated services. CloudFormation can be useful in deploying infrastructure at AWS, including the following areas:

  • Network—Define a baseline template for developers to ensure that their VPC network setup matches company policy.

  • Front-end infrastructure—Deploy Internet gateways, associated route table entries, or load balancers into existing AWS network infrastructure.

  • Back-end infrastructure—Create complete master and standby database infrastructure, including subnet groups and associated security groups.

  • Two-tier application—Use a two-tier CloudFormation script to rebuild a complete application stack with the required network and infrastructure components when failures or disasters occur, or to launch the application stack in another AWS region.

  • Windows Server Active Directory—Deploy Active Directory on Windows Server 2008 R2 instances in a VPC.

  • Demo applications—Define an application stack for demonstrations, allowing the sales team or end users to quickly create the entire environment.

  • AWS managed services—CloudFormation templates can be used to automate the setup of AWS managed services; for example, AWS Config or Inspector can be enabled and configured using a CloudFormation template.

    Figure 8-5 AWS has many sample solutions complete with CloudFormation templates

Note

There are many standardized CloudFormation templates available from AWS at https://aws.amazon.com/quickstart/ that have been built by AWS solution architects and trusted partners to help you deploy complete solutions on AWS. These templates are called AWS Quick Starts and are designed following the current AWS best practices for security and high availability.

Creating an EC2 Instance with an EIP

If you're like me, you want to see working examples when looking at a programming or scripting utility. I highly recommend looking at the AWS Quick Starts website https://aws.amazon.com/quickstart/ to see how powerful CloudFormation can be. Here's a simple example that creates an EC2 instance using a CloudFormation template, as shown in Figure 8-6. The template parameters are easily readable from top to bottom. Under Properties, the referenced image (AMI) ID, subnet ID, and EC2 instance type must all be valid in the AWS region where the template is executed; otherwise, deployment will fail. If there are issues in the CloudFormation script during deployment, CloudFormation rolls back and removes any infrastructure that the template created. The Ref statement is used in this template to attach the elastic IP address (EIP) to the EC2 instance that was deployed and referenced under the listed resources as EC2Machine.

Figure 8-6 CloudFormation templates for creating an EC2 instance
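
A YAML sketch that approximates the template shown in Figure 8-6 follows; the AMI ID and subnet ID are placeholders that must already exist in the target region.

  Resources:
    EC2Machine:
      Type: AWS::EC2::Instance
      Properties:
        ImageId: ami-0ff8a91497e77f667   # placeholder AMI ID
        InstanceType: t2.micro
        SubnetId: subnet-1111aaaa        # placeholder subnet ID
    ElasticIP:
      Type: AWS::EC2::EIP
      Properties:
        Domain: vpc
        InstanceId: !Ref EC2Machine      # Ref attaches the EIP to the instance above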

Updating with Change Sets

When a deployed CloudFormation resource stack needs to be updated, change sets allow you to preview how your existing AWS resources will be modified, as shown in Figure 8-7. After you select the original CloudFormation template to edit and input the desired set of changes, CloudFormation analyzes the requested changes against the existing CloudFormation stack, producing a change set that you can review and then approve or cancel.

Figure 8-7 Using change sets with CloudFormation

Multiple change sets can be created for various comparison purposes. Once a change set is created, reviewed, and approved, CloudFormation updates your current resource stack.

Working with CloudFormation Stack Sets

Stack sets allow you to use a single CloudFormation template to deploy, update, or delete AWS infrastructure across multiple AWS regions and AWS accounts. When a CloudFormation template deploys infrastructure across multiple accounts and AWS regions, as shown in Figure 8-8, you must ensure that the AWS resources the template references are available in each AWS account and region; for example, EC2 instances, EBS volumes, and key pairs are always created in a specific region. These region-specific resources must be copied to each AWS region where the CloudFormation template is executed. Global resources such as IAM roles and S3 buckets created by the CloudFormation template should also be reviewed to make sure there are no naming conflicts during creation, as global resource names must be unique across all AWS regions.

Figure 8-8 Stack sets with two AWS target accounts

Once a stack set is updated, all instances of the stack that were created are updated as well. For example, if you had 10 AWS accounts across 3 AWS regions, 30 stack instances would be updated when the master stack set is executed. If a stack set is deleted, all corresponding stack instances are also deleted.

A stack set is first created in a single AWS account. Before additional stack instances can be created from the master stack set, trust relationships using IAM roles must be created between the initial AWS administrator account and the desired target accounts.

For testing purposes, one example available in the AWS CloudFormation console is a sample stack set that allows you to enable AWS Config across selected AWS regions or accounts. Just a reminder: AWS Config allows you to control AWS account compliance by defining rules that monitor specific AWS resources to ensure the desired level of compliance has been followed.

AWS Service Catalog

Using a CloudFormation template provides great power for creating, modifying, and updating AWS infrastructure. Creating AWS infrastructure always costs money; therefore, perhaps you would like to control who gets to deploy specific CloudFormation templates. Using Service Catalog allows you to manage the distribution of CloudFormation templates as a product list to an AWS account ID, an AWS Organizations account, or an Organizational Unit contained within an AWS organization. Service Catalog is composed of portfolios, as shown in Figure 8-9, which are a collection of one or more products.

Figure 8-9 A Service Catalog product is part of a defined portfolio

When an approved product is selected, Service Catalog delivers the CloudFormation template to CloudFormation, which then executes the template, creating the product. Third-party products hosted in the AWS Marketplace are also supported by Service Catalog, as software appliances are bundled with a CloudFormation template.

Each IAM user in an AWS account can be granted access to a Service Catalog portfolio of multiple approved products. Because products are built using common CloudFormation templates, any AWS infrastructure components, including EC2 instances and databases hosted privately in a VPC, can be deployed. In addition, VPC endpoints using AWS PrivateLink allow access to the AWS Service Catalog service.

When you’re creating Service Catalog products, constraints using IAM roles can be used to limit the level of administrative access to the resources contained in the stack being deployed by the product itself. Service actions can also be assigned for rebooting, starting, or stopping deployed EC2 instances, as shown in Figure 8-10.

Figure 8-10 Service action constraints controlled by Service Catalog

In addition, rules can be added that control the parameter values that the end user enters; for example, you could mandate that specific subnets must be used for a stack deployment. Rules can also be defined that control the AWS account and region in which the product can be launched.

Each deployed product can also be listed by version number, allowing end users to select the latest version of a product and update currently deployed products that are running an older version. Terra Firma will use CloudFormation and Service Catalog in combination to create a self-serve portal for developers.
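
Portfolios and products can themselves be defined with a CloudFormation template. The following is a rough sketch; the portfolio, product, and template URL names are hypothetical.

  Resources:
    DeveloperPortfolio:
      Type: AWS::ServiceCatalog::Portfolio
      Properties:
        DisplayName: Terra Firma Developer Portfolio
        ProviderName: IT Operations
    TestEnvironmentProduct:
      Type: AWS::ServiceCatalog::CloudFormationProduct
      Properties:
        Name: Standard Test Environment
        Owner: IT Operations
        ProvisioningArtifactParameters:
          - Name: v1.0
            Info:
              LoadTemplateFromURL: https://s3.amazonaws.com/terrafirma-templates/test-env.yaml
    PortfolioAssociation:
      Type: AWS::ServiceCatalog::PortfolioProductAssociation
      Properties:
        PortfolioId: !Ref DeveloperPortfolio
        ProductId: !Ref TestEnvironmentProduct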

The 12-Factor Methodology

For developers getting ready to create their first application in the cloud, there are several generally accepted rules for successfully creating applications that run exclusively in the public cloud. These rules are called the 12-factor app rules.

Note

The original website for the 12-factor app rules is https://12factor.net/.

Several years ago, Heroku cofounder Adam Wiggins released a suggested blueprint for creating native software as a service (SaaS) applications hosted in the public cloud. Heroku is a managed platform as a service (PaaS) provider that Salesforce owns. Incidentally, Heroku is hosted at AWS. The software engineers at Heroku were attempting to provide guidance for applications that were going to be created in the public cloud based on their real-world experience.

These guidelines can be viewed as a set of best practices to consider utilizing. Of course, depending on your deployment methods, you may quibble with some of the rules, and that’s okay. The point is, these are handy rules to consider and discuss before deploying applications in the cloud. Your applications that are hosted in the cloud also need infrastructure; as a result, these rules for proper application deployment in the cloud don’t stand alone; cloud infrastructure is also a necessary part of the 12 rules. Let’s look at these 12 rules from an infrastructure point of view and identify the AWS services that can help with the goal of each defined rule.

Rule 1. Codebase—One Codebase That Is Tracked with Version Control Allows Many Deploys

In development circles, this rule is non-negotiable; it must be followed. Creating an application usually involves three separate environments: development, testing, and production, as shown in Figure 8-11. The same code base should be used in each environment, whether it’s the developer’s laptop, a set of testing server EC2 instances, or the production EC2 instances. If you think about it, operating systems, off-the-shelf software, dynamic-link libraries (DLLs), development environments, and application code are all controlled by versions. And each version of application code needs to be stored separately and securely in a safe location.

Figure 8-11 One codebase regardless of location

All developers likely use a code repository such as GitHub to store their code. As your codebase undergoes revisions, each revision needs to be tracked; after all, a single codebase might be responsible for thousands of deployments. Documenting and controlling the separate versions of the codebase just makes sense. Amazon also has a code repository, called CodeCommit, that might be more useful than GitHub for applications hosted at AWS. We will cover CodeCommit in the next section.

At the infrastructure level at Amazon, we also have dependencies. The AWS components to keep track of include these:

  • AMIs—Images for Web, application, database, and appliance instances. Each AMI should be version controlled.

  • EBS volumes—Boot volumes and data volumes should be tagged by version number.

  • EBS snapshots—Snapshots used to create boot volumes will be part of an AMI.

  • Containers—Each container image is referenced by its version number.

AWS CodeCommit

CodeCommit is a hosted AWS version control service with no storage size limits, as shown in Figure 8-12. It allows AWS customers to privately store source and binary code, which is automatically encrypted at rest and in transit at AWS. CodeCommit allows Terra Firma to store its code versions at AWS rather than at GitHub and not worry about running out of storage space. CodeCommit is also HIPAA eligible and supports Payment Card Industry Data Security Standard (PCI DSS) and ISO 27001 standards.

Figure 8-12 A CodeCommit repository

CodeCommit supports common Git commands and, as mentioned, there are no limits on file size, type, and repository size. CodeCommit is designed for collaborative software development environments. When developers make multiple file changes, CodeCommit manages the changes across multiple files. You may remember that S3 buckets also support file versioning, but S3 versioning is really meant for recovery of older versions of files. It’s not designed for collaborative software development environments; as a result, S3 buckets are better suited for files that are not source code.

Rule 2. Dependencies—Explicitly Declare and Isolate Dependencies

Any application that you have written or will write depends on some specific components, whether it’s a database, a specific operating system version, a required utility, or a software agent that needs to be present. Document these dependencies so you know the components and the version of each component required by the application. Applications that are being deployed should never rely on the assumed existence of required system components; instead, each dependency needs to be declared and managed by a dependency manager to ensure that only the defined dependencies will be installed with the codebase. A dependency manager uses a configuration file to determine what dependency to get, what version of the dependency, and what repository to get it from. If there is a specific version of system tools that the codebase always requires, perhaps the system tools could be added into the operating system that the codebase will be installed on. However, over time software versions for every component will change. An example of a dependency manager could be Composer, which is used with PHP projects, or Maven, which can be used with Java projects. The other benefit of using a dependency manager is that the versions of your dependencies will be the same versions used in the dev, test, and production environments.

The operating system and its feature set can also be version controlled through AMIs, avoiding duplication of operating system versions, and CodeCommit can be used to host the different versions of the application code. CloudFormation also includes a number of helper scripts that allow you to automatically install and configure applications, packages, and operating system services on EC2 Linux and Windows instances; a brief template sketch follows this list.

  • cfn-init—Can install packages, create files, and start operating system services

  • cfn-signal—Can be used with a wait condition to signal CloudFormation when the required resources are installed and available, synchronizing installation timing

  • cfn-get-metadata—Can be used to retrieve metadata defined for a resource in the CloudFormation template
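
As a hedged sketch of how these helper scripts are typically wired together, the following template fragment uses instance metadata to tell cfn-init what to configure, and user data to run cfn-init and then cfn-signal against a creation policy; the AMI ID is a placeholder that must be valid in your region.

  Resources:
    WebServer:
      Type: AWS::EC2::Instance
      Metadata:
        AWS::CloudFormation::Init:
          config:
            packages:
              yum:
                httpd: []              # install the Apache Web server
            services:
              sysvinit:
                httpd:
                  enabled: 'true'
                  ensureRunning: 'true'
      CreationPolicy:
        ResourceSignal:
          Timeout: PT10M               # wait up to 10 minutes for the signal
      Properties:
        ImageId: ami-0ff8a91497e77f667   # placeholder AMI ID
        InstanceType: t2.micro
        UserData:
          Fn::Base64: !Sub |
            #!/bin/bash -xe
            # Apply the AWS::CloudFormation::Init configuration defined above
            /opt/aws/bin/cfn-init -v --stack ${AWS::StackName} --resource WebServer --region ${AWS::Region}
            # Report success or failure back to the CreationPolicy
            /opt/aws/bin/cfn-signal -e $? --stack ${AWS::StackName} --resource WebServer --region ${AWS::Region}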

Rule 3. Config—Store Config in the Environment

Your codebase should be the same in the development, testing, and production environments. However, your database instances or your S3 buckets will have different paths, or URLs, in testing than in development. Obviously, a local database shouldn't be stored on a compute instance operating as a Web or an application server. Other configuration components, such as API keys and database credentials for access and authentication, should never be hard-coded. We can use AWS Secrets Manager for storing database credentials and other secrets, and we can use identity and access management (IAM) roles for accessing data resources at AWS, including S3 buckets, DynamoDB tables, and RDS databases. API Gateway can also be used to host your APIs. You'll learn more about the API Gateway at the end of this chapter.

Development frameworks define environment variables through configuration files. Separating your configuration from the application code allows you to reuse your backing services in different environments, using environment variables to point to the desired resource in the dev, test, or production environment.

Amazon has a few services that can help centrally store application configurations (a brief CloudFormation sketch follows the list):

  • AWS Secrets Manager allows you to store application secrets such as database credentials, API keys, and OAuth tokens.

  • AWS Certificate Manager allows you to create and manage any public secure sockets layer/transport layer security (SSL/TLS) certificates used for any hosted AWS websites or applications. ACM also supports creating a private certificate authority and issuing X.509 certificates for identification of IAM users, EC2 instances, and AWS services.

  • AWS Key Management Service (KMS) can be used to create and manage encryption keys.

  • AWS Systems Manager Parameter Store stores configuration data and secrets for EC2 instances, including passwords, database strings, and license codes.
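
One way to keep secrets out of templates and code is to use CloudFormation dynamic references, which resolve values from Secrets Manager or the Parameter Store at deployment time. A minimal sketch, assuming a secret named TerraFirmaDBSecret already exists:

  Resources:
    AppDatabase:
      Type: AWS::RDS::DBInstance
      Properties:
        Engine: mysql
        DBInstanceClass: db.t3.micro
        AllocatedStorage: '20'
        # Credentials are resolved from Secrets Manager at deployment time
        MasterUsername: '{{resolve:secretsmanager:TerraFirmaDBSecret:SecretString:username}}'
        MasterUserPassword: '{{resolve:secretsmanager:TerraFirmaDBSecret:SecretString:password}}'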

Rule 4. Backing Services—Treat Backing Services as Attached Resources

All infrastructure services at AWS can be defined as backing services; AWS services can be accessed by Hypertext Transfer Protocol Secure (HTTPS) private endpoints. Backing services hosted at AWS are connected over the AWS private network and include databases (relational database service [RDS], DynamoDB), shared storage (S3 buckets, elastic file system [EFS]), Simple Mail Transfer Protocol (SMTP) services, queues (Simple Queue Service [SQS]), caching systems (such as ElastiCache, which manages Memcached or Redis in-memory queues or databases), and monitoring services (CloudWatch, Config, and CloudTrail). Under certain conditions, backing services should be completely swappable; for example, a MySQL database hosted on-premise should be able to be swapped with a hosted copy of the database at AWS without changing application code; the only variable that needs to change is the resource handle in the config file that points to the database location.
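
A minimal sketch of how a backing service can be swapped by configuration alone: an Elastic Beanstalk configuration file (placed under .ebextensions/) exposes the database location as an environment variable, so only this value changes between environments. The variable names and endpoint are hypothetical.

  option_settings:
    aws:elasticbeanstalk:application:environment:
      DATABASE_HOST: terrafirma-db.cluster-abc123.us-east-1.rds.amazonaws.com
      DATABASE_NAME: profiles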

Note

All backing services provided by AWS services have associated metrics that can be monitored using CloudWatch alarms and alerts.

Rule 5. Build, Release, Run—Separate Build and Run Stages

If you are creating applications that will have updates, whether on a defined schedule or at unpredictable times, you will want defined stages where testing can be carried out on the application before it is approved and moved to production. Amazon has several platform as a service (PaaS) offerings that work with multiple stages. Elastic Beanstalk allows you to upload and deploy your application code combined with a config file that builds the AWS environment and deploys your application, as shown in Figure 8-13.

Figure 8-13 Elastic Beanstalk dashboard showing running application and configuration

The Elastic Beanstalk build stage takes your application code from the defined repo storage location, which could be an S3 bucket or CodeCommit, and compiles it into executable code that is combined with the current config file and automatically deployed at AWS. Elastic Beanstalk also supports Blue/Green deployments, where application and infrastructure updates can be seamlessly deployed into production environments using multiple stages.

You can also use the Elastic Beanstalk CLI to push your application code commits to AWS CodeCommit. When you run the CLI command eb create or eb deploy to create or update an Elastic Beanstalk environment, the selected application version is pulled from the defined CodeCommit repository, and the application and required environment are uploaded to Elastic Beanstalk. Other AWS services that work with deployment stages include these:

  • CodePipeline provides a continuous delivery service for automating deployment of your applications using multiple staging environments.

  • CodeDeploy helps automate application deployments to EC2 instances hosted at AWS or on-premise; details are later in this chapter.

  • CodeBuild compiles your source code and runs tests on prebuilt environments, producing executables that are ready to deploy without your having to build the test server environment.

Rule 6. Process—Execute the App as One or More Stateless Processes

Stateless processes provide fault tolerance for the instances running your applications by separating the important data records being worked on by the application and storing them in a centralized storage location such as an SQS message queue. One example of a stateless design is an SQS message queue, as shown in Figure 8-14, deployed as part of a workflow that adds a corporate watermark to all training videos uploaded to an associated S3 bucket. A number of EC2 instances are subscribed to the SQS queue; every time a video is uploaded to the S3 bucket, a message is sent to the SQS queue. The EC2 servers that have subscribed to the SQS queue poll for any updates to the queue; when an update message is received by a subscribed server, the server carries out the work of adding a watermark to the video and then stores the video in another S3 bucket.

Figure 8-14 SQS queues provide stateless memory-resident storage for applications
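
A rough CloudFormation sketch of the S3-to-SQS wiring just described follows; a production template would also restrict the queue policy to the specific bucket and account.

  Resources:
    VideoQueue:
      Type: AWS::SQS::Queue
    VideoQueuePolicy:
      Type: AWS::SQS::QueuePolicy
      Properties:
        Queues:
          - !Ref VideoQueue
        PolicyDocument:
          Statement:
            - Effect: Allow
              Principal:
                Service: s3.amazonaws.com   # allow S3 to deliver event messages
              Action: sqs:SendMessage
              Resource: !GetAtt VideoQueue.Arn
    UploadBucket:
      Type: AWS::S3::Bucket
      DependsOn: VideoQueuePolicy           # the permission must exist first
      Properties:
        NotificationConfiguration:
          QueueConfigurations:
            - Event: s3:ObjectCreated:*     # every uploaded video sends a message
              Queue: !GetAtt VideoQueue.Arn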

Other stateless options available at AWS include these:

  • Amazon Simple Notification Service (SNS) is a hosted messaging service that allows applications to deliver push-based notifications to subscribers such as SQS queues or Lambda functions.

  • Amazon MQ is a hosted, managed message broker service specifically designed for Apache ActiveMQ, an open-source message broker with functionality similar to SQS queues.

  • AWS Simple Email Service is a hosted email-sending service that includes an SMTP interface allowing you to integrate the email service into your application for communicating with an end user.

Each of these AWS services is stateless, carrying out its task as requested and blissfully unaware of its purpose. Its only job is to maintain and, when necessary, make available the redundantly stored data records. Let's see how stateless services can solve an ongoing problem. At Terra Firma, new employees need to create a profile on their first day of work. The profile application runs on a local server and involves entering pertinent information about each new hire. Each screen of information is stored within the application running on the local server until the profile creation has completed. This local application is known to fail without warning, causing problems and wasting time. After the profile application is moved to the AWS cloud, a proper redesign with hosted stateless components provides redundancy and availability by hosting the application on multiple EC2 instances behind a load balancer. Stateless components such as an SQS queue can retain the user information in a redundant data store. If one of the application servers crashes during the profile creation process, another server takes over, and the process completes successfully.

Data that needs to persist for an undefined period of time should always be stored in a redundant stateful storage service such as a DynamoDB database table, an S3 bucket, an SQS queue, or a shared file store such as the EFS. Once the user profile creation is complete, the application can store the relevant records in a DynamoDB database table and can communicate with the end user using the Simple Email Service.

Rule 7. Port Binding—Export Services via Port Binding

Instead of using a Web server installed on the local host and accessible only from a local port, services should be exposed by binding to an external port where the service is located and accessed using an external URL. In this example, all Web requests are carried out by binding to the external port where the Web service is hosted. The service port that the application needs to connect to is defined by the development environment's configuration file, as described in Rule 3: Config—Store Config in the Environment. Backing services can be used multiple times by different applications and by the different dev, test, and production environments.

Rule 8. Concurrency—Scale Out via the Process Model

If your application can't scale horizontally, it's not designed for cloud operation. As we have discussed, many AWS services are designed to automatically scale horizontally (a CloudFormation sketch of automatic horizontal scaling follows this list):

  • EC2 instances—Instances can be scaled with EC2 auto scaling and CloudWatch metric alarms.

  • Load balancers—The ELB load balancer infrastructure horizontally scales to handle demand.

  • S3 storage—The S3 storage array infrastructure horizontally scales in the background to handle reads.

  • DynamoDB—DynamoDB horizontally scales tables within the AWS region. Tables can also be designed as global tables, which can scale across multiple AWS regions.

  • AWS Managed Services—All infrastructure-supporting AWS management services scale horizontally based on demand.
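
As a hedged sketch of horizontal scaling declared in CloudFormation, the following fragment creates an auto scaling group that tracks average CPU utilization; the AMI and subnet IDs are placeholders.

  Resources:
    WebLaunchConfig:
      Type: AWS::AutoScaling::LaunchConfiguration
      Properties:
        ImageId: ami-0ff8a91497e77f667      # placeholder AMI ID
        InstanceType: t3.micro
    WebAutoScalingGroup:
      Type: AWS::AutoScaling::AutoScalingGroup
      Properties:
        MinSize: '2'
        MaxSize: '10'
        LaunchConfigurationName: !Ref WebLaunchConfig
        VPCZoneIdentifier:                  # placeholder subnet IDs
          - subnet-1111aaaa
          - subnet-2222bbbb
    CPUTargetTracking:
      Type: AWS::AutoScaling::ScalingPolicy
      Properties:
        AutoScalingGroupName: !Ref WebAutoScalingGroup
        PolicyType: TargetTrackingScaling
        TargetTrackingConfiguration:
          PredefinedMetricSpecification:
            PredefinedMetricType: ASGAverageCPUUtilization
          TargetValue: 60                   # add or remove instances to hold roughly 60% CPU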

Rule 9. Disposability—Maximize Robustness with Fast Startup and Graceful Shutdown

Except for our stateful storage and the short-term stateless storage of data records in ElastiCache in-memory queues, SQS message queues, or SNS notification queues, everything else in our application stack should be disposable. After all, our application configuration and bindings, source code, and backing services are being hosted by AWS managed services, each with its own levels of redundancy and durability. Data is stored in a persistent backing storage location such as S3 buckets, RDS or DynamoDB databases, and possibly EFS or FSx shared storage. Processes should shut down gracefully or automatically fail over when issues occur:

  • A Web application hosted on an EC2 instance can be removed from service through EC2 Auto Scaling or load balancer health checks.

  • Load balancer failures are redirected using Route 53 alias records to another load balancer, which is assigned the appropriate elastic IP address (EIP).

  • The RDS relational database master instance automatically fails over to the standby database instance. The master database instance is automatically rebuilt.

  • DynamoDB tables are replicated a minimum of six times across three availability zones (AZs) throughout each AWS region.

  • Spot EC2 instances can automatically hibernate when resources are taken back.

  • Compute failures in stateless environments return the current job to the SQS work queue.

  • Tagged resources can be monitored by CloudWatch alerts using Lambda functions to shut down resources.

Rule 10. Dev/Prod Parity—Keep Development, Staging, and Production as Similar as Possible

"Similar in nature" does not refer to the number of instances or the size of the database instances and supporting infrastructure. Your development environment must use the exact same codebase but can differ in the number of instances or database servers being utilized. Other than the infrastructure components, everything else in the codebase must remain the same. CloudFormation can be used to automatically build each environment using a single template file, with conditions that define which infrastructure resources to build for the dev, test, and production environments.
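
A minimal sketch of a single template serving multiple environments through conditions; the environment names and instance types are illustrative.

  Parameters:
    EnvType:
      Type: String
      AllowedValues: [dev, test, prod]
      Default: dev
  Conditions:
    IsProduction: !Equals [!Ref EnvType, prod]
  Resources:
    AppServer:
      Type: AWS::EC2::Instance
      Properties:
        ImageId: ami-0ff8a91497e77f667                          # placeholder AMI ID
        InstanceType: !If [IsProduction, m5.large, t3.micro]    # larger instance only in production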

Rule 11. Logs—Treat Logs as Event Streams

In the dev, staging, and production environments, each running process log stream must be stored externally. At AWS, logging is designed as event streams. CloudWatch logs or S3 buckets can be created to store EC2 instances’ operating system and application logs. CloudTrail logs, which track all API calls to the AWS account, can also be streamed to CloudWatch logs for further analysis. Third-party monitoring solutions support AWS and can interface with S3 bucket storage. All logs and reports generated at AWS by EC2 instances or AWS managed services eventually end up in an S3 bucket.

Rule 12. Admin Processes—Run Admin/Management Tasks as One-Off Processes

Administrative processes should be executed in the same method regardless of the environment in which the admin task is executed. For example, an application might require a manual process to be carried out; the steps to carry out the manual process must remain the same, whether they are executed in the development, testing, or production environment.

Take what you can from the 12-factor steps. The goal is to think about your applications and infrastructure and over time to implement as many of these steps as possible. This might be an incredibly hard task to do for applications that are simply moved to the cloud. Newer applications that are completely developed in the cloud should attempt to follow these steps as closely as possible.

Elastic Beanstalk

A common situation that a developer faces today when moving to the AWS cloud is this: Develop a Web app, or migrate an existing Web app with little time and budget into the AWS cloud while adhering to the company’s compliance standards. The Web application needs to be reliable, able to scale, and easy to update. Perhaps for these situations, Elastic Beanstalk can be of some help.

Elastic Beanstalk has been around since 2011; it was launched as a PaaS offering from AWS to help developers easily deploy Web applications in the AWS cloud, hosted on AWS Linux and Windows EC2 instances. As briefly mentioned earlier in this chapter, Elastic Beanstalk automates both the application deployment, as shown in Figure 8-15, and the required infrastructure components, including single or multiple EC2 instances behind an elastic load balancer hosted in an auto scaling group. The health of your application and its Elastic Beanstalk environment is monitored with CloudWatch metrics. Elastic Beanstalk also integrates with AWS X-Ray, which can help you monitor and debug the internals of your hosted application.

Figure 8-15 Elastic Beanstalk creates the infrastructure and installs the application

Elastic Beanstalk supports a number of development platforms, including Java (Apache HTTP or Tomcat), PHP (Apache HTTP), Node.js (Nginx or Apache HTTP), Python (Apache HTTP), Ruby (Passenger), .NET (IIS), and Go. Elastic Beanstalk allows you to deploy different runtime environments across multiple technology stacks that can all be running at AWS at the same time; the technology stacks can be EC2 instances or Docker containers.

Developers can use Elastic Beanstalk to quickly deploy and test applications on a predefined infrastructure stack. If the application checks out, that’s great. If not, the infrastructure can be quickly discarded at little cost.

Make no mistake, Elastic Beanstalk is not a development environment like Visual Studio. Your application must be written and ready to go before Elastic Beanstalk is useful. After your application has been written, debugged, and approved in your Visual Studio or Eclipse development environment (combined with the associated AWS Toolkit), upload your code. Then create and upload your configuration file that details the infrastructure that needs to be built. Elastic Beanstalk finishes off the complete deployment process for both the infrastructure and the application. The original goal of Elastic Beanstalk was to remove the hardware procurement timeframe for applications, which in some cases could take weeks or months.

Elastic Beanstalk also fits into the mind-set of corporations that are working with a DevOps mentality, where the developer is charged with assuming some of the operational duties. Elastic Beanstalk can help developers automate the tasks and procedures previously carried out by administrators and operations folks when your application was hosted in your on-premise data center. Elastic Beanstalk carries out the following tasks for you automatically:

  • Provisions and configures EC2 instances, containers, and security groups using a CloudFormation template.

  • Configures your RDS database server environment.

  • Stores the application server’s source code, associated logs, and artifacts in an S3 bucket.

  • Enables CloudWatch alarms that monitor the load of your application, triggering auto scaling of your infrastructure out and in as necessary.

  • Routes access from the hosted application to a custom domain name.

Elastic Beanstalk is free of charge to use; you are only charged for the resources used for the deployment and hosting of your application. The AWS resources that you use are provisioned within your AWS account, and you have full control of these resources, unlike other PaaS solutions where the provider controls access to the infrastructure resources. At any time, you can go into the Elastic Beanstalk configuration of your application and make changes, as shown in Figure 8-16. Although Beanstalk functions like a PaaS service, you still have access to tune and change the infrastructure resources, as desired.

Figure 8-16 Modify capacity of Elastic Beanstalk application infrastructure

Applications supported by Elastic Beanstalk include simple HTTPS Web applications, or applications with worker nodes that can be subscribed to SQS queues to carry out more complex, longer-running processes.

After your application has been deployed by Elastic Beanstalk, AWS can automatically update the selected application platform environment by enabling managed platform updates, which can be deployed during a defined maintenance window. Updates are minor platform version updates and security patching but are not major platform updates to the Web services being used. Major updates must be initiated manually.

Database support includes any database that can be installed on an EC2 instance, the RDS database options, or DynamoDB. The database can be provisioned by Elastic Beanstalk during launch or be exposed to the application using environment variables. You can also choose to deploy the instances hosting your applications in multiple AZs and control your application's HTTPS security and authentication by deploying an Application Load Balancer.

Updating Elastic Beanstalk Applications

New versions of your application can be deployed to your Elastic Beanstalk environment in several ways, depending on the complexity of your application. During updates, Elastic Beanstalk archives the old application version in an S3 bucket. The methods available for updating Elastic Beanstalk applications include these (a deployment policy configuration sketch follows the list):

  • All at once—The new application version is deployed to all EC2 instances simultaneously. With this choice, your application will be unavailable while the deployment process is underway. If you want to keep your older version of your application functioning until the new version is deployed, choose the Immutable or Blue/Green update method.

  • Rolling—The application is deployed in batches to a select number of EC2 instances defined in each batch configuration, as shown in Figure 8-17. As each batch of EC2 instances is being updated, the instances are detached from the load balancer. Once the update is finished, and after passing load-balancing health checks, the batch is added back to the load balancer. The first updated batch of EC2 instances must be healthy before the next batch of EC2 instances is updated.

    Figure 8-17 Apply rolling updates to Elastic Beanstalk application
  • Immutable—The application update is only installed on new EC2 instances contained in a second auto scaling group launched in your environment. Only after the new environment passes health checks will the old application version be removed. The new application servers are made available all at once. Because new EC2 instances and auto scaling groups are being deployed, the immutable update process takes longer.

  • Blue/Green—The new version of the application is deployed to a separate environment. After the new environment is healthy, the CNAMEs of the two environments are swapped, redirecting traffic immediately to the new application version. In this scenario, to maintain connectivity with a production database, the database must be installed separately from the Elastic Beanstalk deployment. Externally installed databases remain operational and are not removed when the new Elastic Beanstalk application version is installed and swapped.
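
As a hedged sketch, the deployment method can be declared in an Elastic Beanstalk configuration file placed under .ebextensions/; the batch size shown is illustrative.

  option_settings:
    aws:elasticbeanstalk:command:
      DeploymentPolicy: Rolling      # AllAtOnce, Rolling, RollingWithAdditionalBatch, or Immutable
      BatchSizeType: Percentage
      BatchSize: 25                  # update 25 percent of the instances at a time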

CodePipeline

Perhaps your application changes faster than every couple of months. Perhaps you need continuous delivery. AWS CodePipeline provides a delivery service for environments that want to build, test, and deploy software on a continuous basis.

CodePipeline works with a defined workflow that mandates what testing must happen to updates at each stage of development before the code is approved for production. CodePipeline creates a workflow composed of stages. When you create your first pipeline, CodePipeline stores the software contents controlled by the pipeline's workflow in a CodeCommit repository or an S3 bucket, as shown in Figure 8-18. CloudWatch events monitor and alert when any additions occur to the defined source location, starting the analysis of the software update as it begins to travel through the pipeline and its defined stages.

Figure 8-18 Initial setup of CodePipeline

Each stage in the CodePipeline workflow is linked to a test runtime environment, where your code is tested. Each stage can have multiple defined actions that must be carried out before testing is complete, and actions are carried out in a defined order of operation. The first stage in the pipeline is defined as the source stage; the defined location of the code for the pipeline is shown in Figure 8-19. Pipeline processing begins when a change is made to the code in the source location. Optionally, you can manually start the workflow processing cycle.

Figure 8-19 Adding the source stage to the CodePipeline workflow

After a stage has completed testing of the source code, all revisions and testing notes or changes created by the testing process are delivered to the next stage in the pipeline. All changes that have been carried out by the actions in each stage and associated testing notes are stored in the associated S3 bucket.

Only one source code revision can run through each stage in the CodePipeline workflow at a time. Approval actions are required before the testing process moves to the next stage. If any action at any stage in the workflow fails, the software being tested does not move to the next action in its current stage, or to the next stage in the pipeline, until the failed actions are retried successfully. Once testing is complete and approved, your workflow enters the next deployment stage, as shown in Figure 8-20. Companion services at AWS that support AWS CodePipeline include CloudFormation, CodeDeploy, Elastic Beanstalk, Service Catalog, and ECS.

Figure 8-20 Deployment stage options for CodePipeline
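
A trimmed-down CloudFormation sketch of a two-stage pipeline follows; the role ARN, bucket, repository, and CodeDeploy names are all placeholders that must already exist.

  Resources:
    AppPipeline:
      Type: AWS::CodePipeline::Pipeline
      Properties:
        RoleArn: arn:aws:iam::111122223333:role/CodePipelineServiceRole   # placeholder role
        ArtifactStore:
          Type: S3
          Location: terrafirma-pipeline-artifacts                         # placeholder bucket
        Stages:
          - Name: Source
            Actions:
              - Name: PullFromCodeCommit
                ActionTypeId:
                  Category: Source
                  Owner: AWS
                  Provider: CodeCommit
                  Version: '1'
                Configuration:
                  RepositoryName: terrafirma-webapp                       # placeholder repository
                  BranchName: master
                OutputArtifacts:
                  - Name: SourceOutput
          - Name: Deploy
            Actions:
              - Name: DeployToInstances
                ActionTypeId:
                  Category: Deploy
                  Owner: AWS
                  Provider: CodeDeploy
                  Version: '1'
                Configuration:
                  ApplicationName: terrafirma-webapp                      # placeholder application
                  DeploymentGroupName: production-servers                 # placeholder group
                InputArtifacts:
                  - Name: SourceOutput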

AWS CodeDeploy

AWS CodeDeploy allows you to coordinate your application deployments and updates across test and production environments on a variety of server options, including containers, EC2 instances, on-premise servers running Ubuntu 14.04 LTS, RHEL 7.x, Windows Server 2008 R2 and later, and serverless deployment using Lambda functions. Instead of manually spinning up EC2 instances, loading your custom code, and testing it manually, CodeDeploy can carry out your application deployment and updates.

The types of application files that CodeDeploy can manage include application code, configuration files, executables, and deployment scripts. CodeDeploy can pull content from storage locations such as S3 buckets, integrate with repositories such as CodeCommit and GitHub, and plug into CodePipeline. Updates to applications hosted on compute instances, containers, and serverless environments are performed as Blue/Green updates, like Elastic Beanstalk, but with much more granular control:

  • Instances—Traffic is shifted from an original set of instances to a replacement set of instances.

  • Containers—Traffic is shifted from an ECS task set to a replacement task set.

  • Lambda function—Traffic is shifted from an existing function to a newer version of the Lambda function based on a defined percentage of network traffic flow.

CodeDeploy to EC2 Instances: Big-Picture Steps
  1. Tag EC2 instances for CodeDeploy.

  2. Create a service role for CodeDeploy to access your EC2 instances.

  3. Install the CodeDeploy agent using user data, or bundle the agent into the current AMI.

  4. Create an AppSpec file that defines the source file location for the application version to be tested and the scripts that need to run during each stage of the deployment/testing process. For example, scripts can be defined to run on the EC2 instance before and after installation, after the application has successfully started, and during final validation checks.

  5. Upload the AppSpec file and application content to be deployed to the S3 bucket.

  6. Describe your deployment scheme to CodeDeploy, and create a deployment group describing your EC2 instance configuration, as shown in Figure 8-21 (a minimal boto3 sketch of this step and the deployment kickoff follows the list).

    Figure 8-21 Plan how CodeDeploy performs updates
  7. The CodeDeploy agent installed on the EC2 instance begins polling CodeDeploy for instructions on when to start the deployment/test process.
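
The big-picture steps above can also be driven with boto3 rather than the console. The following sketch registers an application and a deployment group for tagged EC2 instances and then starts a deployment from a revision bundle stored in S3; all names, tags, and ARNs are illustrative placeholders.

import boto3

codedeploy = boto3.client("codedeploy")

codedeploy.create_application(
    applicationName="terra-firma-app", computePlatform="Server"
)

# The deployment group targets the instances tagged for CodeDeploy (step 1)
# and uses the service role created for CodeDeploy (step 2).
codedeploy.create_deployment_group(
    applicationName="terra-firma-app",
    deploymentGroupName="terra-firma-staging",
    serviceRoleArn="arn:aws:iam::111122223333:role/CodeDeployServiceRole",
    ec2TagFilters=[
        {"Key": "CodeDeploy", "Value": "staging", "Type": "KEY_AND_VALUE"}
    ],
    deploymentConfigName="CodeDeployDefault.OneAtATime",
)

# Kick off a deployment; the S3 revision bundle contains the application
# files plus the AppSpec file that drives the lifecycle hooks (steps 4 and 5).
codedeploy.create_deployment(
    applicationName="terra-firma-app",
    deploymentGroupName="terra-firma-staging",
    revision={
        "revisionType": "S3",
        "s3Location": {
            "bucket": "terra-firma-deployments",
            "key": "releases/app-v2.zip",
            "bundleType": "zip",
        },
    },
)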

Serverless Computing with Lambda

Serverless computing is one of the fancy buzzwords being bandied about in the cloud today, but the idea has been around for quite a while. The first concept to understand is that there are still good old EC2 instances in the background running the requested code functions; we haven't yet reached the point where artificial intelligence can dispense with servers. However, the code running on the AWS-managed compute is defined per function, and each function executes in a Firecracker microVM after being triggered by an event. At AWS, serverless computing means Lambda. With Lambda, you are charged for every function that runs, based on the RAM/CPU allocated and the processing time the function consumes. In a serverless environment, there are no EC2 instances for you to manage, and you're not paying for idle processing time; the code hosted by Lambda is focused on the single function whose logic is required at the time. Serverless computing is how you get the best bang for your buck at AWS; after all, you're not paying for EC2 instances, EBS volumes, Auto Scaling, ELB load balancers, or CloudWatch monitoring; Amazon takes care of all those functions for you. A variety of AWS management services are integrated with Lambda and can trigger serverless functions:

  • S3 bucket—A file is uploaded to a bucket, which triggers a Lambda function. The Lambda function, in turn, converts the file into three different resolutions and stores the results in three different S3 buckets (a minimal handler sketch follows this list).

  • DynamoDB table—An entry is made to a DynamoDB table, which triggers a Lambda function that could perform a custom calculation and deposit the result into another field in the table.

  • CloudWatch alerts—Define a condition for an AWS service such as IAM; for example, a CloudWatch alarm can fire off a Lambda function that alerts you whenever the root account is used in an AWS account.

  • AWS Config—Create rules that analyze whether resources created in an AWS account follow a company’s compliance guidelines. The rules are checked using Lambda functions; if the result is an AWS resource that doesn’t meet the defined compliance levels, a Lambda function is executed to remove the out-of-bounds resource.
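
As promised in the first bullet, here is a minimal sketch of an S3-triggered Lambda handler written in Python. It reads the uploaded object's location from the S3 event record and would hand it to a resize routine; the resize_to() helper and the destination bucket names are hypothetical.

import urllib.parse

def lambda_handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        print(f"New upload: s3://{bucket}/{key}")
        # Hypothetical helpers that would create the three resolutions:
        # resize_to(bucket, key, "thumbnails-bucket", width=128)
        # resize_to(bucket, key, "medium-res-bucket", width=640)
        # resize_to(bucket, key, "full-res-bucket", width=1920)
    return {"status": "processed", "records": len(event["Records"])}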

Lambda allows you to upload and run code written in many languages, including Java, Go, PowerShell, Node.js, C#, and Python. Code can be packaged as zip files and uploaded to an S3 bucket; uploads must be less than 50 MB. Lambda is the engine behind many mobile applications. Because the application functions run on Amazon-managed servers, you no longer maintain servers, just your code. How would you call a Lambda function from a mobile app? You'd use the API Gateway.
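
Once a zipped deployment package is sitting in an S3 bucket, the function itself can be created with a single boto3 call. This is a sketch only; the bucket, key, role ARN, and handler names are illustrative assumptions.

import boto3

lambda_client = boto3.client("lambda")

lambda_client.create_function(
    FunctionName="register-attendee",
    Runtime="python3.9",
    Role="arn:aws:iam::111122223333:role/lambda-execution-role",  # placeholder role
    Handler="register.lambda_handler",  # file register.py, function lambda_handler
    Code={"S3Bucket": "terra-firma-code", "S3Key": "register-attendee.zip"},
    Timeout=30,
    MemorySize=256,
)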

API Gateway

API Gateway allows customers to publish the APIs they have crafted to a central, hosted location at AWS. But what's an API? The stock definition is application programming interface, which in plain English means a defined path to a back-end service or function. For a mobile application running on a user's phone, the API or APIs that the application calls can be hosted back at AWS; the API may represent part of the application's source code or all of it, but its location is at AWS. Let's expand the definition of API a bit more:

  • The A in application could be a custom function, the entire app, or somewhere in between.

  • The P is related to the type of programming language or platform that created the API.

  • The I stands for interface, and the API Gateway interfaces with HTTP/REST APIs or WebSocket APIs. Both API types can direct HTTP requests to AWS on the private AWS network; the APIs, however, are only exposed publicly with HTTPS endpoints.

APIs are commonly made available by third-party companies for use on other mobile and Web applications. One of the most popular APIs you have used is the API for Google Maps. When you book a hotel room using a mobile application, the application will probably be using the Google API to call Google Maps with a location request and receive a response back. Most websites and social media sites have several third-party APIs that are part of the overall application from the end user’s point of view.

Note

For an older example, think of an EXE file that is matched up with a library of DLLs. The library contains any number of functions that, when called by the EXE, fire and carry out a job. If the EXE were a word processor, the associated DLL could contain the code for the spell check or review routines.

Think of API Gateway as a doorway into any AWS service that you need to integrate with your mobile or Web application: a front door that, once a request is authenticated, allows entry to the back end where the selected AWS service resides. Remember that API Gateway is another managed AWS service, hosted on a massive server farm running custom software that accepts the hundreds of thousands of requests made to the stored APIs. Both HTTP/REST APIs and WebSocket APIs are accessed from exposed HTTPS endpoints, as shown in Figure 8-22.

Figure 8-22 Choose the API protocol to create

Note

API Gateway can call Lambda functions hosted in your AWS account or HTTP endpoints hosted on Elastic Beanstalk or EC2 instances.

If you’re programming applications that will be hosted at AWS, you should consider hosting your applications’ APIs using API Gateway. API Gateway has the following features:

  • Security—API Gateway supports IAM and AWS Cognito for authorizing API access.

  • Traffic throttling and caching—Responses to incoming requests can be cached, taking load off the back-end service because repeated queries to an API can be answered from the cache. The number of requests an API will accept can also be throttled, and metering plans can define an API's allowed level of traffic.

  • Multiple version support—Multiple API versions can be hosted at the same time by the API Gateway.

  • Metering—Usage plans allow you to throttle and control the desired access levels to your hosted API.

  • Access—When an API is called, API Gateway checks whether the caller is authorized to carry out the requested task. The choices are a Lambda authorizer or a Cognito user pool; API Gateway calls the selected authorizer, as shown in Figure 8-23, passing the incoming authorization token for verification. Remember: a Cognito user pool can be configured to let a mobile application authenticate end users in a variety of ways, including single sign-on (SSO), OAuth, or an email address, before they access the back-end application components.

    Figure 8-23 Selecting authorizer for API Gateway

Note

API Gateway can create client-side SSL certificates to verify that requests made to your back-end resources were sent by API Gateway; the back end verifies the requests using the certificate's associated public key. Private APIs can be created for use only from selected VPCs across private VPC endpoints.
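
To make the moving parts concrete, the following boto3 sketch publishes a small REST API with a single POST method that proxies requests to a Lambda function and then deploys it to a stage. The region, account ID, function name, and resource path are illustrative assumptions, and the Lambda function would also need a resource-based permission allowing apigateway.amazonaws.com to invoke it.

import boto3

apigateway = boto3.client("apigateway", region_name="us-east-1")

api = apigateway.create_rest_api(name="terra-firma-registration")
api_id = api["id"]

# A new REST API starts with only the root ("/") resource.
root_id = apigateway.get_resources(restApiId=api_id)["items"][0]["id"]

# Add a /register resource with a POST method.
resource = apigateway.create_resource(
    restApiId=api_id, parentId=root_id, pathPart="register"
)
apigateway.put_method(
    restApiId=api_id,
    resourceId=resource["id"],
    httpMethod="POST",
    authorizationType="NONE",  # swap in COGNITO_USER_POOLS plus an authorizerId for production
)

# Proxy the method to a Lambda function (Lambda proxy integration).
lambda_arn = "arn:aws:lambda:us-east-1:111122223333:function:register-attendee"
apigateway.put_integration(
    restApiId=api_id,
    resourceId=resource["id"],
    httpMethod="POST",
    type="AWS_PROXY",
    integrationHttpMethod="POST",
    uri=(
        "arn:aws:apigateway:us-east-1:lambda:path/2015-03-31/functions/"
        + lambda_arn
        + "/invocations"
    ),
)

# Publish the API to an HTTPS endpoint under the "prod" stage.
apigateway.create_deployment(restApiId=api_id, stageName="prod")
print("https://" + api_id + ".execute-api.us-east-1.amazonaws.com/prod/register")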

Building a Serverless Web App

Terra Firma wants to use Lambda to create an event website to sell tickets to its next corporate function. The Web-based interface will be simple, letting users register for the corporate function after they have first signed up as users of the website.

Create a Static Website

The first step is to create a website that can be hosted in an S3 bucket, as shown in Figure 8-24. Because the site is hosted in an S3 bucket, it must be a simple static website with no dynamic server-side assets. After the S3 bucket is configured for website hosting, all the HTML, cascading style sheets (CSS), images, and other Web files are uploaded and stored. A URL using a registered domain owned by Terra Firma is then emailed to each corporate user who wants to sign up for the conference. To host the website, the S3 bucket must also allow public read access, and the DNS records must be updated in Route 53 by adding alias records that point to the website endpoint.

Figure 8-24 Using an S3 bucket for static website hosting
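
Configuring the bucket for website hosting and granting public read access can be scripted as well. This boto3 sketch assumes the bucket already exists; its name and the index and error documents are placeholders, and depending on the account, S3 Block Public Access settings may also need to be relaxed for the bucket.

import json
import boto3

s3 = boto3.client("s3")
bucket = "terra-firma-conference-site"  # placeholder bucket name

# Enable static website hosting with the index and error documents.
s3.put_bucket_website(
    Bucket=bucket,
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)

# Grant public read access to the site's objects.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::" + bucket + "/*",
        }
    ],
}
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))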

User Authentication

A Cognito user pool needs to be created for the users who will be registering for the conference (see Figure 8-25). The corporate users will use their corporate email addresses to register themselves as new users on the website. When they register on the conference website, Cognito is configured to send them a standard confirmation email that includes a verification code they use to confirm their identity.

Figure 8-25 Create authentication pool using Cognito

After users sign in to the website, a JavaScript function communicates with Amazon Cognito, authenticating them using the Secure Remote Password (SRP) protocol and returning a Web token that identifies them as they request access to the conference.
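
The same user-pool flow can be exercised from boto3 rather than the browser, which is handy for testing. The sketch below signs up a user with a corporate email address, confirms the account with the emailed verification code, and then authenticates to retrieve the tokens. The client ID, credentials, and code are placeholders; the website's JavaScript uses the SRP flow, while this test uses USER_PASSWORD_AUTH, which must be enabled on the app client.

import boto3

cognito = boto3.client("cognito-idp")
client_id = "example-app-client-id"  # placeholder app client ID

# Step 1: the user signs up with a corporate email address.
cognito.sign_up(
    ClientId=client_id,
    Username="alice@terrafirma.example",
    Password="CorrectHorse!23",
    UserAttributes=[{"Name": "email", "Value": "alice@terrafirma.example"}],
)

# Step 2: the user enters the verification code that Cognito emailed to them.
cognito.confirm_sign_up(
    ClientId=client_id,
    Username="alice@terrafirma.example",
    ConfirmationCode="123456",
)

# Step 3: authenticate and capture the ID token (a JWT) that the browser
# would attach to its API Gateway requests.
tokens = cognito.initiate_auth(
    ClientId=client_id,
    AuthFlow="USER_PASSWORD_AUTH",
    AuthParameters={
        "USERNAME": "alice@terrafirma.example",
        "PASSWORD": "CorrectHorse!23",
    },
)
id_token = tokens["AuthenticationResult"]["IdToken"]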

Serverless Back-End Components

The Lambda function that registers users for the conference and issues them an attendance code runs at AWS. When a user registers for the conference, the request is first stored in a DynamoDB table, and the registration code is then returned to the end user (see Figure 8-26).

Figure 8-26 Creating DynamoDB table
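
A sketch of this registration back end might look like the handler below: it stores the request in a DynamoDB table and returns a registration code to the browser. The table name, key, and attribute names are illustrative assumptions.

import json
import uuid
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("ConferenceRegistrations")  # assumed table name

def lambda_handler(event, context):
    body = json.loads(event.get("body") or "{}")
    registration_code = uuid.uuid4().hex[:8].upper()

    # Store the registration first, then return the code to the end user.
    table.put_item(
        Item={
            "RegistrationId": registration_code,  # assumed partition key
            "Email": body.get("email", "unknown"),
            "Name": body.get("name", "unknown"),
        }
    )
    return {
        "statusCode": 200,
        "body": json.dumps({"registrationCode": registration_code}),
    }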

Set Up the API Gateway

The registration request invokes the Lambda function, which is securely called from the user's browser as a RESTful API call to the Amazon API Gateway (see Figure 8-27). This background process allows signed-in users to register for the conference. Remember: these users have already been approved through registration and verification as members of the Cognito user pool.

Unbeknownst to the users, JavaScript running in the background registers them for the conference using the publicly exposed API hosted by the API Gateway, carrying out a stateless RESTful request. Representational State Transfer (REST) is the key communication style across the AWS cloud, and RESTful APIs served by the API Gateway are the most common AWS API format. REST uses HTTP verbs to describe the type of each request:

  • GET (request a record)

  • PUT (update a record)

  • POST (create a record)

  • DELETE (delete a record)

When users type a URL into their browser, they are carrying out a GET request. Submitting a request for the conference is a POST request.

RESTful communication is defined as stateless; therefore, all the information needed to process a RESTful request is self-contained within the actual request; the server doesn’t need additional information to be able to process the request.
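
Expressed in Python with the requests library for clarity, the browser's background call amounts to a single stateless POST to the API Gateway endpoint, with the Cognito ID token carried in the Authorization header. The URL and token shown here are illustrative placeholders.

import requests

api_url = "https://abc123.execute-api.us-east-1.amazonaws.com/prod/register"
id_token = "eyJraWQiOi..."  # truncated placeholder for the JWT returned by Cognito

response = requests.post(
    api_url,
    headers={"Authorization": id_token, "Content-Type": "application/json"},
    json={"email": "alice@terrafirma.example", "name": "Alice"},
    timeout=10,
)
print(response.status_code, response.json())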

The beauty of this design is that you don't need any of your own servers at the back end; you just need Lambda to host the functions, which are called based on the application's logic and the request the user carries out.

Figure 8-27 Registering the RESTful API with the API Gateway

In Conclusion

We’ve looked at several possibilities with automating infrastructure. Automation is the best long-term goal for your application/infrastructure deployments and redeployments at AWS. Automation could be as simple as using a User Data script to build your instances. Certainly, if you are looking at hosting applications that you have written, spend some time with Elastic Beanstalk.

The setup and configuration of Elastic Beanstalk includes most of the infrastructure components that we’ve looked at in one service: VPCs, instances, AMIs, load balancers, and auto scaling. We also spent some time with CloudFormation, which is a powerful deployment, update, and deletion engine with a somewhat steep learning curve. Despite that, CloudFormation is worthwhile because of what it will save you in time and process. Hopefully as a developer or administrator, you now have a good idea of what AWS has to offer as far as pieces to support, test, and update your code.

Be sure to look at the companion videos bundled with this book to explore these powerful tools in more detail.

This is the last top 10 list of things to consider for your company when moving toward automation at AWS. If you’ve spent time with the 80 discussion points presented in this book, you’ve probably come to some positive and useful conclusions that will be great first steps when moving forward with your AWS deployment.

Top 10 Big-Picture Discussion Points: Moving Toward Stateless Design

  1. Can CloudFormation templates help you redeploy infrastructure stacks?

  2. How useful are CloudFormation templates before deploying VPC network infrastructure?

  3. Does using Service Catalog help you lock down infrastructure deployments?

  4. Does moving your code hosted at Git to CodeCommit save you money?

  5. Who wants to lead the discussion on the 12-factor rules?

  6. Can some of your websites be changed to hosted S3 static websites?

  7. Do Elastic Beanstalk Blue/Green deployments help you move toward a DevOps mind-set?

  8. Which AWS Quick Start can help you in testing AWS services?

  9. Which Lambda functions can you create to assist you with automated responses using CloudWatch alerts or S3 bucket uploads?

  10. Does the API Gateway help you create mobile apps more effectively?
