If you want to know how to automate running Terraform, this chapter is for you. Until now, I have assumed you are deploying Terraform from your local machine. This is a reasonable assumption for individuals and even small teams, as long as you are using a remote-state backend. On the other hand, large teams and organizations with many individual contributors may benefit from automating Terraform.
In chapter 6, we discussed how HashiCorp has two products to automate running Terraform: Terraform Cloud and Terraform Enterprise. These products are basically the same; Terraform Cloud is simply the software as a service (SaaS) version of Terraform Enterprise. In this chapter, we develop a continuous integration / continuous delivery (CI/CD) pipeline to automate deploying Terraform workspaces, modeled after the design of Terraform Enterprise. The stages of the CI/CD pipeline are shown in figure 12.1.
Figure 12.1 A four-stage CI/CD pipeline for Terraform deployments. Changes to configuration code stored in a version-controlled source (VCS) repository trigger running terraform plan. If the plan succeeds, manual approval is required before the changes are applied in production.
By the end of this chapter, you will have the skills necessary to automate Terraform deployments using a CI/CD pipeline. I will also give some advice on how to structure more complex Terraform CI/CD pipelines, although the actual implementation is outside the scope of this chapter.
Why develop a custom solution to automate running Terraform when HashiCorp already has Terraform Enterprise? Two good reasons are ownership and cost:
Ownership—By owning the pipeline, you can design the solution that works best for you and troubleshoot when anything goes wrong.
Cost—Terraform Enterprise is not free. You can save a lot of money by forgoing the licensing fees and developing a homegrown solution.
Of course, Terraform Enterprise has several advanced features that are not easy to replicate (if there weren’t any, nobody would have a reason to buy a license). To design our bootleg Terraform Enterprise, we’ll start by going through a list of features that Terraform Enterprise offers; from there, we’ll design a solution that delivers as many of those features as possible.
All the features of Terraform Enterprise fall into one of two categories: collaboration and automation. Collaboration features are designed to help people share and develop Terraform with each other, while automation features make it easier to integrate Terraform with existing toolchains.
Our poor person’s Terraform Enterprise will support all the collaboration and automation features of Terraform Enterprise listed in table 12.1, with the exception of remote operations and Sentinel “policy as code”—open source Terraform does not support remote operations, and Sentinel is a proprietary technology. We talk more about Sentinel in chapter 13 because it’s still worth mentioning and is highly relevant to managing secrets.
Table 12.1 Key features of Terraform Enterprise categorized by theme
Figure 12.2 shows a conceptual diagram of what we are going to build. It’s a concrete implementation of the generalized Terraform CI/CD workflow depicted earlier. The basic idea is that users check in configuration code to a GitHub repository, which then fires a webhook that triggers AWS CodePipeline.
Figure 12.2 A concrete implementation of a general Terraform CI/CD workflow. Users check in configuration code to a source repository, which triggers the execution of AWS CodePipeline. The pipeline has four stages: Source, Plan, Approve, and Apply.
AWS CodePipeline is a GitOps service similar to Google Cloud Platform (GCP) Cloud Build or Azure DevOps. It supports having multiple stages that can run predefined tasks or custom code, as defined by a YAML build specification file. Our CI/CD pipeline will have four such stages: Source, to create a webhook and download source code from a GitHub repository; Plan, to run terraform plan; Approve, to obtain manual approval; and Apply, to run terraform apply. Having a manual approval stage is necessary because it acts as a gate that lets stakeholders (i.e., approvers and other interested parties) read the output of terraform plan before changes are applied. Figure 12.3 illustrates the pipeline.
Figure 12.3 Terraform automation workflow. Source downloads source code from GitHub. Plan runs terraform plan. Approve notifies stakeholders to manually approve or reject changes. Apply runs terraform apply.
Our goal is to design a Terraform project that can automate deployments of other Terraform workspaces. Essentially, we are using Terraform to manage Terraform. In this section, we walk through the detailed design of the project so that we can start coding immediately afterward.
At the root level, we will declare two modules: one for deploying AWS CodePipeline and another for deploying an S3 remote backend. The codepipeline module contains all the resources for provisioning the pipeline: IAM resources, CodeBuild projects, a Simple Notification Service (SNS) topic, a CodeStar connection, and an S3 bucket. The s3backend module deploys a remote state backend for securely storing, encrypting, and locking Terraform state files. We will not detail what goes into the s3backend module, as this was covered in chapter 6. Figure 12.4 depicts the project's overall structure.
Figure 12.4 At the root level are two modules: codepipeline, which defines the resources for creating a CI/CD pipeline in AWS CodePipeline, and s3backend, which provisions an S3 remote backend (see chapter 6 for more details on this module).
Note This project combines a nested module structure with a flat module structure. Usually I recommend sticking to one or the other, but it is not wrong to incorporate both as long as the code is clear and understandable.
The completed directory structure will contain 10 files spread over 4 directories:
$ tree -C .
├── modules
│   └── codepipeline
│       ├── templates
│       │   ├── backend.json
│       │   ├── buildspec_apply.yml
│       │   └── buildspec_plan.yml
│       ├── outputs.tf
│       ├── variables.tf
│       ├── iam.tf
│       └── main.tf
├── policies
│   └── helloworld.json
├── terraform.tfvars
└── main.tf

4 directories, 10 files
First, we need to create a new Terraform workspace and declare the s3backend and codepipeline modules.
variable "vcs_repo" { type = object({ identifier = string, branch = string }) } provider "aws" { region = "us-west-2" } module "s3backend" { ❶ source = "terraform-in-action/s3backend/aws" principal_arns = [module.codepipeline.deployment_role_arn] } module "codepipeline" { ❷ source = "./modules/codepipeline" name = "terraform-in-action" vcs_repo = var.vcs_repo environment = { CONFIRM_DESTROY = 1 } deployment_policy = file("./policies/helloworld.json") ❸ s3_backend_config = module.s3backend.config }
❶ Deploys an S3 backend that will be used by codepipeline
❷ Deploys a CI/CD pipeline for Terraform
❸ We will create this file later.
NOTE Don’t worry about terraform.tfvars; we will come back to it later.
In this section, we define the module that provisions AWS CodePipeline and all of its dependencies.
Create a ./modules/codepipeline directory, and then switch into it. This will be the source directory for the CodePipeline module. In this directory, create a variables.tf file and add the following code.
variable "name" { type = string default = "terraform" description = "A project name to use for resource mapping" } variable "auto_apply" { type = bool default = false description = "Whether to automatically apply changes when a Terraform ➥ plan is successful. Defaults to false." } variable "terraform_version" { type = string default = "latest" description = "The version of Terraform to use for this workspace. ➥ Defaults to the latest available version." } variable "working_directory" { type = string default = "." description = "A relative path that Terraform will execute within. ➥ Defaults to the root of your repository." } variable "vcs_repo" { type = object({ identifier = string, branch = string }) description = "Settings for the workspace's VCS repository." } variable "environment" { type = map(string) default = {} "A map of environment variables to pass into pipeline" } variable "deployment_policy" { type = string default = null description = "An optional IAM deployment policy" } variable "s3_backend_config" { type = object({ bucket = string, region = string, role_arn = string, dynamodb_table = string, }) description = "Settings for configuring the S3 remote backend" }
We need to create two service roles with execution policies, one for CodeBuild and one for CodePipeline. The CodeBuild role will also have the deployment policy (helloworld.json—which we have not yet defined) attached, as this defines supplementary permissions used during the Plan and Apply stages. Since the details of IAM roles and policies are not particularly interesting, I present the code here for you to peruse at your leisure.
resource "aws_iam_role" "codebuild" { name = "${local.namespace}-codebuild" assume_role_policy = <<-EOF { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": "codebuild.amazonaws.com" }, "Action": "sts:AssumeRole" } ] } EOF } resource "aws_iam_role_policy" "codebuild" { role = aws_iam_role.codebuild.name policy = <<-EOF { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Resource": [ "*" ], "Action": [ "logs:CreateLogGroup", "logs:CreateLogStream", "logs:PutLogEvents" ] }, { "Effect":"Allow", "Action": [ "s3:GetObject", "s3:GetObjectVersion", "s3:GetBucketVersioning" ], "Resource": [ "${aws_s3_bucket.codepipeline.arn}", "${aws_s3_bucket.codepipeline.arn}/*" ] } ] } EOF } resource "aws_iam_role_policy" "deploy" { count = var.deployment_policy != null ? 1 : 0 role = aws_iam_role.codebuild.name policy = var.deployment_policy } resource "aws_iam_role" "codepipeline" { name = "${local.namespace}-codepipeline" assume_role_policy = <<-EOF { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": "codepipeline.amazonaws.com" }, "Action": "sts:AssumeRole" } ] } EOF } resource "aws_iam_role_policy" "codepipeline" { role = aws_iam_role.codepipeline.id policy = <<-EOF { "Version": "2012-10-17", "Statement": [ { "Effect":"Allow", "Action": [ "s3:GetObject", "s3:GetObjectVersion", "s3:GetBucketVersioning", "s3:PutObject", "s3:PutObjectAcl" ], "Resource": [ "${aws_s3_bucket.codepipeline.arn}", "${aws_s3_bucket.codepipeline.arn}/*" ] }, { "Effect": "Allow", "Action": [ "kms:Encrypt", "kms:Decrypt", "kms:ReEncrypt*", "kms:GenerateDataKey*", "kms:DescribeKey" ], "Resource": "*" }, { "Effect": "Allow", "Action": [ "sns:Publish" ], "Resource": "${aws_sns_topic.codepipeline.arn}" }, { "Effect": "Allow", "Action": [ "codebuild:BatchGetBuilds", "codebuild:StartBuild", "codebuild:ListConnectedOAuthAccounts", "codebuild:ListRepositories", "codebuild:PersistOAuthToken", "codebuild:ImportSourceCredentials" ], "Resource": "*" }, { "Effect": "Allow", "Action": [ "codestar-connections:UseConnection" ], "Resource": "${aws_codestarconnections_connection.github.arn}" } ] } EOF }
We can now create the outputs file. The only output value is deployment_role_arn, which references the Amazon Resource Name (ARN) of the CodeBuild role. The s3backend module uses this output to authorize CodeBuild to read objects from the S3 bucket storing Terraform state.
output "deployment_role_arn" { value = aws_iam_role.codebuild.arn }
In this section, we build the Plan and Apply stages of the pipeline. Both of these stages use AWS CodeBuild. Before we begin, let's add a random_string resource to main.tf to prevent namespace collisions (as we did in chapter 5).
resource "random_string" "rand" { length = 24 special = false upper = false } locals { namespace = substr(join("-", [var.name, random_string.rand.result]), 0, 24) }
Now, let’s configure an AWS CodeBuild project for the Plan and Apply stages of the pipeline. (Source and Approve do not use AWS CodeBuild.) As the CodeBuild projects for Plan and Apply are nearly identical, we’ll use templates to make the code more concise and readable (see figure 12.5).
Figure 12.5 aws_codebuild_project has a meta-argument count of two and reads from template files to configure the buildspec.
Add the following code to main.tf to provision the two AWS CodeBuild projects.
...
locals {
  projects = ["plan", "apply"]
}

resource "aws_codebuild_project" "project" {
  count        = length(local.projects)
  name         = "${local.namespace}-${local.projects[count.index]}"
  service_role = aws_iam_role.codebuild.arn

  artifacts {
    type = "NO_ARTIFACTS"
  }

  environment {
    compute_type = "BUILD_GENERAL1_SMALL"
    image        = "hashicorp/terraform:${var.terraform_version}"     ❶
    type         = "LINUX_CONTAINER"
  }

  source {
    type      = "NO_SOURCE"
    buildspec = file("${path.module}/templates/buildspec_${local.projects[count.index]}.yml")
  }
}
❶ Points to an image published by HashiCorp
The version of Terraform the pipeline uses is configurable with var.terraform_version. This variable selects the image tag of hashicorp/terraform to use for the container runtime. HashiCorp maintains this image and creates a tagged release for each version of Terraform. The image is basically Alpine Linux with the Terraform binary baked in. We are using it here to obviate the need to download Terraform at runtime (a potentially slow operation).
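For example, if you want the pipeline to use a specific Terraform release instead of latest, you can pass the version when declaring the module. The following is a minimal sketch; the version number is an illustrative choice and must correspond to a published hashicorp/terraform image tag:

module "codepipeline" {
  source            = "./modules/codepipeline"
  name              = "terraform-in-action"
  terraform_version = "1.0.0"   # illustrative pin; must match a hashicorp/terraform image tag
  # ... remaining arguments as in the root module shown earlier
}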
A build specification (buildspec) file contains the collection of build commands and related settings that AWS CodeBuild executes. Create a ./templates folder in which to put the buildspec files for the Plan and Apply stages.
First create a buildspec_plan.yml file that will be used by the Plan stage.
Listing 12.7 buildspec_plan.yml
version: 0.2
phases:
  build:
    commands:
      - cd $WORKING_DIRECTORY
      - echo $BACKEND >> backend.tf.json
      - terraform init
      - |
        if [[ "$CONFIRM_DESTROY" == "0" ]]; then     ❶
          terraform plan
        else
          terraform plan -destroy
        fi
❶ If CONFIRM_DESTROY is 0, run terraform plan; otherwise, run destroy plan.
As you can see, the Plan stage does a bit more than simply run terraform plan. Specifically, here is what it does:
Switches into the working directory of the source code as specified by the WORKING_DIRECTORY environment variable. This defaults to the current working directory.
Writes a backend.tf.json file. This file configures the S3 backend for remote state storage.
Performs terraform plan if CONFIRM_DESTROY is set to 0; otherwise, performs a destroy plan (terraform plan -destroy).
Apply's build specification is similar to Plan's, except it actually runs terraform apply or terraform destroy instead of just performing a dry run. Create a buildspec_apply.yml file in the ./templates folder with the code from listing 12.8.
Note It’s possible to create a general buildspec that works for both Plan and Apply. However, I don’t think it’s worth the trouble.
Listing 12.8 buildspec_apply.yml
version: 0.2
phases:
  build:
    commands:
      - cd $WORKING_DIRECTORY
      - echo $BACKEND >> backend.tf.json
      - terraform init
      - |
        if [[ "$CONFIRM_DESTROY" == "0" ]]; then
          terraform apply -auto-approve
        else
          terraform destroy -auto-approve
        fi
Users can configure environment variables on the container runtime by passing values into the var.environment input variable. Environment variables are great for tuning optional Terraform settings and configuring secrets on Terraform providers. We talk more about how to use environment variables in the next chapter.
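For example, a caller could pass additional variables into the module like this. This is a sketch only; TF_LOG is a standard Terraform environment variable for verbose logging, and CONFIRM_DESTROY is read by our buildspec:

module "codepipeline" {
  source = "./modules/codepipeline"

  environment = {
    TF_LOG          = "INFO"   # illustrative: verbose Terraform logs in CodeBuild
    CONFIRM_DESTROY = "1"      # queue destroy runs instead of applies
  }
  # ... remaining arguments as in the root module shown earlier
}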
Environment variables passed by users are merged with default environment variables and passed into the stage configuration. AWS CodeBuild requires these variables to be passed in JSON format (see http://mng.bz/pJB5), which we can achieve with the help of a for expression. This is shown in figure 12.6.
Figure 12.6 User-supplied environment variables are merged with default environment variables in a new map. Using a for expression, the map is then converted into a JSON list of objects that is used to configure AWS CodePipeline.
Note You can also set environment variables in the buildspec file or in aws_codebuild_project.
The environment configuration is created by merging local.default_environment with var.environment and transforming the result with a for expression, as shown in listing 12.9.
Note User-supplied environment variables override default values.
...
locals {
  backend = templatefile("${path.module}/templates/backend.json", {
    config : var.s3_backend_config,
    name : local.namespace
  })                                                               ❶

  default_environment = {                                          ❷
    TF_IN_AUTOMATION  = "1"
    TF_INPUT          = "0"
    CONFIRM_DESTROY   = "0"
    WORKING_DIRECTORY = var.working_directory
    BACKEND           = local.backend,
  }

  environment = jsonencode([for k, v in merge(local.default_environment, var.environment) : { name : k, value : v, type : "PLAINTEXT" }])   ❸
}
❶ Template for the backend configuration
❷ Declares default environment variables
❸ Merges default environment variables with user-supplied values
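As a minimal sketch of why user-supplied values take precedence, merge() favors entries from its later arguments; you can verify this in terraform console with made-up maps:

locals {
  defaults = { CONFIRM_DESTROY = "0", TF_INPUT = "0" }
  user     = { CONFIRM_DESTROY = "1" }                # hypothetical user-supplied override
  merged   = merge(local.defaults, local.user)
  # merged evaluates to { CONFIRM_DESTROY = "1", TF_INPUT = "0" }
}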
As you can see, there are five default environment variables. The first two are Terraform settings, and the next three are used by the code in our buildspec:
TF_IN_AUTOMATION—If set to a non-empty value, Terraform adjusts its output to avoid suggesting specific commands to run next.
TF_INPUT—If set to 0, disables prompts for variables that don't have values set.
CONFIRM_DESTROY—If set to 1, AWS CodeBuild queues a destroy run instead of a create run.
WORKING_DIRECTORY—A relative path in which to execute Terraform. Defaults to the source code root directory.
BACKEND—A JSON-encoded string that configures the remote backend.
The remote state backend is configured by echoing the value of BACKEND to backend.tf.json prior to initializing Terraform (see figure 12.7). This is done so users do not need to check backend configuration into version control (as it's an unimportant implementation detail).
Figure 12.7 Before Terraform is initialized, a backend.tf.json file is created by echoing the BACKEND environment variable (set from templating a separate backend.json file). This makes it so users do not have to check backend configuration code into version control.
We’ll generate the backend configuration by using a template file. Create a backend.json file with the following code, and put it in the ./templates directory.
{ "terraform": { "backend": { "s3": { "bucket": "${config.bucket}", "key": "aws/${name}", "region": "${config.region}", "encrypt": true, "role_arn": "${config.role_arn}", "dynamodb_table": "${config.dynamodb_table}" } } } }
AWS CodePipeline relies on three miscellaneous resources. First is an S3 bucket that is used to cache artifacts between build stages (it's just part of how CodePipeline works). Second, the Approve stage uses an SNS topic to send notifications when manual approval is required (currently these notifications go nowhere, but SNS could be configured to send notifications to a designated target). Finally, a CodeStar Connections connection manages access to GitHub (so you do not need to use a personal access token).
TIP SNS can trigger the sending of an email to a mailing list (via SES), texts to a cellphone (via SMS), or notifications to a Slack channel (via ChimeBot). Unfortunately, you cannot manage these resources with Terraform, so this activity is left as an exercise for the reader.
Add the following code to main.tf to declare an S3 bucket, an SNS topic, and a CodeStar Connections connection.
resource "aws_s3_bucket" "codepipeline" { bucket = "${local.namespace}-codepipeline" acl = "private" force_destroy = true } resource "aws_sns_topic" "codepipeline" { name = "${local.namespace}-codepipeline" } resource "aws_codestarconnections_connection" "github" { name = "${local.namespace}-github" provider_type = "GitHub" }
With that out of the way, we are ready to declare the pipeline. As a reminder, the pipeline has four stages: Source, Plan, Approve, and Apply.
Add the following code to main.tf.
resource "aws_codepipeline" "codepipeline" { name = "${local.namespace}-pipeline" role_arn = aws_iam_role.codepipeline.arn artifact_store { location = aws_s3_bucket.codepipeline.bucket type = "S3" } stage { name = "Source" action { name = "Source" category = "Source" owner = "AWS" provider = "CodeStarSourceConnection" version = "1" output_artifacts = ["source_output"] configuration = { FullRepositoryId = var.vcs_repo.identifier BranchName = var.vcs_repo.branch ConnectionArn = aws_codestarconnections_connection.github.arn ❶ } } } stage { name = "Plan" action { name = "Plan" category = "Build" owner = "AWS" provider = "CodeBuild" input_artifacts = ["source_output"] version = "1" configuration = { ProjectName = aws_codebuild_project.project[0].name ❷ EnvironmentVariables = local.environment } } } dynamic "stage" { for_each = var.auto_apply ? [] : [1] ❸ content { name = "Approve" action { name = "Approve" category = "Approval" owner = "AWS" provider = "Manual" version = "1" configuration = { CustomData = "Please review output of plan and approve" NotificationArn = aws_sns_topic.codepipeline.arn } } } } stage { name = "Apply" ❹ action { name = "Apply" category = "Build" owner = "AWS" provider = "CodeBuild" input_artifacts = ["source_output"] version = "1" configuration = { ProjectName = aws_codebuild_project.project[1].name EnvironmentVariables = local.environment } } } }
❶ Source fetches code from GitHub using CodeStar.
❷ Plan uses the zero-index CodeBuild project defined earlier.
❸ Dynamic block with a feature flag
❹ Apply is the last stage that runs.
One interesting thing to point out is the use of a dynamic block with a feature flag. var.auto_apply is a feature flag that toggles the creation of the Approve stage. This is done by using a boolean in a for_each expression to create either zero or one instance of the Approve nested block. The logic for toggling dynamic blocks with feature flags is shown in figure 12.8.
Figure 12.8 If var.auto_apply is set to true, then for_each iterates over an empty list and no blocks will be created. If var.auto_apply is set to false, then for_each iterates over a list of length one, meaning exactly one block will be created.
Warning It is not recommended to turn off manual approval for anything mission-critical! There should always be at least one human verifying the results of a plan before applying changes.
For your reference, the complete code for main.tf is shown in the following listing.
Listing 12.13 Complete main.tf
resource "random_string" "rand" { length = 24 special = false upper = false } locals { namespace = substr(join("-", [var.name, random_string.rand.result]), 0, 24) projects = ["plan", "apply"] } resource "aws_codebuild_project" "project" { count = length(local.projects) name = "${local.namespace}-${local.projects[count.index]}" service_role = aws_iam_role.codebuild.arn artifacts { type = "NO_ARTIFACTS" } environment { compute_type = "BUILD_GENERAL1_SMALL" image = "hashicorp/terraform:${var.terraform_version}" type = "LINUX_CONTAINER" } source { type = "NO_SOURCE" buildspec = file("${path.module}/templates/ buildspec_${local.projects[count.index]}.yml") } } locals { backend = templatefile("${path.module}/templates/backend.json", { config : var.s3_backend_config, name : local.namespace }) default_environment = { TF_IN_AUTOMATION = "1" TF_INPUT = "0" CONFIRM_DESTROY = "0" WORKING_DIRECTORY = var.working_directory BACKEND = local.backend, } environment = jsonencode([for k, v in merge(local.default_environment, var.environment) : { name : k, value : v, type : "PLAINTEXT" }]) } resource "aws_s3_bucket" "codepipeline" { bucket = "${local.namespace}-codepipeline" acl = "private" force_destroy = true } resource "aws_sns_topic" "codepipeline" { name = "${local.namespace}-codepipeline" } resource "aws_codestarconnections_connection" "github" { name = "${local.namespace}-github" provider_type = "GitHub" } resource "aws_codepipeline" "codepipeline" { name = "${local.namespace}-pipeline" role_arn = aws_iam_role.codepipeline.arn artifact_store { location = aws_s3_bucket.codepipeline.bucket type = "S3" } stage { name = "Source" action { name = "Source" category = "Source" owner = "AWS" provider = "CodeStarSourceConnection" version = "1" output_artifacts = ["source_output"] configuration = { FullRepositoryId = var.vcs_repo.identifier BranchName = var.vcs_repo.branch ConnectionArn = aws_codestarconnections_connection.github.arn } } } stage { name = "Plan" action { name = "Plan" category = "Build" owner = "AWS" provider = "CodeBuild" input_artifacts = ["source_output"] version = "1" configuration = { ProjectName = aws_codebuild_project.project[0].name EnvironmentVariables = local.environment } } } dynamic "stage" { for_each = var.auto_apply ? [] : [1] content { name = "Approval" action { name = "Approval" category = "Approval" owner = "AWS" provider = "Manual" version = "1" configuration = { CustomData = "Please review output of plan and approve" NotificationArn = aws_sns_topic.codepipeline.arn } } } } stage { name = "Apply" action { name = "Apply" category = "Build" owner = "AWS" provider = "CodeBuild" input_artifacts = ["source_output"] version = "1" configuration = { ProjectName = aws_codebuild_project.project[1].name EnvironmentVariables = local.environment } } } }
In this section, we create the source repository, configure Terraform variables, deploy the pipeline, and connect the pipeline to GitHub.
We need something for our pipeline to deploy. It can be anything, so we might as well do something easy. We’ll use the “Hello World!” example from chapter 1, which deploys a single EC2 instance. Create a new Terraform workspace with a single main.tf file containing the following code.
provider "aws" {
region = "us-west-2" ❶
}
data "aws_ami" "ubuntu" {
most_recent = true
filter {
name = "name"
values = ["ubuntu/images/hvm-ssd/ubuntu-bionic-18.04-amd64-server-*"]
}
owners = ["099720109477"]
}
resource "aws_instance" "helloworld" {
ami = data.aws_ami.ubuntu.id
instance_type = "t2.micro"
}
❶ AWS credentials will be supplied using CodeBuild’s service role.
Now upload this code to a GitHub repository: for example, terraform-in-action/helloworld_deploy (see figure 12.9).
Figure 12.9 A source GitHub repository with the “Hello World!” configuration code
We also need to create a least privileged deployment policy that will be attached to the AWS CodeBuild service role. Terraform will use this policy to deploy the “Hello World!” configuration. Because all “Hello World!” does is deploy an EC2 instance, the permissions are fairly short. Put the following code into a ./policies/helloworld.json file.
{ "Version": "2012-10-17", "Statement": [ { "Action": [ "ec2:DeleteTags", "ec2:CreateTags", "ec2:TerminateInstances", "ec2:RunInstances", "ec2:Describe*" ], "Effect": "Allow", "Resource": "*" } ] }
Note You don't have to be super granular when it comes to least-privileged policies, but you also don't want to be overly permissive. There's no reason to use a deployment role with admin permissions, for example.
The last thing we need to do is set Terraform variables. Switch back into the root directory, and create a terraform.tfvars file with the following code. You will need to replace the identifier with that of your own GitHub repository, and the branch if you are not using master.
Listing 12.16 terraform.tfvars
vcs_repo = { branch = "master" ❶ identifier = "terraform-in-action/helloworld_deploy" ❶ }
❶ Branch and identifier of the GitHub source repository
Once you have set the variables, initialize Terraform and then run terraform apply:
$ terraform apply
...
  # module.s3backend.random_string.rand will be created
  + resource "random_string" "rand" {
      + id          = (known after apply)
      + length      = 24
      + lower       = true
      + min_lower   = 0
      + min_numeric = 0
      + min_special = 0
      + min_upper   = 0
      + number      = true
      + result      = (known after apply)
      + special     = false
      + upper       = false
    }

Plan: 20 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value:
After you confirm the apply, it should take only a minute or two for the pipeline to be deployed:
module.codepipeline.aws_codepipeline.codepipeline: Creating...
module.s3backend.aws_iam_role_policy_attachment.policy_attach: Creation complete after 1s [id=s3backend-5uj2z9wr2py09v-tf-assume-role-20210114124350988700000004]
module.codepipeline.aws_codepipeline.codepipeline: Creation complete after ... [id=terraform-in-action-r0m6-pipeline]

Apply complete! Resources: 20 added, 0 changed, 0 destroyed.
Figure 12.10 shows the deployed pipeline as viewed from the AWS console.
Figure 12.10 The deployed AWS CodePipeline, as viewed from the AWS console. Currently it is in the errored state because a manual step is needed to complete the CodeStar connection.
Note The pipeline is currently in the errored state because a manual step is required to complete the CodeStar connection.
The pipeline run shows that it has failed because the AWS CodeStar connection is stuck in the PENDING state. Although aws_codestarconnections_connection is a managed Terraform resource, it is created in the PENDING state because authentication with the connection provider can only be completed through the AWS console.
Note You can use a data source or import an existing CodeStar connection resource if that makes things easier for you, but the manual authentication step cannot be avoided.
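For instance, here is a hedged sketch of referencing an existing, already-authenticated connection through a data source instead of creating a new one. The ARN is a made-up placeholder, and this assumes your AWS provider version includes the aws_codestarconnections_connection data source:

data "aws_codestarconnections_connection" "github" {
  # Hypothetical ARN of a connection that was authenticated previously
  arn = "arn:aws:codestar-connections:us-west-2:111111111111:connection/example-id"
}

The pipeline's Source stage could then reference data.aws_codestarconnections_connection.github.arn rather than the managed resource.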
To authenticate the AWS CodeStar connection with the connection provider, click the big Update Pending Connection button in the AWS console (see figure 12.11). At a minimum, you will need to grant permissions for the connection to access the source repository with the identifier specified in terraform.tfvars. For more information on how to authenticate AWS CodeStar, refer to the official AWS documentation (http://mng.bz/YAro).
Figure 12.11 Authenticating the AWS CodeStar connection to GitHub through the console
In this section, we deploy and un-deploy the “Hello World!” Terraform configuration using the pipeline. Because the pipeline run failed the first time through (since the CodeStar connection was not complete), we have to retry it. Click the Release Change button to retry the run (figure 12.12).
Figure 12.12 Click the Release Change button to retry the run.
Note Runs are also triggered whenever a commit is made to the source repository.
After the Source and Plan stages succeed, you will be prompted to manually approve changes (figure 12.13). Once approved, the Apply stage will commence, and the EC2 instance will be deployed to AWS (figure 12.14).
Figure 12.13 After the plan succeeds, you need to give manual approval before the apply will run.
Figure 12.14 The EC2 instance deployed as a result of running Terraform through the pipeline
Destroy runs are the same as performing terraform destroy. For this scenario, I have followed Terraform Enterprise's example by using a CONFIRM_DESTROY flag to trigger destroy runs. If CONFIRM_DESTROY is set to 0, a normal terraform apply takes place. If it is set to anything else, a terraform destroy run occurs instead.
Let's queue a destroy run to clean up the EC2 instance. If we deleted the CI/CD pipeline without first queuing a destroy run, we would be stuck with orphaned resources (the EC2 instance would still exist but would no longer have a state file managing it, because the S3 backend would have been deleted). You will have to update the code of the root module to set CONFIRM_DESTROY to 1. Also set auto_apply to true so you don't have to perform a manual approval.
variable "vcs_repo" { type = object({ identifier = string, branch = string }) } provider "aws" { region = "us-west-2" } module "s3backend" { source = "terraform-in-action/s3backend/aws" principal_arns = [module.codepipeline.deployment_role_arn] } module "codepipeline" { source = "./modules/codepipeline" name = "terraform-in-action" vcs_repo = var.vcs_repo auto_apply = true environment = { CONFIRM_DESTROY = 1 } deployment_policy = file("./policies/helloworld.json") s3_backend_config = module.s3backend.config }
Apply the changes with terraform apply:
$ terraform apply -auto-approve
...
module.codepipeline.aws_codepipeline.codepipeline: Modifying... [id=terraform-in-action-r0m6-pipeline]
module.codepipeline.aws_codepipeline.codepipeline: Modifications complete after 1s [id=terraform-in-action-r0m6-pipeline]

Apply complete! Resources: 0 added, 3 changed, 0 destroyed.
After the apply succeeds, you will need to manually trigger a destroy run by clicking Release Change in the UI (although you won't have to do a manual approval this time). Logs of the destroy run are shown in figure 12.15.
Figure 12.15 Logs from AWS CodeBuild after completing a destroy run. The previously provisioned EC2 instance is destroyed.
Once the EC2 instance has been deleted, clean up the pipeline by performing terraform destroy. This concludes the scenario on automating Terraform:
$ terraform destroy -auto-approve
module.s3backend.aws_kms_key.kms_key: Destruction complete after 23s
module.s3backend.random_string.rand: Destroying... [id=s1061cxz3u3ur7271yv8fgg7]
module.s3backend.random_string.rand: Destruction complete after 0s

Destroy complete! Resources: 20 destroyed.
In this chapter, we created and deployed a CI/CD pipeline to automate running Terraform. We used a four-stage CI/CD pipeline to download code from a GitHub repository, run terraform plan, wait for manual approval, and perform terraform apply. In the next chapter, we focus on secrets management, security, and governance.
Before finishing this chapter, I want to cover some questions that I’m frequently asked about automating Terraform but didn’t have a chance to address earlier in the text:
How do I implement a private module registry? Private modules can be sourced from many different places. The easiest (as noted in chapter 6) is a GitHub repository or an S3 bucket, but if you are feeling adventurous, you can also implement your own module registry by implementing the module registry protocol (see http://mng.bz/G6VM). A sketch of Git and S3 module sources follows this list.
How do I install custom and third-party providers? Any provider that's on the provider registry is downloaded as part of terraform init. If a provider is not on the registry, you can install it with a local filesystem mirror or by creating your own private provider registry. Private provider registries must implement the provider registry protocol (http://mng.bz/zGjw). A filesystem mirror sketch also follows this list.
How do I handle other kinds of secret variables and environment variables? We discuss everything you need to know about secrets and secrets management in chapter 13.
What about validation, linting, and testing? You can add as many stages as you like to handle these tasks.
How do I deploy a project that has multiple environments? There are three main strategies for deploying projects that have multiple environments. What you choose comes down to a matter of personal preference:
GitHub branches—Each logical environment is managed as its own GitHub branch: for example dev, staging, and prod. Promoting from one environment to the next is accomplished by merging a pull request from a lower branch into a higher branch. The advantage of this strategy is that it’s quick to implement and works well with any number of environments. The disadvantage is that it requires strict adherence to GitHub workflows. For example, you wouldn’t want someone merging a dev branch directly into prod without first going through staging.
Many-staged pipelines—As discussed earlier, a Terraform CI/CD pipeline generally has four stages (Source, Plan, Approve, Apply), but there is no reason it has to stop at that number. You could add additional stages to the pipeline for each environment. For example, to deploy to three environments, you could have a 10-stage pipeline: Source, Plan (dev), Approve (dev), Apply (dev), Plan (staging), Approve (staging), Apply (staging), Plan (prod), Approve (prod), Apply (prod). I do not like this method because it only works for linear pipelines and does not allow bypassing lower-level environments in the event of a hotfix.
Linking pipelines together—This is the most extensible and flexible option of the three, but it also requires the most wiring. The overall idea is simple enough: a successful apply from one pipeline triggers execution of the next pipeline. Configuration code is promoted from one pipeline to the next so that only the lowest-level environment is connected directly to a version-controlled source repository; the others get their configuration code from earlier environments. The advantage of this strategy is that it allows you to roll back individual environments to previously deployed configuration versions.
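To sketch the first two questions above: private modules can be pulled straight from Git or S3 using Terraform's generic source syntax (the organization, repository, and bucket names below are made up):

module "networking" {
  # Git source pinned to a tag; the repository is hypothetical
  source = "git::https://github.com/example-org/terraform-aws-networking.git?ref=v1.2.0"
}

module "storage" {
  # S3 source pointing at a zipped module package; the bucket and key are hypothetical
  source = "s3::https://s3-us-west-2.amazonaws.com/example-modules/storage.zip"
}

And providers can be served from a local filesystem mirror via the CLI configuration file (for example, ~/.terraformrc); the path and namespace here are assumptions:

provider_installation {
  filesystem_mirror {
    path    = "/usr/share/terraform/providers"   # hypothetical local mirror directory
    include = ["example.com/*/*"]                # hypothetical provider namespace served from the mirror
  }
  direct {
    exclude = ["example.com/*/*"]                # everything else still comes from the public registry
  }
}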
Terraform can be run at scale as part of an automated CI/CD pipeline. This is comparable to how Terraform Enterprise and Terraform Cloud work.
A typical Terraform CI/CD pipeline consists of four stages: Source, Plan, Approve, Apply.
JSON syntax is favored over HCL when generating configuration code. Although it’s generally more verbose and harder to read than HCL, JSON is more machine-friendly and has better library support.
Dynamic blocks can be toggled on or off with a boolean flag. This is helpful when you have a code block that needs to exist or not exist depending on the result of a conditional expression.