It's a common and recommended practice to maintain several infrastructure environments with some level of parity. These environments vary greatly between companies and projects, in both naming and focus, but staging and production are commonly found.
We'll see how infrastructure-as-code, and Terraform in particular, helps build consistent, replicated environments. This time we'll use a CoreOS AMI for a change.
To step through this recipe, you will need the following:
Using infrastructure-as-code, the easiest approach is simply to duplicate the code for as many environments as needed. However, there's a much more powerful way to leverage the full capabilities of Terraform.
Let's define the requirements of simple target environments that we'll translate into dynamic Terraform code:
| Parameter | Staging | Production |
|---|---|---|
| Number of instances | 1 | 3 |
| Type of instance | t2.micro | t2.medium |
| Operating system | CoreOS Stable | CoreOS Stable |
| AMI in eu-west-1 | ami-85097ff6 | ami-85097ff6 |
| AMI in us-east-1 | ami-0aef8e1d | ami-0aef8e1d |
| S3 bucket naming | iacbook-staging | iacbook-production |
| Default environment | Yes | No |
Let's start by declaring those variables in the variables.tf file, exactly as we saw in Chapter 2, Provisioning IaaS with Terraform, except we'll key the cluster size and instance types on environments such as staging and production instead of on AWS regions.
Define the CoreOS AMI variable:
```hcl
variable "aws_coreos_ami" {
  type = "map"
  default = {
    "eu-west-1" = "ami-85097ff6"
    "us-east-1" = "ami-0aef8e1d"
  }
}
```
Define a cluster size variable with different values according to the environment:
```hcl
variable "cluster_size" {
  type        = "map"
  description = "Number of nodes in the cluster"
  default = {
    staging    = "1"
    production = "3"
  }
}
```
Finally, define the different AWS instance types:
```hcl
variable "aws_instance_type" {
  type        = "map"
  description = "Instance type"
  default = {
    staging    = "t2.micro"
    production = "t2.medium"
  }
}
```
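The resource code that follows also references an environment variable and an AWS region variable that aren't shown in this recipe. A minimal sketch of how they might be declared (the eu-west-1 default region is an assumption; staging is the default environment, as stated later in this recipe) could look like this:

```hcl
# Assumed declarations, to make the recipe self-contained.
# The default environment is staging; override it with -var on the CLI.
variable "environment" {
  default     = "staging"
  description = "Name of the target environment (staging or production)"
}

variable "aws_region" {
  default     = "eu-west-1"
  description = "AWS region to deploy into"
}

provider "aws" {
  region = "${var.aws_region}"
}
```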
Now let's use those variables in highly dynamic infrastructure code (instances.tf), using the aws_instance resource: the correct cluster size and instance type are chosen automatically according to the environment, and the right AMI according to the execution region:
```hcl
resource "aws_instance" "coreos" {
  count                       = "${lookup(var.cluster_size, var.environment)}"
  ami                         = "${lookup(var.aws_coreos_ami, var.aws_region)}"
  instance_type               = "${lookup(var.aws_instance_type, var.environment)}"
  key_name                    = "${aws_key_pair.admin_key.key_name}"
  associate_public_ip_address = true

  tags {
    Name        = "coreos_${var.environment}_${count.index+1}"
    Environment = "${var.environment}"
  }
}
```
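The aws_instance resource above references an aws_key_pair.admin_key resource that isn't declared in this recipe. A minimal sketch (the resource name comes from the reference above, but the public key file path is an assumption) might be:

```hcl
# Hypothetical key pair declaration; adjust the public key path to your setup.
resource "aws_key_pair" "admin_key" {
  key_name   = "admin_key"
  public_key = "${file("keys/admin_key.pub")}"
}
```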
Our specification table indicates we need two different S3 buckets as well. Let's reuse, in s3.tf, something close to what we did in Chapter 2, Provisioning IaaS with Terraform:
```hcl
resource "aws_s3_bucket" "bucket" {
  bucket = "iacbook-${var.environment}"

  tags {
    Name        = "IAC Book ${var.environment} Bucket"
    Environment = "${var.environment}"
  }
}
```
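To confirm that each environment got its own bucket, a small outputs.tf (an assumed addition, not part of the original recipe) could expose the generated name:

```hcl
# Shows the environment-specific bucket name after terraform apply.
output "bucket_name" {
  value = "${aws_s3_bucket.bucket.id}"
}
```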
It's the same construction here: each environment gets its own bucket, dynamically named after it.
It's strongly recommended not to mix Terraform state files between environments. One elegant solution to keep them well separated is to use the following option when executing the terraform command:
$ terraform apply -state=staging.tfstate
Your default environment (set to staging) will now reside in the staging.tfstate
file.
Now that we have our staging infrastructure running smoothly, it's time to launch the real thing: the production environment. As we're already using a dedicated Terraform state file, let's do the same for production, and set the environment variable directly on the command line:
$ terraform plan -state=production.tfstate -var environment=production
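Once the plan looks right, applying it with the same flags (a sketch following the same pattern as the staging command above) creates the production environment while keeping its state in its own file:

```
$ terraform apply -state=production.tfstate -var environment=production
```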
You now have two clearly separated environments using the very same code, but living independently from each other. Concise and elegant!