Handling different environments with Terraform

It's a common and recommended practice to maintain several infrastructure environments with some level of parity. Their names and focus vary greatly between companies and projects, but the following environments are commonly found:

  • Development: where developers can implement and quickly test new features
  • Staging: where the new features are tested in a more consistent environment than the development one, sometimes very similar to a preproduction environment
  • Preproduction: an environment as similar to production as possible
  • Production: the full-featured live production environment

We'll see how infrastructure-as-code, and Terraform in particular, helps build strong, replicated environments. This time we'll use a CoreOS AMI for a change.

Getting ready

To step through this recipe, you will need the following:

  • A working Terraform installation
  • An AWS account with an SSH key configured in Terraform (refer to the recipes in Chapter 2, Provisioning IaaS with Terraform)
  • An Internet connection

How to do it…

With infrastructure-as-code, the simplest approach is to duplicate the code for as many environments as needed. However, there's a much more powerful way that leverages the full capabilities of Terraform.

Let's define the requirements of simple target environments that we'll translate into dynamic Terraform code:

Parameter             Staging            Production
Number of instances   1                  3
Type of instance      t2.micro           t2.medium
Operating system      CoreOS Stable      CoreOS Stable
AMI in eu-west-1      ami-85097ff6       ami-85097ff6
AMI in us-east-1      ami-0aef8e1d       ami-0aef8e1d
S3 bucket naming      iacbook-staging    iacbook-production
Default environment   Yes                No

Let's start by declaring those variables in the variables.tf file, exactly as we saw in Chapter 2, Provisioning IaaS with Terraform, except that this time the maps for cluster size and instance type are keyed by environment (staging and production) instead of AWS region.

Define the CoreOS AMI variable:

variable "aws_coreos_ami" {
  type = "map"

  default = {
    eu-west-1 = "ami-85097ff6"
    us-east-1 = "ami-0aef8e1d"
  }
}

Define a cluster size variable with different values according to the environment:

variable "cluster_size" {
  type = "map"

  default = {
    staging    = "1"
    production = "3"
  }

  description = "Number of nodes in the cluster"
}

Finally, define the different AWS instance types:

variable "aws_instance_type" {
  type = "map"

  default = {
    staging    = "t2.micro"
    production = "t2.medium"
  }

  description = "Instance type"
}
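The instances.tf code that follows also references var.environment and var.aws_region. If those variables aren't already declared (for example, carried over from the Chapter 2 recipes), here's a minimal sketch of what they might look like, assuming staging as the default environment (as per our table) and eu-west-1 as an illustrative default region:

variable "environment" {
  default     = "staging"
  description = "Target environment (staging or production)"
}

variable "aws_region" {
  default     = "eu-west-1"
  description = "AWS region to deploy to"
}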

Now let's use those variables in dynamic infrastructure code (instances.tf): the aws_instance resource automatically picks the correct cluster size and instance type according to the environment, and the right AMI according to the execution region:

resource "aws_instance" "coreos" {
  count                       = "${lookup(var.cluster_size, var.environment)}"
  ami                         = "${lookup(var.aws_coreos_ami, var.aws_region)}"
  instance_type               = "${lookup(var.aws_instance_type, var.environment)}"
  key_name                    = "${aws_key_pair.admin_key.key_name}"
  associate_public_ip_address = true

  tags {
    Name        = "coreos_${var.environment}_${count.index+1}"
    Environment = "${var.environment}"
  }
}

Note

We constructed each instance Name tag according to its environment and its numerical value in the count (that is, coreos_production_2).

Our specification table indicates we need two different S3 buckets as well. In s3.tf, let's reuse something close to what we did in Chapter 2, Provisioning IaaS with Terraform:

resource "aws_s3_bucket" "bucket" {
  bucket = "iacbook-${var.environment}"

  tags {
    Name        = "IAC Book ${var.environment} Bucket"
    Environment = "${var.environment}"
  }
}

It's the same construction here: each environment gets its own bucket, dynamically named after it.
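If you want to quickly verify what each environment created, an optional outputs.tf (not part of the original recipe) might expose the bucket name and the instances' public IPs:

output "s3_bucket_name" {
  value = "${aws_s3_bucket.bucket.id}"
}

output "coreos_public_ips" {
  value = ["${aws_instance.coreos.*.public_ip}"]
}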

Keeping the tfstate isolated

It's strongly recommended not to mix Terraform state files between environments. One elegant solution to keep them well separated is to use the following option when executing the terraform command:

$ terraform apply -state=staging.tfstate

Your default environment (set to staging) will now reside in the staging.tfstate file.

Setting the production flag

Now that we have our staging infrastructure running smoothly, it's time to launch the real thing: the production environment. As we're already using a dedicated Terraform state file for staging, let's do the same for production, and set the environment variable directly on the command line:

$ terraform plan -state=production.tfstate -var environment=production

You now have two clearly separated environments using the very same code, but living independently from each other. Concise and elegant!
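Once the production plan looks right, applying it with the same flags keeps the production state in its own file:

$ terraform apply -state=production.tfstate -var environment=production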
