Connecting remote states together

Up until now, we naively stored all of our Terraform code in a single repository. We had a single template responsible for creating a network, routes, virtual machines, security groups, and everything else. It works pretty well, provided you have a single application with modest infrastructure around it. A single VPC, a few subnets, a small database, and a couple of instances: with this scale, there are few reasons to go beyond the single repository for all the infrastructure templates.

If you are part of a large organization, this approach can get you only so far. Companies that rely heavily on AWS tend to have dozens of use cases across many different services. The IAM service alone has quite a few entities to manage: roles, policies, users, groups, and so on. Normally, there are many roles for different servers and even more policies for these roles. The network is also rather complicated; at the very least, you would have one VPC per environment, or even one per product per environment.

The problem becomes even more evident if there are multiple infrastructure providers. While you might have your virtual machines on EC2, other parts could be located elsewhere. For example, you could use a DNS service other than AWS Route53, or some workloads could live with a bare-metal server provider, such as Packet. All of this is hardly manageable via a single Terraform repository. There are two steps to making Terraform templates easy to maintain and reuse:

  • Slice templates into different levels
  • Build a collection of reusable modules

Once you notice that your templates have grown fat and unwieldy, the first thing to do is slice them into different levels, keeping each level in its own repository and its own state file. Configuration for a service such as IAM is global for the whole AWS account, and it makes much more sense to manage it centrally instead of spreading it over multiple repositories.

Terraform has a special built-in provider named terraform, which provides the terraform_remote_state data source, capable of fetching outputs from remote state files; it works with all the remote storage backends that Terraform supports. Let's learn how to use it by taking the IAM example described earlier. The IAM service is responsible for the fine-grained permissions setup of all AWS services for users, groups of users, and server roles. The last one is really important: on EC2, you should never use access keys to let servers talk to other AWS services. Instead, IAM roles must be used.

In addition, let's also refactor away the complete network setup. We will end up with something like this:
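Roughly, the split looks like this (the RDS repository is the exercise mentioned below, and its name here is only a suggestion):

```
application repo      MightyTrousers template, reads the remote states
packt-terraform-iam   IAM roles, policies, and instance profiles
packt-terraform-vpc   VPC, subnets, route tables, and gateways
packt-terraform-rds   databases (exercise)
```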

Note the RDS (Relational Database Service). As an exercise, try to implement it yourself, after we are done with IAM and VPC.

Create another two folders on your machine: packt-terraform-iam and packt-terraform-vpc. Initialize a git repository in both of them. We will start with packt-terraform-iam. The final code will be available for download on GitLab at https://gitlab.com/Fodoj/packt-terraform-iam.

Create a folder named policies. That's where we are going to store all the JSON definitions of the various policies we have. Right inside, create a file named cloudwatch=@put_metric.json with the following content:

{ 
  "Version": "2012-10-17", 
  "Statement": [ 
    { 
      "Action": [ 
        "cloudwatch:PutMetric" 
      ], 
      "Effect": "Allow", 
      "Resource": "*" 
    } 
  ] 
} 

This policy will allow us to put metrics to CloudWatch, the monitoring and logging service from AWS. If we want an EC2 instance to use it, we need to assign a role to the instance, and this role should have the policy mentioned earlier attached to it.

Note the naming convention: $serviceName=$resourceName@$actionName. This makes it much easier to find out what a policy does just from the file name. This naming convention scales well for complex policies with dozens of lines of code.
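For example, a policies folder following this convention could look as follows (the s3 entry is a made-up illustration; the other two are the files from this chapter):

```
policies/
├── cloudwatch=@put_metric.json   # cloudwatch service, any resource, put_metric action
├── s3=backups@read.json          # hypothetical: s3 service, backups bucket, read action
└── sts=@assume_role.json         # sts service, any resource, assume_role action
```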

In addition, we need a policy that allows the assumption of this role. Create another file policies/sts=@assume_role.json:

{ 
  "Version": "2012-10-17", 
  "Statement": [ 
    { 
      "Sid": "", 
      "Effect": "Allow", 
      "Principal": { 
        "Service": "ec2.amazonaws.com" 
      }, 
      "Action": "sts:AssumeRole" 
    } 
  ] 
} 

Now, let's write a template that creates a role, an instance profile, and a policy for the role. It also returns the role name as an output; otherwise, we won't be able to retrieve it from the remote state:

resource "aws_iam_role" "base" { 
  name = "base" 
  assume_role_policy = "${file("./policies/sts=@assume_role.json")}" 
} 
resource "aws_iam_instance_profile" "base" { 
  name = "base" 
  roles = ["${aws_iam_role.base.name}"] 
} 
resource "aws_iam_policy" "cloudwatch-put-metric" { 
  name = "cloudwatch=@put_metric" 
  policy = "${file("./policies/cloudwatch=@put_metric.json")}" 
} 
resource "aws_iam_policy_attachment" "cloudwatch-put-metric-attachment" { 
  name = "cloudwatch=@put_metric attachment" 
  roles = [ "${aws_iam_role.base.name}" ] 
  policy_arn = "${aws_iam_policy.cloudwatch-put-metric.arn}" 
} 
output "base-role-name" { 
  value = "${aws_iam_role.base.name}" 
} 

Do NOT apply this template yet. We need the state to be stored remotely, so first of all configure the remote storage using the same S3 bucket:

    terraform remote config \
        -backend=s3 \
        -backend-config="bucket=packt-terraform" \
        -backend-config="key=iam/terraform.tfstate" \
        -backend-config="region=eu-central-1"

Now you can apply the template. Note that even though IAM is a global service, Terraform will still ask you for the AWS region.
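Once the apply succeeds, you can double-check that the output made it into the state with the terraform output command:

```
$ terraform output base-role-name
base
```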

We now have a remote state that can be used inside the MightyTrousers application! Add a new variable to the application module, name it iam_role, and use it inside the launch configuration. Then, inside template.tf, just before invoking the module, add this configuration:
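A minimal sketch of the module side, assuming the launch configuration resource from the earlier chapters is named launch-configuration (adjust the names to match your module):

```hcl
# modules/application/variables.tf
variable "iam_role" {}

# modules/application/application.tf (resource name assumed)
resource "aws_launch_configuration" "launch-configuration" {
  # ...
  # the instance profile created in packt-terraform-iam
  # shares its name with the role
  iam_instance_profile = "${var.iam_role}"
}
```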

data "terraform_remote_state" "iam" { 
    backend = "s3" 
    config { 
        bucket = "packt-terraform" 
        key = iam/terraform.tfstate 
        region = eu-central-1 
    } 
} 

Then pass it to the module:

module "mighty_trousers" { 
  source = "./modules/application" 
  # ... 
  iam_role = "${data.terraform_remote_state.iam.base-role-name}" 
} 

It's done! You can verify that the role name is pulled from the remote state by running the terraform plan command. Now it's time to move the network away as well. The final code is on GitLab at https://gitlab.com/Fodoj/packt-terraform-vpc.

Move the vpc_cidr and subnet_cidr variables from variables.tf into a new variables.tf file in the new packt-terraform-vpc repository. Then, move all of the VPC configuration (VPC, subnets, route table, and Internet gateway) to the packt-terraform-vpc/template.tf file. Finally, add a few outputs to this template:

output "public-subnet-1-id" { 
  value = "${aws_subnet.public-1.id}" 
} 
output "public-subnet-2-id" { 
  value = "${aws_subnet.public-2.id}" 
} 
output "vpc_id" { 
  value = "${aws_vpc.my-vpc.id}" 
} 

Don't forget to configure the remote destination:

    terraform remote config \
        -backend=s3 \
        -backend-config="bucket=packt-terraform" \
        -backend-config="key=vpc/terraform.tfstate" \
        -backend-config="region=eu-central-1"

Once again, apply the template and head back to MightyTrousers. Add another data source:

data "terraform_remote_state" "vpc" { 
  backend = "s3" 
  config { 
    bucket = "packt-terraform" 
    key = "vpc/terraform.tfstate" 
    region = "eu-central-1" 
  } 
} 

Use this data source inside the module:

module "mighty_trousers" { 
  source = "./modules/application" 
  vpc_id = "${data.terraform_remote_state.vpc.vpc_id}" 
  subnets = [ 
    "${data.terraform_remote_state.vpc.public-subnet-1-id}", 
    "${data.terraform_remote_state.vpc.public-subnet-2-id}" 
  ] 
  # ... 
} 

Don't forget to update the default security group to use remote vpc_id as well.
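Assuming the default security group is defined directly in template.tf (the resource name here is an assumption), the change looks like this:

```hcl
resource "aws_security_group" "default" {
  name   = "default"
  # pull the VPC ID from the remote state instead of a local resource
  vpc_id = "${data.terraform_remote_state.vpc.vpc_id}"
  # ...
}
```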

We've completely decoupled IAM and VPC management from the application template. Developers can focus on the template for the software they write, while AWS administrators design and update the network and permissions in parallel.

Developers are not even exposed to this level of configuration if administrators don't want them to be. In the background, the IAM and VPC repositories can grow a lot, adding more and more policies, roles, users, and networks. All these changes remain invisible to the authors of the application template, as long as the remote states of the IAM and VPC repositories still return the outputs the template expects.

We've slimmed down the application template a lot, but there is still a big piece of code that doesn't really belong in the application template repository: the application module itself.
