Bringing in high availability

With AWS, going HA can be as simple as putting instances into different subnets that are assigned to different availability zones. An availability zone is an isolated location within an AWS region, and you can think of it as a separate data center.

We are currently passing exactly one subnet ID as a variable to the application module. This needs to change: we will update the subnet_id variable to be a list of two elements. Then, depending on the index of an aws_instance resource, we will assign either the first or the second subnet to it.

First of all, replace the subnet_id variable with a subnets variable and set its type to list to prevent anything else from being passed to the module:

variable "subnets" { type = "list" } 

Note

If you set the default value of a variable to [], then Terraform will infer that this variable is a list.
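
Based on the note above, an equivalent declaration could rely on type inference instead of the explicit type attribute (a sketch; both forms should behave the same way):

variable "subnets" { 
  # Terraform infers the list type from the empty-list default 
  default = [] 
} 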

In the root template, we need to rename the subnet_cidr variable to subnet_cidrs, make it availability zone aware, and extend it to support four subnets:

variable "subnet_cidrs" { 
  description = "CIDR blocks for public and private subnets" 
  default = { 
    "eu-central-1a-public" = "10.0.1.0/24", 
    "eu-central-1a-private" = "10.0.2.0/24", 
    "eu-central-1b-public" = "10.0.3.0/24", 
    "eu-central-1b-private" = "10.0.4.0/24" 
  } 
} 

Terraform doesn't support nested maps as of version 0.8.1, so we have to make this variable a bit uglier than it would be in a perfect world. Use the new variable inside template.tf (the similar code for private subnets is omitted):

resource "aws_subnet" "public-1" { 
  vpc_id = "${aws_vpc.my_vpc.id}" 
  availability_zone = "eu-central-1a" 
  cidr_block = "${lookup(var.subnet_cidrs, "eu-central-1a-public")}" 
  map_public_ip_on_launch = true 
} 
resource "aws_subnet" "public-2" { 
  vpc_id = "${aws_vpc.my_vpc.id}" 
  availability_zone = "eu-central-1b" 
  cidr_block = "${lookup(var.subnet_cidrs, "eu-central-1b-public")}" 
  map_public_ip_on_launch = true 
} 

When you see this slightly repetitive code, you might want to refactor it into a single aws_subnet resource with a count of 2. That's not forbidden, of course, but the result would be rather hard to digest. When choosing between minor duplication and extra complexity, choose minor duplication. That is a harder call in the programming world, but don't be fooled: writing Terraform templates is not real programming. It is closer to writing configuration files.
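
For reference, such a refactor might look roughly like this; the availability_zones variable is an assumption introduced only for this sketch:

variable "availability_zones" { 
  default = ["eu-central-1a", "eu-central-1b"] 
} 
 
resource "aws_subnet" "public" { 
  count = 2 
  vpc_id = "${aws_vpc.my_vpc.id}" 
  availability_zone = "${element(var.availability_zones, count.index)}" 
  # build the map key, for example "eu-central-1a-public", from the AZ name 
  cidr_block = "${lookup(var.subnet_cidrs, format("%s-public", element(var.availability_zones, count.index)))}" 
  map_public_ip_on_launch = true 
} 

Every reference to these subnets would then have to use the splat form aws_subnet.public.*.id, which is part of the extra complexity this kind of refactoring brings.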

For a long time, Terraform didn't support passing lists and maps as variables to modules. These harsh times are gone: starting with version 0.7, you can do it. So let's do it and pass a subnets list to the application module!

module "mighty_trousers" { 
  source = "./modules/application" 
  vpc_id = "${aws_vpc.my_vpc.id}" 
  subnets = ["${aws_subnet.public-1.id}", "${aws_subnet.public-2.id}"] 
  name = "MightyTrousers" 
  keypair = "${aws_key_pair.terraform.key_name}" 
  environment = "${var.environment}" 
  extra_sgs = ["${aws_security_group.default.id}"] 
  extra_packages = "${lookup(var.extra_packages, "MightyTrousers")}" 
  external_nameserver = "${var.external_nameserver}" 
  instance_count = 2 
} 

The only thing left is to use these subnets inside aws_instance. We will need the element() function again, together with the modulo operation: instances with an even index get the first subnet, and those with an odd index get the second one:

resource "aws_instance" "app-server" { 
  ami = "${data.aws_ami.app-ami.id}" 
  instance_type = "${lookup(var.instance_type, var.environment)}" 
  subnet_id = "${element(var.subnets, count.index % 2)}" 
  # ... 
} 
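
To make the distribution concrete, this is how the expression picks subnets when instance_count is set to 4 (a sketch of the resulting assignment):

# count.index   count.index % 2   subnet chosen 
# 0             0                 aws_subnet.public-1 
# 1             1                 aws_subnet.public-2 
# 2             0                 aws_subnet.public-1 
# 3             1                 aws_subnet.public-2 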

With this code, we can easily scale our application to any number of instances, and they will be evenly distributed between the two availability zones. Nice and easy. High availability achieved. Well, almost: we still need to put a load balancer in front of these application servers.
