Chapter 9: Understanding Terraform Stacks

In the previous chapter, we started our journey by understanding the Terraform configuration file, and we explored how Terraform language files, which are human-readable, differ from JSON files, which are machine-readable. Moving further, we saw the different data types supported by both JSON and Terraform files. We also discussed industry best practices for writing Terraform configuration files with major cloud providers, such as Google Cloud Platform (GCP), Azure, and Amazon Web Services (AWS).

In this chapter, we will discuss how we can handle the deployment and upgrading of a large enterprise infrastructure using Terraform configuration code. We will discuss infrastructure deployment to GCP, Azure, and AWS using Terraform stacks. By the end of this chapter, you will have a thorough understanding of Terraform stacks and modules and how stacks can be used effectively for infrastructure deployment and updates.

The following topics will be covered in this chapter:

  • Understanding Terraform stacks
  • Writing a Terraform stack for GCP
  • Writing a Terraform stack for AWS
  • Writing a Terraform stack for Azure

Technical requirements

To follow along with this chapter, you need to have an understanding of the Terraform CLI and its workflow. You need to have a good command of writing Terraform modules and other Terraform configuration files. You can find all the code used in this chapter at https://github.com/PacktPublishing/HashiCorp-Infrastructure-Automation-Certification-Guide/tree/master/chapter9.

Check out the following link to see the Code in Action video:

https://bit.ly/3yC0UHG

Understanding Terraform stacks

Suppose you are working with one of your colleagues, John. You and John have been assigned to deploy 50 virtual machines, 20 virtual networks, 10 web apps, and 5 function apps in Azure. You have many questions about this infrastructure deployment, such as how you will provision the whole infrastructure, what the easiest way to perform the deployment will be, and how you would scale this infrastructure up or down if resources need to be added or deleted in the near future. In response to these questions, John says that you should use Terraform configuration files for the deployment and management of the complete infrastructure. He suggests writing Terraform modules for each resource and publishing them to GitHub or Terraform Registry; later on, stacks of the infrastructure can be prepared by referencing those modules. Preparing these stacks for infrastructure provisioning and management would be the best way to go, John says. Terraform stacks are the result of one or more modules being combined with environment-specific input parameter values. We already know that Terraform modules are written in such a way that they can be consumed again and again. In the same way, we can bring modules together and form a stack that will help us manage large enterprise infrastructures, such as the one that you and John are building:

Figure 9.1 – Terraform stacks


That should have given you a basic understanding of what exactly a Terraform stack is; you will get an even better understanding when we discuss stacks in relation to GCP, AWS, and Azure.
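As a minimal sketch of the idea (the module sources and input values below are purely illustrative, not from this chapter's repository), a stack is simply a root configuration that combines reusable modules with environment-specific inputs:

```hcl
# A hypothetical stack: two reusable modules combined with
# environment-specific input values.
module "network" {
  source   = "github.com/example-org/terraform-modules//network" # hypothetical source
  vpc_name = "prod-vpc"                                          # environment-specific value
}

module "storage" {
  source   = "github.com/example-org/terraform-modules//storage" # hypothetical source
  stg_name = "prod-storage"                                      # environment-specific value
}
```

Swapping in dev-specific values would give you a dev stack built from the exact same modules.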

Writing Terraform stacks for GCP

To get a better understanding of Terraform stacks for GCP, let's write some GCP modules and prepare a stack using them. We will prepare modules named vpc, subnet, route, and storage, and then show you how to prepare a stack using them. Along the way, we will discuss the code written for these modules. You can find all our code in our GitHub repository: https://github.com/PacktPublishing/HashiCorp-Infrastructure-Automation-Certification-Guide/tree/master/chapter9/gcp/modules. You can see the following files inside the gcp directory of our GitHub repository:

inmishrar@terraform-vm:~/HashiCorp-Infrastructure-Automation-Certification-Guide/chapter9# tree
.
└── gcp
    ├── modules
    │   ├── route
    │   │   ├── main.tf
    │   │   ├── outputs.tf
    │   │   └── variables.tf
    │   ├── storage
    │   │   ├── main.tf
    │   │   ├── outputs.tf
    │   │   └── variables.tf
    │   ├── subnet
    │   │   ├── main.tf
    │   │   ├── outputs.tf
    │   │   └── variables.tf
    │   └── vpc
    │       ├── main.tf
    │       ├── outputs.tf
    │       └── variables.tf
    ├── stacks
    │   ├── main.tf
    │   ├── outputs.tf
    │   └── variables.tf
    └── stacks_of_stacks
        ├── main.tf
        ├── providers.tf
        ├── terraform.tfvars
        └── variables.tf

Let's discuss the code that we defined while creating the vpc module. In main.tf, the following code is present:

resource "google_compute_network" "vpc" {
  name                            = var.vpc_name
  mtu                             = var.vpc_mtu
  description                     = var.vpc_description
  routing_mode                    = var.vpc_routing_mode
  project                         = var.project_id
  delete_default_routes_on_create = var.delete_default_routes_on_create
  auto_create_subnetworks         = var.auto_create_subnetworks
}

To draft the vpc module, we need to provide some of the arguments documented in the Terraform Registry: https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/compute_network. We also need to declare the respective variables in the variables.tf file, which includes the following code:

variable "project_id" {
  type        = string
  description = "The ID of the project where this VPC will be created"
}

variable "vpc_name" {
  type        = string
  description = "The name of the network being created"
}

variable "vpc_routing_mode" {
  type        = string
  default     = "GLOBAL"
  description = "The network routing mode (default 'GLOBAL')"
}

variable "vpc_description" {
  type        = string
  description = "An optional description of this resource. The resource must be recreated to modify this field."
  default     = ""
}

variable "auto_create_subnetworks" {
  type        = bool
  description = "When set to true, the network is created in 'auto subnet mode' and it will create a subnet for each region automatically across the 10.128.0.0/9 address range. When set to false, the network is created in 'custom subnet mode' so the user can explicitly connect subnetwork resources."
  default     = false
}

variable "delete_default_routes_on_create" {
  type        = bool
  description = "If set, ensure that all routes within the network specified whose names begin with 'default-route' and with a next hop of 'default-internet-gateway' are deleted"
  default     = false
}

variable "vpc_mtu" {
  type        = number
  description = "The network MTU. Must be a value between 1460 and 1500 inclusive. If set to 0 (meaning MTU is unset), the network will default to 1460 automatically."
  default     = 0
}

We defined all the argument values as input variables so that our code would be reusable and so that we can easily pass the respective input variable values at runtime from any file ending with .tfvars.
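For instance, the runtime values for these variables could be supplied from a terraform.tfvars file along the following lines (the values are purely illustrative):

```hcl
# terraform.tfvars -- illustrative values for the vpc module's variables
project_id       = "my-gcp-project"
vpc_name         = "demo-vpc"
vpc_routing_mode = "REGIONAL"
vpc_mtu          = 1460
```

Terraform automatically loads terraform.tfvars and any *.auto.tfvars files; other files can be passed explicitly with terraform plan -var-file=<name>.tfvars.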

In the outputs.tf file, we kept the following code block, which exports output values during Terraform execution:

output "vpc_id" {
  value = google_compute_network.vpc.id
}

output "vpc_self_link" {
  value = google_compute_network.vpc.self_link
}

output "vpc_name" {
  value = google_compute_network.vpc.name
}

You can refer to the Terraform documentation of vpc at https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/compute_network to get an idea of the attributes that can be exported from the vpc resource.

If you look at the vpc module code, you'll see we kept it short and sweet, without adding any complexity to it. So, we believe you should be able to write the vpc module on your own.
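Because these outputs are exported, any configuration that calls the module can reference them. As a sketch (the resource shown is our own illustration, assumed to live in the same root configuration that calls the vpc module), vpc_self_link could be consumed like this:

```hcl
# Consuming the vpc module's exported output in a sibling resource
resource "google_compute_subnetwork" "example" {
  name          = "example-subnet"         # illustrative name
  ip_cidr_range = "10.0.0.0/24"            # illustrative range
  region        = "us-central1"            # illustrative region
  network       = module.vpc.vpc_self_link # the module output defined above
}
```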

Moving on, let's try to understand the code block present inside the subnet module. In the main.tf file, the following code is present:

locals {
  subnets = {
    for x in var.subnets :
    "${x.subnet_region}/${x.subnet_name}" => x
  }
}

/******************************************
  Subnet Code
 *****************************************/
resource "google_compute_subnetwork" "subnet" {
  for_each                 = local.subnets
  name                     = each.value.subnet_name
  ip_cidr_range            = each.value.subnet_ip
  region                   = each.value.subnet_region
  private_ip_google_access = lookup(each.value, "subnet_private_access", "false")

  dynamic "log_config" {
    for_each = lookup(each.value, "subnet_flow_logs", false) ? [{
      aggregation_interval = lookup(each.value, "subnet_flow_logs_interval", "INTERVAL_5_SEC")
      flow_sampling        = lookup(each.value, "subnet_flow_logs_sampling", "0.5")
      metadata             = lookup(each.value, "subnet_flow_logs_metadata", "INCLUDE_ALL_METADATA")
    }] : []
    content {
      aggregation_interval = log_config.value.aggregation_interval
      flow_sampling        = log_config.value.flow_sampling
      metadata             = log_config.value.metadata
    }
  }

  network     = var.vpc_name
  project     = var.project_id
  description = lookup(each.value, "description", null)

  secondary_ip_range = [
    for i in range(
      length(
        contains(
          keys(var.secondary_ranges), each.value.subnet_name) == true
        ? var.secondary_ranges[each.value.subnet_name]
        : []
    )) :
    var.secondary_ranges[each.value.subnet_name][i]
  ]
}

Do you find the subnet module to be a little bit complex? We do too! You can see that in that code block, we used some key Terraform language features, such as these:

  • locals
  • for loops
  • for_each loops
  • lookup functions
  • length functions
  • contains functions
  • Dynamic iteration

We discussed all these items in our previous chapters, so we will not discuss them here. We encourage you to go back and read Chapter 4, Deep Dive into Terraform, to understand their use.

You might be wondering why we have written such complex code for the subnet module. The benefit is as follows: suppose you wish to deploy hundreds of subnets within a VPC. You could easily do so by just providing the respective subnet argument input values for Terraform to read from any file ending with .tfvars or .tfvars.json. To provide this flexibility in defining arguments and iterations in our code, we have used dynamic iteration along with for_each and for loops.
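To make that concrete, the subnets and secondary_ranges input values could be shaped as follows in a .tfvars file (all names and CIDR ranges here are illustrative); the optional keys are the ones the lookup calls fall back on:

```hcl
subnets = [
  {
    subnet_name   = "subnet-01"
    subnet_ip     = "10.10.10.0/24"
    subnet_region = "us-central1"
  },
  {
    subnet_name           = "subnet-02"
    subnet_ip             = "10.10.20.0/24"
    subnet_region         = "us-central1"
    subnet_private_access = "true" # optional; lookup supplies the default otherwise
    subnet_flow_logs      = true   # optional; enables the dynamic log_config block
  },
]

secondary_ranges = {
  subnet-01 = [
    {
      range_name    = "pods"
      ip_cidr_range = "192.168.64.0/24"
    },
  ]
}
```

Adding a hundredth subnet is then just one more entry in the subnets list; no module code changes are needed.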

Similarly, we wrote code blocks for route and storage modules. You can refer to and get them directly from our GitHub repository, so we will not explain them here.

Now the question is, how can we consume these modules? We have created a directory named stacks where we have included all the necessary code. In the main.tf file, you can see the following:

module "vpc" {
  source                          = "github.com/PacktPublishing/HashiCorp-Infrastructure-Automation-Certification-Guide.git//chapter9/gcp/modules/vpc?ref=v1.12"
  vpc_name                        = var.vpc_name
  vpc_mtu                         = var.vpc_mtu
  vpc_description                 = var.vpc_description
  vpc_routing_mode                = var.vpc_routing_mode
  project_id                      = var.project_id
  delete_default_routes_on_create = var.delete_default_routes_on_create
  auto_create_subnetworks         = var.auto_create_subnetworks
}

module "subnet" {
  source           = "github.com/PacktPublishing/HashiCorp-Infrastructure-Automation-Certification-Guide.git//chapter9/gcp/modules/subnet?ref=v1.13"
  project_id       = var.project_id
  vpc_name         = var.vpc_name
  subnets          = var.subnets
  secondary_ranges = var.secondary_ranges
  depends_on       = [module.vpc]
}

module "routes" {
  source     = "github.com/PacktPublishing/HashiCorp-Infrastructure-Automation-Certification-Guide.git//chapter9/gcp/modules/route?ref=v1.10"
  project_id = var.project_id
  vpc_name   = var.vpc_name
  routes     = var.routes
  depends_on = [module.vpc]
}

module "storage" {
  source        = "github.com/PacktPublishing/HashiCorp-Infrastructure-Automation-Certification-Guide.git//chapter9/gcp/modules/storage?ref=v1.11"
  stg_name      = var.stg_name
  location      = var.location
  force_destroy = var.force_destroy
  storage_class = var.storage_class
  project_id    = var.project_id
  labels        = var.labels
}

By now, you should understand what stacks are: put simply, they are collections of modules defined in a single .tf file or multiple .tf files inside the same directory.

While calling a module, the names on the left side of the = sign are the module's input arguments, and each of them must correspond to a variable declared inside the module being called. The value on the right side can be either a literal value or, again, a variable. For example, in the preceding code block, we wrote the storage module. You can see stg_name on the left side of the = sign, which is acting as an argument, and on the right side, we reference it again as a variable, var.stg_name. The value of var.stg_name can then be passed either from the CLI or from any file ending with .tfvars or .auto.tfvars.
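The full pass-through chain can be sketched in three pieces, using the stg_name example (the value and shortened source shown are illustrative):

```hcl
# stacks/main.tf -- argument on the left, variable on the right
module "storage" {
  source   = "../modules/storage" # source shortened for this sketch
  stg_name = var.stg_name
}

# stacks/variables.tf -- the variable must also be declared here
variable "stg_name" {
  type = string
}

# terraform.tfvars -- the actual value supplied at runtime
stg_name = "demo-bucket"
```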

The benefits of creating stacks are that you get better flexibility in defining code blocks, it reduces the length and complexity of the Terraform configuration code, and it makes modules reusable.

Now in the gcp directory, you can see that there is a directory with the name stacks_of_stacks. You must be wondering why we created that directory and what exactly we defined inside it. In the main.tf file, we have the following code:

module "gcp_stacks" {
  source                          = "../stacks"
  zone                            = var.zone
  region                          = var.region
  project_name                    = var.project_name
  vpc_name                        = var.vpc_name
  vpc_mtu                         = var.vpc_mtu
  vpc_description                 = var.vpc_description
  vpc_routing_mode                = var.vpc_routing_mode
  project_id                      = var.project_id
  delete_default_routes_on_create = var.delete_default_routes_on_create
  auto_create_subnetworks         = var.auto_create_subnetworks
  subnets                         = var.subnets
  secondary_ranges                = var.secondary_ranges
  routes                          = var.routes
  stg_name                        = var.stg_name
  location                        = var.location
  force_destroy                   = var.force_destroy
  storage_class                   = var.storage_class
  labels                          = var.labels
}

In the preceding code block, you can see that gcp_stacks is itself referenced as a module. Here, all the arguments on the left side of the = sign are variables declared in the variables.tf file we defined earlier, present inside the stacks folder. While creating stacks of stacks, we need to provide source = "<local path>"; that is why we defined source = "../stacks", so that the module can pick up all the code present inside the stacks directory. This stack of stacks helps reduce the repeated declaration of common variables and the overall code complexity.

We are all set now. Let's see what happens when we execute terraform init, plan, and apply. The following is the code snippet we get when we run the terraform init command:

$ terraform init
Initializing modules...
- gcp_stacks in ..\stacks
Downloading github.com/PacktPublishing/HashiCorp-Infrastructure-Automation-Certification-Guide.git?ref=v1.10 for gcp_stacks.routes...
- gcp_stacks.routes in .terraform\modules\gcp_stacks.routes\chapter9\gcp\modules\route
Downloading github.com/PacktPublishing/HashiCorp-Infrastructure-Automation-Certification-Guide.git?ref=v1.11 for gcp_stacks.storage...
- gcp_stacks.storage in .terraform\modules\gcp_stacks.storage\chapter9\gcp\modules\storage
Downloading github.com/PacktPublishing/HashiCorp-Infrastructure-Automation-Certification-Guide.git?ref=v1.13 for gcp_stacks.subnet...
- gcp_stacks.subnet in .terraform\modules\gcp_stacks.subnet\chapter9\gcp\modules\subnet
Downloading github.com/PacktPublishing/HashiCorp-Infrastructure-Automation-Certification-Guide.git?ref=v1.12 for gcp_stacks.vpc...
- gcp_stacks.vpc in .terraform\modules\gcp_stacks.vpc\chapter9\gcp\modules\vpc

Initializing the backend...

Initializing provider plugins...
- Finding hashicorp/google versions matching "~> 3.0"...
- Installing hashicorp/google v3.53.0...
- Installed hashicorp/google v3.53.0 (signed by HashiCorp)

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see any changes that are required for your infrastructure. All Terraform commands should now work.

If you ever set or change modules or backend configuration for Terraform, rerun this command to reinitialize your working directory. If you forget, other commands will detect it and remind you to do so if necessary.

After running terraform init, we can see that it downloads all the modules from our GitHub repository. If we go ahead and run terraform plan and then terraform apply, it will provision vpc, subnet, route, and storage in GCP. The following is the output we get when we run terraform apply:

$ terraform apply
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # module.gcp_stacks.module.routes.google_compute_route.route["egress-internet"] will be created
  + resource "google_compute_route" "route" {
      + description      = "route through IGW to access internet"
      + dest_range       = "0.0.0.0/0"
      + id               = (known after apply)
. . .
    }
. . .

Plan: 6 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

We have written Terraform modules for GCP, and we have brought multiple modules together to form stacks. We also learned how we can prepare stacks of stacks. With this, we can easily write Terraform modules and consume them as and when required for our GCP infrastructure.

Will there be any difference in writing Terraform stacks for AWS? In our next section, we will write Terraform modules and prepare stacks with those modules for AWS.

Writing Terraform stacks for AWS

We have learned how to develop Terraform modules for GCP. We will use that learning and try to write some AWS modules and combine those modules to prepare stacks. For better understanding, it's best to write some simple modules for AWS. We have written simple modules for vpc, subnet, and s3_bucket. You can find all the code in our GitHub repository, inside the aws directory of chapter9. Here is the directory structure:

.
└── aws
    ├── modules
    │   ├── s3
    │   │   ├── main.tf
    │   │   ├── outputs.tf
    │   │   └── variables.tf
    │   └── vpc-subnet
    │       ├── main.tf
    │       ├── outputs.tf
    │       └── variables.tf
    ├── stacks
    │   ├── main.tf
    │   ├── outputs.tf
    │   └── variables.tf
    └── stacks_of_stacks
        ├── main.tf
        ├── providers.tf
        ├── terraform.tfvars
        └── variables.tf

We will not discuss all the code here; we will just quickly demonstrate what we have written to prepare the stacks.

In the stacks directory, there is a main.tf file that contains the following code:

module "vpc" {
  source                           = "github.com/PacktPublishing/HashiCorp-Infrastructure-Automation-Certification-Guide.git//chapter9/aws/modules/vpc-subnet?ref=v1.14"
  cidr_block                       = var.cidr_block
  instance_tenancy                 = var.instance_tenancy
  enable_dns_hostnames             = var.enable_dns_hostnames
  enable_dns_support               = var.enable_dns_support
  enable_classiclink               = var.enable_classiclink
  enable_classiclink_dns_support   = var.enable_classiclink_dns_support
  assign_generated_ipv6_cidr_block = var.assign_generated_ipv6_cidr_block
  vpc_name                         = var.vpc_name
  custom_tags                      = var.custom_tags
  subnet_cidr                      = var.subnet_cidr
  subnet_name                      = var.subnet_name
}

module "s3" {
  source              = "github.com/PacktPublishing/HashiCorp-Infrastructure-Automation-Certification-Guide.git//chapter9/aws/modules/s3?ref=v1.14"
  create_bucket       = var.create_bucket
  bucket_name         = var.bucket_name
  bucket_acl          = var.bucket_acl
  force_destroy       = var.force_destroy
  acceleration_status = var.acceleration_status
  custom_tags         = var.custom_tags
  depends_on          = [module.vpc]
}

The preceding code is the normal way of defining stacks. What if we want to reduce the length of the code? Well, we can achieve that by building stacks of stacks. For this, there is the following code in the main.tf file of the stacks_of_stacks directory:

module "aws_stacks" {
  source                           = "../stacks"
  cidr_block                       = var.cidr_block
  instance_tenancy                 = var.instance_tenancy
  enable_dns_hostnames             = var.enable_dns_hostnames
  enable_dns_support               = var.enable_dns_support
  enable_classiclink               = var.enable_classiclink
  enable_classiclink_dns_support   = var.enable_classiclink_dns_support
  assign_generated_ipv6_cidr_block = var.assign_generated_ipv6_cidr_block
  vpc_name                         = var.vpc_name
  custom_tags                      = var.custom_tags
  subnet_name                      = var.subnet_name
  subnet_cidr                      = var.subnet_cidr
  create_bucket                    = var.create_bucket
  bucket_name                      = var.bucket_name
  bucket_acl                       = var.bucket_acl
  force_destroy                    = var.force_destroy
  acceleration_status              = var.acceleration_status
}

When we prepare stacks of stacks, we need to provide the source path of the local directory. This method is very beneficial when we need to deploy many environments, such as dev, test, and prod, that share the same infrastructure already defined in the stacks.
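Under that pattern (the directory names below are our own illustration, not from the repository), each environment gets a tiny root configuration that reuses the same stack and differs only in its input values:

```hcl
# environments/dev/main.tf -- illustrative layout
module "aws_stacks" {
  source     = "../../stacks"
  cidr_block = "10.0.0.0/16" # dev-sized value
  # ...the remaining arguments, as in the stacks_of_stacks example
}
```

A prod directory would contain the same module block, with prod-sized values supplied through its own terraform.tfvars.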

We now run terraform init, which reads all the .tf files containing module definitions, downloads the modules, and places them in the .terraform directory. As you can see here, after running terraform init, Terraform initializes and downloads the modules:

$ terraform init
Initializing modules...
- aws_stacks in ..\stacks
Downloading github.com/PacktPublishing/HashiCorp-Infrastructure-Automation-Certification-Guide.git?ref=v1.14 for aws_stacks.s3...
- aws_stacks.s3 in .terraform\modules\aws_stacks.s3\chapter9\aws\modules\s3
Downloading github.com/PacktPublishing/HashiCorp-Infrastructure-Automation-Certification-Guide.git?ref=v1.14 for aws_stacks.vpc...
- aws_stacks.vpc in .terraform\modules\aws_stacks.vpc\chapter9\aws\modules\vpc-subnet

Initializing the backend...

Initializing provider plugins...
- Using previously-installed hashicorp/aws v3.25.0

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see any changes that are required for your infrastructure. All Terraform commands should now work.

If you ever set or change modules or backend configuration for Terraform, rerun this command to reinitialize your working directory. If you forget, other commands will detect it and remind you to do so if necessary.

You can run terraform plan after terraform init, which will show you all the resources that Terraform is going to create or update. If you wish to deploy the infrastructure, you can run terraform apply -auto-approve, and Terraform will deploy or update it. In our case, as you can see, Terraform created vpc with subnet and s3_bucket in AWS:

$ terraform apply -auto-approve
module.aws_stacks.module.vpc.aws_vpc.vpc: Creating...
module.aws_stacks.module.s3.aws_s3_bucket.s3_bucket[0]: Creating...
module.aws_stacks.module.vpc.aws_vpc.vpc: Creation complete after 16s [id=vpc-002c1a898b6caa9dc]
module.aws_stacks.module.vpc.aws_subnet.subnet: Creating...
module.aws_stacks.module.vpc.aws_subnet.subnet: Creation complete after 4s [id=subnet-0b87c63201112d0b1]
module.aws_stacks.module.s3.aws_s3_bucket.s3_bucket[0]: Creation complete after 29s [id=tf-s3-bucket32342]

Apply complete! Resources: 3 added, 0 changed, 0 destroyed.

One of the main challenges when preparing stacks of stacks is that you may not be able to reference or validate a specific output from a stack directly, because the stack consists of many different modules combined together, and each module's outputs are only visible to the configuration that calls it.
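One way to mitigate this (the output names below are our own assumptions, not from the repository) is to explicitly re-export each output you care about at every layer, since a module's outputs are only visible one level up:

```hcl
# stacks/outputs.tf -- surface an inner module's output
output "vpc_id" {
  value = module.vpc.vpc_id # assumes the vpc module exports vpc_id
}

# stacks_of_stacks/outputs.tf -- surface the stack's output one more level up
output "vpc_id" {
  value = module.aws_stacks.vpc_id
}
```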

We learned about creating modules for AWS and combining those modules to prepare stacks and stacks of stacks. So, now it should be easy for you to write a module and prepare stacks for AWS. Let's use this learning and try to write modules and prepare stacks for Azure.

Writing Terraform stacks for Azure

Earlier, we discussed writing Terraform modules and preparing stacks from those modules for AWS and GCP. Now the question is, will there be any difference in writing modules and preparing stacks for Azure? No, there is no major difference: if you followed and understood the earlier processes for AWS and GCP, you can use the same knowledge here. Let's see how we can prepare modules for Azure Storage and Azure App Service. We have placed all our code in our GitHub repository, in the following directory:

.
└── azure
    ├── modules
    │   ├── storage
    │   │   ├── main.tf
    │   │   ├── outputs.tf
    │   │   └── variables.tf
    │   └── webapp
    │       ├── main.tf
    │       ├── outputs.tf
    │       └── variables.tf
    ├── stacks
    │   ├── main.tf
    │   ├── outputs.tf
    │   └── variables.tf
    └── stacks_of_stacks
        ├── main.tf
        ├── providers.tf
        ├── terraform.tfvars
        └── variables.tf

We will not be discussing all the code; you can refer to our GitHub and copy the code directly from there to gain a full understanding of it.

After writing the module code for storage and webapp, we combined both of the prepared modules and made a stack. Furthermore, to reduce code length and avoid declaring the same variables multiple times, we prepared stacks of stacks and kept all the code in the stacks_of_stacks directory.

The following code is present in the main.tf file of the stacks_of_stacks directory:

module "azure_stacks" {
  source                   = "../stacks"
  create_resource_group    = var.create_resource_group
  resource_group_name      = var.resource_group_name
  location                 = var.location
  tags                     = var.tags
  app_config               = var.app_config
  ip_address               = var.ip_address
  app_settings             = var.app_settings
  connection_string        = var.connection_string
  asp_config               = var.asp_config
  storage_account_name     = var.storage_account_name
  account_kind             = var.account_kind
  skuname                  = var.skuname
  allow_blob_public_access = var.allow_blob_public_access
  soft_delete_retention    = var.soft_delete_retention
  containers_list          = var.containers_list
}

If you observe the main.tf file present in the stacks directory, you'll see that we defined many input variables and declared all the variables in the variables.tf file. The declared variables will act as arguments, which will be defined on the left-hand side of the = sign in the main.tf file of stacks_of_stacks.

We passed all the required input variables using the terraform.tfvars file, which is also present inside the stacks_of_stacks directory.

When we run terraform init, Terraform downloads the storage and webapp modules from our GitHub repository and places them inside the .terraform folder, which you can see in the following code snippet:

$ terraform init
Initializing modules...
Downloading github.com/PacktPublishing/HashiCorp-Infrastructure-Automation-Certification-Guide.git?ref=v1.15 for azure_stacks.storage...
- azure_stacks.storage in .terraform\modules\azure_stacks.storage\chapter9\azure\modules\storage
Downloading github.com/PacktPublishing/HashiCorp-Infrastructure-Automation-Certification-Guide.git?ref=v1.15 for azure_stacks.webapp...
- azure_stacks.webapp in .terraform\modules\azure_stacks.webapp\chapter9\azure\modules\webapp

Initializing the backend...

Initializing provider plugins...
- Using previously-installed hashicorp/azurerm v2.56.0
- Using previously-installed hashicorp/random v3.0.1

The following providers do not have any version constraints in configuration, so the latest version was installed.

To prevent automatic upgrades to new major versions that may contain breaking changes, we recommend adding version constraints in a required_providers block in your configuration, with the constraint strings suggested below.

* hashicorp/random: version = "~> 3.0.1"

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see any changes that are required for your infrastructure. All Terraform commands should now work.

If you ever set or change modules or backend configuration for Terraform, rerun this command to reinitialize your working directory. If you forget, other commands will detect it and remind you to do so if necessary.
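As the init output suggests, provider version constraints can be pinned in a required_providers block; here is a sketch mirroring the versions shown above:

```hcl
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 2.56.0"
    }
    random = {
      source  = "hashicorp/random"
      version = "~> 3.0.1"
    }
  }
}
```

With these constraints in place, terraform init will not silently upgrade to a new major version that may contain breaking changes.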

To provision webapp and storage in Azure, we can run terraform apply -auto-approve; the provisioned resources are shown in the following figure:

Figure 9.2 – Azure resources


From this section of the chapter, you should have gained an understanding of how to write Terraform modules and prepare stacks from those modules in Azure. You should have understood how large and repeated enterprise infrastructure in Azure can easily be provisioned or updated using Terraform stacks.

Summary

In this chapter, we discussed Terraform stacks. We learned about what Terraform stacks are and how we can create Terraform stacks for different clouds, such as GCP, AWS, and Azure. This should help you to provision or update a very large enterprise infrastructure using Terraform IaC. In our next chapter, we are going to discuss Terraform Cloud and the enterprise version of Terraform. We will look at how Terraform Cloud and the enterprise product help enterprise customers and we will discover the benefits that an enterprise customer can get from them.

Questions

  1. Which of the following is the best option for managing a large enterprise infrastructure environment using Terraform?

    A. Data sources

    B. Stacks and modules

    C. Resources

    D. Provisioners

  2. Suppose you have created a Terraform stack and you can see that the indentation of the Terraform code is not in the correct format. Which of the following Terraform commands would you run?

    A. terraform sources

    B. terraform init

    C. terraform fmt -recursive

    D. terraform plan

  3. Which command needs to be executed to download Terraform modules from a GitHub source?

    A. terraform init

    B. terraform syntax

    C. terraform fmt

    D. terraform plan

  4. You created a Terraform stack containing the following code:

    module "azure_stacks" {
      source                   = "../stacks"
      resource_group_name      = var.resource_group_name
    ...

    You executed terraform plan and saw that it prompts you to provide the following values:

    $ terraform plan
    var.resource_group_name
      A container that holds related resources for an Azure solution
      Enter a value:

    What could you have done to prevent it from prompting during runtime?

    A. Defined the resource_group_name variable value in any file ending with .tfvars, .auto.tfvars, or .tfvars.json

    B. Manually provided resource_group_name value during runtime

    C. Hardcoded resource_group_name = "Terraform-test-rg"

    D. Run the terraform validate command

  5. Suppose you're consuming a module and forgot to mention the version in the module code block. What could be a major problem with not defining the version in the module code?

    A. Terraform will always reference the latest version of the modules.

    B. Terraform code may fail when we run plan or apply if the previously defined module code block is not compatible with the latest version.

    C. Terraform will upgrade the infrastructure as per the latest version of the modules.

    D. Terraform will run terraform apply automatically.

Further reading

You can check out the following links for more information about the topics that were covered in this chapter:
