20

Automating Cloud Deployments with Terraform

The previous chapter was especially fun: we were able to deploy Ubuntu in the cloud, utilizing Amazon Web Services (AWS). Deploying infrastructure in the cloud is very powerful and allows us to accomplish things that are not normally possible (or are very tedious) with physical infrastructure. We can spin up Ubuntu instances in minutes, and even set up auto-healing to cover us in situations that would normally result in complete service disruption.

This time around, we’re going to work with cloud deployments again, and check out an awesome tool called Terraform that will allow us to automate the provisioning of our cloud resources. We’ve already explored the concept of automation back in Chapter 15, Automating Server Configuration with Ansible, when we learned about the basics of Ansible. Terraform allows us to take our automation to the next level and even interact with providers such as AWS directly.

In this chapter, we’ll explore the following concepts:

  • Why it’s important to automate your infrastructure
  • Introduction to Terraform and how it can fit within your workflow
  • Installing Terraform
  • Automating an EC2 instance deployment
  • Managing security groups with Terraform
  • Using Terraform to destroy unused resources
  • Combining Ansible with Terraform for a full deployment solution

Why automate the building of our infrastructure? There are many benefits of doing so, and we’ll take a look at some of those benefits in the next section.

Why it’s important to automate your infrastructure

Automation with regard to infrastructure is an expansive topic, and it easily deserves a book of its own. In fact, there are not only books dedicated to it but entire online courses as well. There are many different utilities you can use, each with its own pros and cons. We have configuration management tools, such as Ansible, Chef, and Puppet. We looked at Ansible earlier in the book and worked through some examples to see how powerful it is. When we worked through those examples, I’m sure you immediately saw the benefit—not having to build a solution manually is a beautiful thing.

The importance of not having to build solutions manually cannot be overstated. Perhaps the most obvious benefit is the fact that it can save you hours, or even days, of work. When I first started working in IT, setting up servers was always a manual task. Sure, you could create a Bash script and automate some tasks that way, but tools specifically designed for automation will handle the job much more efficiently. An IT staff that would normally be overwhelmed at the thought of setting up a large number of servers would be able to perform the same task much more quickly with automation. And with all the time that’s saved, IT staff members can focus on other tasks rather than spending the majority of their time on one.

Another benefit of automation is that the likelihood of human error is much lower. While you’re building your automation solution, making mistakes is unavoidable. You may mistype something while writing a script that causes a syntax error, or perhaps something doesn’t get created quite the way you expected. But after you’ve spent the time building your automation scripts and verified there are no errors, you can run them again and again and the infrastructure will get created the same way each time. Compare that to having to manually set up servers each time you wish to implement a new solution, and you can imagine how often there may be mistakes to fix, some of which you may not even discover until later on.

Automation also has another benefit you may not expect: disaster recovery. While we will cover disaster recovery in Chapter 23, Preventing Disasters, it’s worth mentioning now because an effective automation solution will make the process of recovery quicker. It’s an administrator’s worst nightmare to even think that a server that’s important to your organization may someday fail, but it’s a fact of life.

Our organization may have a very complex application that consists of one or more web servers, a load balancer, security settings, and more. It could take hours to rebuild a solution like that manually. But with automation, you would simply run your scripts to recreate the same solution in mere minutes. Automation itself won’t protect you from losing data (which would be an even scarier problem) but at the very least it can help you to provision replacement resources quicker than if you had to do the same manually. Not only that, I presume your clients (as well as your boss) will prefer your organization’s application to come back online in minutes, rather than hours or days.

In addition, your automation scripts can serve as a form of living blueprint. Even if you aren’t planning on reprovisioning your servers and related infrastructure, another administrator can look at your automation scripts and understand better which components make up the overall solution, allowing them to get up to speed quicker if they’re taking over the management of infrastructure from someone else.

Automation is one of those things that I probably won’t have to try too hard to sell to you, because if you already have experience working in IT, then you already know how tedious it can be to manually rebuild servers.

Sometimes, it may feel as though we have more tasks to complete than we have hours in the workday. But with automation, we can get some of that time back and possibly even lower our stress level a bit. And it’s not the first time we’ve worked with automation; we did take a look at Ansible earlier in the book, so you are probably well aware of the benefits. But what we’re going to do in this chapter is implement automation at a lower level than Ansible, and we’ll do so with a solution known as Terraform. What is Terraform, you may ask? In science, terraforming is an amazing process of taking a planet that is uninhabitable and converting it into one that is able to support life as we know it. But for our purposes, Terraform is the name of an awesome utility we can use to automate an entire cloud computing implementation. In the next section, we’ll define it even more.

Introduction to Terraform and how it can fit within your workflow

Terraform is an amazing tool created by a company called HashiCorp that can automate your infrastructure at a lower level than Ansible, Puppet, or other configuration management solutions. In fact, Terraform typically doesn’t replace those tools but complements them. With configuration management tools, we generally have to create the initial server and set up the operating system first before we can use them. With Ansible, there are actually methods of using it to create infrastructure components, but that’s beyond the scope of this book.

Not only that, but while Ansible is able to create some types of infrastructure, that’s not what it does best. To understand where something like Terraform fits, it’s best to think of Terraform as making things exist and Ansible as taking things that already exist and ensuring they’re configured properly.

When it comes to Terraform itself, it allows you to take advantage of a neat concept, Infrastructure as Code. In the previous chapter, we set up an entire load-balanced application in AWS. We created an EC2 instance, as well as an AMI, and then we built the load balancer along with Auto Scaling. While that process was incredibly fun, it was a manual one. If you made mistakes during the process, you had to go and fix them. After you were done, your solution was created and working. What Terraform allows us to do is write code that represents our desired end state. When it runs, it checks the cloud provider and performs an inventory. If something we’ve defined in our code isn’t present at the cloud provider, Terraform will create it, bringing the current state in line with the desired end state described in our code. We can even provision an entire cloud solution without logging in to the AWS console at all beyond the initial setup.

An important consideration when it comes to automation tools is whether or not the tool is cross-platform. Many cloud providers feature built-in tools to do the same thing that Terraform does. For example, AWS has a feature called CloudFormation that allows you to script infrastructure builds, just as you can with Terraform. But the problem is that CloudFormation is specific to AWS. You can’t utilize that service to build infrastructure in Microsoft Azure or Google Cloud. A tool that’s cross-platform can run in any environment. We already saw this with Ansible earlier in the book: Ansible doesn’t mind if the servers you’re having it configure reside in AWS or even if they’re physical machines in a rack. To Ansible, Ubuntu is Ubuntu, regardless of where it’s running. This allows you to use the same tool in multiple environments, without having to recreate a new set of automation scripts for each one. Terraform is also a cross-platform tool.

Why does it matter if a tool is cross-platform? If you have to maintain several completely different tools that all do the same thing, it’s a waste of time. If you can learn one tool and use it in every environment you support, then it’s less of a maintenance burden. This is why I always recommend avoiding platform-specific tools, such as CloudFormation in AWS. There’s even a tool within AWS called OpsWorks that’s used for the same purpose as Ansible (configuration management), but again is specific to AWS.

A typical organization will pivot in different directions multiple times throughout the life of the company. An organization that is using AWS for 100% of its infrastructure may someday decide to support other cloud providers. Sometimes, all it takes is the right client or situation to make the company consider using a cloud provider for a project that would normally not be considered.

It could also be the case that a company might change primary providers due to a change made with the current platform that increases cost, or some other reason. If you use cross-platform tools, then you can take those tools with you (for the most part) if you change providers. Also, being able to support multiple providers not only makes you a more powerful administrator but also offers additional value to your organization.

Terraform itself is not going to be 100% identical between cloud platforms, though. The syntax does change from one cloud provider to another. Currently, there doesn’t seem to be a solution available for Infrastructure as Code that is 100% portable between environments. But considering that solutions such as CloudFormation are 0% transferable to other platforms, Terraform still wins out in comparison, since it is a tool you can use with multiple providers. The general workflow of Terraform will remain the same with each provider, so it’s still going to save you time if you use it and then switch providers.

How does Terraform work? We’ll install it in the next section and actually use it to deploy an EC2 instance in the section after that. But in a nutshell, Terraform is a utility you can download to your local laptop or desktop, and use to turn script files into actual infrastructure. It supports many different cloud platforms, such as AWS, GCP, and others. It even supports VPS providers, such as DigitalOcean and Linode. Terraform refers to each of those platforms as a provider and gives you the ability to download the appropriate plugin within Terraform to support your chosen provider(s).

As you’ll see later in the chapter, Terraform allows you to test your configuration first, and preview the changes it will make. Then, if you accept the changes, it will connect to your provider and create the infrastructure as you’ve defined it in your code. Although we won’t cover version control in this chapter, typical organizations will store their Terraform code within a Git repository or some other version control system, so that the code is safe from being accidentally deleted. In a typical organization, one or more administrators will work with the Terraform code and push their changes into the repository.
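Although version control itself is beyond the scope of this chapter, getting Terraform code into a Git repository is straightforward. The following is a minimal sketch, using a hypothetical project directory name; the ignore rules keep Terraform’s working directory and state files (which can contain sensitive values) out of the repository:

cd terraform_project
git init
cat > .gitignore << 'EOF'
.terraform/
*.tfstate
*.tfstate.backup
EOF
git add .
git commit -m "Initial Terraform configuration"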

In the next section, we’ll walk through the process of installing Terraform so we will have everything we need in order to get started and build some automation around our infrastructure.

Installing Terraform

The process of running Terraform and using it to provision your cloud resources is generally initiated on your local laptop or desktop. Terraform itself is downloaded from its website, and it’s available for all of the leading operating systems.

Unlike the majority of applications, there’s no installer; Terraform runs directly from the file you download. You can install it system-wide if you want to, but you can also run it from any directory you wish. Download files for Terraform are located at the following website: https://www.terraform.io/.

Once there, you should see a Download button:

Figure 20.1: The Terraform website

After clicking the Download button, you’ll see a new page that will offer Terraform for six different operating systems, including the usual suspects such as Linux, macOS, and Windows. Most likely, it will automatically select the operating system that the computer you’re visiting the site from is using. For example, here’s what the page looks like while downloading the macOS version:

Figure 20.2: The Terraform website, downloading for macOS

At this point, all you’ll have to do is download a version of Terraform specific to your operating system. Most computers sold nowadays are 64-bit, so it should be straightforward to choose which one to download. If you’d like to run Terraform from a Raspberry Pi, choose the Arm version for Linux. Once you download it, you’ll have a ZIP file locally that you can extract. Inside, you’ll find a binary file simply titled terraform and that’s all you’ll actually need.
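If you prefer the command line, the download and extraction can be done from a terminal as well. The following is a rough sketch for Linux; the version number in the URL is just an example, so use whatever version the download page currently offers:

# Replace 1.4.6 with the version shown on the download page
wget https://releases.hashicorp.com/terraform/1.4.6/terraform_1.4.6_linux_amd64.zip
unzip terraform_1.4.6_linux_amd64.zip
ls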

You’ll be able to run terraform right from the command prompt of your operating system:

Figure 20.3: Running terraform from a terminal window with no options

In the screenshot, I typed out the entire path to the downloaded and extracted terraform file, which was saved in the Downloads folder within my home directory. I ran it with no options, so it printed out the help page.

If you’d like to install it system-wide, you can move terraform into the /usr/local/bin directory if you’re running Linux or macOS:

sudo mv terraform /usr/local/bin

The /usr/local/bin directory is recognized by both Linux and macOS as a directory that is searched for binary files. This concept is referred to as your $PATH, a special variable that holds all the directories your shell searches when you attempt to execute a command. The method of adding a new directory to your $PATH differs from one operating system to another, but on macOS and Linux, /usr/local/bin is already included, so once you copy terraform into that directory, you can simply type terraform in your terminal without needing the full path each time. This step is optional, but it makes things simpler.
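Once the binary is somewhere your $PATH covers, a quick sanity check confirms that your shell can find it:

which terraform
terraform -version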

In order for Terraform to be able to work with AWS, we’ll need to generate an API key for it. This is done inside the AWS Management Console, which you should sign into now so we can create what we need. In the previous chapter, we discussed IAM, which is a service within AWS that allows you to not only create user accounts for fellow administrators but also lets you create keys for programmatic access. The latter will be how we allow Terraform to connect to our AWS account in order to perform tasks on our behalf.

Inside the IAM section of the AWS management console, click on the Users link that you should see in the left-hand menu, followed by the blue Add User button that you should see on that page. The following form will appear:

Figure 20.4: Creating an IAM user for the purpose of running Terraform

In my case, I decided to call my user terraform-provisioner, but you can use whatever name you’d like. I checked the box next to Programmatic access and left the second box unchecked, because I do not want this user to be able to log in to the console. Click Next: Permissions to continue. On the next screen that comes up, we’ll set the policy that the user will have access to:

Figure 20.5: Creating an IAM user for Terraform (continued)

For this screen, click on the box that reads Attach existing policies directly to highlight it, and check the box below to add the AdministratorAccess policy to this object. This is the policy that will grant Terraform its ability to interact with AWS.

Click Next: Tags to continue, then on the next screen, you can skip adding tags (unless you’d like to add them) and you can click Next: Review and then Create User to finish the process.

The final screen that appears should report that the process was successful, and the Download .csv button gives you the ability to download your key. You can also reveal the secret access key by clicking the Show button, as I’ve done in Figure 20.6:

Figure 20.6: Creating an IAM user for Terraform (final screen)

I’d like to give you a few warnings about the key, though. First, whether you download the key or reveal it by clicking the Show button, this is the last time you’ll ever see it. You won’t be given another opportunity to download the full key. I recommend you download the key and store it in a safe place. You should protect the key and not let anyone have access to it, and you should definitely not upload it to a version control repository or any other resource that’s publicly available. And you should absolutely not show the key in clear text in a book that a bunch of people are going to read. If this key falls into the wrong hands, then anyone who has it will be able to interact with your AWS account. Treat this key with care. The only reason I show mine here is that I want you to see what the process actually looks like. I’ll delete it from my AWS account before the publishing process of this book is finalized. On your end, definitely don’t let this key leak!

Now we have everything we need to proceed to build AWS resources with Terraform. In the next section, we’ll create an EC2 instance.

Automating an EC2 instance deployment

Let’s take a look at an example Terraform configuration file that will allow us to build an EC2 instance:

provider "aws" {
    region = "us-east-1"
}
resource "aws_instance" "my-server-1" {
    ami                                   = "ami-09d56f8956ab235b3"
    associate_public_ip_address = "true"
    instance_type                         = "t2.micro"
    key_name                              = "jay_ssh"
    vpc_security_group_ids                = [ "sg-0597d57383be308b0" ]
    tags = {
        Name = "Web Server 1"
    }
}

Terraform files are saved with a .tf filename extension, and as for the actual name, you can call it whatever you wish. I named mine terraform_example_1.tf. The underscores in the filename aren’t required but make it easier to use on the command line since you won’t have to escape spaces. I placed my terraform_example_1.tf file inside a directory of its own, which is recommended. Your Terraform configuration files should be separate from other files, so having a dedicated directory for such files is ideal.

As for the actual code itself, let’s explore it section by section:

provider "aws" {
    region = "us-east-1"
}

The provider block tells Terraform what type of provider we’ll be working with. We’re setting that to aws here. As mentioned earlier, Terraform is able to work with various cloud providers, of which AWS is only one. Underneath that, we’re setting the region variable to us-east-1. On your end, I recommend setting this to whatever region you were using in the previous chapter; that will make the process easier for us since we already have some resources there that we can reuse for now.

resource "aws_instance" "my-server-1" {

Here, we’re starting a new resource block. Each provider has its own building blocks (resources), and specific to AWS, we can use aws_instance. On this line, we’re also naming the instance, and calling it my-server-1.

Note that this is a name within Terraform we’re providing, not the actual name that will be used in AWS itself. Within Terraform, we’ll want to have some sort of name to refer back to this particular AWS instance if we need to refer to it again elsewhere.

  ami                         = "ami-09d56f8956ab235b3"

Next, we’re choosing the AMI we’d like to use for our instance. As discussed in the previous chapter, an AMI is an image we can use to build a server in AWS. The AMI ID that I used here is for the official Ubuntu 22.04 AMI that AWS provides by default. AMIs are specific to the region they were created in, so the AMI ID here is specific to us-east-1.

If you’re also using us-east-1, you can use the above AMI ID as-is (so long as it’s not replaced by AWS with a newer one in the future). If in doubt, you can go into the AWS console, then navigate to the EC2 console, and go through the process as if you were going to manually create an EC2 instance based on Ubuntu, and copy the AMI ID from there. Perhaps even easier, you can use the Amazon EC2 AMI Locator (provided directly by Canonical) to find an AMI ID to use: https://cloud-images.ubuntu.com/locator/ec2/.

You’re able to filter that list by Ubuntu version as well as location. That way, you can find the AMI ID for an Ubuntu 22.04 AMI that’s within your chosen region. Change the AMI ID in the example code to the one you wish to use.
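As an alternative to hard-coding an AMI ID at all, Terraform can look up the latest matching image for you with a data source. The following is a hedged sketch of that approach; the owner ID shown is Canonical’s AWS account and the name filter is intended to match Ubuntu 22.04 images, but double-check both against the AWS provider documentation before relying on them:

data "aws_ami" "ubuntu" {
    most_recent = true
    owners      = ["099720109477"] # Canonical's AWS account ID

    filter {
        name   = "name"
        values = ["ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*"]
    }
}

# Then, inside the aws_instance block, you could reference it like this:
#     ami = data.aws_ami.ubuntu.id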

Also, you’ll probably notice that there are quite a few spaces in between ami and = "ami-09d56f8956ab235b3". It’s common practice with Terraform syntax to align the equals signs of every line within a block. This isn’t required, and nothing bad will happen if you don’t line everything up perfectly, but many would argue that the overall script is easier to read that way.

  associate_public_ip_address = "true"

With this line, we’re deciding to utilize a public IP address with our instance, which is required if we wish to be able to access it remotely.

  instance_type               = "t2.micro"

Here, we’re setting the desired instance type for our newly created server. As discussed in the previous chapter, there are multiple instance types available, each with a different cost. The t2.micro instance type is eligible for the free tier, so that’s the reason I chose it.

  key_name                    = "jay_ssh"

In the previous chapter, when we created an EC2 instance manually, part of that process was creating an OpenSSH key. The key that you’ve created is registered to your AWS account, so you can refer to it by the name you gave it. I called mine jay_ssh, but you’ll want to change this to whatever you named yours. You can see a list of your OpenSSH keys in the EC2 dashboard within AWS; there’s a section in the menu called Key Pairs where you can remind yourself what you’ve named your key if you forgot.
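As an aside, the key pair doesn’t have to be created by hand in the console; Terraform can register an existing public key for you with the aws_key_pair resource. Here’s a minimal sketch, assuming your public key is at the usual OpenSSH location; adjust the names and path to match your setup:

resource "aws_key_pair" "jay_ssh" {
    key_name   = "jay_ssh"
    public_key = file(pathexpand("~/.ssh/id_rsa.pub")) # your existing public key
}

# The instance could then reference the managed key pair:
#     key_name = aws_key_pair.jay_ssh.key_name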

  vpc_security_group_ids      = [ "sg-0597d57383be308b0" ]

During the last chapter, we created a security group that allowed both Apache and OpenSSH to communicate with our instance. When you create a security group, it’s designated with its own security group ID. The security group ID I used in the example was the one that was generated for me, so it won’t work for you. If you access the Security Groups section of the EC2 console, you can find the security group ID for the one that you’ve created. The ID for it should start with sg-, followed by a series of characters. Add yours in place of what I have for mine in the example.

  tags = {
    Name = "Web Server 1"
  }

In the last section of the example, we’re setting a tag. As discussed in the previous chapter, AWS allows you to create tags that are useful information you can have attached to an instance, which can give you additional information about its intended use. The Name tag is a special tag that changes the name that you see for the instance in the AWS EC2 list. You can name yours whatever you’d like.

At this point, we should be all set to go ahead and run Terraform to create our instance using our Terraform file as a blueprint. First, we need to set the access and secret access keys for Terraform to use. In your terminal, you can enter the following commands to do this:

export AWS_ACCESS_KEY_ID="AKIAVNXBZU2OBNWQQ7ET"
export AWS_SECRET_ACCESS_KEY="KVrAFvkwUa4Vn2ZIZHGy/IKMxdMo1plaMQoXZPVv" 

Those commands are simply run from your terminal and create environment variables containing the required keys. Terraform will look for the existence of the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY variables when it runs, and by exporting them, we’re making them available in our session. There’s actually a way to add the keys right into the Terraform file itself, but we don’t want to do that, because the keys could end up somewhere public if the file is ever uploaded or shared. There’s also a way to set up variables within the Terraform file to include these keys, but that’s beyond the scope of this chapter.
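As a side note, exporting environment variables isn’t the only approach. If you already keep credentials in the AWS shared credentials file (~/.aws/credentials), the AWS provider can read a named profile from there instead, which saves you from re-exporting the keys in every new terminal session. A minimal sketch, assuming a profile named terraform-provisioner exists in that file:

provider "aws" {
    region  = "us-east-1"
    profile = "terraform-provisioner" # profile name in ~/.aws/credentials
}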

After exporting the variables, we need to initialize Terraform to ensure it has the required components it needs to interact with AWS:

terraform init

With that example, you can add the full path to the terraform utility if it’s not in a shared $PATH location, such as /usr/local/bin, which is a recommended location that I mentioned earlier. If you did copy it to /usr/local/bin, then you should be able to simply type terraform instead of the full path.

The terraform init command instructs Terraform to initialize itself. It will look at any Terraform files you have in your current working directory and look for the provider line. In our case, that’s at the beginning of the file. We set it to aws. This will trigger Terraform to download the provider add-on for AWS:

Figure 20.7: Initializing Terraform

As you can see from Figure 20.7, when I ran the command on my end, it downloaded the AWS provider to prepare it for use.

Next, we should run what’s known as a Terraform plan. Running a plan instructs Terraform to not make any changes but instead to check your syntax and ensure that you haven’t mistyped anything:

terraform plan

The terraform plan command doesn’t just check syntax, it will connect to your provider, in our case AWS, to do an inventory and compare the changes in the configuration file to the current state of the provider. If it’s unable to connect to the provider, an error will be returned. If the connection is successful, Terraform will list all the changes it would’ve made if you instructed it to actually perform the tasks. In plan mode, it will never actually carry out any instructions but merely provide you with a preview.

If for some reason Terraform can’t connect to your AWS account, you should make sure you’ve run the two export commands earlier, and that you’ve done so with the appropriate values. If you close your terminal window, you’ll need to run those export commands again since those environment variables do not persist between terminal sessions. If successful, the Terraform plan will run:

Figure 20.8: Running a Terraform plan

In Figure 20.8, I’ve left off quite a bit of output. If your plan run was successful, Terraform will provide you with an overview of all the changes Terraform would’ve made if you were actually telling it to provision infrastructure.

If you would like to actually perform the changes, you can run a Terraform apply command. Before you do that though, always make it a habit to look at the output of the plan first. Notice the following line in the output:

Plan: 1 to add, 0 to change, 0 to destroy

In our case, it’s not going to destroy or change anything, but it’s going to add something if it were to run. If you scroll up, you can find additional detail about the changes it would make if we were to run an apply. Pay special attention to what it might want to destroy. For some changes, Terraform may deem it necessary to delete something and recreate it from scratch. If you’re using Terraform to update an existing server, you most likely won’t want that server to be deleted; in that case, don’t proceed with an apply. Always scrutinize the changes Terraform wants to make before you have it actually perform tasks.
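One habit that helps with this scrutiny is saving the plan to a file and then applying that exact plan, so nothing can change between your review and the execution. A short sketch of that workflow:

# Write the reviewed plan to a file, then apply exactly that plan
terraform plan -out=tfplan
terraform apply tfplan

When you apply a saved plan file, Terraform carries out exactly the actions contained in the plan, without prompting for approval again.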

Next, assuming we’re comfortable with the changes, we’ll proceed with an apply. Keep in mind that although running a plan will catch syntax errors and report them, a clean plan doesn’t guarantee that everything is correct. There’s only so much Terraform can verify before it actually makes changes, so some errors may only surface during an apply. As you can guess, the command to run the apply is fairly obvious:

terraform apply

When it runs, the terraform apply command will run another sanity check, show the number of changes again, and ask if you’d like to continue:

Figure 20.9: Running the terraform apply command

To proceed, type yes and press Enter. As it runs, you can actually see the resources you’re having Terraform create show up in the AWS console as they’re being provisioned. In the case of an EC2 instance as we’re doing here, we’ll be able to see the new instance in the list as it comes up:

Figure 20.10: An instance showing up in the EC2 list in AWS, created by Terraform

If all goes well, Terraform itself will report the process as being successful:

Figure 20.11: A successful terraform apply

Now we’ve created an EC2 instance in AWS, and we did so by utilizing automation. Sure, it hasn’t done all that much for us yet, but this is just a proof of concept. There are many things we can do with Terraform.

There’s something missing, though—security! We should also automate the process of adding a security group to the instance, which will provide us with the access we need to be able to connect to it and manage it. We’re able to access the instance now, but it’s very possible that it doesn’t have external internet access yet. In the next section, we’ll configure the security group for the instance as well, which will allow us to configure which ports are open and which IP addresses are able to communicate with our instance.

Managing security groups with Terraform

Security groups, as you learned in the previous chapter, allow you to control what is able to communicate with your resources. In the previous section, we reused the security group that we created last time, but it would be useful to understand how to create one from scratch.

Here’s the example Terraform file again, with some new code added:

provider "aws" {
    region = "us-east-1"
}
resource "aws_instance" "my-server-1" {
    ami                                   = "ami-09d56f8956ab235b3"
    associate_public_ip_address = "true"
    instance_type                         = "t2.micro"
    key_name                              = "jay_ssh"
    vpc_security_group_ids        = [   "${aws_security_group.external_access.id}" ]
    tags = {
        Name = "Web Server 1"
    }
}
  resource "aws_security_group" "external_access" {
        name          = "my_sg"
        description = "Allow OpenSSH and Apache"
        ingress {
        from_port   = 22
        to_port        = 22
        protocol       = "tcp"
        cidr_blocks  = [ "172.11.59.105/32" ]
        description   = "Home Office IP"
    }
    ingress {
        from_port   = 80
        to_port        = 80
        protocol       = "tcp"
        cidr_blocks  = [ "172.11.59.105/32" ]
        description   = "Home Office IP"
    }
    egress {
        from_port = 0
        to_port = 0
        protocol = "-1"
        cidr_blocks = ["0.0.0.0/0"]
    }
}

I’ve added an entirely new section to the file, but before we get to that, I also changed a line from the previous example: the one that sets vpc_security_group_ids:

vpc_security_group_ids      = [ "${aws_security_group.external_access.id}" ]

Previously, we set this line to the ID of the security group that already existed, the one we created in the previous chapter. The configuration I’ve added further down will create a new security group, and here I’m setting the value to a reference instead. The ${aws_security_group.external_access.id} expression is a reference to an attribute of another resource that Terraform manages. We use a reference for the security group ID because we have no idea what that ID will be; the new security group hasn’t even been created yet. By naming the security group resource we’ll be creating (external_access) and appending .id, we tell Terraform to substitute the real security group ID once it has been created. That way, we can assign the security group to the instance without having to know ahead of time what its ID will be.
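The same reference syntax is useful elsewhere, too. For example, Terraform has an output block that prints values once an apply completes; a short sketch that would display the new instance’s public IP address (the output name here is arbitrary) looks like this:

output "web_server_public_ip" {
    value = aws_instance.my-server-1.public_ip
}

After a successful apply, Terraform prints the value, and you can retrieve it again later with the terraform output command. Newer versions of Terraform also let you write references without the ${ } wrapper, as shown here.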

Further down the file, we begin a new section:

resource "aws_security_group" "external_access" {

With that line, we’re telling Terraform we’d like to create another new resource, this time a security group. We’re giving this security group a name of external_access, which is just a name within Terraform we can reference it as, not an actual name it will be called within AWS.

  name        = "my_sg"

Here, we’re giving the security group its actual name, the name we’ll see it shown as within AWS outside of Terraform.

  description = "Allow OpenSSH and Apache"

For the description line, there’s nothing too surprising here: we’re giving it a description we can use to describe its purpose and what the security group will be used for. Similar to the security group we’ve created manually in the previous chapter, we’ll be opening up OpenSSH and Apache with this security group.

  ingress {
      from_port   = 22
      to_port     = 22
      protocol    = "tcp"
      cidr_blocks = [ "172.11.59.105/32" ]
      description = "Home Office IP"
  }

The ingress block allows us to define a port on which incoming connections will be accepted; in this case, we’re allowing connections to port 22, which, as you probably already know, is the default port for OpenSSH. We don’t want to open this port up to receive connections from the entire public internet, so we’re allowing incoming traffic to this port only if it’s coming from the IP address 172.11.59.105/32.

In your case, you can replace that with the public IP address of your home office or organization.
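If you’re not sure what your public IP address is, one quick way to find out is to query a public IP check service from your workstation and append /32 to the result; for example:

curl https://checkip.amazonaws.com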

The second ingress block is the same as the first, only this time it’s allowing connections to port 80 for Apache:

  ingress {
      from_port   = 80
      to_port     = 80
      protocol    = "tcp"
      cidr_blocks = [ "172.11.59.105/32" ]
      description = "Home Office IP"
  }

We also add an egress rule because, without it, our instance will not be able to reach the internet:

  egress {
      from_port   = 0
      to_port     = 0
      protocol    = "-1"
      cidr_blocks = ["0.0.0.0/0"]
  }

As with the previous example, we’ll need to run a plan and then an apply to transform our new code into reality. I’ll leave it up to you to run both; as long as you haven’t mistyped anything, it should apply the changes and add the new security group. Inside AWS, you should see the new security group in the console, and also see it applied to your EC2 instance. Unless you reused the AMI from the previous chapter with Apache built in, you won’t be able to connect to the instance via port 80 since Apache is probably not installed, but I added it just to show you an example.

At this point, I recommend that you play around with the Terraform script we have so far, to get a feel for its syntax. Feel free to implement something extra; you can refer to the Terraform documentation for additional resources you can create with Terraform.

Congratulations on using Terraform to provision your infrastructure. Now, let’s use Terraform to destroy stuff.

Using Terraform to destroy unused resources

Although Terraform’s primary purpose is to create infrastructure, it can also be used to delete infrastructure. This function is known as a Terraform destroy. With destroy, Terraform will attempt to remove all infrastructure that’s defined in your configuration file. At this point, our configuration file creates an EC2 instance as well as a security group. If we run destroy against it, both resources will be removed.

Removing infrastructure with Terraform will likely be a use case you won’t utilize as often as creating resources. One of the values of the destroy functionality, though, is that you can use it to “reset” a test environment, by removing everything defined in the file. Then you’re free to use the same script to create everything again. On my end, I learn a lot faster by breaking things and fixing them repeatedly. You really shouldn’t run a destroy job against production infrastructure that you care about, but if you’re just using Terraform in a test account that doesn’t have any important instances inside it, then you can continually build and dismantle your test resources over and over as you learn. Another benefit is that an organization may test a Terraform build for a client in a test account first before implementing it in production, and you can verify that everything will be built correctly before performing the actual work for the client.

Performing a destroy within Terraform is just as simple as previous examples:

terraform destroy

Just like before, we’ll get confirmation first before it removes everything, showing us exactly what Terraform wants to remove when a destroy task is run:

Figure 20.12: Preparing to run terraform destroy to remove resources

Pay careful attention to what Terraform wants to delete when you run it with the destroy option. The screenshot doesn’t show the full output; it’s quite long. Similar to apply, if you scroll up, you’ll see that the output contains detail about what in particular will be removed if we agree to continue. If you type yes and press Enter, the resources identified will be destroyed, and you’ll receive a message confirming that the task was carried out:

Figure 20.13: Final confirmation after destroying previously provisioned resources

Basic usage of the terraform command is logically structured; we looked at how to run a plan as well as an apply, and now we know how to destroy our resources as well so we can start over with a clean slate. The majority of the time spent learning Terraform will be a matter of learning the syntax of its config files, but that will come in time. At this point in our journey, we should have a solid foundation we can build upon.
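One related option worth mentioning is that a destroy doesn’t have to be all or nothing: the -target flag limits the operation to a single resource address, which can be handy while experimenting. For example, the following would (after confirmation) remove only the instance and leave the security group in place:

terraform destroy -target=aws_instance.my-server-1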

However, we’re not done yet! I’ve referenced Ansible several times in this chapter, reminding you about the fact that we used it in the past to configure a server. But what if I told you we can combine Terraform and Ansible? We certainly can, and we’ll do so in the next section.

Combining Ansible with Terraform for a full deployment solution

One of the best things about automation tools is that they can often be combined to offer a shared benefit. Ansible is one of my favorite tools: you can automate the installation of packages, the creation of users, the copying of files, or most other tasks you can think of. If you are able to perform a task on the command line, chances are Ansible can automate it. Terraform, as you just saw, is really good at creating new infrastructure and automating the initial setup of servers, as well as networks and settings for AWS and other platforms. If we combine the two, it gets even better.

I find the duo of Terraform and Ansible to be a great fit; combining these two solutions works well in my experience. We can use Terraform to create our initial server and infrastructure builds, and then use Ansible to automate future enhancements. But it’s actually even better than that: we can configure Terraform to launch the initial Ansible run for us, so we only have to run a single script. After Terraform creates the infrastructure, provisioning of additional settings is handed off to Ansible. It’s a great combination.

How does it work? In the previous chapter, we explored the concept of user data, a feature within AWS that allows you to run a script as an instance is being created. We used it to install all available updates and then install Apache. The example we went over was a simple Bash script and wasn’t very exciting in and of itself. Sure, it did work, but we can implement a better solution. And you know what? We already have. In Chapter 15, Automating Server Configuration with Ansible, we were able to utilize Ansible Pull, a special mode of Ansible that allows us to pull code from a repository and run it locally on our instance. The Ansible playbook we wrote installs Apache for us, the same as our Bash script did in the previous chapter. As a refresher, we are able to run the following command to trigger Ansible Pull:

ansible-pull -U https://github.com/myusername/ansible.git

Of course, this requires Ansible itself to be installed, and the repository needs to already exist. If you have already followed along in that chapter and you still have the repository we’ve created, then you already have what you need to combine Ansible with Terraform.

To save you the trouble of flipping back to Chapter 15, Automating Server Configuration with Ansible, here’s the final local.yml file we ended up with:

---
- hosts: localhost
  become: true
  tasks:
  - name: Install Apache
    apt: name=apache2
  - name: Start the apache2 services
    service:
      name: apache2
      state: started
  - name: Copy index.html
    copy:
      src: index.html
      dest: /var/www/html/index.html

As you can see, this playbook is installing Apache, starting it, and also copying an index.html file to replace the default web page. It’s fairly easy to implement this in Terraform. Here’s our Terraform script again, with a new line added, which we’ll go over right after the listing:

provider "aws" {
    region = "us-east-1"
}
resource "aws_instance" "my-server-1" {
    ami                                   = "ami-0dba2cb6798deb6d8"
    associate_public_ip_address = "true"
    instance_type                         = "t2.micro"
    key_name                              = "jay_ssh"
    vpc_security_group_ids         = [ "${aws_security_group.external_access.id}" ]
    user_data = file("bootstrap.sh")
    tags = {
        Name = "Web Server 1"
    }
}
resource "aws_security_group" "external_access" {
    name = "my_sg"
    description = "Allow OpenSSH and Apache"
    ingress {
      from_port   = 22
      to_port     = 22
      protocol    = "tcp"
      cidr_blocks = [ "173.10.59.105/32" ]
      description = "Home Office IP"
  }
  ingress {
      from_port   = 80
      to_port     = 80
      protocol    = "tcp"
      cidr_blocks = [ "173.10.59.105/32" ]
      description = "Home Office IP"
  }
  egress {
      from_port = 0
      to_port = 0
      protocol = "-1"
      cidr_blocks = ["0.0.0.0/0"]
  }
}

The new addition to the file is the user_data line within the aws_instance block. We’re referencing a bootstrap script, and in that script, we’ll add any commands we wish to run on the newly created instance:

  user_data = file("bootstrap.sh")

bootstrap.sh will need to exist in the same directory as the Terraform configuration file itself. The file doesn’t exist yet though, so go ahead and create it, and inside you can place the following lines:

#!/bin/bash
# Refresh the package lists and install Ansible
sudo apt update
sudo apt install -y ansible
# Pull the Ansible repository and run the playbook locally
sudo ansible-pull -U https://github.com/myusername/ansible.git

We haven’t made an overly complex change to the file, but what we did add gives us a great deal of benefits. The user_data option allows us to leverage the same user data function that’s built into AWS and schedule commands to run when an instance is first created. In this example, we utilize the user_data option to run a series of commands against the new instance, which will install Ansible and then launch ansible-pull to download a repository containing an Ansible playbook and run it locally. The playbook itself was set up in Chapter 15, Automating Server Configuration with Ansible, so we’re just leveraging what we’ve already created in the past, and we’re having Terraform kick off the Ansible job for us when it brings up the instance.
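If the instance comes up but Apache never appears, the user data script is the first thing to check. On Ubuntu AMIs, cloud-init captures the output of user data scripts in a log file, so you can connect to the instance and review it. A quick sketch, where the IP address and key file are placeholders for your instance’s public IP and your private key:

ssh -i ~/.ssh/jay_ssh.pem ubuntu@203.0.113.10
sudo tail -n 50 /var/log/cloud-init-output.log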

That brings us to the end of this chapter. I hope setting up automation with Terraform was a fun experience; I definitely enjoy working with it.

Summary

There are many configuration management and provisioning tools available for automating our infrastructure builds. In this chapter, we took a look at Terraform, and then we even combined it with Ansible, which we were already using. Using Terraform, we were able to automate the creation of an EC2 instance in AWS, along with a security group to control how it can be accessed. Terraform is a very large subject, and the concepts covered in this chapter are only the beginning. There’s so much more you can do with Terraform, and I highly recommend you keep practicing with it to come up to speed.

In the next chapter, we’re going to learn some methods we can utilize to add additional security to our Ubuntu servers. While no server is bulletproof, there’s a basic level of security we can implement that will make it less likely for our server to be compromised. It will be a very important chapter, so you won’t want to miss it.

Join our community on Discord

Join our community’s Discord space for discussions with the author and other readers:

https://packt.link/LWaZ0
