Storing and accessing files easily and at scale is an essential part of a modern infrastructure, and Amazon S3 is Amazon's answer to this need. S3 stores "objects" in "buckets" and has no practical storage limit (one constraint: bucket names must be globally unique, since the namespace is shared across all of S3). We'll see how to make the best use of S3 with Terraform.
To step through this recipe, you will need the following:
We'll start by creating a simple and explicitly public bucket on S3 named iac-book, using the aws_s3_bucket resource (and a tag for the sake of it):
resource "aws_s3_bucket" "iac_book" {
  bucket = "iac-book"
  acl    = "public-read"

  tags {
    Name = "IAC Book Bucket in ${var.aws_region}"
  }
}
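Note that this is the classic Terraform 0.x syntax. In version 4 and later of the AWS provider, the inline acl argument was removed in favor of a dedicated resource, and tags became an assignment. A sketch of the equivalent, assuming that provider version:

```hcl
# Equivalent configuration for AWS provider v4+ (a sketch):
# the ACL moves to its own resource, and tags use "=".
resource "aws_s3_bucket" "iac_book" {
  bucket = "iac-book"

  tags = {
    Name = "IAC Book Bucket in ${var.aws_region}"
  }
}

resource "aws_s3_bucket_acl" "iac_book" {
  bucket = aws_s3_bucket.iac_book.id
  acl    = "public-read"
}
```

Also be aware that buckets created since April 2023 block public ACLs by default, so you would additionally have to relax the bucket's public-access-block and object-ownership settings before a public-read ACL is accepted.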
After a terraform apply, your bucket is immediately available for storing objects. You can see it on the AWS S3 Console (https://console.aws.amazon.com/s3/):
Let's store a first object right now: a very simple file containing the string "Hello Infrastructure-as-Code Cookbook!". The resource is named aws_s3_bucket_object, and you need to reference the previously created bucket, the destination name (index.html), and its content. The ACL is again explicitly public:
resource "aws_s3_bucket_object" "index" {
  bucket       = "${aws_s3_bucket.iac_book.bucket}"
  key          = "index.html"
  content      = "<h1>Hello Infrastructure-as-Code Cookbook!</h1>"
  content_type = "text/html"
  acl          = "public-read"
}
You can alternatively provide a file directly instead of its content:
source = "index.html"
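When uploading from a file, Terraform won't notice by itself that the file's content changed on disk. The aws_s3_bucket_object resource accepts an etag argument for this: setting it to the file's MD5 hash makes the provider compare it against the object's ETag in S3 and re-upload on change. A sketch in the same 0.x interpolation syntax as the rest of this recipe:

```hcl
# Re-upload index.html whenever its content changes,
# by tracking the file's MD5 hash as the object's ETag.
resource "aws_s3_bucket_object" "index" {
  bucket       = "${aws_s3_bucket.iac_book.bucket}"
  key          = "index.html"
  source       = "index.html"
  content_type = "text/html"
  acl          = "public-read"
  etag         = "${md5(file("index.html"))}"
}
```

In Terraform 0.12+, the shorter filemd5("index.html") function does the same thing.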
If you navigate to the AWS S3 Console, you can see it available with some extended information:
It would be great if we could easily get the URL of our file right from Terraform, so we could share it with others. Unfortunately, there's no built-in attribute for that. However, we know how path-style URLs are constructed: http://s3-<region>.amazonaws.com/bucket_name/object_name. Let's create an output containing this information:
output "S3" {
  value = "http://s3-${aws_s3_bucket.iac_book.region}.amazonaws.com/${aws_s3_bucket.iac_book.id}/${aws_s3_bucket_object.index.key}"
}
Paste the link in a web browser and you'll be able to access your file.
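The URL above uses path-style addressing. S3 also serves objects through virtual-hosted-style URLs, where the bucket name becomes part of the hostname, and this is the style Amazon now recommends. A sketch of the corresponding output, built from the same attributes:

```hcl
# Virtual-hosted-style URL: the bucket name is a subdomain
# (https://<bucket>.s3.<region>.amazonaws.com/<key>).
output "S3_virtual_hosted" {
  value = "https://${aws_s3_bucket.iac_book.bucket}.s3.${aws_s3_bucket.iac_book.region}.amazonaws.com/${aws_s3_bucket_object.index.key}"
}
```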
A workaround is to use the static website hosting feature of S3, by simply adding the following to your aws_s3_bucket resource:
website {
  index_document = "index.html"
}
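As with the ACL, AWS provider v4+ moved website hosting out of the bucket resource into its own resource. A sketch of the modern equivalent, assuming that provider version:

```hcl
# AWS provider v4+ equivalent: website settings live in
# a dedicated aws_s3_bucket_website_configuration resource.
resource "aws_s3_bucket_website_configuration" "iac_book" {
  bucket = aws_s3_bucket.iac_book.id

  index_document {
    suffix = "index.html"
  }
}
```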
An optional output will give you its static hosting URL (in our case, iac-book.s3-website-eu-west-1.amazonaws.com instead of http://s3-eu-west-1.amazonaws.com/iac-book/index.html):
output "s3_endpoint" {
  value = "${aws_s3_bucket.iac_book.website_endpoint}"
}
Using Ansible, there are many ways to create a bucket. Here's a simple bucket with public read permissions, using the classic s3 module:
---
- name: create iac-book bucket
  s3:
    bucket: iac-book
    mode: create
    permission: public-read
Here's how we would simply upload our previous index.html file using the same s3 module:
- name: create index.html file
  s3:
    bucket: iac-book
    object: index.html
    src: index.html
    mode: put
    permission: public-read
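Recent Ansible releases renamed this module: it became aws_s3 and now lives in the amazon.aws collection as s3_object. A sketch of the same upload in current syntax, assuming the amazon.aws collection is installed:

```yaml
# Same upload with the modern amazon.aws collection
# (the legacy "s3" module was renamed to s3_object).
- name: create index.html file
  amazon.aws.s3_object:
    bucket: iac-book
    object: index.html
    src: index.html
    mode: put
    permission: public-read
```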