Terraform can also be used to manipulate Docker. The classic usage is against a Docker server already running somewhere on the network, but it works exactly the same locally with your own Docker installation. Using Terraform to control Docker, we'll be able to dynamically trigger Docker image updates, execute containers with every imaginable option, manipulate Docker networks, and use Docker volumes.
Here, we'll deploy an isolated blog container (Ghost) that will be publicly served over HTTP by the nginx-proxy container. This very useful nginx-proxy container is proposed by Jason Wilder from InfluxDB on his GitHub: https://github.com/jwilder/nginx-proxy.
To step through this recipe, you will need a working Terraform installation and access to a Docker Engine, local or remote.
Before starting to code anything using Terraform, ensure you can connect to any kind of Docker Engine, local or remote:
$ docker version
Client:
 Version:      1.12.0
 API version:  1.24
 Go version:   go1.6.3
 Git commit:   8eab29e
 Built:        Thu Jul 28 21:15:28 2016
 OS/Arch:      darwin/amd64

Server:
 Version:      1.12.0
 API version:  1.24
 Go version:   go1.6.3
 Git commit:   8eab29e
 Built:        Thu Jul 28 21:15:28 2016
 OS/Arch:      linux/amd64
If you have issues at this point, you need to fix them before going further.
Our goal is to serve, through an nginx-proxy container, a blog container (Ghost) that will not be directly available on the network.
If you're connecting to a remote Docker server, you need to configure the Docker provider (for example, in provider.tf). Alternatively, the provider can use the DOCKER_HOST environment variable, or default to the local daemon if nothing is specified. When running this exercise locally, you can simply omit the provider block:

provider "docker" {
  host = "tcp://1.2.3.4:2375"
}
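If the remote daemon is protected by TLS, the provider can also be pointed at client certificates. The following is a minimal sketch, assuming the daemon listens on the conventional TLS port 2376 and the certificate directory is an illustrative path:

```hcl
# Sketch of a TLS-enabled provider configuration. The cert_path
# directory is assumed to contain ca.pem, cert.pem, and key.pem;
# the host and path values here are illustrative.
provider "docker" {
  host      = "tcp://1.2.3.4:2376"
  cert_path = "/home/user/.docker"
}
```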
Let's start by declaring two data sources, one for each of our Docker images (in docker.tf). The ghost image will use the 0.10 version tag, while nginx-proxy will use the 0.4.0 version tag. Using a data source will help us manipulate the image later:

data "docker_registry_image" "ghost" {
  name = "ghost:0.10"
}

data "docker_registry_image" "nginx-proxy" {
  name = "jwilder/nginx-proxy:0.4.0"
}
Now that we can access the images, let's pull them using the docker_image resource. We're reusing the information our data sources expose, such as the image name and its SHA256 digest, so we know when a new image is available to pull:

resource "docker_image" "ghost" {
  name         = "${data.docker_registry_image.ghost.name}"
  pull_trigger = "${data.docker_registry_image.ghost.sha256_digest}"
}

resource "docker_image" "nginx-proxy" {
  name         = "${data.docker_registry_image.nginx-proxy.name}"
  pull_trigger = "${data.docker_registry_image.nginx-proxy.sha256_digest}"
}
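To watch this trigger in action, a simple output can surface the registry digest Terraform is tracking; the output name here is illustrative:

```hcl
# Illustrative output: after a terraform apply, "terraform output"
# shows the digest that pull_trigger is watching for changes.
output "ghost_image_digest" {
  value = "${data.docker_registry_image.ghost.sha256_digest}"
}
```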
Let's now declare the private Ghost container (without any port mapping), using the docker_container resource. We use the image we just declared through the docker_image resource, and export an environment variable named VIRTUAL_HOST, which is used by the nginx-proxy container (refer to the nginx-proxy documentation for more information). Replace localhost with the host you want if you're not running against a local Docker host:

resource "docker_container" "ghost" {
  name  = "ghost"
  image = "${docker_image.ghost.latest}"
  env   = ["VIRTUAL_HOST=localhost"]
}
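Since this container publishes no ports, it is only reachable on the Docker network. As a quick sanity check, an output can expose the internal IP Docker assigned to it; this sketch relies on the ip_address attribute that the docker_container resource exports:

```hcl
# Sketch: surface the container's private IP so you can verify the
# blog responds from the Docker host itself before wiring the proxy.
output "ghost_container_ip" {
  value = "${docker_container.ghost.ip_address}"
}
```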
Now let's start the nginx-proxy container. We know from its documentation that it needs the Docker socket (/var/run/docker.sock) shared in read-only mode to dynamically access the running containers, and we want it to run on the default HTTP port (tcp/80). Let's do that:

resource "docker_container" "nginx-proxy" {
  name  = "nginx-proxy"
  image = "${docker_image.nginx-proxy.latest}"

  ports {
    internal = 80
    external = 80
    protocol = "tcp"
  }

  volumes {
    host_path      = "/var/run/docker.sock"
    container_path = "/tmp/docker.sock"
    read_only      = true
  }
}
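Note that, as declared, Ghost stores its content inside the container, so the blog data is lost if the container is recreated. One possible extension, sketched here, is a named Docker volume via the docker_volume resource; the volume name and the /var/lib/ghost content path are assumptions to check against your image version:

```hcl
# Hypothetical extension: a named volume to persist the Ghost content.
resource "docker_volume" "ghost_content" {
  name = "ghost_content"
}

# It would then be mounted by adding a volumes block to the
# docker_container.ghost resource declared earlier:
# volumes {
#   volume_name    = "${docker_volume.ghost_content.name}"
#   container_path = "/var/lib/ghost"
# }
```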
Now, if you terraform apply this, you can navigate to http://localhost/admin (replace localhost with the Docker server you used) and set up your Ghost blog!