Proxying traffic from Docker containers with Traefik

Traefik is a fast, powerful, and easy-to-use reverse proxy. You run it in a container, publish the HTTP (or HTTPS) port, and configure the container to listen for events from the Docker Engine API:

docker container run -d -P `
--volume \\.\pipe\docker_engine:\\.\pipe\docker_engine `
sixeyed/traefik:v1.7.8-windowsservercore-ltsc2019 `
--docker --docker.endpoint=npipe:////./pipe/docker_engine
Traefik is an official image on Docker Hub, but just like NATS, the only Windows images available are based on Windows Server 2016. I'm using my own image here, based on Windows Server 2019. The Dockerfile is in my sixeyed/dockerfiles-windows repository on GitHub, but you should check Docker Hub to see whether there's a 2019 variant of the official Traefik image before you use mine.

You've seen the volume option before - it's used to mount a filesystem directory on the host into the container. Here, I'm using it to mount a Windows named pipe, called docker_engine. Pipes are a networking approach for client-server communication. The Docker CLI and Docker API support connections over both TCP/IP and named pipes. Mounting a pipe like this lets a container query the Docker API without needing to know the IP address of the host where the container is running.
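If you want to see the named pipe in action, you can point the Docker CLI at it directly. This is a quick check of my own, not part of the book's scripts:

# my own sanity check: connect the Docker CLI to the engine over the named pipe
# rather than TCP/IP, and confirm the engine responds
docker -H npipe:////./pipe/docker_engine version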

Traefik subscribes to the event stream from the Docker API with the named pipe connection, using the connection details in the docker.endpoint option. It will get notifications from Docker when containers are created or removed, and Traefik uses the data in those events to build its own routing map.
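You can watch the same event stream yourself with the Docker CLI. This command isn't part of the NerdDinner scripts, but it shows the container-lifecycle notifications that Traefik consumes to build its routing map:

# my own check: print container events as they happen, showing the action and container name
docker system events --filter type=container --format "{{.Action}}: {{.Actor.Attributes.name}}"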

When you have Traefik running, you create your application containers with labels to tell Traefik which requests should be routed to which containers. Labels are just key-value pairs that can be applied to containers when you create them. They are surfaced in the event stream from Docker. Traefik uses labels with the prefix traefik.frontend to build its routing rules. This is how I run the API container with routing by Traefik:

docker container run -d `
--name nerd-dinner-api `
-l "traefik.frontend.rule=Host:api.nerddinner.local" `
dockeronwindows/ch05-nerd-dinner-api:2e;

Docker creates the container called nerd-dinner-api and then publishes an event with the new container's details. Traefik gets that event, and adds a rule to its routing map. Any requests that come into Traefik with the HTTP Host header api.nerddinner.local will be proxied from the API container. The API container does not publish any ports - the reverse proxy is the only publicly accessible component.
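To confirm what Traefik sees for a container, you can inspect the labels on it. This is just a sanity check of mine, not something the book's scripts do:

# my own check: show the labels Docker surfaces for the API container,
# including the traefik.frontend.rule label used for routing
docker container inspect --format "{{ json .Config.Labels }}" nerd-dinner-api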

Traefik has a very rich set of routing rules, using different parts of the HTTP request—the host, path, headers, and query string. You can map anything from wildcard strings to very specific URLs using Traefik's rules. There's much more that Traefik can do too, like load balancing and SSL termination. The documentation can be found at https://traefik.io.
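As an illustration of combining matchers, here's a hypothetical container run - the container name and the rule are my own examples, not part of NerdDinner. In Traefik 1.7, a semicolon joins conditions, so this frontend only matches requests for the API host with a path under /v2:

# hypothetical example (not in the book's scripts): a frontend rule that combines
# a Host matcher and a PathPrefix matcher - both must match for the request to be routed here
docker container run -d `
--name example-api-v2 `
-l "traefik.frontend.rule=Host:api.nerddinner.local;PathPrefix:/v2" `
dockeronwindows/ch05-nerd-dinner-api:2e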

Using similar rules I can deploy the new version of NerdDinner and have all the frontend containers proxied by Traefik. The script ch05-run-nerd-dinner_part-2.ps1 is an upgrade that removes the existing web containers first:

docker container rm -f nerd-dinner-homepage
docker container rm -f nerd-dinner-web

Labels and environment variables are applied when a container is created, and they last for the life of the container. You can't change those values on an existing container; you need to remove it and create a new one. I want to run the NerdDinner web and home page containers with labels for Traefik, so I need to replace the existing containers. The rest of the script starts Traefik, replaces the web containers with a new configuration, and starts the API container:

docker container run -d -p 80:80 `
-v \\.\pipe\docker_engine:\\.\pipe\docker_engine `
sixeyed/traefik:v1.7.8-windowsservercore-ltsc2019 `
--api --docker --docker.endpoint=npipe:////./pipe/docker_engine

docker container run -d `
--name nerd-dinner-homepage `
-l "traefik.frontend.rule=Path:/,/css/site.css" `
-l "traefik.frontend.priority=10" `
dockeronwindows/ch03-nerd-dinner-homepage:2e;

docker container run -d `
--name nerd-dinner-web `
--env-file api-keys.env `
-l "traefik.frontend.rule=PathPrefix:/" `
-l "traefik.frontend.priority=1" `
-e "DinnerApi:Enabled=true" `
dockeronwindows/ch05-nerd-dinner-web:2e;

docker container run -d `
--name nerd-dinner-api `
-l "traefik.frontend.rule=PathPrefix:/api" `
-l "traefik.frontend.priority=5" `
dockeronwindows/ch05-nerd-dinner-api:2e;
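Once the script has run, a quick listing confirms that Traefik is the only container publishing a port - every other component is reached through the proxy. This is my own check, not part of the script:

# my own check: list container names and published ports -
# only the Traefik container should show a port mapping
docker container ls --format "table {{.Names}}\t{{.Ports}}"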

Now when I load the NerdDinner website, I'm browsing to the Traefik container on port 80. I'm using Host header routing rules, so I'll put http://nerddinner.local into my browser. This is a local development environment, so I've added these values to my hosts file (in test and production environments, there would be a real DNS system resolving the host names):

127.0.0.1  nerddinner.local
127.0.0.1 api.nerddinner.local
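One way to add those entries is from an elevated PowerShell session - this is my own snippet, not part of the book's scripts:

# my own snippet: append the host names to the Windows hosts file (requires an elevated session)
Add-Content -Path "$env:SystemRoot\System32\drivers\etc\hosts" -Value "127.0.0.1  nerddinner.local"
Add-Content -Path "$env:SystemRoot\System32\drivers\etc\hosts" -Value "127.0.0.1  api.nerddinner.local"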

The home page request for the path / gets proxied from the home page container, and I also have a routing path specified for the CSS file so that I see the new home page complete with styling.

The response is generated by the home page container, but proxied by Traefik. I can browse to api.nerddinner.local and see all the dinners in JSON format from the new REST API container.
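You can make the same check from PowerShell rather than the browser. I'm assuming the dinners endpoint is at /api/dinners - adjust the path to match the API:

# my own check, assuming /api/dinners is the dinners endpoint: the hosts entry makes
# api.nerddinner.local resolve locally, so Traefik receives the request and routes it on the Host header
Invoke-RestMethod -Uri http://api.nerddinner.local/api/dinners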

The original NerdDinner app still works in the same way, but when I browse to /Dinners, the list of dinners to display is fetched from the API instead of the database directly.

Working out the routing rules for the proxy is one of the harder parts of breaking up a monolith into multiple frontend containers. Microservice apps tend to be easier here, because they're designed as separate concerns running at different domain paths. You'll need a good understanding of Traefik's rules, and of regular expressions, when you start routing UI features to their own containers.

Container-first design has let me modernize the architecture of NerdDinner without a complete rewrite. I'm using enterprise-grade open source software and Docker to power the following three patterns for breaking up the monolith:

  • Making features asynchronous by publishing and subscribing to events on a message queue
  • Exposing data with REST APIs, using a simple modern technology stack
  • Splitting frontend features across multiple containers and routing between them with a reverse proxy

Now I can be far more agile about delivering improvements to features because I won't always need to regression test the full application. I also have events that are published from key user activities, which is a step towards event-driven architecture. This lets me add completely new features without changing any existing code.
