Containers offer a layer of abstraction at the application layer, shifting the installation of packages and dependencies from deploy time to build time. This is important because engineers now ship units of code that run and deploy in a uniform way regardless of the environment. Promoting containers as runnable units reduces the risk of dependency and configuration snafus between environments. Given this, there has been a large drive for organizations to deploy their applications on container platforms. When running applications on a container platform, it’s common to containerize as much of the stack as possible, including your proxy or load balancer. NGINX and NGINX Plus containerize and ship with ease. They also include many features that make delivering containerized applications fluid. This chapter focuses on building NGINX and NGINX Plus container images, features that make working in a containerized environment easier, and deploying your image on Kubernetes and OpenShift.
When containerizing, it’s common to decompose services into smaller applications. When doing so, those applications are tied back together by an API gateway. The first section in this chapter provides a common scenario: using NGINX as an API gateway to secure, validate, authenticate, and route requests to the appropriate service.
A couple of architecture considerations about running NGINX or NGINX Plus in a container should be called out. When containerizing a service, to make use of the Docker log driver, access logs must be directed to /dev/stdout and error logs to /dev/stderr. Doing so streams the logs to the Docker log driver, which can route them to consolidated logging servers natively.
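A minimal sketch of this logging setup follows; it assumes the `main` log format defined in the stock nginx.conf (the official NGINX image achieves the same effect by symlinking the default log paths to these devices):

```nginx
# Stream access logs to stdout and error logs to stderr so the
# container runtime's log driver picks them up.
access_log /dev/stdout main;
error_log  /dev/stderr warn;
```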
Load-balancing methods are also a consideration when using NGINX Plus in a containerized environment. The least_time load-balancing method was designed with containerized networking overlays in mind. By favoring low response time, NGINX Plus passes the incoming request to the upstream server with the fastest average response time. When all servers are adequately load balanced and performing equally, NGINX Plus optimizes by network latency, preferring servers in the closest network proximity.
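As an illustrative sketch (the upstream name and addresses are hypothetical; least_time is an NGINX Plus-only directive):

```nginx
upstream app_servers {
    # Prefer the server with the lowest average time to return
    # response headers; use "last_byte" instead of "header" to
    # measure full response time.
    least_time header;
    server 10.0.0.21:80;
    server 10.0.0.22:80;
}
```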
Use NGINX or NGINX Plus as an API gateway. An API gateway provides an entry point to one or more application programming interfaces (APIs). NGINX fits this role very well. This section will highlight some core concepts and reference other sections within this book for more detail on specifics. It’s also important to note that NGINX has published an entire ebook on this topic: Deploying NGINX Plus as an API gateway.
Start by defining a server block for your API gateway within its own file. A name such as /etc/nginx/api_gateway.conf will do.
server {
    listen 443 ssl;
    server_name api.company.com;
    # SSL Settings Chapter 7
    default_type application/json;
}
Add some basic error-handling responses to your server definition:
proxy_intercept_errors on;

error_page 400 = @400;
location @400 { return 400 '{"status":400,"message":"Bad request"}\n'; }

error_page 401 = @401;
location @401 { return 401 '{"status":401,"message":"Unauthorized"}\n'; }

error_page 403 = @403;
location @403 { return 403 '{"status":403,"message":"Forbidden"}\n'; }

error_page 404 = @404;
location @404 { return 404 '{"status":404,"message":"Resource not found"}\n'; }
The above section of NGINX configuration can be added directly to the server block in /etc/nginx/api_gateway.conf, or placed in a separate file and imported via an include directive. The include directive is covered in Recipe 17.1.
Use an include directive to import this server configuration into the main nginx.conf file within the http context:
include /etc/nginx/api_gateway.conf;
You now need to define your upstream service endpoints. Chapter 2 covers load balancing, which discusses the upstream block. As a reminder, upstream is valid within the http context, not within the server context. The following must be included or set outside of the server block.
upstream service_1 {
    server 10.0.0.12:80;
    server 10.0.0.13:80;
}

upstream service_2 {
    server 10.0.0.14:80;
    server 10.0.0.15:80;
}
Depending on the use case, you may want to declare your services inline, in a single included file, or in one included file per service. There is also a case for defining a service as a proxy location endpoint; in that event, it’s suggested to define the endpoint as a variable for use throughout. Chapter 5, Programmability and Automation, discusses ways to automate adding and removing machines from upstream blocks.
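A sketch of the variable-endpoint approach (the resolver address, hostname, and service name here are hypothetical). Using a variable in proxy_pass causes NGINX to re-resolve the name per the resolver’s TTL rather than only at startup, which suits dynamic environments:

```nginx
resolver 10.0.0.2 valid=30s;

server {
    # Endpoint defined once as a variable, reusable throughout.
    set $service_3_endpoint service_3.internal.example.com;

    location = /_service_3 {
        internal;
        proxy_pass http://$service_3_endpoint$request_uri;
    }
}
```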
Build an internally routable location within the server block for each service:
location = /_service_1 {
    internal;
    # Config common to service
    proxy_pass http://service_1$request_uri;
}

location = /_service_2 {
    internal;
    # Config common to service
    proxy_pass http://service_2$request_uri;
}
By defining internal routable locations for these services, configuration that is common to the service can be defined once, rather than repeatedly.
From here, we need to build up location blocks that define specific URI paths for a given service. These blocks validate and route the request appropriately. An API gateway can be as simple as routing requests based on path, or as detailed as defining specific rules for every single accepted API URI. In the latter case, you’ll want to devise a file structure for organization and use NGINX includes to import your configuration files. This concept is discussed in Recipe 17.1.
Create a new directory for the API gateway:
mkdir /etc/nginx/api_conf.d/
Build a specification of a service use case by defining location blocks within a file, at a path that makes sense for your configuration structure. Use the rewrite directive to direct the request to the previously configured location block that proxies the request to a service. The rewrite directive used below instructs NGINX to reprocess the request with an altered URI. The following defines rules specific to an API resource, restricts HTTP methods, then uses the rewrite directive to send the request to the previously defined internal common proxy location for the service.
location /api/service_1/object {
    limit_except GET PUT {
        deny all;
    }
    rewrite ^ /_service_1 last;
}

location ~ /api/service_1/object/[^/]*$ {
    limit_except GET POST {
        deny all;
    }
    rewrite ^ /_service_1 last;
}
Repeat this step for each service. Employ logical separation by means of file and directory structures to organize effectively for the use case. Use any and all information provided in this book to configure API location blocks to be as specific and restrictive as possible.
If separate files were used for the above location blocks, ensure they’re included in your server context (upstream blocks belong in the http context):
server {
    listen 443 ssl;
    server_name api.company.com;
    # SSL Settings Chapter 7
    default_type application/json;

    include api_conf.d/*.conf;
}
Enable authentication to protect private resources by using one of the many methods discussed in Chapter 6, or something as simple as preshared API keys, as follows (note the map directive is only valid in the http context):
map $http_apikey $api_client_name {
    default "";

    "j7UqLLB+yRv2VTCXXDZ1M/N4" "client_one";
    "6B2kbyrrTiIN8S8JhSAxb63R" "client_two";
    "KcVgIDSY4Nm46m3tXVY3vbgA" "client_three";
}
Protect backend services from attack by employing learnings from Chapter 2 to limit usage. In the http context, define one or many request-limit shared memory zones:
limit_req_zone $http_apikey zone=limitbyapikey:10m rate=100r/s;
limit_req_status 429;
Protect a given context with rate limits and authentication:
location /api/service_2/object {
    limit_req zone=limitbyapikey;

    # Consider writing these if blocks to a file
    # and using an include where needed.
    if ($http_apikey = "") {
        return 401;
    }
    if ($api_client_name = "") {
        return 403;
    }

    limit_except GET PUT {
        deny all;
    }
    rewrite ^ /_service_2 last;
}
Test out some calls to your API gateway:
curl -H "apikey: 6B2kbyrrTiIN8S8JhSAxb63R" \
    https://api.company.com/api/service_2/object
API gateways provide an entry point to an application programming interface (API). That sounds vague and basic, so let’s dig in. Integration points happen at many different layers. Any two independent services that need to communicate (integrate) should hold an API version contract. Such version contracts define the compatibility of the services. An API gateway enforces such contracts—authenticating, authorizing, transforming, and routing requests between services.
This section demonstrated how NGINX can function as an API gateway by validating, authenticating, and directing incoming requests to specific services and limiting their usage. This tactic is popular in microservice architectures, where a single API offering is split among different services.
Apply all of your learnings thus far to construct an NGINX server configuration to the exact specifications of your use case. By weaving together the core concepts demonstrated in this text, you have the ability to authenticate and authorize the use of URI paths, route or rewrite requests based on any factor, limit usage, and define what is and is not accepted as a valid request. There will never be a single solution to an API gateway, as each is infinitely definable to the use case it serves.
An API gateway provides an ultimate collaboration space between operations and application teams to form a true DevOps organization. Application development defines the validity parameters of a given request. Delivery of such a request is typically managed by what is considered IT (networking, infrastructure, security, and middleware teams). An API gateway acts as an interface between those two layers, so its construction requires input from all sides. Its configuration should be kept in some sort of source control. Many modern-day source-control repositories have the concept of code owners, which allows you to require specific users’ approval for certain files. In this way, teams can collaborate but still verify changes specific to a given department.
Something to keep in mind when working with API gateways is the URI path. In the example configuration, the entire URI path is passed to the upstream servers. This means the service_1 example needs to have handlers at the /api/service_1/* path. To perform path-based routing in this way, it’s best that the application doesn’t have conflicting routes with another application.
If conflicting routes do exist, there are a few things you can do. Edit the code to resolve the conflicts, or add a URI-prefix configuration to one or both applications to move one of them to another context. In the case of off-the-shelf software that can’t be edited, you can rewrite the request’s URI before proxying upstream. However, if the application returns links in its response body, you’ll need to use regular expressions (regex) to rewrite the body before returning it to the client; this should be avoided where possible.
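As a sketch of the prefix-rewrite approach, assuming a hypothetical off-the-shelf application (upstream name legacy_app) that only serves routes under /, mounted by the gateway at /api/legacy/:

```nginx
location /api/legacy/ {
    # Strip the gateway prefix before proxying, so the
    # application sees /object instead of /api/legacy/object.
    rewrite ^/api/legacy/(.*)$ /$1 break;
    proxy_pass http://legacy_app;
}
```

Remember that if this application emits absolute links in its responses, those links will still carry the unprefixed paths, which is why this pattern is best reserved for APIs that return relative or path-agnostic bodies.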
Specify the service directive with a value of http on an upstream server to instruct NGINX to utilize the SRV record as a load-balancing pool:
http {
    resolver 10.0.0.2 valid=30s;

    upstream backend {
        zone backends 64k;
        server api.example.internal service=http resolve;
    }
}
This feature is an NGINX Plus exclusive. The configuration instructs NGINX Plus to resolve DNS from a DNS server at 10.0.0.2 and to set up an upstream server pool with a single server directive. This server directive, specified with the resolve parameter, is instructed to periodically re-resolve the domain name based on the DNS record TTL, or the valid override parameter of the resolver directive. The service=http parameter and value tells NGINX that this is an SRV record containing a list of IPs and ports, and to load balance over them as if they were configured with the server directive.
Dynamic infrastructure is becoming ever more popular with the demand and adoption of cloud-based infrastructure. Auto Scaling environments scale horizontally, increasing and decreasing the number of servers in the pool to match the demand of the load. Scaling horizontally demands a load balancer that can add and remove resources from the pool. With an SRV record, you offload the responsibility of keeping the list of servers to DNS. This type of configuration is extremely enticing for containerized environments because you may have containers running applications on variable port numbers, possibly at the same IP address. It’s important to note that UDP DNS record payload is limited to about 512 bytes.
Use the NGINX image from Docker Hub. This image contains a default configuration. You’ll need to either mount a local configuration directory or create a Dockerfile and ADD your configuration to the image build to alter the configuration. Here we mount a volume so that NGINX’s default configuration serves static content, demonstrating its capabilities with a single command:
$ docker run --name my-nginx -p 80:80 \
    -v /path/to/content:/usr/share/nginx/html:ro -d nginx
The docker command pulls the nginx:latest image from Docker Hub if it’s not found locally. The command then runs this NGINX image as a Docker container, mapping localhost:80 to port 80 of the NGINX container. It also mounts the local directory /path/to/content/ as a container volume at /usr/share/nginx/html/ as read only. The default NGINX configuration will serve this directory as static content. When specifying mapping from your local machine to a container, the local machine port or directory comes first, and the container port or directory comes second.
NGINX has made an official Docker image available via Docker Hub. This official Docker image makes it easy to get up and going very quickly in Docker with your favorite application delivery platform, NGINX. In this section, we were able to get NGINX up and running in a container with a single command! The official NGINX Docker image mainline that we used in this example is built from the Debian Jessie Docker image. However, you can choose official images based on Alpine Linux. The Dockerfile and source for these official images are available on GitHub. You can extend the official image by building your own Dockerfile and specifying the official image in the FROM command. You can also mount an NGINX configuration directory as a Docker volume to override the NGINX configuration without modifying the official image.
Start FROM your favorite distribution’s Docker image. Use the RUN command to install NGINX. Use the ADD command to add your NGINX configuration files. Use the EXPOSE command to instruct Docker to expose given ports, or do this manually when you run the image as a container. Use CMD to start NGINX when the image is instantiated as a container. You’ll need to run NGINX in the foreground; to do this, either start NGINX with -g "daemon off;" or add daemon off; to your configuration. This example uses the latter, with daemon off; in the main context of the configuration file. You will also want to alter your NGINX configuration to log to /dev/stdout for access logs and /dev/stderr for error logs; doing so puts your logs into the hands of the Docker daemon, which makes them more easily available based on the log driver you’ve chosen to use with Docker:
FROM centos:7

# Install epel repo to get nginx and install nginx
RUN yum -y install epel-release && \
    yum -y install nginx

# add local configuration files into the image
ADD /nginx-conf /etc/nginx

EXPOSE 80 443

CMD ["nginx"]
The directory structure looks as follows:
.
├── Dockerfile
└── nginx-conf
    ├── conf.d
    │   └── default.conf
    ├── fastcgi.conf
    ├── fastcgi_params
    ├── koi-utf
    ├── koi-win
    ├── mime.types
    ├── nginx.conf
    ├── scgi_params
    ├── uwsgi_params
    └── win-utf
I chose to host the entire NGINX configuration within this project directory so that a single line in the Dockerfile adds all of my NGINX configuration, keeping every file in one easily accessible place.
You will find it useful to create your own Dockerfile when you require full control over the packages installed and updates. It’s common to keep your own repository of images so that you know your base image is reliable and tested by your team before running it in production.
Use this Dockerfile to build an NGINX Plus Docker image. You’ll need to download your NGINX Plus repository certificates and keep them in the directory with this Dockerfile named nginx-repo.crt and nginx-repo.key, respectively. With that, this Dockerfile will do the rest of the work installing NGINX Plus for your use and linking NGINX access and error logs to the Docker log collector.
FROM debian:stretch-slim

LABEL maintainer="NGINX <[email protected]>"

# Download certificate and key from the customer portal
# (https://cs.nginx.com) and copy to the build context
COPY nginx-repo.crt /etc/ssl/nginx/
COPY nginx-repo.key /etc/ssl/nginx/

# Install NGINX Plus
RUN set -x \
  && APT_PKG="Acquire::https::plus-pkgs.nginx.com::" \
  && REPO_URL="https://plus-pkgs.nginx.com/debian" \
  && apt-get update && apt-get upgrade -y \
  && apt-get install --no-install-recommends --no-install-suggests \
     -y apt-transport-https ca-certificates gnupg1 \
  && NGINX_GPGKEY=573BFD6B3D8FBC641079A6ABABF5BD827BD9BF62; \
  found=''; \
  for server in \
     ha.pool.sks-keyservers.net \
     hkp://keyserver.ubuntu.com:80 \
     hkp://p80.pool.sks-keyservers.net:80 \
     pgp.mit.edu \
  ; do \
     echo "Fetching GPG key $NGINX_GPGKEY from $server"; \
     apt-key adv --keyserver "$server" --keyserver-options timeout=10 \
       --recv-keys "$NGINX_GPGKEY" && found=yes && break; \
  done; \
  test -z "$found" && echo >&2 "error: failed to fetch GPG key $NGINX_GPGKEY" && exit 1; \
  echo "${APT_PKG}Verify-Peer \"true\";" >> /etc/apt/apt.conf.d/90nginx \
  && echo "${APT_PKG}Verify-Host \"true\";" >> /etc/apt/apt.conf.d/90nginx \
  && echo "${APT_PKG}SslCert \"/etc/ssl/nginx/nginx-repo.crt\";" >> /etc/apt/apt.conf.d/90nginx \
  && echo "${APT_PKG}SslKey \"/etc/ssl/nginx/nginx-repo.key\";" >> /etc/apt/apt.conf.d/90nginx \
  && printf "deb ${REPO_URL} stretch nginx-plus" > /etc/apt/sources.list.d/nginx-plus.list \
  && apt-get update && apt-get install -y nginx-plus \
  && apt-get remove --purge --auto-remove -y gnupg1 \
  && rm -rf /var/lib/apt/lists/*

# Forward request logs to Docker log collector
RUN ln -sf /dev/stdout /var/log/nginx/access.log \
  && ln -sf /dev/stderr /var/log/nginx/error.log

EXPOSE 80

STOPSIGNAL SIGTERM

CMD ["nginx", "-g", "daemon off;"]
To build this Dockerfile into a Docker image, run the following in the directory that contains the Dockerfile and your NGINX Plus repository certificate and key:
$ docker build --no-cache -t nginxplus .
This docker build command uses the --no-cache flag to ensure that whenever you build, the NGINX Plus packages are pulled fresh from the NGINX Plus repository for updates. If it’s acceptable to use the same version of NGINX Plus as the prior build, you can omit the --no-cache flag. In this example, the new Docker image is tagged nginxplus.
By creating your own Docker image for NGINX Plus, you can configure your NGINX Plus container however you see fit and drop it into any Docker environment. This opens up all of the power and advanced features of NGINX Plus to your containerized environment. This Dockerfile does not use the Dockerfile property ADD to add in your configuration; you will need to add in your configuration manually.
Use the ngx_http_perl_module to set variables in NGINX from your environment:
daemon off;

env APP_DNS;
include /usr/share/nginx/modules/*.conf;
# ...
http {
    perl_set $upstream_app 'sub { return $ENV{"APP_DNS"}; }';

    server {
        # ...
        location / {
            proxy_pass https://$upstream_app;
        }
    }
}
To use perl_set you must have the ngx_http_perl_module installed; you can do so by loading the module dynamically, or statically if building from source. NGINX by default wipes environment variables from its environment; you need to declare any variables you do not want removed with the env directive. The perl_set directive takes two parameters: the variable name you’d like to set and a Perl string that renders the result.
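Supplying the APP_DNS value is then just a matter of setting the environment variable when the container starts. As an illustrative sketch, in a hypothetical Docker Compose service definition (the image name and hostname are assumptions):

```yaml
# docker-compose.yml fragment: APP_DNS is read at request time
# by the env/perl_set configuration shown above.
services:
  proxy:
    image: my-nginx-perl        # hypothetical image built with the perl module
    ports:
      - "80:80"
    environment:
      APP_DNS: app.internal.example.com
```

The equivalent with plain Docker is passing -e APP_DNS=app.internal.example.com to docker run.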
The following is a Dockerfile that loads the ngx_http_perl_module dynamically, installing the module from the package management utility. When installing modules from the package utility for CentOS, they’re placed in the /usr/lib64/nginx/modules/ directory, and configuration files that dynamically load these modules are placed in the /usr/share/nginx/modules/ directory. This is why the preceding configuration snippet includes all configuration files at that path:
FROM centos:7

# Install epel repo to get nginx and install nginx
RUN yum -y install epel-release && \
    yum -y install nginx nginx-mod-http-perl

# add local configuration files into the image
ADD /nginx-conf /etc/nginx

EXPOSE 80 443

CMD ["nginx"]
Ensure that you have access to the ingress controller image. For NGINX, you can use the nginx/nginx-ingress image from Docker Hub. For NGINX Plus, you will need to build your own image and host it in your private Docker registry. You can find instructions on building and pushing your own NGINX Plus Kubernetes Ingress Controller on NGINX Inc’s GitHub.
Visit the Kubernetes Ingress Controller Deployments folder in the kubernetes-ingress repository on GitHub. The commands that follow will be run from within this directory of a local copy of the repository.
Create a namespace and a service account for the ingress controller; both are named nginx-ingress:
$ kubectl apply -f common/ns-and-sa.yaml
Create a secret with a TLS certificate and key for the ingress controller:
$ kubectl apply -f common/default-server-secret.yaml
This certificate and key are self-signed and created by NGINX Inc. for testing and example purposes. It’s recommended to use your own because this key is publicly available.
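A replacement secret with your own certificate follows the same shape as the example manifest. A sketch, assuming the name and namespace used by the example file; the data values are placeholders for your own base64-encoded PEM material:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: default-server-secret
  namespace: nginx-ingress
type: kubernetes.io/tls
data:
  # base64-encoded PEM certificate and key, e.g. the output
  # of: base64 -w0 < server.crt
  tls.crt: <base64-encoded-certificate>
  tls.key: <base64-encoded-key>
```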
Optionally, you can create a config map for customizing NGINX configuration (the config map provided is blank; however, you can read more about customization and annotation):
$ kubectl apply -f common/nginx-config.yaml
If Role-Based Access Control (RBAC) is enabled in your cluster, create a cluster role and bind it to the service account. You must be a cluster administrator to perform this step:
$ kubectl apply -f rbac/rbac.yaml
Now deploy the ingress controller. Two example deployments are made available in this repository: a Deployment and a DaemonSet. Use a Deployment if you plan to dynamically change the number of ingress controller replicas. Use a DaemonSet to deploy an ingress controller on every node or a subset of nodes.
If you plan to use the NGINX Plus Deployment manifests, you must alter the YAML file and specify your own registry and image.
For NGINX Deployment:
$ kubectl apply -f deployment/nginx-ingress.yaml
For NGINX Plus Deployment:
$ kubectl apply -f deployment/nginx-plus-ingress.yaml
For NGINX DaemonSet:
$ kubectl apply -f daemon-set/nginx-ingress.yaml
For NGINX Plus DaemonSet:
$ kubectl apply -f daemon-set/nginx-plus-ingress.yaml
Validate that the ingress controller is running:
$ kubectl get pods --namespace=nginx-ingress
If you created a DaemonSet, ports 80 and 443 of the ingress controller are mapped to the same ports on the node where the container is running. To access the ingress controller, use those ports and the IP address of any of the nodes on which the ingress controller is running. If you deployed a Deployment, continue with the next steps.
For the Deployment methods, there are two options for accessing the ingress controller pods. You can instruct Kubernetes to randomly assign a node port that maps to the ingress controller pod; this is a service with the type NodePort. The other option is to create a service with the type LoadBalancer. When creating a service of type LoadBalancer, Kubernetes builds a load balancer for the given cloud platform, such as Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Compute.
To create a service of type NodePort, use the following:
$ kubectl create -f service/nodeport.yaml
To statically configure the port that is opened for the pod, alter the YAML and add the attribute nodePort: {port} to the configuration of each port being opened.
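A sketch of that alteration, pinning hypothetical node ports 30080 and 30443 (the service name and selector label mirror the project's example manifest and may differ in your copy):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
  namespace: nginx-ingress
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30080   # must fall in the cluster's node-port range (default 30000-32767)
  - name: https
    port: 443
    targetPort: 443
    nodePort: 30443
  selector:
    app: nginx-ingress
```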
To create a service of type LoadBalancer for Google Cloud Compute or Azure, use this code:
$ kubectl create -f service/loadbalancer.yaml
To create a service of type LoadBalancer for Amazon Web Services:
$ kubectl create -f service/loadbalancer-aws-elb.yaml
On AWS, Kubernetes creates a classic ELB in TCP mode with the PROXY Protocol enabled. You must configure NGINX to use the PROXY Protocol. To do so, you can add the following to the config map mentioned previously (the file common/nginx-config.yaml):
proxy-protocol: "True"
real-ip-header: "proxy_protocol"
set-real-ip-from: "0.0.0.0/0"
Then, update the config map:
$ kubectl apply -f common/nginx-config.yaml
You can now address the pod by its NodePort or by making a request to the load balancer created on its behalf.
As of this writing, Kubernetes is the leading platform in container orchestration and management. The ingress controller is the edge pod that routes traffic to the rest of your application. NGINX fits this role perfectly and makes it simple to configure with its annotations. The NGINX Ingress project offers an NGINX Open Source ingress controller out of the box from a Docker Hub image, and NGINX Plus through a few steps to add your repository certificate and key. Enabling your Kubernetes cluster with an NGINX Ingress controller provides all the same features of NGINX but with the added features of Kubernetes networking and DNS to route traffic.
Use the NGINX Prometheus Exporter to harvest NGINX or NGINX Plus statistics and ship them to Prometheus.
The NGINX Prometheus Exporter Module is written in Go and distributed as a binary on GitHub, and it can also be found as a prebuilt Docker image on Docker Hub.
By default, the exporter is started for NGINX and harvests only the stub_status information. To run the exporter for NGINX Open Source, ensure stub status is enabled; if it’s not, Recipe 13.1 has more information on how to do so. Then use the following Docker command:
docker run -p 9113:9113 nginx/nginx-prometheus-exporter:0.8.0 \
    -nginx.scrape-uri http://{nginxEndpoint}:8080/stub_status
To use the exporter with NGINX Plus, a flag must be used to switch the exporter’s context, because much more data can be collected from the NGINX Plus API. You can learn how to turn on the NGINX Plus API in Recipe 13.2. Use the following Docker command to run the exporter for an NGINX Plus environment:
docker run -p 9113:9113 nginx/nginx-prometheus-exporter:0.8.0 \
    -nginx.plus -nginx.scrape-uri http://{nginxPlusEndpoint}:8080/api
Prometheus is an extremely common metric-monitoring solution that is very prevalent in the Kubernetes ecosystem. The NGINX Prometheus Exporter Module is a fairly simple component; however, it enables prebuilt integration between NGINX and common monitoring platforms. With open source NGINX, the stub status does not provide a vast amount of data, but what it does provide gives important insight into the amount of work an NGINX node is handling. The NGINX Plus API exposes many more statistics about the NGINX Plus server, all of which the exporter ships to Prometheus. In either case, the information gleaned is valuable monitoring data, and the work to ship this data to Prometheus is already done; you just need to wire it up and take advantage of the insight provided by NGINX statistics.
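On the Prometheus side, wiring it up amounts to a scrape job pointed at the exporter’s default port. A minimal sketch (the target address is illustrative):

```yaml
# prometheus.yml fragment: scrape the NGINX exporter every 15s.
scrape_configs:
  - job_name: nginx
    scrape_interval: 15s
    static_configs:
      - targets: ['exporter.example.internal:9113']
```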