Containers offer a layer of abstraction at the application layer, shifting the installation of packages and dependencies from the deploy to the build process. This is important because engineers are now shipping units of code that run and deploy in a uniform way regardless of the environment. Promoting containers as runnable units reduces the risk of dependency and configuration snafus between environments. Given this, there has been a large drive for organizations to deploy their applications on container platforms. When running applications on a container platform, it’s common to containerize as much of the stack as possible, including your proxy or load balancer. NGINX and NGINX Plus containerize and ship with ease. They also include many features that make delivering containerized applications fluid. This chapter focuses on building NGINX and NGINX Plus container images, features that make working in a containerized environment easier, and deploying your image on Kubernetes and OpenShift.
You’d like to use your existing DNS SRV record implementation as the source for upstream servers with NGINX Plus.
Specify the service directive with a value of http on an upstream server to instruct NGINX to utilize the SRV record as a load-balancing pool:
http {
    resolver 10.0.0.2;

    upstream backend {
        zone backends 64k;
        server api.example.internal service=http resolve;
    }
}
This feature is an NGINX Plus exclusive. The configuration instructs NGINX Plus to resolve DNS from a DNS server at 10.0.0.2 and to set up an upstream server pool with a single server directive. The server directive specified with the resolve parameter is instructed to periodically re-resolve the domain name. The service=http parameter and value tells NGINX that this is an SRV record containing a list of IPs and ports, and to load balance over them as if they were configured with the server directive.
Dynamic infrastructure is becoming ever more popular with the demand and adoption of cloud-based infrastructure. Autoscaling environments scale horizontally, increasing and decreasing the number of servers in the pool to match the demand of the load. Scaling horizontally demands a load balancer that can add and remove resources from the pool. With an SRV record, you offload the responsibility of keeping the list of servers to DNS. This type of configuration is extremely enticing for containerized environments because you may have containers running applications on variable port numbers, possibly at the same IP address. It’s important to note that UDP DNS record payload is limited to about 512 bytes.
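As a hedged illustration of what NGINX Plus resolves behind the scenes (hypothetical zone data, not from the original text): with service=http, the name is expanded per the SRV convention to _http._tcp.api.example.internal, and each record carries a priority, weight, port, and target host, so two backends on different ports might be published like this in a BIND-style zone file:

```
; _service._proto.name          TTL class type priority weight port target
_http._tcp.api.example.internal. 30 IN   SRV  0        5      8080 app1.example.internal.
_http._tcp.api.example.internal. 30 IN   SRV  0        5      8081 app2.example.internal.
```

Because the port travels with each record, two application containers on the same host IP can be load balanced without any change to the NGINX configuration.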
You need to get up and running quickly with the NGINX image from Docker Hub.
Use the NGINX image from Docker Hub. This image contains a default configuration. You'll need to either mount a local configuration directory or create a Dockerfile and use ADD to add your configuration to the image build in order to alter the configuration. Here, we mount a volume where NGINX's default configuration serves static content to demonstrate its capabilities by using a single command:
$ docker run --name my-nginx -p 80:80 \
    -v /path/to/content:/usr/share/nginx/html:ro -d nginx
The docker command pulls the nginx:latest image from Docker Hub if it's not found locally. The command then runs this NGINX image as a Docker container, mapping localhost:80 to port 80 of the NGINX container. It also mounts the local directory /path/to/content/ as a container volume at /usr/share/nginx/html/ as read only. The default NGINX configuration will serve this directory as static content. When specifying a mapping from your local machine to a container, the local machine port or directory comes first, and the container port or directory comes second.
NGINX has made an official Docker image available via Docker Hub. This official Docker image makes it easy to get up and going very quickly in Docker with your favorite application delivery platform, NGINX. In this section, we were able to get NGINX up and running in a container with a single command! The official NGINX Docker image mainline that we used in this example is built off of the Debian Jessie Docker image. However, you can choose official images built off of Alpine Linux. The Dockerfile and source for these official images are available on GitHub. You can extend the official image by building your own Dockerfile and specifying the official image in the FROM command.
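For example, a minimal Dockerfile extending the official image might look like the following sketch, which bakes a custom server configuration into the image instead of mounting it at runtime (the configuration filename is a hypothetical placeholder):

```
FROM nginx:latest

# Overwrite the default server configuration shipped with the
# official image with your own server block
COPY my-default.conf /etc/nginx/conf.d/default.conf
```

Because the configuration is part of the image, the resulting container runs identically in any environment without needing a volume mount.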
You need to create an NGINX Dockerfile in order to create a Docker image.
Start FROM your favorite distribution's Docker image. Use the RUN command to install NGINX. Use the ADD command to add your NGINX configuration files. Use the EXPOSE command to instruct Docker to expose given ports, or do this manually when you run the image as a container. Use CMD to start NGINX when the image is instantiated as a container. You'll need to run NGINX in the foreground. To do this, you'll need to start NGINX with -g "daemon off;" or add daemon off; to your configuration. This example uses the latter, with daemon off; in the configuration file within the main context. You will also want to alter your NGINX configuration to log to /dev/stdout for access logs and /dev/stderr for error logs; doing so will put your logs into the hands of the Docker daemon, which will make them available to you more easily based on the log driver you've chosen to use with Docker:
FROM centos:7

# Install epel repo to get nginx and install nginx
RUN yum -y install epel-release && \
    yum -y install nginx

# add local configuration files into the image
ADD /nginx-conf /etc/nginx

EXPOSE 80 443

CMD ["nginx"]
The directory structure looks as follows:
.
├── Dockerfile
└── nginx-conf
    ├── conf.d
    │   └── default.conf
    ├── fastcgi.conf
    ├── fastcgi_params
    ├── koi-utf
    ├── koi-win
    ├── mime.types
    ├── nginx.conf
    ├── scgi_params
    ├── uwsgi_params
    └── win-utf
I chose to host the entire NGINX configuration within this Docker directory for ease of access; a single line in the Dockerfile then adds the whole configuration to the image.
You will find it useful to create your own Dockerfile when you require full control over the installed packages and updates. It's common to keep your own repository of images so that you know your base image is reliable and tested by your team before running it in production.
You need to build an NGINX Plus Docker image to run NGINX Plus in a containerized environment.
Use this Dockerfile to build an NGINX Plus Docker image. You'll need to download your NGINX Plus repository certificate and key and keep them, named nginx-repo.crt and nginx-repo.key respectively, in the same directory as this Dockerfile. With that, this Dockerfile will do the rest of the work: installing NGINX Plus for your use and linking NGINX access and error logs to the Docker log collector.
FROM debian:stretch-slim

LABEL maintainer="NGINX <[email protected]>"

# Download certificate and key from the customer portal
# (https://cs.nginx.com) and copy to the build context
COPY nginx-repo.crt /etc/ssl/nginx/
COPY nginx-repo.key /etc/ssl/nginx/

# Install NGINX Plus
RUN set -x \
  && APT_PKG="Acquire::https::plus-pkgs.nginx.com::" \
  && REPO_URL="https://plus-pkgs.nginx.com/debian" \
  && apt-get update && apt-get upgrade -y \
  && apt-get install --no-install-recommends --no-install-suggests \
     -y apt-transport-https ca-certificates gnupg1 \
  && NGINX_GPGKEY=573BFD6B3D8FBC641079A6ABABF5BD827BD9BF62; \
  found=''; \
  for server in \
    ha.pool.sks-keyservers.net \
    hkp://keyserver.ubuntu.com:80 \
    hkp://p80.pool.sks-keyservers.net:80 \
    pgp.mit.edu \
  ; do \
    echo "Fetching GPG key $NGINX_GPGKEY from $server"; \
    apt-key adv --keyserver "$server" --keyserver-options timeout=10 \
      --recv-keys "$NGINX_GPGKEY" && found=yes && break; \
  done; \
  test -z "$found" && echo >&2 "error: failed to fetch GPG key $NGINX_GPGKEY" && exit 1; \
  echo "${APT_PKG}Verify-Peer \"true\";" >> /etc/apt/apt.conf.d/90nginx \
  && echo "${APT_PKG}Verify-Host \"true\";" >> /etc/apt/apt.conf.d/90nginx \
  && echo "${APT_PKG}SslCert \"/etc/ssl/nginx/nginx-repo.crt\";" >> /etc/apt/apt.conf.d/90nginx \
  && echo "${APT_PKG}SslKey \"/etc/ssl/nginx/nginx-repo.key\";" >> /etc/apt/apt.conf.d/90nginx \
  && printf "deb ${REPO_URL} stretch nginx-plus" \
     > /etc/apt/sources.list.d/nginx-plus.list \
  && apt-get update && apt-get install -y nginx-plus \
  && apt-get remove --purge --auto-remove -y gnupg1 \
  && rm -rf /var/lib/apt/lists/*

# Forward request logs to Docker log collector
RUN ln -sf /dev/stdout /var/log/nginx/access.log \
  && ln -sf /dev/stderr /var/log/nginx/error.log

EXPOSE 80

STOPSIGNAL SIGTERM

CMD ["nginx", "-g", "daemon off;"]
To build this Dockerfile into a Docker image, run the following in the directory that contains the Dockerfile and your NGINX Plus repository certificate and key:
$ docker build --no-cache -t nginxplus .
This docker build command uses the --no-cache flag to ensure that whenever you build this image, the NGINX Plus packages are pulled fresh from the NGINX Plus repository for updates. If it's acceptable to use the same version of NGINX Plus as the prior build, you can omit the --no-cache flag. In this example, the new Docker image is tagged nginxplus.
By creating your own Docker image for NGINX Plus, you can configure your NGINX Plus container however you see fit and drop it into any Docker environment. This opens up all of the power and advanced features of NGINX Plus to your containerized environment. This Dockerfile does not use the Dockerfile ADD command to add in your configuration; you will need to add in your configuration manually.
You need to use environment variables inside your NGINX configuration in order to use the same container image for different environments.
Use the ngx_http_perl_module to set variables in NGINX from your environment:
daemon off;
env APP_DNS;
include /usr/share/nginx/modules/*.conf;
...
http {
    perl_set $upstream_app 'sub { return $ENV{"APP_DNS"}; }';

    server {
        ...
        location / {
            proxy_pass https://$upstream_app;
        }
    }
}
To use perl_set you must have the ngx_http_perl_module installed; you can do so by loading the module dynamically, or statically if building from source. NGINX by default wipes environment variables from its environment; you need to declare any variables you do not want removed with the env directive. The perl_set directive takes two parameters: the variable name you'd like to set and a Perl string that renders the result.
The following is a Dockerfile that loads the ngx_http_perl_module dynamically, installing this module from the package management utility. When installing modules from the package utility for CentOS, they're placed in the /usr/lib64/nginx/modules/ directory, and configuration files that dynamically load these modules are placed in the /usr/share/nginx/modules/ directory. This is why in the preceding configuration snippet we include all configuration files at that path:
FROM centos:7

# Install epel repo to get nginx and install nginx
RUN yum -y install epel-release && \
    yum -y install nginx nginx-mod-http-perl

# add local configuration files into the image
ADD /nginx-conf /etc/nginx

EXPOSE 80 443

CMD ["nginx"]
A typical practice when using Docker is to utilize environment variables to change the way the container operates. You can use environment variables in your NGINX configuration so that your NGINX Dockerfile can be used in multiple, diverse environments.
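As a point of comparison (a sketch, not part of the original recipe), the same effect can be approximated without the Perl module by rendering the configuration from environment variables in an entrypoint script before NGINX starts; the APP_DNS fallback value below is a hypothetical placeholder:

```shell
# Render an NGINX location block from an environment variable at
# container start; fall back to a hypothetical default when unset.
APP_DNS="${APP_DNS:-backend.default.svc}"
cat > /tmp/upstream.conf <<EOF
location / {
    proxy_pass https://${APP_DNS};
}
EOF
# The rendered file would then be included from nginx.conf
# before exec'ing nginx in the foreground.
cat /tmp/upstream.conf
```

The trade-off is that a templated file is fixed at container start, whereas perl_set reads the variable at request time.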
You are deploying your application on Kubernetes and need an ingress controller.
Ensure that you have access to the ingress controller image. For NGINX, you can use the nginx/nginx-ingress image from Docker Hub. For NGINX Plus, you will need to build your own image and host it in your private Docker registry. You can find instructions on building and pushing your own NGINX Plus Kubernetes Ingress Controller on NGINX Inc's GitHub.
Visit the Kubernetes Ingress Controller Deployments folder in the kubernetes-ingress repository on GitHub. The commands that follow will be run from within this directory of a local copy of the repository.
Create a namespace and a service account for the ingress controller; both are named nginx-ingress:
$ kubectl apply -f common/ns-and-sa.yaml
Create a secret with a TLS certificate and key for the ingress controller:
$ kubectl apply -f common/default-server-secret.yaml
This certificate and key are self-signed and created by NGINX Inc. for testing and example purposes. It’s recommended to use your own because this key is publicly available.
Optionally, you can create a config map for customizing the NGINX configuration (the config map provided is blank; however, you can read more about customization and annotations in the project's documentation):
$ kubectl apply -f common/nginx-config.yaml
If Role-Based Access Control (RBAC) is enabled in your cluster, create a cluster role and bind it to the service account. You must be a cluster administrator to perform this step:
$ kubectl apply -f rbac/rbac.yaml
Now deploy the ingress controller. Two example deployments are made available in this repository: a Deployment and a DaemonSet. Use a Deployment if you plan to dynamically change the number of ingress controller replicas. Use a DaemonSet to deploy an ingress controller on every node or a subset of nodes.
If you plan to use the NGINX Plus Deployment manifests, you must alter the YAML file and specify your own registry and image.
For NGINX Deployment:
$ kubectl apply -f deployment/nginx-ingress.yaml
For NGINX Plus Deployment:
$ kubectl apply -f deployment/nginx-plus-ingress.yaml
For NGINX DaemonSet:
$ kubectl apply -f daemon-set/nginx-ingress.yaml
For NGINX Plus DaemonSet:
$ kubectl apply -f daemon-set/nginx-plus-ingress.yaml
Validate that the ingress controller is running:
$ kubectl get pods --namespace=nginx-ingress
If you created a DaemonSet, ports 80 and 443 of the ingress controller are mapped to the same ports on the node where the container is running. To access the ingress controller, use those ports and the IP address of any of the nodes on which the ingress controller is running. If you deployed a Deployment, continue with the next steps.
For the Deployment methods, there are two options for accessing the ingress controller pods. You can instruct Kubernetes to randomly assign a node port that maps to the ingress controller pod; this is a service of type NodePort. The other option is to create a service of type LoadBalancer. When creating a service of type LoadBalancer, Kubernetes builds a load balancer for the given cloud platform, such as Amazon Web Services, Microsoft Azure, or Google Cloud Compute.
To create a service of type NodePort, use the following:
$ kubectl create -f service/nodeport.yaml
To statically configure the port that is opened for the pod, alter the YAML and add the attribute nodePort: {port} to the configuration of each port being opened.
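For instance (a sketch; the port value is a hypothetical choice within Kubernetes's default NodePort range of 30000–32767), the ports section of the service might become:

```yaml
ports:
- name: http
  port: 80
  targetPort: 80
  nodePort: 30080    # hypothetical static node port
  protocol: TCP
```

Pinning the node port makes it predictable for firewall rules and external load balancers, at the cost of having to manage port collisions yourself.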
To create a service of type LoadBalancer for Google Cloud Compute or Azure, use this code:
$ kubectl create -f service/loadbalancer.yaml
To create a service of type LoadBalancer for Amazon Web Services:
$ kubectl create -f service/loadbalancer-aws-elb.yaml
On AWS, Kubernetes creates a classic ELB in TCP mode with the PROXY protocol enabled. You must configure NGINX to use the PROXY protocol. To do so, you can add the following to the config map mentioned previously in reference to the file common/nginx-config.yaml.
proxy-protocol: "True"
real-ip-header: "proxy_protocol"
set-real-ip-from: "0.0.0.0/0"
Then, update the config map:
$ kubectl apply -f common/nginx-config.yaml
You can now address the pod by its NodePort or by making a request to the load balancer created on its behalf.
As of this writing, Kubernetes is the leading platform in container orchestration and management. The ingress controller is the edge pod that routes traffic to the rest of your application. NGINX fits this role perfectly and makes it simple to configure with its annotations. The NGINX-Ingress project offers an NGINX Open Source ingress controller out of the box from a DockerHub image, and NGINX Plus through a few steps to add your repository certificate and key. Enabling your Kubernetes cluster with an NGINX ingress controller provides all the same features of NGINX but with the added features of Kubernetes networking and DNS to route traffic.
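Once the controller is running, applications expose themselves through Ingress resources that the controller translates into NGINX configuration. As a hedged sketch (the hostname and service name are hypothetical, and the apiVersion depends on your Kubernetes version):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: app.example.com           # hypothetical hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example-service   # hypothetical backend service
            port:
              number: 80
```

Requests arriving at the ingress controller with a Host header of app.example.com would be proxied to the pods behind example-service.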
You are deploying your application on OpenShift and would like to use NGINX as a router.
Build the Router image and upload it to your private registry. You can find the source files for the image in the Origin Repository. It's important to push your Router image to the private registry before deleting the default Router, because deleting the default Router renders the registry unavailable.
Log in to the OpenShift Cluster as an admin:
$ oc login -u system:admin
Select the default project:
$ oc project default
Back up the default Router config, in case you need to recreate it:
$ oc get -o yaml service/router dc/router clusterrolebinding/router-router-role serviceaccount/router > default-router-backup.yaml
Delete the Router:
$ oc delete -f default-router-backup.yaml
Deploy the NGINX Router:
$ oc adm router router --images={image} --type='' --selector='node-role.kubernetes.io/infra=true'
In this example, {image} must point to the NGINX Router image in your registry. The selector parameter specifies a label selector for the nodes on which the Router will be deployed: node-role.kubernetes.io/infra=true. Use a selector that makes sense for your environment.
Validate that your NGINX Router pods are running:
$ oc get pods
You should see a Router pod with the name router-1-{string}.
By default, the NGINX stub status page is available via port 1936 of the node where the Router is running (you can change this port by using the STATS_PORT environment variable). To access the page from outside the node, you need to add an entry to the iptables rules for that node:
$ sudo iptables -I OS_FIREWALL_ALLOW -p tcp -s {ip range} -m tcp --dport 1936 -j ACCEPT
Open your browser to http://{node-ip}:1936/stub_status to access the stub status page.
The OpenShift Router is the entry point for external requests bound for applications running on OpenShift. The Router's job is to receive incoming requests and direct them to the appropriate application pod. The load-balancing and routing abilities of NGINX make it a great choice for use as an OpenShift Router. Switching out the default OpenShift Router for an NGINX Router enables all of the features and power of NGINX as the ingress of your OpenShift application.