Security is done in layers, and there must be multiple layers to your security model for it to be truly hardened. In this chapter, we go through many different ways to secure your web applications with NGINX and NGINX Plus. You can use many of these security methods in conjunction with one another to help harden security. The following are a number of security sections that explore features of NGINX and NGINX Plus that can assist in strengthening your application. You might notice that this chapter does not touch upon one of the largest security features of NGINX, the ModSecurity 3.0 NGINX module, which turns NGINX into a Web Application Firewall (WAF). To learn more about the WAF capabilities, download the ModSecurity 3.0 and NGINX: Quick Start Guide.
You need to control access based on the IP address of the client.
Use the HTTP access module to control access to protected resources:
location /admin/ {
    deny 10.0.0.1;
    allow 10.0.0.0/20;
    allow 2001:0db8::/32;
    deny all;
}
The given location block allows access from any IPv4 address in 10.0.0.0/20 except 10.0.0.1, allows access from IPv6 addresses in the 2001:0db8::/32 subnet, and returns a 403 for requests originating from any other address. The allow and deny directives are valid within the HTTP, server, and location contexts. Rules are checked in sequence until a match is found for the remote address.
Protecting valuable resources and services on the internet must be done in layers. NGINX provides the ability to be one of those layers. The deny directive blocks access to a given context, while the allow directive can be used to allow subsets of the blocked access. You can use IP addresses, IPv4 or IPv6, CIDR block ranges, the keyword all, and a Unix socket. Typically when protecting a resource, one might allow a block of internal IP addresses and deny access from all.
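To verify the rules, you can request the protected path and print only the response status code. A minimal check with curl; example.com stands in for your server, and the result depends on the address you connect from:

$ curl -s -o /dev/null -w "%{http_code}\n" http://example.com/admin/
403

A client inside 10.0.0.0/20 (other than 10.0.0.1) is served the resource instead of the 403.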
You’re serving resources from another domain and need to allow cross-origin resource sharing (CORS) to enable browsers to utilize these resources.
Alter headers based on the request method to enable CORS:
map $request_method $cors_method {
    OPTIONS 11;
    GET 1;
    POST 1;
    default 0;
}
server {
    ...
    location / {
        if ($cors_method ~ '1') {
            add_header 'Access-Control-Allow-Methods'
                       'GET,POST,OPTIONS';
            add_header 'Access-Control-Allow-Origin'
                       '*.example.com';
            add_header 'Access-Control-Allow-Headers'
                       'DNT, Keep-Alive, User-Agent, X-Requested-With,
                        If-Modified-Since, Cache-Control, Content-Type';
        }
        if ($cors_method = '11') {
            add_header 'Access-Control-Max-Age' 1728000;
            add_header 'Content-Type' 'text/plain; charset=UTF-8';
            add_header 'Content-Length' 0;
            return 204;
        }
    }
}
There’s a lot going on in this example, which has been condensed by using a map to group the GET and POST methods together. The OPTIONS request method is a preflight request, through which the server returns its CORS rules to the client. OPTIONS, GET, and POST methods are allowed under CORS. Setting the Access-Control-Allow-Origin header allows content served from this server to also be used on pages of origins that match this header. The preflight request can be cached on the client for 1,728,000 seconds, or 20 days.
Resources such as JavaScript make cross-origin requests when the resource they’re requesting is served from a domain other than their own origin. When a request is considered cross-origin, the browser is required to obey CORS rules. The browser will not use the resource if it does not have headers that specifically allow its use. To allow our resources to be used by other subdomains, we have to set the CORS headers, which can be done with the add_header directive. If the request is a GET, HEAD, or POST with a standard content type, and the request does not have special headers, the browser will make the request and only check the origin. Other request methods will cause the browser to make the preflight request to check the terms the server will obey for that resource. If you do not set these headers appropriately, the browser will give an error when trying to utilize that resource.
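You can trigger the preflight yourself by issuing an OPTIONS request. A sketch with curl, using example.com for the server and app.example.com as an illustrative requesting origin; you should see a 204 response carrying the Access-Control-* headers configured above:

$ curl -i -X OPTIONS http://example.com/ \
    -H 'Origin: http://app.example.com' \
    -H 'Access-Control-Request-Method: GET'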
You need to encrypt traffic between your NGINX server and the client.
Utilize one of the SSL modules, such as the ngx_http_ssl_module or ngx_stream_ssl_module, to encrypt traffic:
http {
    # All directives used below are also valid in stream
    server {
        listen 8443 ssl;
        ssl_protocols TLSv1.2 TLSv1.3;
        ssl_ciphers HIGH:!aNULL:!MD5;
        ssl_certificate /etc/nginx/ssl/example.pem;
        ssl_certificate_key /etc/nginx/ssl/example.key;
        ssl_certificate /etc/nginx/ssl/example.ecdsa.crt;
        ssl_certificate_key /etc/nginx/ssl/example.ecdsa.key;
        ssl_session_cache shared:SSL:10m;
        ssl_session_timeout 10m;
    }
}
This configuration sets up a server to listen on a port encrypted with SSL, 8443. The server accepts the SSL protocol versions TLSv1.2 and TLSv1.3. Two sets of certificate and key pair locations are provided to the server for use. The server is instructed to use the highest strength cipher offered by the client while restricting a few that are insecure. The Elliptic Curve Cryptography (ECC) ciphers are prioritized, as we’ve provided an ECC certificate and key pair. The SSL session cache and timeout allow workers to cache and store session parameters for a given amount of time. There are many other session cache options that can help with performance or security for all types of use cases. You can use session cache options in conjunction with one another. However, specifying one without the default will turn off that default, built-in session cache.
Secure transport layers are the most common way of encrypting information in transit. As of this writing, the TLS protocol is preferred over the SSL protocol, because versions 1 through 3 of SSL are now considered insecure. Although the protocol name might be different, TLS still establishes a secure socket layer. NGINX enables your service to protect information between you and your clients, which in turn protects the client and your business. When using a signed certificate, you need to concatenate the certificate with the certificate authority chain. When you concatenate your certificate and the chain, your certificate should be above the chain in the file. If your certificate authority has provided many files in the chain, it can also provide the order in which they are layered. The SSL session cache enhances performance by avoiding the renegotiation of SSL/TLS versions and ciphers on subsequent connections.
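The concatenation itself is a single command. A minimal sketch, assuming your certificate authority delivered your server certificate as example.crt along with intermediate.crt and root.crt (all filenames here are illustrative):

$ cat example.crt intermediate.crt root.crt > /etc/nginx/ssl/example.pem

Your own certificate must come first in the file, with the intermediates following in the order your certificate authority specifies.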
In testing, ECC certificates were found to be faster than the equivalent-strength RSA certificates. The key size is smaller, which results in the ability to serve more SSL/TLS connections, and with faster handshakes. NGINX allows you to configure multiple certificates and keys, and then serve the optimal certificate for the client browser. This allows you to take advantage of the newer technology but still serve older clients.
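If you want to produce an ECC certificate and key pair to test this comparison yourself, openssl can generate one. A sketch using a self-signed certificate, so the filenames and subject are illustrative and not suitable for production:

$ openssl ecparam -genkey -name prime256v1 -out example.ecdsa.key
$ openssl req -new -x509 -key example.ecdsa.key \
    -out example.ecdsa.crt -days 365 -subj '/CN=example.com'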
You need to encrypt traffic between NGINX and the upstream service and set specific negotiation rules for compliance regulations or if the upstream is outside of your secured network.
Use the SSL directives of the HTTP proxy module to specify SSL rules:
location / {
    proxy_pass https://upstream.example.com;
    proxy_ssl_verify on;
    proxy_ssl_verify_depth 2;
    proxy_ssl_protocols TLSv1.2;
}
These proxy directives set specific SSL rules for NGINX to obey. The configured directives ensure that NGINX verifies that the certificate and chain on the upstream service are valid up to two certificates deep. The proxy_ssl_protocols directive specifies that NGINX will only use TLS version 1.2. By default, NGINX does not verify upstream certificates and accepts all TLS versions.
The configuration directives for the HTTP proxy module are vast, and if you need to encrypt upstream traffic, you should at least turn on verification. You can proxy over HTTPS simply by changing the protocol on the value passed to the proxy_pass directive. However, this does not validate the upstream certificate. Other directives, such as proxy_ssl_certificate and proxy_ssl_certificate_key, allow you to lock down upstream encryption for enhanced security. You can also specify proxy_ssl_crl, or a certificate revocation list, which lists certificates that are no longer considered valid. These SSL proxy directives help harden your system’s communication channels within your own network or across the public internet.
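As an example of locking down upstream encryption further, the solution above can be extended to present a client certificate to the upstream service, commonly called mutual TLS. This is a sketch; the certificate paths are illustrative, and the upstream must be configured to request and verify the client certificate:

location / {
    proxy_pass https://upstream.example.com;
    proxy_ssl_verify on;
    proxy_ssl_verify_depth 2;
    proxy_ssl_protocols TLSv1.2;
    # Client certificate presented to the upstream for verification
    proxy_ssl_certificate /etc/nginx/ssl/client.pem;
    proxy_ssl_certificate_key /etc/nginx/ssl/client.key;
}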
You need to secure a location block using a secret.
Use the secure link module and the secure_link_secret directive to restrict access to resources to users who have a secure link:
location /resources {
    secure_link_secret mySecret;
    if ($secure_link = "") {
        return 403;
    }
    rewrite ^ /secured/$secure_link;
}
location /secured/ {
    internal;
    root /var/www;
}
This configuration creates an internal and public-facing location block. The public-facing location block /resources will return a 403 Forbidden unless the request URI includes an md5 hash string that can be verified with the secret provided to the secure_link_secret directive. The $secure_link variable is an empty string unless the hash in the URI is verified.
Securing resources with a secret is a great way to ensure your files are protected. The secret is used in conjunction with the URI. This string is then md5 hashed, and the hex digest of that md5 hash is used in the URI. The hash is placed into the link and evaluated by NGINX. NGINX knows the path to the file being requested as it’s in the URI after the hash. NGINX also knows your secret as it’s provided via the secure_link_secret directive. NGINX is able to quickly validate the md5 hash and store the URI in the $secure_link variable. If the hash cannot be validated, the variable is set to an empty string. It’s important to note that the argument passed to the secure_link_secret directive must be a static string; it cannot be a variable.
You need to generate a secure link from your application using a secret.
The secure link module in NGINX accepts the hex digest of an md5 hashed string, where the string is a concatenation of the URI path and the secret. Building on the last section, “Securing a Location”, we will create the secured link that will work with the previous configuration example, given that there’s a file present at /var/www/secured/index.html. To generate the hex digest of the md5 hash, we can use the Unix openssl command:
$ echo -n 'index.htmlmySecret' | openssl md5 -hex
(stdin)= a53bee08a4bf0bbea978ddf736363a12
Here we show the URI that we’re protecting, index.html, concatenated with our secret, mySecret. This string is passed to the openssl command to output an md5 hex digest.
The following is an example of the same hash digest being constructed in Python, using the hashlib library that is included in the Python Standard Library:
import hashlib
hashlib.md5(b'index.htmlmySecret').hexdigest()
'a53bee08a4bf0bbea978ddf736363a12'
Now that we have this hash digest, we can use it in a URL. Our example will be www.example.com making a request for the file /var/www/secured/index.html through our /resources location. Our full URL will be the following:
www.example.com/resources/a53bee08a4bf0bbea978ddf736363a12/index.html
Generating the digest can be done in many ways, in many languages. Things to remember: the URI path goes before the secret, there are no carriage returns in the string, and use the hex digest of the md5 hash.
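As a quick check that a generated link works, request it and print only the status code. A sketch with curl, assuming the configuration from “Securing a Location” is live at www.example.com:

$ curl -s -o /dev/null -w "%{http_code}\n" \
    'http://www.example.com/resources/a53bee08a4bf0bbea978ddf736363a12/index.html'
200

Any change to the hash or the file path invalidates the signature, and NGINX returns the configured 403.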
You need to secure a location with a link that expires at some future time and is specific to a client.
Utilize the other directives included in the secure link module to set an expire time and use variables in your secure link:
location /resources {
    root /var/www;
    secure_link $arg_md5,$arg_expires;
    secure_link_md5 "$secure_link_expires$uri$remote_addr mySecret";
    if ($secure_link = "") {
        return 403;
    }
    if ($secure_link = "0") {
        return 410;
    }
}
The secure_link directive takes two parameters separated with a comma. The first parameter is the variable that holds the md5 hash. This example uses an HTTP argument named md5. The second parameter is a variable that holds the time at which the link expires, in Unix epoch time format. The secure_link_md5 directive takes a single parameter that declares the format of the string used to construct the md5 hash. Like the other configuration, if the hash does not validate, the $secure_link variable is set to an empty string. However, with this usage, if the hash matches but the time has expired, the $secure_link variable will be set to 0.
This usage of securing a link is more flexible and looks cleaner than the secure_link_secret shown in “Securing a Location”. With these directives, you can use any number of the variables that are available to NGINX in the hashed string. Using user-specific variables in the hash string will strengthen your security, as users won’t be able to trade links to secured resources. It’s recommended to use a variable like $remote_addr or $http_x_forwarded_for, or a session cookie header generated by the application. The arguments to secure_link can come from any variables you prefer, and they can be named whatever best fits. The conditions around what the $secure_link variable is set to return known HTTP codes for Forbidden and Gone. The HTTP 410, Gone, works great for expired links, as the condition is considered permanent.
You need to generate a link that expires.
Generate a timestamp for the expire time in the Unix epoch format. On a Unix system, you can test by using the date command, as demonstrated in the following:

$ date -d "2020-12-31 00:00" +%s --utc
1609372800
Next, you’ll need to concatenate your hash string to match the string configured with the secure_link_md5 directive. In this case, our string to be used will be 1609372800/resources/index.html127.0.0.1 mySecret. The md5 hash is a bit different than just a hex digest. It’s an md5 hash in binary format, base64-encoded, with plus signs (+) translated to hyphens (-), slashes (/) translated to underscores (_), and equal signs (=) removed. The following is an example on a Unix system:
$ echo -n '1609372800/resources/index.html127.0.0.1 mySecret' \
  | openssl md5 -binary \
  | openssl base64 \
  | tr +/ -_ \
  | tr -d =
TG6ck3OpAttQ1d7jW3JOcw
Now that we have our hash, we can use it as an argument along with the expire date:
/resources/index.html?md5=TG6ck3OpAttQ1d7jW3JOcw&expires=1609372800
The following is a more practical example in Python, utilizing a relative time for the expiration and setting the link to expire one hour from generation. At the time of writing, this example works with Python 2.7 and 3.x, utilizing the Python Standard Library:
from datetime import datetime, timedelta
from base64 import b64encode
import hashlib

# Set environment vars
resource = b'/resources/index.html'
remote_addr = b'127.0.0.1'
host = b'www.example.com'
# Note the leading space, matching the secure_link_md5 format string
mysecret = b' mySecret'

# Generate expire timestamp
now = datetime.utcnow()
expire_dt = now + timedelta(hours=1)
# '%s' relies on platform strftime support and a UTC system timezone
expire_epoch = str.encode(expire_dt.strftime('%s'))

# md5 hash the string
uncoded = expire_epoch + resource + remote_addr + mysecret
md5hashed = hashlib.md5(uncoded).digest()

# Base64 encode and transform the string
b64 = b64encode(md5hashed)
unpadded_b64url = b64.replace(b'+', b'-') \
    .replace(b'/', b'_') \
    .replace(b'=', b'')

# Format and generate the link
linkformat = "{}{}?md5={}&expires={}"
securelink = linkformat.format(
    host.decode(),
    resource.decode(),
    unpadded_b64url.decode(),
    expire_epoch.decode()
)
print(securelink)
With this pattern, we’re able to generate a secure link in a special format that can be used in URLs. The secret provides security through use of a variable that is never sent to the client. You’re able to use as many other variables as you need to in order to secure the location. md5 hashing and base64 encoding are common, lightweight, and available in nearly every language.
You need to redirect unencrypted requests to HTTPS.
Use a rewrite to send all HTTP traffic to HTTPS:
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;
    return 301 https://$host$request_uri;
}
This configuration listens on port 80 as the default server for both IPv4 and IPv6 and for any hostname. The return statement returns a 301 permanent redirect to the HTTPS server at the same host and request URI.
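You can confirm the redirect with a HEAD request. A minimal check with curl, using example.com as an illustrative hostname (other response headers omitted):

$ curl -I http://example.com/
HTTP/1.1 301 Moved Permanently
Location: https://example.com/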
It’s important to always redirect to HTTPS where appropriate. You may find that you do not need to redirect all requests, but only those with sensitive information being passed between client and server. In that case, you may want to put the return statement in particular locations only, such as /login, as sketched below.
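A sketch of that narrower setup, forcing HTTPS only for the illustrative /login path:

server {
    listen 80 default_server;
    server_name _;
    location /login {
        return 301 https://$host$request_uri;
    }
    location / {
        # Non-sensitive traffic continues over HTTP
        ...
    }
}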
You need to redirect to HTTPS; however, you’ve terminated SSL/TLS at a layer before NGINX.
Use the standard X-Forwarded-Proto header to determine if you need to redirect:

server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;
    if ($http_x_forwarded_proto = 'http') {
        return 301 https://$host$request_uri;
    }
}
This configuration is very much like the HTTPS redirect shown previously. However, in this configuration we only redirect if the X-Forwarded-Proto header is equal to http.
It’s a common use case to terminate SSL/TLS in a layer in front of NGINX. One reason you may do something like this is to save on compute costs. However, you still need to make sure that every request is made over HTTPS, and the layer terminating SSL/TLS does not have the ability to redirect. It can, however, set proxy headers. This configuration works with layers such as the Amazon Web Services Elastic Load Balancer, which will offload SSL/TLS at no additional cost. This is a handy trick to make sure that your HTTP traffic is secured.
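You can simulate the load balancer’s header with curl to verify the behavior; example.com stands in for your server, and other response headers are omitted:

$ curl -I http://example.com/ -H 'X-Forwarded-Proto: http'
HTTP/1.1 301 Moved Permanently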
You need to instruct browsers to never send requests over HTTP.
Use the HTTP Strict Transport Security (HSTS) enhancement by setting the Strict-Transport-Security header:
add_header Strict-Transport-Security max-age=31536000;
This configuration sets the Strict-Transport-Security header to a max age of a year. This will instruct the browser to always do an internal redirect when HTTP requests are attempted to this domain, so that all requests will be made over HTTPS.
For some applications, a single HTTP request trapped by a man-in-the-middle attack could be the end of the company. If a form post containing sensitive information is sent over HTTP, the HTTPS redirect from NGINX won’t save you; the damage is done. This opt-in security enhancement informs the browser to never make an HTTP request, and therefore the request is never sent unencrypted.
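The header value accepts further directives beyond max-age. A common hardening step, not part of the solution above, is to extend the policy to subdomains and emit the header on all response codes with the always parameter:

add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;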
You need to provide multiple ways to pass security to a closed site.
Use the satisfy directive to instruct NGINX that you want to satisfy any or all of the security methods used:
location / {
    satisfy any;

    allow 192.168.1.0/24;
    deny all;

    auth_basic "closed site";
    auth_basic_user_file conf/htpasswd;
}
This configuration tells NGINX that the user requesting the location / needs to satisfy one of the security methods: either the request needs to originate from the 192.168.1.0/24 CIDR block or be able to supply a username and password that can be found in the conf/htpasswd file. The satisfy directive takes one of two options: any or all.
The satisfy directive is a great way to offer multiple ways to authenticate to your web application. By specifying any to the satisfy directive, the user must meet one of the security challenges. By specifying all to the satisfy directive, the user must meet all of the security challenges. This directive can be used in conjunction with the http_access_module detailed in “Access Based on IP Address”, the http_auth_basic_module detailed in “HTTP Basic Authentication”, the http_auth_request_module detailed in “Authentication Subrequests”, and the http_auth_jwt_module detailed in “Validating JWTs”. Security is only truly secure if it’s done in multiple layers. The satisfy directive will help you achieve this for locations and servers that require deep security rules.
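The basic authentication half of this example needs the conf/htpasswd file to exist. One way to create an entry, a sketch using openssl’s apr1 password hashing with an illustrative username of user1 (openssl prompts for the password):

$ printf "user1:$(openssl passwd -apr1)\n" >> conf/htpasswd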
You need a dynamic Distributed Denial of Service (DDoS) mitigation solution.
Use NGINX Plus to build a cluster-aware rate limit and automatic blacklist:
limit_req_zone $remote_addr zone=per_ip:1M rate=100r/s sync;
                                # Cluster-aware rate limit
limit_req_status 429;

keyval_zone zone=sinbin:1M timeout=600 sync;
                                # Cluster-aware "sin bin" with
                                # 10-minute TTL
keyval $remote_addr $in_sinbin zone=sinbin;
                                # Populate $in_sinbin with
                                # matched client IP addresses

server {
    listen 80;
    location / {
        if ($in_sinbin) {
            set $limit_rate 50; # Restrict bandwidth of bad clients
        }
        limit_req zone=per_ip;  # Apply the rate limit here
        error_page 429 = @send_to_sinbin;
                                # Excessive clients are moved to
                                # this location
        proxy_pass http://my_backend;
    }

    location @send_to_sinbin {
        rewrite ^ /api/3/http/keyvals/sinbin break;
                                # Set the URI of the
                                # "sin bin" key-val
        proxy_method POST;
        proxy_set_body '{"$remote_addr":"1"}';
        proxy_pass http://127.0.0.1:80;
    }

    location /api/ {
        api write=on;
        # directives to control access to the API
    }
}
This solution uses a synchronized rate limit and a synchronized key-value store to dynamically respond to DDoS attacks and mitigate their effects. The sync parameter provided to the limit_req_zone and keyval_zone directives synchronizes the shared memory zone with other machines in the active-active NGINX Plus cluster. This example identifies clients that send more than 100 requests per second, regardless of which NGINX Plus node receives the request. When a client exceeds the rate limit, its IP address is added to a “sin bin” key-value store by making a call to the NGINX Plus API. The sin bin is synchronized across the cluster. Further requests from clients in the sin bin are subject to a very low bandwidth limit, regardless of which NGINX Plus node receives them. Limiting bandwidth is preferable to rejecting requests outright because it does not clearly signal to the client that DDoS mitigation is in effect. After 10 minutes the client is automatically removed from the sin bin.
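You can watch and manage the sin bin through the same NGINX Plus key-value API the configuration posts to. A sketch with curl, run from the NGINX Plus host; the DELETE form empties the zone ahead of the 10-minute timeout:

# List client IP addresses currently in the sin bin
$ curl http://127.0.0.1:80/api/3/http/keyvals/sinbin

# Clear the sin bin
$ curl -X DELETE http://127.0.0.1:80/api/3/http/keyvals/sinbin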