Chapter 7. Security Controls

7.0 Introduction

Security is done in layers: your security model must have multiple layers for it to be truly hardened. In this chapter, we go through many different ways to secure your web applications with NGINX and NGINX Plus, many of which can be used in conjunction with one another to help harden security. You might notice that this chapter does not touch upon one of the largest security features of NGINX, the ModSecurity 3.0 NGINX module, which turns NGINX into a Web Application Firewall (WAF). To learn more about the WAF capabilities, download the ModSecurity 3.0 and NGINX: Quick Start Guide.

7.1 Access Based on IP Address

Problem

You need to control access based on the IP address of the client.

Solution

Use the HTTP or stream access module to control access to protected resources:

location /admin/ {
    deny  10.0.0.1;
    allow 10.0.0.0/20; 
    allow 2001:0db8::/32;
    deny  all;
}

The given location block allows access from any IPv4 address in 10.0.0.0/20 except 10.0.0.1, allows access from IPv6 addresses in the 2001:0db8::/32 subnet, and returns a 403 for requests originating from any other address. The allow and deny directives are valid within the HTTP, server, and location contexts, as well as the stream and server contexts for TCP/UDP. Rules are checked in sequence until a match is found for the remote address.

Discussion

Protecting valuable resources and services on the internet must be done in layers. NGINX functionality provides the ability to be one of those layers. The deny directive blocks access to a given context, while the allow directive can be used to allow subsets of the blocked access. You can use IP addresses, IPv4 or IPv6, classless inter-domain routing (CIDR) block ranges, the keyword all, and a Unix socket. Typically, when protecting a resource, one might allow a block of internal IP addresses and deny access from all.
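The same technique applies at the TCP/UDP layer. As a minimal sketch, the following hypothetical stream configuration restricts a proxied MySQL listener to an internal network (the port and upstream address are assumptions for illustration):

stream {
    server {
        listen 3306;
        # Permit only the internal network; reject everyone else
        allow 10.0.0.0/8;
        deny  all;
        proxy_pass mysql.internal.example.com:3306;
    }
}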

7.2 Allowing Cross-Origin Resource Sharing

Problem

You’re serving resources from another domain and need to allow cross-origin resource sharing (CORS) to enable browsers to utilize these resources.

Solution

Alter headers based on the request method to enable CORS:

map $request_method $cors_method {
  OPTIONS 11;
  GET  1;
  POST 1;
  default 0;
}
server {
  # ...
  location / {
    if ($cors_method ~ '1') {
        add_header 'Access-Control-Allow-Methods'
                   'GET,POST,OPTIONS';
        add_header 'Access-Control-Allow-Origin'
                   '*.example.com';
        add_header 'Access-Control-Allow-Headers'
                   'DNT,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type';
    }
    if ($cors_method = '11') {
        add_header 'Access-Control-Max-Age' 1728000;
        add_header 'Content-Type' 'text/plain; charset=UTF-8';
        add_header 'Content-Length' 0;
        return 204;
    }
  }
}

There’s a lot going on in this example, which has been condensed by using a map to group the GET and POST methods together. The OPTIONS request method is a preflight request, through which the client asks the server for its CORS rules before making the actual request. OPTIONS, GET, and POST methods are allowed under CORS in this configuration. Setting the Access-Control-Allow-Origin header allows content served from this server to be used on pages of origins that match the header. The preflight response can be cached on the client for 1,728,000 seconds, or 20 days.

Discussion

Resources such as JavaScript make cross-origin requests when the resource they’re requesting is of a domain other than their own origin. When a request is considered cross-origin, the browser is required to obey CORS rules: the browser will not use the resource if the response does not include headers that specifically allow its use. To allow our resources to be used by other subdomains, we have to set the CORS headers, which can be done with the add_header directive. If the request is a GET, HEAD, or POST with a standard content type, and the request does not have special headers, the browser will make the request and only check the origin. Other request methods cause the browser to make a preflight request to check the terms the server will honor for that resource. If you do not set these headers appropriately, the browser will give an error when trying to utilize the resource.
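To see the preflight exchange for yourself, you can issue an OPTIONS request by hand. The following curl invocation is a sketch, assuming the configuration above is serving www.example.com; the Origin value is illustrative:

$ curl -i -X OPTIONS http://www.example.com/ \
    -H 'Origin: http://sub.example.com' \
    -H 'Access-Control-Request-Method: GET'

A 204 response carrying the Access-Control-Allow-* headers confirms the preflight rules are being returned as configured.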

7.3 Client-Side Encryption

Problem

You need to encrypt traffic between your NGINX server and the client.

Solution

Utilize one of the SSL modules, such as ngx_http_ssl_module or ngx_stream_ssl_module, to encrypt traffic:

http { # All directives used below are also valid in stream 
    server {
        listen 8443 ssl;
        ssl_certificate /etc/nginx/ssl/example.crt;
        ssl_certificate_key /etc/nginx/ssl/example.key;
   }
}

This configuration sets up a server to listen on port 8443, encrypted with SSL/TLS. The ssl_certificate directive defines the certificate, and optional chain, that is served to the client. The ssl_certificate_key directive defines the key NGINX uses to decrypt requests and encrypt responses. A number of SSL/TLS negotiation settings default to secure presets current as of the NGINX version’s release date.

Discussion

Secure transport layers are the most common way of encrypting information in transit. As of this writing, the TLS protocol is preferred over the SSL protocol, because versions 1 through 3 of SSL are now considered insecure. Although the protocol name might be different, TLS still establishes a secure socket layer. NGINX enables your service to protect information between you and your clients, which in turn protects the client and your business. When using a CA-signed certificate, you need to concatenate the certificate with the certificate authority chain; your certificate should come first, above the chain, in the combined file. If your certificate authority has provided multiple intermediate certificates for the chain, there is an order in which they are layered; refer to the certificate provider’s documentation for the order.
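As a sketch of that concatenation on a Unix system, assuming hypothetical file names provided by your certificate authority:

$ cat example.crt intermediate.crt rootCA.crt > example.chained.crt

The resulting example.chained.crt is the file you would reference with the ssl_certificate directive; the key file is unchanged.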

7.4 Advanced Client-Side Encryption

Problem

You have advanced client-server encryption configuration needs.

Solution

The http and stream SSL modules for NGINX enable complete control of the accepted SSL/TLS handshake. Certificates and keys can be provided to NGINX by way of file path or variable value. NGINX presents the client with an accepted list of protocols, ciphers, and key types, per its configuration. The highest standard supported by both the client and the NGINX server is negotiated. NGINX can cache the result of the client-server SSL/TLS negotiation for a period of time.

The following intentionally demonstrates many options at once to illustrate the available complexity of the client-server negotiation:

http { # All directives used below are also valid in stream 
    server {
        listen 8443 ssl;
        # Set accepted protocol and cipher
        ssl_protocols TLSv1.2 TLSv1.3;
        ssl_ciphers HIGH:!aNULL:!MD5;

        # RSA certificate chain loaded from file
        ssl_certificate /etc/nginx/ssl/example.crt;
        # RSA encryption key loaded from file
        ssl_certificate_key /etc/nginx/ssl/example.pem;

        # Elliptic curve cert, variable interpreted as a file path
        ssl_certificate $ecdsa_cert;
        # Elliptic curve key, variable value used directly
        ssl_certificate_key data:$ecdsa_key;

        # Client-Server negotiation caching
        ssl_session_cache shared:SSL:10m;
        ssl_session_timeout 10m;
   }
}

The server accepts the SSL protocol versions TLSv1.2 and TLSv1.3. The accepted ciphers are set to HIGH, which is a macro for the highest standard; the ! prefix denotes explicit exclusions, here of aNULL and MD5.

Two sets of certificate-key pairs are used. The values passed to the NGINX directives demonstrate different ways to provide certificate-key values: a bare variable is interpreted as a path to a file, whereas a variable prefixed with data: is interpreted as a direct value. Multiple certificate-key formats may be provided to offer backward compatibility to the client. The strongest standard the client is capable of and the server accepts will be the result of the negotiation.

Warning

If the SSL/TLS key is exposed as a direct value variable, it has the potential of being logged or exposed by the configuration. Ensure you have strict change and access controls if exposing the key value as a variable.

The SSL session cache and timeout allow NGINX worker processes to cache and store session parameters for a given amount of time. The NGINX worker processes share this cache among themselves as processes within a single instantiation, but not between machines. There are many other session cache options that can help with the performance or security of all types of use cases. You can use session cache options in conjunction with one another; however, specifying any option without the default turns off the default built-in session cache.

Discussion

In this advanced example, NGINX provides the client with the SSL/TLS options of TLS version 1.2 or 1.3, highly regarded cipher algorithms, and the ability to use RSA or Elliptic Curve Cryptography (ECC) formatted keys. The strongest of the protocols, ciphers, and key formats the client is capable of is the result of the negotiation. The configuration instructs NGINX to cache the negotiation for a period of 10 minutes with the available memory allocation of 10 MB.

In testing, ECC certificates were found to be faster than the equivalent-strength RSA certificates. The key size is smaller, which results in the ability to serve more SSL/TLS connections, and with faster handshakes. NGINX allows you to configure multiple certificates and keys, and then serve the optimal certificate for the client browser. This allows you to take advantage of the newer technology but still serve older clients.
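If you want to experiment with ECC, the following openssl commands sketch generating a self-signed ECDSA certificate for testing; the file names and subject are assumptions:

$ openssl ecparam -name prime256v1 -genkey -noout \
    -out /etc/nginx/ssl/example-ecdsa.key
$ openssl req -new -x509 -days 365 \
    -key /etc/nginx/ssl/example-ecdsa.key \
    -out /etc/nginx/ssl/example-ecdsa.crt \
    -subj "/CN=www.example.com"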

Note

NGINX is encrypting the traffic between itself and the client in this example. The connection to upstream servers may also be encrypted; negotiation between NGINX and the upstream server is demonstrated in Recipe 7.5, Upstream Encryption.

7.5 Upstream Encryption

Problem

You need to encrypt traffic between NGINX and the upstream service and set specific negotiation rules, either for compliance regulations or because the upstream service is outside of your secured network.

Solution

Use the SSL directives of the HTTP proxy module to specify SSL rules:

location / {
    proxy_pass https://upstream.example.com;
    proxy_ssl_verify on;
    proxy_ssl_verify_depth 2;
    proxy_ssl_protocols TLSv1.2;
}

These proxy directives set specific SSL rules for NGINX to obey. The configured directives ensure that NGINX verifies that the certificate and chain on the upstream service is valid up to two certificates deep. The proxy_ssl_protocols directive specifies that NGINX will only use TLS version 1.2. By default, NGINX does not verify upstream certificates and accepts all TLS versions.

Discussion

The configuration directives for the HTTP proxy module are vast, and if you need to encrypt upstream traffic, you should at least turn on verification. You can proxy over HTTPS simply by changing the protocol on the value passed to the proxy_pass directive. However, this does not validate the upstream certificate. Other directives, such as proxy_ssl_certificate and proxy_ssl_certificate_key, allow you to lock down upstream encryption for enhanced security. You can also specify a proxy_ssl_crl, or certificate revocation list, which lists certificates that are no longer considered valid. These SSL proxy directives help harden your system’s communication channels within your own network or across the public internet.
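As a sketch of those hardening directives, the following hypothetical configuration pins the CA used for verification and presents a client certificate to the upstream service (all file paths are assumptions):

location / {
    proxy_pass https://upstream.example.com;
    proxy_ssl_verify on;
    # CA certificate used to verify the upstream's certificate
    proxy_ssl_trusted_certificate /etc/nginx/ssl/upstream_ca.pem;
    # Client certificate and key presented to the upstream
    proxy_ssl_certificate /etc/nginx/ssl/client.crt;
    proxy_ssl_certificate_key /etc/nginx/ssl/client.key;
}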

7.6 Securing a Location

Problem

You need to secure a location block using a secret.

Solution

Use the secure link module and the secure_link_secret directive to restrict access to resources to users who have a secure link:

    location /resources {
        secure_link_secret mySecret;
        if ($secure_link = "") { return 403; }

        rewrite ^ /secured/$secure_link;
    }

    location /secured/ {
        internal;
        root /var/www;
    }

This configuration creates an internal and public-facing location block. The public-facing location block /resources will return a 403 Forbidden unless the request URI includes an md5 hash string that can be verified with the secret provided to the secure_link_secret directive. The $secure_link variable is an empty string unless the hash in the URI is verified.

Discussion

Securing resources with a secret is a great way to ensure your files are protected. The secret is used in conjunction with the URI. This string is then md5 hashed, and the hex digest of that md5 hash is used in the URI. The hash is placed into the link and evaluated by NGINX. NGINX knows the path to the file being requested because it’s in the URI after the hash. NGINX also knows your secret as it’s provided via the secure_link_secret directive. NGINX is able to quickly validate the md5 hash and store the URI in the $secure_link variable. If the hash cannot be validated, the variable is set to an empty string. It’s important to note that the argument passed to the secure_link_secret must be a static string; it cannot be a variable.

7.7 Generating a Secure Link with a Secret

Problem

You need to generate a secure link from your application using a secret.

Solution

The secure link module in NGINX accepts the hex digest of an md5 hashed string, where the string is a concatenation of the URI path and the secret. Building on the last section, Recipe 7.6, we will create the secured link that will work with the previous configuration example given that there’s a file present at /var/www/secured/index.html. To generate the hex digest of the md5 hash, we can use the Unix openssl command:

$ echo -n 'index.htmlmySecret' | openssl md5 -hex
(stdin)= a53bee08a4bf0bbea978ddf736363a12

Here we show the URI that we’re protecting, index.html, concatenated with our secret, mySecret. This string is passed to the openssl command to output an md5 hex digest.

The following is an example of the same hash digest being constructed in Python using the hashlib library that is included in the Python Standard Library:

import hashlib
hashlib.md5(b'index.htmlmySecret').hexdigest()
'a53bee08a4bf0bbea978ddf736363a12'

Now that we have this hash digest, we can use it in a URL. Our example will be www.example.com making a request for the file /var/www/secured/index.html through our /resources location. Our full URL will be the following:

www.example.com/resources/a53bee08a4bf0bbea978ddf736363a12/index.html

Discussion

Generating the digest can be done in many ways, in many languages. Things to remember: the URI path goes before the secret, there are no carriage returns in the string, and use the hex digest of the md5 hash.

7.8 Securing a Location with an Expire Date

Problem

You need to secure a location with a link that expires at some future time and is specific to a client.

Solution

Utilize the other directives included in the secure link module to set an expire time and use variables in your secure link:

location /resources {
    root /var/www;
    secure_link $arg_md5,$arg_expires;
    secure_link_md5 "$secure_link_expires$uri$remote_addrmySecret";
    if ($secure_link = "") { return 403; }
    if ($secure_link = "0") { return 410; }
}

The secure_link directive takes two parameters separated with a comma. The first parameter is the variable that holds the md5 hash. This example uses an HTTP argument of md5. The second parameter is a variable that holds the time in which the link expires in Unix epoch time format. The secure_link_md5 directive takes a single parameter that declares the format of the string that is used to construct the md5 hash. Like the other configuration, if the hash does not validate, the $secure_link variable is set to an empty string. However, with this usage, if the hash matches but the time has expired, the $secure_link variable will be set to 0.

Discussion

This usage of securing a link is more flexible and looks cleaner than the secure_link_secret shown in Recipe 7.6. With these directives, you can use any number of variables that are available to NGINX in the hashed string. Using user-specific variables in the hash string will strengthen your security, as users won’t be able to trade links to secured resources. It’s recommended to use a variable like $remote_addr or $http_x_forwarded_for, or a session cookie header generated by the application, as shown in the sketch below. The arguments to secure_link can come from any variable you prefer, and they can be named whatever best fits. The conditions are: Do you have access? Are you accessing it within the time frame? If you don’t have access: Forbidden. If you have access but you’re late: Gone. The HTTP 410, Gone, works great for expired links because the condition is to be considered permanent.
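As a sketch of binding a link to a session rather than an IP address, the following variation hashes a hypothetical session cookie (the cookie name is an assumption):

location /resources {
    root /var/www;
    secure_link $arg_md5,$arg_expires;
    # Bind the link to the requester's session cookie rather than IP
    secure_link_md5 "$secure_link_expires$uri$cookie_session_id mySecret";
    if ($secure_link = "") { return 403; }
    if ($secure_link = "0") { return 410; }
}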

7.9 Generating an Expiring Link

Problem

You need to generate a link that expires.

Solution

Generate a timestamp for the expire time in the Unix epoch format. On a Unix system, you can test this by using the date command as demonstrated in the following:

$ date -d "2020-12-31 00:00" +%s --utc
1609372800

Next, you’ll need to concatenate your hash string to match the string configured with the secure_link_md5 directive. In this case, the string to be used will be 1609372800/resources/index.html127.0.0.1 mySecret. The md5 hash is a bit different than just a hex digest: it’s an md5 hash in binary format, base64-encoded, with plus signs (+) translated to hyphens (-), slashes (/) translated to underscores (_), and equal signs (=) removed. The following is an example on a Unix system:

$ echo -n '1609372800/resources/index.html127.0.0.1 mySecret' \
  | openssl md5 -binary \
  | openssl base64 \
  | tr +/ -_ \
  | tr -d =
TG6ck3OpAttQ1d7jW3JOcw

Now that we have our hash, we can use it as an argument along with the expire date:

/resources/index.html?md5=TG6ck3OpAttQ1d7jW3JOcw&expires=1609372800

The following is a more practical example in Python, utilizing a relative time for the expiration and setting the link to expire one hour from generation. At the time of writing, this example works with Python 2.7 and 3.x utilizing the Python Standard Library:

from datetime import datetime, timedelta
from base64 import b64encode
import hashlib

# Set environment vars
resource = b'/resources/index.html'
remote_addr = b'127.0.0.1'
host = b'www.example.com'
mysecret = b'mySecret'

# Generate expire timestamp (seconds since the Unix epoch, UTC)
now = datetime.utcnow()
expire_dt = now + timedelta(hours=1)
epoch = datetime(1970, 1, 1)
# Compute the epoch seconds directly; strftime('%s') is not
# portable and assumes the local timezone
expire_epoch = str(int((expire_dt - epoch).total_seconds())).encode()

# md5 hash the string
uncoded = expire_epoch + resource + remote_addr + mysecret
md5hashed = hashlib.md5(uncoded).digest()

# Base64 encode and transform the string
b64 = b64encode(md5hashed)
unpadded_b64url = b64.replace(b'+', b'-') \
                     .replace(b'/', b'_') \
                     .replace(b'=', b'')

# Format and generate the link
linkformat = "{}{}?md5={}&expires={}"
securelink = linkformat.format(
    host.decode(),
    resource.decode(),
    unpadded_b64url.decode(),
    expire_epoch.decode()
)
print(securelink)

Discussion

With this pattern, we’re able to generate a secure link in a special format that can be used in URLs. The secret provides security through use of a variable that is never sent to the client. You’re able to use as many other variables as you need to in order to secure the location. md5 hashing and base64 encoding are common, lightweight, and available in nearly every language.

7.10 HTTPS Redirects

Problem

You need to redirect unencrypted requests to HTTPS.

Solution

Use a rewrite to send all HTTP traffic to HTTPS:

server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;
    return 301 https://$host$request_uri;
}

This configuration listens on port 80 as the default server for both IPv4 and IPv6 and for any hostname. The return statement returns a 301 permanent redirect to the HTTPS server at the same host and request URI.

Discussion

It’s important to always redirect to HTTPS where appropriate. You may find that you do not need to redirect all requests but only those with sensitive information being passed between client and server. In that case, you may want to put the return statement in particular locations only, such as /login.
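A minimal sketch of that narrower approach, redirecting only a hypothetical /login path while leaving other HTTP traffic untouched:

location /login/ {
    # Force only this sensitive path onto HTTPS
    return 301 https://$host$request_uri;
}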

7.11 Redirecting to HTTPS Where SSL/TLS Is Terminated Before NGINX

Problem

You need to redirect to HTTPS, however, you’ve terminated SSL/TLS at a layer before NGINX.

Solution

Use the common X-Forwarded-Proto header to determine if you need to redirect:

server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;
    if ($http_x_forwarded_proto = 'http') {
        return 301 https://$host$request_uri;
    }
}

This configuration is very much like the one in Recipe 7.10. However, in this configuration we redirect only if the X-Forwarded-Proto header is equal to http.

Discussion

It’s a common use case to terminate SSL/TLS in a layer in front of NGINX; one reason you might do this is to save on compute costs. You still need to make sure that every request is made over HTTPS, but the layer terminating SSL/TLS may not have the ability to redirect. It can, however, set proxy headers. This configuration works with layers such as the Amazon Web Services Elastic Load Balancer (AWS ELB), which will offload SSL/TLS at no additional cost. This is a handy trick to make sure that your HTTP traffic is secured.
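When NGINX is itself the terminating layer in front of another service performing this same check, it would set the header as it proxies. A minimal sketch, where my_backend is a hypothetical upstream:

location / {
    # Tell the next layer which scheme the client used
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_pass http://my_backend;
}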

7.12 HTTP Strict Transport Security

Problem

You need to instruct browsers to never send requests over HTTP.

Solution

Use the HTTP Strict Transport Security (HSTS) enhancement by setting the Strict-Transport-Security header:

  add_header Strict-Transport-Security max-age=31536000;

This configuration sets the Strict-Transport-Security header to a max age of a year. This will instruct the browser to always do an internal redirect when HTTP requests are attempted to this domain, so that all requests will be made over HTTPS.

Discussion

For some applications, a single HTTP request trapped by a man-in-the-middle attack could be the end of the company. If a form post containing sensitive information is sent over HTTP, the HTTPS redirect from NGINX won’t save you; the damage is done. This opt-in security enhancement informs the browser to never make an HTTP request, and therefore the request is never sent unencrypted.
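The header also accepts optional parameters. A sketch extending the directive with includeSubDomains, using NGINX’s always parameter so the header is added to responses of every status code:

add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;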

7.13 Satisfying Any Number of Security Methods

Problem

You need to provide multiple ways to pass security to a closed site.

Solution

Use the satisfy directive to instruct NGINX that you want to satisfy any or all of the security methods used:

location / {
    satisfy any;

    allow 192.168.1.0/24;
    deny  all;

    auth_basic           "closed site";
    auth_basic_user_file conf/htpasswd;
}

This configuration tells NGINX that the user requesting the location / needs to satisfy one of the security methods: either the request needs to originate from the 192.168.1.0/24 CIDR block or be able to supply a username and password that can be found in the conf/htpasswd file. The satisfy directive takes one of two options: any or all.

Discussion

The satisfy directive is a great way to offer multiple ways to authenticate to your web application. By specifying any to the satisfy directive, the user must meet one of the security challenges. By specifying all to the satisfy directive, the user must meet all of the security challenges. This directive can be used in conjunction with the http_access_module detailed in Recipe 7.1, the http_auth_basic_module detailed in Recipe 6.1, the http_auth_request_module detailed in Recipe 6.2, and the http_auth_jwt_module detailed in Recipe 6.3. Security is only truly secure if it’s done in multiple layers. The satisfy directive will help you achieve this for locations and servers that require deep security rules.
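To create the password file referenced in this example, one option is the htpasswd utility shipped with the Apache httpd-tools (or apache2-utils) package; the username is an assumption:

$ htpasswd -c conf/htpasswd exampleuser

The -c flag creates the file; omit it when adding subsequent users so the existing file is appended to rather than overwritten.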

7.14 NGINX Plus Dynamic Application Layer DDoS Mitigation

Problem

You need a dynamic Distributed Denial of Service (DDoS) mitigation solution.

Solution

Use NGINX Plus to build a cluster-aware rate limit and automatic blocklist:

limit_req_zone   $remote_addr zone=per_ip:1M rate=100r/s sync;
                 # Cluster-aware rate limit
limit_req_status 429;

keyval_zone zone=sinbin:1M timeout=600 sync;
              # Cluster-aware "sin bin" with 10-minute TTL
keyval $remote_addr $in_sinbin zone=sinbin;
              # Populate $in_sinbin with matched client IP addresses

server {
    listen 80;
    location / {
        if ($in_sinbin) {
            set $limit_rate 50; # Restrict bandwidth of bad clients
        }

        limit_req zone=per_ip; # Apply the rate limit here
        error_page 429 = @send_to_sinbin;
              # Excessive clients are moved to this location
        proxy_pass http://my_backend;
    }

    location @send_to_sinbin {
        rewrite ^ /api/3/http/keyvals/sinbin break;
              # Set the URI of the "sin bin" key-val
        proxy_method POST;
        proxy_set_body '{"$remote_addr":"1"}';
        proxy_pass http://127.0.0.1:80;
    }

    location /api/ {
        api write=on;
        # directives to control access to the API
    }
}

Discussion

This solution uses a synchronized rate limit and a synchronized key-value store to dynamically respond to DDoS attacks and mitigate their effects. The sync parameter provided to the limit_req_zone and keyval_zone directives synchronizes the shared memory zone with the other machines in the active-active NGINX Plus cluster. This example identifies clients that send more than 100 requests per second, regardless of which NGINX Plus node receives the request. When a client exceeds the rate limit, its IP address is added to a “sin bin” key-value store by making a call to the NGINX Plus API. The sin bin is synchronized across the cluster. Further requests from clients in the sin bin are subject to a very low bandwidth limit, regardless of which NGINX Plus node receives them. Limiting bandwidth is preferable to rejecting requests outright because it does not clearly signal to the client that DDoS mitigation is in effect. After 10 minutes, the client is automatically removed from the sin bin.
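To inspect or clear the sin bin by hand, you can query the same key-value API endpoint the configuration posts to. A sketch, assuming the API is reachable on localhost as configured above:

$ curl http://127.0.0.1:80/api/3/http/keyvals/sinbin
$ curl -X DELETE http://127.0.0.1:80/api/3/http/keyvals/sinbin

The first command returns the currently banned client addresses as a JSON object; the second empties the zone.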

7.15 Installing and Configuring NGINX Plus App Protect Module

Problem

You need to install and configure the NGINX Plus App Protect Module.

Solution

Follow the NGINX Plus App Protect installation guide for your platform. Make sure not to skip the portion about installing App Protect signatures from the separate repository.

Ensure that the App Protect Module is dynamically loaded by NGINX Plus using the load_module directive in the main context, and enabled by using the app_protect_* directives.

user nginx;
worker_processes  auto;
 
load_module modules/ngx_http_app_protect_module.so;

# ... Other main context directives

http {
    app_protect_enable on; 
    app_protect_policy_file "/etc/nginx/AppProtectTransparentPolicy.json"; 
    app_protect_security_log_enable on; 
    app_protect_security_log "/etc/nginx/log-default.json"
      syslog:server=127.0.0.1:515;
    
    # ... Other http context directives
}

In this example, the app_protect_enable directive set to on enables the module for the current context. This directive, and all of those that follow, are valid within the http context, as well as the server and location contexts within http. The app_protect_policy_file directive points to an App Protect policy file, which we will define next; if not defined, the default policy is used. Security logging is configured next and requires a remote logging server; for the example, we’ve configured it to the local syslog listener. The app_protect_security_log directive takes two parameters: the first is a JSON file that defines the logging settings, and the second is a log stream destination. The log settings file will be shown later in this section.

Build an App Protect Policy file, and name it /etc/nginx/AppProtectTransparentPolicy.json:

{
    "policy": {
        "name": "transparent_policy",
        "template": { "name": "POLICY_TEMPLATE_NGINX_BASE" },
        "applicationLanguage": "utf-8",
        "enforcementMode": "transparent"
    }
}

This policy file configures the default NGINX App Protect policy by use of a template, setting the policy name to transparent_policy and the enforcementMode to transparent, which means NGINX Plus will log but not block. Transparent mode is great for testing out new policies before putting them into effect.

Enable blocking by changing the enforcementMode to blocking. This policy file can be named /etc/nginx/AppProtectBlockingPolicy.json. To switch between the files, update the app_protect_policy_file directive in your NGINX Plus configuration:

{
    "policy": {
        "name": "blocking_policy",
        "template": { "name": "POLICY_TEMPLATE_NGINX_BASE" },
        "applicationLanguage": "utf-8",
        "enforcementMode": "blocking"
    }
}

To enable some of the protection features of App Protect, enable some violations:

{
    "policy": {
        "name": "blocking_policy",
        "template": { "name": "POLICY_TEMPLATE_NGINX_BASE" },
        "applicationLanguage": "utf-8",
        "enforcementMode": "blocking",
        "blocking-settings": {
            "violations": [
                {
                    "name": "VIOL_JSON_FORMAT",
                    "alarm": true,
                    "block": true
                },
                {
                    "name": "VIOL_PARAMETER_VALUE_METACHAR",
                    "alarm": true,
                    "block": false
                }
            ]
        }
    }
}

The above example demonstrates adding two violations to our policy. Take note that VIOL_PARAMETER_VALUE_METACHAR is set only to alarm, not to block, whereas VIOL_JSON_FORMAT is set to both block and alarm. These per-violation settings override the default enforcementMode when it is set to blocking. When enforcementMode is set to transparent, the default enforcement setting takes precedence over the violation-level settings.

Set up an NGINX Plus logging file, named /etc/nginx/log-default.json:

{
   "filter":{
      "request_type":"all"
   }, 
   "content":{
      "format":"default",
      "max_request_size":"any",
      "max_message_size":"5k"
   }    
}

This file was defined in the NGINX Plus configuration by the app_protect_security_log directive and is necessary for App Protect logging.

Discussion

This solution demonstrates the basics of configuring the NGINX Plus App Protect Module. The App Protect Module enables an entire suite of Web Application Firewall (WAF) definitions. These definitions derive from F5’s Advanced Application Security functionality. This comprehensive set of WAF attack signatures has been extensively field-tested and proven. Adding it to an NGINX Plus installation renders the best of F5 application security with the agility of the NGINX platform.

Once the module is installed and enabled, most of the configuration is done in a policy file. The policy files in this section showed how to enable transparent monitoring and active blocking, as well as how to override that behavior for specific violations. Violations are only one type of protection offered; other protections include HTTP Compliance, Evasion Techniques, Attack Signatures, Server Technologies, Data Guard, and many more. To retrieve App Protect logs, it’s necessary to use the NGINX Plus logging format and send the logs to a remote listening service, a file, or /dev/stderr.

If you’re using NGINX Controller ADC, you can enable NGINX App Protect WAF capabilities through NGINX Controller’s App Security component and visualize the WAF metrics through the web interface.
