Chapter 4. NGINX and Microsoft Managed Options

Microsoft Azure provides a number of different proxy-like, data plane–level services that forward a request or connection through different networking layers, load balancing and applying rules along the way. NGINX provides much of the same functionality as these services but can reside deeper in the stack, and it has fewer configuration limitations. When delivering applications hosted in Microsoft Azure, you need to determine which controls are needed where, and how best to provide them.

Most of the time the right answer is not one service or the other, but a mix. By layering proxy-like services in front of your application, you’re able to maintain more control and distribute the incoming load. The Azure services are meant to complement one another by being layered throughout the stack. NGINX is interchangeable with the Azure services that reside in the Azure Virtual Network. A major added value of Azure managed services is that because they are managed, they do not require maintenance and care on your part.

The Azure managed services that provide proxy-like, data plane–level services are Azure Front Door, CDN Profiles, Application Gateway, and Load Balancer. All of them have valid use cases, some have overlapping features, and all of them can be frontends for NGINX. Azure Front Door is covered in depth in Chapter 5; the present chapter focuses on Azure Load Balancer, the Application Gateway, and the integration with Azure WAF policies. CDN Profiles, while they do act as a proxy, are not designed for load balancing and are therefore not discussed in this book.

Comparing NGINX and Azure Load Balancer

Azure Load Balancer operates at Layer 4 of the OSI model, the transport layer. This means that Azure Load Balancer is chauffeuring the connection from the client to a backend server. As the connection is direct between the client and the server, Azure Load Balancer is not considered a proxy. Only data within the connection headers is used or updated by Azure Load Balancer; it does not and cannot use or manipulate the data within the packets.

Using information from the connection headers, Azure Load Balancer can determine to which backend server it should route the request. Load balancing of a connection is performed by a hash algorithm that uses the connection headers to place a connection across the backend pool. Five pieces of information are used from the connection header to generate the hash:

  • Source IP
  • Source port
  • Destination IP
  • Destination port
  • IP protocol

Azure Load Balancer calls the connection sessions flows, because a flow may consist of multiple connections. Because the source port usually changes between connections, Azure Load Balancer can create an affinity, or rudimentary session persistence, between client and server by hashing only a portion of the connection header information used for initial distribution. As a result, connections from a given source IP to a given destination IP (optionally also keyed on protocol) are pinned to the same backend server.

This operating model is different from NGINX, because NGINX operates at Layer 7 of the OSI model, the application layer. With NGINX there are two connections: one between the client and NGINX and another between NGINX and the server. Acting as an intermediary in the connection makes NGINX a proxy.

Operating at Layer 7, NGINX has the ability to read and manipulate the data packet bound from the client to the server, and the response bound from the server to the client. In this way, NGINX can understand higher-level application protocols such as HTTP and use that information for routing, whereas Layer 4 load balancers just enable the transport of a connection.
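To make this concrete, the following is a minimal NGINX configuration sketch of Layer 7 routing; the upstream names and addresses are hypothetical. A Layer 4 load balancer cannot make this kind of URI-based decision because it never reads the application data.

http {
    # Hypothetical backend pools
    upstream api_servers {
        server 10.0.1.10:8080;
        server 10.0.1.11:8080;
    }
    upstream web_servers {
        server 10.0.2.10:8080;
    }
    server {
        listen 80;
        # Routing on the request URI is possible only at Layer 7
        location /api/ {
            proxy_pass http://api_servers;
        }
        location / {
            proxy_pass http://web_servers;
        }
    }
}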

Use Cases

There are valid use cases for both. With Layer 4 load balancing, the direct connection between the client and the server has benefits. The server receives the connection with all of the original connection information, without having to understand the proxy protocol. This is especially important for legacy applications that depend on a direct connection with the client. A proxy scenario has its own benefits: because the proxy controls the backend connections, it can optimize or manipulate them in any way it needs to. If an application behind a proxy relies on client connection information, it simply needs to understand the proxy protocol, which prepends information about the proxies a request has passed through on its way to the server. Because the proxy protocol rides along with the Layer 7 protocol in the data stream, the information arrives in the application layer rather than in the TCP/IP connection headers.
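As a sketch of this setup, assuming NGINX is built with the real IP module, the following configuration accepts the proxy protocol from an upstream Layer 4 load balancer and restores the original client address; all addresses are hypothetical.

server {
    # Accept the proxy protocol header prepended to the connection's data stream
    listen 80 proxy_protocol;
    # Trust proxy protocol information only from the load balancer's subnet
    set_real_ip_from 10.0.0.0/8;
    real_ip_header proxy_protocol;
    location / {
        # Hand the original client address to the application as an HTTP header
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://10.0.3.10:8080;
    }
}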

Despite these differences, the two solutions have things in common. Both NGINX and Azure Load Balancer are able to load balance traffic and can route based on connection information. Both are able to listen for incoming traffic on one port and direct the request to a backend service that may be on a different port; in the Layer 4 scenario this is considered Network Address Translation, or NAT, whereas in a proxy scenario it has no special name, being simply part of creating the second connection. Both Azure Load Balancer and NGINX can perform TCP and UDP load balancing, as the sketch below shows for NGINX.
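The following NGINX stream module sketch, with hypothetical addresses, load balances UDP DNS traffic while the frontend and backend ports differ, the proxy-world analog of the NAT scenario above.

stream {
    upstream dns_servers {
        # The backends listen on a different port than the frontend
        server 10.0.4.10:5353;
        server 10.0.4.11:5353;
    }
    server {
        # Listen on the standard DNS port and balance across the pool
        listen 53 udp;
        proxy_pass dns_servers;
    }
}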

While the solutions are different from each other and serve their own use cases, it’s not uncommon to see them working together. Azure Load Balancer complements NGINX well when multiple NGINX machines are deployed and the traffic bound for them needs to be balanced with something more sophisticated than DNS. Don’t be surprised to see Azure Load Balancer in front of a fleet of NGINX nodes.

Comparing NGINX and Azure Application Gateway Functionality

Where Azure Load Balancer and NGINX differ, Azure Application Gateway and NGINX have more commonalities. Azure Application Gateway operates at Layer 7, like NGINX. To operate at this layer, Azure Application Gateway must and does act as a proxy. One major difference is that NGINX can do this for all TCP and UDP protocols, whereas Application Gateway focuses solely on HTTP(S).

By receiving the request and understanding the HTTP protocol, Application Gateway is able to use HTTP header information to make decisions about how to route or respond to requests. The idea of an Application Gateway, or API gateway, is to consolidate multiple microservices that make up a service or product offering under a single API endpoint. This API endpoint understands the services that it’s providing for, as well as its overall API spec.

Concept Versus Product Terminology

In the following sections, we will use the term API gateway to refer to a concept that Azure Application Gateway, NGINX, and other application delivery controllers all fit into. When referring to the product Azure Application Gateway, we’ll use the term Azure Application Gateway or Application Gateway.

By having a full understanding of the API spec, an API gateway can validate requests on their way to your application. If a request is invalid, it can be denied at the API gateway. Basic matching of requests for redirection is also possible. The power and necessity of an API gateway lie in its ability to route traffic to different backend services based on the URI path. Microservices for RESTful APIs are typically broken up by the sets of API resources they handle, and that’s reflected by the API’s path. In this way, we can use URI path matching to direct requests for specific API resources, based on information in the URI, to the correct microservices.

An example of URI-based routing would be if we had two services, authentication and users. Our API gateway handles requests for both but routes each request based on the URI. Resource requests for authentication are behind a URI path prefix of /auth/, and requests for the users service are behind a URI path prefix of /users/. Figure 4-1 depicts this scenario.

Figure 4-1. URI-based routing with Azure Application Gateway.
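In NGINX terms, the routing in Figure 4-1 might be sketched as follows; the upstream names, addresses, and certificate paths are hypothetical.

upstream auth_service {
    server 10.0.5.10:8080;
}
upstream users_service {
    server 10.0.6.10:8080;
}
server {
    listen 443 ssl;
    ssl_certificate     /etc/nginx/certs/api.crt;
    ssl_certificate_key /etc/nginx/certs/api.key;
    # Route each URI path prefix to its own microservice
    location /auth/ {
        proxy_pass http://auth_service;
    }
    location /users/ {
        proxy_pass http://users_service;
    }
}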

Once the API gateway has validated a request and matched a URI path routing rule, it can manipulate the request it makes to the backend service by altering headers or URI paths. It can perform these actions because, as a proxy, it makes its own connection and requests to the backend service on behalf of the request it received from the client. This is important to note because you may have headers that are used only internally, or paths on the backend services may not match exactly what your frontend API provides. By virtue of being a proxy, the API gateway is also able to terminate SSL/TLS, meaning that connections to backend services may be unencrypted or may use a different certificate.
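Continuing the sketch above, a single location block can demonstrate these manipulations; the header name and backend path are illustrative.

location /users/ {
    # Clear an internal-only header so it never reaches the backend
    proxy_set_header X-Internal-Trace "";
    # The public path /users/... maps to /v2/users/... on the backend
    rewrite ^/users/(.*)$ /v2/users/$1 break;
    # TLS was terminated above; this backend connection is plain HTTP
    proxy_pass http://users_service;
}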

Once the request is ready to be sent to a backend service, the feature sets of what could be considered an API gateway and what more advanced data plane services provide start to differ. Both Azure Application Gateway and NGINX are able to provide load balancing, whereas some API gateways would simply pass the request to a load balancer. Having load balancing built into an API gateway solution is nice because it saves a hop in the connection and provides complete control over client-server communication and routing in a single system.

Connection Draining

A useful feature that both Azure Application Gateway and NGINX Plus provide is connection draining, which allows live connections to finish before removing a backend node from the load-balancing pool. This feature is not available in the open source version of NGINX.
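In NGINX Plus, a node can be put into drain mode directly in the upstream block (or at runtime through the NGINX Plus API); the following is a minimal sketch with hypothetical addresses.

upstream backend {
    # A shared memory zone keeps runtime state for the upstream
    zone backend 64k;
    server 10.0.7.10:8080;
    # Drain: requests tied to existing persistent sessions still reach this
    # node, but new clients are routed elsewhere until it is removed
    server 10.0.7.11:8080 drain;
}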

When load balancing requests, there is sometimes a need for session persistence. In Azure it’s referred to as session affinity. When a backend service does not share session state between horizontally scaled nodes, subsequent requests need to be routed to the same backend node. The most common case of requiring session persistence is when legacy applications are ported to a cloud environment and session state has not yet moved off local disk or memory to storage that is network addressable, such as Redis or Memcached. This is less common with API gateways, as they were built around more modern-day web architecture. A scenario in which an API gateway may require session persistence might be when the session data being worked with is too large to be performant over the network.
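With NGINX Plus, session persistence can be added to an upstream with the sticky directive; this sketch uses hypothetical names and a cookie-based approach.

upstream legacy_app {
    zone legacy_app 64k;
    server 10.0.8.10:8080;
    server 10.0.8.11:8080;
    # NGINX Plus issues the srv_id cookie so subsequent requests from the
    # same client return to the same backend node
    sticky cookie srv_id expires=1h path=/;
}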

An important feature that both NGINX and Azure Application Gateway provide is support for WebSockets and HTTP/2 traffic. HTTP/2 enables the client-server connection to pass multiple requests through a single connection, cutting down on handshakes and SSL/TLS negotiations; the server in the HTTP/2 case is the API gateway. A WebSocket enables bidirectional communication between the client and server over a long-standing connection; the server in this case is the backend application server.
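A minimal open source NGINX sketch covers both features; the certificate paths and backend address are hypothetical.

server {
    # Negotiate HTTP/2 with clients; browsers require TLS for HTTP/2
    listen 443 ssl http2;
    ssl_certificate     /etc/nginx/certs/api.crt;
    ssl_certificate_key /etc/nginx/certs/api.key;
    location /ws/ {
        # WebSockets require HTTP/1.1 and the Upgrade handshake end to end
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_pass http://10.0.2.10:8080;
    }
}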

A feature that NGINX provides but Azure Application Gateway does not is HTTP/2 push. HTTP/2 push is a feature of the HTTP/2 protocol in which the server can push extra documents that it knows are going to be subsequent requests. One common example would be in response to a request for index.html, where the server knows that the browser will also need some CSS and JavaScript documents. The server can push those documents with the response for index.html to save on round-trip requests.
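In NGINX this maps to the http2_push directive, one line per pushed asset; the paths here are hypothetical.

location = /index.html {
    # Push assets the browser is known to request right after index.html
    http2_push /styles/main.css;
    http2_push /scripts/app.js;
}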

Azure Application Gateway and NGINX are a lot alike; however, Azure Application Gateway is missing one important and welcome feature of an API gateway, which is the ability to validate authentication. The web has begun to standardize on JSON Web Tokens, or JWTs, which use asymmetric encryption to validate identity claims. Standard authentication workflows such as OAuth 2.0 and OpenID Connect utilize JWTs, which enables services that can validate JWTs to take part in the authentication validation process. NGINX Plus is able to validate JWTs out of the box, whereas with open source NGINX, validation requires extra work through scripting or third-party modules. Both NGINX and NGINX Plus can also perform authentication subrequests, where the request or a portion of the request is sent to an authentication service for validation before NGINX proxies the request to a backend service. Azure Application Gateway does not offer any authentication validation, meaning your services will need to validate the request once it is received, whereas this action could and should be offloaded to the API gateway whenever possible.
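Both approaches can be sketched as follows; the key file, paths, and service addresses are hypothetical. The auth_jwt directive is NGINX Plus only, while the auth_request module works in either version when compiled in.

# NGINX Plus: validate the JWT presented in the Authorization header
location /api/ {
    auth_jwt "api";
    auth_jwt_key_file /etc/nginx/jwks.json;
    proxy_pass http://10.0.1.10:8080;
}

# Open source NGINX or NGINX Plus: authentication subrequest
location /app/ {
    auth_request /_validate;
    proxy_pass http://10.0.2.10:8080;
}
location = /_validate {
    internal;
    # Forward only the headers of the original request to the auth service
    proxy_pass_request_body off;
    proxy_set_header Content-Length "";
    proxy_pass http://10.0.9.10:8080;
}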

Comparing NGINX and Azure Web Application Firewall Capabilities

A Web Application Firewall (WAF) is a Layer 7 proxy that specifically reviews the request for security exploits and determines whether the request should be proxied to the backend service or denied. The request is evaluated against a number of rules that look for things like cross-site scripting, SQL injection, known bad bots, protocol violations, application language and framework-specific vulnerabilities, and size limit abuse. Azure services and NGINX are able to act as Web Application Firewalls.

Azure provides Web Application Firewall capabilities in the form of policies that can be attached to the Azure Front Door and Application Gateway services. An Azure WAF policy comprises a number of rules: managed rule sets supplied by Azure, of which at least one must be configured, and custom rules defined by you. Individual rules within a managed rule set can be disabled if necessary. The managed rule sets provide protection out of the box, and you can build and apply your own custom rules on top of them. Specific custom WAF rules or entire policies can be set to block requests or to passively monitor and record events.

When using custom rules, you can match on a number of different conditions gleaned from the request. A rule is made up of several components: the match type, the variable to inspect, an operator, and the matching pattern.

The following describes the types of rules that can be set up and their different options; a sketch of a custom rule appears after the list:

IP address
The source IP address of the request is matched inclusively or exclusively against a CIDR range or a specific IP address.
Number
A numeric value derived from the query string, request URI, headers, arguments, body, or cookies that is or is not less than, greater than, or equal to a specific value.
String
A string value derived from the query string, request URI, headers, arguments, body, or cookies. An operator evaluates whether the derived string contains, begins with, ends with, or is equal to the value provided in the rule.
Geo location
A variable derived from the source IP or a request header is compared against an array of country or region codes. The rule allows the provided country or region code list to be inclusive or exclusive.
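As an illustration, a custom IP address rule might be represented in a WAF policy's JSON form roughly as follows; the rule name, priority, and address range are hypothetical, and the exact schema should be confirmed against the WAF policy resource you are using.

{
    "name": "block_bad_ips",
    "priority": 10,
    "ruleType": "MatchRule",
    "action": "Block",
    "matchConditions": [
        {
            "matchVariables": [
                { "variableName": "RemoteAddr" }
            ],
            "operator": "IPMatch",
            "matchValues": [ "203.0.113.0/24" ]
        }
    ]
}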

Azure WAF policies log and produce a metric for each blocked request. The log contains metadata about the request and the rule that blocked it; the metric can be filtered by rule name and action type and is found in Azure Monitor. Logs can be streamed to an Azure Storage Account, Event Hub, or Log Analytics workspace. This monitoring information lets you analyze how your WAF rules are performing and whether they're flagging false positives. With any WAF, you should first run your rule set in a mode that passively monitors live traffic for rule violations, review the results, and confirm that the WAF is working appropriately before enabling it to actively block traffic.

The Azure WAF policies are a great addition to the Azure managed service offerings. WAF policies should be enabled at any layer of your Azure environment to which they can be applied. Because they are fully managed and come with default rule sets, there's no reason not to take advantage of them.

ModSecurity

The aforementioned functionality provides the basis for what would be considered a WAF: evaluating requests to block based on matching rules. These rules can be configured to be extremely versatile and specific to block all sorts of attacks. This type of functionality can be found in the open source Web Application Firewall ModSecurity, which integrates directly into NGINX. ModSecurity is a rule engine specifically for matching web request attributes.

Installing ModSecurity for NGINX provides the same type of plug-in option as the Azure WAF policies do for Application Gateway. With ModSecurity, you can find a number of community-maintained rule sets ready for use, plug them in, and get going. ModSecurity's configuration capabilities go extremely deep, such that entire books have been written on the topic. One of the most popular community-maintained rule sets is the OWASP ModSecurity Core Rule Set (CRS). The OWASP CRS is one of the two managed rule sets provided by Azure WAF policies; the other is a list specifically about bots. The OWASP CRS is versioned, and at the time of writing, the latest public rule set version is 3.2, while the latest offered by Azure is 3.1.
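Wiring ModSecurity into NGINX amounts to loading the connector module and pointing it at a rules file that in turn includes the CRS; this sketch uses hypothetical paths.

# Load the ModSecurity-nginx connector module
load_module modules/ngx_http_modsecurity_module.so;

http {
    # Turn the rule engine on and include the rules file,
    # which itself includes the OWASP CRS
    modsecurity on;
    modsecurity_rules_file /etc/nginx/modsec/main.conf;
}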

Another extremely popular rule set is from Trustwave SpiderLabs. It requires a commercial subscription but is updated daily, so your ModSecurity rules are always up to date on the most recently discovered vulnerabilities and web attack patterns. The increased rate of updates on current web attacks is worth a premium over waiting for Azure to update its managed rule sets.

If you are comparing these two options, you're weighing a fully managed solution against a DIY open source solution. There are clear pros and cons here. Being fully managed with simplified configuration is a clear pro for Azure WAF policies. Bleeding-edge updates to security patterns and advanced configuration are a clear win for NGINX with ModSecurity. The cons are the exact reverse of the pros: NGINX must be managed by you and is more complicated to configure, whereas Azure is not bleeding edge on security updates but is easy to configure and doesn't require management on your part. This, however, does not have to be an either/or comparison. You can use a mix of the two, applying Azure WAF policies to Azure Front Door and using NGINX as a WAF at the API gateway layer. A determination of what is best for your situation will depend on circumstantial conditions within your organization.

NGINX App Protect

After F5 acquired NGINX, it integrated the F5 WAF with NGINX Plus to create a commercial WAF option for NGINX Plus called the NGINX App Protect module. The App Protect module is more advanced than ModSecurity and receives updated signatures from F5 to keep the rules up to date with the latest security policies.

To use NGINX App Protect, you need subscriptions to both NGINX Plus and NGINX App Protect. You can subscribe through the Azure Marketplace (NGINX Plus with NGINX App Protect), or install the packages from a private NGINX Plus repository using your system's package manager. After the module is installed, it can be dynamically loaded into NGINX Plus, enabled, and provided with a policy file. A passive mode can be enabled by turning on the module's logging directive and providing a log location, which consists of a JSON configuration file and a destination. The destination may be a local or remote syslog receiver, a file, or /dev/stderr. The JSON configuration file enables filtering of which events are logged. An example follows:

{
   "filter":{
      "request_type":"all"
   },
   "content":{
      "format":"default",
      "max_request_size":"any",
      "max_message_size":"5k"
   }
}
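With a log configuration like the one above saved to a file, the corresponding nginx.conf directives can be sketched as follows; the module path, policy path, and syslog destination are hypothetical.

# Load the dynamically built App Protect module
load_module modules/ngx_http_app_protect_module.so;

http {
    server {
        listen 443 ssl;
        # Enable enforcement and reference the policy file
        app_protect_enable on;
        app_protect_policy_file "/etc/app_protect/conf/blocking_policy.json";
        # Log security events, filtered by the JSON configuration above
        app_protect_security_log_enable on;
        app_protect_security_log "/etc/app_protect/conf/log_all.json"
            syslog:server=10.0.9.10:514;
    }
}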

As mentioned before, it is recommended that you monitor a rule set before enabling it to understand the pattern of what will be blocked or allowed.

Once logging is set up, the App Protect module is open to a vast amount of configuration through the policy file. NGINX and F5 have provided a number of different templates to enable you to protect your apps with high-level definitions rather than building your own rules, though that is an option. Each policy provides the ability to set an enforcementMode attribute to transparent or blocking. This is an advantage over turning the entire WAF on or off because you can test certain policies while still enforcing those policies you know are good.

The attribute names of a policy file speak for themselves. The following is an example of a policy:

{
    "policy": {
        "name": "blocking_policy",
        "template": { "name": "POLICY_TEMPLATE_NGINX_BASE" },
        "applicationLanguage": "utf-8",
        "enforcementMode": "blocking",
        "blocking-settings": {
            "violations": [
                {
                    "name": "VIOL_JSON_FORMAT",
                    "alarm": true,
                    "block": true
                },
                {
                    "name": "VIOL_PARAMETER_VALUE_META CHAR",
                    "alarm": true,
                    "block": false
                }
            ]
        }
    }
}

At its core, App Protect is still using the same information from requests to look for malicious requests based on a number of filters, but the funding behind it has enabled it to advance past what’s going on in the open source WAF options. One of the most valuable features of the App Protect module is its ability to filter responses, which enables us to filter outbound data to prevent sensitive data from leaving the system. Credit card information is an example of data that should never be returned to the user, and with the ability to filter responses, we can ensure that it doesn’t. When dealing with sensitive information, risk reduction of data leaks is of the highest importance.
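In an App Protect policy file, this response filtering is exposed through the Data Guard feature. The following minimal sketch shows what enabling it might look like; the policy name is hypothetical, and attribute support should be checked against your App Protect version.

{
    "policy": {
        "name": "data_guard_policy",
        "template": { "name": "POLICY_TEMPLATE_NGINX_BASE" },
        "enforcementMode": "blocking",
        "data-guard": {
            "enabled": true,
            "maskData": true,
            "creditCardNumbers": true
        }
    }
}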

App Protect is, in a way, a managed service because of the updated signatures and the number of high-level features. Prebuilt parsers for application data transfer standards like JSON and XML, for SQL and MongoDB syntaxes, and for Linux and Windows commands enable higher-level controls. Signature updates take a load of security management responsibility off an organization. It takes a certain degree of skill and effort to build complex filter rules that block only bad requests while staying up to date with the landscape of active new threats.

NGINX Plus with the App Protect module flips the management versus configurability scenario. The rules are tightly managed by the subscription, and the configuration options are more in-depth, but you have to manage the hosting and underlying OS. Hosting and ensuring availability is par for the course in cloud environments, and thus if you build and configure your NGINX Plus layer as you do your application code, it’s no more than another app on the stack. This makes a solid case for distributing your data plane technologies; by layering fully managed with highly configurable and up to date, you build toward the highest levels of security and availability.

Highly Available Multiregion NGINX Plus with Traffic Manager

Now that you have an understanding of how managed Azure load-balancing solutions and NGINX compare, we’ll take a look at how you can layer solutions to enhance your web application offering.

All of the same concepts apply when using NGINX as a load balancer or API gateway in place of the Azure managed offerings. Because Azure's global managed services provide distribution and points of presence around the world, you should use them to distribute load and route client requests to the correct environment region.

Figure 4-2 shows a multiregion deployment using NGINX Plus as an API gateway in both regions. NGINX Plus is also used to load balance over a database tier. Traffic is routed through Traffic Manager using the Performance algorithm so that clients receive responses from the lowest-latency region.

If the request is not cached, the Content Delivery Network proxies the request to the nearest region, where it is received by NGINX Plus. NGINX Plus decrypts the request in the case of HTTPS, inspects it, and routes it to different server pools based on the request URI. The backend service may make a request through another NGINX Plus load-balancing tier to access the database.

Figure 4-2. A multiregion web application on Azure using the Azure Content Delivery Network, which uses Traffic Manager to route the client’s request to the point of presence closest to the user.

Figure 4-3 depicts a scenario in which Traffic Manager uses geography-based routing to direct a client in California to the US-West Azure region. The client makes a DNS request, and Traffic Manager responds with the endpoint for US-West. The client then makes a direct connection to NGINX Plus in the US-West region. NGINX Plus decrypts the request in the case of HTTPS, inspects it, routes it based on its own rules, and proxies the request, which may be re-encrypted.

Figure 4-3. NGINX Plus, along with GeoDNS, enables a globally distributed application.

In these scenarios, Traffic Manager directs our client to an available region that best fits the client's needs or our regulatory requirements. NGINX Plus provides the API gateway functionality, as well as internal load balancing. Together, these solutions enable high-availability failover as well as highly configurable traffic routing within our environment.

Conclusion

Microsoft Azure provides a number of different data plane managed services to aid in stronger and more reliable delivery of your application. Throughout this chapter, you learned how Azure Load Balancer, Application Gateway, and WAF Policies work, how they differ, and how they can be used with NGINX or NGINX Plus. The knowledge of where these different services fit in the stack enables you to make informed architecture decisions about load balancing and application delivery for your cloud platform and about the trade-off between functionality and management.

In this chapter we introduced the idea of layering Microsoft Azure managed load balancing solutions with NGINX. In the next chapter, we will add another layer to the managed Azure data plane service by looking at the Azure Front Door service and at how it can be used with NGINX.
