NGINX is able to authenticate clients. Authenticating client requests with NGINX offloads work and provides the ability to stop unauthenticated requests from reaching your application servers. Modules available for NGINX Open Source include basic authentication and authentication subrequests. The NGINX Plus exclusive module for verifying JSON Web Tokens (JWTs) enables integration with third-party authentication providers that use the authentication standard OpenID Connect.
You need to secure your application or content via HTTP basic authentication.
Generate a file in the following format, where the password is encrypted or hashed with one of the allowed formats:

# comment
name1:password1
name2:password2:comment
name3:password3
The username is the first field, the password the second, and the delimiter is a colon. There is an optional third field, which you can use to comment on each user. NGINX understands a few different formats for passwords, one of which is passwords encrypted with the C function crypt(). This function is exposed to the command line by the openssl passwd command. With openssl installed, you can create encrypted password strings by using the following command:
$ openssl passwd MyPassword1234
The output will be a string that NGINX can use in your password file.
Use the auth_basic and auth_basic_user_file directives within your NGINX configuration to enable basic authentication:
location / {
    auth_basic           "Private site";
    auth_basic_user_file conf.d/passwd;
}
You can use the auth_basic directive in the HTTP, server, or location contexts. The auth_basic directive takes a string parameter, which is displayed in the basic authentication pop-up window when an unauthenticated user arrives. The auth_basic_user_file directive specifies a path to the user file.
To test your configuration, you can use curl with the -u or --user flag to build an Authorization header for the request:
$ curl --user myuser:MyPassword1234 https://localhost
You can generate basic authentication passwords a few ways, and in a few different formats, with varying degrees of security. The htpasswd command from Apache can also generate passwords. Both the openssl and htpasswd commands can generate passwords with the apr1 algorithm, which NGINX can also understand. The password can also be in the salted SHA-1 format that Lightweight Directory Access Protocol (LDAP) and Dovecot use. NGINX supports more formats and hashing algorithms; however, many of them are considered insecure because they can easily be defeated by brute-force attacks.
You can use basic authentication to protect the context of the entire NGINX host, specific virtual servers, or even just specific location blocks. Basic authentication won’t replace user authentication for web applications, but it can help keep private information secure. Under the hood, basic authentication is done by the server returning a 401 Unauthorized HTTP code with the response header WWW-Authenticate. This header will have a value of Basic realm="your string". This response causes the browser to prompt for a username and password. The username and password are concatenated and delimited with a colon, then base64-encoded, and then sent in a request header named Authorization. The Authorization request header will specify the Basic scheme and the base64-encoded user:password string. The server decodes the header and verifies it against the provided auth_basic_user_file. Because the username and password string is merely base64-encoded, it’s recommended to use HTTPS with basic authentication.
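The mechanics described above can be sketched in a few lines of Python; the function names here are illustrative, not part of any NGINX tooling:

```python
import base64

def basic_auth_header(user, password):
    """Build the Authorization header value a browser would send."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return f"Basic {token}"

def decode_basic_auth(header):
    """Reverse the encoding, as the server does before checking the user file."""
    scheme, _, token = header.partition(" ")
    assert scheme == "Basic"
    user, _, password = base64.b64decode(token).decode().partition(":")
    return user, password

header = basic_auth_header("myuser", "MyPassword1234")
print(header)                     # Basic bXl1c2VyOk15UGFzc3dvcmQxMjM0
print(decode_basic_auth(header))  # ('myuser', 'MyPassword1234')
```

This is exactly the header curl builds for you when given the --user flag, and it makes plain why base64 alone provides no confidentiality.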
You have a third-party authentication system for which you would like requests authenticated.
Use the http_auth_request_module to make a request to the authentication service to verify identity before serving the request:
location /private/ {
    auth_request     /auth;
    auth_request_set $auth_status $upstream_status;
}

location = /auth {
    internal;
    proxy_pass              http://auth-server;
    proxy_pass_request_body off;
    proxy_set_header        Content-Length "";
    proxy_set_header        X-Original-URI $request_uri;
}
The auth_request directive takes a URI parameter that must be a local internal location. The auth_request_set directive allows you to set variables from the authentication subrequest.
The http_auth_request_module enables authentication on every request handled by the NGINX server. The module uses a subrequest to determine whether the request is authorized to proceed. A subrequest is when NGINX passes the request to an alternate internal location and observes its response before routing the request to its destination. The /auth location passes the original request, including the body and headers, to the authentication server. The HTTP status code of the subrequest is what determines whether or not access is granted. If the subrequest returns an HTTP 200 status code, the authentication is successful and the request is fulfilled. If the subrequest returns HTTP 401 or 403, the same will be returned for the original request.
If your authentication service does not need the request body, you can drop it with the proxy_pass_request_body directive, as demonstrated. This practice will reduce the request size and time. Because the request body is discarded, the Content-Length header must be set to an empty string. If your authentication service needs to know the URI being accessed by the request, you’ll want to put that value in a custom header that your authentication service checks and verifies. If there are things you do want to keep from the subrequest to the authentication service, like response headers or other information, you can use the auth_request_set directive to make new variables out of response data.
You need to validate a JWT before the request is handled with NGINX Plus.
Use NGINX Plus’s HTTP JWT authentication module to validate the token signature and embed JWT claims and headers as NGINX variables:
location /api/ {
    auth_jwt          "api";
    auth_jwt_key_file conf/keys.json;
}
This configuration enables validation of JWTs for this location. The auth_jwt directive is passed a string, which is used as the authentication realm. The auth_jwt directive takes an optional token parameter of a variable that holds the JWT. By default, the Authorization header is used, per the JWT standard. The auth_jwt directive can also be used to cancel the effects of required JWT authentication from inherited configurations. To cancel inherited authentication requirements, pass the off keyword to the auth_jwt directive with nothing else. The auth_jwt_key_file directive takes a single parameter: the path to the key file in standard JSON Web Key (JWK) format.
NGINX Plus is able to validate the JSON Web Signature type of token, as opposed to the JSON Web Encryption type, where the entire token is encrypted. NGINX Plus is able to validate signatures that are signed with the HS256, RS256, and ES256 algorithms. Having NGINX Plus validate the token can save the time and resources needed to make a subrequest to an authentication service. NGINX Plus deciphers the JWT header and payload, and captures the standard headers and claims into embedded variables for your use. The auth_jwt directive can be used in the http, server, location, and limit_except contexts.
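To illustrate what signature validation involves for the simplest of these algorithms, the following Python sketch signs and verifies an HS256 token using only the standard library. In production you would let NGINX Plus or a vetted JWT library do this; the sketch exists only to show the mechanism:

```python
import base64
import hashlib
import hmac
import json

def b64url(data):
    """Base64url-encode without padding, as the JWT standard requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=")

def sign_hs256(payload, key):
    """Build header.payload.signature with an HMAC-SHA256 signature."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    signing_input = header + b"." + body
    sig = b64url(hmac.new(key, signing_input, hashlib.sha256).digest())
    return (signing_input + b"." + sig).decode()

def verify_hs256(token, key):
    """Recompute the HMAC over header.payload and compare in constant time."""
    signing_input, _, sig = token.rpartition(".")
    expected = b64url(hmac.new(key, signing_input.encode(), hashlib.sha256).digest())
    return hmac.compare_digest(expected.decode(), sig)

key = b"shared-secret"
token = sign_hs256({"sub": "user1"}, key)
print(verify_hs256(token, key))       # True
print(verify_hs256(token, b"wrong"))  # False
```

RS256 and ES256 work the same way structurally, but the signature is produced with a private key and verified with the public key from the JWK file, which is what lets NGINX Plus validate tokens it could never have issued.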
You need a JSON Web Key (JWK) for NGINX Plus to use.
NGINX Plus utilizes the JWK format as specified in the RFC standard (RFC 7517). This standard allows for an array of key objects within the JWK file.
The following is an example of what the key file may look like:
{
    "keys": [
        {
            "kty": "oct",
            "kid": "0001",
            "k": "OctetSequenceKeyValue"
        },
        {
            "kty": "EC",
            "kid": "0002",
            "crv": "P-256",
            "x": "XCoordinateValue",
            "y": "YCoordinateValue",
            "d": "PrivateExponent",
            "use": "sig"
        },
        {
            "kty": "RSA",
            "kid": "0003",
            "n": "Modulus",
            "e": "Exponent",
            "d": "PrivateExponent"
        }
    ]
}
The JWK file shown demonstrates the three initial types of keys noted in the RFC standard. The format of these keys is also part of the RFC standard. The kty attribute is the key type. This file shows three key types: Octet Sequence (oct), Elliptic Curve (EC), and RSA. The kid attribute is the key ID. Other attributes of these keys are specified in the standard for that type of key. Look to the RFC documentation of these standards for more information.
There are numerous libraries available in many different languages to generate JWKs. It’s recommended to create a key service that is the central JWK authority, creating and rotating your JWKs at a regular interval. For enhanced security, it’s recommended to make your JWKs as secure as your SSL/TLS certificates. Secure your key file with proper user and group permissions. Keeping keys in memory on your host is best practice; you can do so by creating an in-memory filesystem like ramfs. Rotating keys on a regular interval is also important; you may opt to create a key service that creates public and private keys and offers them to the application and NGINX via an API.
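As a minimal illustration of such generation, the following Python sketch creates a symmetric (oct) JWK and wraps it in a key set. The kid value and key size are arbitrary choices for the example, and asymmetric (EC or RSA) keys would require a cryptography library rather than the standard library alone:

```python
import base64
import json
import os

def make_oct_jwk(kid, size=32):
    """Generate a symmetric (oct) JWK.

    The "k" member is the base64url-encoded raw key bytes, per RFC 7517/7518.
    """
    key = os.urandom(size)
    return {
        "kty": "oct",
        "kid": kid,
        "k": base64.urlsafe_b64encode(key).rstrip(b"=").decode(),
    }

# Wrap the key in the {"keys": [...]} structure NGINX Plus expects.
jwks = {"keys": [make_oct_jwk("0001")]}
print(json.dumps(jwks, indent=2))
```

A key service built around logic like this would also track kid values over time so that tokens signed with a previous key can still be validated during rotation.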
You want to validate JSON Web Tokens with NGINX Plus.
Use the JWT module that comes with NGINX Plus to secure a location or server, and instruct the auth_jwt directive to use $cookie_auth_token as the token to be validated:
location /private/ {
    auth_jwt          "Google Oauth" token=$cookie_auth_token;
    auth_jwt_key_file /etc/nginx/google_certs.jwk;
}
This configuration directs NGINX Plus to secure the /private/ URI path with JWT validation. Google OAuth 2.0 OpenID Connect uses the cookie auth_token rather than the default bearer token. Thus, you must instruct NGINX to look for the token in this cookie rather than in the NGINX Plus default location. The auth_jwt_key_file location is set to an arbitrary path, which is a step that we cover in Recipe 6.6.
This configuration demonstrates how you can validate a Google OAuth 2.0 OpenID Connect JWT with NGINX Plus. The NGINX Plus JWT authentication module for HTTP is able to validate any JWT that adheres to the RFC for the JSON Web Signature specification, instantly enabling any SSO authority that utilizes JWTs to be validated at the NGINX Plus layer. The OpenID Connect 1.0 protocol is a layer on top of the OAuth 2.0 authorization protocol that adds identity, enabling the use of JWTs to prove the identity of the user sending the request. With the signature of the token, NGINX Plus can validate that the token has not been modified since it was signed. In this way, Google uses an asymmetric signing method, which makes it possible to distribute public JWKs while keeping its private JWK secret.
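When debugging a setup like this, it helps to inspect a token's claims. The following Python sketch decodes the payload segment of a JWT without verifying the signature; the sample token and its claim values are fabricated for illustration, so never trust unverified claims for authorization decisions:

```python
import base64
import json

def jwt_claims(token):
    """Decode the payload (middle) segment of a JWT without verification."""
    payload = token.split(".")[1]
    padded = payload + "=" * (-len(payload) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(padded))

# Build a fabricated sample token: header.payload.signature
sample = ".".join(
    base64.urlsafe_b64encode(json.dumps(part).encode()).rstrip(b"=").decode()
    for part in ({"alg": "RS256"}, {"iss": "https://accounts.google.com", "sub": "1234"})
) + ".signature"

print(jwt_claims(sample)["iss"])  # https://accounts.google.com
```

Checking the iss and aud claims this way is a quick sanity test that the cookie actually carries the token you think it does before involving NGINX Plus.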
You want NGINX Plus to automatically request the JSON Web Key Set (JWKS) from a provider and cache it.
Utilize a cache zone and the auth_jwt_key_request directive to automatically keep your key up to date:
proxy_cache_path /data/nginx/cache levels=1 keys_zone=foo:10m;

server {
    # ...
    location / {
        auth_jwt             "closed site";
        auth_jwt_key_request /jwks_uri;
    }

    location = /jwks_uri {
        internal;
        proxy_cache foo;
        proxy_pass  https://idp.example.com/keys;
    }
}
In this example, the auth_jwt_key_request directive instructs NGINX Plus to retrieve the JWKS from an internal subrequest. The subrequest is directed to /jwks_uri, which proxies the request to an identity provider. The request is cached for a default of 10 minutes to limit overhead.
The auth_jwt_key_request directive was introduced in NGINX Plus R17. This feature enables the NGINX Plus server to dynamically update its JWKs when a request is made. A subrequest method is used to fetch the JWKs, which means the location that the directive points to must be local to the NGINX Plus server. In the example, the subrequest location is locked down to ensure that only internal NGINX Plus requests are served. A cache is also used to ensure that the JWK retrieval request is made only as often as necessary and does not overload the identity provider. The auth_jwt_key_request directive is valid in the http, server, location, and limit_except contexts.
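The caching behavior can be sketched in Python as a time-to-live cache wrapped around a fetch function. This mirrors the effect of the proxy_cache in the configuration above; the JWKSCache class and fake_fetch function are illustrative assumptions, not NGINX components:

```python
import time

class JWKSCache:
    """Cache a JWKS document for a fixed TTL, refetching only on expiry."""

    def __init__(self, fetch, ttl_seconds=600):
        self._fetch = fetch      # callable returning the JWKS document
        self._ttl = ttl_seconds  # 600s mirrors the 10-minute proxy_cache default
        self._value = None
        self._expires = 0.0

    def get(self):
        now = time.monotonic()
        if self._value is None or now >= self._expires:
            self._value = self._fetch()
            self._expires = now + self._ttl
        return self._value

calls = []
def fake_fetch():
    """Stand-in for an HTTPS request to the identity provider's JWKS endpoint."""
    calls.append(1)
    return {"keys": []}

cache = JWKSCache(fake_fetch, ttl_seconds=600)
cache.get()
cache.get()
print(len(calls))  # 1 (the second get() is served from cache)
```

As with the NGINX cache, the trade-off is freshness versus load on the identity provider: a shorter TTL picks up rotated keys sooner but issues more fetches.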
This solution consists of a number of configuration aspects and a bit of njs (NGINX JavaScript) code. The identity provider (IdP) must support OpenID Connect 1.0. NGINX Plus will act as a relying party in an OIDC Authorization Code Flow.
NGINX Inc. maintains a public GitHub repository containing configuration and code as a reference implementation of OIDC integration with NGINX Plus. The repository has up-to-date instructions on how to set up the reference implementation with your own IdP.
This solution simply linked to a reference implementation to ensure that you, the reader, have the most up-to-date solution. The reference provided configures NGINX Plus as a relying party in an authorization code flow for OpenID Connect 1.0. When unauthenticated requests for protected resources are made to NGINX Plus in this configuration, NGINX Plus first redirects the request to the IdP. The IdP takes the client through its own login flow, and returns the client to NGINX Plus with an authorization code. NGINX Plus then communicates directly with the IdP to exchange the authorization code for a set of ID tokens. These tokens are validated as JWTs and stored in NGINX Plus’s key-value store. By using the key-value store, the tokens are made available to all NGINX Plus nodes in a highly available (HA) configuration. During this process, NGINX Plus generates a session cookie for the client that is used as the key to look up the token in the key-value store. The client is then served a redirect with the cookie to the initially requested resource. Subsequent requests are validated by using the cookie to look up the ID token in NGINX Plus’s key-value store.
This capability enables integration with most major identity providers, including CA Single Sign‑On (formerly SiteMinder), ForgeRock OpenAM, Keycloak, Okta, OneLogin, and Ping Identity. OIDC as a standard is extremely relevant in authentication—the aforementioned identity providers are only a subset of the integrations that are possible.