Chapter 5. Programmability and Automation

5.0 Introduction

Programmability refers to the ability to interact with something through programming. The API for NGINX Plus provides just that: the ability to interact with the configuration and behavior of NGINX Plus through an HTTP interface. This API provides the ability to reconfigure NGINX Plus by adding or removing upstream servers through HTTP requests. The key-value store feature in NGINX Plus enables another level of dynamic configuration—you can utilize HTTP calls to inject information that NGINX Plus can use to route or control traffic dynamically. This chapter will touch on the NGINX Plus API and the key-value store module exposed by that same API.

Configuration management tools automate the installation and configuration of servers, which is an invaluable utility in the age of the cloud. Engineers of large-scale web applications no longer need to configure servers by hand; instead, they can use one of the many configuration management tools available. With these tools, engineers only need to write configurations and code once to produce many servers with the same configuration in a repeatable, testable, and modular fashion. This chapter covers a few of the most popular configuration management tools available and how to use them to install NGINX and template a base configuration. These examples are extremely basic but demonstrate how to get an NGINX server started with each platform.

5.1 NGINX Plus API

Problem

You have a dynamic environment and need to reconfigure NGINX Plus on the fly.

Solution

Configure the NGINX Plus API to enable adding and removing servers through API calls:

upstream backend {
    zone http_backend 64k;
}
server {
    # ...
    location /api {
        api write=on;
        # Directives limiting access to the API
        # See chapter 7
    }

    location = /dashboard.html {
        root   /usr/share/nginx/html;
    }
}

This NGINX Plus configuration creates an upstream server with a shared memory zone, enables the API in the /api location block, and provides a location for the NGINX Plus dashboard.

You can utilize the API to add servers when they come online:

$ curl -X POST -d '{"server":"172.17.0.3"}' \
  'http://nginx.local/api/3/http/upstreams/backend/servers/'
{
  "id":0,
  "server":"172.17.0.3:80",
  "weight":1,
  "max_conns":0,
  "max_fails":1,
  "fail_timeout":"10s",
  "slow_start":"0s",
  "route":"",
  "backup":false,
  "down":false
}

The curl call in this example makes a request to NGINX Plus to add a new server to the backend upstream configuration. The HTTP method is POST, a JSON object is passed as the body, and a JSON response is returned. The JSON response shows the server object's configuration; note that a new id was generated and that the other settings were set to default values.

The NGINX Plus API is RESTful; therefore, parameters such as the upstream name and server ID are passed in the request URI.

The format of the URI is as follows:

/api/{version}/http/upstreams/{httpUpstreamName}/servers/
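Because the endpoint paths follow this fixed pattern, clients often compose them programmatically. A minimal sketch in Python (this helper is an illustration, not part of NGINX Plus; the function name is our own):

```python
def upstream_servers_uri(version, upstream, server_id=None):
    """Compose an NGINX Plus API path for an upstream's server list.

    With server_id, the path addresses a single server, as used by
    the PATCH and DELETE examples later in this recipe.
    """
    uri = "/api/{}/http/upstreams/{}/servers/".format(version, upstream)
    if server_id is not None:
        uri += str(server_id)
    return uri

# Collection endpoint, as used by the POST example above
print(upstream_servers_uri(3, "backend"))
# Single-server endpoint, as used by the PATCH and DELETE examples
print(upstream_servers_uri(3, "backend", 0))
```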

You can utilize the NGINX Plus API to list the servers in the upstream pool:

$ curl 'http://nginx.local/api/3/http/upstreams/backend/servers/'
[
  {
    "id":0,
    "server":"172.17.0.3:80",
    "weight":1,
    "max_conns":0,
    "max_fails":1,
    "fail_timeout":"10s",
    "slow_start":"0s",
    "route":"",
    "backup":false,
    "down":false
  }
]

The curl call in this example makes a request to NGINX Plus to list all of the servers in the upstream pool named backend. Currently, we have only the one server that we added in the previous curl call to the API. The request returns a list of upstream server objects containing all of the configurable options for each server.

Use the NGINX Plus API to drain connections from an upstream server, preparing it for a graceful removal from the upstream pool. You can find details about connection draining in Recipe 2.8:

$ curl -X PATCH -d '{"drain":true}' \
  'http://nginx.local/api/3/http/upstreams/backend/servers/0'
{
  "id":0,
  "server":"172.17.0.3:80",
  "weight":1,
  "max_conns":0,
  "max_fails":1,
  "fail_timeout":
  "10s","slow_start":
  "0s",
  "route":"",
  "backup":false,
  "down":false,
  "drain":true
}

In this curl command, we specify the PATCH method, pass a JSON body instructing NGINX Plus to drain connections from the server, and identify the server by appending its ID to the URI. We found the server's ID by listing the servers in the upstream pool with the previous curl command.

NGINX Plus will begin to drain the connections. This process can take as long as the application's longest-lived sessions. To check how many active connections are being served by the server you've begun to drain, use the following call and look for the active attribute of the draining server:

$ curl 'http://nginx.local/api/3/http/upstreams/backend'
{
   "zone" : "http_backend",
   "keepalive" : 0,
   "peers" : [
      {
         "backup" : false,
         "id" : 0,
         "unavail" : 0,
         "name" : "172.17.0.3",
         "requests" : 0,
         "received" : 0,
         "state" : "draining",
         "server" : "172.17.0.3:80",
         "active" : 0,
         "weight" : 1,
         "fails" : 0,
         "sent" : 0,
         "responses" : {
            "4xx" : 0,
            "total" : 0,
            "3xx" : 0,
            "5xx" : 0,
            "2xx" : 0,
            "1xx" : 0
         },
         "health_checks" : {
            "checks" : 0,
            "unhealthy" : 0,
            "fails" : 0
         },
         "downtime" : 0
      }
   ],
   "zombies" : 0
}

After all connections have drained, utilize the NGINX Plus API to remove the server from the upstream pool entirely:

$ curl -X DELETE \
  'http://nginx.local/api/3/http/upstreams/backend/servers/0'
[]

The curl command makes a DELETE method request to the same URI used to update the server's state. The DELETE method instructs NGINX to remove the server. This API call returns a list of the servers, with their IDs, that remain in the pool. Because we started with an empty pool, added only one server through the API, drained it, and then removed it, we now have an empty pool again.

Discussion

The NGINX Plus exclusive API enables dynamic application servers to add and remove themselves to and from the NGINX configuration on the fly. As servers come online, they can register themselves to the pool, and NGINX Plus will start sending them load. When a server needs to be removed, it can request that NGINX Plus drain its connections, and then remove itself from the upstream pool before it's shut down. This enables the infrastructure, through some automation, to scale in and out without human intervention.
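The scale-in flow described above, drain, wait for active connections to reach zero, then delete, can be scripted against the API. The sketch below is an illustration, not part of NGINX Plus; the http parameter is an injectable request function (so the logic can be exercised without a live server), and the api argument is an assumed base path such as /api/3:

```python
import time

def remove_server(http, api, upstream, server_id, poll_interval=1.0):
    """Drain an upstream server, wait for its connections to finish,
    then delete it from the pool via the NGINX Plus API.

    http(method, uri, body) is expected to return the decoded JSON
    response, mirroring the curl calls shown in this recipe.
    """
    base = "{}/http/upstreams/{}".format(api, upstream)
    # Ask NGINX Plus to stop sending new connections to the server
    http("PATCH", "{}/servers/{}".format(base, server_id), {"drain": True})
    # Poll the upstream status until the draining peer has no active connections
    while True:
        status = http("GET", base, None)
        peer = next(p for p in status["peers"] if p["id"] == server_id)
        if peer["active"] == 0:
            break
        time.sleep(poll_interval)
    # Remove the fully drained server from the pool
    return http("DELETE", "{}/servers/{}".format(base, server_id), None)
```

In production, the http callable would wrap an HTTP client pointed at your NGINX Plus host, and the script would run as part of the instance's shutdown hook.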

5.2 Using the Key-Value Store with NGINX Plus

Problem

You need NGINX Plus to make dynamic traffic management decisions based on input from applications.

Solution

This section will use the example of a dynamic blocklist as a traffic management decision.

Set up the cluster-aware key-value store and API, and then add keys and values:

keyval_zone zone=blocklist:1M;
keyval $remote_addr $blocked zone=blocklist;

server {
    # ...
    location / {
        if ($blocked) {
            return 403 'Forbidden';
        }
        return 200 'OK';
    }
}
server {
    # ...
    # Directives limiting access to the API
    # See chapter 6
    location /api {
        api write=on; 
    }
}

This NGINX Plus configuration uses the keyval_zone directive to build a key-value store shared memory zone named blocklist with a memory limit of 1 MB. The keyval directive then maps the value stored in the zone under the key matching the first parameter, $remote_addr, to a new variable named $blocked. This new variable is then used to determine whether NGINX Plus should serve the request or return a 403 Forbidden code.

After starting the NGINX Plus server with this configuration, you can curl the local machine and expect to receive a 200 OK response:

$ curl 'http://127.0.0.1/'
OK

Now add the local machine’s IP address to the key-value store with a value of 1:

$ curl -X POST -d '{"127.0.0.1":"1"}' \
  'http://127.0.0.1/api/3/http/keyvals/blocklist'

This curl command submits an HTTP POST request with a JSON object containing a key-value object to be submitted to the blocklist shared memory zone. The key-value store API URI is formatted as follows:

/api/{version}/http/keyvals/{httpKeyvalZoneName}

The local machine’s IP address is now added to the key-value zone named blocklist with a value of 1. In the next request, NGINX Plus looks up the $remote_addr in the key-value zone, finds the entry, and maps the value to the variable $blocked. This variable is then evaluated in the if statement. When the variable has a value, the if evaluates to True and NGINX Plus returns the 403 Forbidden return code:

$ curl 'http://127.0.0.1/'
Forbidden

You can update or delete the key by making a PATCH method request:

$ curl -X PATCH -d '{"127.0.0.1":null}' \
  'http://127.0.0.1/api/3/http/keyvals/blocklist'

NGINX Plus deletes the key if the value is null, and requests will again return 200 OK.

Discussion

The key-value store, an NGINX Plus exclusive feature, enables applications to inject information into NGINX Plus. In the example provided, the $remote_addr variable is used to create a dynamic blocklist. You can populate the key-value store with any key that NGINX Plus might have as a variable—a session cookie, for example—and provide NGINX Plus an external value. In NGINX Plus R16, the key-value store became cluster-aware, meaning that you have to provide your key-value update to only one NGINX Plus server, and all of them will receive the information.

In NGINX Plus R19, the key-value store gained a type parameter, which enables indexing for specific types of keys. By default, the type is string; ip and prefix are also options. The string type does not build an index, and all key lookups must be exact matches, whereas the prefix type allows partial key matches, provided the prefix of the key matches. The ip type enables the use of CIDR notation. In our example, if we had specified type=ip as a parameter to our zone, we could have provided a CIDR range to block, such as 192.168.0.0/16 to block one of the RFC 1918 private ranges, or 127.0.0.1/32 for localhost, which would have had the same effect as demonstrated in the example.
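As a sketch of that variation (assuming NGINX Plus R19 or later; the rest of the solution's configuration is unchanged), the zone declaration would become:

```nginx
# Index keys as IP addresses; CIDR-notation keys then match address ranges
keyval_zone zone=blocklist:1M type=ip;
keyval $remote_addr $blocked zone=blocklist;
```

A POST of '{"192.168.0.0/16":"1"}' to the keyvals endpoint would then block every address in that range.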

5.3 Extending NGINX with a Common Programming Language

Problem

You need NGINX to perform some custom extension using a common programming language.

Solution

Before writing a custom NGINX module from scratch in C, first evaluate whether one of the other programming-language modules will fit your use case. The C programming language is extremely powerful and performant. There are, however, many other languages available as modules that may enable the customization required. NGINX has introduced the NGINX JavaScript module (njs, originally named nginScript), which exposes the power of JavaScript to the NGINX configuration by simply enabling a module. Lua and Perl modules are also available.

To get started with njs, install the njs module and use the following njs script, hello_world.js, to return “Hello World” when called:

function hello(request) {
    request.return(200, "Hello world!");
}

Call the njs script using the following minimal NGINX configuration:

load_module modules/ngx_http_js_module.so;

events {}

http {
    js_include hello_world.js;

    server {
        listen 8000;

        location / {
            js_content hello;
        }
    }
}

The above NGINX configuration enables the njs module, includes the njs library we constructed, hello_world.js, and uses the hello function to return a response to the client. The hello function is invoked by the js_content directive. The request object provided to the njs function has many attributes that describe the request and methods that manipulate the response. The njs module is written and supported by NGINX and is updated with every NGINX release. For an up-to-date reference, view the njs documentation link in the See Also section.

With these language modules, you either import a file including code, or define a block of code directly within the configuration.

To use Lua, install the Lua module and use the following NGINX configuration, which defines a Lua script inline:

load_module modules/ngx_http_lua_module.so;

events {}

http {
    server {
        listen 8080;
        location / {
            default_type text/html;
            content_by_lua_block {
                ngx.say("hello, world")
            }
        }
    }
}

The Lua module provides its own NGINX API through an object defined by the module named ngx. Like the request object in njs, the ngx object has attributes and methods to describe the request and manipulate the response.

With the Perl module installed, the following example uses Perl to set an NGINX variable from the runtime environment:

load_module modules/ngx_http_perl_module.so;

events {}

http {
    perl_set $app_endpoint 'sub { return $ENV{"APP_DNS_ENDPOINT"}; }';
    server {
        listen 8080;
        location / {
            proxy_pass http://$app_endpoint;
        }
    }
}

The prior example demonstrates that these language modules expose more functionality than just returning a response. The perl_set directive sets an NGINX variable to data returned from a Perl script. This limited example simply returns a system environment variable, which is used as the endpoint to which to proxy requests.

Discussion

The capabilities enabled by the extensibility of NGINX are endless. NGINX is extensible with custom code through C modules, which can be compiled into NGINX when building from source or dynamically loaded within the configuration. Existing modules that expose the functionality and syntax of JavaScript (njs), Lua, and Perl are already available. In many cases, unless you are distributing custom NGINX functionality to others, these pre-existing modules can suffice. Many scripts built for these modules already exist in the open source community.

This solution demonstrated basic usage of the njs, Lua, and Perl scripting languages available in NGINX and NGINX Plus. Whether looking to respond, set a variable, make a subrequest, or define a complex rewrite, these NGINX modules provide the capability.

5.4 Installing with Puppet

Problem

You need to install and configure NGINX with Puppet to manage NGINX configurations as code and conform with the rest of your Puppet configurations.

Solution

Create a module that installs NGINX, manages the files you need, and ensures that NGINX is running:

class nginx {
    package { 'nginx': ensure => 'installed', }
    service { 'nginx':
        ensure     => 'running',
        enable     => true,
        hasrestart => true,
        restart    => '/etc/init.d/nginx reload',
    }
    file { 'nginx.conf':
        path    => '/etc/nginx/nginx.conf',
        require => Package['nginx'],
        notify  => Service['nginx'],
        content => template('nginx/templates/nginx.conf.erb'),
        owner   => 'root',
        group   => 'root',
        mode    => '0644',
    }
}

This module uses the package management utility to ensure the NGINX package is installed. It also ensures that NGINX is running and enabled at boot time. The hasrestart attribute informs Puppet that the service has a restart command, and the restart attribute overrides it with an NGINX reload. The file resource manages and templates the nginx.conf file with the Embedded Ruby (ERB) templating language. The templating of the file happens after the NGINX package is installed because of the require attribute, and the file resource notifies the NGINX service to reload because of the notify attribute. The templated configuration file is not included; however, it can be simple, installing a default NGINX configuration file, or very complex, with ERB or Embedded Puppet (EPP) templating-language loops and variable substitution.
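To illustrate what nginx/templates/nginx.conf.erb might contain, the fragment below is hypothetical, not part of the recipe; the @worker_processes and @upstream_servers variables would be set as Puppet class parameters:

```erb
# Hypothetical fragment of nginx.conf.erb
worker_processes <%= @worker_processes %>;
http {
    upstream backend {
        <% @upstream_servers.each do |server| -%>
        server <%= server %>;
        <% end -%>
    }
}
```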

Discussion

Puppet is a configuration management tool written in the Ruby programming language. Modules are built in a domain-specific language and called via a manifest file that defines the configuration for a given server. Puppet can be run in a client-server relationship or a standalone configuration. With Puppet, the manifest is compiled on the server and then sent to the agent. This is important because it ensures that the agent receives only the configuration meant for it and no extra configurations meant for other servers. There are a lot of extremely advanced public modules available for Puppet. Starting from these modules will help you get a jump start on your configuration. A public NGINX module from Vox Pupuli on GitHub will template out NGINX configurations for you.

5.5 Installing with Chef

Problem

You need to install and configure NGINX with Chef to manage NGINX configurations as code and conform with the rest of your Chef configurations.

Solution

Create a cookbook with a recipe that installs NGINX, defines configuration files through templating, and ensures NGINX reloads after the configuration is put in place. The following is an example recipe:

package 'nginx' do
  action :install
end

service 'nginx' do
  supports :status => true, :restart => true, :reload => true
  action   [ :start, :enable ]
end

template 'nginx.conf' do
  path   "/etc/nginx.conf"
  source "nginx.conf.erb"
  owner  'root'
  group  'root'
  mode   '0644'
  notifies :reload, 'service[nginx]', :delayed
end

The package block installs NGINX. The service block ensures that NGINX is started and enabled at boot, then declares to the rest of Chef what actions the nginx service supports. The template block renders an ERB file and places it at /etc/nginx.conf with an owner and group of root. The template block also sets the mode to 644 and notifies the nginx service to reload, waiting until the end of the Chef run, as declared by the :delayed statement. The templated configuration file is not included; however, it can be as simple as a default NGINX configuration file or very complex, with ERB templating-language loops and variable substitution.

Discussion

Chef is a configuration management tool based in Ruby. Chef can be run in a client-server relationship or in a solo configuration, now known as Chef Zero. Chef has a very large community with many public cookbooks, available through the Supermarket. Public cookbooks from the Supermarket can be installed and maintained via a command-line utility called Berkshelf. Chef is extremely capable, and what we have demonstrated is just a small sample. The public NGINX cookbook in the Supermarket is extremely flexible and provides the options to easily install NGINX from a package manager or from source, the ability to compile and install many different modules, and the ability to template out the basic configurations.

5.6 Installing with Ansible

Problem

You need to install and configure NGINX with Ansible to manage NGINX configurations as code and conform with the rest of your Ansible configurations.

Solution

Create an Ansible playbook to install NGINX and manage the nginx.conf file. The following is an example task file for the playbook that installs NGINX, ensures it's running, and templates the configuration file:

- name: NGINX | Installing NGINX
  package: name=nginx state=present
 
- name: NGINX | Starting NGINX
  service:
    name: nginx
    state: started
    enabled: yes

- name: Copy nginx configuration in place.
  template:
    src: nginx.conf.j2
    dest: "/etc/nginx/nginx.conf"
    owner: root
    group: root
    mode: 0644
  notify:
    - reload nginx

The first task installs NGINX through the package module. The second ensures that NGINX is started and enabled at boot. The third templates a Jinja2 file and places the result at /etc/nginx/nginx.conf with an owner and group of root; it also sets the mode to 644 and notifies a handler to reload the nginx service. The templated configuration file is not included; however, it can be as simple as a default NGINX configuration file or very complex, with Jinja2 templating-language loops and variable substitution.
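To illustrate the loops and variable substitution mentioned above, the fragment below is hypothetical, not part of the recipe; the upstream_servers variable would be defined in group_vars or the playbook, and nginx.conf.j2 might render an upstream block from it:

```jinja
# Hypothetical fragment of nginx.conf.j2
http {
    upstream backend {
    {% for server in upstream_servers %}
        server {{ server }};
    {% endfor %}
    }
}
```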

Discussion

Ansible is a widely used and powerful configuration management tool written in Python. Task configuration is in YAML, and the Jinja2 templating language is used for file templating. Ansible offers a management server, Ansible Tower, on a subscription model; however, Ansible is commonly run from local machines or build servers, directly against the clients, in a standalone model. Ansible SSHes into its servers in bulk and runs the configurations. Much like other configuration management tools, there's a large community of public roles, which Ansible calls the Ansible Galaxy. You can find very sophisticated roles to utilize in your playbooks.

5.7 Installing with SaltStack

Problem

You need to install and configure NGINX with SaltStack to manage NGINX configurations as code and conform with the rest of your SaltStack configurations.

Solution

Install NGINX through the package management module and manage the configuration files you desire. The following is an example state file (Salt State file [SLS]) that will install the nginx package and ensure the service is running, enabled at boot, and reload if a change is made to the configuration file:

nginx:
  pkg:
    - installed
  service:
    - name: nginx
    - running
    - enable: True
    - reload: True
    - watch:
      - file: /etc/nginx/nginx.conf

/etc/nginx/nginx.conf:
  file:
    - managed
    - source: salt://path/to/nginx.conf
    - user: root
    - group: root
    - template: jinja
    - mode: 644
    - require:
      - pkg: nginx

This is a basic example of installing NGINX via a package management utility and managing the nginx.conf file. The NGINX package is installed and the service is running and enabled at boot. With SaltStack, you can declare a file managed by Salt, as seen in the example, and templated by many different templating languages. The templated configuration file is not included. However, it can be as simple as a default NGINX configuration file, or very complex with the Jinja2 templating language loops and variable substitution. This configuration also specifies that NGINX must be installed prior to managing the file because of the require statement. After the file is in place, NGINX is reloaded because of the watch directive on the service, and reloads, as opposed to restarts, because the reload directive is set to True.

Discussion

SaltStack is a powerful configuration management tool that defines server states in YAML. Modules for SaltStack can be written in Python. Salt exposes the Jinja2 templating language for states as well as for files; however, for files, there are many other options, such as Mako, Python itself, and others. SaltStack uses master-minion terminology to represent the client-server relationship, though the minion can also run on its own. The master-minion transport communication, however, differs from the others and sets SaltStack apart. With Salt, you can choose ZeroMQ, TCP, or Reliable Asynchronous Event Transport (RAET) for transmissions to the Salt agent; or you can forgo the agent and have the master connect over SSH instead. Because the transport layer is asynchronous by default, SaltStack is built to deliver its messages to a large number of minions with low load on the master server.

5.8 Automating Configurations with Consul Templating

Problem

You need to automate your NGINX configuration to respond to changes in your environment through the use of Consul.

Solution

Use the consul-template daemon and a template file to template out the NGINX configuration file of your choice:

upstream backend { {{range service "app.backend"}}
    server {{.Address}};{{end}}
}

This example is a Consul Template file that templates an upstream configuration block. This template will loop through nodes in Consul identified as app.backend. For every node in Consul, the template will produce a server directive with that node’s IP address.
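To make the templating concrete, suppose two hypothetical nodes at 10.0.0.4 and 10.0.0.5 were registered in Consul under the app.backend service; the template above would then render:

```nginx
upstream backend {
    server 10.0.0.4;
    server 10.0.0.5;
}
```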

The consul-template daemon is run via the command line and can be used to reload NGINX every time the configuration file is templated with a change:

# consul-template -consul consul.example.internal -template \
  template:/etc/nginx/conf.d/upstream.conf:"nginx -s reload"

This command instructs the consul-template daemon to connect to a Consul cluster at consul.example.internal and to use a file named template in the current working directory to template the file and output the generated contents to /etc/nginx/conf.d/upstream.conf, then to reload NGINX every time the templated file changes. The -template flag takes a string of the template file, the output location, and the command to run after the templating process takes place. These three variables are separated by colons. If the command being run has spaces, make sure to wrap it in double quotes. The -consul flag tells the daemon what Consul cluster to connect to.

Discussion

Consul is a powerful service discovery tool and configuration store. Consul stores information about nodes as well as key-value pairs in a directory-like structure and allows for RESTful API interaction. Consul also provides a DNS interface on each client, allowing for domain name lookups of nodes connected to the cluster. A separate project that utilizes Consul clusters is the consul-template daemon; this tool templates files in response to changes in Consul nodes, services, or key-value pairs. This makes Consul a very powerful choice for automating NGINX. With consul-template, you can also instruct the daemon to run a command after the template is re-rendered; with this, the NGINX configuration can be reloaded, allowing it to come alive along with your environment. With Consul and consul-template, your NGINX configuration can be as dynamic as your environment: infrastructure, configuration, and application information is centrally stored, and consul-template can subscribe and retemplate as necessary in an event-based manner. With this technology, NGINX can dynamically reconfigure in reaction to the addition and removal of servers, services, and application versions.
