Chapter 5. Programmability and Automation

Introduction

Programmability refers to the ability to interact with something through programming. The API for NGINX Plus provides just that: the ability to interact with the configuration and behavior of NGINX Plus through an HTTP interface. This API provides the ability to reconfigure NGINX Plus by adding or removing upstream servers through HTTP requests. The key-value store feature in NGINX Plus enables another level of dynamic configuration: you can utilize HTTP calls to inject information that NGINX Plus can use to route or control traffic dynamically. This chapter will touch on the NGINX Plus API and the key-value store module exposed by that same API.

Configuration management tools automate the installation and configuration of servers, which is an invaluable utility in the age of the cloud. Engineers of large-scale web applications no longer need to configure servers by hand; instead, they can use one of the many configuration management tools available. With these tools, engineers can write configurations and code one time to produce many servers with the same configuration in a repeatable, testable, and modular fashion. This chapter covers a few of the most popular configuration management tools available and how to use them to install NGINX and template a base configuration. These examples are extremely basic but demonstrate how to get an NGINX server started with each platform.

NGINX Plus API

Problem

You have a dynamic environment and need to reconfigure NGINX Plus on the fly.

Solution

Configure the NGINX Plus API to enable adding and removing servers through API calls:

upstream backend {
    zone http_backend 64k;
}
server {
    # ...
    location /api {
        api write=on;
        # Directives limiting access to the API
        # See chapter 7
    }

    location = /dashboard.html {
        root   /usr/share/nginx/html;
    }
}

This NGINX Plus configuration creates an upstream server with a shared memory zone, enables the API in the /api location block, and provides a location for the NGINX Plus dashboard.

You can utilize the API to add servers when they come online:

$ curl -X POST -d '{"server":"172.17.0.3"}' \
  'http://nginx.local/api/3/http/upstreams/backend/servers/'
{
  "id":0,
  "server":"172.17.0.3:80",
  "weight":1,
  "max_conns":0,
  "max_fails":1,
  "fail_timeout":"10s",
  "slow_start":"0s",
  "route":"",
  "backup":false,
  "down":false
}

The curl call in this example makes a request to NGINX Plus to add a new server to the backend upstream configuration. The HTTP method is POST, and a JSON object is passed as the body. The NGINX Plus API is RESTful; therefore, parameters such as the API version and the upstream name are part of the request URI. The format of the URI is as follows:

/api/{version}/http/upstreams/{httpUpstreamName}/servers/
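Every server object in the API follows this URI scheme, so the same pattern covers updates and deletions as well as additions. The following sketch composes those endpoints in shell; the host nginx.local matches the examples in this recipe, and the helper names are illustrative assumptions:

```shell
# Base URL of the NGINX Plus API; adjust the host and version for your deployment.
API_BASE="http://nginx.local/api/3"

# Compose the endpoint for the servers collection of an upstream (POST, GET).
upstream_servers_uri() {
    echo "${API_BASE}/http/upstreams/$1/servers/"
}

# Compose the endpoint for a single server by ID (PATCH, DELETE).
upstream_server_uri() {
    echo "${API_BASE}/http/upstreams/$1/servers/$2"
}

# Example: change the weight of server 0 in the backend upstream.
# curl -X PATCH -d '{"weight":2}' "$(upstream_server_uri backend 0)"
```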

You can utilize the NGINX Plus API to list the servers in the upstream pool:

$ curl 'http://nginx.local/api/3/http/upstreams/backend/servers/'
[
  {
    "id":0,
    "server":"172.17.0.3:80",
    "weight":1,
    "max_conns":0,
    "max_fails":1,
    "fail_timeout":"10s",
    "slow_start":"0s",
    "route":"",
    "backup":false,
    "down":false
  }
]

The curl call in this example makes a request to NGINX Plus to list all of the servers in the upstream pool named backend. Currently, we have only the one server that we added in the previous curl call to the API. The request will return an upstream server object that contains all of the configurable options for a server.

Use the NGINX Plus API to drain connections from an upstream server, preparing it for a graceful removal from the upstream pool. You can find details about connection draining in “Connection Draining”:

$ curl -X PATCH -d '{"drain":true}' \
  'http://nginx.local/api/3/http/upstreams/backend/servers/0'
{
  "id":0,
  "server":"172.17.0.3:80",
  "weight":1,
  "max_conns":0,
  "max_fails":1,
  "fail_timeout":"10s",
  "slow_start":"0s",
  "route":"",
  "backup":false,
  "down":false,
  "drain":true
}

In this curl command, we specify that the request method is PATCH, we pass a JSON body instructing NGINX Plus to drain connections for the server, and we specify the server ID by appending it to the URI. We found the ID of the server by listing the servers in the upstream pool with the previous curl command.

NGINX Plus will begin to drain the connections. This process can take as long as the application's longest-running sessions. To check how many active connections are being served by the server you’ve begun to drain, use the following call and look for the active attribute of the server being drained:

$ curl 'http://nginx.local/api/3/http/upstreams/backend'
{
   "zone" : "http_backend",
   "keepalive" : 0,
   "peers" : [
      {
         "backup" : false,
         "id" : 0,
         "unavail" : 0,
         "name" : "172.17.0.3",
         "requests" : 0,
         "received" : 0,
         "state" : "draining",
         "server" : "172.17.0.3:80",
         "active" : 0,
         "weight" : 1,
         "fails" : 0,
         "sent" : 0,
         "responses" : {
            "4xx" : 0,
            "total" : 0,
            "3xx" : 0,
            "5xx" : 0,
            "2xx" : 0,
            "1xx" : 0
         },
         "health_checks" : {
            "checks" : 0,
            "unhealthy" : 0,
            "fails" : 0
         },
         "downtime" : 0
      }
   ],
   "zombies" : 0
}
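To automate the removal, a script can poll this endpoint until the drained peer reports zero active connections. The sketch below pulls the active count out of the status JSON with sed so the example has no dependencies; in practice a JSON parser such as jq is a better choice, and the host name and wait_for_drain helper are assumptions:

```shell
# Extract the first "active" field from the upstream status JSON.
# A real deployment should use jq; sed keeps this sketch dependency-free.
active_connections() {
    printf '%s' "$1" | sed -n 's/.*"active"[[:space:]]*:[[:space:]]*\([0-9]*\).*/\1/p' | head -n 1
}

# Poll until the server being drained has no active connections, then delete it.
# wait_for_drain() {
#     while true; do
#         status="$(curl -s 'http://nginx.local/api/3/http/upstreams/backend')"
#         [ "$(active_connections "$status")" = "0" ] && break
#         sleep 5
#     done
#     curl -X DELETE 'http://nginx.local/api/3/http/upstreams/backend/servers/0'
# }
```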

After all connections have drained, utilize the NGINX Plus API to remove the server from the upstream pool entirely:

$ curl -X DELETE \
  'http://nginx.local/api/3/http/upstreams/backend/servers/0'
[]

The curl command makes a DELETE method request to the same URI used to update the server's state. The DELETE method instructs NGINX Plus to remove the server. This API call returns all of the servers, with their IDs, that remain in the pool. Because we started with an empty pool, added only one server through the API, drained it, and then removed it, we now have an empty pool again.

Discussion

The NGINX Plus exclusive API enables dynamic application servers to add and remove themselves from the NGINX Plus configuration on the fly. As servers come online, they can register themselves to the pool, and NGINX Plus will start sending them load. When a server needs to be removed, it can request that NGINX Plus drain its connections, then remove itself from the upstream pool before it's shut down. This enables the infrastructure, through some automation, to scale in and out without human intervention.
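The registration step described above could be part of an instance's boot script. The sketch below builds the JSON body a server would POST for itself; the nginx.local host matches the earlier examples, while registration_body and the commented register_self hook are hypothetical names, not part of the NGINX Plus API:

```shell
# Build the JSON body that registers a host:port pair with an upstream.
registration_body() {
    printf '{"server":"%s:%s"}' "$1" "$2"
}

# A boot-time hook might discover its own address and register itself:
# register_self() {
#     my_ip="$(hostname -i | awk '{print $1}')"
#     curl -X POST -d "$(registration_body "$my_ip" 80)" \
#       'http://nginx.local/api/3/http/upstreams/backend/servers/'
# }
```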

Also See

NGINX Plus API Swagger Documentation

Key-Value Store

Problem

You need NGINX Plus to make dynamic traffic management decisions based on input from applications.

Solution

Set up the cluster-aware key-value store and API, and then add keys and values:

keyval_zone zone=blacklist:1M;
keyval $remote_addr $num_failures zone=blacklist;

server {
    # ...
    location / {
        if ($num_failures) {
            return 403 'Forbidden';
        }
        return 200 'OK';
    }
}
server {
    # ...
    # Directives limiting access to the API
    # See chapter 6
    location /api {
        api write=on; 
    }
}

This NGINX Plus configuration uses the keyval_zone directive to build a key-value store shared memory zone named blacklist and sets a memory limit of 1 MB. The keyval directive then maps the value of the key matching the first parameter, $remote_addr, to a new variable named $num_failures from the zone. This new variable is then used to determine whether NGINX Plus should serve the request or return a 403 Forbidden code.

After starting the NGINX Plus server with this configuration, you can curl the local machine and expect to receive a 200 OK response:

$ curl 'http://127.0.0.1/'
OK

Now add the local machine’s IP address to the key-value store with a value of 1:

$ curl -X POST -d '{"127.0.0.1":"1"}' \
  'http://127.0.0.1/api/3/http/keyvals/blacklist'

This curl command submits an HTTP POST request with a JSON object containing a key-value object to be submitted to the blacklist shared memory zone. The key-value store API URI is formatted as follows:

/api/{version}/http/keyvals/{httpKeyvalZoneName}

The local machine’s IP address is now added to the key-value zone named blacklist with a value of 1. In the next request, NGINX Plus looks up the $remote_addr in the key-value zone, finds the entry, and maps the value to the variable $num_failures. This variable is then evaluated in the if statement. When the variable has a value, the if evaluates to True and NGINX Plus returns the 403 Forbidden return code:

$ curl 'http://127.0.0.1/'
Forbidden

You can update or delete the key by making a PATCH method request:

$ curl -X PATCH -d '{"127.0.0.1":null}' \
  'http://127.0.0.1/api/3/http/keyvals/blacklist'

NGINX Plus deletes the key if the value is null, and requests will again return 200 OK.
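An application can drive this blacklist from code. The sketch below builds the JSON bodies for setting a key and for deleting one via the null-value convention; the host and zone name follow the example above, and the commented ban_ip/unban_ip helpers are hypothetical names:

```shell
# Build the body that sets a failure count for an address in the blacklist zone.
keyval_set_body() {
    printf '{"%s":"%s"}' "$1" "$2"
}

# Build the body that deletes a key: NGINX Plus removes keys whose value is null.
keyval_delete_body() {
    printf '{"%s":null}' "$1"
}

# Hypothetical helpers an application might call:
# ban_ip()   { curl -X POST  -d "$(keyval_set_body "$1" 1)" \
#                'http://127.0.0.1/api/3/http/keyvals/blacklist'; }
# unban_ip() { curl -X PATCH -d "$(keyval_delete_body "$1")" \
#                'http://127.0.0.1/api/3/http/keyvals/blacklist'; }
```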

Discussion

The key-value store, an NGINX Plus exclusive feature, enables applications to inject information into NGINX Plus. In the example provided, the $remote_addr variable is used to create a dynamic blacklist. You can populate the key-value store with any key that NGINX Plus might have as a variable—a session cookie, for example—and provide NGINX Plus an external value. In NGINX Plus R16, the key-value store became cluster-aware, meaning that you have to provide your key-value update to only one NGINX Plus server, and all of them will receive the information.

Installing with Puppet

Problem

You need to install and configure NGINX with Puppet to manage NGINX configurations as code and conform with the rest of your Puppet configurations.

Solution

Create a module that installs NGINX, manages the files you need, and ensures that NGINX is running:

class nginx {
    package { 'nginx':
        ensure => 'installed',
    }
    service { 'nginx':
        ensure     => 'running',
        enable     => true,
        hasrestart => true,
        restart    => '/etc/init.d/nginx reload',
    }
    file { 'nginx.conf':
        path    => '/etc/nginx/nginx.conf',
        require => Package['nginx'],
        notify  => Service['nginx'],
        content => template('nginx/templates/nginx.conf.erb'),
        owner   => 'root',
        group   => 'root',
        mode    => '0644',
    }
}

This module uses the package resource to ensure the NGINX package is installed, and the service resource ensures NGINX is running and enabled at boot time. The hasrestart attribute informs Puppet that the service has a restart command, and the restart attribute overrides it with an NGINX reload. The file resource will manage and template the nginx.conf file with the Embedded Ruby (ERB) templating language. The templating of the file happens after the NGINX package is installed because of the require metaparameter, and the file resource notifies the NGINX service to reload because of the notify metaparameter. The templated configuration file is not included; it can be as simple as a default NGINX configuration file or very complex, using ERB or EPP templating language loops and variable substitution.
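As a sketch of what such a template might contain, the following hypothetical nginx/templates/nginx.conf.erb substitutes a single class parameter; the @worker_processes variable is an assumption for illustration, not part of the module above:

```erb
user  nginx;
# worker_processes is substituted from a hypothetical class parameter.
worker_processes <%= @worker_processes %>;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/conf.d/*.conf;
}
```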

Discussion

Puppet is a configuration management tool based in the Ruby programming language. Modules are built in a domain-specific language and called via a manifest file that defines the configuration for a given server. Puppet can be run in a master-slave or masterless configuration. With Puppet, the manifest is compiled on the master into a catalog that is then sent to the slave. This is important because it ensures that the slave is delivered only the configuration meant for it and no extra configurations meant for other servers. There are a lot of extremely advanced public modules available for Puppet, and starting from these modules will help you get a jump-start on your configuration. A public NGINX module from voxpupuli on GitHub will template out NGINX configurations for you.

Installing with Chef

Problem

You need to install and configure NGINX with Chef to manage NGINX configurations as code and conform with the rest of your Chef configurations.

Solution

Create a cookbook with a recipe to install NGINX, manage configuration files through templating, and ensure NGINX reloads after the configuration is put in place. The following is an example recipe:

package 'nginx' do
  action :install
end

service 'nginx' do
  supports :status => true, :restart => true, :reload => true
  action   [ :start, :enable ]
end

template 'nginx.conf' do
  path   "/etc/nginx.conf"
  source "nginx.conf.erb"
  owner  'root'
  group  'root'
  mode   '0644'
  notifies :reload, 'service[nginx]', :delayed
end

The package block installs NGINX. The service block ensures that NGINX is started and enabled at boot, then declares to the rest of the Chef run which actions the nginx service supports. The template block renders an ERB file and places the result at /etc/nginx.conf with an owner and group of root. The template block also sets the mode to 644 and notifies the nginx service to reload, but waits until the end of the Chef run, as declared by the :delayed statement. The templated configuration file is not included; it can be as simple as a default NGINX configuration file or very complex, with ERB templating language loops and variable substitution.

Discussion

Chef is a configuration management tool based in Ruby. Chef can be run in a master-slave configuration or in a solo configuration, now known as Chef Zero. Chef has a very large community, and its many public cookbooks are hosted in the Supermarket. Public cookbooks from the Supermarket can be installed and maintained via a command-line utility called Berkshelf. Chef is extremely capable, and what we have demonstrated is just a small sample. The public NGINX cookbook in the Supermarket is extremely flexible: it provides options to easily install NGINX from a package manager or from source, the ability to compile and install many different modules, and templates for the basic configurations.

Installing with Ansible

Problem

You need to install and configure NGINX with Ansible to manage NGINX configurations as code and conform with the rest of your Ansible configurations.

Solution

Create an Ansible playbook to install NGINX and manage the nginx.conf file. The following is an example task file for the playbook that installs NGINX, ensures it's running, and templates the configuration file:

- name: NGINX | Installing NGINX
  package:
    name: nginx
    state: present
 
- name: NGINX | Starting NGINX
  service:
    name: nginx
    state: started
    enabled: yes

- name: Copy nginx configuration in place.
  template:
    src: nginx.conf.j2
    dest: "/etc/nginx/nginx.conf"
    owner: root
    group: root
    mode: 0644
  notify:
    - reload nginx

The first task installs NGINX through the package module. The second task ensures that NGINX is started and enabled at boot. The third task renders a Jinja2 template and places the result at /etc/nginx/nginx.conf with an owner and group of root. It also sets the mode to 644 and notifies the nginx service to reload. The templated configuration file is not included; it can be as simple as a default NGINX configuration file or very complex, with Jinja2 templating language loops and variable substitution.
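As a sketch of what nginx.conf.j2 might contain, the following fragment substitutes two playbook variables with defaults; the variable names are assumptions for illustration, not part of the tasks above:

```jinja
user  nginx;
{# nginx_worker_processes is a hypothetical playbook variable. #}
worker_processes {{ nginx_worker_processes | default('auto') }};

events {
    worker_connections {{ nginx_worker_connections | default(1024) }};
}
```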

Discussion

Ansible is a widely used and powerful configuration management tool based in Python. Task configuration is written in YAML, and you use the Jinja2 templating language for file templating. Ansible offers a master server product, Ansible Tower, on a subscription model, but it's commonly run from local machines or build servers directly against the clients in a masterless model. Ansible SSHes into its servers in bulk and runs the configurations. Much like with other configuration management tools, there's a large community of public roles, which Ansible hosts in Ansible Galaxy. You can find very sophisticated roles to utilize in your playbooks.

Installing with SaltStack

Problem

You need to install and configure NGINX with SaltStack to manage NGINX configurations as code and conform with the rest of your SaltStack configurations.

Solution

Install NGINX through the package management module and manage the configuration files you desire. The following is an example state file (sls) that will install the nginx package and ensure the service is running, enabled at boot, and reload if a change is made to the configuration file:

nginx:
  pkg:
    - installed
  service:
    - name: nginx
    - running
    - enable: True
    - reload: True
    - watch:
      - file: /etc/nginx/nginx.conf

/etc/nginx/nginx.conf:
  file:
    - managed
    - source: salt://path/to/nginx.conf
    - user: root
    - group: root
    - template: jinja
    - mode: 644
    - require:
      - pkg: nginx

This is a basic example of installing NGINX via a package management utility and managing the nginx.conf file. The NGINX package is installed, and the service is running and enabled at boot. With SaltStack you can declare a file managed by Salt, as seen in the example, and templated by one of many templating languages. The templated configuration file is not included; it can be as simple as a default NGINX configuration file or very complex, with Jinja2 templating language loops and variable substitution. This configuration also specifies that NGINX must be installed prior to managing the file because of the require statement. After the file is in place, the service reloads because of the watch directive, and it reloads rather than restarts because the reload option is set to True.

Discussion

SaltStack is a powerful configuration management tool that defines server states in YAML. Modules for SaltStack can be written in Python. Salt exposes the Jinja2 templating language for states as well as for files; for files, however, there are many other options, such as Mako, Python itself, and others. Salt works in a master-slave configuration as well as a masterless configuration. Slaves are called minions. The master-slave transport communication differs from others and sets SaltStack apart: with Salt you're able to choose ZeroMQ, TCP, or Reliable Asynchronous Event Transport (RAET) for transmissions to the Salt agent, or you can forgo the agent and have the master connect over SSH instead. Because the transport layer is asynchronous by default, SaltStack is built to be able to deliver its messages to a large number of minions with low load on the master server.

Automating Configurations with Consul Templating

Problem

You need to automate your NGINX configuration to respond to changes in your environment through use of Consul.

Solution

Use the consul-template daemon and a template file to template out the NGINX configuration file of your choice:

upstream backend { {{range service "app.backend"}}
    server {{.Address}};{{end}}
}

This example is a Consul Template file that templates an upstream configuration block. This template will loop through nodes in Consul identified as app.backend. For every node in Consul, the template will produce a server directive with that node’s IP address.

The consul-template daemon is run via the command line and can be used to reload NGINX every time the configuration file is templated with a change:

# consul-template -consul consul.example.internal \
  -template template:/etc/nginx/conf.d/upstream.conf:"nginx -s reload"

This command instructs the consul-template daemon to connect to a Consul cluster at consul.example.internal and to use a file named template in the current working directory to template the file and output the generated contents to /etc/nginx/conf.d/upstream.conf, then to reload NGINX every time the templated file changes. The -template flag takes a string of the template file, the output location, and the command to run after the templating process takes place; these three variables are separated by a colon. If the command being run has spaces, make sure to wrap it in double quotes. The -consul flag tells the daemon what Consul cluster to connect to.
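For example, if Consul knew about two healthy nodes registered under the app.backend service, the daemon might render /etc/nginx/conf.d/upstream.conf as follows; the addresses are illustrative:

```nginx
upstream backend { 
    server 10.0.0.10;
    server 10.0.0.11;
}
```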

Discussion

Consul is a powerful service discovery tool and configuration store. Consul stores information about nodes as well as key-value pairs in a directory-like structure and allows for RESTful API interaction. Consul also provides a DNS interface on each client, allowing for domain name lookups of nodes connected to the cluster. A separate project that utilizes Consul clusters is the consul-template daemon; this tool templates files in response to changes in Consul nodes, services, or key-value pairs, which makes Consul a very powerful choice for automating NGINX. With consul-template you can also instruct the daemon to run a command after a change to the template takes place, so you can reload the NGINX configuration and allow it to come alive along with your environment. With Consul you're able to set up health checks on each client to check the health of the intended service. With this failure detection, you're able to template your NGINX configuration accordingly, sending traffic only to healthy hosts.
