NGINX, Inc., a company that is now part of F5 Networks, shares its name with its leading product, NGINX. NGINX has two versions: an Open Source software solution, OSS, and a commercial solution, Plus. These two versions dominate the world of application delivery controllers, both on-premises and in the cloud. In Azure, many companies are trying to decide between the Azure native managed services discussed in the previous chapter and solutions they already use and trust from their on-premises environments. This chapter will explore the similarities and differences between NGINX OSS and NGINX Plus and how to deploy them using the Azure Portal, PowerShell, and Terraform. In the next chapter, we’ll cover comparisons between NGINX solutions and Azure managed solutions.
Both NGINX and NGINX Plus can fit into your web application landscape as a load balancer for TCP and UDP, but they also can fill the need for a more advanced HTTP(S) application delivery controller. NGINX and NGINX Plus operate at Layer 7 for all types of load balancing. You might employ NGINX or NGINX Plus as an entry point and HTTP(S) request router for your application, or as a load balancer for a service that uses a protocol that is not HTTP, such as database read replicas.
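To make these two roles concrete, here is a minimal configuration sketch; the upstream names and addresses are purely illustrative, not values from this book. The `http` block performs Layer 7 request routing, while the `stream` block load balances a raw TCP service such as database read replicas:

```nginx
# Hypothetical sketch; upstream names and addresses are illustrative only.
events {}

http {
    upstream app_servers {
        server 10.0.0.11;
        server 10.0.0.12;
    }

    server {
        listen 80;

        # Layer 7 request routing: send API traffic to the app pool
        location /api/ {
            proxy_pass http://app_servers;
        }
    }
}

stream {
    # TCP load balancing, e.g., for database read replicas
    upstream read_replicas {
        server 10.0.0.21:5432;
        server 10.0.0.22:5432;
    }

    server {
        listen 5432;
        proxy_pass read_replicas;
    }
}
```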
NGINX Open Source software, or NGINX OSS, is free open source software, whereas NGINX Plus is a commercial product that offers advanced features and enterprise-level support as licensed software by NGINX, Inc.
NGINX combines the functionality of a high-performance web server, a powerful load balancer, and a highly scalable caching layer to create the ideal end-to-end platform for your web applications. NGINX Plus is built on top of NGINX OSS. For the sake of clarity, if the product NGINX is ever referenced in this book without the explicit denotation of “Plus,” the feature set being described is available in both versions, as all of the OSS version capabilities are available in NGINX Plus.
For organizations currently using NGINX OSS, NGINX Plus provides your data plane with “off the shelf” advanced features such as intelligent session persistence, JSON Web Token (JWT) and OpenID Connect integration, advanced monitoring statistics, and clustering abilities. These features enable your data plane to integrate more deeply with your application layer, provide deeper insight into your application traffic flow, and enable state-aware high availability. NGINX Plus also gives you access to a knowledgeable support team that specializes in data plane technology and NGINX Plus implementation. We’ve personally met some of these support engineers at the NGINX conference, NGINX.conf, and our conversations were deep and introspective about the current state of application delivery on the modern web.
For organizations currently using hardware-based load balancers, NGINX Plus provides a full set of ADC features in a much more flexible way, through a software form factor with a cost-effective subscription. We’ve worked with a number of data plane solutions that were hardware-based but pivoted to virtual appliances when the cloud really took hold. A common theme for virtual network appliances is that their operating systems are based on BSD rather than Linux. Years ago, BSD’s networking stack had an advantage over Linux; that margin has since shrunk, and when running on virtualized infrastructure, it diminishes even further. Maintaining another set of tools to manage a separate kernel type is, in our opinion, not worth the effort. In a move to the cloud, you want to manage all VMs through the same methodology. If a specific set of VMs does not fit the mold of your management model, it requires an exception; that if condition may exist in your configuration code or your compliance documentation, neither of which is necessary given that capable software-based data plane controllers provide the same or greater functionality.
Table 3-1 shows the NGINX Plus feature sets compared to those of NGINX OSS. You can get more information on the differences between NGINX products at https://nginx.com.
Feature type | Feature | NGINX OSS | NGINX Plus |
---|---|---|---|
Load balancer | HTTP/TCP/UDP support | X | X |
 | Layer 7 request routing | X | X |
 | Active health checks | | X |
 | Sophisticated session persistence | | X |
 | DNS SRV support (service discovery) | | X |
Content cache | Static/dynamic content caching | X | X |
 | Cache-purging API | | X |
Web server/reverse proxy | Origin server for static content | X | X |
 | Reverse proxy protocols: TCP, UDP, HTTP, FastCGI, uwsgi, gRPC | X | X |
 | HTTP/2 gateway | X | X |
 | HTTP/2 server push | X | X |
Security controls | HTTP Basic Authentication | X | X |
 | HTTP authentication subrequests | X | X |
 | IP address-based access control | X | X |
 | Rate limiting | X | X |
 | Dual-stack RSA/ECC SSL/TLS offload | X | X |
 | ModSecurity 3.0 support | X | X |
 | TLS 1.3 support | X | X |
 | JWT authentication | | X |
 | OpenID Connect SSO | | X |
 | NGINX App Protect (WAF) | | X |
Monitoring | Syslog | X | X |
 | AppDynamics, Datadog, Dynatrace plug-ins | X | X |
 | Basic status metrics | X | X |
 | Advanced metrics with dashboard (90+ metrics) | | X |
High availability | Behind Azure Load Balancer | X | X |
 | Configuration synchronization | | X |
 | State sharing: sticky-learn session persistence, rate limiting, key-value stores | | X |
Programmability | NGINX JavaScript module | X | X |
 | Third-party Lua and Perl modules | X | X |
 | Custom C modules | X | X |
 | Seamless reconfiguration through process reload | X | X |
 | NGINX Plus API for dynamic configuration | | X |
 | Key-value store | | X |
 | Dynamic reconfiguration without process reloads | | X |
Both NGINX OSS and NGINX Plus are widely available to download and install from a variety of sources. This flexibility allows you to find and use the deployment option that best suits your needs. For instance, you can install via prebuilt Azure virtual machine images available in the Azure Marketplace, manually on a virtual machine, or through Azure Resource Manager with PowerShell. We’ll walk through each of these installation options next.
Azure Marketplace is a software repository for prebuilt and configured Azure resources from independent software vendors (ISVs). You will find open source and enterprise applications that have been certified and optimized to run on Azure.
NGINX, Inc., provides the latest release of NGINX Plus in Azure Marketplace as a virtual machine (VM) image. NGINX OSS is not available from NGINX, Inc., as an Azure Marketplace VM image, but there are several options available from other ISVs in Azure Marketplace.
Searching for “NGINX” in Azure Marketplace will produce several results, as shown in Figure 3-1.
You will see several results besides the official NGINX Plus VM images from NGINX, Inc., such as the following examples from other ISVs for NGINX OSS:
There are currently four options available from NGINX, Inc. You can choose from different tiers of NGINX Plus with or without NGINX App Protect, as shown in Figure 3-2. The tiers correspond to the support level and readiness for production usage.
The initial page presented is the Overview page, which summarizes the NGINX Plus software functionality and pricing. For more details, click the “Plans” link. There are a number of plans, which simply provide a way to select the base OS you’d like NGINX Plus to run on. Select a plan and press the Create button to be taken to the Azure virtual machine creation process. To avoid confusion: the plan determines only the base OS, not the cost of the NGINX Plus image; the cost associated with the NGINX Plus Marketplace image is the same regardless of which base OS you choose.
The Azure VM creation process using the Azure Portal follows seven standard steps, with explanations for each on the Azure Portal page: Basics, Disks, Networking, Management, Advanced (Settings), and Tags, followed by a final step that allows you to review and approve any associated costs before the VM is built. These seven steps are displayed in Figure 3-3.
When you select a size for your VM, a cost is associated with it; this includes the cost for the NGINX Plus software.
It is recommended that an Azure availability set of two or more VMs be used to provide high availability in the case of planned system maintenance by Azure or as a safeguard against one VM becoming unavailable. Zone redundancy, if available in the region, is also suggested, as it protects against Azure zone failure and maintenance outages.
You will need to manually create endpoints to support HTTPS (port 443) and HTTP (port 80) traffic in the Azure Portal to enable access to the NGINX Plus VM. For more information, see “How to set up endpoints on a Linux classic virtual machine in Azure” in the Azure documentation.
NGINX Plus will start automatically and load its default start page once the VM starts. You can use a web browser to navigate to the VM’s IP address or DNS name. You can also check the running status of NGINX Plus by logging in to the VM and running the following command:
$ /etc/init.d/nginx status
Azure virtual machine scale sets (VMSSs) let you create and manage a group of identical load-balanced VMs. VMSSs provide redundancy and improved performance by automatically scaling the number of VM instances out or in based on workload or a predefined schedule.
To scale NGINX Plus, create a public or internal Azure Load Balancer with a VMSS. You can deploy the NGINX Plus VM to the VMSS and then configure the Azure Load Balancer for the desired rules, ports, and protocols for allowed traffic to the backend pool.
The cost of running NGINX Plus is a combination of the selected software plan charges plus the Azure infrastructure costs for the VMs on which you will be running the software. There are no additional costs for VMSSs, but you do pay for the underlying compute resources. The actual Azure infrastructure price might vary if you have enterprise agreements or other discounts.
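To make the cost model concrete, the sketch below combines a hypothetical per-hour VM infrastructure rate with a hypothetical Marketplace software-plan rate; the figures are illustrative assumptions, not actual Azure or NGINX Plus prices:

```python
def monthly_cost(vm_hourly: float, plus_hourly: float, hours: float = 730) -> float:
    """Monthly cost = (infrastructure rate + software plan rate) * hours run."""
    return (vm_hourly + plus_hourly) * hours

# Illustrative rates only; check your subscription for real pricing.
estimate = monthly_cost(vm_hourly=0.0104, plus_hourly=0.35)
print(round(estimate, 2))
```

Because the software charge is metered alongside the VM, scaling out adds both components per instance for the hours each instance runs.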
Using NGINX Plus from the Azure Marketplace enables you to scale your NGINX Plus layer on demand without having to procure more licenses, as the software cost is built into the Marketplace with a pay-per-usage model. You may want to procure a couple of machine licenses for your base footprint to enter the support contract with NGINX, Inc., and then use the Marketplace for burst capacity.
In some instances, you may want to install NGINX manually on an Azure VM. Example use cases include a need for modules not included in a Marketplace image, for extra packages, for advanced configurations, for the latest version of NGINX, or for bootstrapping to be tightly controlled by configuration management.
The process for installing NGINX OSS or NGINX Plus on an Azure VM is no different than for installing them on any other hosting platform because NGINX is software that runs on top of any Linux distribution.
In Azure, your configuration should be repeatable through automation so that you can scale as necessary. You can either manually build a VM and take an image of it, so that you can use the image in an Azure Scale Set, or automate the installation through scripting or configuration management. You can also combine the two methods, so that automation builds VM images.
VM images will be ready to serve clients faster because the software is already installed. Installing at boot time provides flexibility, as the configuration can change without creating new images, but the VM takes longer to become ready because the software must be installed first. Consider a hybrid approach in which an image is built with the software installed and configuration management brings the configuration up to date at boot time.
When installing NGINX OSS, we always make sure to use the NGINX official package repository for the Linux distribution that we’re using. This ensures that we always have the latest version with the most up-to-date features and security fixes. You can learn how to install from the official repository by visiting the “Supported Distributions and Versions” page of the NGINX documentation.
Azure Resource Manager (ARM) templates are a native Azure automation process that uses declarative state JSON objects to build resources within Azure. This process is the default option for Azure Infrastructure as Code (IaC) and allows you to check your templates into source control.
There are currently no prebuilt ARM templates or PowerShell scripts available from NGINX, Inc. However, there is nothing preventing the creation of a Resource Manager template and PowerShell script based on your custom deployment requirements for Azure and using your previously created custom VM images.
The following example creates a VM from the Canonical Ubuntu 16.04 LTS Marketplace image and installs the NGINX OSS web server, using Azure Cloud Shell and the Azure PowerShell module.
Open Azure Cloud Shell, and perform the following steps in Azure PowerShell.
First, let’s use ssh-keygen
to create a Secure Shell (SSH) key pair. Accept all the defaults by pressing the Enter key:
ssh-keygen -t rsa -b 2048
# RSA private key will be saved as id_rsa
# RSA public key will be saved as id_rsa.pub
# Created in directory: '/home/azureuser/.ssh'
Before we can run any Azure PowerShell commands, we’ll need to be logged in. Use the following command to receive a link and an access code that you paste into a browser to verify your Azure identity:
Connect-AzAccount
Next, create an Azure resource group by using New-AzResourceGroup
:
New-AzResourceGroup `
    -Name "nginx-rg" `
    -Location "EastUS2"
Using the New-AzVirtualNetworkSubnetConfig
command, you can now create a subnet config object, which will be used when creating a new Azure Virtual Network using the New-AzVirtualNetwork
command. After those are created, New-AzPublicIpAddress
will create an IP address to use with the NGINX VM:
# Create a subnet configuration
$subnetConfig = New-AzVirtualNetworkSubnetConfig `
    -Name "nginx-Subnet" `
    -AddressPrefix 192.168.1.0/24

# Create a virtual network
$vnet = New-AzVirtualNetwork `
    -ResourceGroupName "nginx-rg" `
    -Location "EastUS2" `
    -Name "nginxVNET" `
    -AddressPrefix 192.168.0.0/16 `
    -Subnet $subnetConfig

# Create a public IP address and specify a DNS name
$pip = New-AzPublicIpAddress `
    -ResourceGroupName "nginx-rg" `
    -Location "EastUS2" `
    -AllocationMethod Static `
    -IdleTimeoutInMinutes 4 `
    -Name "nginxpublicdns$(Get-Random)"
Though doing so is optional, it is best practice to add an Azure network security group (NSG) (New-AzNetworkSecurityGroup
) along with traffic rules using New-AzNetworkSecurityRuleConfig
:
# Create an inbound NSG rule for port 22
$nsgRuleSSH = New-AzNetworkSecurityRuleConfig `
    -Name "nginxNSGRuleSSH" `
    -Protocol "Tcp" `
    -Direction "Inbound" `
    -Priority 1000 `
    -SourceAddressPrefix * `
    -SourcePortRange * `
    -DestinationAddressPrefix * `
    -DestinationPortRange 22 `
    -Access "Allow"

# Create an inbound NSG rule for port 80
$nsgRuleWeb = New-AzNetworkSecurityRuleConfig `
    -Name "nginxNSGRuleWWW" `
    -Protocol "Tcp" `
    -Direction "Inbound" `
    -Priority 1001 `
    -SourceAddressPrefix * `
    -SourcePortRange * `
    -DestinationAddressPrefix * `
    -DestinationPortRange 80 `
    -Access "Allow"

# Create a network security group (NSG)
$nsg = New-AzNetworkSecurityGroup `
    -ResourceGroupName "nginx-rg" `
    -Location "EastUS2" `
    -Name "nginxNSG" `
    -SecurityRules $nsgRuleSSH,$nsgRuleWeb

# Create a virtual network card and associate it
# with the public IP address and NSG
$nic = New-AzNetworkInterface `
    -Name "nginxNIC" `
    -ResourceGroupName "nginx-rg" `
    -Location "EastUS2" `
    -SubnetId $vnet.Subnets[0].Id `
    -PublicIpAddressId $pip.Id `
    -NetworkSecurityGroupId $nsg.Id
PowerShell allows you to quickly build a VM while specifying VM attributes such as memory, vCPUs, disks, and network cards based on the VM image options available on Azure. The following is the configuration of the VM suitable for our example:
# Define a credential object; make sure your password is unique and secure
$securePassword = ConvertTo-SecureString `
    'MySuperSecurePasswordWith#sAndSymbols*)23' -AsPlainText -Force
$cred = New-Object `
    System.Management.Automation.PSCredential("azureuser", $securePassword)

# Create a virtual machine configuration
$vmConfig = New-AzVMConfig `
    -VMName "nginxVM" `
    -VMSize "Standard_B1s" | `
Set-AzVMOperatingSystem `
    -Linux `
    -ComputerName "nginxVM" `
    -Credential $cred `
    -DisablePasswordAuthentication | `
Set-AzVMSourceImage `
    -PublisherName "Canonical" `
    -Offer "UbuntuServer" `
    -Skus "16.04-LTS" `
    -Version "latest" | `
Add-AzVMNetworkInterface `
    -Id $nic.Id

# Configure the SSH key
$sshPublicKey = cat ~/.ssh/id_rsa.pub
Add-AzVMSshPublicKey `
    -VM $vmConfig `
    -KeyData $sshPublicKey `
    -Path "/home/azureuser/.ssh/authorized_keys"
Next, combine the previous configuration definitions to create a new VM by using New-AzVM
:
New-AzVM `
    -ResourceGroupName "nginx-rg" `
    -Location "eastus2" `
    -VM $vmConfig
Using SSH, connect to the VM after it is created by using the public IP displayed by the following code:
Get-AzPublicIpAddress `
    -ResourceGroupName "nginx-rg" | `
    Select "IpAddress"
In the Azure Cloud Shell or your local bash shell, paste the SSH connection command into the shell to create an SSH session, using the login username azureuser
when prompted. If an optional passphrase is used, please enter it when prompted:
ssh azureuser@<vm-public-ip>
From your SSH session, update your package sources and then install the latest NGINX OSS package by running the following as root or with sudo
:
echo "deb http://nginx.org/packages/mainline/ubuntu/ xenial nginx" \
    > /etc/apt/sources.list.d/nginx.list
echo "deb-src http://nginx.org/packages/mainline/ubuntu/ xenial nginx" \
    >> /etc/apt/sources.list.d/nginx.list
wget http://nginx.org/keys/nginx_signing.key
apt-key add nginx_signing.key
apt-get update
apt-get -y install nginx
# Test NGINX is installed
nginx -v
# Start NGINX - it's enabled to start at boot by default
/etc/init.d/nginx start
You will need to use a web browser to test the loading of the default NGINX OSS start page, which is the public IP address of the VM you’ve created. To exit the SSH session, type exit
when done.
Once you have completed this process, you can use the Remove-AzResourceGroup
cmdlet to remove the resource group, VM, virtual network, and all other Azure resources to avoid incurring ongoing charges:
Remove-AzResourceGroup `
    -Name "nginx-rg"
In this section, we will deploy a Linux virtual machine with NGINX OSS using Terraform. We will show two examples: one for Debian and Ubuntu, and another for CentOS and Red Hat. Items common to both of them are the provider and the network that will provide the starting point for installing NGINX OSS.
You can learn about Terraform by reading the Introduction to Terraform document, or by going through the “Get Started – Azure” guide. If you are using Azure Cloud Shell, the “Configure Terraform using Azure Cloud Shell” document may be useful.
The first step is to create the provider file. The provider is used to interact with the Azure APIs.
We create a file called provider-main.tf, which is used to create the interaction with Terraform and the Azure providers:
# Define Terraform provider
terraform {
  required_version = ">= 0.12"
}

# Configure the Azure provider
provider "azurerm" {
  environment = "public"
  version     = ">= 2.15.0"
  features {}

  # It is important that the following values of these variables
  # NEVER be written to source control, and therefore should not be
  # hard-coded with defaults and should always come from the local
  # environment
  subscription_id = var.azure-subscription-id
  client_id       = var.azure-client-id
  client_secret   = var.azure-client-secret
  tenant_id       = var.azure-tenant-id
}
Next, we create a file called provider-variables.tf, which is used to manage the authentication variables of the Azure provider:
variable "azure-subscription-id" {
  type        = string
  description = "Azure Subscription ID"
}
variable "azure-client-id" {
  type        = string
  description = "Azure Client ID"
}
variable "azure-client-secret" {
  type        = string
  description = "Azure Client Secret"
}
variable "azure-tenant-id" {
  type        = string
  description = "Azure Tenant ID"
}
The next step is to create the resource group that will host all of our Azure resources. A VNET, and a subnet within the VNET, will also be created. The subnet will host our virtual machine.
We create a file called network-main.tf to describe these resources:
# Create a resource group
resource "azurerm_resource_group" "network-rg" {
  name     = "nginx-network-rg"
  location = var.location
}

# Create the network VNET
resource "azurerm_virtual_network" "network-vnet" {
  name                = "nginx-network-vnet"
  address_space       = [var.network-vnet-cidr]
  resource_group_name = azurerm_resource_group.network-rg.name
  location            = azurerm_resource_group.network-rg.location
}

# Create a subnet for the VM
resource "azurerm_subnet" "vm-subnet" {
  name                 = "nginx-vm-subnet"
  address_prefixes     = [var.vm-subnet-cidr]
  virtual_network_name = azurerm_virtual_network.network-vnet.name
  resource_group_name  = azurerm_resource_group.network-rg.name
}
Then we create a file called network-variables.tf to manage network variables:
variable "location" {
  type        = string
  description = "Azure Region"
  default     = "eastus"
}
variable "network-vnet-cidr" {
  type        = string
  description = "The CIDR of the network VNET"
}
variable "vm-subnet-cidr" {
  type        = string
  description = "The CIDR for the vm subnet"
}
In this section, we will create an Azure network security group (NSG) to protect our virtual machine. The security group will allow inbound traffic on ports 22 (SSH) and 80 (HTTP); you can add an equivalent rule for port 443 (HTTPS) once you configure TLS.
For brevity, the following code will allow SSH connections from anywhere on the internet. You should determine your own needs for SSH access and restrict access accordingly.
Create a file called security-main.tf and add the following code:
# Create Network Security Group
resource "azurerm_network_security_group" "nginx-vm-nsg" {
  depends_on          = [azurerm_resource_group.network-rg]
  name                = "nginxvm-nsg"
  location            = azurerm_resource_group.network-rg.location
  resource_group_name = azurerm_resource_group.network-rg.name

  # Allows inbound SSH from the entire internet!
  security_rule {
    name                       = "Allow-SSH"
    description                = "Allow SSH"
    priority                   = 100
    direction                  = "Inbound"
    access                     = "Allow"
    protocol                   = "Tcp"
    source_port_range          = "*"
    destination_port_range     = "22"
    source_address_prefix      = "Internet"
    destination_address_prefix = "*"
  }

  security_rule {
    name                       = "Allow-HTTP"
    description                = "Allow HTTP"
    priority                   = 110
    direction                  = "Inbound"
    access                     = "Allow"
    protocol                   = "Tcp"
    source_port_range          = "*"
    destination_port_range     = "80"
    source_address_prefix      = "Internet"
    destination_address_prefix = "*"
  }
}

# Associate the web NSG with the subnet
resource "azurerm_subnet_network_security_group_association" "ngx-nsg-assoc" {
  depends_on                = [azurerm_resource_group.network-rg]
  subnet_id                 = azurerm_subnet.vm-subnet.id
  network_security_group_id = azurerm_network_security_group.nginx-vm-nsg.id
}
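To restrict SSH access as suggested, the Allow-SSH rule's source_address_prefix can be narrowed to a known management network. The CIDR below is a documentation-range placeholder, not a value from this book; substitute your own range:

```hcl
  # Restrict SSH to a known management network (placeholder CIDR;
  # replace 203.0.113.0/24 with your own address range)
  security_rule {
    name                       = "Allow-SSH"
    description                = "Allow SSH from management network only"
    priority                   = 100
    direction                  = "Inbound"
    access                     = "Allow"
    protocol                   = "Tcp"
    source_port_range          = "*"
    destination_port_range     = "22"
    source_address_prefix      = "203.0.113.0/24"
    destination_address_prefix = "*"
  }
```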
In this section, we are going to learn how to deploy a virtual machine with NGINX OSS running Ubuntu Linux. This code will work without major changes on Debian; we would just need to update the source_image_reference
section (instructions are at the end of this chapter).
If you are using CentOS or Red Hat, please jump ahead to “Deploying NGINX OSS in CentOS and Red Hat Linux”.
In this step, we will create a Bash script called install-nginx.sh to install NGINX OSS in the virtual machine:
#!/bin/bash
echo "deb http://nginx.org/packages/mainline/ubuntu/ xenial nginx" \
    > /etc/apt/sources.list.d/nginx.list
echo "deb-src http://nginx.org/packages/mainline/ubuntu/ xenial nginx" \
    >> /etc/apt/sources.list.d/nginx.list
wget http://nginx.org/keys/nginx_signing.key
apt-key add nginx_signing.key
apt-get update
apt-get -y install nginx
# Test NGINX is installed
nginx -v
# Start NGINX - it's enabled to start at boot by default
/etc/init.d/nginx start
Here we will create a file called vm-nginx-main.tf. This file will load the bootstrapping script, get a public IP address, and create a virtual machine:
# Bootstrapping Template File
data "template_file" "nginx-vm-cloud-init" {
  template = file("install-nginx.sh")
}

# Generate random password
resource "random_password" "nginx-vm-password" {
  length           = 16
  min_upper        = 2
  min_lower        = 2
  min_special      = 2
  number           = true
  special          = true
  override_special = "!@#$%&"
}

# Get a Static Public IP
resource "azurerm_public_ip" "nginx-vm-ip" {
  depends_on          = [azurerm_resource_group.network-rg]
  name                = "nginxvm-ip"
  location            = azurerm_resource_group.network-rg.location
  resource_group_name = azurerm_resource_group.network-rg.name
  allocation_method   = "Static"
}

# Create Network Card for the VM
resource "azurerm_network_interface" "nginx-nic" {
  depends_on          = [azurerm_resource_group.network-rg]
  name                = "nginxvm-nic"
  location            = azurerm_resource_group.network-rg.location
  resource_group_name = azurerm_resource_group.network-rg.name

  ip_configuration {
    name                          = "internal"
    subnet_id                     = azurerm_subnet.vm-subnet.id
    private_ip_address_allocation = "Dynamic"
    public_ip_address_id          = azurerm_public_ip.nginx-vm-ip.id
  }
}

# Create NGINX VM
resource "azurerm_linux_virtual_machine" "nginx-vm" {
  depends_on            = [azurerm_network_interface.nginx-nic]
  name                  = "nginxvm"
  location              = azurerm_resource_group.network-rg.location
  resource_group_name   = azurerm_resource_group.network-rg.name
  network_interface_ids = [azurerm_network_interface.nginx-nic.id]
  size                  = var.nginx_vm_size

  source_image_reference {
    publisher = "Canonical"
    offer     = "UbuntuServer"
    sku       = "18.04-LTS"
    version   = "latest"
  }

  os_disk {
    name                 = "nginxvm-osdisk"
    caching              = "ReadWrite"
    storage_account_type = "Standard_LRS"
  }

  computer_name  = "nginxvm"
  admin_username = var.nginx_admin_username
  admin_password = random_password.nginx-vm-password.result
  custom_data    = base64encode(data.template_file.nginx-vm-cloud-init.rendered)

  disable_password_authentication = false
}
Then we create a file called vm-nginx-variables.tf to manage variables for virtual machines:
variable "nginx_vm_size" {
  type        = string
  description = "Size (SKU) of the virtual machine to create"
}
variable "nginx_admin_username" {
  description = "Username for Virtual Machine administrator account"
  type        = string
  default     = ""
}
variable "nginx_admin_password" {
  description = "Password for Virtual Machine administrator account"
  type        = string
  default     = ""
}
In this section, we will deploy a virtual machine with NGINX OSS running CentOS Linux. If you prefer Ubuntu, you can skip these next two sections, as they overwrite the files created previously. This code will work on a Red Hat system without major changes; we would just need to update the NGINX OSS package repository, replacing centos
with rhel
, and the source_image_reference
section in the vm-nginx-main.tf file.
In this step, we overwrite the Bash script used in the previous Ubuntu section to install NGINX OSS through yum during the bootstrapping of the virtual machine. Replace the install-nginx.sh file with the following:
#!/bin/bash
# Note: $basearch is escaped so that yum, not the shell, expands it
echo "[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/mainline/centos/7/\$basearch/
gpgcheck=0
enabled=1" > /etc/yum.repos.d/nginx.repo
yum -y install nginx
systemctl enable nginx
systemctl start nginx
firewall-cmd --permanent --zone=public --add-port=80/tcp
firewall-cmd --reload
Here we replace the file called vm-nginx-main.tf. This file will load the bootstrapping script, get a public IP address, and create a CentOS-based virtual machine that runs the bootstrap Bash script at boot:
# Bootstrapping Template File
data "template_file" "nginx-vm-cloud-init" {
  template = file("install-nginx.sh")
}

# Generate random password
resource "random_password" "nginx-vm-password" {
  length           = 16
  min_upper        = 2
  min_lower        = 2
  min_special      = 2
  number           = true
  special          = true
  override_special = "!@#$%&"
}

# Get a Static Public IP
resource "azurerm_public_ip" "nginx-vm-ip" {
  depends_on          = [azurerm_resource_group.network-rg]
  name                = "nginxvm-ip"
  location            = azurerm_resource_group.network-rg.location
  resource_group_name = azurerm_resource_group.network-rg.name
  allocation_method   = "Static"
}

# Create Network Card for the VM
resource "azurerm_network_interface" "nginx-nic" {
  depends_on          = [azurerm_resource_group.network-rg]
  name                = "nginxvm-nic"
  location            = azurerm_resource_group.network-rg.location
  resource_group_name = azurerm_resource_group.network-rg.name

  ip_configuration {
    name                          = "internal"
    subnet_id                     = azurerm_subnet.vm-subnet.id
    private_ip_address_allocation = "Dynamic"
    public_ip_address_id          = azurerm_public_ip.nginx-vm-ip.id
  }
}

# Create NGINX VM
resource "azurerm_linux_virtual_machine" "nginx-vm" {
  depends_on            = [azurerm_network_interface.nginx-nic]
  name                  = "nginxvm"
  location              = azurerm_resource_group.network-rg.location
  resource_group_name   = azurerm_resource_group.network-rg.name
  network_interface_ids = [azurerm_network_interface.nginx-nic.id]
  size                  = var.nginx_vm_size

  source_image_reference {
    publisher = "OpenLogic"
    offer     = "CentOS"
    sku       = "7_8-gen2"
    version   = "latest"
  }

  os_disk {
    name                 = "nginxvm-osdisk"
    caching              = "ReadWrite"
    storage_account_type = "Standard_LRS"
  }

  computer_name  = "nginxvm"
  admin_username = var.nginx_admin_username
  admin_password = random_password.nginx-vm-password.result
  custom_data    = base64encode(data.template_file.nginx-vm-cloud-init.rendered)

  disable_password_authentication = false
}
We can provide values to our variables through the terraform.tfvars file, or exported environment variables. This will make calling the terraform
command line tool simpler.
Here are the PowerShell environment variables:
$Env:TF_VAR_location = "eastus"
# Variable names that contain hyphens must be wrapped in ${Env:...}
${Env:TF_VAR_network-vnet-cidr} = "10.0.0.0/24"
${Env:TF_VAR_vm-subnet-cidr} = "10.0.0.0/26"
$Env:TF_VAR_nginx_vm_size = "Standard_B1s"
$Env:TF_VAR_nginx_admin_username = "tfadmin"
${Env:TF_VAR_azure-subscription-id} = "complete-here"
${Env:TF_VAR_azure-client-id} = "complete-here"
${Env:TF_VAR_azure-client-secret} = "complete-here"
${Env:TF_VAR_azure-tenant-id} = "complete-here"
And here are the Bash environment variables:
export TF_VAR_location="eastus"
export TF_VAR_nginx_vm_size="Standard_B1s"
export TF_VAR_nginx_admin_username="tfadmin"
# Bash cannot export variable names that contain hyphens, so supply
# these values through terraform.tfvars instead:
#   network-vnet-cidr, vm-subnet-cidr, azure-subscription-id,
#   azure-client-id, azure-client-secret, azure-tenant-id
When using a terraform.tfvars file, ensure you never commit the file to source control or share the file with others:
location              = "eastus"
network-vnet-cidr     = "10.0.0.0/24"
vm-subnet-cidr        = "10.0.0.0/26"
nginx_vm_size         = "Standard_B1s"
nginx_admin_username  = "tfadmin"
azure-subscription-id = "complete-here"
azure-client-id       = "complete-here"
azure-client-secret   = "complete-here"
azure-tenant-id       = "complete-here"
We must first initialize our working directory for deploying Terraform:
terraform init
Before we run terraform
to deploy our infrastructure, it’s a good idea to use the plan
command to discover what Terraform intends to do in our Azure account:
terraform plan
If you approve the plan, you can apply the Terraform to your Azure account by running the following; when you are prompted to approve, type yes
:
terraform apply
After Terraform runs, you can go find your newly created resources in Azure and use the IP address to view the default NGINX OSS landing page.
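Rather than hunting for the IP address in the Azure Portal, you can have Terraform print it after apply; a minimal sketch, assuming it is saved as outputs.tf alongside the other files:

```hcl
# Print the NGINX VM's public IP address after `terraform apply`
output "nginx_public_ip" {
  value = azurerm_public_ip.nginx-vm-ip.ip_address
}
```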
In this section, we will deploy a Linux virtual machine with NGINX Plus using Terraform. Unlike with the open source version, here we will deploy a virtual machine image preinstalled with NGINX Plus from the Azure Marketplace. The currently suggested Azure VM sizes for NGINX Plus are:
Before we get started with Terraform, we need to accept the Azure Marketplace terms using the following PowerShell script:
Get-AzMarketplaceTerms -Publisher "nginxinc" -Product "nginx-plus-v1" `
    -Name "nginx-plus-ub1804" | Set-AzMarketplaceTerms -Accept
To deploy an NGINX Plus virtual machine, we will need to find the values for the publisher, offer, and sku parameters of the Azure Marketplace source image, using PowerShell.
Start by defining the Azure region you’d like to provision into using a variable:
$location = "East US"
Then set a variable to hold the name of the publisher and query the list of offers. For NGINX Plus images, the publisher is called nginxinc:
$publisher = "nginxinc"
Get-AzVMImageOffer -Location $location -PublisherName $publisher | Select Offer
These are the results:
Offer
-----
nginx-plus-ent-v1
nginx-plus-v1
Next, we list SKUs for NGINX Plus. We do not want the enterprise agreement because that requires us to bring our own license. We’ll instead use the standard offering to pay for the software license by the hour:
$offer = "nginx-plus-v1"
Get-AzVMImageSku -Location $location -PublisherName $publisher -Offer $offer | `
    Select Skus
These are the resulting SKUs:
Skus
----
nginx-plus-centos7
nginx-plus-q1fy17
nginx-plus-rhel7
nginx-plus-rhel8
nginx-plus-ub1604
nginx-plus-ub1804
As we can see, there are several options for an operating system to deploy NGINX Plus on Azure: CentOS Linux 7, Red Hat Enterprise Linux 7 and 8, and Ubuntu Linux 16.04 and 18.04.
If we want to use the enterprise version of NGINX Plus, we can use the following code to list SKUs:
$offer = "nginx-plus-ent-v1"
Get-AzVMImageSku -Location $location -PublisherName $publisher -Offer $offer | `
    Select Skus
The result will be as follows:
Skus
----
nginx-plus-ent-centos7
nginx-plus-ent-rhel7
nginx-plus-ent-ub1804
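Optionally, you can also list the image versions available for a given SKU, shown here for the standard Ubuntu 18.04 image; the exact version numbers returned will vary over time:

```powershell
# List available image versions for the standard Ubuntu 18.04 SKU
$offer = "nginx-plus-v1"
$sku = "nginx-plus-ub1804"
Get-AzVMImage -Location $location -PublisherName $publisher -Offer $offer `
    -Skus $sku | Select Version
```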
The first step is to create the provider file for Terraform. The provider is what Terraform uses to interact with the Azure APIs.
We create a file called provider-main.tf to configure the interaction between Terraform and the Azure provider:
# Define Terraform provider
terraform {
  required_version = ">= 0.12"
}

# Configure the Azure provider
provider "azurerm" {
  environment = "public"
  version     = ">= 2.15.0"
  features {}

  # It is important that the following values of these variables
  # NEVER be written to source control, and therefore should not be
  # hard-coded with defaults and should always come from the local
  # environment
  subscription_id = var.azure-subscription-id
  client_id       = var.azure-client-id
  client_secret   = var.azure-client-secret
  tenant_id       = var.azure-tenant-id
}
Next, we create a file called provider-variables.tf that is used to manage the authentication variables of the Azure provider:
variable "azure-subscription-id" {
  type        = string
  description = "Azure Subscription ID"
}

variable "azure-client-id" {
  type        = string
  description = "Azure Client ID"
}

variable "azure-client-secret" {
  type        = string
  description = "Azure Client Secret"
}

variable "azure-tenant-id" {
  type        = string
  description = "Azure Tenant ID"
}
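If you do not yet have a service principal to supply these four values, one way to create one is with the Azure CLI. This is only a sketch: the principal name is arbitrary, and you should scope the role assignment to match your own security requirements:

```shell
# Creates a service principal with Contributor rights on the subscription.
# In the output, appId, password, and tenant map to azure-client-id,
# azure-client-secret, and azure-tenant-id, respectively.
az ad sp create-for-rbac --name "nginx-terraform" \
  --role "Contributor" \
  --scopes "/subscriptions/<your-subscription-id>"
```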
The next step is to create the resource group that will host all of our Azure resources. We will also create a VNET and, within it, a subnet that will host our virtual machine.
We create a file called network-main.tf to describe these resources:
# Create a resource group
resource "azurerm_resource_group" "network-rg" {
  name     = "nginx-network-rg"
  location = var.location
}

# Create the network VNET
resource "azurerm_virtual_network" "network-vnet" {
  name                = "nginx-network-vnet"
  address_space       = [var.network-vnet-cidr]
  resource_group_name = azurerm_resource_group.network-rg.name
  location            = azurerm_resource_group.network-rg.location
}

# Create a subnet for VM
resource "azurerm_subnet" "vm-subnet" {
  name                 = "nginx-vm-subnet"
  address_prefixes     = [var.vm-subnet-cidr]
  virtual_network_name = azurerm_virtual_network.network-vnet.name
  resource_group_name  = azurerm_resource_group.network-rg.name
}
Then, we create the file network-variables.tf to manage network variables:
variable "location" {
  type        = string
  description = "Azure Region"
  default     = "eastus"
}

variable "network-vnet-cidr" {
  type        = string
  description = "The CIDR of the network VNET"
}

variable "vm-subnet-cidr" {
  type        = string
  description = "The CIDR for the vm subnet"
}
In this section, we will create an Azure network security group (NSG) to protect our virtual machine. The security group will allow inbound traffic on ports 22 (SSH), 80 (HTTP), and 443 (HTTPS).
For brevity, the following code will allow SSH connections from anywhere on the internet. You should determine your own needs for SSH access and restrict access accordingly.
We create a file called security-main.tf and add the following code:
# Create Network Security Group
resource "azurerm_network_security_group" "nginx-vm-nsg" {
  depends_on = [azurerm_resource_group.network-rg]

  name                = "nginxvm-nsg"
  location            = azurerm_resource_group.network-rg.location
  resource_group_name = azurerm_resource_group.network-rg.name

  security_rule {
    name                       = "Allow-SSH"
    description                = "Allow SSH"
    priority                   = 100
    direction                  = "Inbound"
    access                     = "Allow"
    protocol                   = "Tcp"
    source_port_range          = "*"
    destination_port_range     = "22"
    source_address_prefix      = "Internet"
    destination_address_prefix = "*"
  }

  security_rule {
    name                       = "Allow-HTTP"
    description                = "Allow HTTP"
    priority                   = 110
    direction                  = "Inbound"
    access                     = "Allow"
    protocol                   = "Tcp"
    source_port_range          = "*"
    destination_port_range     = "80"
    source_address_prefix      = "Internet"
    destination_address_prefix = "*"
  }

  security_rule {
    name                       = "Allow-HTTPS"
    description                = "Allow HTTPS"
    priority                   = 120
    direction                  = "Inbound"
    access                     = "Allow"
    protocol                   = "Tcp"
    source_port_range          = "*"
    destination_port_range     = "443"
    source_address_prefix      = "Internet"
    destination_address_prefix = "*"
  }
}

# Associate the web NSG with the subnet
resource "azurerm_subnet_network_security_group_association" "ngx-nsg-assoc" {
  depends_on                = [azurerm_resource_group.network-rg]
  subnet_id                 = azurerm_subnet.vm-subnet.id
  network_security_group_id = azurerm_network_security_group.nginx-vm-nsg.id
}
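To restrict SSH access as recommended above, one option is to parameterize the source prefix. The variable admin-source-cidr below is hypothetical, not part of the configuration in this chapter; with it defined, the Allow-SSH rule would use source_address_prefix = var.admin-source-cidr instead of "Internet":

```hcl
# Hypothetical variable for restricting SSH; replace the default
# with your own management network's CIDR block before use.
variable "admin-source-cidr" {
  type        = string
  description = "CIDR block allowed to reach SSH on the VM"
  default     = "203.0.113.0/24" # documentation range (TEST-NET-3)
}
```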
In this section, we will define a virtual machine with NGINX Plus.
First, we create a file called vm-nginx-main.tf and add code to generate a random password and a random virtual machine name:
# Generate random password
resource "random_password" "nginx-vm-password" {
  length           = 16
  min_upper        = 2
  min_lower        = 2
  min_special      = 2
  number           = true
  special          = true
  override_special = "!@#$%&"
}

# Generate a random vm name
resource "random_string" "nginx-vm-name" {
  length  = 8
  upper   = false
  number  = false
  lower   = true
  special = false
}
Then, to the same file, we add code to request a public IP address, generate a network card, and assign the public IP address to it:
# Get a Static Public IP
resource "azurerm_public_ip" "nginx-vm-ip" {
  depends_on = [azurerm_resource_group.network-rg]

  name                = "nginx-${random_string.nginx-vm-name.result}-ip"
  location            = azurerm_resource_group.network-rg.location
  resource_group_name = azurerm_resource_group.network-rg.name
  allocation_method   = "Static"
}

# Create Network Card for the VM
resource "azurerm_network_interface" "nginx-nic" {
  depends_on = [azurerm_resource_group.network-rg]

  name                = "nginx-${random_string.nginx-vm-name.result}-nic"
  location            = azurerm_resource_group.network-rg.location
  resource_group_name = azurerm_resource_group.network-rg.name

  ip_configuration {
    name                          = "internal"
    subnet_id                     = azurerm_subnet.vm-subnet.id
    private_ip_address_allocation = "Dynamic"
    public_ip_address_id          = azurerm_public_ip.nginx-vm-ip.id
  }
}
Next, we add the definition to create the virtual machine with the NGINX Plus image:
# Create NGINX VM
resource "azurerm_linux_virtual_machine" "nginx-vm" {
  depends_on = [azurerm_network_interface.nginx-nic]

  name                  = "nginx-${random_string.nginx-vm-name.result}-vm"
  location              = azurerm_resource_group.network-rg.location
  resource_group_name   = azurerm_resource_group.network-rg.name
  network_interface_ids = [azurerm_network_interface.nginx-nic.id]
  size                  = var.nginx_vm_size

  source_image_reference {
    publisher = var.nginx-publisher
    offer     = var.nginx-plus-offer
    sku       = "nginx-plus-ub1804"
    version   = "latest"
  }

  plan {
    name      = "nginx-plus-ub1804"
    publisher = var.nginx-publisher
    product   = var.nginx-plus-offer
  }

  os_disk {
    name                 = "nginx-${random_string.nginx-vm-name.result}-osdisk"
    caching              = "ReadWrite"
    storage_account_type = "Standard_LRS"
  }

  computer_name                   = "nginx-${random_string.nginx-vm-name.result}-vm"
  admin_username                  = var.nginx_admin_username
  admin_password                  = random_password.nginx-vm-password.result
  disable_password_authentication = false
}
Finally, we create a file called vm-nginx-variables.tf to manage variables for virtual machines:
variable "nginx_vm_size" {
  type        = string
  description = "Size (SKU) of the virtual machine to create"
}

variable "nginx_admin_username" {
  description = "Username for Virtual Machine administrator account"
  type        = string
  default     = ""
}

variable "nginx_admin_password" {
  description = "Password for Virtual Machine administrator account"
  type        = string
  default     = ""
}

variable "nginx-publisher" {
  type        = string
  description = "Publisher ID for NGINX"
  default     = "nginxinc"
}

variable "nginx-plus-offer" {
  type        = string
  description = "Offer ID for NGINX"
  default     = "nginx-plus-v1"
}
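The configuration above leaves you to look up the VM's public IP address and generated password in Azure. If you prefer, an optional outputs file can surface both from Terraform; the file name outputs.tf and the output names here are our own additions, not part of the original configuration:

```hcl
# outputs.tf -- optional convenience outputs
output "public_ip_address" {
  description = "Public IP address of the NGINX Plus virtual machine"
  value       = azurerm_public_ip.nginx-vm-ip.ip_address
}

output "admin_password" {
  description = "Generated administrator password"
  value       = random_password.nginx-vm-password.result
  sensitive   = true
}
```

After terraform apply, terraform output -raw public_ip_address prints the address, and terraform output admin_password reveals the password only when explicitly requested.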
We can provide values to our variables through the terraform.tfvars file or through exported environment variables; this will make calling the terraform command-line tool simpler.
Here are the PowerShell environment variables:
$Env:TF_VAR_location = "eastus"
${Env:TF_VAR_network-vnet-cidr} = "10.0.0.0/24"
${Env:TF_VAR_vm-subnet-cidr} = "10.0.0.0/26"
$Env:TF_VAR_nginx_vm_size = "Standard_B1s"
$Env:TF_VAR_nginx_admin_username = "tfadmin"
${Env:TF_VAR_azure-subscription-id} = "complete-here"
${Env:TF_VAR_azure-client-id} = "complete-here"
${Env:TF_VAR_azure-client-secret} = "complete-here"
${Env:TF_VAR_azure-tenant-id} = "complete-here"
And here are the Bash environment variables (note that Bash cannot export names containing hyphens, so those values are best supplied through terraform.tfvars):
export TF_VAR_location="eastus"
export TF_VAR_network-vnet-cidr="10.0.0.0/24"
export TF_VAR_vm-subnet-cidr="10.0.0.0/26"
export TF_VAR_nginx_vm_size="Standard_B1s"
export TF_VAR_nginx_admin_username="tfadmin"
export TF_VAR_azure-subscription-id="complete-here"
export TF_VAR_azure-client-id="complete-here"
export TF_VAR_azure-client-secret="complete-here"
export TF_VAR_azure-tenant-id="complete-here"
When using a terraform.tfvars file, ensure you never commit it to source control or share it with others:
location = "eastus"
network-vnet-cidr = "10.0.0.0/24"
vm-subnet-cidr = "10.0.0.0/26"
nginx_vm_size = "Standard_B1s"
nginx_admin_username = "tfadmin"
azure-subscription-id = "complete-here"
azure-client-id = "complete-here"
azure-client-secret = "complete-here"
azure-tenant-id = "complete-here"
As with the open source deployment, we first initialize the working directory with terraform init. Then, before we run terraform apply, it’s a good idea to use the plan command to discover what Terraform intends to do in our Azure account:
terraform plan
If you approve the plan, you can apply it to your Azure account by running the following. When you are prompted to approve, type yes:
terraform apply
After Terraform runs, you can go find your newly created resources in Azure and use the IP address to view the default NGINX Plus landing page.
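Rather than hunting through the Portal, you can also fetch the address with the Azure CLI and confirm NGINX Plus is responding. This is a sketch, assuming the resource group name nginx-network-rg from earlier and a logged-in az session:

```shell
# Look up the first VM's public IP in the resource group, then
# request the landing page headers.
IP=$(az vm list-ip-addresses --resource-group nginx-network-rg \
  --query "[0].virtualMachine.network.publicIpAddresses[0].ipAddress" \
  --output tsv)
curl -sI "http://$IP"
```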
This chapter was a chance to deploy both NGINX OSS and NGINX Plus and to explore the levels of functionality available from both products, as well as the differences between them. NGINX OSS is free but requires a better understanding of how to deploy it and how to make the best use of its feature set. NGINX Plus has several varied and convenient options for deployment and is a commercial product that offers advanced features and enterprise-level support as licensed software by NGINX, Inc.
We deployed NGINX OSS and NGINX Plus using a combination of the Azure Portal, PowerShell, and Terraform to see the available options. Terraform provided the most complete solution for NGINX OSS and NGINX Plus, allowing the greatest levels of automation and integration into a full Azure deployment scenario.
To learn in detail how to configure NGINX, consider checking out Derek’s book, NGINX Cookbook: Advanced Recipes for High-Performance Load Balancing (O’Reilly).
In the next chapter, we will compare the features of Azure managed load-balancing solutions with NGINX and NGINX Plus.