Logical Edge load balancers

Load balancers distribute network traffic across multiple servers to improve performance and provide high availability of services. This distribution of incoming requests among multiple servers is transparent to end users, which makes the load balancer a critical component of any environment.

For further reading, a good use case for load balancers can be found at:
http://cloudmaniac.net/load-balance-vmware-psc-with-nsx/

The Edge services gateway offers logical Edge load balancers that allow you to utilize the load balancing functionality and distribute incoming traffic across multiple virtual machine instances.

An Edge services gateway instance must be deployed in order to enable the load balancer service. Configuring a load balancer involves multiple steps. It begins with enabling the service, followed by configuring an application profile to define behavior based on the traffic type. Next, you create a service monitor that performs health checks on the services behind the load balancer, which prevents traffic from being sent to a dead node. You then create a server pool listing the servers that participate in load balancing, and finally a virtual server that receives all the traffic and distributes it among the pool members based on the policies you set.

To configure the load balancer service, do the following:

  1. Go to Home | Networking & Security | NSX Edges, double click on an Edge gateway, and go to Manage | Load Balancer | Global Configuration:
  2. Click Edit to enable the load balancer service:

Select Enable Load Balancer to enable the load balancer service.

Enable Acceleration enables the Edge load balancer to use the faster L4 load balancer engine rather than the L7 engine.

A Layer 4 load balancer makes routing decisions based on IP addresses and TCP or UDP ports. It has a packet-level view of the traffic exchanged between the client and a server and makes decisions packet by packet, with a single connection established directly between the client and the server.
A Layer 7 load balancer makes routing decisions based on IP addresses, TCP or UDP ports, and information it can read from the application protocol (mainly HTTP). It acts as a proxy and maintains two separate TCP connections: one with the client and one with the server.
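The distinction can be illustrated with a minimal sketch (not NSX code; the backend and pool names are hypothetical): an L4 engine can only consider addresses and ports, while an L7 engine can parse the HTTP request itself before choosing a backend.

```python
# Hypothetical sketch of L4 vs L7 routing decisions (not NSX code).

def l4_pick(backends, src_ip, src_port):
    """L4: decide from the connection 4-tuple only; no payload visibility."""
    return backends[hash((src_ip, src_port)) % len(backends)]

def l7_pick(backends_by_host, raw_http_request, default_backend):
    """L7: parse the Host header and route on application-layer data."""
    for line in raw_http_request.split("\r\n")[1:]:
        if line.lower().startswith("host:"):
            host = line.split(":", 1)[1].strip()
            return backends_by_host.get(host, default_backend)
    return default_backend
```

Because the L7 engine must read and buffer application data, it is slower than the L4 engine, which is why Enable Acceleration switches to L4 processing.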

Logging allows you to specify the level of logging. Higher logging levels generate a large amount of data on the Edge appliance, so forwarding logs to a syslog server is recommended as a best practice.

Enable Service Insertion allows the Edge appliance to work with a third-party load balancer directly.

  3. Click OK to enable the load balancer service:

Now that the load balancer service is enabled, we will proceed to create a service monitor to monitor and define health check parameters. This service monitor is associated with a pool of servers that will be serving the traffic behind the load balancer:

  1. Go to Home | Networking & Security | NSX Edges, double click on an Edge appliance, and then go to Manage | Load Balancer | Service Monitoring:

You will notice the default monitors in place. You can either edit them or remove them if not needed.

  2. Click on the + icon to add a New Service Monitor:
  3. Enter a Name for the service, followed by the Interval at which the service is checked.
  4. Enter the Timeout, the maximum time in seconds to wait for a response, followed by the Max Retries allowed before declaring a failure.
  5. The Type field defines the protocol used to check the service, such as HTTP or HTTPS.
  6. Enter the expected string in the Expected section. This is the string the monitor expects in the response when it checks the HTTP or HTTPS service.
  7. Select the Method and the base URL to test. If the method is POST instead of GET, type the data to send to the server in the Send field. When the Expected string matches, the monitor also checks the response against the Receive string.
  8. The Extension field allows you to enter advanced monitoring parameters as key:value pairs. This is optional.
  9. Click OK when done.
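The Timeout and Max Retries behavior can be sketched as follows (a hypothetical illustration, not NSX code; the URL and thresholds are placeholders). A member is declared down only after every retry within the probe cycle fails; the real monitor repeats this cycle on the configured Interval.

```python
# Hypothetical sketch of a service monitor's health check (not NSX code).
import urllib.request

def check_once(url, timeout, expected_status=200):
    """One health probe: request the URL and compare the status code."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == expected_status
    except OSError:
        return False  # timeout, refused connection, DNS failure, and so on

def is_healthy(url, timeout=5, max_retries=3):
    """Declare the member down only after max_retries failed probes."""
    return any(check_once(url, timeout) for _ in range(max_retries))
```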

We will now proceed to create a server pool so we can associate it with the service monitor:

  1. Click on Pools in the Load Balancer section of the Edge appliance:
  2. Click on the + icon to create a New Pool:
  3. Enter the Name of the pool and select the appropriate Algorithm for the service you are load balancing. The four options are:
  • Round-Robin: Each server is used in turn according to the priority or weight assigned to it.
  • IP_Hash: Selects a server based on a hash of the source and destination IP addresses of each packet.
  • Least_Conn: Directs new connections to the server in the pool that has the least connections active.
  • URI: The URI is hashed and divided by the total weight of all the servers in the pool. The result is used to decide which server in the pool will take the request.


  4. Select the Monitors that apply to the pool from the drop-down menu.
  5. Add members to the pool by clicking the + icon:
  1. Type in a Name for the member, followed by its IP address or a vCenter object such as a cluster.
  2. Set the State of the member. Choose between the Enable, Disable, and Drain options.
The Drain option forces the server to shut down gracefully for maintenance. It removes the server from the load-balancing pool for new traffic, but the server still serves existing connections and new connections from clients with a persistent session to that server.
  3. Type in the member Port where the traffic is to be sent. The Monitor Port is the port on which the member receives health check probes. The Weight determines how much traffic this member can handle relative to the other members.
  4. The Max Connections and Min Connections settings allow you to manage traffic and the number of connections appropriately. Click OK.


  6. The Transparent option allows the backend servers in the pool to see the source IP of the request. Transparent is disabled by default, so the backend servers see traffic as coming from the internal load balancer IP.
  7. Click OK when done.
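The four pool algorithms can be sketched in a few lines (a hypothetical illustration, not NSX code; member names are placeholders, and the per-member Weight handling is omitted for brevity).

```python
# Hypothetical sketch of the pool selection algorithms (not NSX code).
import itertools
import hashlib

class Pool:
    def __init__(self, members):
        self.members = list(members)                 # server names
        self._rr = itertools.cycle(self.members)     # round-robin cursor
        self.active = {m: 0 for m in self.members}   # open connection counts

    def round_robin(self):
        """Each server is used in turn."""
        return next(self._rr)

    def ip_hash(self, src_ip, dst_ip):
        """Hash the source and destination IPs to pick a stable member."""
        digest = hashlib.md5(f"{src_ip}-{dst_ip}".encode()).hexdigest()
        return self.members[int(digest, 16) % len(self.members)]

    def least_conn(self):
        """Send new connections to the member with the fewest active ones."""
        return min(self.members, key=lambda m: self.active[m])

    def uri_hash(self, uri):
        """Hash the request URI to pick a member."""
        digest = hashlib.md5(uri.encode()).hexdigest()
        return self.members[int(digest, 16) % len(self.members)]
```

Note that IP_Hash and URI give the same client (or URI) the same member across requests, which is why they are often chosen when no explicit persistence is configured.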

Before we create a virtual server to map to the pool, we have to define an application profile that defines the behavior of a particular type of network traffic. When traffic is received, the virtual server processes the traffic based on the values defined in the profile. This allows for greater control over managing your network traffic:

  1. Select Application Profiles:
  2. Click the + icon:
  3. Type a Name for the profile and select the Type of traffic. If you want to redirect HTTP traffic, enter the URL it should be redirected to.
  4. Specify the Persistence that applies to the profile. Persistence tracks and stores session data so that a client's requests are directed to the same pool member. Different persistence methods are supported for different types of traffic.


  5. Selecting HTTPS allows you to terminate SSL certificates at the load balancer or configure SSL passthrough to your backend pool servers.
  6. Select any Cipher algorithms to be negotiated during the SSL/TLS handshake.
  7. Click OK when done.
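One common persistence method for HTTP traffic is cookie insertion, which can be sketched as follows (a hypothetical illustration, not NSX code; the cookie name and member names are placeholders). The load balancer inserts a cookie naming the member that served the first request, then routes later requests carrying that cookie back to the same member.

```python
# Hypothetical sketch of cookie-based persistence (not NSX code).

def pick_member(members, cookies, cookie_name="LB_COOKIE"):
    """Return (member, set_cookie), honoring an existing persistence cookie."""
    sticky = cookies.get(cookie_name)
    if sticky in members:
        return sticky, None                       # session stays sticky
    member = members[0]                           # fall back to the pool's choice
    return member, f"{cookie_name}={member}"      # tell the client to pin it
```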

Now that we have the application profile created, let's create a virtual server and associate it with the pool. Once this is done, external traffic can be directed to the virtual server IP that, in turn, distributes the traffic across the pool members based on the algorithm we have defined:

  1. Select Virtual Servers and click the + icon to add a new virtual server:
  2. You will see the New Virtual Server window pop up:
Check Enable Acceleration for the NSX Edge load balancer to use the faster L4 load balancer engine rather than the L7 load balancer engine.
  3. Select the Application Profile for your virtual server.
  4. Type a Name for the virtual server.
  5. Enter the IP Address of the virtual server. This is the IP address on which the load balancer receives all external traffic.
  6. Select the Protocol the virtual server will handle and the port on which to receive the traffic.
  7. Select the Default Pool and set the Connection Limit, if applicable.
  8. Click OK when done.
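The virtual server's role, including the Connection Limit, can be sketched as follows (a hypothetical illustration, not NSX code; the VIP, port, and member names are placeholders): accept traffic on the VIP, enforce the limit, and hand each connection to a member of the default pool.

```python
# Hypothetical sketch of a virtual server with a connection limit (not NSX code).

class VirtualServer:
    def __init__(self, vip, port, default_pool, connection_limit):
        self.vip, self.port = vip, port
        self.default_pool = default_pool      # list of pool member names
        self.connection_limit = connection_limit
        self.open_connections = 0
        self._next = 0                        # round-robin cursor

    def accept(self):
        """Admit a connection if under the limit; pick a member round-robin."""
        if self.open_connections >= self.connection_limit:
            return None                       # over the limit: reject
        self.open_connections += 1
        member = self.default_pool[self._next % len(self.default_pool)]
        self._next += 1
        return member
```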

Let's now look at application rules to understand their use and configuration. An application rule allows you to specify logic that manages your traffic and makes intelligent redirection decisions. You can use an application rule to directly manipulate and manage IP application traffic. This becomes critical when fine-tuning traffic for the application you are running behind the perimeter:

  1. Log in to the vSphere web client and go to Home | Networking & Security | NSX Edges.
  2. Double click on an NSX Edge and navigate to the Manage | Load Balancer tab:
  3. Click Application Rules and click the Add icon:
  4. Type the Name and a Script for the rule. Click OK when done:

A good example of an application rule script follows. This script directs requests to a specific load balancer pool according to the domain name: requests for foo.com go to pool_1, and requests for bar.com go to pool_2.

acl is_foo hdr_dom(host) -i foo
acl is_bar hdr_dom(host) -i bar
use_backend pool_1 if is_foo
use_backend pool_2 if is_bar

You can find more rule syntax examples in VMware's NSX online documentation at:

http://pubs.vmware.com/nsx-63/topic/com.vmware.nsx.admin.doc/GUID-A5779D43-AC0F-4407-AF4A-0C1622394452.html
