A load-balanced service is accessed by users every day, whether they are booking a plane ticket on a website, watching the news, or browsing social media. Load balancing gives us the ability to distribute user or client requests for content and applications across multiple backend servers where the content is located. In this chapter, we will cover how to set up, monitor, and tune load-balanced services in NetScaler.
A load-balanced service within NetScaler allows us to distribute user requests from different sources based upon different parameters and algorithms, such as least bandwidth or least connections. It also provides persistency, which allows us to maintain a session to the same server. These features allow us to direct clients to a particular backend server, for example, the server with the fewest active connections.
A regular generic load-balanced service might look a bit like the one shown in the following figure. We have two backend web servers, which answer on port 80, and they are publicly accessible via a VIP address, which is the load-balanced service.
So, in essence, a load-balanced service in NetScaler consists of the following:
- Servers: the backend servers that host the content, defined by an IP address and a name, for example, 10.0.0.3 and server1, respectively
- Services: the protocol and port that each backend server responds on, for example, HTTP and 80, respectively
- Virtual server (vServer): the VIP address that clients connect to, for example, 80.80.80.80, together with a protocol and port, for example, HTTP and 80, respectively

If we have multiple backend servers hosting the same service, it is much more convenient to use service groups. This allows us to easily bind a service against multiple servers simultaneously.
When starting the deployment of load-balanced services, we need to have the basic configuration in place. A quick rule of thumb is to place a SNIP in each backend subnet so that NetScaler can communicate with the backend servers.
First, we need to enable the load balancing feature. This can be done either by right-clicking on the load balancing menu under Traffic Management in the GUI and clicking on Enable, or by using the following CLI command:
enable ns feature lb
In order to deploy a load-balanced web application, we first need to have servers in place that respond to some sort of network service. In this example, we have two Internet Information Services (IIS) web servers running on Windows Server. These are accessible internally via the IPs 10.0.0.2 and 10.0.0.3, and they respond to HTTP traffic on port 80.
First, we need to add the IP addresses to the server list. This can be done by going to Traffic Management | Load Balancing | Servers, and clicking on Add. Here, we just enter the IP address of the backend servers and click on Create. We have to do this for every backend server. After that is done, we have to add a service to the servers. This can be done by going to Traffic Management | Load Balancing | Services, and clicking on Add. Here, we have many different options. First, we need to choose the server we entered earlier, choose a type of protocol, enter a port number, and give the service a name.
Now, we add a monitor to the service, and click on Create. It will automatically start using the monitor to check the state of the backend server. If we open the service again, the monitor will show statistics about the response time and the status.
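The monitor binding from the GUI step above has a direct CLI equivalent. The service name svc_web1 is a placeholder, and http is one of the built-in monitors:

```
bind service svc_web1 -monitorName http
show service svc_web1
```

The `show service` command displays, among other things, the monitor state and probe statistics for the service.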
We have other types of monitors that we can use as well. All of the default monitors are listed under Traffic Management | Monitors. There are many types of built-in monitors that we can use. They are explained as follows:
- http: probes the service with an HTTP GET request.
- https: probes the service with a GET request and a successful SSL handshake.

All of the monitors have parameters that define how often they should probe a service before they set it as offline. Some monitors also have extended parameters. These can be viewed by opening a monitor and going into the Special Parameters pane.
The monitors listed here are just some examples. We also have monitors with the ecv suffix, which stands for extended content verification. These are used when we need to send a specific payload with a monitor. For example, if we want to check for custom headers on a web server, we can use the http-ecv monitor. The same can be done with other monitors as well.
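As an illustration, an ecv-style monitor that sends a request and checks the response for a specific string could be created as follows. The monitor name, URL, and match string are made up for this example:

```
add lb monitor mon_web_ecv HTTP-ECV -send "GET /healthcheck.html" -recv "OK"
bind service svc_web1 -monitorName mon_web_ecv
```

The service will only be marked UP if the probe response contains the string given in the -recv parameter.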
There are also some monitors that are not built-in by default. We can add custom monitors for the Citrix web interface, XML service, DDC, and so on. These can be added by going to Monitors | Add. On the right-hand side under Types, there are different Citrix services that we can add a custom monitor to. For example, if we choose CITRIX-XML-SERVICE, we need to specify an application name in the Special Parameters pane. If we click on Create, we can use this monitor when setting up a load-balanced XML service.
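From the CLI, a custom Citrix monitor of this kind might be created as shown below. The application name Notepad is just an example, and the exact parameter name should be verified against your firmware version's documentation:

```
add lb monitor mon_xml CITRIX-XML-SERVICE -applicationName Notepad
```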
Now, we need to create a service in NetScaler for each of the backend servers that host the application. Note that a service is bound to a specific port on a server; we cannot create a second service for a server and port combination that is already bound to another service.
If we want to limit the amount of bandwidth or number of clients that can access the backend service, we can add thresholds to the service. This can be done by going to Service | Advanced | Thresholds. This is useful if you have some backend servers that have limited bandwidth, or when you wish to guard yourself against a DDoS attack.
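The same thresholds can be set from the CLI. The values here are illustrative, and -maxBandwidth is specified in Kbps:

```
set service svc_web1 -maxClient 100 -maxBandwidth 5000
```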
After we have created a service for each of our servers, we can go on to create the load-balanced virtual server. Go back to Traffic Management | Load Balancing | Virtual Servers, and click on Add. There are multiple settings that we need to set here. First, we need to enter a name, IP address, port, and protocol. Now, what kind of protocol we choose here is essential. For example, if we choose SSL, and the backend servers are responding on regular HTTP traffic, NetScaler will automatically do SSL offloading. This means that NetScaler will terminate the SSL connection at the VIP, and then fulfill regular HTTP requests to the backend servers. The advantage of this is that the backend web servers do not need to use CPU cycles for handling SSL traffic.
When we enable SSL as a protocol on the vServer, the SSL Settings pane is enabled and here we need to add an SSL certificate for our service. It is important that DNS is configured properly. If the DNS name and the subject name in the certificate do not match, we will get a warning, as NetScaler will not be able to validate the certificate. Also, it is important that we have the full SSL chain in place. If not, NetScaler cannot validate the certificate.
If company requirements dictate that all traffic needs to be encrypted from client to server, we can use SSL bridging. This makes NetScaler bridge the encrypted traffic from the clients straight through to the backend servers. When we enable SSL bridging, NetScaler disables some features, as it cannot see into the packets because the traffic is encrypted. For example, features such as content switching, SureConnect, or cache redirection will not work. Also, with SSL bridging, we do not need to add a certificate, as it is already available on the backend servers.

So for this example, we will use SSL and add a certificate in the SSL Settings pane. After we have done this, we have to bind the backend services or service groups to the vServer. If we do not add a service to the vServer, it will be listed as DOWN until one has been added and assigned. After we have added the required information, the vServer should look something like the following screenshot:
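An SSL-offloading vServer of the kind described above can also be put together from the CLI. The certificate file paths and all object names here are placeholders:

```
add ssl certKey cert_web -cert /nsconfig/ssl/web.crt -key /nsconfig/ssl/web.key
add lb vserver vs_web_ssl SSL 80.80.80.80 443
bind lb vserver vs_web_ssl svc_web1
bind lb vserver vs_web_ssl svc_web2
bind ssl vserver vs_web_ssl -certkeyName cert_web
```

For SSL bridging, the vServer would instead be created with the SSL_BRIDGE protocol, and no certificate binding would be needed.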
Now, we should define the load balancing methods and persistency. There are multiple ways to load balance between the different services. They are explained as follows:
- Round robin: requests are distributed to each service in turn.
- Least connection: new requests go to the service with the fewest active connections.
- Least bandwidth: new requests go to the service currently serving the least amount of traffic.
- Source IP hash: if a client connects from, for example, 10.0.0.1, NetScaler creates a hash out of the source IP. Frequent connections made from the same IP and/or subnet will go to the same service.

Some of the load balancing methods are intended for particular services and protocols, so when we set up load balancing and want to use a custom load balancing method, we should make sure that the method is supported for the service. For example, Lync 2013 uses a special NetScaler monitor, which is listed in its setup guide.
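The load balancing method is set on the vServer. For example, using the placeholder name vs_web:

```
set lb vserver vs_web -lbMethod ROUNDROBIN
```

Other valid values include LEASTCONNECTION, LEASTBANDWIDTH, and SOURCEIPHASH.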
Here, we will use the round-robin method. After we have chosen a way to load balance, we can choose how the connection will persist to the service. Again, there are different methods for a connection to persist. Common persistency types include the following:

- SOURCEIP: connections from the same client IP address are sent to the same service.
- COOKIEINSERT: NetScaler inserts an HTTP cookie that ties the client's session to a particular service.
- SSLSESSION: persistency is based on the SSL session ID.
Some of the persistency types are specific to a particular type of vServer, and all persistency types have a timer attached to them, which defines how long a connection should persist to a service. You can read more about the different persistency types, and what kind of protocols they can be used with, at http://support.citrix.com/proddocs/topic/netscaler-load-balancing-93/ns-lb-persistence-about-con.html.
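Persistency is also set on the vServer. For example, source IP persistence with a 10-minute timeout could be configured like this (vs_web is a placeholder name):

```
set lb vserver vs_web -persistenceType SOURCEIP -timeout 10
```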
Now, let us explore a bit about the more advanced configurations that we can configure on a vServer.
Assigning weights to services allows us to distribute load in proportion to capacity, based upon parameters such as hardware. If we have older backend web servers with 4 GB of RAM, and newly set up servers with 8 GB of RAM, then the new ones should have a higher weight. Weights are assigned when we attach a service to a load-balanced vServer. The higher the weight we set on a service, the larger the share of traffic/connections it receives. This is shown in the following screenshot:
However, it is important to remember that not all load balancing methods support weighting. For example, the hashing load balancing methods and the token load balancing method do not support weighting.
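Weights are specified when binding a service to the vServer. In this sketch (names are placeholders), svc_web1 would receive roughly twice as many connections as svc_web2:

```
bind lb vserver vs_web svc_web1 -weight 2
bind lb vserver vs_web svc_web2 -weight 1
```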
Redirect URL is a function that allows us to send a client to a custom web page if the vServer is down. This only works if the vServer is set up using the HTTP or HTTPS protocols. This can be useful for instances where we have a planned maintenance or some unplanned failures, and we want to redirect users to a specific web page, where we have posted information about what is happening. This feature can be configured under vServer | Advanced | Redirect URL.
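From the CLI, the redirect URL is set on the vServer; the URL below is a made-up example:

```
set lb vserver vs_web -redirectURL "http://status.example.com/maintenance.html"
```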
Backup vServer allows us to failover to another vServer, in case the main vServer should go down. This can be configured under vServer | Advanced | Backup vServer.
In addition to handling failover, we can also use the backup vServer to handle excessive traffic in case the primary vServer is flooded. This is known as spillover. We can define spillover based upon different criteria, such as bandwidth or number of connections. We can then define what the vServer should do if there are too many connections, for example, whether it should drop new connections, accept them, or redirect them to the backup vServer. These settings can be configured in the same pane as the failover settings. Here, we need to configure the method and what kind of action we want it to take.
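A backup vServer with connection-based spillover might be configured like this from the CLI. The IP address, threshold, and names are illustrative only:

```
add lb vserver vs_web_backup HTTP 10.0.0.50 80
set lb vserver vs_web -backupVServer vs_web_backup
set lb vserver vs_web -soMethod CONNECTION -soThreshold 1000
```

With these settings, once the primary vServer exceeds 1000 concurrent connections, new connections spill over to vs_web_backup.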
We have now gone through the basics of setting up a load-balanced service, and some of the advanced configuration that we can set. Now, let us continue with this and use the basics to set up load balanced services for Citrix XenApp and XenDesktop.
Now, there are only certain particular services that we can set up as load-balanced in a Citrix environment. They are listed as follows:
- Citrix Web Interface
- The Citrix XML Broker (XML service)
- Desktop Delivery Controller (DDC)
- StoreFront