The mode switches on the NetScaler

Now we come to the switches that you can turn ON and OFF by navigating to System | Settings on the NetScaler.
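
The same list can also be pulled up from the command line. As a minimal sketch (the exact output layout varies a little between firmware versions):

    show ns mode

The output lists every mode along with its acronym (FR, L2, L3, USIP, and so on) and whether it is currently ON or OFF; those acronyms are what the enable ns mode and disable ns mode commands expect.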

Modes that are enabled by default

Let's start with the ones that are enabled by default. The fact that they ship enabled also means that, for the most part, they play well with most deployments.

Fast Ramp

Fast Ramp is a performance friend. Traditional (read RFC-based) TCP follows a very conservative approach to increasing window sizes; while this made perfect sense in the days of unreliable pipes, it keeps the TCP connection from quickly reaching its top speed. Especially in the context of the NetScaler, which sits close to, or at least has a very solid connection to, the Server, Fast Ramp works great and is one of those features that rarely has to be touched.
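
If you ever do need to toggle it, for instance while isolating a TCP performance issue, the CLI form is a one-liner. A sketch, assuming FR is the acronym your build reports for Fast Ramp:

    disable ns mode FR
    enable ns mode FR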

Edge Configuration

Even though it's enabled by default, the Edge Configuration mode only impacts very specific use cases, notably Link Load Balancing and Cache Redirection. It's called edge mode because in these deployments the NetScaler sits literally at the edge of the network, learning services that are not even part of your infrastructure, purely for the purpose of load balancing them. There are two desired behaviors for such deployments:

  • To be able to increase the number of internal services that are allowed on the NetScaler
  • To turn off binary performance logging for such services, thereby increasing performance and at the same time reducing the impact on log size

Remember though, this applies only when Cache Redirection or Link Load Balancing is in use; it is not system wide.

Using Subnet IP

As we discussed in the IP review section, SNIPs are the recommended way to configure IPs for NetScaler-to-Server conversations. This mode, abbreviated as USNIP, simply enables your SNIPs to be used for that purpose.
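
As a quick sketch of what this looks like in practice (the subnet IP and mask here are placeholders, not values from this book):

    add ns ip 192.168.10.5 255.255.255.0 -type SNIP
    enable ns mode USNIP

With USNIP on, the NetScaler picks a SNIP that can reach the destination Server and uses it as the source IP for the server-side connection.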

The Layer 3 mode

The Layer 3 mode is enabled by default. If you need the NetScaler to forward packets to an IP it doesn't own, you need this mode. When enabled, the NetScaler behaves like a router, using the routes it has learned or been configured with to forward packets. Disabling it means that any packet the NetScaler receives whose destination IP is not one of its own IPs will be dropped. You would turn this off, for example, to prevent backend Servers from talking to each other through the NetScaler, or when you want routing to be handled entirely by a separate device (such as a router or firewall).
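
A minimal sketch of the related commands (the addresses are illustrative):

    show route
    add route 10.20.0.0 255.255.0.0 192.168.10.1
    disable ns mode L3

show route displays the table the NetScaler forwards from while L3 mode is on, add route adds a static entry to it, and disabling L3 turns the box into a pure endpoint that drops anything not addressed to one of its own IPs.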

Path MTU Discovery

Path MTU Discovery, or PMTUD as it's known, is a well-known networking technique that uses ICMP to learn the lowest MTU along the path. This way, fragmentation is avoided, which is always a good thing: fragmentation is inefficient, costs performance at multiple points in the network, and requires reassembly at the endpoints.

Modes that are disabled by default

First, let's get the route advertisement modes out of the way: SRADV, DRADV, IRADV, SRADV6, and DRADV6.

When you enable dynamic routing on the NetScaler, you do so to have it participate in routing and learn routes from the other routers in its neighborhood. You also have the option of having it advertise the routes it knows; that is when you enable the respective advertisement mode.
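
Enabling them follows the same pattern as any other mode, and several can be switched on in a single command. A sketch, assuming the acronyms match what show ns mode reports on your build:

    enable ns mode SRADV DRADV IRADV
    enable ns mode SRADV6 DRADV6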

Next, there are a couple of RISE-related enhancements: RISE_APBR and RISE_RHI. First, let's understand what RISE is. Remote Integrated Services Engine is a Cisco technology that, when the NetScaler is used with the Cisco Nexus series of switches, provides tight integration between the two products; this enables some configuration automation and, in turn, easier management.

Here is a quote from the Cisco Whitepaper. Reference: http://www.cisco.com/c/dam/en/us/products/collateral/switches/nexus-7000-series-switches/white-paper-c11-731370.pdf:

"Each device can retrieve and program the hardware and software tables of the other (for example, the forwarding tables, routing tables, and access control lists [ACLs])."

The two RISE modes represent two of the fundamental use cases of this integration:

  • RISE_APBR (RISE Auto PBR): USIP configurations require special routing handling so that return traffic goes through the NetScaler; otherwise the client, which isn't expecting a response directly from the Server, will drop it. APBR allows the PBRs needed for this to be set up dynamically, which helps with scale.
  • RISE_RHI (RISE Route Health Injection): Route Health Injection allows active-active load balancing of VIPs by injecting routes into the routing table that point to one of several NetScalers as the target for a given VIP. The RISE implication here is that these routes can be added dynamically.
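
Both are switched on like any other mode. A sketch, assuming the mode names above are the ones your firmware exposes (the Nexus side needs its corresponding RISE configuration as well):

    enable ns mode RISE_APBR RISE_RHI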

Let's now get to the modes that we are most concerned with while troubleshooting:

  • Layer 2 Mode: As you can tell from the name, this turns the NetScaler into a switch, forwarding packets that are not destined for one of its own MAC addresses. So yes, this does carry a very real risk of creating a loop if it's enabled without proper evaluation of the network, which is why it's turned off by default. Luckily, most deployments do not require this option (a couple of exceptions are the AppFw transparent mode and the CloudBridge Connector).
  • Bridge BPDU: It's important to first note that the NetScaler doesn't participate in the Spanning Tree Protocol. By default, it drops BPDUs, and this is perfectly okay for most deployments because L2 mode is disabled by default. The best practice, in fact, is to not have STP enabled at all on the switch ports that the NetScaler (with L2 mode off) is plugged into, so that the ports come up instantly without cycling through the intermediate STP states. If, however, you are enabling L2 mode, consider bridging BPDUs so that the switches can detect loops and shut down redundant interfaces if they need to.
  • Use Source IP (USIP): When enabled, the NetScaler preserves the original Client IP when forwarding traffic to the Servers. As simple as that is, there are network implications to consider in order to avoid dropped packets. With USIP enabled, the Server sees the Client's IP address and, unless it is set up to route traffic back through the NetScaler, might attempt to talk directly to the client; this, of course, will be rejected by the client. To get around this, you will need to either set the NetScaler as the default gateway for the Servers, route traffic back to the NetScaler using PBR, set up a non-ARPing loopback address, or alternatively use NAT for the reverse traffic.

    If you are turning on USIP purely for Client IP logging purposes, consider Client IP Insertion or Web logging instead; the latter is designed especially for high-performance logging. Another point to bear in mind when enabling USIP is that it reduces the reusability of connections on the Server side. Why is that? Because when the NetScaler looks for a connection in its reuse pool, it looks for one that matches, among other things, the source IP. By default you get plenty of matches, since the SNIP stays more or less constant; with USIP, the pool gets chopped up into several small pools of connections.

    A common question is: what happens if both USNIP (which we discussed earlier) and USIP are enabled? USIP always overrides USNIP. Also, USIP can be enabled either globally or at the service level, and the service-level setting takes precedence over the global one (a CLI sketch follows this list).

  • Client Keep-Alive: Known as CKA for short, this is an HTTP technique that allows connection multiplexing on the Client side of the connection. When enabled, the NetScaler drops the Connection: close header, which would otherwise have signified the end of the conversation and caused the client to close the connection, and inserts a replacement header of its own: Connection: Keep-Alive. The result is that the client doesn't need to establish new connections for the other requests on the page it's trying to load. The technique, as such, is perfectly valid and most browsers support it; however, you might run into cases where the browser (by behavior) doesn't load the page until it receives the Connection: close header. This once manifested for me as a certain browser not redirecting on an HTTP 302 for 180 seconds! Such situations would require you to leave this mode disabled.
  • TCP Buffering: In a typical deployment, the NetScaler has a much more reliable and faster connection to the Servers than it does to the Clients connecting to it. This can easily mean that the Server builds up a queue of responses it hasn't been able to send out, because the Client doesn't acknowledge data as fast as the Server generates it. This is where TCP buffering comes in. When enabled, the NetScaler queues this data on behalf of the Server, taking the load off it and letting it continue working on data for other clients. The reason this mode is disabled by default is that, along with the (configurable) memory requirement, it has a CPU impact. So, in summary, this is a very helpful feature for Internet-based clients, but proper testing is needed to evaluate the impact for a given traffic profile and volume.
  • MAC Based Forwarding: MBF is a cache-based forwarding technique. It notes the source MAC address of the incoming Client request and uses it as the destination MAC address for the response. For very static and symmetric environments, this can mean hugely increased forwarding performance, as the whole route lookup process is bypassed. If your environment relies on specific routes for the return traffic (think PBR), those get bypassed too, so it needs careful consideration in such environments. There are certain scenarios, such as Firewall or VPN load balancing, where MBF is indispensable because of the way it avoids asymmetric routing.
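
To close the section, here is a sketch of how these last few switches look on the CLI; svc_web is a hypothetical service name, and the acronyms (L2, BridgeBPDUs, USIP, CKA, TCPB, MBF) should be checked against the show ns mode output on your build:

    enable ns mode L2 BridgeBPDUs
    enable ns mode USIP
    set service svc_web -usip YES
    enable ns mode CKA TCPB MBF

The set service line illustrates the precedence point from the USIP discussion: the -usip setting on an individual service overrides whatever the global mode says.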