HTTP compression, like caching, is a best practice that should be implemented in all web application deployments. Even with today's fast links, there are performance benefits to be had: compression can reduce response sizes by upwards of 70 percent. In TCP terms, this size reduction means fewer packets and shorter round-trip times.
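To illustrate the kind of ratio involved, here is a standalone Python sketch (not NetScaler-specific) showing gzip easily clearing 70 percent on repetitive HTML markup:

```python
import gzip

# Repetitive markup, the typical shape of HTML/CSS/JS payloads
html = ("<html><body>"
        + "<p>NetScaler compresses repetitive text very well.</p>" * 200
        + "</body></html>").encode()

compressed = gzip.compress(html)
saving = 1 - len(compressed) / len(html)
print(f"original: {len(html)} bytes, gzipped: {len(compressed)} bytes, "
      f"saved {saving:.0%}")
```

Text-based content with lots of repeated structure compresses far better than the 70 percent figure; already-compressed formats such as JPEG or ZIP gain almost nothing, which is why candidate selection matters.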
Let's start by looking at what kind of content should and shouldn't be compressed:
Go to the Compression section, click on policies, and select the option to show built-in compression policies:
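The same built-in policies can also be listed from the CLI; the commands below are a sketch (policy names and output vary by firmware version, and `ns_cmp_content_type` is one of the built-ins referenced shortly):

```
> show cmp policy
> show cmp policy ns_cmp_content_type
```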
Some of these use custom expressions in the policy (for example, ns_content_type). When you're not sure what an expression is actually looking for, you can navigate to AppExpert | Expressions | Classic:
Compression, while a recommended feature and a great performance optimization, is a hit on the CPU when done at volume. This is one reason to tune your policies by applying the best practices around what is and isn't a good candidate for compression.
The default configuration already bypasses compression if the CPU is at 100 percent:
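As a sketch of the kind of tuning available, the global compression level can also be adjusted from the CLI; the parameter name and values below are from recent firmware, so verify them on your build before relying on them:

```
> set cmp parameter -cmpLevel bestspeed
> show cmp parameter
```

Dropping from the best-compression setting to a faster level trades a few percent of size reduction for a lighter CPU hit at volume.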
Statistics similar to those for caching are available by using Compression Statistics – Detail:
The output will also show errors, should anything go wrong with compression.
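The counters behind this GUI view are also available from the CLI via `stat cmp` (a sketch; exact counter names and the `-detail` flag behavior vary by version):

```
> stat cmp
> stat cmp -detail
```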
In my experience, however, when troubleshooting this feature, the best use of my time has been looking at header traces and the traffic flow, so let's examine those for a better understanding.
Compression only works if the client explicitly tells the NetScaler that it is capable of handling compressed content. It does so using the Accept-Encoding header, which also specifies the types of compression it can work with; gzip and deflate are the common ones you'll find most sites using.
The following header snippet demonstrates what a request will look like:
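For example (an illustrative reconstruction; the host and user agent are placeholder values):

```
GET /index.html HTTP/1.1
Host: www.example.com
User-Agent: Mozilla/5.0
Accept-Encoding: gzip, deflate
```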
The response will look like this:
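Again as an illustrative reconstruction (header values are placeholders; the two headers of interest are Content-Encoding and the jumbled Cteonnt-Length discussed next):

```
HTTP/1.1 200 OK
Content-Type: text/html; charset=utf-8
Content-Encoding: gzip
Cteonnt-Length: 10240
```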
The Content-Encoding: gzip header is how the NetScaler communicates to the client that it is serving compressed content, and that it is compressed using the gzip algorithm. You will also notice that the name of the original Content-Length header is now jumbled (Cteonnt-Length). That's because the NetScaler has to calculate a new Content-Length to account for the smaller size after compression.
While other compression algorithms are available, the recommended one is gzip, since virtually every browser supports it.
Here are some of the things to look at if you suspect compression, which is generally a very safe feature, to be a potential source of the problem:
Disable Appfw and compression where possible as a test to rule these out. If you have the option of testing by recreating the VIP as a TCP VIP instead of HTTP, doing so will help rule out the role of compression in the issue.