Chapter 10. Caching, Proxies and Improved Performance

We have covered a great deal about building the web application you'll need: connecting to data sources, rendering templates, utilizing SSL/TLS, building APIs for single-page applications, and so on.

While the fundamentals are clear, you may find that putting an application built on these guidelines into production quickly leads to problems, particularly under heavy load.

In the last chapter, we implemented some of the best security practices by addressing the most common security issues in web applications. Let's do the same in this chapter by applying best practices to some of the biggest issues of performance and speed.

To do this, we'll look at some of the most common bottlenecks in the pipeline and see how we can reduce their impact to make our application as performant as possible in production.

Specifically, we'll identify those bottlenecks, look at reverse proxies and load balancing, implement caching in our application, utilize HTTP/2, and see how to use managed cloud services to augment our speed initiatives by reducing the number of requests that reach our application.

By this chapter's end, we hope to have produced tools that can help any Go application squeeze every bit of performance out of its environment.

In this chapter, we will cover the following topics:

  • Identifying bottlenecks
  • Implementing reverse proxies
  • Implementing caching strategies
  • Implementing HTTP/2

Identifying bottlenecks

To simplify things a little, there are two types of bottlenecks for your application: those caused by development and programming deficiencies, and those inherent to underlying software or infrastructure limitations.

The answer to the former is simple: identify the poor design and fix it. Patching around bad code can hide security vulnerabilities or delay even bigger performance issues from being discovered in a timely manner.

Sometimes these issues are born from a lack of stress testing; code that is performant locally is not guaranteed to scale without artificial load being applied, and a lack of such testing sometimes leads to surprise downtime in production.
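To illustrate what applying artificial load can look like, here is a minimal sketch of a concurrent load generator in Go. It is not part of our application: the URL, worker count, and per-worker request count are placeholder values you would tune for your own endpoint, and in practice you may prefer a dedicated tool such as ApacheBench (ab) for more thorough reporting.

```go
package main

import (
	"fmt"
	"net/http"
	"sync"
	"time"
)

func main() {
	// Hypothetical endpoint and load figures; adjust for your application.
	const url = "http://localhost:8080/pages"
	const workers = 50
	const requestsPerWorker = 200

	var wg sync.WaitGroup
	var mu sync.Mutex
	var errors int

	start := time.Now()
	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := 0; j < requestsPerWorker; j++ {
				resp, err := http.Get(url)
				if err != nil || resp.StatusCode != http.StatusOK {
					mu.Lock()
					errors++
					mu.Unlock()
				}
				if err == nil {
					resp.Body.Close()
				}
			}
		}()
	}
	wg.Wait()

	total := workers * requestsPerWorker
	elapsed := time.Since(start)
	fmt.Printf("%d requests in %v (%d errors, %.0f req/s)\n",
		total, elapsed, errors, float64(total)/elapsed.Seconds())
}
```

Even a crude script like this will often surface problems, such as database connection exhaustion or ballooning response times, long before real traffic does.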

However, setting aside bad code as a source of issues, let's take a look at some of the other frequent offenders:

  • Disk I/O
  • Database access
  • High memory/CPU usage
  • Lack of concurrency support

There are, of course, hundreds of other potential offenders, such as network issues, garbage collection overhead in some applications, uncompressed payloads/headers, non-database deadlocks, and so on.

High memory and CPU usage is most often the result rather than the cause, but a lot of the other causes are specific to certain languages or environments.

For our application, the database layer could be a weak point. Since we do no caching, every request will hit the database multiple times. ACID-compliant databases (such as MySQL or PostgreSQL) are notorious for struggling under loads that would not be a problem on the same hardware for less strict key/value stores and NoSQL solutions. The cost of database consistency contributes heavily to this, and it's one of the trade-offs of choosing a traditional relational database.
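Before reaching for external infrastructure, even a small in-process cache can cut those repeated database hits dramatically. The following is a minimal sketch of a read-through cache with a TTL; the Page type and the loadPageFromDB loader are hypothetical stand-ins for our application's records and SQL queries, not code from earlier chapters.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// Page is a hypothetical stand-in for a record our handlers read repeatedly.
type Page struct {
	GUID string
	Body string
}

type cacheEntry struct {
	page    Page
	expires time.Time
}

// PageCache is a simple read-through cache with a TTL, guarded by a mutex.
type PageCache struct {
	mu      sync.RWMutex
	ttl     time.Duration
	entries map[string]cacheEntry
}

func NewPageCache(ttl time.Duration) *PageCache {
	return &PageCache{ttl: ttl, entries: make(map[string]cacheEntry)}
}

// Get returns the cached page if it is still fresh; otherwise it calls the
// loader (our database query), stores the result, and returns it.
func (c *PageCache) Get(guid string, load func(string) (Page, error)) (Page, error) {
	c.mu.RLock()
	entry, ok := c.entries[guid]
	c.mu.RUnlock()
	if ok && time.Now().Before(entry.expires) {
		return entry.page, nil
	}

	page, err := load(guid)
	if err != nil {
		return Page{}, err
	}

	c.mu.Lock()
	c.entries[guid] = cacheEntry{page: page, expires: time.Now().Add(c.ttl)}
	c.mu.Unlock()
	return page, nil
}

func main() {
	cache := NewPageCache(30 * time.Second)

	// loadPageFromDB is hypothetical; in our application it would wrap the
	// SQL query that currently runs on every request.
	loadPageFromDB := func(guid string) (Page, error) {
		return Page{GUID: guid, Body: "loaded from the database"}, nil
	}

	page, _ := cache.Get("hello-world", loadPageFromDB)
	fmt.Println(page.Body)
}
```

Passing the loader into Get keeps the cache decoupled from the database layer; stale entries are simply overwritten on the next miss, which keeps the logic short at the cost of serving data that may be up to one TTL old.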
