Chapter 1
What Makes Ruby Code Fast

It’s time to optimize.

This is what I think when my Heroku dyno restarts after logging an “R14 - Memory quota exceeded” message. Or when New Relic sends me another batch of Apdex score alerts. Or when simple pages take forever to load and the customer complains that my application is too slow. I’m sure you’ve had your own “time to optimize” moments. And every time these moments occur, we both ask exactly the same question: “What can I do to make the code faster?”

In my career as a Ruby programmer I have learned that the immediate answer to this question is often “I don’t know.” I’ll bet that’s your experience, too. After all, you thought you were writing efficient code. What we typically do then is to skip optimization altogether and resort to caching and scaling. Why? Because we don’t immediately see how to improve the code. Because conventional wisdom says optimization is hard. And because caching and scaling are familiar to seasoned Ruby developers. In most cases you only need to configure some external tool and make minimal changes to the code, and voilà! Your code is faster.

But there is a limit to what caching and scaling can do for you. One day my company discovered that Hirefire, the automated scaling solution for Heroku, scaled up the number of Heroku web dynos to 36 just to serve a meager five requests per minute. We would have to pay $3,108 per month for that. And our usual bill before was $228 for two web dynos and one worker. Whoa, why did we have to pay almost fourteen times more? It turned out there were two reasons for that. First, web traffic increased. Second, our recent changes in the code made the application three times slower. And our traffic kept increasing, which meant that we’d have to pay even more. Obviously, we needed a different approach. This was a case where we hit a limit to scaling and had to optimize.

It is also easy to hit a limit with caching. You can tell that you need to stop caching when your cache key gets more and more granular.

Let me show you what I mean with a code snippet from a Rails application of mine:

	cache_key = [@org.id, @user.id,
	             current_project, current_sprint, current_date,
	             @user_filter, @status_filter,
	             @priority_filter, @severity_filter, @level_filter]

	cache(cache_key.join("_")) do
	  render partial: 'list'
	end

Here my cache key consists of ten parts. You can probably guess that the likelihood of hitting such a granular cache is very low. This is exactly what happened in reality. At some point my application started to spend more resources (either memory for Memcached or disk space) for caching than for rendering. Here’s a case where further caching would not increase performance and I again had to optimize.
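To see why such a composite key rarely hits, consider the combinatorics: the number of distinct cache keys is the product of the cardinalities of the key’s parts, so every part you add multiplies the key space. Here is a back-of-the-envelope sketch; the per-part counts below are invented purely for illustration, not taken from any real application:

```ruby
# Hypothetical cardinality (number of distinct values) for each
# part of the ten-part cache key. These numbers are made up.
cardinalities = {
  org: 50, user: 1_000, project: 20, sprint: 10, date: 30,
  user_filter: 5, status_filter: 4, priority_filter: 4,
  severity_filter: 4, level_filter: 3
}

# Each part multiplies the number of possible keys.
possible_keys = cardinalities.values.reduce(:*)
puts possible_keys  # => 288_000_000_000
```

Even with these modest counts, there are hundreds of billions of possible keys, so almost every request generates a fresh cache entry instead of reusing an old one.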

So have I convinced you of the need to optimize? Then let’s learn how.

Here’s when most sources on performance optimization start talking about execution time, profilers, and measurements. The hard stuff. We’ll do our own share of profiling and measuring, but let’s first step back and think about what exactly we need to optimize. Once we understand what makes Ruby slow, optimization stops being a search for a needle in a haystack with the profiler. Instead it can become almost a pleasing task where you attack a specific problem and get a significant performance improvement as the reward.
