Chapter 8
Test Performance

As experienced software developers, we know that testing is the best way to ensure that our code works as advertised. When you first write the code, a test proves that it does what you think it does. When you fix a bug, the test prevents it from happening again.

I’m a big fan of applying the same approach for performance. What if we write tests that first set the expected level of performance, and then make sure that performance doesn’t degrade below this level? Sounds like a good idea, right?

I learned about this concept while working on Acunote. We started Acunote when Rails was at version 1.0, so performance was a huge concern. Performance testing helped us not only to understand and improve application performance, but also to survive numerous Rails upgrades. It turned out that even a minor version upgrade could introduce a performance regression in some unexpected way. We wouldn’t have been able to detect and fix these regressions without the performance tests.

So let me show you how we did performance testing in Acunote and how you can do it too.

A unit test for a function might look something like this:

 
def test_do_something
  assert_equal 4, do_something(2, 2)
end

This test in fact performs three separate steps: evaluation, assertion, and error reporting. Testing frameworks abstract these three steps, so we end up writing just one line of code to do all three.

To evaluate, our example test runs the do_something function to get its return value. The assertion compares this actual value with the expected value. If the assertion fails, the test reports the error.

A performance test should perform these same three steps, but each one of them will be slightly different. Say we want to write the performance test for this same do_something function. The test will look like this:

 
def test_something_performance
  actual_performance = performance_benchmark do
    do_something(2, 2)
  end
  assert_performance actual_performance
end

The evaluation step is simply benchmarking. The actual value for assert_performance is the current set of performance measurements.
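To make this concrete, here’s a minimal sketch of what performance_benchmark could look like. The method name comes from the test above, but the number of rounds, the Benchmark.realtime call, and the GC.start call are my assumptions for illustration; we’ll get to the real implementation in the details below.

require 'benchmark'

# A sketch: run the block several times and return the raw
# measurements, so the assertion can do statistics on them.
def performance_benchmark(rounds = 30)
  measurements = []
  rounds.times do
    GC.start # reduce garbage collection noise between rounds
    measurements << Benchmark.realtime { yield }
  end
  measurements
end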

Ah, but what is our expected level of performance? We said that our performance test should ensure that performance doesn’t degrade below an expected level. A reasonable answer is that our assert_performance should make sure that performance is the same as or better than before. So the test should somehow know the performance measurements from the previous test run. Those measurements make up the expected value that we’ll compare to. What if there are no previous results? Then the only thing the test should do is store the results for future comparison.

We already know how to compare performance measurements from the previous chapter. So the remaining thing to figure out is how and where to store the previous test results. This is something regular tests don’t do.
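Here’s one way assert_performance could put all of this together. Treat it as a sketch under assumptions: I use Minitest, keep the previous run’s measurements in a YAML file named performance.yml, and compare bare averages. The real assertion should compare the measurements statistically, as described in the previous chapter.

require 'minitest/autorun'
require 'yaml'

# A sketch, meant to be mixed into a Minitest::Test subclass.
module PerformanceAssertions
  STORE = 'performance.yml' # assumed location for previous results

  def assert_performance(actual, name: self.name)
    store = File.exist?(STORE) ? YAML.load_file(STORE) : {}
    expected = store[name]
    if expected
      # Naive comparison on bare averages; the real assertion should
      # compare the measurements statistically, as in the previous chapter.
      assert average(actual) <= average(expected),
             "performance regression in #{name}"
    else
      # First run: nothing to compare against, so record the results.
      store[name] = actual
      File.write(STORE, store.to_yaml)
    end
  end

  private

  def average(measurements)
    measurements.sum / measurements.size
  end
end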

Should the test find a slowdown, we want to see the performance before and after, and their difference. As we know from the previous chapter, all before and after numbers should come with their deviations, and the difference should come with its confidence interval. This means the reporting step in performance tests is also very different from what you usually see in tests.
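For illustration, here’s roughly what that report might compute. The helper names are mine, and the interval is the standard approximation for the difference of two independent means; take it as a sketch of the shape of the report, not the final code.

# A sketch of the reporting math: each side as mean ± standard
# deviation, and the difference with an approximate 95% confidence
# interval (two standard errors of the difference of the means).
def report_slowdown(before, after)
  diff = mean(after) - mean(before)
  error = Math.sqrt(stddev(before)**2 / before.size +
                    stddev(after)**2 / after.size)
  puts format('before: %.5f ± %.5f', mean(before), stddev(before))
  puts format('after:  %.5f ± %.5f', mean(after), stddev(after))
  puts format('change: %.5f ± %.5f', diff, 2 * error)
end

def mean(measurements)
  measurements.sum / measurements.size
end

def stddev(measurements)
  m = mean(measurements)
  Math.sqrt(measurements.sum { |x| (x - m)**2 } / (measurements.size - 1))
end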

OK, that’s the big picture of performance testing. Now, the details.
