Chapter 15. Performance Testing with JUnitPerf

Introducing JUnitPerf

JUnitPerf is an extension of JUnit 3 (see JUnit 3.8 and JUnit 4) that adds the ability to time test cases and to do simple load and performance testing. Its features are similar to some of the TestNG annotations (see Parallel Testing), but JUnitPerf uses a very different approach, integrating smoothly into the JUnit 3 architecture.

Using an elegant system of JUnit decorators, JUnitPerf lets you use existing unit tests to build simple but effective load and performance tests. First, you write your usual unit test cases to verify that the code is executing correctly and is behaving as expected. Then you can encapsulate some of your key unit tests using JUnitPerf decorators, effectively creating a suite of performance tests without having to modify your original unit tests. This also makes it easy to separate your performance tests from your ordinary unit tests, because you usually don’t run them at the same time.

Incorporating some basic performance testing into your unit tests makes good sense. Without going overboard, it is a good way to detect major performance anomalies early on in the project. You can also configure these tests to run separately from your ordinary unit tests, in order to avoid penalizing the fast feedback cycle that is one of the trademarks of good unit tests.

Note that we are talking about verifying performance, not optimizing code in an uncontrolled manner. Premature optimization has often been decried, and with some justification. Tony Hoare is frequently quoted as saying, “We should forget about small efficiencies, say about 97 percent of the time: premature optimization is the root of all evil.” Optimization should indeed be a very focused task, with precise goals. It is a futile exercise to optimize code that is hardly ever used. However, making sure your application performs correctly where it needs to, right from the start, can be a huge time-saver. And, by formalizing the process and incorporating the tests into the application test suites, verifying your application’s performance with JUnitPerf can help to encourage a more systematic approach to optimization. Rather than optimizing for the sake of optimizing, you measure the performance you have and check it against what you need. Only then do you decide if optimization is necessary.

Measuring Performance with TimedTests

The most basic performance test is to verify how long a test case takes to execute. JUnitPerf provides a simple JUnit decorator class called TimedTest, which lets you check that a unit test does not take more than a certain time to run. In this section, we will look at how to use this decorator. At the same time, we will go through many basic notions about JUnitPerf.

In this chapter, we are going to write some performance tests for a simple web application that manages a database of model planes. On the web application’s home page, users can consult the list of known plane types, select a plane type, and then view the corresponding model planes. According to the performance requirements, this home page is expected to be heavily used and needs to support a high load. More precisely, the specifications stipulate the following: “The home page must be displayed in less than 2 seconds (not counting network traffic) in the presence of 10 simultaneous users.”

Note that we have numbers here. Performance requirements without numbers are pretty much useless. The aim of performance tests is to verify that your code will provide acceptable performance. If it does, there is no need to look further. If it doesn’t, it is better to know about it sooner rather than later!

As it turns out, the main query in this page is the one that displays the list of plane types. In the application, plane types are represented by the PlaneType class. The DAO (Data Access Object) class for plane types implements the following interface:

public interface PlaneTypeDAO {
    PlaneType findById(long id);
    List<PlaneType> findAll();
    void save(PlaneType planeType);
    void delete(PlaneType planeType);
}

To list all available plane types, we need to invoke the findAll() method. The unit test class for this DAO is shown here:

public class PlaneTypeDaoTests extends TestCase {

    private PlaneTypeDAO dao;

    public PlaneTypeDaoTests(String value) {
        super(value);
    }

    public void setUp() throws SQLException {
        ApplicationContext ctx = SpringUtilsTestConfig.getApplicationContext();
        dao = (PlaneTypeDAO) ctx.getBean("planeTypeDAO");    
    }
    
    public void testFindAll() {
        List<PlaneType> planes = dao.findAll();
        assertTrue(planes.size() > 0);
        ...
    }   
    ... 
}

The testFindAll() unit test simply invokes the findAll() method, and checks that the results list is not empty. The setUp() method, executed before each test, obtains a DAO object using a Spring application context. Behind the scenes, this instantiates the DAO, along with the appropriate JDBC data source and Hibernate session, using an embedded Java database. It also populates the test database with test data.
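
For reference, here is a minimal sketch of what a helper like SpringUtilsTestConfig might look like. The chapter does not show the real class, so the configuration file name and the caching strategy here are assumptions: the idea is simply to build the Spring application context lazily from a test configuration and cache it, so the expensive bootstrap happens only once per test run.

import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

public final class SpringUtilsTestConfig {

    private static ApplicationContext context;

    private SpringUtilsTestConfig() {}

    public static synchronized ApplicationContext getApplicationContext() {
        if (context == null) {
            // Hypothetical configuration file: it would declare the embedded
            // data source, the Hibernate session factory, the planeTypeDAO
            // bean, and whatever populates the test database.
            context = new ClassPathXmlApplicationContext("test-application-context.xml");
        }
        return context;
    }
}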

Once we are sure that this test runs correctly, we can make sure it runs efficiently. Performance issues can come from many sources: is only the minimum data loaded, or are unnecessary associated objects loaded as well? Is the database correctly indexed?

The first thing to do is to create a test case containing the unit test we want to use, as shown here:

TestCase testCase = new PlaneTypeDaoTests("testFindAll"); 

Next, we create a TimedTest, specifying this test case along with the maximum allowable execution time in milliseconds. In the following example, the TimedTest waits until the test case finishes, and then fails if the findAll() method took more than 100 milliseconds to execute:

TimedTest timedTest = new TimedTest(testCase, 100);

If this test fails, the exception will indicate the maximum allowable time and the actual time that the method took to run:

junit.framework.AssertionFailedError: Maximum elapsed time exceeded! Expected 100ms, 
but was 281ms.
...

The advantage of this approach is that if the test fails, you know by how much. Did it just scrape in over the limit, or did it take an order of magnitude longer to finish? This sort of information can be vital to know if you need to investigate further or just adjust your threshold values.

Alternatively, you may prefer the test to fail immediately as soon as the time limit expires. This helps to keep the length of your tests within reasonable limits: if your tests take too long to run, you will have a natural tendency to run them less often. To do this, you simply build the test as follows:

TimedTest timedTest = new TimedTest(testCase, 100, false);

This approach of encapsulating test methods is known as the decorator pattern. It is a very flexible way of extending existing unit tests. And it makes it very easy to reuse existing test cases. However, it means that you can’t just write a JUnitPerf test case as you would an ordinary test case. You need to implement the suite() method, and create a TestSuite that contains your decorated unit tests. Tests defined in this method can be easily run in IDEs such as Eclipse and NetBeans, and can also be run automatically using Maven or Ant. The final test case looks like this:

import junit.framework.Test;
import junit.framework.TestCase;
import junit.framework.TestSuite;

import com.clarkware.junitperf.TimedTest;

public class PlaneTypeDaoPerfTests extends TestCase {

    public static Test suite() {        
         TestSuite suite = new TestSuite();
         TestCase testCase = new PlaneTypeDaoTests("testFindAll"); 
         suite.addTest(testCase);

         TimedTest timedTest = new TimedTest(testCase, 500);
         suite.addTest(timedTest);         
         return suite;
    }
}

There’s a trick here: we actually add the plain (no pun intended!) test case before adding the decorated one. This is simply to ensure that the timed test does not measure one-off initialization tasks such as setting up the test database, initializing Spring and Hibernate, and so on. This is important to keep in mind, as you don’t want initialization tasks to pollute your timing data. It is also useful to know that the time measured in a TimedTest test case includes the time spent in the setUp() and tearDown() methods. So, if these methods contain essential but time-consuming code that you don’t want to measure, be sure to take it into account when you set your time limits, or move it out of setUp() and tearDown() altogether.
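
If you prefer not to rely on a warm-up test, another option (a sketch, not taken from the chapter, and assuming the helper caches its application context as in the earlier sketch) is to use JUnit 3’s junit.extensions.TestSetup decorator to perform the one-off initialization before the whole suite runs, outside anything the TimedTest measures:

import junit.extensions.TestSetup;
import junit.framework.Test;
import junit.framework.TestCase;
import junit.framework.TestSuite;

import com.clarkware.junitperf.TimedTest;

public class PlaneTypeDaoPerfTests extends TestCase {

    public static Test suite() {
        TestSuite suite = new TestSuite();
        TestCase testCase = new PlaneTypeDaoTests("testFindAll");
        suite.addTest(new TimedTest(testCase, 500));

        // Run the expensive bootstrap once, before any timed test starts its clock.
        return new TestSetup(suite) {
            protected void setUp() throws Exception {
                // Assumes the helper caches its context (see the sketch earlier).
                SpringUtilsTestConfig.getApplicationContext();
            }
        };
    }
}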

Simulating Load with LoadTests

You can also do simple load tests with JUnitPerf. Load tests aim to verify that an application will cope with multiple simultaneous users and still provide acceptable response times. A load test involves simulating a number of simultaneous users by running a series of unit tests on different threads. This is also useful for checking that your code is thread-safe, which is an important aspect of web application development.

The following code will create a test with five simultaneous users:

TestCase testCase = new PlaneTypeDaoTests("testFindAll"); 
LoadTest loadTest = new LoadTest(testCase, 5);

This will start all five threads simultaneously. This is not always what you need for proper load testing. For more realistic testing, the tests on each thread should be spread out over time, and not all happen at exactly the same time. You can obtain a more even distribution by providing a Timer object. The following example will create 5 threads, 1 every 100 milliseconds:

TestCase testCase = new PlaneTypeDaoTests("testFindAll"); 
Timer timer = new ConstantTimer(100);
LoadTest loadTest = new LoadTest(testCase, 5, timer);

Alternatively, you can use the RandomTimer to create the new threads at random intervals.

The above code will start several simultaneous threads and run the test case once in each thread. If you want to do serious load testing, or even just make sure that your application performs correctly in a multiuser environment, you really need to run the test case several times in each thread. To run a test case repeatedly, you can use the RepeatedTest wrapper. In the following example, we create five threads, starting each one after a random delay, and run the findAll() test case 10 times in each thread:

TestCase testCase = new PlaneTypeDaoTests("testFindAll"); 
Timer timer = new RandomTimer(100,500);
RepeatedTest repeatedTest = new RepeatedTest(testCase, 10);
LoadTest loadTest = new LoadTest(repeatedTest, 5, timer);

This sort of test will check that your code works well in a multithreaded environment. However, you may also want to test the performance of the application under pressure. You can do this very nicely with JUnitPerf. In the following example, we check that 50 transactions (10 repetitions on each of 5 threads) can be executed within 25 seconds, which works out to an average of half a second per transaction:

public class PlaneTypeDaoPerfTests extends TestCase {
  
    public static Test suite() {        
         TestSuite suite = new TestSuite();
         TestCase testCase = new PlaneTypeDaoTests("testFindAll"); 
         suite.addTest(testCase);
         Timer timer = new RandomTimer(100,1000);
         RepeatedTest repeatedTest = new RepeatedTest(testCase, 10);
         LoadTest loadTest = new LoadTest(repeatedTest, 5, timer);
         TimedTest timedTest = new TimedTest(loadTest, 25000);
         suite.addTest(timedTest);   
         return suite;
    }
}

Now, these numbers are not plucked out of the blue. Our original requirements specified that the application home page, which contains the list of all plane types, must be displayed in less than 2 seconds (not counting network traffic) 95 percent of the time, in the presence of 10 simultaneous users. If your main domain-layer query takes no more than half a second, you should have no trouble displaying the page in less than two seconds. The exact numbers in this example don’t matter. The point is that you need to know what sort of performance you will require from your application before you can start to do sensible load tests, even at a unit-testing level. Note that this is not a reason to avoid performance testing at an early stage: it is more of a reason to make sure you know what the performance requirements are before you start testing.

This code will check that the average transaction time is less than 500 milliseconds. Suppose that the client also stipulated the following (much) more demanding requirement: “No transaction must take more than one second with 10 simultaneous users.”

This may be hard to guarantee from a coding perspective, but at least it’s easy to test. The JUnitPerf decorator-based approach is extremely flexible, and you can arrange the test wrappers in any way that suits you. To guarantee that no transaction takes over one second, you simply place a timed test first, directly encapsulating the unit test case. This way, the timed test will be executed every time the unit test is run, and not just at the end of the series as in the previous example. The following example will run the test 10 times in each of 5 parallel threads, and fail if any single execution takes more than 1 second:

         TimedTest timedTest = new TimedTest(testCase, 1000);
         RepeatedTest repeatedTest = new RepeatedTest(timedTest, 10);
         Timer timer = new RandomTimer(100,1000);
         LoadTest loadTest = new LoadTest(repeatedTest, 5, timer);
         suite.addTest(loadTest);   

Once you have this sort of test in place, you have a better idea of how your application stands up under pressure. If your tests are successful, that’s great: there is no further work to do. If not, you may need to tune your application, or in the worst case rethink your architecture, to get to the required performance level. To optimize your code, it is a good idea to use profiling tools such as JConsole (Chapter 18) or the TPTP profiling tools under Eclipse (Chapter 19) to identify where the tests are spending the most time.

One other important thing to remember about these numbers is not to expect too much of them: they are just estimates. Exact timings will vary from machine to machine, and will even vary on the same machine, and you need to take this into account to ensure that your tests are portable. As a rule, you should worry about orders of magnitude, not small percentages. If a test takes 5 or 10 times longer than you expect, you have a serious problem. If it is a matter of 5 or 10 percent, it’s probably not worth spending too much time on.

JUnitPerf does not claim to be a fully fledged load-testing framework. (Tools such as JMeter are better adapted for that.) What it does do is let you start to verify your application’s performance at an early stage, which can avoid costly debugging sessions or architectural changes later on in the project.

Note that JUnit 4 now provides a somewhat similar feature to the JUnitPerf TimedTest class, allowing you to define timeouts for your tests (see Simple Performance Testing Using Timeouts). TestNG goes further, allowing you to define timeouts and run your tests in parallel (see Parallel Testing).
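
For comparison, a JUnit 4 timeout looks like the following sketch. The class and method names here are hypothetical, and the test body simply reuses the Spring lookup shown in the earlier setUp() method; the test fails automatically if it runs for longer than the given number of milliseconds.

import static org.junit.Assert.assertFalse;

import java.util.List;

import org.junit.Test;
import org.springframework.context.ApplicationContext;

public class PlaneTypeDaoTimeoutTest {

    @Test(timeout = 500)
    public void findAllShouldRunInUnderHalfASecond() {
        // Same DAO lookup as in the JUnit 3 setUp() method shown earlier.
        ApplicationContext ctx = SpringUtilsTestConfig.getApplicationContext();
        PlaneTypeDAO dao = (PlaneTypeDAO) ctx.getBean("planeTypeDAO");
        List<PlaneType> planes = dao.findAll();
        assertFalse(planes.isEmpty());
    }
}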

Load-Testing Tests That Are Not Thread-Safe

Some test cases are atomic in nature: they run no risk of behaving strangely by modifying shared data in unpredictable ways. These tests can be safely invoked from several threads simultaneously. Other test classes set up a test environment using member variables to simulate session state; the test cases manipulate these variables and will misbehave if other threads touch them at the same time. These tests are not thread-safe.

If you need to load-test a test class that uses member variables in this way, or that is not thread-safe for some other reason, JUnitPerf has a solution. The TestFactory and TestMethodFactory classes let you create a factory object that will generate instances of your test class, one instance per thread. In the following example, we create a LoadTest that will start up 10 threads. We provide a factory that it will use to generate separate instances of the PlaneTypeDaoTests class, so the 10 test cases can run safely in parallel, with no risk of interference:

         Test factory = new TestMethodFactory(PlaneTypeDaoTests.class, "testFindAll");
         LoadTest loadTest = new LoadTest(factory, 10);   
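
As a sketch, reusing the suite structure from earlier in the chapter (and assuming TestMethodFactory is imported from the com.clarkware.junitperf package alongside LoadTest), the factory simply takes the place of the plain test case:

public static Test suite() {
    TestSuite suite = new TestSuite();
    // Each of the 10 threads gets its own PlaneTypeDaoTests instance from
    // the factory, so instance fields are never shared between threads.
    Test factory = new TestMethodFactory(PlaneTypeDaoTests.class, "testFindAll");
    suite.addTest(new LoadTest(factory, 10));
    return suite;
}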

Separating Performance Tests from Unit Tests in Ant

You typically run performance tests and ordinary unit tests at different points in the development lifecycle. Unit tests are (or should be) run very regularly. They need to be quick and snappy, and should avoid getting in the developer’s way by taking too long to run. Performance tests are different. You run them less frequently, and they may take longer to run. It is a good idea to set up your build process to run them separately. One simple approach is to use a special naming convention to distinguish them. There is nothing very complicated about this. In the following extract from an Ant build file, for example, performance tests are identified by the suffix “PerfTests”:

    <target name="test" depends="compiletests">
        <junit printsummary="yes" haltonfailure="yes">
          <classpath>
              <path refid="test.classpath" />
              <pathelement location="${test.classes}"/>
          </classpath>
    
          <formatter type="plain"/>
          <formatter type="xml"/>
         
          <batchtest fork="yes" todir="${test.reports}">
            <fileset dir="${test.src}">
                <exclude name="**/*PerfTests.java"/>
            </fileset>
          </batchtest>
        </junit>
    </target>

    <target name="perftests" depends="tests, compiletests">
        <junit printsummary="yes" haltonfailure="yes">
          <classpath>
              <path refid="test.classpath" />
              <pathelement location="${test.classes}"/>
          </classpath>
    
          <formatter type="plain"/>
          <formatter type="xml"/>
         
          <batchtest fork="yes" todir="${test.reports}">
            <fileset dir="${test.src}">
                <include name="**/*PerfTests.java"/>
            </fileset>
          </batchtest>
        </junit>
    </target>

Now you can run your normal unit tests using the “test” target, and your performance tests by calling the “perftests” target.
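
For example, from the command line (assuming the build file is in the current directory):

$ ant test
$ ant perftests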

Separating Performance Tests from Unit Tests in Maven

In Maven, the logical place to put performance tests is probably in the integration test phase. The following extract from a Maven POM file separates unit tests from performance tests using the same naming convention as above: performance tests have the suffix “PerfTests.” With the following configuration, JUnitPerf performance tests will only be run during the integration test phase:

<project>
  ...
  <build>
    <plugins>
      <plugin>
        <artifactId>maven-surefire-plugin</artifactId>
        <executions>
          <execution>
            <id>unit-tests</id>
            <phase>test</phase>
            <goals>
              <goal>test</goal>
            </goals>
            <configuration>
              <excludes><exclude>**/*PerfTests.java</exclude></excludes>
            </configuration>
          </execution>
          <execution>
            <id>integration-tests</id>
            <phase>integration-test</phase>
            <goals>
              <goal>test</goal>
            </goals>
            <configuration>
              <includes><include>**/*PerfTests.java</include></includes>
            </configuration>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
  ...
</project>

Then, to run your performance tests, just invoke the integration test phase:

$ mvn integration-test
