Chapter 15. Efficient Quality Assurance

Properly testing a system requires a sizeable effort, with typically a quarter of the project schedule being set aside for the testing process [Brooks, 1995]. Consequently, any method that facilitates the various testing activities directly impacts the project timeframe, helping to speed up the delivery of the software. If you can reduce the time to undertake all aspects of the testing process, you’ll achieve the RAD objective of delivering the application to the customer in a shorter timeframe.

Despite the advantages test-driven development provides, it is not a replacement for a rigorous quality assurance (QA) process. This still has to be undertaken, as a test strategy based purely on unit testing falls well short of the goal of delivering a comprehensively tested solution.

The primary focus of this chapter is on functional and system-load testing. Predominantly, we examine how test automation tools can assist in reducing testing times and improving test accuracy. Two open source tools are introduced: HttpUnit for creating test scripts for the functional testing of a Web application, and JMeter for conducting load and stress testing.

Quality Assurance

Quality assurance involves the auditing, monitoring, and management of all aspects of software quality throughout the entire project lifecycle. A rigorous QA process demands a range of testing types to ensure an acceptable level of quality for the system delivered to the customer.

Here are the different types of test an enterprise system commonly undergoes before being released into a production environment.

  • Unit tests.

    The objective of unit testing is to validate a component’s conformance to the design. These tests form the backbone of a test-driven approach and are the responsibility of the developer. They are fine-grained tests and typically exercise the functionality of a single class or component via its public and package-level methods and data.

    Unit tests can be a combination of black-box and white-box testing methods. Under a black-box test, the unit test confirms the class under test meets its specified requirements. A white-box test looks at the internals of the class to verify how the requirements are met.

    Note

    Unit testing is covered in Chapter 14.

  • Integration tests.

    Unlike unit tests, which focus on a single class or component, integration tests take a wider view, operating at a higher level. They confirm that components are able to collaborate in order to deliver the required functionality.

  • Functional tests.

    Functional tests verify the system’s conformance to the end-user requirements. They usually align with individual use cases, which themselves provide a starting point for a test case. Larger projects tend to charge a dedicated QA team with the responsibility of producing and executing all functional tests.

  • Load tests.

    Load, or performance, tests target the nonfunctional requirements and confirm the ability of the system to meet the specified performance criteria under a given load.

  • Stress tests.

    Stress testing places the system under a load that exceeds its designed operational capacity. The purpose of the stress test is to observe the system’s failsafe behavior under excessive load.

  • System tests.

    A system test is carried out on a completely integrated system and looks to prove that the system meets all requirements, both functional, as defined in the use cases, and nonfunctional, such as meeting specific performance criteria.

  • Regression tests.

    This is a combination of the different testing types that together measure the impact of a modification upon a system. Full regression testing can be a lengthy process, involving the rerunning of all unit, integration, functional, and system tests. Depending on the nature and extent of the change, a partial regression test may be preferable.

  • Acceptance tests.

    An acceptance test confirms the system meets the acceptance criteria agreed with the stakeholders. It is common for the customers of a system to undertake this type of testing with their own QA team, as acceptance testing is closely linked to the terms of the contract under which the system was developed.

The focus of this discussion is on functional, load, and stress testing. Performing these tests correctly requires the establishment of a suitable QA environment. The next section examines the different environments necessary for developing and testing enterprise software.

The Project Environment

Enterprise projects commonly require three distinct working environments: development, testing, and production.

Figure 15-1 illustrates the three environments.

Environment setup.

Figure 15-1. Environment setup.

In the development environment, software engineers write, unit-test, and integrate all source code for the system. This environment is likely to comprise developer workstations with additional machines for performing integration builds and housing source-control repositories.

The testing environment is where all formal testing takes place against regular, versioned releases of the system from the development environment. For tests to be meaningful, the test environment must closely resemble the production environment.

Note

The testing environment is sometimes called the staging area or preproduction environment.

Production is the target environment for the system, and the testing effort must validate the system is capable of meeting all functional and nonfunctional requirements when operating in this final environment. Testing is unlikely to be possible in production because live systems are involved, hence the need for the testing environment to closely mirror that of production.

The Testing Process

Testing as part of an iterative development methodology such as Extreme Programming (XP) or the IBM Rational Unified Process (RUP) is an ongoing process conducted throughout the entire software development lifecycle.

Note

Chapter 3 covers iterative development processes, including XP and the RUP.

An iterative development process has a running version of the system available from the early stages of the project. This allows the project team to continually submit the system to a barrage of functional and nonfunctional tests. This constant and ongoing testing effort is a key strategy in reducing risk by confirming the system under development is able to meet customer requirements and design goals.

Formal testing should commence toward the end of each iteration. The process starts by delivering a versioned release into the test environment.

Note

It is important the testing environment is reproducible between tests so the environment’s hardware, configuration, and test data remains constant for each test cycle. This is essential for meaningful comparisons of test results between releases.

Each release is subjected to a full range of functional and nonfunctional tests, and any arising defects and issues are logged for the attention of the development team. The release and testing process continues until the system is of an acceptable standard. The final iterations of a project focus intensely on the testing effort and look to deliver the finished system into production.

It is common for systems with a high defect rate to spend a substantial amount of time bouncing between the development and testing environments. A long and involved testing process causes lengthy delays in getting the application into production.

Testing for RAD Projects

Supporting rapid application development requires meeting two objectives:

  • Improving the quality of software releases in order to reduce the number of testing cycles

  • Reducing the time taken to complete the testing process

A test-driven approach to development results in higher quality software with fewer defects. This point has critical implications for rapid development, as the number of defects found during testing directly relates to the project’s duration. Put bluntly, the more defects discovered, the longer the system takes to deliver to the customer. Moreover, a problem detected during the formal testing process is more expensive to rectify than if discovered during development. Rigorous testing during development pays dividends by enabling the detection and correction of defects close to their point of origin.

Reducing the time taken to execute all of a system’s functional and nonfunctional test cases is achievable with test automation. This approach speeds up the testing process without compromising the quality of the tests performed. The benefits of automated testing for rapid development make this the main topic for this chapter.

Automated Testing

According to Robert Binder, author of the definitive testing guide Testing Object-Oriented Systems, for tests to be effective and repeatable, they must be automated [Binder, 1999]. This is an added bonus: by adopting test automation techniques, you not only help to reduce the project timeframe but also apply best-practice testing methods.

The arguments for writing automated tests have close parallels with the arguments used to justify the creation of code generators during development:

Note

Code generators are introduced in Chapter 6.

  • Accuracy and repeatability.

    Test scripts are usually executed more than once during the course of a project. Even the most careful developers fail to prevent every defect from finding a way into the system. Although a single test cycle must be the goal, several test cycles are often required.

    Accurate system testing relies on tests being repeatable between cycles. Repeatability is easier to achieve with test automation. By comparison, manual testing is labor-intensive and subject to both human error and oversight. A test strategy based completely on manual testing procedures carries the risk that errors introduced between testing cycles will escape detection. An automated test that is 100 percent repeatable avoids this danger.

  • Reduced timeframes.

    The time available to fully test a complex enterprise system, with numerous interoperating subsystems and countless points of integration between collaborating components, may be such that test automation is the only plausible option for realizing an effective test strategy in a timeframe that is acceptable to the customer.

  • Improved test effectiveness.

    Certain types of testing are difficult to achieve without the use of automated tools. Load and stress testing fall firmly within this category. Here, sophisticated tools are required to reproduce the level of load generated by a large number of users all accessing the system concurrently. Unless the project’s budget runs to hiring potentially thousands of testers, the use of these tools is a necessity.

  • Removal of mundane tasks.

    Testers, like developers, get bored if they are required to perform the same type of tests repeatedly. As with the use of code generators, automated test scripts remove the drudgery from the testing effort, leaving the QA specialist free to focus on other aspects of the system.

Note

The adoption of test automation does not completely remove the need for manual testing processes. Expert testers can, and should, continue to perform invasive manual tests in an effort to break the system. An effective test strategy is a combination of automated and manual tests that together comprehensively test a system prior to its delivery to the customer.

A question that often arises in relation to test automation is whether the time taken to write the automated test script is justifiable in terms of the effort saved in running an automated test. Despite the benefits, test automation does consume effort and hence incurs a cost to the project. Nevertheless, these costs are usually recoverable within two or three test cycles, especially when including the added benefits of improved test accuracy and consequent increase in software quality. Furthermore, the same test scripts are reusable in the long term, as the system enters a maintenance phase.

Tip

The creation of automated test scripts can start in the early stages of a project iteration by using a prototype as the starting point.

The J2EE Testing Challenge

An enterprise-level distributed J2EE application presents some unique challenges to the tester, and regardless of whether manual or automated tests are used, the testing of a distributed application is an involved and complex process.

First, an application built on the J2EE platform employs a variety of Java-based technologies for the development of system components, which are distributed over physically separate tiers. Thus, the tester immediately has a multimachine environment to contend with, plus the headaches a network brings in the form of firewalls and corporate security policies.

Note

Chapter 4 examines some possible software architectures for creating J2EE applications that do not depend upon the remote method invocation services of Enterprise JavaBeans.

In addition to the distributed environment, by leveraging the full capabilities of the J2EE platform, the architecture of a J2EE enterprise application could potentially involve asynchronous messaging between systems, long-running business transactions spanning heterogeneous systems, and critical business functionality exposed to other systems as part of a wider service-oriented architecture.

Besides the sophisticated architectures made possible by the J2EE platform, enterprise systems present their own special challenges to the tester, with system operational attributes such as security, performance, scalability, and robustness requiring intensive test coverage. The sensitivities revolving around security alone often demand the skills of specialized QA experts.

Collectively, these points make the task of testing an enterprise system as complex for the tester as the task of development is for the software engineer.

Test Automation Tools

Fortunately, test automation tools are available to help address the challenges presented in thoroughly testing J2EE solutions. These tools aim to both simplify the testing process and improve the quality of the tests conducted.

Tools that support the testing process fall into the following general categories:

  • Test coverage.

    These tools analyze the entire code base and report on the depth and breadth of existing test cases.

  • Quality measurement.

    Measurement tools operate either statically, analyzing such artifacts as the design model and the source code, or dynamically by inspecting the runtime state of the system under test. The output from these types of products enables metrics to be compiled on the application in terms of complexity, maintainability, performance, and memory usage. This information can then be used to drive the production of a suitable test strategy.

  • Test data generators.

    One of the harder aspects of testing is the generation of suitable test data. This is especially the case where a new system is under development and no existing legacy data is available. Test data generators will produce test data from such artifacts as the design model, database schema, XML schemas, and the source code.

  • Test automation tools.

    This general-purpose category covers the tools that execute prepared automated test scripts. Products are available that support the automation of the full range of testing types, including unit, integration, functional, load, and stress tests. Methods employed by these tools include record and playback of events for GUI testing and programmatic testing for specific GUI types.

Various testing tools are available as open source, although few offer the same level of functionality as the high-end commercial products. Table 15-1 lists some of the open source testing tools available to the Java enterprise tester for undertaking test automation.

Table 15-1. Open Source Java Testing Tools

  • Cactus. Unit/integration. http://jakarta.apache.org/cactus/

    Cactus is a unit/integration test framework from the Apache Software Foundation. It is used for the testing of server-side Java code and offers a means of undertaking in-container testing on J2EE components.

  • Grinder. Load/stress. http://grinder.sourceforge.net/

    The Grinder orchestrates the activities of a test script in many processes and across many machines, using a graphical console application. Test scripts use client code embodied in Java plug-ins. The Grinder comes with a plug-in for testing HTTP services, as well as a tool that allows HTTP scripts to be recorded automatically.

  • HttpUnit. Unit/functional. http://sourceforge.net/projects/httpunit/

    HttpUnit offers a Java API for developing a suite of functional tests for Web applications.

  • Jameleon. Functional. http://jameleon.sourceforge.net/

    Jameleon is an acceptance-level automated testing tool. Jameleon claims to separate applications into features, which are then tied together as test cases.

  • JFCUnit. Unit/functional. http://sourceforge.net/projects/jfcunit/

    An extension to the JUnit framework that enables you to execute unit tests against code that presents a Swing GUI–based interface. JFCUnit offers a record and playback facility to enable novice GUI developers to generate and execute tests.

  • JMeter. Load/stress. http://jakarta.apache.org/jmeter/

    Apache JMeter is a 100 percent pure Java desktop application designed to load-test functional behavior.

  • Solex. Functional. http://solex.sourceforge.net/

    Solex is an Eclipse plug-in for testing Web applications. It provides functions to record client sessions, adjust parameters as required, and then replay the sessions later as part of a regression suite.

The next sections cover the use and application of two of these tools: HttpUnit and JMeter. We first look at HttpUnit, an API that provides support for functional testing.

Functional Testing

Functional tests focus on proving the system exhibits the behavior requested by the customer. A system’s functional behavior can be detailed in a series of use cases, as is the practice with a use-case-driven process like the RUP, or as a set of user stories when following an agile method such as XP.

The ability of the system to meet the customer’s business requirements is obviously a key concern. The owners of the system use functional tests to assess the application’s conformance to their requirements as part of their acceptance tests.

Under XP, the customer works with the development team to generate an automated suite of acceptance tests. Larger projects following the RUP are likely to use a QA team to build test scripts from the system’s use cases.

Functional testing is a black-box technique, and for business systems relies on test cases that exercise system functionality via the user interface, although testing of batch functionality is also a concern.

Tools offer several approaches for automating the functional testing of systems via the user interface. One approach uses event capture and replay to rerun recorded keystroke sequences against the GUI. However, this approach is fragile and susceptible to even very small changes in the system’s user interface.

Another approach is to take control of the user interface programmatically using a script. This approach is the subject of the next section, which introduces the functional testing tool HttpUnit.

Introducing HttpUnit

HttpUnit is an open source testing tool for undertaking the functional testing of Web applications. Unlike its namesake JUnit, HttpUnit is not a testing framework but a Java API. The HttpUnit API provides a means of programmatically interacting with a Web application and enables the automation of sophisticated test scenarios.

Note

The JUnit framework is covered in Chapter 14.

The HttpUnit API centers around four core classes, described in Table 15-2.

Table 15-2. Main HttpUnit Classes

  • WebConversation. Acts as the Web browser during the test and maintains all conversation state associated with the running test case and the Web application under test.

  • WebResponse. Represents the HTTP response received from the Web application and provides methods for conveniently inspecting the contents of the response.

  • WebForm. Represents an HTML form and enables a request to the server to be built up by specifying values to be submitted for the form.

  • WebRequest. Represents an HTTP request submitted to the Web application.

Using this API, all facets of an interaction with a Web application are controllable. The API provides an elegant mechanism for constructing and submitting Web requests and analyzing the response. This functionality alone makes it a useful utility for Web application development, not just for testing purposes.

Although HttpUnit is suitable for building unit tests, its ability to engage in a dialog with a Web application places it in the category of a functional testing tool. HttpUnit test cases are written from the perspective of the end user, and key business scenarios can be orchestrated and tested, making the tool ideally suited for constructing acceptance tests.

As a testing tool, HttpUnit is very simple. Its main strength is a clean and easily understandable API. Compared with many commercial Web functional testing tools, HttpUnit’s feature set is distinctly limited: it offers no reporting capabilities or graphical analysis features for deciphering the results of test runs. However, all of these bells and whistles carry a price tag. Consequently, the open source nature of the HttpUnit distribution, combined with its effectiveness as a functional testing tool, has made it a highly popular choice for testing Web applications.

The latest version of HttpUnit can be downloaded from http://sourceforge.net/projects/httpunit/.

HttpUnit and JUnit

As HttpUnit is not a framework but an API, a method is required for running the tests. Although HttpUnit is independent of the JUnit framework, it is perfectly acceptable to use HttpUnit in conjunction with JUnit. With this approach, a standard JUnit TestCase is produced, and HttpUnit calls are used within the test to submit requests and evaluate responses to and from the Web application. Based on the responses received, JUnit assertions confirm whether the interaction with the Web application is behaving in accordance with its requirements.

The advantage of piggybacking HttpUnit tests on the back of the JUnit framework is that JUnit now serves as a common mechanism for running both unit and functional tests on the project. This allows all functional tests to be run against a system before its release into a formal testing environment. Ideally, the functional test suite should run as part of a continuous integration build process.

Writing a Test with HttpUnit

To illustrate the use of the HttpUnit API for constructing functional tests, we need a Web application to test against. Rather than select a public Web site, the example operates against the Avitek Medical Records (MedRec) example that comes with the BEA WebLogic Server installation.

The MedRec application allows physicians, patients, and administrators to log in to the system and perform activities according to their role. Physicians, for example, can log in and perform a search for patients, and then view a patient’s details.

As every interaction with the application involves submitting a username and password via a login page for authentication, we build a test around this functionality. Specifically, we test the physician login process with the following scenario:

  1. Access the physician application via the login page.

  2. Confirm reaching the correct login page.

  3. Obtain the physician login form.

  4. Submit the login form with a valid username and password.

  5. Confirm reaching the physician search page.

To write a test for this scenario using HttpUnit, it is not necessary to have access to the code for the MedRec application. Instead, the test case is constructed by examining the source for each page through the view source option available from the browser. Tests confirm the presence of expected elements for each page.

Listing 15-1 illustrates a test case for the physician login scenario.

Listing 15-1. Physician Login Test Case

import junit.framework.TestCase;

import com.meterware.httpunit.WebConversation;
import com.meterware.httpunit.WebForm;
import com.meterware.httpunit.WebRequest;
import com.meterware.httpunit.WebResponse;

/**
 * Login test for Physician MedRec application
 */
public class PhysicianLoginTest extends TestCase {

  private WebConversation conversation;

  /**
   * Establish an instance of WebConversation for
   * running the test
   */
  protected void setUp() throws Exception {
    super.setUp();

    conversation = new WebConversation();
  }

  /**
   * Ensure search page is reached on valid physician login
   */
  public void testPhysicianLogin() throws Exception {
    // Establish connection with Web page
    //
    WebResponse response = conversation
        .getResponse("http://localhost:7001/physician/login.do");

    // Check we have the right application
    //
    assertEquals("Avitek Medical Records", response.getTitle());

    // Get the form for the login
    //
    WebForm form = response.getFormWithName("userBean");

    // Build up a request from the form
    // Ensure we specify the action button for the submit
    //
    WebRequest loginRequest = form.getRequest("action");

    // Submit a valid username and password
    //
    loginRequest.setParameter("username", "[email protected]");
    loginRequest.setParameter("password", "weblogic");

    WebResponse loginResponse = conversation
        .getResponse(loginRequest);

    // Check we are successfully through to the search page
    //
    String destinationPath = loginResponse.getURL().getPath();
    assertTrue(destinationPath.endsWith("search.do"));
  }

} // PhysicianLoginTest

The MedRec application launches from the WebLogic Quick Start menu, which conveniently starts the WebLogic server and deploys the MedRec application. With the system up and running, initiate the test case as for a standard JUnit test, either from Eclipse or by using one of the JUnit test runner applications.

Note

Chapter 14 provides information on how to run a JUnit test.

The test starts by establishing a conversation with the physician application. Normally, we do this from a Web browser, but as this is an automated test, the equivalent of an automated browser is required. The WebConversation class serves this purpose and acts as the browser component in all communications with the application. The login page is accessed via a single call to getResponse() on the instance of WebConversation, passing in the URL. A WebResponse object is the result of a successful call. Methods on the WebResponse class enable the inspection of the response from the physician application in order to confirm we have received the expected response.

From the example, we can see how a JUnit assertion verifies the page’s title to confirm we have accessed the MedRec application.

assertEquals("Avitek Medical Records", response.getTitle());

It is one thing to validate a response from a Web application, but a proper test requires submitting requests. For the test in the example, we must supply both the username and password for the physician.

Using the getFormWithName() method on the WebResponse object, we can produce an instance of WebForm that enables us to build up a request to the server. The login page has only a single form, but the test requests the form explicitly by name.

WebForm form = response.getFormWithName("userBean");

From the WebForm object, a WebRequest is constructed. Note that we must specify the name of the submit button used for the form, because the form contains two buttons: a submit and a cancel. The call to getRequest("action") on the WebForm instance provides a ready-made WebRequest.

WebRequest loginRequest = form.getRequest("action");

Using the WebRequest object, you can specify the username and password, and submit the request to the MedRec physician application. The setParameter() method on the WebRequest object allows the contents of the form to be built up. Submit the request by invoking the getResponse() method on the WebConversation instance and passing the request in as a parameter.

loginRequest.setParameter("username", "[email protected]");
loginRequest.setParameter("password", "weblogic");
WebResponse loginResponse = conversation
        .getResponse(loginRequest);

A valid username and password should take us to the search screen. The final act of the test is to confirm the page we have been sent to by interrogating the path of the URL associated with the latest response from the application.

assertTrue(destinationPath.endsWith("search.do"));

The example should serve as a useful template for building more complex test scenarios. HttpUnit is an effective tool for creating exhaustive, automated functional tests for Web applications. HttpUnit is not suitable for testing Swing- and AWT-based fat clients. In this situation, consider using JFCUnit, which is designed for this very purpose.

The next section moves on to testing nonfunctional requirements and examines methods for conducting load and stress tests.

Load and Stress Testing

Load and stress tests are part of the process of validating that a system meets its nonfunctional requirements, which typically define such operational attributes for the system as performance, reliability, scalability, and robustness.

Load testing validates the system’s performance in terms of handling a specified number of users while maintaining a defined transaction rate. Stress testing assesses a system’s reliability and robustness when its design load is exceeded. Although the system may decline some requests when overloaded, it should still have the resilience to keep functioning without suffering a potentially embarrassing outage.

Load tests also assess system scalability. This testing takes the form of proving that the system can scale to keep pace with the anticipated increase in load as processing power is added, whether in the form of extra processors or additional machines in the production environment.

The expected system performance criteria can be detailed precisely in a number of ways but are typically stated as a required transaction rate with the system under a stated load. For example, the requirements could state that a request must be handled within two seconds when 10,000 users are accessing the system. For contractual reasons, the specification of load criteria must be highly detailed, taking into account such operational elements as environment, database size, network speed, and hardware configuration, to name but a few.
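Such a hard figure can be checked directly within an automated test. The sketch below is our own illustration, not taken from any particular tool: the Runnable is a stand-in for a real request submission (for instance, an HttpUnit getResponse() call), and the 2,000 ms threshold mirrors the example criterion above.

```java
// Illustrative sketch: asserting a hard response-time figure in a test.
// The Runnable stands in for a real request, such as an HttpUnit call.
public class ResponseTimeCheck {

    /** Times a single execution of the supplied request in milliseconds. */
    static long timeMillis(Runnable request) {
        long start = System.nanoTime();
        request.run();
        return (System.nanoTime() - start) / 1000000;
    }

    public static void main(String[] args) {
        // Stand-in request; replace with a real submission to the system
        long elapsed = timeMillis(() -> { });

        if (elapsed >= 2000) {
            throw new AssertionError("Request took " + elapsed
                    + " ms; criterion is under 2000 ms");
        }
        System.out.println("Request handled in " + elapsed + " ms");
    }
}
```

A timing check of this kind turns a contractual performance figure into a repeatable, automated pass/fail result rather than a manual observation.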

Warning

Be wary of vague performance criteria, such as “Requests must be handled in a timely manner.” Such statements are impossible to quantify and hence impossible to accurately test. Remove any ambiguity and make sure you have hard figures for defining the performance measures your system must meet.

Performance Concerns

A key failing on many projects is that performance testing is not carried out until near the end of the project. If performance problems are not detected until later iterations in the project, drastic changes to the system’s underlying architecture may be necessary. Last-minute efforts to correct performance-related problems put the quality and stability of the finished application at great risk. Moreover, such late changes could result in unacceptable delays in the delivery of the system.

A good test strategy combined with a software architecture that considers performance upfront in the development process guards against this danger.

You should use early project iterations for building exploratory prototypes that seek to validate the ability of the software architecture to meet all nonfunctional requirements. Tests constructed against such prototypes validate the architecture. Furthermore, these same tests remain available throughout the development of the system to ensure compliance with performance objectives as the application evolves.

Testing of this nature requires a suitable test automation tool. You can develop your own load-testing tool by building a framework to execute your automated functional tests multiple times from independently executing threads. Fortunately, the Apache Software Foundation has already developed such a tool in the form of JMeter.
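The do-it-yourself approach mentioned above can be sketched in a few lines of Java. This is only an illustration of the idea, not how JMeter is implemented; the SimpleLoadDriver class and its run method are hypothetical, and the empty Runnable stands in for a real functional test (an HttpUnit case, for example).

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class SimpleLoadDriver {

    /**
     * Runs the given functional test 'loops' times from each of
     * 'threads' concurrently executing workers and returns the
     * total number of successful executions.
     */
    public static int run(Runnable test, int threads, int loops) {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        CountDownLatch done = new CountDownLatch(threads);
        AtomicInteger passed = new AtomicInteger();
        for (int t = 0; t < threads; t++) {
            pool.execute(() -> {
                for (int i = 0; i < loops; i++) {
                    try {
                        test.run();            // e.g. an HttpUnit test case
                        passed.incrementAndGet();
                    } catch (RuntimeException e) {
                        // A failed request; carry on to gauge the error rate.
                    }
                }
                done.countDown();
            });
        }
        try {
            done.await(60, TimeUnit.SECONDS);  // wait for all workers
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        pool.shutdown();
        return passed.get();
    }

    public static void main(String[] args) {
        // Stand-in "functional test": replace with real requests and asserts.
        int ok = run(() -> { }, 10, 100);
        System.out.println(ok + " requests completed");
    }
}
```

Even this simple sketch hints at the work involved in a production-quality tool: ramp-up control, timing statistics, and reporting are all still missing, which is why a ready-made tool such as JMeter is the better option.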

Introducing JMeter

Like some of the best software tools, JMeter was built out of necessity. Stefano Mazzocchi originally created the application for testing the performance of Apache JServ, the forerunner to Tomcat.

JMeter is a Java application for load testing functional behavior. Initially, JMeter only supported the testing of Web applications. However, since its inception, the product has evolved to support the load testing of different parts of the system architecture, including database servers, FTP servers, LDAP servers, and Web Services. JMeter is extensible, so you can easily add to this list.

JMeter is available from the Apache Software Foundation under the Apache Software License and can be downloaded from http://jakarta.apache.org/jmeter. Our example uses version 2.0.1.

Unlike HttpUnit, which is purely an API, JMeter is a complete framework and offers a Swing-based user interface for defining and executing load and stress tests. It also provides graphical reports for analyzing test results.

Load testing with JMeter involves using the JMeter user interface to define a number of functional test cases. JMeter uses these test cases to simulate multiple users accessing the system by running each functional test repeatedly from a number of concurrent threads.

The test cases, the number of threads, and the number of times each test case is executed are all configurable elements of a JMeter test plan. JMeter takes responsibility for executing a test plan against the system under test, spinning up threads as needed in order to generate the required load.

We examine the elements of a test plan and the fundamental concepts behind JMeter by building up a simple example. Continuing from the previous discussion of HttpUnit, we stay with the MedRec system but this time put the application through its paces from a performance perspective.

Testing MedRec with JMeter

The functional tests created in the HttpUnit example verified the behavior of the application for a physician entering a username and password at the login page and navigating to the patient search page. To test the system under load, we use a functional test scenario that goes one step further and initiates a patient search after passing the login page. The objective of the test is to evaluate the patient search under load.

The steps for the performance test scenario are as follows:

  • Access the physician application via the login page.

  • Submit the login form with a valid username and password.

  • From the search page, initiate 100 patient searches.

note

Like JUnit, JMeter supports the use of assertions in the creation of its functional test cases. These assertions verify the results returned by the system under test. We do not use assertions in the example, but it is good practice to confirm functional behavior in tandem with performance. A well-performing system that has incorrect functionality isn’t much use to anyone. Furthermore, unexpected behavior may occur with the system under load.

Figure 15-2 shows the JMeter GUI with a test plan open for executing our load test. Over the next sections, we examine how this test plan is built up and the purpose of each of the plan’s elements.

Figure 15-2. JMeter with the MedRec test plan open.

The test plan comprises test elements that control the execution of the load test. Elements are added to the test plan as nodes to the tree in the left pane of the JMeter GUI.

The tree structure enables the hierarchical organization of plan elements. Here is a description of the main element types we use in the MedRec test plan:

  • A thread group is the starting point for the test plan and controls the number of threads for executing the functional test cases.

  • A sampler sends requests, such as HTTP requests, to the server.

  • A logical controller allows you to instruct JMeter when to issue server requests.

  • A configuration element can add to or modify server requests.

  • A listener provides a view of the data JMeter gathers from a running test plan.

You add new elements to the plan by right-clicking a node in the tree and selecting the Add menu item from the element’s context menu. The action presents a menu of all child elements available for selection. We begin by adding a thread group to the top-level test plan node.

Creating a Thread Group

A thread group represents a virtual user, or tester, and defines a set of functional test cases the JMeter virtual user executes as part of the test plan. The JMeter GUI displays configuration options for the thread group element in the right pane.

There are several options of interest for a thread group:

  • Number of threads.

    This option tells JMeter how many threads to allocate to the thread group for running the load test. Specifying the number of threads is the equivalent of defining the number of simultaneous end users who run the test cases.

  • Ramp-up period.

    Specifying a ramp-up period has JMeter create the threads in the thread group gradually over a given duration. For example, you may wish to have 10 threads spinning up over 180 seconds. This option is useful for monitoring performance as the load increases.

  • Loop count.

    The loop count defines the number of times to execute the thread group’s test cases. The default is to run continuously. When creating a test plan, it’s a good idea to set this value to just a single iteration, as this makes troubleshooting considerably easier.

  • Scheduler.

    The scheduler option is a checkbox. In the checked state, additional fields appear in the configuration pane that allow setting of the test’s start and end time.

For the example, the number of threads is set at 100 with a ramp-up period of one second, and the thread group is set to run continuously.

A JMeter test plan can contain many thread groups, each with its own configuration and test cases for running a different functional test scenario. As the example is concerned with only a single test scenario, our plan contains just the one thread group element.
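Incidentally, saved test plans are plain XML documents (.jmx files), so thread group settings like these can also be inspected and version-controlled as text. The fragment below is indicative of what JMeter 2.x writes for the configuration just described; the exact element and attribute names may vary between versions, so treat it as illustrative rather than definitive.

```xml
<ThreadGroup testname="MedRec Physician Users">
  <stringProp name="ThreadGroup.num_threads">100</stringProp>
  <stringProp name="ThreadGroup.ramp_time">1</stringProp>
  <boolProp name="ThreadGroup.scheduler">false</boolProp>
  <elementProp name="ThreadGroup.main_controller" elementType="LoopController">
    <!-- A loop count of -1 with continue_forever set runs the test
         until it is stopped manually from the Run menu. -->
    <boolProp name="LoopController.continue_forever">true</boolProp>
    <intProp name="LoopController.loops">-1</intProp>
  </elementProp>
</ThreadGroup>
```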

The Configuration Elements

Configuration elements modify requests sent to the server. They work closely with the sampler elements, which are responsible for sending requests. The MedRec test plan uses two such elements: HTTP request defaults and an HTTP cookie manager. We add both of these elements to the plan as immediate children of the thread group node.

The HTTP Request Defaults

Like the HttpUnit example, testing a Web application requires submitting HTTP and HTTPS requests to the Web server. The requests sent to place the Web application under load will likely share a common set of configuration options.

JMeter provides the HTTP request defaults element as a convenience for storing these common options. Here are some of the main settings you may wish to set for each request:

  • Protocol.

    This option specifies the protocol for sending each request, either HTTP or HTTPS.

  • Server name or IP.

    Use this field to set the domain name, or IP address, of the server running the system under test.

  • Path.

    The path option sets the Uniform Resource Identifier (URI) for the page. You can also set default parameters for sending with the request, but it is likely these will be set for each individual request.

  • Port number.

    Set this option if your Web server is listening on a port other than port 80 (which is the default).

The example sets the Server Name or IP to the machine hosting the MedRec application and the Port Number to 7001. All other options remain unset.

Creating a Cookie Manager

The HTTP cookie manager does exactly as its name implies: it manages all cookies sent to the thread group from the Web application. Failing to add a cookie manager to the thread group is the equivalent of disabling cookies within the browser.

For the purposes of the example, the cookie manager element uses the default settings.

Logic Controllers

Logic controllers let you determine when JMeter issues requests. They direct the execution order of test plan elements, and so orchestrate the flow of control for a test.

JMeter provides a range of different logic controllers. Table 15-3 lists those that are available.

Table 15-3. Logic Controllers

  • ForEach.

    Iterates through all child elements and supplies a new value on each iteration.

  • If.

    Makes the execution of child elements conditional.

  • Interleave.

    Executes an alternate child sampler element on each loop of the controller's branch.

  • Loop.

    Iterates over each child element a given number of times.

  • Module.

    Offers a mechanism for including test plan fragments into the current plan from different locations.

  • Once Only.

    Runs elements of the controller only once per test.

  • Random.

    Randomizes the execution order of subcontrollers.

  • Recording.

    Placeholder for indicating where the HTTP proxy server element should record all data.

  • Simple.

    Placeholder for organizing elements.

  • Throughput.

    Used for throttling the requests sent by its child elements.

  • Transaction.

    Measures the time taken for all child elements to run, then logs the timing information.

For the example, the flow of control sees the test case log in to the application and perform 100 patient searches. To achieve this, we use two types of controller: a simple controller and a loop controller.

The Simple Controller

Adding a simple controller provides a placeholder for organizing the sampler elements of the plan. The element type has no configuration options other than a name.

The simple controller in the example has two child elements: an HTTP request sampler for the login page and a loop controller. When JMeter executes the test plan, the login request under the simple controller executes first, followed by the elements of the loop controller.

Adding a Loop Controller

Unlike the simple controller, the loop controller is more than a placeholder. The loop controller states the number of times to iterate through each of the controller’s child elements.

For the example, the loop controller’s Loop Count property is set to 100. Thus, we have the login request being sent once, followed by 100 search requests. Of course, we still have to add a suitable HTTP request sampler element for the search request to the loop controller, as well as a sampler for the login request as a child of the simple controller. We look at the configuration of these sampler elements next.

Samplers

Up until now, the test plan doesn’t do much. To put the MedRec application under some strain, we need to start sending some actual requests. For this, we need a sampler element.

A sampler submits requests to the target server. JMeter supplies several types of sampler elements, making it possible to test systems other than Web applications. Table 15-4 describes each sampler provided as part of the JMeter installation.

Table 15-4. JMeter Samplers

  • FTP Request.

    Sends an FTP get request to the server to retrieve a file.

  • HTTP Request.

    Submits an HTTP or HTTPS request to a Web server.

  • Java Request.

    Allows you to control any Java class that implements the JavaSamplerClient interface.

  • JDBC Request.

    Enables the execution of SQL statements against a database.

  • LDAP Request.

    Issues an LDAP request to a server.

  • SOAP/XML-RPC Request.

    Supports sending a SOAP request to a Web Service or allows an XML-RPC request to be sent over HTTP.

Because MedRec is a Web application, we must generate HTTP requests, and JMeter provides the HTTP request sampler for this purpose.

Our test scenario calls for two HTTP request sampler elements, one for making the login request and another for initiating the patient search.

Making a Login Request

The first page accessed as part of the test is the physician login. Navigating past this page requires submitting a login request to establish our security credentials for the session. This process involves issuing the appropriate parameters as part of the request: a username and password.

The HttpUnit example covered the login process for the login page using an instance of WebForm in Listing 15-1. JMeter works very differently from HttpUnit, but rather than cover the login page twice, let's leave the discussion on how to submit request parameters with JMeter until we reach the search page.

For now, Figure 15-3 shows the configuration of the sampler for the login page.

Figure 15-3. HTTP login request settings.

Notice that some of the options for the request are blank. These include the Server Name or IP, Port Number, and the Protocol. You can ignore these options because the HTTP request sampler inherits these settings from the HTTP request defaults element created earlier and added at the top level of the thread group.

Add the sampler for the login requests as a child of the simple controller. To complete the test case, we need a final sampler for the search request.

Submitting a Search Request

The patient search requires an HTTP request element as a child of the loop controller. This element sends the parameters for the patient search. The search page uses an HTML form element for sending search requests, so the sampler needs to mimic the action of the form.

Here is an edited extract of the HTML source for the patient search page showing the form:

<form name="searchBean"
      method="POST"
      action="/physician/searchresults.do">
    <input type="text" name="lastName" value="">
    <input type="text" name="ssn" value="">
    <input type="submit" name="action" value="Search">
</form>

The searchBean form takes either the patient’s name or social security number. Our test case uses the name for searching. Figure 15-4 shows the configuration of the HTTP request element for the form.

Figure 15-4. HTTP search request settings.

The HTTP request sampler in Figure 15-4 simulates the sending of the searchBean form as if submitted from a browser. To match the form, you first need to switch the method from a GET, which is the default, to a POST. The HTTP request sampler provides a handy set of radio buttons for making this change.

Next, the path setting for the request must correspond to the value of the action attribute from the form. Set this to /physician/searchresults.do.

For the final task, add the form’s parameters to the HTTP request. We are searching on the patient’s last name, so you can ignore the social security number.

Two parameters are required to initiate a search. The first is the lastName parameter, which specifies a string for matching against patient surnames. The second is the value associated with the submit button, in this case the action parameter, which is assigned a value of Search. Don't forget to add this parameter, or the MedRec application will not know how to handle the request correctly.
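For a feel of what the sampler produces, the request on the wire looks roughly like the following. The host, port, and search value Smith are illustrative only, and the session cookie, shown here truncated, is supplied automatically by the cookie manager:

```
POST /physician/searchresults.do HTTP/1.1
Host: medrec-host:7001
Content-Type: application/x-www-form-urlencoded
Cookie: JSESSIONID=...

lastName=Smith&action=Search
```

Note how both form parameters, including the submit button's action=Search pair, appear in the URL-encoded request body, which is exactly what the Web application expects from a browser-submitted form.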

The search request completes the setup of the test plan for running the test. However, the test plan is still not complete: we need to tell JMeter how we wish to view the data gathered from running the test. This is accomplished using a listener element.

Listeners

JMeter uses listener elements for analyzing data gathered from the test. A selection of listeners is available, each providing a different presentation format for the data gathered during the execution of the test. In addition to rendering the data, each listener type can log the information collected to a file for interrogation with other analysis tools after the test plan has completed.

tip

JMeter stores test results as either an XML document or a comma-separated value (CSV) file. XML is the default, but the CSV format is very useful for importing the data into spreadsheets like Microsoft Excel.

To change the format, locate the jmeter.save.saveservice.output_format entry in the jmeter.properties file and set its value to csv.
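The change described amounts to a one-line edit in the jmeter.properties file:

```properties
# Default is xml; csv suits import into spreadsheet tools.
jmeter.save.saveservice.output_format=csv
```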

Listener elements interpret data for their parent thread group. Add a listener to the thread group node by right-clicking on the node and selecting Add, then Listener, and then choosing from the listener elements available.

The example test plan uses these listeners:

  • View results tree.

    This listener presents a text-based hierarchical view of the requests and responses sent and received during the test.

  • Aggregate report.

    The aggregate report is a text-based listener that displays summary information for each separately named request used in the test.

  • Graph results.

    This listener provides a simple graphical view of the test results, including the average response time and throughput rate.

Refer to the JMeter documentation for a full list of listeners.

tip

The view results tree listener is especially useful when initially building the test plan for determining if the plan is executing as expected. It is of less value when intensive testing is underway due to the volume of data presented.

With the listeners in place, the test plan is complete and ready to run.

Executing the Test Plan

It is highly advisable to save the test plan before running a test: the JMeter engine can potentially spin up a large number of threads, and things can go wrong.

Set the thread group so it loops forever, and start the test from the Run menu with the Start item. You can stop the test with the Stop item from the same menu. By clicking on the different listener elements, you can observe the status of the test while it is in progress.

Analyzing the Results

The listeners display the data gathered for the test. These next sections examine the information presented in the aggregate report listener and the graph result listener. Although JMeter provides a number of other inbuilt listeners, these two are indicative of the type of information JMeter generates for a load test.

Aggregate Report Listener

The aggregate report displays the following summary information for the login and search requests.

  • Number of requests

  • Average response time

  • Minimum response time

  • Maximum response time

  • Error percentage

  • Throughput rate in terms of requests per second

Figure 15-5 shows the results from the aggregate report listener for the example test plan.

Figure 15-5. Aggregate Report Listener.

The report provides a concise and easy-to-read representation of the data in table format. In this case, the MedRec application has maintained a high throughput for the given load.

Because the aggregate report listener displays summary information, it is not possible to see how the Web application behaved during the running of the test. Viewing this information requires a graphical listener.

Graph Result Listener

For a graphical display of the test results, the graph result listener plots several types of performance information, including data samples, the average and median sample times, standard deviation, and throughput.

All of this information is hard to read in black and white, so the graph result listener shown in Figure 15-6 plots only the data samples and the average response times during the test.

Figure 15-6. Graph Result Listener.

The graph has the duration of the test on the X-axis and the server response times for HTTP requests on the Y-axis. The black dots are the individual timings for each request. Ideally, the time taken for the application to handle a request should be uniform. A reasonably tight grouping of the dots represents a consistent response time for each request. The load on the server was quite light, and MedRec coped with the load, as illustrated by the Y-axis topping out at only 552ms.

The line through the dots is the average. Again, this should be fairly even. Apart from the step curve at the start where the test was ramping up, this is certainly the case.

JMeter Guidelines

Here are some guidelines you should find helpful when load testing a system with JMeter:

  • Use meaningful test scenarios, and construct test plans with test cases that represent real-world situations. Use cases provide an ideal nucleus around which to build your load tests.

  • Ensure you run JMeter on a separate machine from the system under test. This prevents JMeter from affecting the results of the test.

  • Testing is a scientific process, so conduct all tests under carefully controlled conditions. If you are working with a shared server, check first before starting a test that no one else on the team is also running a JMeter test plan against the same Web application.

  • Ensure you have adequate network bandwidth for the workstation running JMeter. You are testing the performance of the application and server, not your network connection.

  • Use several instances of JMeter running on different machines to add additional load on a server. This setup might be necessary for stress testing. JMeter can control instances of JMeter on other machines for this purpose. Refer to the JMeter documentation on distributed testing for more information.

  • Leave a JMeter test plan running for long periods, possibly several days or longer. This tests system availability and highlights any degradation in server performance over time due to poor resource management.

  • Don’t run JMeter test plans against external servers for which you are not responsible. The owners may consider this a denial of service attack.

HttpUnit can also assist in the process of load testing. Although JMeter and HttpUnit are different types of testing tools, the two can complement one another to assess the performance of a Web application under load.

Using this combined approach, a JMeter test plan places the Web application under load while running the HttpUnit functional test suite. Assertions raised from the functional tests are an indication of load-sensitive defects.

When designing and executing load tests, remember every system has its limits. By running JMeter on enough machines, it is possible to exceed those limits. This is a valid test in itself, as the system should exhibit failsafe behavior for this scenario. However, the objective of load testing is to prove the ability of the application to meet the performance criteria stipulated in the nonfunctional requirements.

Design tests to prove a system’s compliance with the customer’s performance specifications. Simply deluging the system with requests will not confirm this, so make your testing scientific. Set yourself a target, and carefully design a test plan that will either prove or disprove whether the system meets the stipulated criteria.

Summary

Formal testing of an enterprise-level system is often a time-consuming process. Using an automated approach improves the effectiveness of the testing process, resulting in shorter development timeframes and greater test accuracy.

Getting the most from test automation requires the definition of a carefully planned test strategy. Following are some important points to consider when formulating a suitable strategy.

  • Don’t leave testing to the end of the project. Early testing prevents nasty surprises that are expensive and time consuming to correct in the final iterations of a project. Test early and test often.

  • Design the system to be tested, and devise the test strategy in tandem with the software architecture.

  • Use an array of testing types. No single type of testing will adequately validate an enterprise-level system. Apply a barrage of different testing schemes, including unit, integration, functional, load, and stress testing.

  • Automate the testing process. The time taken to build automated test scripts will more than pay off in terms of testing accuracy and test speed.

Thoroughly testing a J2EE application is as hard a process as the development itself: do not underestimate the task. Give quality assurance the attention and respect it deserves.

Additional Information

The testing of object-oriented systems is a huge topic and, appropriately, Robert Binder has written a huge book on the subject. For an in-depth discussion on all matters relating to the testing process, Testing Object-Oriented Systems: Models, Patterns, and Tools [Binder, 2000] is an extremely comprehensive read and has the honor of being the thickest publication on my bookshelf.

To find out about other open source testing tools, visit http://www.opensourcetesting.org.
