Chapter 11. Load-Testing to Find and Fix Scalability Problems

 

“Software bugs are impossible to detect by anybody except the end user.”

 
 --Murphy's Technology Laws

The Importance of Load-Testing

Server-side applications have service-level requirements that specify availability, scalability, and failover:

  • Availability—Specifies the up-time requirements that describe how long the application needs to be capable of running without restarting

  • Scalability—Specifies the capability of the application to provide the same level of service as the number of requests increases

  • Failover—Specifies the capability of the application to continue providing the same level of service when one of the application components fails

A typical development cycle allocates time for unit testing and integration testing, which generally focus on functionality, but it does not always provide time for load-testing. The purpose of load-testing is to assess whether system performance meets service-level requirements under load. Obviously, every system's response time degrades as the load increases, but as long as it meets the specified requirements, the system is considered scalable.

Ignoring load-testing is a risky practice, especially if an application is expected to serve hundreds or thousands of users. With a large user community, a small problem becomes a big problem because it affects a large group of people. Some failures do not appear unless there is a certain load. This might be the case for operations that depend on resources such as threads, database connections, and memory. Most of the problems inherent to multithreaded applications occur only when there is a particular number of concurrent requests. Only load-testing and running the system for a prolonged period of time mimic the production operating environment, so it is absolutely crucial to perform this test before a system goes live. Last, but not least, load-testing an application enables you to see how it will respond to denial-of-service attacks and hacking attempts. Load-testing reveals not how the application is built, but whether it is built to last. It can also be used to get more value out of the techniques presented in other chapters. For example, profiling an application under load produces a different picture than profiling it without the load.

Simulating a load on the system is not a trivial task, so it is important to leverage the right tools. Plenty of load-testing products are available on the market, most of which are designed to work with Web sites serving HTML content via HTTP. Besides HTTP, Java server applications also work with various other protocols, such as JRMP for RMI clients and IIOP for CORBA and RMI clients. The best tools cost a lot of money, but some open-source alternatives can deliver most of the basic functionality. We are going to load-test an RMI-based Chat application using the open-source JUnit framework. Later we will use another open-source tool called JMeter to load-test WebCream, a Web-based application that has browser-based clients.

The principles behind simulating a load are virtually the same across tools. A test case is first created, either by recording against the running application or programmatically. When the test case is ready, it is used to create virtual users, or clients, which are then run on multiple threads to access the server simultaneously. To the server application, the virtual users appear as real traffic, and monitoring the correctness and timing of the server's responses produces the test results.

Load-Testing RMI-Based Servers with JUnit

JUnit is an open-source Java project that provides a framework for writing and executing unit tests. It promotes writing test code that asserts the validity of the application functionality. A JUnit test case is a Java class that is compiled and executed to test the application code. This approach provides the benefits of automated retesting with the ease of maintaining the testing code in sync with the application code. Last, but not least, developers get to write Java code instead of clicking buttons in debuggers and test tools, which is probably a significant contributor to the popularity of the framework.

To use JUnit, a developer must write a test case that extends junit.framework.TestCase or implements junit.framework.Test. The test case consists of calls to the classes that are being tested and assertions that the return values match the expected result. For example, a test case for a bank account can get the current balance, make a deposit, and then verify that the new balance matches the old balance plus the deposit amount. The quality of the test case is directly proportional to the zeal of the developer. The idea is to try to cover all possible scenarios, including the erroneous ones. After the tests are written, they are compiled and executed individually or in groups. JUnit is a well-documented and easy-to-learn framework, and if you haven't worked with it yet, please invest a couple of hours in reading the manual and the examples. (Don't forget to update your résumé because good managers will view it as a sign of a quality developer.) The framework and its related documentation can be downloaded for free from http://www.junit.org. The rest of this section focuses on developing a load test for Chat based on JUnit.
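To make the bank account example concrete, here is a minimal sketch of such a test case in the JUnit style used throughout this chapter. The Account class and its getBalance() and deposit() methods are hypothetical and serve only as an illustration:

import junit.framework.TestCase;

// A minimal sketch of the bank account scenario described above; the
// Account class and its methods are hypothetical.
public class AccountTest extends TestCase {
    public void testDeposit() {
        Account account = new Account();
        double oldBalance = account.getBalance();
        account.deposit(100.0);
        // The new balance must equal the old balance plus the deposit amount
        assertEquals(oldBalance + 100.0, account.getBalance(), 0.001);
    }
}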

Chat was certainly not built to be indestructible, but it was not meant to scale to hundreds of users either. As long as it can handle between three and six simultaneous users, it is probably enough to satisfy the concurrency requirements for a demo application. For any load test, you should set the high mark a little above the anticipated maximum load. So, for our example, we will use 10 as the number of virtual clients it should support. Our goal is to simulate this number of clients simultaneously sending messages to the same Chat application. We also want to stagger the calls to mimic real-life experience. Staggering means that, instead of sending the requests at the same time, they are sent around the same time. Sometimes the term simultaneous is used to describe the clients that are sending a request at the same time and concurrent is used to describe the clients that maintain a conversation with the server but are sending requests around the same time. On a single CPU system, there is really no distinction between concurrent and simultaneous execution because there can be no true parallel processing, making the terms interchangeable.

We will begin by developing a test case that simulates a user sending one message to the target host. Then we will build a harness that uses the test case to create a number of virtual users repeatedly sending the messages. To be flexible, we will allow parameterization of the test by specifying the number of simultaneous users to simulate, the number of times to repeat the test, and the lag time to use for staggering the calls.

The test case is written in covertjava.loadtest.ChatTestCase. It extends TestCase and implements the core logic in its testSendMessage() method, which is shown in Listing 11.1.

Example 11.1. testSendMessage Source Code

public void testSendMessage() {
    logger.info("Sending test message...");
    try {
        // Build a message that identifies this particular virtual user
        StringBuffer message = new StringBuffer();
        message.append("[ChatTestCase@");
        message.append(Integer.toHexString(this.hashCode()));
        message.append("] Test ");
        message.append(messagesSent++);
        ChatServer.getInstance().sendMessage(this.host, message.toString());
        logger.info("Sent message successfully");
    }
    catch (Exception e) {
        e.printStackTrace();
        // Report the failure to JUnit; assertTrue with false always fails
        assertTrue("Exception: " + e.getMessage(), false);
    }
}

ChatTestCase creates a test message and uses ChatServer to send it to the target host. The key aspect of this method is the call to assertTrue() in the catch block with false as the second parameter, which tells JUnit that the test has failed. Otherwise, the method simply returns, which means success. This way of testing is certainly far from perfect because it does not verify that the Chat server has processed the message correctly; it does not test whether the message has been parsed and appropriately added to the conversation history window. However, it provides a fairly decent way of testing the network communication and the throughput of the remote server, and therefore it will suffice to illustrate the point.
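For context, the class structure implied by Listing 11.1 looks roughly like the following skeleton. The initialization shown here is an assumption; in the actual test, the host and other parameters come from a properties file:

import junit.framework.TestCase;
import java.util.logging.Logger;

// A skeleton of the class implied by Listing 11.1; the initialization
// details are assumptions made for illustration.
public class ChatTestCase extends TestCase {
    private static final Logger logger =
        Logger.getLogger(ChatTestCase.class.getName());
    private String host = "localhost";  // target Chat host (assumed default)
    private int messagesSent = 0;       // running counter embedded in each message

    public void testSendMessage() {
        // ... as shown in Listing 11.1 ...
    }
}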

The next step is creating a test suite that contains ChatTestCase instances. This is accomplished in the ChatLoadTest class; the code is shown in Listing 11.2.

Example 11.2. Creation of a Test Suite

ActiveTestSuite suite = new ActiveTestSuite();
for (int i = 0; i < clientsNumber; i++) {
    Test test = new ChatTestCase();
    // Stagger the start by sleeping for a random fraction of the lag time
    test = new DelayedTest(test, (int)(Math.random() * lagTime));
    // Repeat each virtual user's test for the configured number of runs
    test = new RepeatedTest(test, repeatRuns);
    suite.addTest(test);
}

The parameters, such as clientsNumber and lagTime, are read from the properties file to support customization. JUnit test suites serve as containers for test cases and make running multiple tests easier. ActiveTestSuite runs all its tests simultaneously and waits for them to finish before returning the result. JUnit uses the decorator pattern to attach additional functionality to tests. RepeatedTest, for example, runs a given test repeatedly for a given number of times. To provide staggered execution, we have added the DelayedTest decorator class, which sleeps for a given period of time before running the test; a sketch of such a decorator is shown below. The end result of the code in Listing 11.2 is clientsNumber clients that simultaneously send messages after sleeping for a random amount of time.
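DelayedTest is our own addition rather than a stock JUnit class, so here is a minimal sketch of how such a decorator can be written on top of junit.extensions.TestDecorator. The actual covertjava.loadtest implementation may differ in details:

import junit.extensions.TestDecorator;
import junit.framework.Test;
import junit.framework.TestResult;

// A minimal sketch of a decorator that sleeps before delegating to the
// wrapped test; the actual covertjava.loadtest class may differ in details.
public class DelayedTest extends TestDecorator {
    private final int delayMillis;

    public DelayedTest(Test test, int delayMillis) {
        super(test);
        this.delayMillis = delayMillis;
    }

    public void run(TestResult result) {
        try {
            Thread.sleep(delayMillis);  // stagger the start of this virtual user
        }
        catch (InterruptedException e) {
            // If interrupted, simply proceed without the remaining delay
        }
        basicRun(result);  // run the decorated test
    }
}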

You can run a JUnit test in several ways, but we will use the Swing GUI to get visual feedback on the testing progress. If an instance of Chat is not running on the localhost yet, we start it using CovertJava\distrib\bin\chat.bat. Then we use the loadtestJUnit.bat file located in the CovertJava\bin directory to open the JUnit GUI and execute our test suite. Shortly after the tests begin running, the JUnit GUI should look similar to Figure 11.1.

Figure 11.1. The JUnit GUI showing the testing progress.

Most of the tests failed, and looking at the result panel, we can see the error message: testSendMessage(covertjava.loadtest.ChatTestCase): Exception java.lang.NullPointerException: null. Examining the Chat application shows that the conversation history component contains a mess and that several NullPointerExceptions appear in the console window. Guess what? We are starting to reap the benefits of load-testing. Reducing the number of clients to two makes the tests run successfully, so the problem must come from multithreading. Chapter 9, “Cracking Code with Unorthodox Debuggers,” and Chapter 10, “Using Profilers for Application Runtime Analysis,” have provided techniques for finding and fixing the problems that arise from concurrent execution. You can try applying these to find out what is wrong with Chat.

For those who feel knowledgeable enough, I will disclose that the problem comes from the way new messages are appended to the conversation history. Swing is not thread-safe, and it is generally recommended to interact with Swing components only from the AWT event dispatch thread. The designers of Swing sacrificed robustness for speed, and we can't really blame them because Swing performance has long been under scrutiny. Chat receives the incoming messages on an RMI thread and delegates message processing to the MainFrame class. MainFrame's appendMessage method, which appends a new message to the conversation JEditorPane, does not use synchronization. Therefore, if several users try to send a message to the same host at the same time, several threads will be trying to set data on the same JEditorPane. This is a classic race condition that can be solved by making the appendMessage method synchronized, as sketched below. After making this change, rerunning the load test produces a clean result that allows us to happily consider the job done.
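Here is a sketch of the change. The field and the exact append logic are assumptions made for illustration; the essential part is the synchronized keyword, which lets only one thread at a time mutate the shared component:

import javax.swing.JEditorPane;
import javax.swing.text.BadLocationException;
import javax.swing.text.Document;

// A sketch of the fix; the field and the append logic are assumptions.
// The essential change is the synchronized keyword on appendMessage.
public class MainFrame {
    private JEditorPane conversationPane = new JEditorPane();

    public synchronized void appendMessage(String message) {
        Document document = conversationPane.getDocument();
        try {
            // Only one thread at a time can now modify the shared document
            document.insertString(document.getLength(), message + "\n", null);
        }
        catch (BadLocationException e) {
            e.printStackTrace();
        }
    }
}

An even more canonical Swing fix would be to marshal the update onto the event dispatch thread with SwingUtilities.invokeLater(); synchronizing the method is simply the smaller change.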

The benefit of using JUnit for load-testing is that it is simple and enables you to leverage any test cases that might already have been written. It is also an effective method of testing RMI-based servers because automatic script recording by load-testing tools does not always produce maintainable results.

Load-Testing with JMeter

The previous section showed how to load-test Java applications using JUnit. The development of the test was simple and, although we didn't get any fancy graphs or charts, the basic job was done pretty well. However, this approach is rather limited because it requires writing a virtual client manually, and the only thing JUnit can do is run the test case on multiple threads. What if we wanted to test a Web-based application that has an HTML front end? In that case, the users would typically be running a Web browser, and the server would rely on servlets and JSPs to implement the user interface. JUnit is simply not an option because the virtual users would have to support browser functionality such as session management, cookies, and HTML forms. Another shortcoming of JUnit is that it doesn't produce tangible evidence of the test's success or failure. A seasoned developer knows the power that reports, charts, and graphs have over management. In short, we need a better tool.

Load-testing is a huge and lucrative market, and many good products are available. At this point in time, they offer virtually the same core functionality, which includes the capability to record virtual users automatically, the capability to create test scripts programmatically, support for multiple programming languages and communication protocols, and—you guessed it—lots of fancy graphs and reports. Often it's the graphs that influence the final decision on what to purchase. I will highlight two products that are deemed to be the market leaders: Mercury LoadRunner and the Rational Test Suite. LoadRunner is an excellent and time-proven tool that provides the ultimate flexibility and has practically any feature you can find in a load-testing tool. An important factor is its capability to record and customize the virtual user scripts, which can make writing code unnecessary. It can even record at the protocol level for HTTP and RMI clients. Because load tests have a high perceived value (and rightfully so), the prices for the tools can be very steep. Testing a cluster of servers with hundreds of virtual users can result in license fees of thousands of dollars. We are going to look at JMeter, an open-source alternative that can be obtained for free from Apache's Web site. Although not nearly as polished or versatile, it offers similar core functionality and is sufficient for most Web-based applications. And make no mistake; you'll learn how to produce a few graphs and a report as well.

JMeter Overview

JMeter is a tool for load-testing and measuring the performance of Web sites and Java applications. It supports servers that accept HTTP and FTP connections but can be used to test databases, Perl scripts, and Java objects as well. JMeter supports basic script recording via a proxy server, but creating a solid test plan still requires a thorough understanding of the underlying protocol and server implementation. Test plan creation demands manual work, but just like JUnit, it is the kind of challenging work that developers actually enjoy. Because most new applications built today have a thin client interface, we will use JMeter to load-test a Web-based product called WebCream.

JMeter has a GUI that can be used to create a test plan and an engine that executes the test plan to generate the load. The test plan is a container for the configuration elements and logical controllers that represent settings and actions to be performed. The test creation is visual, and the end result is a tree of nested elements that describes the test and its execution. Using HTTP Proxy Server, which records the browsing actions, is a good way to quickly create a draft test plan. Whether you start by recording the test plan or by creating it manually, you need to know the elements used by JMeter to make the test plan work. The following nodes can be added to a test plan:

  • Thread group—This defines the number of virtual users that will be concurrently executing the nested nodes. It also enables you to specify the ramp-up time for staggering and the duration of the test.

  • Listeners—These can be added to provide visualization of the test results and to track the progress of the test execution. For example, to get a graph of the response times, the Graph Results listener can be added to a test plan.

  • Configuration elements—These are used to add protocol-specific managers and default settings for samplers, which are discussed later. For instance, for virtual clients to support HTTP sessions via cookies, the HTTP Cookie Manager element must be added.

  • Assertions—These are used to test the validity of the server response. Getting a response is not enough to say that a test has passed. Assertions enable you to check the response for a certain substring or an exact match, and if the check fails, the assertion fails. Failed assertions can be viewed using the assertion results listener.

  • Preprocessors—These are executed before an operation is performed. For example, the user parameters preprocessor can be used to define and initialize variables for an HTTP request.

  • Postprocessors—These are executed after an operation is performed. For example, the regular expression extractor can be added to parse the page title from an HTML page that was returned by the server.

  • Timers—These are used to introduce scheduled running and staggered execution of operations.

A thread group can have these additional nodes:

  • Logic controllers—These can be added to specify the control flow for nested nodes. For example, adding a loop controller enables the execution of nested nodes in a loop for a given number of times.

  • Samplers—These enable the sending of a request to the application being tested. Samplers perform the actual calls to the server. JMeter currently supports FTP, HTTP, SOAP, Java, JDBC, and LDAP requests.

After a test plan is created, it can be executed on the local machine. You can also configure several machines to act as remote JMeter servers, which can be controlled from the same GUI environment. This allows simulating more virtual clients than one machine can handle.

WebCream Overview

WebCream is a unique tool for Java that provides automated Web enabling for GUI-based Java applications and applets. WebCream allows developers to implement a GUI front end using AWT and Swing and, at the same time, automatically get HTML access to the application. In a way, WebCream can be thought of as a dynamic Java-to-HTML converter that transforms GUI frames and dialog boxes to HTML on-the-fly. It then emulates Web page actions as GUI events to retain the application's original logic. WebCream is unique in that it requires no modifications to existing forms or business logic and does not require programmers to learn any APIs. Because WebCream uses a browser-based interface, generates dynamic content, and maintains a session, it is a good choice for HTTP load-testing with JMeter.

WebCream comes with a built-in Tomcat Web server and a demo application; you should play with it a little bit to become familiar with the product. Use the WebCream demo application and be sure to check the HTML source the product generates for each page. Try to understand the URLs used by the product and how it passes the data back to the server, and look for common patterns in generated pages. The standard edition of WebCream is free but limited to five concurrent users, so you might have to restart Tomcat during your tests if you don't want to wait for the sessions to time out. The WebCream demo's main page displays a frame with three buttons—Login Dialog, Tabs and Table, and Tree Dialog (see Figure 11.2).

Figure 11.2. The main HTML page of the WebCream demo application.

Clicking the Login Dialog button displays the next page, which shows a dialog box allowing the user to enter her username and password and to select a domain. If the OK button is clicked on this page, the login information is passed to the main page. For the purposes of testing, we will limit ourselves to this functionality of the demo.

Creating a Web Test Plan

Now is the time to actually do some work. Download and install JMeter and WebCream. Run the JMeter GUI using the JMeter\bin\jmeter.bat file. Because we are going to be testing WebCream, start the built-in Tomcat using WebCream\bin\startServer.bat.

The initial tree in the JMeter GUI shows an empty test plan and a WorkBench. The WorkBench is just a placeholder for temporary nodes and experiments, so we will focus on the test plan. Because we want to simulate multiple simultaneous users, we must add a thread group by right-clicking the Test Plan node and selecting Add, Thread Group. Initially, we set the number of threads to 1 to simplify the testing, but later we will change it to 5. To simulate real-life user interactions with the server, we need to ensure that the user pauses before sending requests to the server. One hundred concurrent users in real life does not mean one hundred simultaneous requests on the server because users spend time interpreting the server response and filling out data on forms (sometimes even taking a coffee break). JMeter provides the Gaussian Random Timer for the simulation of user delays. We add it to our thread group and specify a constant delay of 300 milliseconds with a 100-millisecond deviation. This means that JMeter will pause each thread for roughly 300 milliseconds, varying by about 100 milliseconds, before executing the next step of the test plan.
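If you are curious how such a timer arrives at its pauses, the delay can be modeled as a constant offset plus a normally distributed deviation, roughly as in the following sketch (JMeter's actual implementation may differ in detail):

import java.util.Random;

// A rough model of a Gaussian random timer: a constant offset plus a
// normally distributed deviation. JMeter's actual code may differ.
public class GaussianDelaySketch {
    private static final Random RANDOM = new Random();

    public static long delay(long constantMillis, long deviationMillis) {
        // nextGaussian() returns values with mean 0 and standard deviation 1
        return (long) Math.abs(RANDOM.nextGaussian() * deviationMillis + constantMillis);
    }

    public static void main(String[] args) {
        for (int i = 0; i < 5; i++) {
            System.out.println(delay(300, 100) + " ms");  // ~300 ms, give or take ~100
        }
    }
}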

Before we embark on devising the HTTP requests to simulate, we need to take a few preparatory steps. WebCream maintains an HTTP session, which relies on cookies stored in the browser. We are simulating a browser-based client, so we add HTTP Cookie Manager to the thread group. Just adding this configuration element is enough to tell JMeter to store cookies, and we don't have to define any cookies manually.

Last, but not least, we want to use the HTTP Request Defaults configuration element. It is a convenient way to provide the common information in all HTTP requests only once. Obvious choices for inclusion into the HTTP Request Defaults are the protocol (HTTP), server name (localhost), path (/webcream/apps/WebCreamDemo), and port number (8040). If you looked carefully at the source of WebCream-generated HTML pages, you would notice that three hidden parameters are present on every page, as shown in Listing 11.3.

Example 11.3. Common Form Parameters Used by WebCream

<input type="hidden" name="__RequestId" value="1">
<input type="hidden" name="__WindowTitle" value="WebCream Demo">
<input type="hidden" name="__Action" value="">

As the user goes through the pages, the values of these parameters change to reflect the consecutive request number and the current window title. The __Action parameter is used to tell the server which action to perform on the application. For example, to close a window, the client code sets __Action to close. Because these parameters are sent with every request, it is prudent to include them in the HTTP Request Defaults. We use the JMeter GUI to add the three parameters, leaving the values blank for now. We will return to specify the values after you learn a little more about JMeter. Figure 11.3 shows the plan tree we have created so far.

Figure 11.3. The JMeter test plan tree, step 1.

We will simulate a user opening the WebCream demo in a browser, clicking the Login Dialog button to go to the Login page, and then clicking OK to go back to the main page. To ensure that there are no memory leaks or multithreading problems on the server, we will simulate the user opening and closing the dialog box several times.

To keep the test plan concise, we add Once Only Controller to the thread group, enabling us to group the sampling requests into a subtree and separate them from the configuration elements. Next, we add HTTP Request Sampler to the controller. We name it HTTP Request - Init and, because it is the initial request to the Web application, we specify the GET method and leave everything else blank. The information about the Web server (such as the protocol and host) is coming from the request defaults we specified earlier. At this point, we have created a step that is equivalent to a user typing

http://localhost:8040/webcream/apps/WebCreamDemo

in the browser address bar and pressing Enter. The server should respond with the main page (refer to Figure 11.2). We should verify that we got the right response; we do so by adding a Response Assertion to the HTTP Request - Init node. An assertion can be as simple as a substring that must be present in the response or as sophisticated as a number of regular expressions that should produce a match. We will keep it simple and just test that the response is the page titled WebCream Demo as we have seen in the browser. While writing the test plan, it helps to keep the browser open with the page you are scripting against. To complete the assertion, we specify that we want the response to contain the pattern <title>WebCream Demo</title>. Figure 11.4 shows the plan tree we have created so far.

Figure 11.4. The JMeter test plan tree, step 2.

We are now ready to proceed to the test that opens and closes a page with a dialog box. We want to do it several times, so we first add a Loop Controller logic controller to the Once Only Controller and specify 5 as the loop count. Our next task is to add an HTTP Request Sampler that simulates the user clicking the Login Dialog button. To see what has to be sent in this request, we need to open the source code for the main HTML page. Searching for Login Dialog takes us to the code shown in Listing 11.4.

Example 11.4. Login Button HTML Source

<form name="WebCreamForm" method="POST"
      action="http://localhost:8040/webcream/apps/WebCreamDemo"
      onSubmit="onSubmitForm()"
>
...
    <input type=button
       name="JButton31266642"
       value="Login Dialog"
       class=button
       OnClick=javascript:doSubmit('/button/JButton31266642')
       style="position:absolute;left:84;top:5;width:103;height:26"
    >
...
</form>

With some HTML familiarity, you can determine that when a user clicks the button, it invokes the doSubmit JavaScript function, passing '/button/JButton31266642' as a parameter. Searching for doSubmit on the page and in the included JavaScript files produces the following snippet, found in webcream_misc.js:

function doSubmit(action) {
  window.document.WebCreamForm.__Action.value=action;
  onSubmitForm();
  window.document.WebCreamForm.submit();
}

Thus, we conclude that when a user clicks the Login Dialog button, it sets the __Action parameter to /button/JButton31266642 and submits the form to the server. Accessing the WebCream demo in the browser several times, we can see that the action string changes. It always starts with /button/JButton, but the remaining digits vary from time to time because they are automatically generated. Most Web pages have some form of dynamic content, so the technique we use for WebCream is useful for other applications as well.

To work with dynamically generated content in JMeter, you must rely on variables and regular expressions. The Regular Expression Extractor is a postprocessor that applies a regular expression to the sampling response and places the extracted value into a variable. Regular expressions are a powerful mechanism for working with text input; if you are not familiar with them, I recommend learning about them. There is a plethora of references and tutorials covering regular expressions on the Web, as well as a book from O'Reilly called Mastering Regular Expressions.

After a variable is initialized by the extractor, its value can be passed to other test elements, such as samplers. We will use the extractor to obtain the values for the three parameters we have defined in HTTP Request Defaults (__RequestId, __WindowTitle, and __Action). We right-click the HTTP Request - Init node and add a postprocessor called Regular Expression Extractor. Next, we append - Request Id to the name of the extractor because it will be used to retrieve the value of this parameter. In the configuration screen for the extractor, we specify __RequestId as the reference name. Doing so tells JMeter that the result of the extraction should be stored in a variable called __RequestId. Looking at the HTML code in Listing 11.3 that defines the hidden form parameters, we can come up with a regular expression that will match the value of __RequestId:

name="__RequestId" value="(\d+)"

This regular expression uses static characters to uniquely identify the __RequestId definition and the \d+ mask to specify any number of digits. The parentheses specify which part of the expression should be used as the result. Because we are interested only in the numeric value, the parentheses surround \d+.

The template in the extractor enables the creation of a string out of the matches found by applying the regular expression. The template can include static text and $n$ as a placeholder for the nth match of the regular expression. We are expecting only one match, so we specify $1$ as the template—and we're done with the __RequestId extraction.
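To demystify what the extractor does behind the scenes, the following standalone sketch applies the same expression and template logic using plain java.util.regex. It is an illustration, not JMeter's actual code:

import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Illustrates the extraction performed by the Regular Expression Extractor:
// apply the expression to the response and take group 1, which corresponds
// to the $1$ template. This is an illustration, not JMeter's actual code.
public class ExtractorSketch {
    public static void main(String[] args) {
        String response = "<input type=\"hidden\" name=\"__RequestId\" value=\"1\">";
        Pattern pattern = Pattern.compile("name=\"__RequestId\" value=\"(\\d+)\"");
        Matcher matcher = pattern.matcher(response);
        if (matcher.find()) {
            String requestId = matcher.group(1);  // the value of the $1$ placeholder
            System.out.println("__RequestId = " + requestId);
        }
    }
}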

Similarly to __RequestId, we should add two more extractors for the __WindowTitle and __Action variables. This is a good time to exercise your brain and come up with your own regular expressions and template strings. But just in case, I'll make your life easier by saying that the regular expression for __WindowTitle can be

name="__WindowTitle" value="(.+)"

and the template can be $1$. The __Action regular expression can be

name="(JButton\d+)" value="Login Dialog"

and the template should be /button/$1$. The key in coming up with the regular expression and template is to produce a string that can be used as a value of a parameter for the next request. Figure 11.5 shows the test plan tree we have created so far.

Figure 11.5. The JMeter test plan tree, step 3.

We are now ready to use the variables, which contain the extracted values, as parameters to HTTP requests. We have already defined the parameters in the HTTP Request Defaults node, so let's go back to it and specify the values. To obtain the value of a JMeter variable, we must use the syntax ${name}, where name is the name of the variable. We specify ${__RequestId} as the value for the __RequestId parameter, ${__WindowTitle} as the value for __WindowTitle, and ${__Action} as the value for __Action.

Hang on, we're almost there. Before running the test, we need to add a few listeners to monitor the test execution. To test the script, the most useful listeners are View Results Tree and Assertion Results. View Results Tree shows each request with the request and response data, whereas Assertion Results is a quick and easy way to see which assertions have failed. You should not keep View Results Tree in the final version of the test script due to the performance overhead associated with collecting and storing all the data. Finally, we add the Aggregate Report listener to produce a summary of the execution and performance metrics per request.

If you have followed all the steps correctly, your test plan should look as shown in Figure 11.6.

Figure 11.6. The JMeter test plan tree for WebCream, finished.

Make sure that WebCream's Tomcat is running, and unleash the fury of JMeter by selecting Start from the Run menu. After a few minutes, the test should finish and you should be able to see the results in the listener nodes.

Quick Quiz

1: What is the purpose of load-testing?

2: What is the difference between simultaneous and concurrent users?

3: Which client protocols can be tested with JUnit?

4: How do JUnit test cases assert the validity of the response?

5: Which client protocols can be tested with JMeter?

6: Which configuration and sampler nodes are used in JMeter test plans?

7: How do you monitor the progress and the results of a running JMeter test plan?

8: What are the benefits and shortcomings of JUnit and JMeter?

In Brief

  • The purpose of load-testing is to assess how system performance meets service-level requirements under load.

  • Commercial and open-source load-testing products can include the capability to record virtual users automatically, create test scripts programmatically, support multiple programming languages and communication protocols, and generate graphs and reports.

  • JUnit is an open-source framework that can be harnessed to conduct simple load tests of RMI or plain Java servers. The JUnit test case is a Java class that is compiled and executed to test the application code.

  • JMeter is an open-source tool for load-testing and measuring the performance of Web sites and Java applications. It supports servers that accept HTTP and FTP connections but can be used to test databases, Perl scripts, and Java objects as well.

  • JMeter supports dynamic content with regular expression extractors and variables.

  • JMeter listeners can produce performance reports and graphs.
