© Sai Matam and Jagdeep Jain 2017

Sai Matam and Jagdeep Jain, Pro Apache JMeter, https://doi.org/10.1007/978-1-4842-2961-3_6

6. Distributed Testing

Sai Matam (1) and Jagdeep Jain (2)

(1)Pleasanton, California, USA

(2)Dewas, Madhya Pradesh, India

This chapter discusses how to perform distributed testing using JMeter. We cover the prerequisites and configuration of JMeter with remote hosts in master/slave environments. You’ll see how to run tests in GUI as well as non-GUI mode and learn about the various ways that JMeter sends information from the slave(s) to the master. Lastly, there is a section that will be useful for troubleshooting exceptions while you are developing test scripts and running them in a distributed environment.

At the end of this chapter, you’ll have a good idea about the distributed load testing approach using JMeter. You will be able to set up a distributed testing environment and trigger test scripts from various JMeter slaves. Those who are already familiar with distributed testing using JMeter can proceed to the next chapter.

When the load generated by JMeter reaches the limit of a single client machine in terms of CPU, memory, or network bandwidth, you need to use more than one machine. JMeter can be configured to do distributed testing.

In a distributed testing environment, each JMeter client is configured to simulate the load of a few hundred users, and together these clients trigger a few thousand requests simultaneously. This can be thought of as horizontal scaling of the load, because you increase the simulated load simply by adding client machines to the testing environment.

Distributed Testing Using JMeter

Distributed testing is performed using a master-slave model . A JMeter master node, together with one or more JMeter slave nodes, constitute a distributed testing cluster.

The test plan is loaded on the master. The hostnames or IP addresses of the slave machines are configured in the jmeter.properties file on the master. This enables those slave machines to be part of the JMeter distributed testing cluster, and they are visible in the master node GUI. It is assumed that JMeter is already installed on the slave nodes.

The slave nodes obtain a copy of the test plan from the master. The role of the master node is only to orchestrate the test. It is the slave nodes that execute the test and generate the load.

Distributed testing in JMeter is also called remote testing.

Prerequisites

The prerequisites for setting up the distributed testing cluster using JMeter are as follows:

  • All the slaves must be on the same subnet as the master.

  • The application under test should be on the same subnet.

  • All slaves and the master should have the same version of JMeter and JVM (see the version check sketch after this list).

  • The firewall on the master and the slaves should be turned off.

  • There should be no antivirus software installed on the master or the slaves.

  • The network should be stable.

  • There should not be any extraneous network activity on the subnet.
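A quick way to verify the version prerequisite is to print the JMeter and Java versions on the master and on every slave and compare the output. This sketch assumes the jmeter and java commands are on the PATH of each machine:

C:>jmeter -v
C:>java -version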

Configuration

Open the jmeter.properties file on the master and add each slave’s hostname or IP address, as shown here:

remote_hosts=192.168.0.7,192.168.0.8

This is the only configuration that is required.

External configuration files needed by each slave are located on that machine.

For example, in a shopping-cart application where a username and password are required, you could store a list of these in files and load them separately on each slave. Make sure that the username/password pairs are distinct across all slaves to avoid duplicate logins.

On each of the slave machines, create a users.csv file, ensuring that the contents are distinct across each machine.

In slave #1, here is the users list.

[email protected],user1
[email protected],user2
[email protected],user3
[email protected],user4
[email protected],user5

In slave #2, here is the users list.

[email protected],user6
[email protected],user7
[email protected],user8
[email protected],user9
[email protected],user10

Ensure that the users.csv file is placed in the bin folder of each slave’s $JMETER_HOME directory. It will be picked up when the test executes.

Let’s illustrate this with an example.

Follow these steps or download the DistributedTestPlan.jmx file:

  1. Create a test plan and give it a meaningful name, such as Distributed Testing.

  2. Click on Test Plan and go to Edit ➤ Add ➤ Threads (Users). Add Thread Group. Configure Number of Threads (Users) as 1 and Loop Count as 1.

  3. Click on Thread Group and go to Edit ➤ Add ➤ Config Element. Add HTTP Cookie Manager.

  4. Click on Thread Group and go to Edit ➤ Add ➤ Config Element. Add HTTP Request Defaults. Configure Server Name or IP as <your_machine_ip_or_hostname> and Port Number as 8080.

  5. Click on Thread Group and go to Edit ➤ Add ➤ Sampler. Add HTTP Request. Configure Path as /user/signIn and Method as POST.

  6. Click on HTTP Request and go to Edit ➤ Add ➤ Config Element. Add CSV Data Set Config and configure it as shown in Figure 6-1 (a sketch of the likely settings follows this list).

    Figure 6-1. CSV data set config
  7. Click on HTTP Request again and configure the parameters as shown in Figure 6-2 (see the sketch after this list).

    Figure 6-2. HTTP request parameters
  8. Click on HTTP Request and go to Edit ➤ Add ➤ Assertions. Add Response Assertion. Configure Response Field to Test as Response Code, Pattern Matching Rules as Equals, and Patterns To Test as 200.

  9. Click on Thread Group and go to Edit ➤ Add ➤ Sampler. Add HTTP Request. Configure Path as /user/signOut and Method as HEAD.

  10. Click on HTTP Request and go to Edit ➤ Add ➤ Assertions. Add Response Assertion. Configure Response Field to Test as Response Code, Pattern Matching Rules as Equals, and Patterns To Test as 200.

  11. Click on Thread Group and go to Edit ➤ Add ➤ Listener. Add View Results Tree.

  12. Click on Thread Group and go to Edit ➤ Add ➤ Listener. Add View Results in Table.

  13. Save the test plan.
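As a quick reference, here is a minimal sketch of the settings shown in Figures 6-1 and 6-2. The variable names email and password, and the sign-in parameter names, are assumptions chosen to match the users.csv files above; adjust them to your application.

CSV Data Set Config (Figure 6-1, assumed values):

    Filename: users.csv
    Variable Names (comma-delimited): email,password
    Delimiter (use '\t' for tab): ,
    Recycle on EOF?: True
    Stop thread on EOF?: False

HTTP Request parameters for /user/signIn (Figure 6-2, assumed names):

    email = ${email}
    password = ${password}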

Running the Test

After updating the jmeter.properties file with remote_hosts, start the JMeter GUI and load the test plan. Under the Run menu, the following options are available:

  • The Remote Start and Remote Start All options will start the test on remote hosts.

  • The Remote Stop and Remote Stop All options will stop the test on remote hosts.

  • The Remote Exit and Remote Exit All options will stop the JMeter server on remote hosts.

  • The Remote Shutdown and Remote Shutdown All options will stop the test on remote hosts but not the JMeter server, so you can start the test again on the remote hosts by using the Remote Start option (see Figure 6-3).

    Figure 6-3. JMeter Remote Start option

From the apache-jmeter-3.0/bin directory on each slave, you need to start jmeter-server using the following command:

C:>jmeter-server
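On Linux or macOS slaves, the equivalent command, run from the same bin directory, uses the jmeter-server shell script:

$ ./jmeter-server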

Once started, it will look like the output shown next.

Remote host #1 jmeter-server logs.

C:> jmeter-server
Could not find ApacheJmeter_core.jar ...
... Trying JMETER_HOME=..
Found ApacheJMeter_core.jar
Writing log file to: C:\apache-jmeter-3.0\bin\jmeter-server.log
Created remote object:
UnicastServerRef [liveRef: [endpoint:[192.168.0.7:51324](local),objID:[535e2c0b:15c0e3a3328:-7fff, 3496195728845345408]]]

Remote host #2 jmeter-server logs.

C:> jmeter-server
Could not find ApacheJmeter_core.jar ...
... Trying JMETER_HOME=..
Found ApacheJMeter_core.jar
Writing log file to: C:\apache-jmeter-3.0\bin\jmeter-server.log
Created remote object:
UnicastServerRef [liveRef: [endpoint:[192.168.0.8:63904](local),objID:[-99f93d6:15c0e3a7148:-7fff, -6488513131611751121]]]

GUI Mode

You have set up two slaves (remote hosts), which are visible in the JMeter GUI. Run the test on a single remote host by selecting Remote Start and choosing that host, or run it on all of them by using the Remote Start All option.

The Thread Group is configured to run 1 thread, so selecting Remote Start All triggers the test on both remote hosts. The View Results in Table listener will show four responses (two samplers from each of the two hosts), as shown in Figure 6-4.

Figure 6-4. Master view results tree

Also on the remote host, the jmeter-server logs will show the test start and the test end.

Remote host #1 jmeter-server logs.

Starting the test on host 192.168.0.7 @ Mon May 15 15:31:15 PDT 2017 (1494887475261)
Finished the test on host 192.168.0.7 @ Mon May 15 15:31:16 PDT 2017 (1494887476397)

Remote host #2 jmeter-server logs.

Starting the test on host 192.168.0.8 @ Mon May 15 15:31:15 PDT 2017 (1494887475062)
Finished the test on host 192.168.0.8 @ Mon May 15 15:31:15 PDT 2017 (1494887475390)

Non-GUI Mode

GUI mode consumes a lot of memory. JMeter also provides a non-GUI option to run the tests remotely from the command line.

Execute this command with -R and add the remote hosts’ IP addresses.

C:>jmeter -n -t DistributedTestPlan.jmx -R 192.168.0.7,192.168.0.8

It will show the following output. You can see that tests are executed on remote engines.

C:>jmeter -n -t DistributedTestPlan.jmx -R 192.168.0.7,192.168.0.8
Writing log file to: jmeter.log
Creating summariser <summary>
Created the tree successfully using DistributedTestPlan.jmx
Configuring remote engine: 192.168.0.7
Configuring remote engine: 192.168.0.8
Starting remote engines
Starting the test @ Mon May 15 15:36:44 PDT 2017 (1494887804089)
summary = 0 in 00:00:00 = ******/s Avg: 0 Min: 9223372036854775807 Max: -9223372036854775808 Err: 0 (0.00%)


Tidying up remote @ Mon May 15 15:36:45 PDT 2017 (1494887805458)
Remote engines have been started
Waiting for possible Shutdown/StopTestNow/Heapdump message on port 4445
summary = 2 in 00:00:01 = 3.7/s Avg: 106 Min: 105 Max: 107 Err: 0 (0.00%)
Tidying up remote @ Mon May 15 15:36:45 PDT 2017 (1494887805850)
... end of run
... end of run

Alternatively, execute the following command with -r, without listing the remote hosts’ IP addresses. This command takes the remote hosts from the remote_hosts property in the jmeter.properties file.

C:>jmeter -n -t DistributedTestPlan.jmx -r
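In non-GUI mode, no results file is written unless you request one. A common pattern, shown here as a sketch with results.jtl as an arbitrary file name, is to add the -l option so that the consolidated samples are saved to a file you can open later in a listener such as View Results in Table:

C:>jmeter -n -t DistributedTestPlan.jmx -r -l results.jtl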

RMI Port

By default, server_port is set to 1099. Sometimes this port is blocked and JMeter cannot be started in the master-slave environment. In this case, you need to set server_port on the slaves to a different value.

Open the jmeter.properties file on the slaves and change it to a different port number, such as 1234.

# RMI port to be used by the server (must start rmiregistry with same port)
server_port=1234

The command for running tests should be as follows:

C:> jmeter -n -t DistributedTestPlan.jmx -R 192.168.0.7:1234,192.168.0.8:1234

If you forget to use the updated port number in the command, you will get the exception shown here:

C:> jmeter -n -t DistributedTestPlan.jmx -R 192.168.0.7,192.168.0.8:1234
Writing log file to: jmeter.log
Creating summariser <summary>
Created the tree successfully using DistributedTestPlan.jmx
Configuring remote engine: 192.168.0.7:1234
Connection refused to host: 192.168.0.7; nested exception is:
        java.net.ConnectException: Connection refused: connect
Failed to configure 192.168.0.7:1234
Configuring remote engine: 192.168.0.8:1234
Connection refused to host: 192.168.0.8; nested exception is:
        java.net.ConnectException: Connection timed out: connect
Failed to configure 192.168.0.8:1234
Stopping remote engines
Remote engines have been stopped
Error in NonGUIDriver java.lang.RuntimeException: Following remote engines could not be configured:[192.168.0.7:1234,
192.168.0.8:1234]
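Alternatively, assuming both slaves use the same custom port, you can append the port to each entry of remote_hosts in the master’s jmeter.properties file; the -r option and the GUI Remote Start entries will then use that port automatically:

remote_hosts=192.168.0.7:1234,192.168.0.8:1234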

Sample Sender Mode

In distributed testing with the master-slave environment, the remote hosts (slaves) execute the test and send samples to the client (master). The master gathers the samples from all the remote hosts and consolidates them into a single view of the target application server.

Depending on the configuration, a remote host may have to send each sample back to the master before its thread can continue. This affects the maximum throughput of the test, because the sample result has to be sent back before the thread proceeds.

The client node (master) can be configured with one of the following sample sender modes:

  • Standard: Send samples synchronously as soon as they are generated.

    #mode=Standard
  • Batch Mode: Send saved samples when either the count (num_sample_threshold) or the time (time_threshold) exceeds a threshold, at which point the samples are sent synchronously. The thresholds can be configured on the server using the following properties:

    num_sample_threshold: The number of samples to accumulate; the default is 100

    time_threshold: The time threshold; the default is 60000 ms (60 seconds)

    #mode=Batch
    ...
    #num_sample_threshold=100
    # Value is in milliseconds
    #time_threshold=60000
    ...
  • Statistical Mode: Send a summary sample when either the count or time exceeds a threshold. The samples are summarized by thread group name and sample label.

    The following fields are accumulated:

    • Elapsed time

    • Latency

    • Bytes

    • Sample count

    • Error count

    Other fields that vary between samples are lost.

    #mode=Statistical
    # Set to true to key statistical samples on threadName rather than threadGroup
    #key_on_threadname=false
    ...
    #num_sample_threshold=100
    # Value is in milliseconds
    #time_threshold=60000
    ...
  • Hold Mode: Hold samples in an array until the end of a run. This may use a lot of memory on the server and is discouraged.

    #mode=Hold
  • DiskStore Mode: Store samples in a disk file (under java.io.tmpdir) until the end of a run. The serialized data file is deleted on JVM exit.

    # DiskStore: as for Hold mode, but serialises the samples to disk, rather than saving in memory
    #mode=DiskStore
  • StrippedDiskStore Mode: Remove responseData from successful samples and use the DiskStore sender to send them.

    # Same as DiskStore but strips response data from SampleResult
    #mode=StrippedDiskStore
  • Stripped Mode: Remove responseData from successful samples.

    #mode=Stripped
  • StrippedBatch Mode: Remove responseData from successful samples and use the Batch sender to send them.

    #mode=StrippedBatch
  • Asynch Mode: Samples are temporarily stored in a local queue. A separate worker thread sends the samples. This allows the test thread to continue without waiting for the result to be sent back to the client (the master). However, if samples are being created faster than they can be sent, the queue will eventually fill up, and the sampler thread will block until some samples can be drained from the queue. This mode is useful for smoothing out peaks in sample generation. The queue size can be adjusted by setting the JMeter property asynch.batch.queue.size (default 100) on the server node.

    # Asynchronous sender; uses a queue and background worker process to return the samples
    #mode=Asynch
    # default queue size
    #asynch.batch.queue.size=100
  • StrippedAsynch Mode: Remove responseData from successful samples and use the Asynch sender to send them.

    # Same as Async but strips response data from SampleResult                                                      
    #mode=StrippedAsynch
  • Custom Implementation Mode: Set the mode parameter to a custom sample sender class name. The class must implement the interface SampleSender and have a constructor that takes a single parameter of type RemoteSampleListener.

    #mode=org.example.load.MySampleSender

Open the jmeter.properties file and set the mode as per the requirements.
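For example, here is a minimal sketch that switches the cluster to StrippedBatch mode; the thresholds shown are the defaults, repeated only for illustration:

mode=StrippedBatch
num_sample_threshold=100
# Value is in milliseconds
time_threshold=60000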

Note

Stripped modes (StrippedDiskStore, Stripped, StrippedBatch, and StrippedAsynch) strip responseData, meaning that elements that rely on the previous responseData being available will not work. Keep this in mind while developing test scripts.

Unreachable Remote Hosts

The unreachable remote host condition occurs when one or more of the remote hosts cannot be reached by the client (the master), perhaps because they have not yet booted up or have been shut down. In this case, when you trigger the test, it fails.

C:>jmeter -n -t DistributedTestPlan.jmx -R 192.168.0.7:1234,192.168.0.8:1234
Writing log file to: jmeter.log
Creating summariser <summary>
Created the tree successfully using DistributedTestPlan.jmx
Configuring remote engine: 192.168.0.7:1234
Connection refused to host: 192.168.0.7; nested exception is:
        java.net.ConnectException: Connection refused: connect
Failed to configure 192.168.0.7:1234
Configuring remote engine: 192.168.0.8:1234
Connection refused to host: 192.168.0.8; nested exception is:
        java.net.ConnectException: Connection timed out: connect
Failed to configure 192.168.0.8:1234
Stopping remote engines
Remote engines have been stopped
Error in NonGUIDriver java.lang.RuntimeException: Following remote engines could not be configured:[192.168.0.7:1234,
192.168.0.8:1234]

In the first case, when the remote hosts are still booting up, JMeter has properties that make it wait and retry before triggering the test. You can set how many initialization attempts JMeter makes and how long it waits between attempts.

By default, client.tries is set to 1 and client.retries_delay is set to 5000 milliseconds. Uncomment these properties, increase client.tries as needed, and rerun the test; a sketch with example values follows the snippet below.

# When distributed test is starting, there may be several attempts to initialize
# remote engines. By default, only a single try is made. Increase the following property
# to make it retry additional times
client.tries=1


# If there are initialization retries, the following property sets a delay between attempts
client.retries_delay=5000
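For example, here is a sketch that retries the initialization three times with a 10-second delay between attempts; the values are arbitrary and chosen only for illustration:

client.tries=3
# Value is in milliseconds
client.retries_delay=10000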

JMeter also has a property to skip remote hosts that are not reachable and continue the test with the rest.

Set client.continue_on_fail to true in the jmeter.properties file.

# When all initialization tries are made, test will fail if some remote engines are failed
# Set the following property to true to ignore failed nodes and proceed with test
client.continue_on_fail=true

Run the test again and it will skip the unreachable host.

C:>jmeter -n -t DistributedTestPlan.jmx -R 192.168.0.7,192.168.0.8
Writing log file to: jmeter.log
Creating summariser <summary>
Created the tree successfully using DistributedTestPlan.jmx
Configuring remote engine: 192.168.0.7
Configuring remote engine: 192.168.0.8
Connection refused to host: 192.168.0.8; nested exception is:
        java.net.ConnectException: Connection timed out: connect
Failed to configure 192.168.0.8
Following remote engines could not be configured:[192.168.0.8]
Continuing without failed engines...
Starting remote engines
Starting the test @ Mon May 15 17:04:14 PDT 2017 (1494893054551)
Remote engines have been started
Waiting for possible Shutdown/StopTestNow/Heapdump message on port 4445
summary = 2 in 00:00:01 = 2.2/s Avg: 376 Min: 279 Max: 474 Err: 0 (0.00%)
Tidying up remote @ Mon May 15 17:04:16 PDT 2017 (1494893056800)
... end of run

Limitations

The limitations with JMeter in distributed testing are listed here:

  • It is quite expensive to set up dedicated hardware for performance testing on the premises. A cloud-based distributed testing environment will provide a solution to this limitation.

  • RMI cannot communicate across subnets without a proxy; therefore, neither can JMeter.

  • Since JMeter sends all the test results to the controlling console, it is easy to saturate the network. It is a good idea to use the Simple Data Writer to save the results and view the file later with one of the graph Listeners.

  • A single JMeter client running on a 2-3 GHz CPU can handle 300-600 threads, depending on the type of test. (The exception is web services.) XML processing is CPU intensive and will rapidly consume all the CPU cycles. As a general rule, the performance of XML-centric applications is 4-10 times slower than that of applications using binary protocols.

Conclusion

In this chapter, you learned to distribute load generation by using multiple machines, configuring remote hosts, and verifying that the remote hosts have successfully run the test. You also learned about the limitations of distributed testing using JMeter. In the next chapter, you will learn JMeter best practices that will make you a more efficient user of JMeter.
