Developing performance tests

It makes sense to design performance test scenarios that are close to the real world. Performance test technology should support scenarios that not only ramp up a large number of users, but also simulate realistic user behavior. A typical flow could be: a user visits the home page, logs in, follows a link to an article, adds the article to their shopping cart, and completes the purchase.

There are several performance test technologies available. At the time of writing, arguably the most widely used are Gatling and Apache JMeter.

Apache JMeter executes test scenarios that put applications under load and generates reports from the test execution. It uses an XML-based configuration, supports multiple as well as custom communication protocols, and can replay recorded load test scenarios. JMeter defines test plans that are composed of so-called samplers and logic controllers, which are used to define test scenarios that simulate user behavior. JMeter can run distributed, using a master/slave architecture, and can therefore generate load from several directions. It ships with a graphical UI that is used to edit the test plan configuration. Command-line tools execute the tests locally or on a Continuous Integration server.

Gatling provides a similar performance test solution, but defines its test scenarios programmatically in Scala. This provides a lot of flexibility in defining test scenarios, the behavior of virtual users, and how the test progresses. Gatling can also record and reuse user behavior. Since the tests are defined programmatically, flexible solutions are possible, such as dynamically feeding test data from external sources. So-called checks and assertions are used to verify whether a single test request or the whole test case was successful.

Unlike JMeter, Gatling runs on a single host only and does not distribute the load generation.

The following code snippet shows the definition of a simple Gatling simulation in Scala:

import io.gatling.core.Predef._
import io.gatling.core.structure.ScenarioBuilder
import io.gatling.http.Predef._
import io.gatling.http.protocol.HttpProtocolBuilder
import scala.concurrent.duration._

class CarCreationSimulation extends Simulation {

  val httpConf: HttpProtocolBuilder = http
    .baseURL("http://test.car-manufacture.example.com/car-manufacture/resources")
    .acceptHeader("*/*")

  val scn: ScenarioBuilder = scenario("create_car")
    // retrieve all cars
    .exec(http("request_1")
      .get("/cars"))
    // create a new car and save the Location header of the created resource
    .exec(http("request_2")
      .post("/cars")
      .body(StringBody("""{"id": "X123A234", "color": "RED", "engine": "DIESEL"}""")).asJSON
      .check(header("Location").saveAs("locationHeader")))
    // follow the newly created resource
    .exec(http("request_3")
      .get("${locationHeader}"))
    .pause(1 second)

  setUp(
    scn.inject(rampUsersPerSec(10).to(20).during(10 seconds))
  ).protocols(httpConf)
    .constantPauses

}

The create_car scenario consists of three client requests, which retrieve all cars, create a car, and follow the newly created resource. The scenario configures multiple virtual users: the number of users starts at 10 and is ramped up to 20 within 10 seconds of runtime.
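Besides the per-request checks shown above, assertions can verify global statistics of a whole run, such as response times or success rates. The following is a minimal sketch under the Gatling 2 DSL, assuming a hypothetical health check endpoint; the endpoint URL and thresholds are illustrative only:

```scala
import io.gatling.core.Predef._
import io.gatling.http.Predef._

class HealthCheckSimulation extends Simulation {

  // hypothetical endpoint; replace with a real resource of the tested system
  val scn = scenario("health_check")
    .exec(http("status_request")
      .get("http://test.car-manufacture.example.com/health")
      // check: verifies a property of this single request
      .check(status.is(200)))

  setUp(scn.inject(atOnceUsers(10)))
    // assertions: verify statistics of the whole simulation run
    .assertions(
      global.successfulRequests.percent.is(100),
      global.responseTime.max.lessThan(500)
    )
}
```

If an assertion fails, the Gatling run exits with a non-zero status, which lets a Continuous Integration server fail the build.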

The simulation is triggered via the command line and executed against a running environment. Gatling writes the test results to HTML report files.

This example gives an idea of what is possible with Gatling tests.

Since performance tests should reflect somewhat realistic user scenarios, it makes sense to reuse existing system test scenarios for performance tests. Besides programmatically defining user behavior, pre-recorded test runs can be used to feed in data from external sources such as web server log files.
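Feeding data from external sources can be sketched with Gatling's feeders. The following example, again using the Gatling 2 DSL, assumes a hypothetical cars.csv file with the header line id,color,engine; the file name and columns are assumptions for illustration:

```scala
import io.gatling.core.Predef._
import io.gatling.http.Predef._

class CarFeederSimulation extends Simulation {

  // reads records from cars.csv; circular restarts from the top when exhausted
  val carFeeder = csv("cars.csv").circular

  val scn = scenario("create_cars_from_feed")
    // each virtual user pulls the next record and exposes its columns as session attributes
    .feed(carFeeder)
    .exec(http("create_car")
      .post("http://test.car-manufacture.example.com/car-manufacture/resources/cars")
      .body(StringBody("""{"id": "${id}", "color": "${color}", "engine": "${engine}"}""")).asJSON)

  setUp(scn.inject(atOnceUsers(10)))
}
```

The same mechanism can feed recorded data, for example values extracted from web server log files, into the simulated requests.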
