© Erik Ostermueller 2017
Erik Ostermueller, Troubleshooting Java Performance, https://doi.org/10.1007/978-1-4842-2979-8_4

4. Load Generation Overview

Erik Ostermueller
(1)
Little Rock, Arkansas, USA
 
We have all seen brief glimpses of concern during meetings and other discussions about whether your product will be able to handle a production workload. This is performance angst. It's like a big, red bubble of uncertainty that hovers over the team behind an unreleased server-side software product.
Validating whether the architecture of a system performs well is a great way to pop the big, red bubble, and this chapter helps you get started doing that early in the software development life cycle (SDLC).
This chapter presents a two-part plan for creating load scripts, which are essential for understanding how your application behaves under stress. The first part of the plan helps you create your first script quickly, so you can understand the performance of your architecture.
The objectives of this chapter are:
  • Understand that load testing the architecture of the system can be done with a very basic, no-frills load script with just a few business processes.
  • Understand how tuning architecture-wide performance problems can provide a quick performance boost to all business processes.
  • Understand the load gen script enhancements required to more realistically model production business processes.
As if developing and testing software weren’t hard enough already, learning how to generate load for a load test takes extra work. Here are just a few of the tasks involved in load testing:
  • Creating and maintaining scripts for a load generator that apply stress to the system under test (SUT).
  • Creating and maintaining a production-sized data set for your database.
  • Creating and maintaining user IDs and passwords that your load test will use to sign on to the SUT.
Because of all this and other work, load testing 100% of the SUT's business processes is rarely, if ever, done. It just is not cost-effective. My fall-back approach is to first load test the basic architecture with a simple First Priority script, and then later enhance the script to cover the most critical processes (the Second Priority script). Tuning your system with these two scripts educates you on the techniques required to make the other SUT business processes perform, as long as those processes are underpinned by roughly the same architecture, libraries, container, logging, authentication and authorization mechanisms, and so on.
This chapter provides an overview of the enhancements required to create First and Second Priority load scripts for your SUT. The First Priority script helps you quickly discover architecture-wide performance defects. The Second Priority script provides a little more workload realism. But first, there are some basic skills you will need for enhancing load scripts. We will cover those skills in the next section, which also includes a brief introduction to the load generator.

The Load Generator

In Chapter 2, when we talked about the modest tuning environment, we said that a load generator is basically a network traffic generator used to see whether your SUT can handle the stress of a production-like load. It is possible to assemble a script manually, but scripts are normally created by a “record-and-playback” process. This section talks about all the enhancements you'll normally need to make after that initial recording has been done and tested.
Performance problems can be fixed in code or in data or in configuration, but there are also many things that need to be corrected inside a load script to best approximate your production workload.

Correlation Variables

Of all the enhancements you make to a load script, creating correlation variables is the one you will use most frequently, so everyone should be familiar with it. You create these variables in the load generator's scripting language. A variable holds a particular piece of data from the output of an HTTP request, so that data can later be presented as an input parameter, or perhaps in the POST body, of a subsequent request.
Here is a quick example: let's say you used your browser to create a new customer in a web app, and the web app generated a new customer ID (custId=2360 in this case). Then you chose to update the customer details with a new street address, and the “customer update” URL carried custId=2360 as a parameter.
If you were recording this with your load generator, this URL would be recorded and saved into your script, verbatim. Days and weeks later when you run this script again, use of this particular custId of 2360 will be quite inappropriate, because new custId values are created on every run. As such, we don't want hard-coded values like this encoded in the script files.
So, we carefully replace a hard-coded value with a correlation variable. You are responsible for updating the script to locate custId=2360 in the response, storing it in the correlation variable, and then using that variable on some subsequent request(s).
I went over that pretty quickly, so Table 4-1 summarizes the required changes.
Table 4-1.
Enhancements Required to Replace a Hard-Coded ID from Your System with a Correlation Variable
1. As recorded: The load script calls some URL to create a new customer.
   Enhancement: No changes.
2. As recorded: The response to the previous request is returned to the load generator, and somewhere inside the response is the newly generated customer ID 2360.
   Enhancement: Enhance the load script to locate/grab the newly created unique customer ID from inside the HTTP response. Store the value in a load script variable; CUST_ID would be a good name. With JMeter, you can accomplish this with a Regular Expression Extractor.
3. As recorded: The load script submits a URL to update the customer details; the recorded cust ID 2360 is passed as a URL parameter.
   Enhancement: Instead of passing the hard-coded cust ID of 2360, enhance the script to pass the number stored in the CUST_ID variable.
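To make the extraction step concrete, here is a minimal Java sketch of what a Regular Expression Extractor does under the covers: find the generated ID in a response body, hold it, and splice it into a later URL. The regex, the /customer/update path, and the class name are my own illustrative assumptions, not JMeter internals.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class CorrelationExtractor {
    // Pull the generated custId out of an HTTP response body, the way
    // a Regular Expression Extractor would store it in CUST_ID.
    static String extractCustId(String responseBody) {
        Matcher m = Pattern.compile("custId=(\\d+)").matcher(responseBody);
        return m.find() ? m.group(1) : null;
    }

    // Use the stored value in a subsequent request instead of the
    // hard-coded 2360. The path here is hypothetical.
    static String buildUpdateUrl(String custId) {
        return "/customer/update?custId=" + custId;
    }

    public static void main(String[] args) {
        String response = "<html>Created customer custId=2360 OK</html>";
        System.out.println(buildUpdateUrl(extractCustId(response)));
    }
}
```

The same two-step pattern (capture, then substitute) applies to any server-generated value, not just customer IDs.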
This correlation variable technique can and should be used in much more complicated load scripts, especially ones that are recorded. The trick for you, a troubleshooting trick, is to find exactly where in the scripts the technique needs to be applied. At a minimum, you should carefully consider applying this technique in these places in your scripts:
  • As shown earlier, anywhere the server-side code generates a unique ID.
  • Anywhere you typed in data or selected some option during the script recording.
  • All parts of the application that store HTML hidden variables.
  • Any other place, as with hidden variables, where JavaScript will move data from the output of one server-side request to the input of another.
  • All places where CSRF tokens ( https://stackoverflow.com/questions/5207160/what-is-a-csrf-token-what-is-its-importance-and-how-does-it-work ) are used. These unguessable tokens are generated on the fly and stored in URL parameters. They make it tough for malicious web pages in one tab on your browser to execute requests to other sites (like your bank) in a different tab on your browser.
  • When using JMeter, correlation variables are not required for JSESSIONID support. Instead, just make sure your JMeter script has one Cookie Manager.
To repeat, you are on the hook for finding where correlation variables need to be added. Start with the previous list; after that, yours is an ugly task: carefully assess whether each HTTP request is returning the right data, and apply this technique as necessary. Another way to find the locations in the script for correlation variables is to think about all the data items you typed into the browser: searching for a particular customer name, selecting a particular item to purchase, selecting that special manatee that you'd like to see in a singles bar.
Regardless of which load generator is used, correlation variables are critical to applying real production load. But if you’re using JMeter, Chapter 7 provides a step-by-step guide to creating correlation variables based on an example JMeter .jmx plan in the github.com repo for this book. Look for jpt_ch07_correlationVariables.jmx.

Sequencing Steps in a Load Script

When you record a load script , you log on to the SUT, navigate around a handful of web pages executing business processes (like the “create customer” detailed earlier) and then you log off.
Load generators let you choose to play a recorded script once, repeat it a specific number of times, or repeat it for a particular duration of time. These options are available in JMeter in something called a thread group, which is also where you dial in the number of threads that should be replaying the script at a time. The default one that comes with JMeter is fine for basic tasks, but I really like the Concurrency Thread Group from BlazeMeter. It has a nice visual display of how quickly your threads will ramp up to full load. You can download it from jmeter-plugins.org.
Often, you want two (or more) sets of users executing different business processes at the same time. The not-so-obvious approach to scripting this is to make two separate recordings, one for each task, and then combine the two scripts into one. To do this in JMeter, create two separate thread groups in a new/blank/empty JMeter script, and perhaps name them A and B.
Then record and test the two load scripts separately, and copy all the activity in one into the A thread group and the other into the B.
There are some very compelling reasons for this organization. It lets you dial in, for example, a higher count of threads (more load) with one thread group and a lower volume with the other, which enables you to reflect the actual volumes (roughly) generated by your end users in production.
Furthermore, this helps you to model the activity of a web application that helps two different groups of people coordinate activity. Say, for example, one group creates orders for widgets while the other approves each order. It’s pretty straightforward to implement one thread group that repeatedly creates orders. To implement the other thread group with the order approval activity, start by writing a script that repeatedly checks for new orders. Load generators, JMeter included, allow you to add conditional tests that would check in the response HTML for tags or attributes that indicate whether any new orders were created. Once a new order is detected, you can add the recorded steps to approve the order. See the JMeter If Controller and While Controller for details. The Loop Controller can be helpful, as well as the Setup and Teardown thread groups.
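The create/approve coordination described above can be sketched in plain Java. Here, a queue stands in for the SUT's new-order list, the first thread plays the order-creating thread group, and the polling loop plays the role of the While Controller checking responses for new orders. All names and timing values are illustrative assumptions.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class OrderCoordination {
    // Runs one creator "thread group" and one approver "thread group";
    // returns the number of orders approved.
    static int run(int ordersToCreate) {
        BlockingQueue<Integer> newOrders = new LinkedBlockingQueue<>();
        AtomicInteger approved = new AtomicInteger();

        Thread creator = new Thread(() -> {
            for (int i = 1; i <= ordersToCreate; i++) {
                newOrders.add(i);                    // "create order" request
            }
        });
        Thread approver = new Thread(() -> {
            while (approved.get() < ordersToCreate) { // While Controller: keep checking
                try {
                    Integer order = newOrders.poll(100, TimeUnit.MILLISECONDS);
                    if (order != null) {             // If Controller: a new order appeared
                        approved.incrementAndGet();  // replay the recorded approval steps
                    }
                } catch (InterruptedException e) {
                    return;
                }
            }
        });
        creator.start();
        approver.start();
        try {
            creator.join();
            approver.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return approved.get();
    }
}
```

In a real script, the "queue" is just the SUT's own database, and the approver's check is an HTTP request whose response HTML is inspected for new-order markers.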
With these load generator basics under your belt, the next two sections lay out a two-step plan, “First Priority” and “Second Priority,” that helps you balance the conflicting concerns of creating a load script quickly and creating a script that applies load somewhat realistically.

The First Priority Script

Performance angst is real; consider these tough-to-answer questions: How many defects must we fix before our performance goals are met? How long will that take? Have we invested in the wrong Java architecture, one that will never perform well enough? Taunting project planners with these kinds of angst-ridden and unanswerable questions is a bit of fun; try it some time.
The First Priority script helps you get up and running quickly, fixing some of the most obvious performance defects and curtailing some of this performance angst. This section provides a minimum checklist of “must have” load script enhancements, the ones required to convince you and your team of the validity of your load test. But also, look at the “First Priority” version as a cap: Once you have added these First Priority enhancements, stop. Focus on addressing the main performance defects uncovered by the P.A.t.h. Checklist in Chapter 8. Once you have the biggest issues under control, then you can jump to the Second Priority load scripting enhancements, later in this chapter, to finish off the job and deliver some great performance.

Load Scripts and SUT Logons

The third performance anti-pattern discussed in Chapter 1 is “Overprocessing.” It says, in part, that trying to load-test and tune a seldom-used business process is a waste of time. Instead, we should focus on the most important and frequently used processes.
When first creating a load script, it is very easy to run into this exact situation, and have your users log in and out of the system much more frequently than would happen in production.
So, the main goal of a First Priority load script is to make sure the SUT logon activity in your script is somewhat realistic, especially for SUTs with processing-intensive logons.
The intensive activity I’m talking about includes things like creating resources for a user session, single sign-on, authorization of hundreds of permissions, password authentication, and more. The more complex the logon process, the more care that needs to be put into getting your load script to emulate realistic logon scenarios. Why? Because it is a waste of time to tune something that isn’t executed that often.
If you don't have good data from production to understand logon frequency (for example, data that says how many logons happen system-wide in an hour as compared to other business processes), you will have to make a judgment call. In most systems, real users do not log on, perform two business processes, and then log off—a 1 to 2 ratio. Instead, they do more work before logging out. Perhaps 1 logon for every 10 other business processes is a more reasonable ratio. My numbers are completely rough here—in the end you need to make the call.
Using JMeter, one easy way to implement a roughly 1-to-10 ratio of logon to other business processes would be to use a Loop Controller (mentioned earlier) like this:
  1. User logs on.
  2. JMeter Loop Controller repeats five times:
     a. Business process 1 (often requires a handful of HTTP requests).
     b. Business process 2 (often requires a handful of HTTP requests).
  3. Logoff.
Once your script has a somewhat realistic number of SUT logons as compared to other business processes, it is time to start applying load and using the P.A.t.h. Checklist to find defects.
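As a sanity check on the arithmetic, the Loop Controller sequence above can be sketched in Java with the HTTP requests stubbed out as counters. This is illustrative only, not JMeter code: one logon, five loop iterations of two business processes each, one logoff, for a 1-to-10 ratio.

```java
public class LogonRatioScript {
    // Returns {logons, businessProcesses, logoffs} for one scripted user session.
    static int[] runOneUserSession() {
        int logons = 0, businessProcesses = 0, logoffs = 0;
        logons++;                          // 1. user logs on
        for (int i = 0; i < 5; i++) {      // 2. Loop Controller repeats five times
            businessProcesses++;           //    a. business process 1
            businessProcesses++;           //    b. business process 2
        }
        logoffs++;                         // 3. logoff
        return new int[] { logons, businessProcesses, logoffs };
    }

    public static void main(String[] args) {
        int[] c = runOneUserSession();
        System.out.println(c[0] + " logon for " + c[1] + " business processes");
    }
}
```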

Using the Same SUT User in Multiple Threads

The suggestions in this section can be applied in either the First or Second Priority load script—your choice. I have included them here because they are also logon issues, akin to the ones discussed earlier.
If your SUT disallows the same user from being logged in from two different browsers at the same time, you will get logon failures when logging on two or more concurrent users in your load test.
To avoid this problem, you can read your user IDs and passwords from a text file, instead of hardcoding a single user ID in the load script. Picture a .csv file with one user ID and password per line; JMeter's CSV Data Set Config is the component for reading from such a file.
Most load generators, including JMeter, have features to keep two threads from using the same user (really just a line of text) at the same time. The default behavior normally wraps to the beginning of the .csv file after the last record is read/used. You could create the .csv file by hand in a text editor, or you could manually export the results of a SQL statement like SELECT MY_USER_NAME FROM MY_USER_TABLE.
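Here is a rough Java sketch of that wrap-around behavior, assuming a hypothetical userId,password line format. JMeter's real CSV Data Set Config has many more options (sharing modes, recycle or stop at end of file, and so on); this only illustrates the core idea of handing each thread the next credential and wrapping at the end.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class UserCredentialFeeder {
    private final List<String[]> rows = new ArrayList<>();
    private final AtomicInteger next = new AtomicInteger();

    // Each line is "userId,password"; a real feeder would read a .csv file.
    UserCredentialFeeder(List<String> csvLines) {
        for (String line : csvLines) {
            rows.add(line.split(","));
        }
    }

    // Hand out the next {userId, password} pair; the atomic counter keeps
    // two load threads from grabbing the same line at the same time, and
    // the modulo wraps back to the first line after the last is used.
    String[] nextUser() {
        int i = Math.floorMod(next.getAndIncrement(), rows.size());
        return rows.get(i);
    }
}
```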
There are other motivations to use a more realistic number of users. Let’s talk straight about this. Memory leaks and multi-threaded defects keep performance engineers very well employed, and generally, session management code is imbued with plenty of each type of defect. Load testing with one solitary SUT user ID, as with a freshly recorded and mostly unmodified load script, will not sufficiently stress your SUT’s session management.1
Let’s say your scripts are still logging on the same user repeatedly. If the SUT cached this user’s permissions, your hit/miss ratio would be unusually large (one miss at app server startup followed by hit hit hit hit…), because that same user would always remain in cache while a load test was running. This means your SUT would always be taking the fast code path, loading the user’s permissions from fast cache instead of the database (slower), and thereby underestimating the actual amount of processing.
When testing with just a single user, you also miss out on understanding how much memory is required to keep a large number of users in session. So, be sure to take a stab at how many users will be logged on at any one time, and include that many users in your load testing.
Lastly, while we are on the subject of users and session memory, don’t forget to validate that your system’s auto logout functionality works. You may want to make a one-off version of your load test and comment out all logoff activity. Run the test, wait 30 minutes or so (or however long it takes your SUT to auto-logout users) and verify that the session count drops down to zero.

Second Priority

To fully and accurately model business processes and workloads, you need a full-featured load generator like JMeter. However, as I said earlier, most performance defects can be found using the small subset of functionality I’ve detailed in the “First Priority” section. Of course there are critical bugs to be discovered with the “Second Priority” enhancements, but there tend to be fewer of them. Those defects also tend to be part of a single business process, instead of the SUT architecture as a whole.
The following load script stages show a typical progression of enhancements that are made to scripts.

Load Script Stage 1

When you are at Load Script Stage 1, you have recently recorded a load script of yourself using a web browser to traverse the latest/greatest version of the SUT. Instead of recording, of course, perhaps you are working with a manually collected set of requests to an SOA. You have added to the scripts some checks that validate that all HTTP/S responses are error free and contain a few key bits of response text. A few correlation variables have been added to the script that will ferry a single data item (like a generated new customer ID or a shipping confirmation number) from the output of one HTTP response to some needy web page in a subsequent part of the script's business workflow. Aside from that, few other modifications have been made.

Details

Load generation is a record-and-playback technology, and it is a little bit fragile, like most code-generated things. So keep a backup of the full, detailed HTTP log of the most recent recording; perhaps keep the most recent recorded log and load script in source code control. The log must include all the HTTP request URLs, any POST data, the HTTP response codes, and for sure the HTTP response text. When subsequent “refinements” to the scripts break something, you can return to your pristine, pseudo-canonical original log of the entire conversation to figure out what went wrong.
In fact, with the right load generator, a simple but very careful text file diff/comparison between the canonical and the broken HTTP log will guide you directly to your functional script problems. There is an example of how to do this in JMeter in Chapter 7. Look for the section “Debugging an HTTP Recording.”

Validating HTTP Responses

Validating that HTTP responses contain the “right” response data is critical. Cumulatively, I have wasted weeks and weeks of time analyzing and trusting tests whose responses were, unbeknownst to me, riddled with errors. So I implore you to please take some time to enhance your scripts to tally an error if the right HTTP response is absent. In addition, you should also check to see if the wrong response (exceptions, error messages, and so on) is present.
Chapter 7 on JMeter shows a feature that does this, called an Assertion. There are all kinds of JMeter reports and graphs that show error counts. When your JMeter Assertion flags an issue, those JMeter reports reflect those errors. Without this essential visibility, you, too, could waste weeks of time as I have.
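A response assertion boils down to a predicate like the one sketched below: tally an error when the expected text is missing or when known error markers are present. The marker strings here are assumptions; choose ones that match your SUT's actual error output.

```java
public class ResponseChecker {
    // Returns true only when the response contains the expected text
    // AND contains none of the known error markers.
    static boolean isValid(String body, String mustContain) {
        if (body == null || !body.contains(mustContain)) {
            return false;               // the "right" response is absent
        }
        String lower = body.toLowerCase();
        // Hypothetical error markers; tailor these to your SUT.
        return !lower.contains("exception") && !lower.contains("error");
    }
}
```

Wire a check like this into every sampler, so the error shows up in your throughput and error-count reports instead of silently inflating your results.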

Load Script Stage 2

Load scripts log many different users on in a single test, instead of the same user over and over. Hard-coded hostnames or IP addresses and TCP port numbers are replaced with variables, whose value you will change when using the scripts in a different environment. The script executes the business processes in the rough proportions seen in production.

Details

This stage deals with workload proportions. To get a more production-like workload, you need to enhance your load scripts to execute the various business processes in more realistic proportions. At a very high level, for many purposes, the breakdown of business processes is 70% inquiry, 30% update/insert/delete. All too often, our proportions are unrealistic, because we don't take the time to collect quality data from production to learn how often each type of BP is executed.
But before jumping head-first into finding which services are used the most/least in production (and tweaking your load scripts to apply load in those proportions), I recommend first focusing on a more imperfect approach, where you apply three or more threads of load (with zero think time) to all services. This is the 3t0tt mentioned in the introduction. Of course, we do this to shake the multithreaded bugs out of that old dormitory room couch. Getting more persnickety and perfecting realistic load proportions buys you two different things. One, you avoid wasting time troubleshooting problems with a never-to-be-used workload, and two, you find compatibility/contention problems with the go-to, production workload. Of course these are both important concerns, but in my experience, their incidence is noticeably eclipsed by that of the multithreaded bugs, which can be evicted from that lovely couch even with imprecise load proportions. Aim for the right workload proportions and load script think times, but start out with at least three threads of load with zero think time.
With JMeter, you implement your workload proportions/percentages by assigning different numbers of threads to your various scripts. For instance, if you wanted to model the 70%/30% split detailed earlier, you would start by recording two separate scripts, one for account inquiry and the other for account update. With a single simple script like this, all the HTTP requests are stored under a single JMeter Thread Group—the place where you specify the number of threads of load to apply, and the duration of the test.
So to implement the 70/30 workload proportions, you might start with a blank script with two blank Thread Groups that are siblings in the JMeter tree configuration. You might assign seven threads to the first Thread Group, and three to the other. Then you would copy/paste all the HTTP requests from the account inquiry into the Thread Group with the seven threads, and the account update HTTP requests into the other Thread Group.
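The arithmetic for sizing the two Thread Groups is trivial but worth pinning down. This sketch turns a percentage split and a total thread budget into per-group thread counts; the rounding choice is my own assumption.

```java
public class ThreadGroupSizer {
    // Given a total thread budget and the inquiry percentage (0.0-1.0),
    // return {inquiryThreads, updateThreads}. Rounds the inquiry share
    // and gives the remainder to the update Thread Group.
    static int[] split(int totalThreads, double inquiryPct) {
        int inquiry = (int) Math.round(totalThreads * inquiryPct);
        return new int[] { inquiry, totalThreads - inquiry };
    }
}
```

With a budget of 10 threads and a 70% inquiry share, this yields the 7/3 split described above; scale the budget up (70/30, 140/60, …) to increase load while keeping the proportions.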
There are also blog posts that detail alternative JMeter approaches to getting workload proportions right.

Load Script Stage 3

Instead of inquiring upon or modifying the exact customer or accounts used to record the script, the load scripts are enhanced to read account/customer/other identifiers from a data file and then input those to the SUT.

Details

In this stage, the load scripts move beyond reading user IDs from .csv files and on to reading other important data from .csv files, like customer and account data. But come to think of it, why bother? We are most certainly getting ahead of ourselves. Beefy-sized data must actually exist in these customer, account, and other tables before we spend the time to enhance our load scripts to apply traffic with .csv or data-driven samples.
Yes, this chapter is about load scripts, but I cannot keep myself from reminding everyone that my dog has the skills to design a great-performing database, if the row counts are low enough. Just about any query you can dream up will perform very well with fewer than about ten thousand records. So, “performance complacency” is high when row counts are low.
To get around this complacency, it seems like a nice idea to use the load generator scripts to drive the SUT to grow the table counts of your big tables. But unfortunately, the SUT, at this point in the tuning process, is generally so slow that you will start collecting retirement checks before the SUT adds enough data, even if left to run for many hours.
If this is the case, consider using JMeter as a quick RDBMS data-populator. Create a separate script that uses the JMeter JDBC Sampler (Figure 4-1).
Figure 4-1.
JMeter can execute SQL using your JDBC driver. This file was used to populate more than 2 million rows in well under 10 minutes on my 2012 MacBook. This file, loadDb-01.jmx, is available in the src/test/jmeter folder in the jpt examples.
It should fire multiple threads that execute massive numbers of INSERTs (carefully ordered to align with foreign key dependencies), with VALUES taken either from random variables or from .csv text files.
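As a sketch of that populator idea, the Java below fires multiple threads that generate INSERT statements with sequential keys. The table and column names are made up, and the statements are only counted here; a real version would execute each one through a JDBC connection, populating parent tables before child tables.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

public class DataPopulator {
    // Builds (but does not execute) one INSERT; names are hypothetical.
    static String buildInsert(long custId) {
        return "INSERT INTO CUSTOMER (CUST_ID, NAME) VALUES ("
                + custId + ", 'cust-" + custId + "')";
    }

    // Fires the given number of threads, each generating rowsPerThread
    // INSERT statements; returns the total number generated.
    static long populate(int threads, int rowsPerThread) {
        AtomicLong rows = new AtomicLong();
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (int t = 0; t < threads; t++) {
            pool.submit(() -> {
                for (int i = 0; i < rowsPerThread; i++) {
                    // A real populator would push this through JDBC here.
                    buildInsert(rows.incrementAndGet());
                }
            });
        }
        pool.shutdown();
        try {
            pool.awaitTermination(1, TimeUnit.MINUTES);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return rows.get();
    }
}
```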
And that concludes my little detour discussion of how to quickly get production-sized table counts. Let’s get back to the original discussion, where the initial recorded load script inquires upon or modifies data from one or a few particular customers or accounts. Whatever data you used in the script recording is hard-coded in the load script file. Inquire on customer 101. Update the balance of account number 393900. The “101” and “393900” are stored in the script file.
Now that your table counts are sufficiently beefed up, you can enhance your load scripts to, for example, do account inquiries based on account numbers found in a .csv file. So instead of having 101 and 393900 in the load script file, those values are instead stored in the .csv file.
A great place to start is with a text file containing tens of thousands of different account numbers, one account number per line in the text file. This is the most common format of input file for load generators. But do not stop there with just a single column in your .csv file; there is some great low-hanging fruit here. If you add a second column in your .csv file with corresponding account balances, you can easily enhance the load generator script to validate that the SUT is returning the right balance for each inquiry.

Load Script Stage 4

As more tuning happens, the SUT becomes a formidable data-producing beast, and repeated load tests INSERT massive amounts of data into SUT tables. Don’t forget to keep an eye on growing row counts, for reasons you might not expect:
  • Yes, unrealistically large table counts are bad, but undersized table counts are bad, too. If your load test, which lasted 300 seconds, created 10 orders per second, there should be some concrete proof somewhere in that database. Shouldn't there be roughly 3,000 (300 × 10) somethings INSERTed, something you can easily verify with SELECT COUNT(*) queries? As I mentioned earlier, invite skepticism in your load scripts, and capture table counts before and after your tests to prove that real work has been completed.
  • Make sure table counts do not grow unreasonably large, causing performance to degrade unnecessarily. Spend the time to work with business analysts (the ones that have the most experience with your customer’s data as well as regulatory requirements) and agree on retention requirements and how they impact table counts. If your live, primary database must keep ten years of transaction history, calculate how many records that translates into.
  • Have a plan for keeping your tables trimmed to the right sizes. When tables get unreasonably large, we have generally restored a backup to a known state/size, but that takes management and discipline: time to acquire disk space for the backup file(s), and time to actually create the backup and keep it up-to-date. Ad-hoc DELETEs are simpler, but they can be very slow, because rarely is the time spent to get indexes in place for ad-hoc queries; plus, anything ad-hoc just sounds bad—it seems like you are unnecessarily introducing variance and risk. Yes, TRUNCATE is faster, but we are here to validate performance with realistic table counts. Zero is not realistic.
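The before/after row-count check from the first bullet can be captured in a few lines of Java. The counts would come from SELECT COUNT(*) queries run before and after the test; the 10% slack factor is my own assumption, to allow for ramp-up and ramp-down.

```java
public class TableCountCheck {
    // Given row counts captured before and after a load test, verify that
    // roughly testSeconds * ordersPerSecond new rows actually appeared.
    static boolean workReallyHappened(long countBefore, long countAfter,
                                      long testSeconds, long ordersPerSecond) {
        long expected = testSeconds * ordersPerSecond;
        long created = countAfter - countBefore;
        // Allow 10% slack for ramp-up/ramp-down (an assumed tolerance).
        return created >= expected * 0.9;
    }
}
```

A 300-second test at 10 orders per second should leave about 3,000 new rows; far fewer than that means the script was spinning its wheels, not placing orders.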

Invite Skepticism

I'd like to leave you with a small warning: invite skepticism from yourself and your coworkers as to whether the load generated by your scripts resembles production load. Perhaps even block out some time on your team's calendar for a “load script skepticism” meeting. Of course this requires a lot of transparency, communication, and sharing of performance test data. Let me explain why this is important. The propeller on my colorful propeller-head beanie gets all excited and spins gleefully when watching a load script exercise the SUT. Look at all the load generator threads, like happy little robots, navigating the pages of my big web application! How fun! A freshly recorded load script, with dozens of captured URLs/parameters, appears impressive. Watching the script apply load for the first time also appears impressive, with busy log files and throughput and CPU metrics that really seem impressive.
But do not be fooled. Temper your excitement. Check for errors. Make a skeptical, sober assessment of whether all of that “activity” really approximates production-like workload. If your load script cranks out individual little results, perhaps a row in a database table to represent a single completed internet shopping order placed, then enhance your script to validate that final result. Even if free of errors and exceptions, your script might still not be returning the right data, say account balances for an account inquiry.
If your application is already in production, perhaps compare some production metrics (like hit counts per URL) to those captured during a load test.

Project Lifecycle Hints

The obstacles to getting started performance testing can be formidable. So many things take a lot of time: hardware purchase and setup, installation of the SUT, installation of monitoring tools, creation of the JMeter or other load scripts, creation of test users for the SUT, enhancing the data in the database to look more like production (especially the row counts), and so on. Just typing that list wore me out, but bear with me because I can show you some nice opportunities that most people miss.
Unfortunately, most of the previous prep work is required for a thorough performance vetting before you usher your SUT into production. But please understand that you must not wait for the perfect, unicorn-like performance environment to get started tuning. With the traffic of just a single lucky user (you), you can actually get started right now finding beefy and meaningful performance defects, ones that degrade performance in production. Here is one of those opportunities I mentioned: start using the P. and A. items on the P.A.t.h. Checklist (P=Persistence, A=Alien systems) in any “no-load” environment you can find, where just a few people are leisurely traversing the SUT. So if you are ready to make things faster without having to invest time creating load scripts, jump right now to Chapters 9 and 10.
This is the sole reason the P and A are in uppercase in P.A.t.h., and the t and h are in lowercase. The lowercase ones are for load testing only—specifically, they are of no real use in a no-load environment. Under load, they can help tune synchronization, find the exact location of heavy CPU consumers, identify inefficient garbage collection, and the list goes on. The uppercase ones are for both load and no-load environments. Typographically, the mixed case is not attractive, but perhaps it will be a subtle reminder to get started now, finding performance defects with the P. and the A.
The overly simplified schedule in Table 4-2 shows how two developers could both be productive at the same time, working on performance. Java Developer 1 gets started tuning right away, while Java Developer 2 creates the load scripts.
Table 4-2.
Rough Sketch of a 9-Day Schedule for the Start of a Performance Tuning Project
Days 1-3
  Java Developer 1: Explore performance issues with “no-load” using the P. and A. parts of the P.A.t.h. checklist.
  Java Developer 2: Load Script / V1: record, parameterize, and test it.
Days 4-6
  Java Developer 1: Load Script / V2: record, parameterize, and test it.
  Java Developer 2: Load Script / V1: apply load with it; locate defects using the P.A.t.h. checklist.
Days 7-9
  Java Developer 1: Load Script / V1: apply load with it; locate defects using the P.A.t.h. checklist.
  Java Developer 2: Fix and deploy performance defects.

Don’t Forget

The rationale for deciding which script enhancements get first priority and which get second is pretty straightforward. Because there is neither time nor money to performance test and tune all business processes, performance problems will happen in production, even if you do everything “right.” As a fall-back position, you should therefore first aim to tune the overall architecture, so that the dozen or so business processes that do get a thorough performance vetting become blueprints of how to write a component that both functions and performs.
Many ask whether tuning during development is even worth it, considering that the code base will have to be performance tested again after more code changes. I agree this is a small problem. A much more important question to ask, though, is whether our current architecture will crash and burn under the projected load in production. That's the often-forgotten question I want answered, and the “First Priority” script in this chapter is exactly the tool you need to answer it.
Furthermore, fixing one architecture-wide defect affects most business processes, which helps to maximize the performance impact of your fixes. The other side of the coin is, of course, that some script enhancements that are second priority might be necessary to stabilize the performance of functionally critical business processes. You will have to use your own judgment here. Perhaps your most critical business processes deserve both first and second priority enhancements at the outset.

What’s Next

As this chapter demonstrates, load tests take some time to prepare and run, so I'm always hesitant to scrap the results of a test. But sometimes the results are so wrong that there is no other option. The next chapter is your detailed guide on when to acquiesce and scrap the results of a test.