Chapter 13. How to Analyze Test Runs

As I've said before, a well-executed and seemingly successful test run followed by anything but sound statistical analysis is just a waste of time. In Chapter 11, we worked through the preparation steps necessary to ensure a successful series of test runs. In Chapter 12, we finally kicked off a stress test and worked through the details associated with successfully monitoring a test run. Now, with output logs and other data in hand, it's time for the real fun—to bring everything together. Initially, this means analyzing the response times, throughput, and other performance metrics associated with each test run. It also means creating a custom set of spreadsheets, tables, charts, and graphs that explain these results in terms that clearly show whether the test's success criteria were met. Further, because the results of a test run may be shared with a variety of stakeholders or other interested parties, it's important to customize the output to a particular audience—some will prefer aggregate data, for instance, whereas others will be more interested in the details surrounding a particular set of test runs. I cover all of this and more in this chapter.

A test run might be considered a complete or final product in its own right, or it might play a smaller role in what amounts to iterative cycles of testing and tuning. In both cases, though, I believe it's imperative to analyze the data coming out of a test run (or set of runs, as in the case of a stress test that seeks to measure the performance inherent in different user or workload distributions) as soon as practical after the run is completed. That is why I've inserted this chapter before the detailed discussion of iterative testing and retesting/tuning loops presented in Chapter 14. In reality, though, the line where testing stops, data analysis begins, and testing starts again can be rather blurry. For instance, you'll likely find yourself performing a subset of the data analysis tasks described herein after each test run. But by the same token, if you're like most people, you'll save creating the “big report” for last, delivering it to your management team and end-user communities well after the last set of test cases has been executed. You may even embrace a phased approach to testing, data analysis, and tuning, alternating every few months between sharing detailed results with technical teams and high-level results with upper management. This approach makes the most sense when stress testing and performance tuning play a role in your change management processes, or when you're engaged in a long implementation or upgrade and seek to quantify and refine performance deltas every few weeks or months as you near Go-Live, simply because your Go-Live platform, or your understanding of the workload it will host, continues to evolve during this period.

Regardless, once you have executed a test run and worked through collecting your various technology-stack proof points, all of these data must go through a cleansing process of sorts, culminating in the cleansed data being dumped into an analysis tool. My analysis tool of choice is Microsoft Excel—it's easy to use, accepts data in many different formats, is widely available, and, when the need arises, can be used to feed more refined presentation tools like PowerPoint. I call this simple process “collect, cleanse, analyze, and present,” and it forms the foundation of this chapter. In the bigger picture, this analysis method is but a piece of the “stop, analyze, and resume” process discussed in detail in Chapter 14. So with our focus firmly planted on the “analyze” portion of the big picture, let's take a look at the kind of data available at each SAP Technology Stack layer prior to pulling all of this information together in the name of “data collection” for an SAP stress test.
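To make the “collect, cleanse, analyze” steps a bit more concrete, here is a minimal sketch in Python; the book's own tool of choice is Excel, so treat this only as an illustrative pre-processing pass, and note that the file name, column layout, and the 2-second success criterion shown here are assumptions, not values taken from the text. The sketch strips malformed rows from a raw response-time log, computes a few summary statistics, and writes a small cleansed summary file that Excel can open directly.

```python
# Minimal sketch: cleanse a raw response-time log and summarize it before
# handing the results to Excel. The file name and columns (e.g., "response_ms")
# and the 2000 ms success threshold are hypothetical.
import csv
import statistics

def summarize(path="response_times.csv", threshold_ms=2000.0):
    times = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            try:
                ms = float(row["response_ms"])
            except (KeyError, ValueError):
                continue  # cleanse: skip malformed or incomplete rows
            if ms >= 0:
                times.append(ms)

    if not times:
        raise ValueError("no usable samples after cleansing")

    times.sort()
    p95 = times[int(0.95 * (len(times) - 1))]
    summary = {
        "samples": len(times),
        "avg_ms": round(statistics.fmean(times), 1),
        "p95_ms": p95,
        "max_ms": times[-1],
        # assumed success criterion: 95% of steps complete under the threshold
        "meets_criteria": p95 <= threshold_ms,
    }

    # Write a cleansed summary that can be pulled straight into Excel.
    with open("summary.csv", "w", newline="") as out:
        csv.writer(out).writerows(summary.items())
    return summary

if __name__ == "__main__":
    print(summarize())
```

The point of keeping the cleansing step in code rather than by hand is repeatability: every test run in an iterative testing-and-tuning cycle gets filtered and summarized the same way before it reaches your spreadsheets and charts.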
