Chapter 14. Iterations in Testing and Tuning

If you have ever wondered how many iterations it takes to finish performance tuning a complex enterprise solution like R/3 Enterprise or those found in the mySAP Business Suite, you've come to the right place. The answer is simple—many—and you'll likely never find yourself putting the final performance-tuning touches on your system. If you're fortunate and stay heads-down, you might get close, but by the time you peer back up, everything will look different again, much like the way the stars can't help but move across the sky at night.

So instead of trying to achieve performance-tuning perfection, I subscribe to an approach that says you must draw a line in the sand that falls short of achieving 100%. A tuning or performance optimization goal of perhaps 90% to 95% is more realistic, representing the upper end of a range where 1% might imply “out of the box/no explicit tuning” and 100% would represent the “completely tuned and optimized” nirvana that no one ever really attains for more than a few moments. The idea behind the line-in-the-sand approach to testing and tuning is simply to push as rapidly as possible toward the line, aware that you will eventually hit a point of diminishing returns where additional testing and tuning actually costs the organization more than the potential value or benefit derived from it.

But there's good news. In my experience with tuning both productive and benchmark-oriented end-to-end SAP Technology Stacks, my colleagues and I tend to hit what we perceive to be 80% or so fairly quickly—maybe in as little as 1 to 2 weeks—consisting of three to five testing/tuning iterations at each core layer in the technology stack. Eighty percent should not be taken lightly, either; it's relatively easy to reach, and it puts you practically within spitting distance of that 95% goal. And, in the big picture, hitting 80% is cheap! Taking this level of optimization to the next step and hitting 90% could easily require another 2 weeks of effort, depending on a slew of factors: the technology used across the landscape, workload mix and distribution, load balancing, expensive SQL, poor coding in general, and more. And once this 90% level of performance is achieved, bumping the system's performance another 5% might take a significant additional effort on the part of an entire team! Clearly, as we near 95%, the returns diminish quickly while the costs escalate—each major iteration in testing and tuning costs more than the last, and often delivers less incremental value.

It's for this reason that effective tuning is all about iterative stress testing. In turn, iterative testing is about maintaining a can-do attitude and a high degree of dogged determination or persistence as much as anything else. Indeed, this runs directly counter to one of my favorite quotes from W. C. Fields, who said, “If at first you don't succeed, try again. Then quit. There's no use being a !&*# fool about it.” That may be sound advice for obsessive gamblers, but if you quit early in the game, as Fields suggested, you'll be lucky to achieve 80%. If you persist in conducting consistent and iterative test runs, though, you'll make progress, however small, and that 95% target, albeit down the road a ways, will be eminently achievable.

Beyond the technical, business, functional, and attitude challenges inherent to optimizing a complex enterprise solution, another real challenge everyone must deal with is that the concept of “good performance” is a moving target at best. Your end users may tell you that they're not happy with performance one day, but be pleased the next, even though nothing in terms of load or configuration has really changed. Your customers who are intent on getting their hands on their daily batch reports as quickly as possible may be completely satisfied most days, and unhappy with the turnaround time on other days. My point here is that perceptions will drive many a user's current definition of good performance much of the time, and genuine performance issues will drive the rest.

Outside of perceptions, though, the workload itself will morph over time, fundamentally changing how you quantify and characterize “good performance” simply because the baseline against which you're measuring is no longer valid. SLAs will change over time, too, based on what the business needs at the moment to keep the real customers happy—the ones making it possible for all of us to pay our mortgages and keep two cars in the garage. One month, achieving a particular SLA will be business as usual, whereas the next month it may fail to impress. All of these conditions simply underscore the importance of consistent benchmarking, baselining, and testing. After all, what better way to ward off a potential performance issue than to catch it in its infancy? And what better way to fight perception than with the hard performance facts and response-time metrics that load testing can provide?
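To illustrate what those hard facts might look like, here is a minimal sketch (a hypothetical Python illustration, not part of any SAP tool) that compares response-time percentiles from a current load-test run against a previously captured baseline and an assumed 1-second SLA; the sample response times and the SLA value are made up for the example.

```python
# Hypothetical illustration: compare load-test response times against a
# baseline run and an SLA threshold. Sample data and the 1.0-second SLA
# are assumptions for illustration only.

def percentile(samples, pct):
    """Return an approximate pct-th percentile of response times (seconds)."""
    ordered = sorted(samples)
    # Simple nearest-rank style index; adequate for an illustration.
    index = max(0, int(round(pct / 100.0 * len(ordered))) - 1)
    return ordered[index]

# Response times (seconds) captured during two test runs (made-up numbers).
baseline_run = [0.42, 0.55, 0.61, 0.48, 0.52, 0.70, 0.66, 0.58]
current_run  = [0.51, 0.63, 0.74, 0.59, 0.88, 0.67, 0.95, 0.72]

SLA_SECONDS = 1.0  # assumed response-time SLA for this example

for pct in (50, 90, 95):
    base = percentile(baseline_run, pct)
    curr = percentile(current_run, pct)
    drift = (curr - base) / base * 100.0
    status = "OK" if curr <= SLA_SECONDS else "SLA BREACH"
    print(f"P{pct}: baseline {base:.2f}s, current {curr:.2f}s "
          f"({drift:+.1f}% vs. baseline) -> {status}")
```

Run after each testing iteration, a simple report along these lines turns a vague impression of “it feels slow today” into measurable drift against the baseline and the SLA.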
