10.1. Automation Fundamentals and the Big Picture

Automating a system load—the load emanating from your carefully constructed test mix, as discussed in Chapter 9—is all about doing more with less, and doing so in a repeatable, consistent manner. Automated testing approaches should allow you to:

  • Leverage virtual test suites to make it possible for a couple of “human resources” to drive a stress test of perhaps thousands of users

  • Leverage automated test tools to create, manage, and execute complex business processes with zero execution errors

  • Leverage various utilities or approaches to find a few core data combinations that can subsequently yield hundreds of additional valid data combinations (a short sketch of this expansion technique follows this list)

  • Leverage a small number of multiprocessor servers to host what appears from an SAP perspective to be hundreds or thousands of end-user desktops

  • Drive test runs from the same SAP interface utilized by your end users, making it possible to reuse and customize stress-testing scripts for other purposes, like functional and regression testing
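
As an aside, the third bullet above is easily made concrete. The following is a minimal sketch in Python, assuming a hypothetical sales-order input file; the customer, material, and plant values are purely illustrative. It shows how a handful of hand-verified core values can be expanded into hundreds of candidate combinations:

    import itertools

    # A few "core" values verified by hand in the test client; the names
    # and numbers here are illustrative only.
    customers  = ["0000100001", "0000100002", "0000100003"]
    materials  = ["MAT-001", "MAT-002", "MAT-003", "MAT-004"]
    plants     = ["1000", "1200", "2000"]
    quantities = [1, 10, 100]

    # The cartesian product of 3 x 4 x 3 x 3 values yields 108 candidate
    # combinations from only 13 hand-verified inputs.
    with open("order_input.csv", "w") as f:
        f.write("customer,material,plant,quantity\n")
        for cust, mat, plant, qty in itertools.product(
                customers, materials, plants, quantities):
            # In practice, screen out combinations your configuration
            # disallows (e.g., a material not extended to a given plant).
            f.write(f"{cust},{mat},{plant},{qty}\n")

Not every generated row will survive contact with your configuration, of course, which is why a quick validation pass (or a first scripted dry run) against the test client is still necessary.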

SAP load testing implies much more today than it ever has in the past, though. No longer are interfaces to other systems handled exclusively through Application Link Enabling (ALE) or Electronic Data Interchange (EDI) technologies; instead, more and more systems are linked via SAP XI or similarly robust, Internet-enabled approaches like TIBCO, BizTalk, and other message buses, proprietary middleware products, Internet services, and popular protocols like HTTP, XML, and SOAP. SAP NetWeaver, by virtue of XI, is especially compelling: by embracing both Microsoft's .NET and Java connectivity/interoperability across the board, it is positioned to quickly become the de facto standard for extending and integrating diverse systems into tightly linked cross-application solutions.

Thus, the final reason to automate a system load is to ensure that these tightly linked, typically mission-critical systems perform well together. Luckily for us, there's no need to be an expert in low-level protocols and initiatives, nor necessarily in the systems and products that these protocols bring together. Instead, once we identify the right mix of business processes to be tested, partner with the “owners” of these processes, and then focus on populating each business process with valid and abundant data, we should find ourselves in a position to kick back and monitor the progress of our TU runs as our business processes execute to completion and everything generally falls into place.

But not so fast. Automation takes time, requires testing, and is anything but free. It may also require a certain amount of up-front coordination and other work simply to tie together individual processes that don't necessarily execute concurrently or sequentially (concurrent and sequential execution being excellent methods for generating heavy loads and long-running loads, respectively).

The fact is, at the front end of a performance-tuning/stress-testing engagement, I spend much of my time with both the business and technical teams at my various client sites identifying functional process flows and work flows, often followed by another chunk of time focused on finding enough valid data to support the stress-test project. Next, after installing the best scripting or other tool for the job on my laptop or customer-provided equipment, even more time is spent creating scripts or TUs that work functionally; like programming, script development is subject to development iterations, including fixing coding bugs, tacking on standard subroutines for error handling and reporting, and so on. Nor is creating scripts as simple as it sounds. Even when I understand the business task to be scripted (e.g., the steps involved in creating a sales order), I typically run into issues with valid data combinations, data fields that are required in some cases but not others, data that generate unusual errors, warnings, or unexpected screens, and more.
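
To make the "standard subroutines" point a bit more tangible, here is a tool-agnostic sketch in Python of the kind of error-handling and reporting wrapper that gets tacked onto every scripted step. In a real engagement this logic would live in your test tool's own scripting language, and step functions like the commented-out enter_order_header below are purely hypothetical:

    import logging
    import time

    logging.basicConfig(filename="testrun.log", level=logging.INFO,
                        format="%(asctime)s %(levelname)s %(message)s")

    def run_step(step_name, step_fn, retries=2, pause=5):
        """Execute one scripted step, logging failures and retrying, so a
        single bad input record doesn't kill the whole virtual user."""
        for attempt in range(1, retries + 2):
            try:
                step_fn()
                logging.info("%s succeeded (attempt %d)", step_name, attempt)
                return True
            except Exception as exc:
                logging.error("%s failed (attempt %d): %s",
                              step_name, attempt, exc)
                time.sleep(pause)
        return False  # report the failure and move on to the next record

    # Hypothetical usage; each step function would drive one SAPGUI screen:
    # run_step("VA01 header", lambda: enter_order_header(record))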

Even the process of recording a business process is fraught with errors: scripting tools may fail to capture certain conditions, field values, or the fact that I double-clicked the mouse at a specific cursor position during a business process script "recording" session. In addition, the initial script created from a recording session is far from ready to be useful for stress testing. For example, hard-coded data-entry points (e.g., distribution channels, company codes, or customer numbers) usually need to be converted to variables, which in turn must be reflected throughout the scripts. And any variables created must typically be defined in terms of whether the data is numeric or text, the length of the variable, whether it should be maintained as a private or public variable, and more. Bottom line, recording only gets you halfway there: capturing each SAP transaction, keyboard entry, mouse click, and so on executed via the SAPGUI (or WebGUI, JavaGUI, etc.) only serves to create a glorified text file. A true business process script is formed only once that basic text file is edited to reflect variable input, to run in virtual-user mode, to open input files and capture output data, and so on.
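
The following sketch illustrates the before-and-after of this parameterization step. It is deliberately generic Python rather than any vendor's script language, and the set_field calls and SAP screen-field names in the comments are illustrative stand-ins, not an actual tool API:

    import csv

    # A raw recording freezes literals into the script, something like:
    #   set_field("KUAGV-KUNNR", "0000100001")   # customer, hard-coded
    #   set_field("RV45A-KWMENG", "10")          # quantity, hard-coded
    #
    # Parameterization replaces those literals with typed variables fed
    # from an input file, one row per virtual-user iteration.
    with open("order_input.csv", newline="") as f:
        for row in csv.DictReader(f):
            customer = row["customer"].zfill(10)   # text, fixed length 10
            quantity = int(row["quantity"])        # numeric, range-checked
            # set_field("KUAGV-KUNNR", customer)
            # set_field("RV45A-KWMENG", str(quantity))

Note how each variable acquires a type and length as part of the conversion, exactly the kind of definition work described above.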

Don't worry too much about this scripting process just now, though—we'll go into the electrifying details of business process scripting later in this chapter. Instead, let's first revisit the three key methods of generating a system load, and why custom application-layer scripting tends to provide the most value at the end of the day.

10.1.1. Level One Testing

Level One testing, also called component-level or system-level stress testing as discussed in previous chapters, is the most fundamental of testing approaches. Level One focuses on tuning discrete technology subsystems, testing the impact of a discrete process, or conducting other typically “single-unit” testing. Exceptions exist, of course, but by and large this type of testing does not generate the load typically supported by a system servicing thousands of end users or hard-hitting batch processes, and thus is mainly of value from a “pretuning” perspective. Remember, Level One testing is inherently accomplished through the use of a single “user,” be it at the SAP front-end client level, the SAP application layer, or nearly any of the technology layers underneath these top-level layers.

Many disk subsystem test tools represent a key exception to this single-unit rule of thumb, in that multiple processes or threads may be leveraged to simulate realistic multiuser workloads. The same can be said of any test tool that can spawn multiple processes, threads, or similar multiuser constructs. Even in these cases, though, great pains must be taken to assemble a workload that resembles the load generated by a diverse user community or a complex batch job scheduler. And, in the end, even in the best of situations you could wind up with a tool that creates a wonderfully representative disk I/O load but does nothing to address the network, CPU, and other hardware or database loads naturally driven by higher-level application-layer tools.
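
For illustration, a minimal multiuser disk exerciser might look like the following Python sketch, in which a pool of worker threads stands in for concurrent users. File names, sizes, and thread counts are arbitrary, and a real community's I/O mix would be far less uniform, which is exactly the limitation just described:

    import concurrent.futures
    import os
    import random

    def io_worker(worker_id, iterations=100, block=64 * 1024):
        """Simulate one 'user': sequential writes followed by random reads."""
        path = f"loadtest_{worker_id}.dat"
        with open(path, "wb") as f:
            for _ in range(iterations):
                f.write(os.urandom(block))            # write load
        size = os.path.getsize(path)
        with open(path, "rb") as f:
            for _ in range(iterations):
                f.seek(random.randrange(0, size - block))
                f.read(block)                         # random-read load
        os.remove(path)

    # Thirty-two concurrent workers stand in for 32 users.
    with concurrent.futures.ThreadPoolExecutor(max_workers=32) as pool:
        pool.map(io_worker, range(32))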

10.1.2. Level Two—SAP Standard Benchmarks

Driving an application load can be accomplished in a number of ways, including through the use of a standard SAP benchmark kit (made available primarily to SAP hardware partners). Creating such a load for different mySAP components can be quite demanding, though, from three perspectives. First, there is the learning curve that must be conquered before you can execute an SAP benchmark, by no means a trivial task. Next, only specific versions of select SAP components are covered; the particular version of BW you have deployed may simply not be available, for example. Finally, the team responsible for BW testing may find itself short on core testing expertise germane to BW, a shortcoming that a benchmark kit simply does not address. Suddenly, the very reason you wished to leverage a standard approach to rapid benchmarking has deteriorated considerably. And that's just the beginning.

Thus, be careful not to misunderstand the role of the benchmark kits published by SAP. I'm a big fan of them myself, but they take time to learn and master. More to the point, they were never intended as tool sets or scripts waiting to be refined and deployed in the name of customer-specific benchmarking. Not that this isn't possible, but the real value of a standard benchmark derives from the fact that it is standard. Take away the apples-to-apples comparison value and you're left with only the shell of a basic testing approach. So refrain from going down this road unless you seek only platform deltas. And, by all means, avoid fundamentally changing or modifying the contents of a benchmark script or its data; otherwise, you lose the ability to compare your results with real-world published benchmark results! Finally, if you run into benchmark execution problems (as so many of us have in the past, myself included), odds are that SAP has already seen your particular issue and has tweaked the customized configuration delivered with the benchmark kit via support packages or an updated set of scripts. I suggest a quick SAPNet search, opening a customer support message, or a phone call to your SAP Competency Center to help you resolve your issue quickly. Or, on the flip side, if you enjoy tweaking AutoIT scripts and generating new Perl code, getting down and dirty in the code may be just the thing for you; you wouldn't be the first (I certainly wasn't). Besides, there's a certain amount of satisfaction in sharing benchmark kit script bug fixes with our colleagues in Walldorf, assuming you're working against a leisurely test schedule. But at the end of the day, take care not to change the nature of the scripts or data themselves!

10.1.3. Level Three Custom Application-Layer Testing

The bulk of this chapter addresses Level Three testing, which encompasses customized, customer-specific load testing driven by high-end SAP-aware scripting test tools from companies like AutoTester, Compuware, and Mercury Interactive. Also called Proof-of-Concept Exercises or Customer-Specific Benchmarking, as discussed in Chapter 4, this type of load testing is the most difficult to conduct and, unsurprisingly, the most valuable. Think about it: driving the application layer via your actual business processes against a copy of your actual SAP database or databases is ideal for characterizing the performance impact that a specific load has on your unique system. And, though it's not always possible, doing so in an environment that closely mimics what you run in Production makes the whole process that much more relevant; for good measure, we'll also cover a number of approaches short of creating a full copy of the production environment.

10.1.4. Other Real-World Approaches to Load Testing

In the overall scope of stress testing, if you do not automate your test load through software-based means, you must nonetheless find at least a marginally consistent way to drive your workload, even if it's by way of trained monkeys pressing the Enter key on cue every 30 seconds in a test lab filled with 1,000 desktops you personally loaded and configured using a stack of CD-ROMs and floppy disks. There are many ways to do this, some of which are discussed later. I am not a fan of any of these approaches, but this chapter would not be complete without painting a worst-case big picture; if nothing else, it may help you sell the value that an SAP API-aware or other dedicated load-testing tool provides instead.

The first of these approaches, the “monkey method” alluded to previously, is not actually that far-fetched. Typically, it entails bringing in the system's end-user community, though, rather than real monkeys. The challenge (beyond getting these resources to show up somewhere on a weekend or after hours, given that they have real work that needs to get done during the week) is to encourage these users to execute at your command the various business scenarios and processes deemed important enough to warrant a prechange or new-implementation stress test. Problems are plentiful with this approach, as noted in the following:

  • End users are expensive: they already have a 40-hour work week to look after, and any incremental time carries real costs (e.g., time-and-a-half pay for anything beyond 40 hours).

  • End users are just people, subject to getting bored, making mistakes, and taking long lunches and untimely smoke breaks. All of this manifests itself in a multitude of ways, the most important of which involves poor consistency in test execution and therefore low value from a run-comparison perspective.

  • From a logistics perspective, assembling the infrastructure necessary to execute a stress test from a central facility is rarely practical. The alternative, keeping everyone where they are (which makes sense at many different levels), has its own set of problems, though, especially surrounding test execution, coordination, and communication in general.

  • End-user-driven tests take longer to execute than their software-automated counterparts.

Of course, to rectify some of these issues, a customer might choose to leverage detailed checklists and other process documents, along with long-running conference calls attended by functional leaders for each area or department. But overall, this is far from a best-in-class approach.

Another marginal approach to load testing involves using tools that are not SAP API–aware. For example, my colleagues and I once executed a stress test that consisted of six core R/3 transactions executed by 300 physical desktops, driven by a basic Windows GUI driver (Visual Test 4.0, if you're curious). Besides the pure cost associated with acquiring such a large number of desktops, a huge amount of work was required, as follows:

  • Each desktop had to comply with strict GUI standards, so that nothing “external” to the SAPGUI got in the way of successfully executing the scripts. Thus, standard monitors and screen resolution, font size, consistent naming conventions, and so on all became critical success factors.

  • The scripting language itself was subject to flaws in execution: for no apparent reason, desktops would lock up, script windows would "lose focus" and stop executing, and so on. Actually, it's not fair to point the finger solely at Visual Test, because NT 4.0 was probably to blame for at least some of these issues.

  • Managing the start of a test was a big feat in itself, as best practices required a reboot (to clear cache, re-establish SAP client connections, and generally re-enable failed/locked desktops), followed by remotely executing the appropriate business transaction on the appropriate desktop.

  • Stopping a particular test run was also difficult simply from a controller perspective (for those interested, the controller was a home-grown NetBIOS Extended User Interface, or NetBEUI, application created by some very bright folks at Compaq Computer Corp in the mid 1990s). And when test runs went awry, we could count on an hour of dead time before the next run would be positioned to execute again.

  • Collecting data was a bit of a logistics nightmare; in the end, I had to create a batch file that established a connection to the local drive of each desktop, copied the contents of any output files to a shared drive, renamed each output file to reflect the originating desktop's name and test run, and then appended all data into a single file to be manually analyzed later via Microsoft Excel (this consolidation step is sketched below).
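
For the curious, the consolidation logic from that last bullet looked roughly like the following, re-sketched here in Python rather than the original batch file; the share names, desktop names, and file paths are invented for illustration:

    import os
    import shutil

    desktops = [f"DESKTOP{n:03d}" for n in range(1, 301)]   # 300 test PCs
    run_id   = "run042"                                     # hypothetical
    staging  = r"\\fileserver\stress\collected"             # shared drive

    os.makedirs(staging, exist_ok=True)
    with open(os.path.join(staging, f"{run_id}_all.csv"), "w") as merged:
        for pc in desktops:
            # Each desktop wrote its output locally (e.g., C:\test\out.csv),
            # reachable here via an administrative share.
            src = rf"\\{pc}\c$\test\out.csv"
            dst = os.path.join(staging, f"{pc}_{run_id}.csv")
            try:
                shutil.copyfile(src, dst)       # copy and rename per desktop
            except OSError:
                continue                        # desktop down or file locked
            with open(dst) as f:
                for line in f:
                    merged.write(f"{pc},{line}")  # tag each row with origin

The single merged file then went straight into Excel for analysis.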

As mentioned before, a third method of load testing without the benefit of an automated test tool involves using an SAP benchmark kit. Executing SAP load tests by running a standard component-specific SAP benchmark kit might help prove that one technology stack outperforms another but does nothing to prove that a particular business process change performs better or worse than expected. And load tests that are focused on a particular component of an overall SAP solution are useful from a discrete perspective at best.

Finally, a fourth load-testing method that I've used in the past involves installing SAP's Internet (formerly International) Demonstration and Evaluation System (IDES), which is a full-blown SAP system in and of itself. IDES contains all of the configuration, user, and master data required to execute end-to-end business processes, albeit none of it germane to your particular environment; IDES is built around a fictional company created for demonstration purposes by the folks at SAP AG. Thus, a certain amount of scripting work is still required to gain any benefit from this approach, and even then the work isn't specific to your company. But as a couple of my own customers can attest, it's still a good way to compare computing platforms against one another.
