10.7. Best Practices in Script Writing and Testing

Creating functional scripts that not only work but provide elegant solutions to the challenges surrounding testing, executing, monitoring, and analyzing results is as much an art as it is a science. Like writing in general, no two people will approach scripting in exactly the same manner. Ergo, no two people will wind up with identical scripts—variances in approach, technique, and so on ensure this. However, in my travels I've been lucky enough to work with some folks who are truly gifted in scripting (which is fortunate, given that I'm not one of those people!). These experts seem to hail from a programming background, for the most part, and sincerely enjoy their work—a number of the consultants from AutoTester and Compuware and a number of my own HP colleagues naturally fit into this category. Their scripts make some of my brute-force scripting solutions look sad in comparison. Fortunately, over the course of our various consulting engagements or internal projects together, they have shared with me different approaches and general best practices that make their business process scripts really stand out in one form or another. The best of these are covered in the next few sections.

10.7.1. Capturing Critical Data and Statistics Real-Time

Without sound data collection and statistical processes in place to ensure that the right post-test-run data are collected consistently, the act of scripting would be futile, a waste of time. Because this subject is so important in the overall context of scripting, I approach output statistics from a number of different perspectives, all of which work hand-in-hand to provide a complete performance picture. First, I ensure that the fundamental hardware and OS layers are being automatically monitored at a hardware subsystem layer, using OS-level tools or simple infrastructure management applications like HP Insight Manager or Dell OpenManage. Next, I look to the testing tools themselves to provide valuable data, much of which is consolidated into high-level results files while still making it possible to drill down into the various transactions executed, virtual users started, and even the lines of code used.

But at a more granular and script-specific level, I also build a highly concatenated output string stuffed with valuable data, which can then be written to an output log one line at a time (e.g., after the completion of a “loop” within the body of a scripted transaction). The following shows how such a concatenated output script is written to capture and log output data for a basic R/3 Financial transaction, FB03:

  • Assign RESULTS.SCREEN = “FB03-Display Document “

  • Assign RESULTS.OUTPUT = FI.FB03INPUTDOC

  • Assign FI.OUTPUT = $MACHINE

  • Assign FI.OUTPUT = FI.OUTPUT + “, “

  • Assign FI.OUTPUT = FI.OUTPUT + $TESTCASEID

  • Assign FI.OUTPUT = FI.OUTPUT + “, “

  • Assign FI.OUTPUT = FI.OUTPUT + LOGIN.SAPSERVER

  • Assign FI.OUTPUT = FI.OUTPUT + “, “

  • Assign FI.OUTPUT = FI.OUTPUT + RESULTS.SCREEN

  • Assign FI.OUTPUT = FI.OUTPUT + “, “

  • Assign FI.OUTPUT = FI.OUTPUT + $RUNNAME

  • Assign FI.OUTPUT = FI.OUTPUT + “, “

  • Assign FI.OUTPUT = FI.OUTPUT + $DATE

  • Assign FI.OUTPUT = FI.OUTPUT + “, “

  • Assign FI.OUTPUT = FI.OUTPUT + LOGIN.STRTTIME

  • Assign FI.OUTPUT = FI.OUTPUT + “, “

  • Assign FI.OUTPUT = FI.OUTPUT + LOGIN.STOPTIME

  • Assign FI.OUTPUT = FI.OUTPUT + “, “

  • Assign FI.OUTPUT = FI.OUTPUT + FI.RESPONSE

  • Assign FI.OUTPUT = FI.OUTPUT + “, “

  • Assign FI.OUTPUT = FI.OUTPUT + RESULTS.OUTPUT

  • Assign FI.OUTPUT = FI.OUTPUT + “, “

  • Assign FI.OUTPUT = FI.OUTPUT + RESULTS.ERROR

  • Log “s:\ATWCS\output\FI.txt” FI.OUTPUT

Collecting data in this manner serves two purposes: it provides a single repository of output data, and it “backs up” other methods of data collection that may or may not, at the end of the day, provide the data you seek in a consistent or reliable manner. Note that any data can be collected in this way, too—you simply “add” the new data, be they a numeric value or alphanumeric string, to the end of the list. A slew of system variables and other input, processing, and output details could prove useful in your case. For example, AutoTester supports a transaction command, which lets you define the start and stop of an arbitrarily defined transaction and then report the wall-clock time consumed in its execution.
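The same concatenation pattern is easy to sketch outside AutoTester. Below is a minimal Python equivalent, assuming a comma-plus-space delimiter as in the script above; all stand-in values (client name, test case ID, server name, and so on) are hypothetical, not taken from a real system:

```python
import datetime

def build_output_line(*fields):
    """Concatenate an arbitrary list of data points into one
    comma-delimited log line, mirroring the repeated Assign
    statements in the AutoTester script."""
    return ", ".join(str(f) for f in fields)

def log_line(path, line):
    """Append a single finished line to the output log file."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(line + "\n")

# Hypothetical stand-ins for $MACHINE, $TESTCASEID, LOGIN.SAPSERVER, etc.
line = build_output_line(
    "CLIENT01",                          # $MACHINE
    "TC-0042",                           # $TESTCASEID
    "sapsrv1",                           # LOGIN.SAPSERVER
    "FB03-Display Document",             # RESULTS.SCREEN
    "BaselineRun",                       # $RUNNAME
    datetime.date.today().isoformat(),   # $DATE
    "10:33:21",                          # LOGIN.STRTTIME
    "10:33:29",                          # LOGIN.STOPTIME
    "8.1",                               # FI.RESPONSE
    "1900000123",                        # RESULTS.OUTPUT
    "",                                  # RESULTS.ERROR (empty = no error)
)
log_line("FI.txt", line)
```

Adding a new data point later is just another argument to `build_output_line`, exactly as the text describes.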

I've also become a big fan of scripting CCMS transactions that help me automatically gather test statistics for a particular period of time. Transactions like ST07 and AL08 can prove valuable while the system is under stress. The classic ST03 transaction, on the other hand, is an excellent tool for gathering post-test-run response time and throughput metrics associated with a particular test run. In the past, I've coded ST03 in combination with SM51; using SM51, I can select a specific application server (e.g., the first one in the list). I then follow this up with an ST03 to gather the specific dialog steps processed for the run as well as average response time, wait time, load time, roll time, database request time, enqueue time, and so on. After collecting this application server–specific data, I simply run SM51 again and choose the next application server in the list. In all, these are just the kinds of data that truly prove an SAP system is ready for prime time, or that one SAP system outperforms another.

And rather than using commas to delimit the various data points, you might choose instead to go with tabs or other delimiter characters (I actually use both a comma and a space, as you may have noticed in the sample output script, to give me maximum flexibility). You can also set up the output log to dump directly into an Excel spreadsheet, making data analysis that much easier. Keep in mind, though, that if you use a number of client drivers or use different output logs for different groups of scripts, and eventually need to collapse a number of these files into a single file, you may want to stay with basic text-mode output data. A simple batch file can then be written to map a shared drive to each computer housing a log file, copy and rename each file to a central location, and append each file to a master test-run output file.
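That consolidation step can be sketched in a few lines of Python, assuming each client driver has already deposited its text-mode log into one shared directory (the directory layout and file names here are hypothetical):

```python
import glob
import os
import shutil

def consolidate_logs(log_dir, master_path):
    """Append every per-client .txt log file in log_dir, in sorted
    order, to a single master test-run output file. Plays the same
    role as the batch file described in the text."""
    with open(master_path, "a", encoding="utf-8") as master:
        for path in sorted(glob.glob(os.path.join(log_dir, "*.txt"))):
            # Skip the master file itself if it lives in the same folder.
            if os.path.abspath(path) == os.path.abspath(master_path):
                continue
            with open(path, "r", encoding="utf-8") as src:
                shutil.copyfileobj(src, master)
```

Because every log uses the same delimiter convention, the resulting master file imports cleanly into a spreadsheet for analysis.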

10.7.2. Additional Coding Tips and Tricks

Throughout the years, I have come across a few additional scripting tips and tricks that should prove helpful to you. For example, in the case of simple hardware delta stress tests (where I want to compare the throughput of one SAP Technology Stack to another), one of my favorite practices is to avoid running transactions that create anything. Instead, I execute a read-only test case. In the long term, you'll still want to characterize the performance of other types of tests, like cache-only and 90% read/10% write situations. And it's still very important in some manner to eventually characterize write performance of the database and underlying disk subsystem, too (perhaps by executing transactions that make updates to sales orders, purchase requisitions, customer credit limits, and so on). But by creating a test case that purposefully avoids changes or inserts, you avoid growing the database and therefore eliminate much of the need for restoring the database to a known state before each of these types of stress-test runs.

This practice will speed up your overall test times tremendously, too. In a nutshell, read-only stress tests make for a much faster test cycle, from script development through managing input data through actual test execution—this read-only practice saves a lot of time on quite a few fronts! The biggest savings is in execution, though, because you don't have to devise a process for warming or populating the database prior to a stress-test run, nor do you have to take up time restoring the database to a known state before each test run.

For test runs where the previous read-only approach is not feasible or simply not desirable, you'll want to put together a process that's still as fast as possible to execute and re-execute. In these situations, the key is to determine how to reset quickly between stress-test runs because, bottom line, you need to refresh the database back to a known state after committing inserts or otherwise writing data, lest the comparison results be questionable. One hardware-centric method is to use clones or business continuity volumes to “snap off” a copy of your baseline database, and then resync from that clone once you need to get back to a known state. Other similar methods exist at OS and database layers as well.

I've also learned to use the random-number generators that ship with different load-testing tools and, more important, to create my own pseudorandom generator. Some of these tools are quite good, in that they generate a different number or seed each time, as expected. Others tend to generate the same sequence of numbers on every run, which is hardly random in my eyes, though such a sequence can still be used to create predictably unique values (better than nothing, even if it falls short of true randomization). I highly recommend that you develop your own random-number generation process, too. Like the serial number example I gave earlier in this chapter, a random number can be created by simply concatenating several values that change constantly. I often use the virtual client's unique number (which is really the key) along with the numbers associated with the current time and date (which may be the same for many thousands of virtual users executing concurrently) to create a reasonably unique pseudorandom number.
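One way such a home-grown generator might look in Python is sketched below; the four-digit client-number format and the record-picking helper are my own illustrative assumptions, not part of any testing tool:

```python
import datetime

def unique_seed(client_id):
    """Concatenate the current date/time (shared by all concurrent
    clients) with the virtual client's unique number (the real key),
    yielding a value no two clients produce in the same second.
    Assumes client IDs fit in four digits (0-9999)."""
    now = datetime.datetime.now()
    return int(f"{now:%Y%m%d%H%M%S}{client_id:04d}")

def pick_index(client_id, pool_size):
    """Fold the seed into a range, e.g. to pick one input record
    from a pool without two clients colliding on the same second."""
    return unique_seed(client_id) % pool_size
```

Because the client number occupies the low-order digits, even clients running in the very same second produce distinct seeds.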

For scripting tools that are not SAP-aware, another useful practice is to drive the SAPGUI using keyboard commands as much as possible, instead of performing mouse-clicks. That is, if you can navigate via the keyboard to the button you need to push or the radio button you need to select, you tend to get a more reliable script. And because many of the screens in the SAPGUI support function keys as well as “mousing,” it's not too difficult to find keyboard shortcuts. While scripting, right-click the background of the SAPGUI to see the available function-key shortcuts for that particular screen. Keep in mind, though, that not all shortcuts are displayed in this way. In fact, one of my favorite shortcuts is not displayed at all: the key combination used to save or commit the results of a transaction, “CTRL+S,” is not supported in Virtual mode by AT1 running newer versions of SAP R/3. However, the “old” F11 function key used in the past to save these changes still works! You'll be hard-pressed to find this noted anywhere, though, even if you right-click the background of the SAPGUI.

Another handy trick I picked up a few years ago involves using the number of seconds in the current time (0–59) as a way to randomly determine which transaction of many should execute. This is useful in umbrella scripts that essentially execute other scripts based on a set of criteria (generally improving script management and execution in the process). I use umbrella scripts and information provided to me by my customers to set up test mix distributions. For example, if my customer tells me that 10% of all scripts should execute FD32 and 30% should execute VA03, I set up if-then logic in my master umbrella script like “if time = 00 through 05, then execute transaction FD32.” This represents 6 of 60 possibilities, and therefore 10%. Similarly, “if time = 06 through 23, then execute transaction VA03.” This represents 18 of 60 possibilities, or 30%. As you can see, this method is granular enough to handle widely varying percentage loads. And if you create AutoController packages of 60 virtual clients each, you are assured of getting an excellent distribution (assuming you use the staggered SAPLOGIN approach I explained previously).
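The seconds-based mix logic above is straightforward to model. Here is a sketch in Python, using the 10% FD32 and 30% VA03 split from the text; the VA01 bucket covering the remaining 60% is a made-up example to complete the distribution:

```python
import datetime

# Hypothetical test mix: transaction code -> slots on the 0-59
# seconds counter. 6/60 slots = 10%, 18/60 = 30%, 36/60 = 60%.
MIX = [
    ("FD32", range(0, 6)),     # seconds 00-05 -> 10%
    ("VA03", range(6, 24)),    # seconds 06-23 -> 30%
    ("VA01", range(24, 60)),   # seconds 24-59 -> 60% (illustrative)
]

def choose_transaction(second=None):
    """Pick which transaction the umbrella script should launch,
    based on the seconds counter at the moment of the check."""
    if second is None:
        second = datetime.datetime.now().second
    for tcode, slots in MIX:
        if second in slots:
            return tcode
    raise ValueError("seconds counter out of range")
```

With packages of 60 virtual clients logging in staggered fashion, each client's check lands on a different second, so the realized mix converges on the configured percentages.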

Finally, another clock-based method ensures that a script executes a specific number of times per time period, and only that number of times. First I determine how long the script takes to execute under the load it'll eventually be required to run under. For instance, if I've been asked to execute 1,200 MB1C transactions (to enter goods receipts and move a shipment from one place to another) per hour, each followed by creating a transfer order (LT06), this equates to 20 MB1C/LT06 business processes per minute. In the midst of a large-scale test with thousands of users, I must determine the slowest acceptable pace at which MB1C can run. For our purposes, let's say that even under the harshest of system loads, MB1C and LT06 run to completion within 60 seconds, with the typical time closer to 10 seconds. Therefore I need 20 virtual users, each executing MB1C and LT06 every minute, to meet my success criterion of 1,200/hour. If I capture the minutes and seconds counters (system variables) before a virtual user commences execution, I can then use these numbers to control precisely when the next MB1C and LT06 will run (this assumes I have scripted these transactions to run in loop fashion, over and over again). The key is to set up a loop after the body of the script that compares the current time to the time captured when the transaction started. Specifically, if MB1C started at 10:33:21 a.m., the loop I set up after the body will be looking for the next time the value of 21 (or greater, in case I miss it!) shows up in the seconds counter. Until 21 comes around again, the script simply loops through this counter, effectively creating a controlled delay. Once the counter hits 21 again, though, control is sent back to the top of the body of the script, where MB1C is executed another time, followed by LT06, and so on. If I need more granular control, I can leverage both the minutes and seconds counters (e.g., to launch a new business process every 2 minutes, or every 4½ minutes). Proving that your scripts work as intended is as simple as reviewing your output logs, too, as the start time of each script should reflect that the transactions executed by each of the 20 virtual users are indeed staggered 1 minute apart.
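The pacing loop just described can be sketched in Python as follows. The `seconds_until` helper computes the controlled delay, while `run_paced` and its `business_process` callback are hypothetical stand-ins for the scripted MB1C/LT06 loop:

```python
import datetime
import time

def seconds_until(start_second, now_second):
    """Delay, in seconds, until the seconds counter next reads
    start_second again, wrapping at 60. Mirrors the 'wait until 21
    comes around again' loop in the text."""
    return (start_second - now_second) % 60

def run_paced(business_process, iterations):
    """Execute the business process once per minute: capture the
    seconds counter at the start of each pass, run the transactions,
    then delay until that value comes around again."""
    for _ in range(iterations):
        start = datetime.datetime.now().second   # capture, e.g. 21
        business_process()                       # e.g. MB1C then LT06
        delay = seconds_until(start, datetime.datetime.now().second)
        time.sleep(delay or 60)                  # never run back-to-back
```

With 20 virtual users each holding to this one-per-minute cadence, the output log should show 1,200 completed business processes per hour.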

10.7.3. Regular Communication

Although I have focused primarily on the technical side of scripting, it's probably appropriate to wrap up the chapter with a discussion on communication. Regular communication is a huge plus in any endeavor, and scripting SAP business transactions is no exception. For example, I've been left in the dark myself before by well-intentioned clients, only to surprise them a few days before test week with my “take” on what they needed or wanted. It's not where I want you to be a few days before you're expected to demonstrate the value of everything I cover in this book, so please read on.

Throughout my own script development experience, I have found that a status review meeting held at least twice a week (anywhere from 30 minutes to 2 hours long) is a good way to ensure that everyone is (still) on the same page. So much can change in a few days, especially in a new SAP implementation or in the wake of a planned upgrade, that going any longer than that between meetings is simply too risky in my opinion. Most of the time, these meetings should be used to focus on the status of the different technical or business-related challenges inherent to business process scripting, like the following:

  • Review of the business transactions or business processes

  • Status of the basic scripts to be written for each core functional area or business process

  • Status of basic input-data requirements—where the data are coming from, how they will be obtained, and how much will be available

  • Input approach—data-formatting issues, and how the development of text, Excel-based, or inline data files is coming along

  • Script modifications regarding virtualization

  • Status of error routines, if-then logic, and do-while logic

  • Status of special randomization or similar needs

  • Output approach—what's being collected and how it will later be analyzed

  • General and specific script issues

  • Test infrastructure build-out

  • Status of packages and the review of software configuration necessary to execute scripts in a multiuser virtual manner

  • Status update on the knowledge repository's growth, including what has been currently added to it and what's expected to be delivered to it near-term

  • Data-contention issues, both in general and from a multiuser perspective

These regular meetings or conference calls typically involve both technical and business process folks—the technical team tends to be more engaged in the middle of the scripting effort, whereas the business folks are more involved up front and then again at the tail end of the project. For projects lasting more than a month, I also like to include my project sponsor in these meetings every other week or so. During our get-togethers, I work to ensure that all testing assumptions still hold true. In addition, I share status updates related to scripting in the various functional areas, identify issues or problem areas, give special attention to data or technical scripting issues, and share successes. If something is in the process of changing, or already has changed (e.g., a script development client refresh wiped out master or configuration data I depended on, or the new client no longer includes data previously used), we discuss what this means to the project overall, and to the near-term timelines in particular.

Fortunately, some of the tasks associated with a stress test can be performed concurrently, including the following: working through the final test goals and success criteria, determining ways to validate success criteria, making revisions to the test plan and refining a testing methodology, working with the business units or functional teams to understand business processes or verify transaction flows, and actually test-executing each transaction. This gives the T3 the opportunity to crash the project schedule, a project-management term that implies throwing more bodies at a project to achieve milestones faster; in some cases, one or more of my colleagues have assisted me with data collection and script development, allowing these tasks to be completed faster than I ever could have completed them alone. On the other hand, tasks that cannot be crashed imply things that must occur sequentially, one task after the other. A good example of this is testing a script for bugs or other issues—each script must be recorded and then set up to run in virtual mode leveraging the to-be-tested variable input before any real bug testing can take place. Sharing updates like these helps to ensure that the project team stays on track, and the business process scripts, data, and supporting infrastructure and methods all make it possible to achieve an organization's testing and tuning goals.
