In General

Zambelich has this to say with respect to test automation:

The case for automating the Software Testing Process has been made repeatedly and convincingly by numerous testing professionals. Most people involved in the testing of software will agree that the automation of the testing process is not only desirable, but in fact is a necessity given the demands of the current market. (3)

Developing an automated testing framework can be an expensive and time-consuming project. The framework must be created and implemented well in advance of the application under test (AUT) being delivered to quality control (QC), which means it must be built and tested early in the project's life cycle. Unfortunately, once that framework is in place, you may find that it is not appropriate for all of your organization's software testing needs. Knowing when to use such a framework is an important part of test automation. We have hotly argued this subject in online discussions with other members of the automated testing community. The portion of the argument that follows addresses the value of data-driven testing, given the effort expended to develop the framework. It was taken from a discussion held on the SQA users group Web site the week of May 17–21, 1999. Although there were many participants, in the excerpt here Elfriede Dustin, Carl Nagle, and Dan Mosley are the debating parties. You can find the full online discussion in Appendix A.

Elfriede:

I have to agree with Mark, in that there is a time for data-driven testing and a time when there isn't. I will always use data-driven testing using “test data” (see www.autotestco.com for one example of how we've used Robot for Y2K data testing), but I will very rarely use data-driven testing using “control data.” The reason is that it's tedious to implement, and the effort only pays off if the test can be reused many times over in subsequent releases.

I asked Ed Kit about this after his presentation at the STAR conference, and he agreed with me that the effort of implementing this approach often doesn't pay off until after the 17th run. (Yes, that's the number he gave.)

A while back, one of my coworkers in a previous job had just received training on this data-driven testing approach. It took that person 3 weeks to implement the data-driven approach. There were lots of nice tables with commands/controls and data to read. But in the end, the test would have been much more efficient using simple record and playback and modification of the recorded script, since the test wasn't used repeatedly. The effort in this case was a waste of time. You will have to use your judgment and remember that it does not always pay off to implement a complex data-driven framework for automated testing.

Carl:

I would have to agree with almost every aspect of this, but I must also argue that no amount of automation is cost-effective if it is not *INTENDED* to be repeated. In fact, breaking even on the 17th iteration sounds great. On a build verification performed nightly, that's 17 business days (or less), all in the same release!

Dan:

I beg to disagree with you (Elfriede). Data-driven testing does require the up-front investment that you indicated, but it pays big dividends during build-level regression testing. I have been there and I have seen it. We had to test 100+ transaction screens (each one was a window in itself) for a financial application. We developed 7,000+ data-driven tests; each test script took approximately 3 to 5 days to create and debug, but the tests ran in 1–2 hours when played back between builds. We usually received one build a week, and we were able to replay 100+ test scripts and 7,000+ test records each week and finish on time.

As you can see from this discussion, there are differing opinions as to the value of automated testing in general and of the data-driven approach specifically. The main question is this: When does it make sense to implement an automated testing framework? For some smaller testing projects, the effort may be too much given the size of the venture. Even for larger testing efforts, test automation does not seem sensible if the tests are not going to be reused in a regression test suite.

Another way to break down this issue is to view it by the type of testing to be automated. For unit and integration testing, automation is indispensable. Why? Without it, the quality of the builds handed to testing suffers: defects that should have been found during development are left for system testers to find. In today's world of Web development, Java is the language of choice and object orientation is the approach. Developers should be using test tools such as JUnit to embed test cases as classes that can be saved with the rest of the code and reused to test the Java objects prior to integration through the build process. Doing this eliminates many of the error types that currently slip through to system testing. Furthermore, an automated integration-level smoke test should be prepared and executed for each build before it is accepted into system test. Chapter 5 discusses these issues in more detail.
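To make the JUnit idea concrete, here is a minimal sketch of a unit test in the JUnit 3 style of that period. InterestCalculator and its monthlyRate method are hypothetical names used only for illustration; the point is that the test class is saved with the production code and rerun automatically on every build.

    import junit.framework.TestCase;

    // Unit tests for a hypothetical InterestCalculator class. The test
    // class compiles with the rest of the code and reruns on each build.
    public class InterestCalculatorTest extends TestCase {

        // The normal case: a 12% annual rate yields a 1% monthly rate.
        public void testMonthlyRateDividesAnnualRateByTwelve() {
            InterestCalculator calc = new InterestCalculator();
            assertEquals(0.01, calc.monthlyRate(0.12), 0.0001);
        }

        // The error case: a negative annual rate must be rejected.
        public void testNegativeAnnualRateIsRejected() {
            InterestCalculator calc = new InterestCalculator();
            try {
                calc.monthlyRate(-0.05);
                fail("Expected an IllegalArgumentException for a negative rate");
            } catch (IllegalArgumentException expected) {
                // The object correctly refused the invalid input.
            }
        }
    }

Run as part of the build process, a failing assertion stops a bad object before it ever reaches integration, which is precisely how automation keeps such defects away from the system testers.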

Hancock argues:

Test Automation is an investment. The initial investment may be substantial, but the return on the investment is also substantial. After the number of automation test runs exceeds 15, the testing from that point forward is essentially free. (1)

He sees automation versus manual testing as a basic cost-benefit analysis. He quotes Kaner's premise that it takes 3 to 10 times as long to build, verify, and document an automated test as it does to run the same test manually (2). Hancock uses 15 as a worst-case multiplier in his example of the potential return from test automation in order to produce a conservative estimate of the benefits.

To calculate the return on investment (ROI), Hancock says the “multiple” must first be determined; that is, how many times the set of tests needs to be executed. The multiple is the number of platforms, operating systems, and foreign languages to be accommodated by the software, multiplied together and then by the number of builds/versions against which the tests will run (1).
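As an illustration of the arithmetic, consider a product that must support 2 platforms, 3 operating systems, and 2 language editions across 10 builds. The figures here are invented for the example, not taken from Hancock.

    // A worked example of Hancock's "multiple." All of the figures are
    // hypothetical and chosen only to illustrate the calculation.
    public class RoiExample {
        public static void main(String[] args) {
            int platforms = 2;        // hardware platforms to support
            int operatingSystems = 3; // e.g., Windows, UNIX, Linux
            int languages = 2;        // foreign-language editions
            int builds = 10;          // builds/versions the tests will face

            // The multiple: how many times the test set will be executed.
            int multiple = platforms * operatingSystems * languages * builds;

            // Hancock's conservative break-even point of 15 runs.
            int breakEven = 15;

            System.out.println("Multiple: " + multiple); // prints 120
            System.out.println("Automation pays off: " + (multiple > breakEven));
        }
    }

A multiple of 120 comfortably clears even Kaner's worst-case automation cost of 10 manual runs per test, which is why the build-after-build reuse Carl and Dan describe is where automation earns its keep.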

If you use Hancock's ROI approach, automating unit and integration testing is a better bargain than automating system testing, because system tests have less chance of reuse than the other two types. If the system-level test cases are not implemented in an automated regression suite that will see a high level of reuse, there is not much ROI in system test automation. The ROI for unit and integration tests can be much higher.

Very few companies fully realize the value of automated testing. They seem to want it and know they need it, but in the end they cannot convince upper management of its cost benefits.
