Chapter 3. Data-driven Tests

The second meeting took place a few days later. The overall feedback was positive, since the automation had taken off faster than expected: bugs were being caught earlier and more code was under test. The remainder of the meeting revolved around the recently added features. After a few demos to the client, data preparation for the application became the subject of most of the discussion. Setting up the application's initial data was hard to control, since the operation was allowed from different portals, which threatened data integrity. This is critical, not to mention the time and effort needed to establish the data in the first place. The client therefore strongly preferred to incorporate data creation into the application itself.

From the development point of view, the new component that will cater to this feature consists of a form that accepts user input and commits it to the database after validating its integrity. This is where your mind can't help but drift to the huge amount of time needed to develop a test case for every data combination in the test data that will be generated!

Accordingly, in this chapter, you will learn how to avoid the explosive growth of your automated test scripts by:

  • Creating data-driven tests using MS Excel, CSV, and XML file data sources
  • Creating data-driven tests using database sources
  • Binding UI elements to data columns
  • Creating embedded data-driven tests

Data-driven testing architecture

In any testing environment, the adoption of automation originates in the need to address the mounting speculation and concern about whether a project will be delivered on the agreed date and, if so, whether enough confidence exists to go live. One obsession starts to outweigh all other choices and trade-offs: cost. This is when numbers pitting time against cost, quality against time, usability against functionality, and resources against budget start driving decisions. Test automation is brought in to balance risks, costs, and quality. With fewer budgeted resources, automation increases your code coverage and uncovers bugs earlier, in less time. These facts flatten cost; it is needless to speak of the importance of fixing bugs early in the development life cycle. After automation is adopted, the general atmosphere is happy, the ROI increases, and you start welcoming new tests to make the most of it.

The first set of worries fades out, only to be replaced by new ones: how to respond to change, how to maintain the scripts, and how to avoid repetitive work in automation. So you start getting creative in the implementation. Ideas and designs may differ slightly from one another, but eventually they are all oriented towards letting your automated solution grow in a healthy way while addressing these worries. Practically, this is where you create a backend library for your tests or extend the test framework. Such a solution disciplines the effect of requirement changes, enforces maintainability, and promotes component reusability.

But again, just as you manage to sort out your second problem, a third one arises, and this time it is data scalability: how do you add more scripts that test the same functionality with varied input while spending the least effort? You notice that the solution is as follows: the more test cases you cover with fewer scripts, the more beneficial automation is for your problem.

So, you sit back for a bit and observe your tests. You conclude that each test is performing the same XYZ operations for various inputs, and you realize that the steps to work around this problem are as follows:

  • Separate the functionality from the input
  • Allow the input to grow in a dynamic way with respect to your tests
  • Make your scripts operable regardless of the input source
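The three steps above can be sketched in plain code. The following Python snippet is a minimal illustration, not Test Studio's actual mechanism: `submit_order` is a hypothetical system under test, and an in-memory CSV stands in for a real external file. The test logic knows only the column names, so rows can be added (or the source swapped for a real file, an Excel sheet, or a database cursor yielding dict-like rows) without touching the script.

```python
import csv
import io

# Step 1: the functionality is isolated in one routine that knows
# nothing about where its inputs come from.
def submit_order(quantity, unit_price):
    """Hypothetical system under test: returns the order total."""
    return quantity * unit_price

# Step 2: the input lives in an external, growable source. Here a
# CSV held in a string stands in for a real file on disk.
CSV_DATA = """quantity,unit_price,expected_total
2,10.0,20.0
5,3.5,17.5
0,99.0,0.0
"""

# Step 3: the driver iterates over rows, so the same script works
# for any number of records and any CSV-shaped source.
def run_data_driven_test(rows):
    results = []
    for row in rows:
        actual = submit_order(int(row["quantity"]), float(row["unit_price"]))
        results.append(actual == float(row["expected_total"]))
    return results

rows = csv.DictReader(io.StringIO(CSV_DATA))
print(run_data_driven_test(rows))  # → [True, True, True]
```

Adding a new test case is now a one-line edit to the data source rather than a new script, which is exactly the scalability property the steps aim for.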

There you start remodelling your test scripts and end up styling them in what is called a data-driven architecture. This process of evolving from plain test recording to data-driven testing is self-imposing, in the sense that it calls upon itself as testing needs rise.

Throughout the coming examples in this chapter, Test Studio's capabilities will be exhibited to demonstrate how it contributes to solving the data scalability problem described earlier. Each test will start with simple recorded steps using one integrated set of inputs and will end up as a parameterized test reading its input from an external data source.
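The before-and-after shape of that evolution can be sketched outside of Test Studio as well. In this hypothetical Python example, `login` stands in for the recorded steps; the "recorded" version bakes in a single input combination, while the data-driven version replays the same steps over a row collection embedded in the script (the embedded variant of data-driven tests listed at the start of the chapter):

```python
def login(user, password):
    """Hypothetical system under test: only one valid credential pair."""
    return (user, password) == ("admin", "secret")

# Before: the recorded style, one hard-wired input set per test.
def test_login_recorded():
    assert login("admin", "secret") is True  # single baked-in combination

# After: the same steps parameterized over embedded data rows.
EMBEDDED_DATA = [
    {"user": "admin", "password": "secret", "expected": True},
    {"user": "guest", "password": "wrong",  "expected": False},
]

def test_login_data_driven():
    for row in EMBEDDED_DATA:
        assert login(row["user"], row["password"]) == row["expected"]

test_login_recorded()
test_login_data_driven()
```

The point is the shape, not the tool: whether the rows live inline, in Excel, in a CSV or XML file, or in a database table, the test body stays the same and only the row source changes.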
