5.2. Using the where: block

The where: block, introduced in chapter 3, is responsible for holding all input and output parameters for a parameterized test. It can be combined with all other blocks shown in chapter 4, but it has to be the last block inside a Spock test, as illustrated in figure 5.2. Only an and: block might follow a where: block (and that would be rare).

Figure 5.2. A where: clause must be the last block in a Spock test. It contains the differing values for parameterized tests.

The simpler given-expect-where structure was shown in listing 5.2. This works for trivial and relatively simple tests. The more common way (and the recommended one for larger parameterized tests) is the given-when-then-where structure shown in the following listing.

Listing 5.3. The given-when-then-where structure
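Because the listing itself isn't reproduced here, the following is a minimal sketch of the given-when-then-where structure. The classes ImageNameValidator and ImageExtensionCheck come from the surrounding text, but the method name examineImage() and the exact fields of the result object are assumptions.

```groovy
import spock.lang.Specification

// Sketch only: examineImage() and the ImageExtensionCheck fields
// (result, errorCode) are assumed names, not the book's exact code.
class ImageNameValidatorSpec extends Specification {

    def "Invalid image extensions are rejected with an error code"() {
        given: "an image validator"
        ImageNameValidator validator = new ImageNameValidator()

        when: "an image file name is examined"
        ImageExtensionCheck check = validator.examineImage(pictureFile)

        then: "the result object matches the expected values"
        check.result == validPicture
        check.errorCode == errorCode

        where: "sample image names are"
        pictureFile    | validPicture || errorCode
        "scenery.jpg"  | true         || ""
        "house.jpeg"   | true         || ""
        "socrates.txt" | false        || "invalid.extension"
    }
}
```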

Here I’ve modified the ImageNameValidator class to return a simple Java object named ImageExtensionCheck that groups the result of the check along with an error code and a human-readable description. The when: block creates this result object, and the then: block compares its contents against the parameterized variables in the where: block.

Notice that the where: block is the last one in the Spock test. If you have other blocks after the where: block, Spock will refuse to run the test.

Now that you know the basic use of the where: block, it’s time to focus on its contents. So far, all the examples you’ve seen have used data tables. This is one of the possible variations. Spock supports the following:

  • Data tables— This is the declarative style. Easy to write but doesn’t cope with complex tests. Readable by business analysts.
  • Data tables with programmatic expressions as values— A bit more flexible than data tables but with some loss in readability.
  • Data pipes with fully dynamic input and outputs— Flexible but not as readable as data tables.
  • Custom data iterators— Your nuclear option when all else fails. They can be used for any extreme corner case of data generation. Unreadable by nontechnical people.

You’ll examine the details of all these techniques in turn in the rest of the chapter.

5.2.1. Using data tables in the where: block

We’ve now established that the where: block must be the last block in a Spock test. In all examples you’ve seen so far, the where: block contains a data table, as illustrated in figure 5.3.

Figure 5.3. The where: block often contains a data table with defined input columns and a desired result column.

This data table holds multiple test cases in which each line is a scenario and each column is an input or output variable for that scenario. The next listing shows this format.

Listing 5.4. Using data tables in Spock
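A representative shape for such a listing is sketched below. The validator method name isValidImageName() is an assumption; the data-table format is the point.

```groovy
import spock.lang.Specification

// Sketch only: isValidImageName() is an assumed method name.
class ImageValidatorDataTableSpec extends Specification {

    def "Valid images are gif and jpg files"() {
        given: "an image validator"
        ImageNameValidator validator = new ImageNameValidator()

        expect: "that only valid filenames are accepted"
        validator.isValidImageName(pictureFile) == validPicture

        where: "sample image names are"
        pictureFile    || validPicture
        "scenery.jpg"  || true
        "house.jpeg"   || true
        "socrates.txt" || false
    }
}
```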

The data table contains a header that names each parameter. You have to make sure that the names you give to parameters don’t clash with existing variables in the source code (either in local scope or global scope).

You’ll notice that the data table is split with either single (|) or dual (||) pipe symbols. The single pipe denotes a column separator, and the double pipe shows where the input parameters stop and the output parameters start. Usually, only one column separator in a data table uses dual pipes.

In the simple example of listing 5.4, the output parameter is obvious. In more complex examples, such as listing 5.3 or the examples with the nuclear reactor in chapter 3, the dual pipe is much more helpful. Keep in mind that the dual pipe symbol is used strictly for readability and doesn’t affect the way Spock uses the data table. You can omit it if you think that it’s not needed (my recommendation is to always include it).

If you’re a seasoned Java developer, you should have noticed something strange in listing 5.4.[2] The types of the parameters are never declared. The data table contains the name and values of parameters but not their type!

2

And also in listings 5.3 and 5.2, if you’ve been paying attention.

Remember that Groovy (as explained in chapter 2) is an optionally typed language. In the case of data tables, Spock can understand the type of input and output parameters by the context of the unit test.

But it’s possible to explicitly define the types of the parameters by using them as arguments in the test method, as shown in the next listing.

Listing 5.5. Using data tables in Spock with typed parameters
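A sketch of what the typed variant might look like, reusing the assumed isValidImageName() method from before, with the parameter types declared explicitly as method arguments:

```groovy
// Sketch only: the same data-table test, but with the parameter
// types declared as arguments of the test method.
def "Valid images are gif and jpg files"(String pictureFile, boolean validPicture) {
    given: "an image validator"
    ImageNameValidator validator = new ImageNameValidator()

    expect: "that only valid filenames are accepted"
    validator.isValidImageName(pictureFile) == validPicture

    where: "sample image names are"
    pictureFile    || validPicture
    "scenery.jpg"  || true
    "socrates.txt" || false
}
```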

Here I’ve included all parameters as arguments in the test method. This makes their type clear and can also help your IDE (Eclipse) to understand the nature of the test parameters.

You should decide on your own whether you need to declare the types of the parameters. For brevity, I don’t declare them in any of the chapter examples. Just make sure that all developers on your team agree on the same decision.

5.2.2. Understanding limitations of data tables

I’ve already stressed that the where: block must be the last block in a Spock test (and only an and: block can follow it as a rare exception). I’ve also shown how to declare the types of parameters (in listing 5.5) when they’re not clear either to your IDE or even to Spock in some extreme cases.

Another corner case with Spock data tables is that they must have at least two columns. If you’re writing a test that has only one parameter, you must use a “filler” for a second column, as shown in the next listing.

Listing 5.6. Data tables with one column
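A sketch of the filler technique, using Spock's underscore as the second column (the validator method name is again an assumption):

```groovy
// Sketch only: a single-parameter table padded with the
// underscore (_) filler so the table has two columns.
def "Invalid image extensions are rejected"() {
    given: "an image validator"
    ImageNameValidator validator = new ImageNameValidator()

    expect: "that invalid filenames are rejected"
    !validator.isValidImageName(pictureFile)

    where: "invalid image names are"
    pictureFile    | _
    "socrates.txt" | _
    "house.pdf"    | _
}
```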

Perhaps some of these limitations will be lifted in future versions of Spock, but for the time being, you have to live with them. The advantages of Spock data tables still outweigh these minor inconveniences.

5.2.3. Performing easy maintenance of data tables

The ultimate goal of a parameterized test is easy maintenance. Maintenance is affected by several factors, such as the size of the test, its readability, and of course, its comments. Unfortunately, test code doesn’t always get the same attention as production code, resulting in tests that are hard to read and understand.

The big advantage of Spock and the way it exploits data tables in parameterized tests is that it forces you to gather all input and output variables in a single place. Not only that, but unlike other solutions for parameterized tests (examples were shown with JUnit in chapter 3), data tables include both the names and the values of test parameters.

Adding a new scenario is literally a single line change. Adding a new output or input parameter is as easy as adding a new column. Figure 5.4 provides a visual overview of how this might work for listing 5.3.

Figure 5.4. Adding a new test scenario means adding a new line in the where: block. Adding a new parameter means adding a new column in the where: block.

The ease of maintenance of Spock data tables is so addictive that once you integrate data tables into your complex tests, you’ll understand that the only reason parameterized tests are considered difficult and boring is inefficient test tools.

The beauty of this format is that data tables can be used for any parameterized test, no matter the complexity involved. If you can isolate the input and output variables, the Spock test is a simple process of writing down the requirements in the source code. In some enterprise projects I’ve worked on, extracting the input/output parameters from the specifications was a more time-consuming job than writing the unit test itself.

The extensibility of a Spock data table is best illustrated with a semi-real example, as shown in the next listing.

Listing 5.7. Capturing business needs in data tables
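A sketch of how such a table might map the business rules that are listed below. Every name here (Basket, finalDiscountFor, the column names, and the sample values) is an illustrative guess, not the book's code:

```groovy
// Sketch only: all class, method, and column names are assumptions
// derived from the bullet list of business inputs in the text.
def "Final discount depends on price, loyalty, status, and deals"() {
    given: "an electronic basket"
    Basket basket = new Basket()

    expect: "the discount matches the business rules"
    basket.finalDiscountFor(price, discount, bonusPoints,
                            status, orderTotal, specialDeal) == finalDiscount

    where: "business scenarios are"
    price | discount | bonusPoints | status     | orderTotal | specialDeal || finalDiscount
    100   | 10       | false       | "none"     | 100        | false       || 10
    100   | 10       | true        | "gold"     | 500        | false       || 20
    100   | 0        | true        | "platinum" | 1000       | true        || 30
}
```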

The unit test code isn’t important. The data table contains the business requirements from the e-shop example that was mentioned in chapter 1. A user selects multiple products by adding them to an electronic basket. The basket then calculates the final discount of each product, which depends on the following:

  • The price of the product
  • The discount of the product
  • Whether the customer has bonus/loyalty points
  • The status of the customer (for example, silver, gold, platinum)
  • The price of the total order (the rest of the products)
  • Any special deals that are active

The production code of the e-shop may comprise multiple Java classes with deep hierarchies and complex setups. With Spock, you can directly map the business needs in a single data table.

Now imagine that you’ve finished writing this Spock test, and it passes correctly. You can show that data table to your business analyst and ask whether all cases are covered. If another scenario is needed, you can add it on the spot, run the test again, and verify the correctness of the system.

In another situation, your business analyst might not be sure about the current implementation status of the system[3] and might ask what happens in a specific scenario that’s not yet covered by the unit test. To answer the question, you don’t even need to look at the production code. Again, you add a new line/scenario in the Spock data table, run the unit test on the spot, and if it passes, you can answer that the requested feature is already implemented.

3

A common case in legacy projects.

In less common situations, a new business requirement (or refactoring process) might add another input variable into the system. For example, in the preceding e-shop scenario, the business decides that coupon codes will be given away that further affect the discount of a product. Rather than hunting down multiple unit test methods (as in the naive approach of listing 5.2), you can add a new column in the data table and have all test cases covered in one step.

Even though Spock offers several forms of the where: block that will be shown in the rest of the chapter, I like the data table format for its readability and extensibility.

5.2.4. Exploring the lifecycle of the where: block

It’s important to understand that the where: block in a parameterized test “spawns” multiple test runs (one per line). A single test method that contains a where: block with three scenarios will be run by Spock as three individual methods, as shown in figure 5.5. All scenarios of the where: block are tested individually, so any change in state (either in the class under test or its collaborators) will be reset for the next run.

Figure 5.5. Spock will treat and run each scenario in the where: block of a parameterized test as if it were a separate test method.

To illustrate this individuality of data tables, look at the following listing.

Listing 5.8. Lifecycle of parameterized tests
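The listing can be sketched from the console output shown below; the println statements are taken verbatim from that output, while the Adder class itself is an assumption:

```groovy
import spock.lang.Specification

// Sketch reconstructed from the console output that follows;
// the Adder class is an assumed collaborator.
class LifecycleSpec extends Specification {

    def setup()   { println "Setup prepares next run" }
    def cleanup() { println "Cleanup releases resources of last run" }

    def "lifecycle of a parameterized test"() {
        given: "an adder"
        println "Given: block runs"
        Adder adder = new Adder()

        when: "the two numbers are added"
        println "When: block runs for first = $first and second = $second"
        int result = adder.add(first, second)

        then: "the expected sum is returned"
        println "Then: block is evaluated for sum = $sum"
        result == sum

        where: "some scenarios are"
        first | second || sum
        1     | 1      || 2
        3     | 2      || 5
        3     | -3     || 0
    }
}
```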

Because this unit test has three scenarios in the where: block, the given-when-then blocks will be executed three times as well. Also, all lifecycle methods explained in chapter 4 are fully honored by parameterized tests. Both setup() and cleanup() will be run as many times as the scenarios of the where: block.

If you run the unit test shown in listing 5.8, you’ll get the following output:

Setup prepares next run
Given: block runs
When: block runs for first = 1 and second = 1
Then: block is evaluated for sum = 2
Cleanup releases resources of last run

Setup prepares next run
Given: block runs
When: block runs for first = 3 and second = 2
Then: block is evaluated for sum = 5
Cleanup releases resources of last run

Setup prepares next run
Given: block runs
When: block runs for first = 3 and second = -3
Then: block is evaluated for sum = 0
Cleanup releases resources of last run

It should be clear that each scenario of the where: block acts as if it were a test method on its own. This enforces the isolation of all test scenarios, which is what you’d expect in a well-written unit test.

5.2.5. Using the @Unroll annotation for reporting individual test runs

In the previous section, you saw the behavior of Spock in parameterized tests when the where: block contains multiple scenarios. Spock correctly treats each scenario as an independent run.

Unfortunately, for compatibility reasons,[4] Spock still presents to the testing environment the collection of parameterized scenarios as a single test. For example, in Eclipse the parameterized test of listing 5.8 produces the output shown in figure 5.6.

4

With older IDEs and tools that aren’t smart when it comes to JUnit runners.

Figure 5.6. By default, parameterized tests with multiple scenarios are shown as one test in Eclipse. The trivial adder test is shown only once, even though the source code defines three scenarios.

This behavior might not be a big issue when all your tests succeed. You still gain the advantage of using a full sentence as the name of the test in the same way as with non-parameterized Spock tests.

Now assume that out of the three scenarios in listing 5.8, the second scenario is a failure (whereas the other two scenarios pass correctly). For illustration purposes, I modify the data table as follows:

where: "some scenarios are"
first | second || sum
1     | 1      || 2
3     | 2      || 7
3     | -3     || 0

The second scenario is obviously wrong, because 3 plus 2 isn’t equal to 7. The other two scenarios are still correct. Running the modified unit test in Eclipse shows the output in figure 5.7.

Figure 5.7. When one scenario out of many fails, it’s not clear which is the failed one. You have to look at the failure trace, note the parameters, and go back to the source code to find the problematic line in the where: block.

Eclipse still shows the parameterized test as a single run. You can see that the test has failed, but you don’t know which of the scenarios is the problematic one. You have to look at the failure trace to understand what’s gone wrong.

This isn’t helpful when your unit test contains a lot of scenarios, as in the example in listing 5.8. Being able to detect the failed scenario(s) as fast as possible is crucial.

To address this issue, Spock offers the @Unroll annotation, which makes multiple parameterized scenarios appear as multiple test runs. The annotation can be added on the Groovy class (the Spock specification) or on the test method itself, as shown in the next listing. In the former case, its effect applies to all test methods.

Listing 5.9. Unrolling parameterized scenarios
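A sketch of the unrolled version of the adder test (the Adder class remains an assumption):

```groovy
import spock.lang.Specification
import spock.lang.Unroll

// Sketch only: the adder test from before, now marked with @Unroll
// so each scenario is reported as a separate test run.
class UnrolledAdderSpec extends Specification {

    @Unroll
    def "trivial adder test"() {
        expect: "the sum of two numbers"
        new Adder().add(first, second) == sum   // Adder is an assumed class

        where: "some scenarios are"
        first | second || sum
        1     | 1      || 2
        3     | 2      || 5
        3     | -3     || 0
    }
}
```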

With the @Unroll annotation active, running this unit test in Eclipse “unrolls” the test scenarios and presents them to the test runner as individual tests, as shown in figure 5.8.

Figure 5.8. By marking a parameterized test with @Unroll, Eclipse now shows each run as an individual test.

The @Unroll annotation is even more useful when a test has failed, because you can see exactly which run was the problem. In large enterprise projects with parameterized tests that might contain a lot of scenarios, the @Unroll annotation becomes an essential tool if you want to quickly locate which scenarios have failed. Figure 5.9 shows the same failure as before, but this time you can clearly see which scenario has failed.

Figure 5.9. Locating failed scenarios with @Unroll is far easier than without it. The failed scenario is shown instantly as a failed test.

Remember that you still get the individual failure state for each scenario if you click it. Also note that the @Unroll annotation can be placed on the class level (the whole Spock specification) and will apply to all test methods inside the class.

5.2.6. Documenting parameterized tests

As you’ve seen in the previous section, the @Unroll annotation is handy when it comes to parameterized tests because it forces all test scenarios in a single test method to be reported as individual test runs. If you think that this feature isn’t groundbreaking and should be the default, I agree with you.[5]

5

After all, JUnit does this as well.

But Spock has another trick. With a little more effort, you can format the name shown for each scenario. The most logical things to include are the test parameters, as shown in the following listing.

Listing 5.10. Printing parameters of each scenario
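A sketch of the technique (the sentence inside the annotation is illustrative, not the book's exact wording):

```groovy
// Sketch only: an @Unroll argument that embeds the parameter values
// in each reported test name via the # prefix.
@Unroll("Adder test with #first and #second should give #sum")
def "trivial adder test"() {
    expect: "the sum of two numbers"
    new Adder().add(first, second) == sum   // Adder is an assumed class

    where: "some scenarios are"
    first | second || sum
    1     | 1      || 2
    3     | 2      || 5
}
```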

The @Unroll annotation accepts a string argument, in which you can put any English sentence. Variables marked with # will be replaced[6] with their current values when each scenario runs. The final result of this evaluation will override the name of the unit test, as shown in figure 5.10.

6

The reasons that the # symbol is used instead of $ are purely technical and aren’t relevant unless you’re interested in Groovy and Spock internals.

Figure 5.10. The parameter values for each scenario can be printed as part of the test name.

I consider this feature one of the big highlights of Spock. I challenge you to find a test framework that accomplishes this visibility of test parameters with a simple technique. If you’re feeling lazy, you can even embed the parameters directly in the test method name,[7] as shown in the following listing.

7

Take that, TestNG!

Listing 5.11. Parameter rendering on the test method
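A sketch of the lazier variant, with the #-prefixed parameters placed directly in the method name instead of in the @Unroll argument:

```groovy
// Sketch only: the parameter placeholders live in the method name
// itself, so @Unroll needs no string argument.
@Unroll
def "Adding #first and #second should give #sum"() {
    expect: "the sum of two numbers"
    new Adder().add(first, second) == sum   // Adder is an assumed class

    where: "some scenarios are"
    first | second || sum
    1     | 1      || 2
    3     | 2      || 5
}
```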

The result in Eclipse is the same as with listing 5.10, so pick any approach you like (but as always, if you work in a team, agree beforehand on the best practice).

5.2.7. Using expressions and statements in data tables

All the data tables I’ve shown you so far contain scalar values. Nothing is stopping you from using custom classes, collections, object factories, or any other Groovy expression that results in something that can be used as an input or output parameter. Take a look at the next listing (created strictly for demonstration purposes).

Listing 5.12. Custom expressions in data tables
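A sketch of expression-valued cells. The text describes MyInteger as an enumeration of the first 10 integers and IntegerFactory as a string-to-integer factory, but their exact shape (and the method names used here) is assumed:

```groovy
// Sketch only: MyInteger.THREE.value and fromString() are assumed
// APIs; the point is that table cells can hold arbitrary expressions.
def "Adder works with expressions as data-table values"() {
    expect: "each expression resolves to a plain integer before the test runs"
    new Adder().add(first, second) == sum   // Adder is an assumed class

    where: "expression-valued scenarios are"
    first                                | second                || sum
    MyInteger.THREE.value                | [1, 2].size()         || 5
    new IntegerFactory().fromString("4") | Integer.parseInt("3") || 7
    2 + 2                                | Math.max(1, 5)        || 9
}
```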

The MyInteger class is a simple enumeration that contains the first 10 integers. The IntegerFactory class is a trivial factory that converts strings to integers. The details of the code aren’t important; what you need to take away from this example is the flexibility of data tables. If you run this example, Spock will evaluate everything and present you with the final result, as shown in figure 5.11.

Figure 5.11. Spock will evaluate all expressions and statements so that they can be used as standard parameters. All statements from listing 5.12 finally resolve to integers.

I try to avoid this technique because I think it damages the readability of the test. I prefer to keep values in data tables simple. Using too many expressions in your data tables is a sign that you need to convert the tabular data into data pipes, as explained in the next section.
