Creating Comprehensive Test Coverage

We've covered some of the very basics of testing and delved into a bit of test theory. To provide solid coverage of a file, we'll need to build on that knowledge by pulling in some features of Elixir itself, as well as some optional features of ExUnit's assertions. These skills will help you achieve maximum test coverage with minimal code.

Using a Setup Block

Let's jump back to where we left our Soggy Waffle test, SoggyWaffle.WeatherAPI.ResponseParserTest. We'll make a modification to our existing test before adding some more tests to our response parser. The data we used in our test was hand-coded. That's a great way to start, but we don't own the original source of the data: the weather API. While the hard-coded data is easy to read, it's a significantly pared-down version of the real data from the weather API, and it would be nice to work with data that's as realistic as possible. To do this, we can use a fixture file. A simple curl call to the API can provide an actual JSON response payload. We've saved that to a file in the application, test/support/weather_api_response.json. Let's modify our test to use that data instead of the handwritten values we used earlier:

 1: describe "parse_response/1" do
      setup do
        response_as_string =
          File.read!("test/support/weather_api_response.json")
 5:
        response_as_map = Jason.decode!(response_as_string)
        %{weather_data: response_as_map}
      end

10:   test "success: accepts a valid payload, returns a list of weather structs",
           %{weather_data: weather_data} do
        assert {:ok, parsed_response} =
                 ResponseParser.parse_response(weather_data)

15:     for weather_record <- parsed_response do
          assert match?(
                   %Weather{datetime: %DateTime{}, rain?: _rain},
                   weather_record
                 )
20:
          assert is_boolean(weather_record.rain?)
        end
      end

Let’s update our test to use a setup block (line 2). In the setup code, you’ll see that we’re reading the contents of a JSON file (line 3) and then decoding it to an Elixir map (line 6). This leaves us with data parsed just like it would have been by other parts of the application before being passed to ResponseParser.parse_response/1. We now have real data, so let’s use it in the test.

Our test has been modified at line 11 to accept a test context. This is how the data gets from the setup block into the test. Note that we destructured the map so that we have immediate access to the weather_data variable. After that, we need to remove the old data setup and replace the parameter in the exercise call with the new variable (line 12). Run mix test; because our test was focused on the shape of the data and not on specific values, it should pass.
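If you'd rather make it obvious at each use site where the data comes from, ExUnit also lets the test accept the whole context map and read from it by key. Here's a minimal sketch of the same test head written that way:

    test "success: accepts a valid payload, returns a list of weather structs",
         context do
      # Same data as before, just pulled from the context at the use site.
      assert {:ok, _parsed_response} =
               ResponseParser.parse_response(context.weather_data)
    end

Destructuring in the test head tends to read better once a test touches more than one context key, which is why we'll stick with it.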

There are trade-offs to writing our test this way. As mentioned earlier, we benefit from the comfort of knowing that our test data is as realistic as it gets. To get there, though, we’ve sacrificed some of the readability of our test. The data we’re passing in is hidden in another file in JSON, a format that isn’t as easy to read as a pared-down Elixir map. Additionally, to make the fixture data available to other tests, we’ve moved it into a setup block.
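To put some of that hidden context back in front of the reader, here's a pared-down sketch of the decoded fixture's shape. The real payload carries many more fields; the keys shown are the only ones our parser relies on, and the "main" value is our assumption about the API's condition labels:

    # Sketch of the decoded fixture, not the full API payload.
    %{
      "list" => [
        %{
          "dt" => 1_574_359_200,
          "weather" => [%{"id" => 500, "main" => "Rain"}]
        }
      ]
    }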

Setup blocks are wonderful for test organization and for preventing repetition in our test suite, but they remove some of the test setup from the test and move it to another part of the file. Anytime the logic of a test extends outside of the test itself, it’s harder to read and understand. That said, like everything we do, we need to make a call on which way to go and which test design gives us the best balance of coverage and readability/maintainability.
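One way to soften that trade-off, sketched here rather than pulled from our codebase: setup also accepts the name of a function defined in the test module, so a reader of the describe block at least sees a descriptive name where the setup happens.

    describe "parse_response/1" do
      # setup/1 can take a function name instead of a block.
      setup :load_weather_fixture

      # ... tests receive %{weather_data: ...} exactly as before ...
    end

    # A named setup function receives the context and returns additions to it.
    defp load_weather_fixture(_context) do
      response_as_string = File.read!("test/support/weather_api_response.json")
      %{weather_data: Jason.decode!(response_as_string)}
    end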

Setting Module Attributes in Test Files

Now that our first test is using a fixture, we'll add tests that focus on specific values in the weather API response, meaning we won't use the fixture data, since we don't control the specific values it contains. Since the module under test (SoggyWaffle.WeatherAPI.ResponseParser) is focused on translating data specific to an external API into internal data, the module must have a lot of knowledge specific to that API. This shows up in the form of all the various weather condition IDs at the top of the module.

Our tests will need that same level of knowledge so that they can test the code thoroughly. This means adding a copy of all of the IDs to our test file. It may seem like a good idea to put them somewhere where both the test and the code under test can access them, but that's discouraged. Any accidental modifications to that list could cause our test to miss a needed case, allowing our code under test to make it to production with a bug. You should avoid using the Don't Repeat Yourself (DRY) principle when the instances are split between your tests and your code under test.[10]
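To make the risk concrete, here's a hedged sketch (the module name is hypothetical) of the kind of sharing we're warning against:

    # Anti-pattern: one source of truth shared by the code *and* its tests.
    # A typo or deleted ID here changes both sides at once, so no test
    # can ever catch the mistake.
    defmodule SoggyWaffle.WeatherIDs do
      def rain_ids, do: [500, 501, 502, 503, 504, 511, 520, 521, 522, 531]
    end

Duplicating the IDs in the test file means an accidental edit on either side surfaces as a test failure instead of slipping into production.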

In our case, we know that more than one test will likely make use of those lists of IDs, so we’ll add them as module attributes to the top of the test file.

@thunderstorm_ids {
  "thunderstorm",
  [200, 201, 202, 210, 211, 212, 221, 230, 231, 232]
}
@drizzle_ids {"drizzle", [300, 301, 302, 310, 311, 312, 313, 314, 321]}
@rain_ids {"rain", [500, 501, 502, 503, 504, 511, 520, 521, 522, 531]}

Leveraging List Comprehensions

Are you ready for something wild? We’re going to wrap our new test in a list comprehension. Add the following code to your test file and we’ll step through it. Keep the first for at the same level of indentation as the test before it.

 1: for {condition, ids} <- [@thunderstorm_ids, @drizzle_ids, @rain_ids] do
      test "success: recognizes #{condition} as a rainy condition" do
        now_unix = DateTime.utc_now() |> DateTime.to_unix()

 5:     for id <- unquote(ids) do
          record = %{"dt" => now_unix, "weather" => [%{"id" => id}]}

          assert {:ok, [weather_struct]} =
                   ResponseParser.parse_response(%{"list" => [record]})
10:
          assert weather_struct.rain? == true
        end
      end
    end

Once you’ve got the code in, run mix test to make sure that your tests are passing.

Before we talk about the list comprehension, let's look at the data we're passing into the function at the exercise step. It's a hand-coded map. Now that we've verified, using a real response, that our code handles correctly shaped data, we can trust hand-built payloads that match that shape. The payload has been distilled to the parts our code cares about: a map with "dt" and "weather" keys, with the ID nested in a list of maps under the "weather" key. Keeping the input payload so small helps keep our tests easy to understand. Additionally, because we're defining the values inside our test file (and, in this case, inside the test itself), we're safe to make assertions on specific values without worrying about hard-to-maintain tests.

The list comprehension actually adds three new tests, one for each kind of weather condition we're interested in: thunderstorms, drizzle, and rain. List comprehensions are useful when you have nearly identical tests but want their failures reported separately. You'll see that we had to use unquote,[11] a metaprogramming macro, to get access to the comprehension's values inside the test. This can be especially confusing since it's not inside a quote block, but it works, and it's the only piece of metaprogramming we'll introduce in this book. You'll need a way to give each generated test a unique name, like the way we interpolate the condition into the test name here. If any of these tests fail, the output from ExUnit will tell you which one it is, making it easier to hunt down the issue.

» 1) test parse_response/1 success: recognizes drizzle as a rainy condition
     (SoggyWaffle.WeatherAPI.ResponseParserTest)
     test/soggy_waffle/weather_api/response_parser_test.exs:32
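If the unquote mechanics still feel opaque, this toy module (hypothetical, unrelated to Soggy Waffle) shows the same pattern in isolation. The comprehension runs at compile time, and unquote/1 splices each value into the generated test body:

    defmodule DoublingTest do
      use ExUnit.Case, async: true

      # One test is generated per keyword pair; `name` keeps the test
      # names unique, and unquote(value) injects the literal value.
      for {name, value} <- [one: 1, two: 2] do
        test "doubles #{name}" do
          assert unquote(value) * 2 == unquote(value) + unquote(value)
        end
      end
    end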

Writing Custom Test Failure Messages

While list comprehensions allow us to cover more test cases with less test code, they run the risk of obscuring your errors. If a test fails but you can't tell which values it was asserting against, you'll lose time debugging the failure. In the test we just wrote, how would we know which ID was under test if rain? came back with the wrong value? Let's update the assertion to use a custom failure message:

1: for {condition, ids} <- [@thunderstorm_ids, @drizzle_ids, @rain_ids] do
2:   test "success: recognizes #{condition} as a rainy condition" do
3:     # test body
4:
5:     assert weather_struct.rain? == true,
6:            "Expected weather id (#{id}) to be a rain condition"
7:   end
8: end

Adding a custom error message with string interpolation (line 6) now gives us everything we need to know if the test fails. Having this information is important because otherwise there's no way to see which specific ID caused the test to fail. When designing your tests, take time to make sure the failure message points you straight at the issue. It doesn't take long, and you (or your teammates) will thank you later.

 1) test parse_response/1 success: recognizes drizzle as a rainy condition
    (SoggyWaffle.WeatherAPI.ResponseParserTest)
    test/soggy_waffle/weather_api/response_parser_test.exs:32
»   Expected weather id (300) to be a rain condition
    code: for id <- unquote(ids) do

Covering All the Use Cases

While we will discuss property-based testing in a later chapter, we often don’t need anything outside of ExUnit and Elixir to cover the rest of our cases. We’ve added a test for the conditions that will return true for rain, but now we need a test to make sure that no other codes will generate a true value. Add another test to your file with the following code:

 1: test "success: returns rain?: false for any other id codes" do
      {_, thunderstorm_ids} = @thunderstorm_ids
      {_, drizzle_ids} = @drizzle_ids
      {_, rain_ids} = @rain_ids
 5:   all_rain_ids = thunderstorm_ids ++ drizzle_ids ++ rain_ids
      now_unix = DateTime.utc_now() |> DateTime.to_unix()

      for id <- 100..900, id not in all_rain_ids do
        record = %{"dt" => now_unix, "weather" => [%{"id" => id}]}
10:
        assert {:ok, [weather_struct]} =
                 ResponseParser.parse_response(%{"list" => [record]})

        assert weather_struct.rain? == false,
15:            "Expected weather id (#{id}) to NOT be a rain condition."
      end
    end

It’s time for another test run (mix test) to make sure your tests are green. Running your tests regularly is always a good way to keep from getting too far down a rabbit hole before you realize you have a problem.

These tests are very thorough in that they test all possible positive values. We're able to do that because we have a known, finite list of rainy weather IDs. We don't have as comprehensive a list of non-rainy IDs; instead, based on the API's documentation, we've narrowed the range of codes we could get back to between 100 and 900. We're using a list comprehension again, but this time we're leaning on its filter feature to build, on the fly, a list of every ID between 100 and 900 except the ones we know to be rain IDs (line 8). Even though this means we're testing a lot of values, the test will still run fairly quickly because our code is purely functional. We again gave ourselves a custom error message to make sure that if this test fails, we know which ID caused the failure.
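In general, a comprehension filter drops every element for which the filter expression is falsy. For example, this keeps only the even numbers:

    iex> for id <- 1..10, rem(id, 2) == 0, do: id
    [2, 4, 6, 8, 10]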

This test could have used a list comprehension in either of the ways we've talked about (creating a new test or a new assertion for each value), and there are trade-offs with both. Generating a new test removes the need for custom error messages, but it increases the actual number of tests that are run. It's up to you to decide which makes the most sense once you've decided a list comprehension is the appropriate tool to reach for.
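For comparison, here's a hedged sketch of the test-per-value variant, assuming a hypothetical @all_rain_ids attribute built by concatenating the three ID lists at compile time. Every ID gets its own uniquely named test, so no custom message is needed, but the suite now runs roughly 770 additional tests:

    # Sketch only: @all_rain_ids is assumed to exist as a module attribute.
    for id <- 100..900, id not in @all_rain_ids do
      test "success: id #{id} is not a rainy condition" do
        now_unix = DateTime.utc_now() |> DateTime.to_unix()
        record = %{"dt" => now_unix, "weather" => [%{"id" => unquote(id)}]}

        assert {:ok, [weather_struct]} =
                 ResponseParser.parse_response(%{"list" => [record]})

        assert weather_struct.rain? == false
      end
    end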

Testing Error Cases

Every test we've written so far has focused on success cases. We add "success" and "error" to the beginning of our test names to help keep them sorted mentally, but that's a style choice in response to not being able to nest describe blocks in our tests. Now let's look at the places where our code might not return a success response, in this case due to bad input data. Since we want to test different functionality in separate tests, let's start with malformed weather data.

Our application is fairly small and isn't intended for customer-facing production. If it were, we'd have more tests and would leverage Ecto Schema and Changeset functions. We'll cover examples of that in Chapter 4, Testing Ecto Schemas. For now, we're going to add some basic coverage with the understanding that once you're done with this book, you'll have better tools to write even better validation tests for payloads. Add a new test inside the same describe block, but this time make it an "error" test:

test "error: returns error if weather data is malformed" do
  malformed_day = %{
    "dt" => 1_574_359_200,
    "weather" => [
      %{
        "wrong_key" => 1
      }
    ]
  }

  almost_correct_response = %{"list" => [malformed_day]}

  assert {:error, :response_format_invalid} =
           ResponseParser.parse_response(almost_correct_response)
end

Run your tests again with mix test.

By comparison, this test is almost boring. It can be simple because we already have a test asserting that if we pass good data to the code under test, the code parses it correctly. As a result, we just need to focus on one small thing: a missing "id" key. It's worth noting that by choosing to name the bad key "wrong_key", we're making our test self-documenting. Small opportunities like this occur everywhere in your tests.

We’ll add one more test to our file to cover a missing datetime in the payload. Add this one last test in that same describe block:

test "error: returns error if timestamp is missing" do
  malformed_day = %{
    # wrong key
    "datetime" => 1_574_359_200,
    "weather" => [
      %{
        "main" => 1
      }
    ]
  }

  almost_correct_response = %{"list" => [malformed_day]}

  assert {:error, :response_format_invalid} =
           ResponseParser.parse_response(almost_correct_response)
end

This test is almost identical to the previous one, but because it's testing a different part of the functionality of the code under test, we'll leave it as a separate test instead of finding a way to DRY the two up with a list comprehension. If you think about your tests as documentation, your test names are the top-level reference, and combining tests that aren't focused on the same thing removes some of that documentation.

Now that we’ve covered the anatomy of a test and written a comprehensive test file, it’s time to start looking at how the design of our code impacts the design of our tests. We’ll start by looking at testing purely functional code.
