Testing the Information System

To warm up, we’re going to start our tests with our independent caching layer. Since our cache is a standalone GenServer, we can test it in isolation. It’s a good place to start because our cache does just two things: fetch and put.

Testing Our Cache

For the most part, testing our cache will work like testing any other service. We’ll create a cache and try some fetches and puts. Then we’ll use asserts to check what actually happened against our expectations. Let’s begin with a few basic tests and then we can handle corner cases for timeouts and shutdown. Shift into the apps/info_sys directory and then make test/cache_test.exs look like this:

 defmodule InfoSysTest.CacheTest do
   use ExUnit.Case, async: true

   alias InfoSys.Cache

   @moduletag clear_interval: 100

   setup %{test: name, clear_interval: clear_interval} do
     {:ok, pid} = Cache.start_link(name: name, clear_interval: clear_interval)
     {:ok, name: name, pid: pid}
   end

We’re creating a test module with the usual ceremony. The module tag specifies the interval for clearing the cache; we’ll use that feature to shorten cache expiration during our tests.

We also set up each test by starting a cache GenServer with Cache.start_link, passing the test name as the server name along with the shortened interval. Then we return the name and pid in the test context.

Now, we’re ready to run a couple of tests. One will check puts and fetches, and the other will check nonexistent keys:

   test "key value pairs can be put and fetched from cache", %{name: name} do
     assert :ok = Cache.put(name, :key1, :value1)
     assert :ok = Cache.put(name, :key2, :value2)

     assert Cache.fetch(name, :key1) == {:ok, :value1}
     assert Cache.fetch(name, :key2) == {:ok, :value2}
   end

   test "unfound entry returns error", %{name: name} do
     assert Cache.fetch(name, :notexists) == :error
   end
 end

The first test puts a couple of keys and verifies an :ok result, and then verifies both with fetches. The next test checks a simple fetch of a nonexistent key, and verifies an :error.
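For reference, here’s a minimal sketch of a cache that would satisfy these tests: an ETS-backed GenServer that sweeps its table every clear_interval. This is only a sketch of the idea; the Cache module built earlier in the book may differ in its details.

```elixir
defmodule InfoSys.Cache do
  # Sketch of an ETS-backed cache exposing the API the tests exercise:
  # start_link/1, put/3, and fetch/2. Details are illustrative, not the
  # book's exact implementation.
  use GenServer

  @clear_interval :timer.seconds(60)

  def start_link(opts) do
    opts = Keyword.put_new(opts, :name, __MODULE__)
    GenServer.start_link(__MODULE__, opts, name: opts[:name])
  end

  def put(name \\ __MODULE__, key, value) do
    true = :ets.insert(tab_name(name), {key, value})
    :ok
  end

  def fetch(name \\ __MODULE__, key) do
    {:ok, :ets.lookup_element(tab_name(name), key, 2)}
  rescue
    ArgumentError -> :error
  end

  def init(opts) do
    state = %{
      interval: opts[:clear_interval] || @clear_interval,
      table: new_table(opts[:name])
    }

    {:ok, schedule_clear(state)}
  end

  def handle_info(:clear, state) do
    :ets.delete_all_objects(state.table)
    {:noreply, schedule_clear(state)}
  end

  defp schedule_clear(state) do
    Process.send_after(self(), :clear, state.interval)
    state
  end

  defp new_table(name) do
    # :public lets callers read and write without going through the server.
    :ets.new(tab_name(name), [:set, :named_table, :public])
  end

  defp tab_name(name), do: :"#{name}_cache"
end
```

Because puts and fetches hit ETS directly, only the periodic sweep goes through the GenServer, which keeps the cache from becoming a bottleneck.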

If you’d like, you can run the tests. You’ll find them clean and green:

 $ mix test test/cache_test.exs
 ..

 Finished in 0.02 seconds
 2 tests, 0 failures

The tests are dead simple so far, but we should check out a couple of corner cases. We must make sure the GenServer shuts down cleanly and also make sure we handle error conditions like timeouts. We’re going to need a couple of test helper functions to assist us for each one. Above the test setup function, key this in:

 defp assert_shutdown(pid) do
   ref = Process.monitor(pid)
   Process.unlink(pid)
   Process.exit(pid, :kill)

   assert_receive {:DOWN, ^ref, :process, ^pid, :killed}
 end

 defp eventually(func) do
   if func.() do
     true
   else
     Process.sleep(10)
     eventually(func)
   end
 end

That’s a bit more meaty. Let’s talk through those helpers. The first one serves as a custom set of assertions to verify a server shuts down cleanly. We start a monitor and then unlink the process. We remove the link because otherwise, killing the server would crash our test process as well. Next we kill the process and make sure we get a :DOWN message on the monitor. We break this code into its own function because we’ll use it twice in the tests that follow.

The second helper keeps tests from sleeping for long, fixed periods while waiting on an expected result. Ideally, we want our tests to react only to messages. When that’s not enough, we repeatedly execute a function until it eventually returns true. Let’s see how these two helpers work in the context of our tests. Add these tests to the bottom of cache_test.exs:

 test "clears all entries after clear interval", %{name: name} do
   assert :ok = Cache.put(name, :key1, :value1)
   assert Cache.fetch(name, :key1) == {:ok, :value1}
   assert eventually(fn -> Cache.fetch(name, :key1) == :error end)
 end

 @tag clear_interval: 60_000
 test "values are cleaned up on exit", %{name: name, pid: pid} do
   assert :ok = Cache.put(name, :key1, :value1)
   assert_shutdown(pid)
   {:ok, _cache} = Cache.start_link(name: name)
   assert Cache.fetch(name, :key1) == :error
 end

Nice! The first test puts a key into the cache and then uses the eventually function to check whether the values eventually clear. Recall that the @moduletag at the top of the test module sets the clear_interval to 100 milliseconds. After that waiting period, the cache should be cleared and our test will pass. If all is well, the test runs as quickly as it can. If not, ExUnit will time out the test after 60 seconds and we can fix the problem.

In the second test, we verify that shutting down the cache actually erases all cached entries. To do so, we use @tag to set clear_interval to a high value, overriding the value set in @moduletag. That way we know it’s the shutdown, and not clear_interval, that erases the values. We write a key, shut down the server, and wait for the :DOWN message, which is delivered only once the cache process exits. Then we start a fresh cache under the same name and make sure the key is no longer present.
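If you’d rather have a failing condition give up before ExUnit’s 60-second timeout, a bounded variant of eventually is easy to write. This is an optional tweak, not part of the book’s code; it’s wrapped in a module here so it stands alone, but in the test file it would be another defp helper:

```elixir
defmodule EventuallyHelper do
  # Bounded variant of eventually/1: polls until the function returns
  # true or the deadline passes, then gives up with false so the
  # surrounding assert fails promptly instead of hanging.
  def eventually(func, timeout_ms \\ 1_000) do
    deadline = System.monotonic_time(:millisecond) + timeout_ms
    poll(func, deadline)
  end

  defp poll(func, deadline) do
    cond do
      func.() ->
        true

      System.monotonic_time(:millisecond) >= deadline ->
        false

      true ->
        Process.sleep(10)
        poll(func, deadline)
    end
  end
end
```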

You can see that testing GenServers is a bit trickier than testing pure functions, but it’s not too bad. Our InfoSys has a new set of challenges, though, since it’s pulling data from an external source. It’s time to attack that challenge.

Testing the InfoSys

We’ll move on to perhaps our most significant testing challenge, the InfoSys. Since this information system interacts with an external interface, we have some decisions to make. We also have quite a bit of behavior to cover, such as timeouts and forced backend termination. You’ll be surprised at how quickly we can cover all this functionality with a few short and sweet test cases. Let’s get started.

A natural first step for testing our InfoSys is to simply look for successful results. Create a new rumbl_umbrella/apps/info_sys/test/info_sys_test.exs with the following code:

 1: defmodule InfoSysTest do
 2:   use ExUnit.Case
 3:   alias InfoSys.Result
 4:
 5:   defmodule TestBackend do
 6:     def name(), do: "Wolfram"
 7:
 8:     def compute("result", _opts) do
 9:       [%Result{backend: __MODULE__, text: "result"}]
10:     end
11:
12:     def compute("none", _opts) do
13:       []
14:     end
15:
16:     def compute("timeout", _opts) do
17:       Process.sleep(:infinity)
18:     end
19:
20:     def compute("boom", _opts) do
21:       raise "boom!"
22:     end
23:   end

The top of the file has the typical module declaration and aliases. Then we move to our first problem, how to isolate our test code from the internet requests.

We solve this isolation problem by defining a stub called TestBackend on line 5. This module will act like our Wolfram backend, returning a response in the format that we expect. Since we don’t use the URL query string to do actual work, we can use this string to identify specific types of results we want our test backend to fetch:

 test "compute/2 with backend results" do
   assert [%Result{backend: TestBackend, text: "result"}] =
            InfoSys.compute("result", backends: [TestBackend])
 end

 test "compute/2 with no backend results" do
   assert [] = InfoSys.compute("none", backends: [TestBackend])
 end

With our stub in place, the tests will be remarkably simple. We define a test case for computing successful results. We pass a query string of "result", signaling our backend to send fake results. Then we assert that the result set is what we expect. Next, we use the same approach to handle empty datasets.

That takes care of the cases in which backends properly report results. Next, we need to cover the edge cases, like backend timeouts.

Chris says:
What’s the Difference Between a Stub and a Mock?

Stubs and mocks are both testing fixtures that replace real-world implementations. A stub replaces real-world libraries with simpler, predictable behavior. With a stub, a programmer can bypass code that would otherwise be difficult to test. Other than that, the stub has nothing to say about whether a test passes or fails. For example, a http_send stub might always return a fixed JSON response. In other words, a stub is just a simple scaffold implementation standing in for a more complex real-world implementation.

A mock is similar, but it has a greater role. It replaces real-world behavior just as a stub does, but it does so by allowing a programmer to specify expectations and results, playing back those results at runtime. A mock will fail a test if the test code doesn’t receive the expected function calls. For example, a programmer might create a mock for http_send that expects the test argument, returning the value :ok, followed by the test2 argument, returning :ok. If the test code doesn’t call the mock first with the value test and next with the value test2, it’ll fail. In other words, a mock is an implementation that records expected behavior at definition time and plays it back at runtime, enforcing those expectations.
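To make the distinction concrete, here is a hand-rolled illustration. The HTTPStub and HTTPMock modules are hypothetical, not from the book, and a real project would likely reach for a mock library instead: the stub always returns a canned response, while the mock is primed with the calls it expects and fails when the test code deviates from that script.

```elixir
# A stub: fixed, predictable behavior. It has nothing to say about
# whether a test passes or fails.
defmodule HTTPStub do
  def send(_url), do: {:ok, ~s({"status": "ok"})}
end

# A tiny hand-rolled mock: primed with expected calls and their results,
# it raises if the calls it receives don't match that script.
defmodule HTTPMock do
  def start_link(expected) do
    Agent.start_link(fn -> expected end, name: __MODULE__)
  end

  def send(arg) do
    reply =
      Agent.get_and_update(__MODULE__, fn
        [{^arg, result} | rest] -> {{:ok, result}, rest}
        expected -> {{:unexpected, expected}, expected}
      end)

    case reply do
      {:ok, result} ->
        result

      {:unexpected, expected} ->
        raise "unexpected send(#{inspect(arg)}); expected #{inspect(expected)}"
    end
  end

  # Fails if any expected calls were never made.
  def verify! do
    case Agent.get(__MODULE__, & &1) do
      [] -> :ok
      missing -> raise "expected calls never made: #{inspect(missing)}"
    end
  end
end
```

A test using the mock would prime it with `HTTPMock.start_link([{"test", :ok}, {"test2", :ok}])`, exercise the code under test, and finish with `HTTPMock.verify!()`.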

Incorporating Timeouts in Our Tests

A backend might time out. To test timeouts, we need a way to simulate a backend taking longer than expected. We also need to be able to make sure that the information system terminates the backend in such cases, as we expect it to. We want to do all of this in a fast test. Fortunately, with our testing structure, it’s a simple job:

 test "compute/2 with timeout returns no results" do
   results = InfoSys.compute("timeout", backends: [TestBackend], timeout: 10)
   assert results == []
 end
end

We want our test to be fast, so we shorten the timeout interval to 10 milliseconds. Passing the "timeout" query makes our stub sleep forever, so the backend can never reply in time. We assert that we get an empty result. Mission accomplished.
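Under the hood, a timeout like this can be enforced by running each backend in its own task, collecting whatever finished within the deadline, and killing the stragglers. The following is only a rough, hypothetical sketch of that mechanic, not the book’s InfoSys code; among other differences, the real system uses supervised tasks so a crashing backend can’t take down the caller.

```elixir
defmodule ComputeSketch do
  # Hypothetical sketch: run each backend in a task, keep results that
  # arrive within the timeout, and brutally kill anything still running.
  # Note: Task.async/1 links the task to the caller, so a raising
  # backend would crash this process; Task.Supervisor.async_nolink
  # avoids that in a real system.
  def compute(query, backends, timeout) do
    backends
    |> Enum.map(fn backend ->
      Task.async(fn -> backend.compute(query, []) end)
    end)
    |> Task.yield_many(timeout)
    |> Enum.flat_map(fn {task, result} ->
      # result is {:ok, value}, {:exit, reason}, or nil on timeout.
      case result || Task.shutdown(task, :brutal_kill) do
        {:ok, results} -> results
        _ -> []
      end
    end)
  end
end
```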

Now we can shift to our last corner case: exceptions. It’s a relatively easy job, but we’ll need one tiny trick. To keep our tests from printing a bunch of noisy log messages when the exception fires, we need to capture the log. With that in mind, key this last test in:

 @tag :capture_log
 test "compute/2 discards backend errors" do
   assert InfoSys.compute("boom", backends: [TestBackend]) == []
 end

The test is short and, um, suite. We capture the log in a test tag. Then we assert that no results are returned. Believe it or not, that’s all the testing we need to do at this level. We are ready to fire our test up:

 $ mix test test/info_sys_test.exs
 ....
 
 Finished in 0.06 seconds
 4 tests, 0 failures

Nice. It all works perfectly. Our new tests are nice and tidy, just like we want them. We’ve done pretty well with our generic information system, but there’s still some supporting Wolfram code that we’d like to test. Since that code has an external interface, it’s better to test that part in isolation.
