Review of test coverage and testability

When we introduced our sample service, we identified several issues related to testing. The first of these issues was the lack of isolation, where tests for one layer were also indirectly testing all the layers below it, as shown in the following code:

func TestGetHandler_ServeHTTP(t *testing.T) {
  // ensure the test always completes by giving it a timeout
  ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
  defer cancel()

  // Create and start a server.
  // With our current implementation, we cannot test this handler without
  // a full server, as we need the mux.
  address, err := startServer(ctx)
  require.NoError(t, err)

  // build inputs
  response, err := http.Get("http://" + address + "/person/1/")

  // validate outputs
  require.NoError(t, err)
  require.Equal(t, http.StatusOK, response.StatusCode)

  expectedPayload := []byte(`{"id":1,"name":"John","phone":"0123456780","currency":"USD","price":100}` + "\n")
  payload, _ := ioutil.ReadAll(response.Body)
  defer response.Body.Close()

  assert.Equal(t, expectedPayload, payload)
}

This is a test in the REST layer, but because it calls the actual model and, therefore, the actual data layers, it is effectively testing everything. This makes it a reasonable integration test, as it ensures that the layers work together appropriately. However, it is a poor unit test, because the layers are not isolated.

Our unit test now looks as follows:

func TestGetHandler_ServeHTTP(t *testing.T) {
  scenarios := []struct {
    desc            string
    inRequest       func() *http.Request
    inModelMock     func() *MockGetModel
    expectedStatus  int
    expectedPayload string
  }{
    // scenarios removed
  }

  for _, s := range scenarios {
    scenario := s
    t.Run(scenario.desc, func(t *testing.T) {
      // define model layer mock
      mockGetModel := scenario.inModelMock()

      // build handler
      handler := NewGetHandler(&testConfig{}, mockGetModel)

      // perform request
      response := httptest.NewRecorder()
      handler.ServeHTTP(response, scenario.inRequest())

      // validate outputs
      require.Equal(t, scenario.expectedStatus, response.Code, scenario.desc)

      payload, _ := ioutil.ReadAll(response.Body)
      assert.Equal(t, scenario.expectedPayload, string(payload), scenario.desc)
    })
  }
}

This test is considered isolated because, instead of relying on the other layers, we are relying on an abstraction—in our case, a mock implementation called *MockGetModel. Let's take a look at a typical mock implementation:

type MockGetModel struct {
  mock.Mock
}

func (_m *MockGetModel) Do(ID int) (*Person, error) {
  outputs := _m.Called(ID)

  if outputs.Get(0) != nil {
    return outputs.Get(0).(*Person), outputs.Error(1)
  }

  return nil, outputs.Error(1)
}

As you can see, the mock implementation is very simple; definitely simpler than the actual implementation of this dependency. Because of this simplicity, we are able to trust that it performs as we expect, and therefore, any problems that arise in the test will be caused by the actual code and not the mock. This trust can be further reinforced by the use of a code generator, such as Mockery (as introduced in Chapter 3, Coding for User Experience), that generates reliable and consistent code.
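For instance, once Mockery is installed, a go:generate directive placed alongside the interface is enough to regenerate the mock on demand. The following is a minimal sketch; the GetModel interface shape is implied by the mock's Do method, and the exact flag syntax varies between Mockery versions:

// GetModel is the model-layer abstraction that the REST layer depends on.
// (A sketch; the real interface is implied by MockGetModel's Do method.)
//go:generate mockery -name=GetModel
type GetModel interface {
  Do(ID int) (*Person, error)
}

Running go generate ./... then rewrites the mock whenever the interface changes.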

The mock has also given us the ability to test other scenarios easily. We now have tests for the following:

  • Happy path
  • Missing ID in the request
  • Invalid ID in the request
  • Dependency (model layer or below) failure
  • The requested record does not exist

Many of these situations were difficult to reliably test without the changes we made.
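As an illustration, the dependency failure scenario might be expressed as something like the following; the exact status code and payload are assumptions about our API contract:

{
  desc: "dependency failure",
  inRequest: func() *http.Request {
    request, err := http.NewRequest("GET", "/person/1/", nil)
    require.NoError(t, err)

    return request
  },
  inModelMock: func() *MockGetModel {
    // the model layer returns an error
    mockGetModel := &MockGetModel{}
    mockGetModel.On("Do", mock.Anything).Return(nil, errors.New("something failed")).Once()

    return mockGetModel
  },
  expectedStatus:  http.StatusInternalServerError,
  expectedPayload: "",
},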

Now that our test is isolated from the other layers, the test itself has a much smaller scope. This means we need to know less; all we need to know is the API contract for the layer we are testing.

In our example, this means that we only need to worry about HTTP concerns, such as extracting data from the request, outputting the correct status code, and rendering the response payload. Additionally, the number of ways in which the code we are testing can fail has been reduced. As a result, we ended up with less test setup, shorter tests, and more scenario coverage.
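To make these concerns concrete, the handler under test might look something like the following sketch; the getter abstraction, the extractID helper, and the status codes are assumptions based on the test scenarios above:

// a sketch only; extractID is a hypothetical helper, and the exact
// status codes depend on our API contract
func (h *GetHandler) ServeHTTP(response http.ResponseWriter, request *http.Request) {
  // extract the ID from the request (HTTP concern)
  id, err := h.extractID(request)
  if err != nil {
    response.WriteHeader(http.StatusBadRequest)
    return
  }

  // call the model layer via the abstraction
  person, err := h.getter.Do(id)
  if err != nil {
    response.WriteHeader(http.StatusInternalServerError)
    return
  }

  // render the response payload (HTTP concern); the 200 status is
  // written implicitly on the first write
  _ = json.NewEncoder(response).Encode(person)
}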

The second issue related to testing was duplication of effort. With the lack of isolation, our original tests were often somewhat superfluous. For example, the model layer test for the Get endpoint looked like this:

func TestGetter_Do(t *testing.T) {
  // inputs
  ID := 1

  // call method
  getter := &Getter{}
  person, err := getter.Do(ID)

  // validate expectations
  require.NoError(t, err)
  assert.Equal(t, ID, person.ID)
  assert.Equal(t, "John", person.FullName)
}

This looks alright on the surface, but when we consider that this scenario is already covered by our REST package test, we actually gain nothing from it. On the other hand, let's look at one of the several tests we have now:

func TestGetter_Do_noSuchPerson(t *testing.T) {
  // inputs
  ID := 5678

  // configure the mock loader
  mockLoader := &mockMyLoader{}
  mockLoader.On("Load", mock.Anything, ID).Return(nil, data.ErrNotFound).Once()

  // call method
  getter := &Getter{
    data: mockLoader,
  }
  person, err := getter.Do(ID)

  // validate expectations
  require.Equal(t, errPersonNotFound, err)
  assert.Nil(t, person)
  assert.True(t, mockLoader.AssertExpectations(t))
}

This test is now 100% predictable, as it does not rely on the current state of the database. It doesn't test the database, nor how we interact with it, but instead tests how we interact with the data loader abstraction. This means that the data layer implementation is free to evolve or change without needing to revisit and update the test. This test also validates that, if we receive an error from the data layer, we appropriately transform this error as our API contract expects.
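For context, the error transformation that this test verifies might look something like the following sketch of Getter.Do; the data loader's signature is assumed from the mockLoader.On("Load", ...) configuration above:

// a sketch of Getter.Do; the loader interface and its signature are
// assumptions based on the mock configuration in the test above
func (g *Getter) Do(ID int) (*Person, error) {
  person, err := g.data.Load(context.TODO(), ID)
  if err != nil {
    if err == data.ErrNotFound {
      // transform the data layer error into our API contract error
      return nil, errPersonNotFound
    }
    return nil, err
  }

  return person, nil
}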

We still have tests at both layers, as before, but instead of the tests bringing us no value, they now bring significant value.

The third issue we encountered when testing was test verbosity. One of the many changes we made was the adoption of table-driven tests. The original service test for our register endpoint looked as follows:

func TestRegisterHandler_ServeHTTP(t *testing.T) {
  // ensure the test always completes by giving it a timeout
  ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
  defer cancel()

  // Create and start a server.
  // With our current implementation, we cannot test this handler without
  // a full server, as we need the mux.
  address, err := startServer(ctx)
  require.NoError(t, err)

  // build inputs
  validRequest := buildValidRequest()
  response, err := http.Post("http://"+address+"/person/register", "application/json", validRequest)

  // validate outputs
  require.NoError(t, err)
  require.Equal(t, http.StatusCreated, response.StatusCode)
  defer response.Body.Close()

  // call should output the location of the new person
  headerLocation := response.Header.Get("Location")
  assert.Contains(t, headerLocation, "/person/")
}

And now, consider how it looks in the following code block:

func TestRegisterHandler_ServeHTTP(t *testing.T) {
  scenarios := []struct {
    desc           string
    inRequest      func() *http.Request
    inModelMock    func() *MockRegisterModel
    expectedStatus int
    expectedHeader string
  }{
    // scenarios removed
  }

  for _, s := range scenarios {
    scenario := s
    t.Run(scenario.desc, func(t *testing.T) {
      // define model layer mock
      mockRegisterModel := scenario.inModelMock()

      // build handler
      handler := NewRegisterHandler(mockRegisterModel)

      // perform request
      response := httptest.NewRecorder()
      handler.ServeHTTP(response, scenario.inRequest())

      // validate outputs
      require.Equal(t, scenario.expectedStatus, response.Code)

      // call should output the location of the new person
      resultHeader := response.Header().Get("Location")
      assert.Equal(t, scenario.expectedHeader, resultHeader)

      // validate the mock was used as we expected
      assert.True(t, mockRegisterModel.AssertExpectations(t))
    })
  }
}

I know what you are thinking: the test became more verbose, not less. Yes, this individual test did. However, in the original, testing another scenario would have started with copying and pasting almost the entire test, leaving us with approximately 10 lines of duplicated code and only a few lines unique to that scenario.

With our table-driven test style, we have eight lines of shared code that execute for every scenario and are clearly visible as such. Each scenario is neatly specified as an object in a slice, like so:

{
  desc: "Happy Path",
  inRequest: func() *http.Request {
    validRequest := buildValidRegisterRequest()
    request, err := http.NewRequest("POST", "/person/register", validRequest)
    require.NoError(t, err)

    return request
  },
  inModelMock: func() *MockRegisterModel {
    // valid downstream configuration
    resultID := 1234
    var resultErr error

    mockRegisterModel := &MockRegisterModel{}
    mockRegisterModel.On("Do", mock.Anything, mock.Anything).Return(resultID, resultErr).Once()

    return mockRegisterModel
  },
  expectedStatus: http.StatusCreated,
  expectedHeader: "/person/1234/",
},

To add another scenario, all we have to do is add another item to the slice. This is both very simple and quite neat and tidy.
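For example, a hypothetical scenario for a malformed request could be appended like so; the expected status and the expectation that the model is never called are assumptions about the handler's contract:

{
  desc: "bad input",
  inRequest: func() *http.Request {
    // an empty body should fail validation
    request, err := http.NewRequest("POST", "/person/register", nil)
    require.NoError(t, err)

    return request
  },
  inModelMock: func() *MockRegisterModel {
    // no expectations are set; the handler should reject the request
    // before reaching the model layer
    return &MockRegisterModel{}
  },
  expectedStatus: http.StatusBadRequest,
  expectedHeader: "",
},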

Lastly, if we ever need to make changes to the tests, perhaps because the API contract changed, we now have only one test to fix, instead of many.

The fourth issue we encountered was reliance on our upstream service. This is one of my pet peeves. Tests are supposed to be reliable and predictable, and test failures should be an absolute indicator that there is a problem that needs fixing. When tests rely on a third party and an internet connection, anything could go wrong, and the tests can break for any reason. Thankfully, after our changes in Chapter 8, Dependency Injection by Config, all of our tests, except the external-facing boundary tests, now rely on an abstraction and a mock implementation of the upstream service. Not only are our tests reliable, but we can also now easily test our error-handling conditions, similar to those discussed earlier.

In the following test, we have removed and mocked calls to the converter package in order to test what happens to our registrations when we fail to load the currency conversion:

func TestRegisterer_Do_exchangeError(t *testing.T) {
  // configure the mocks
  mockSaver := &mockMySaver{}
  mockExchanger := &MockExchanger{}
  mockExchanger.
    On("Exchange", mock.Anything, mock.Anything, mock.Anything).
    Return(0.0, errors.New("failed to load conversion")).
    Once()

  // define context and therefore test timeout
  ctx, cancel := context.WithTimeout(context.Background(), 1*time.Second)
  defer cancel()

  // inputs
  in := &Person{
    FullName: "Chang",
    Phone:    "11122233355",
    Currency: "CNY",
  }

  // call method
  registerer := &Registerer{
    cfg:       &testConfig{},
    exchanger: mockExchanger,
    data:      mockSaver,
  }
  ID, err := registerer.Do(ctx, in)

  // validate expectations
  require.Error(t, err)
  assert.Equal(t, 0, ID)
  assert.True(t, mockSaver.AssertExpectations(t))
  assert.True(t, mockExchanger.AssertExpectations(t))
}

You might remember that we still have tests in our exchange package. In fact, we have two types. We have internal-facing boundary tests that call a fake HTTP server that we created. These tests ensure that when the server gives a particular response, our code reacts as we expect, as shown in the following snippet: 

func TestInternalBoundaryTest(t *testing.T) {
  // start our test server
  server := httptest.NewServer(&happyExchangeRateService{})
  defer server.Close()

  // define the config
  cfg := &testConfig{
    baseURL: server.URL,
    apiKey:  "",
  }

  // create a converter to test
  converter := NewConverter(cfg)
  resultRate, resultErr := converter.Exchange(context.Background(), 100.00, "AUD")

  // validate the result
  assert.Equal(t, 158.79, resultRate)
  assert.NoError(t, resultErr)
}

type happyExchangeRateService struct{}

// ServeHTTP implements http.Handler
func (*happyExchangeRateService) ServeHTTP(response http.ResponseWriter, request *http.Request) {
  payload := []byte(`
{
  "success":true,
  "timestamp":1535250248,
  "base":"EUR",
  "date":"2018-08-26",
  "rates": {
    "AUD":1.587884
  }
}
`)
  response.Write(payload)
}

But we also have external-facing boundary tests, which still call the upstream service. These tests help us validate that the upstream service performs as we need it to, in concert with our code. However, to keep our tests predictable, we do not run the external tests very often. We achieved this by adding a build tag to the file, giving us an easy way to decide when to include these tests. Typically, I would run them only when something went wrong, or as a special step in the build pipeline that runs only these tests. We can then decide how to proceed after any failures during these tests.
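The tag itself is only a couple of lines at the top of the test file. The following is a sketch; the tag name external is an assumption, and the single-line syntax shown is the form used by Go versions of the time:

// +build external

package exchange

With this in place, a plain go test ./... skips the file, while go test -tags=external ./... includes it, which is what makes the dedicated build pipeline step possible.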
