9

Unit and Integration Testing

Testing is an integral part of any development process. It is important to cover your code with automated tests, ensuring that all important logic is exercised on every code change. Good tests help ensure that changes made throughout the development process keep the code working and reliable.

Testing is especially important in microservice development, but it brings some additional challenges to developers. It’s not enough to test each service – it’s also important to test the integrations between the services, ensuring every service can work with the others.

In this chapter, we will cover both unit testing and integration testing and illustrate how to add tests to the microservices we created in the previous chapters. We will cover the following topics:

  • Go testing overview
  • Unit tests
  • Integration tests
  • Testing best practices

You will learn how to write unit and integration tests in Go, how to use the mocking technique, and how to organize the testing code for your microservices. This knowledge will help you to build more reliable services.

Let’s proceed to the overview of Go testing tools and techniques.

Technical requirements

To complete this chapter, you need Go version 1.11 or later.

You can find the GitHub code for this chapter here: https://github.com/PacktPublishing/microservices-with-go/tree/main/Chapter09.

Go testing overview

In this section, we are going to provide a high-level overview of Go’s testing capabilities. We will cover the basics of writing tests for Go code, list the useful functions and libraries provided with the Go SDK, and describe various techniques for writing tests that will help you in microservice development.

First, let’s cover the basics of writing tests for Go applications.

Go language has built-in support for writing automated tests and provides a package called testing for this purpose.

There is a conventional relationship between the Go code and its tests. If you have a file called example.go, its tests would reside in the same package in a file called example_test.go. Using a _test file name suffix allows you to differentiate between the code being tested and the tests for it, making it easier to navigate the source code.

Go test functions follow this conventional name format, with each test function name starting with the Test prefix:

func TestXxx(t *testing.T)

Inside these functions, you can use the testing.T structure to report test failures or use any additional helper functions provided by it.
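Besides failure reporting via the Error and Errorf functions, testing.T offers utilities such as the Helper method, which marks a function as a test helper so that failures are attributed to the caller’s line. Here is a minimal sketch (assertNoErr is a hypothetical helper, not part of the chapter’s code):

func assertNoErr(t *testing.T, err error) {
  t.Helper() // Failures are reported at the caller's line, not here.
  if err != nil {
    t.Errorf("unexpected error: %v", err)
  }
}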

Let’s take this test as an example:

func TestAdd(t *testing.T) {
  a, b := 1, 2
  if got, want := Add(a, b), 3; got != want {
    t.Errorf("Add(%v, %v) = %v, want %v", a, b, got, want)
  }
}

In the preceding function, we used testing.T to report a test failure in case the Add function provides an unexpected output.
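For reference, the Add function under test lives outside the test file; a minimal implementation might look like this:

// Add returns the sum of two integers.
func Add(a, b int) int {
  return a + b
}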

When it comes to execution, we can run the following command:

go test

The command executes each test in the target directory and prints the output, containing error messages for any failing tests or any other necessary data.
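For a failing test, the output looks similar to the following (the file name, line number, package path, and timing here are illustrative):

--- FAIL: TestAdd (0.00s)
    example_test.go:7: Add(1, 2) = 4, want 3
FAIL
exit status 1
FAIL    example.com/calc    0.002s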

Developers are free to choose the format of their tests; however, there are some common techniques, such as table-driven tests, that often help organize test code elegantly.

Table-driven tests are tests in which inputs are stored in the form of a table or a set of rows. Let’s take this example:

func TestAdd(t *testing.T) {
    tests := []struct {
        a    int
        b    int
        want int
    }{
        {a: 1, b: 2, want: 3},
        {a: -1, b: -2, want: -3},
        {a: -3, b: 3, want: 0},
        {a: 0, b: 0, want: 0},
    }
    for _, tt := range tests {
        assert.Equal(t, tt.want, Add(tt.a, tt.b), fmt.Sprintf("Add(%v, %v)", tt.a, tt.b))
    }
}

In this code, we initialize the tests variable with the test cases for our function and then iterate over it. Note that we use the assert.Equal function provided by the github.com/stretchr/testify library to compare the expected and the actual result of the function being tested. This library provides a set of convenient functions that can simplify your test logic. Without using the assert library, the code comparing the test result would look like the following:

        if got, want := Add(tt.a, tt.b), tt.want; got != want {
            t.Errorf ("Add(%v, %v) = %v, want %v", tt.a, tt.b, got, want)
        }

Table-driven tests help reduce the repetitiveness of tests by separating test cases and the logic that performs actual checks. In general, these tests are good practice when you need to perform lots of similar checks against the defined goal states, as shown in our example.

The table-driven format also helps us improve the readability of test code, making it easier to see and compare different test cases for the same functions. The format is quite common in Go tests; however, you can always organize your test code in the way that is the best for your use case.

Now, let’s review the basic features provided by Go’s built-in testing library.

Subtests

One of the interesting features of the Go testing library is the ability to create subtests — tests that get executed inside other ones. Among the benefits of subtests is the ability to execute them separately, as well as to execute them in parallel for long-running tests and structure the test output in a more granular way.

Subtests are created by calling the Run function of the testing library:

func (t *T) Run(name string, f func(t *T)) bool

When using the Run function, you need to pass the name of the test case and the function to execute, and Go will take care of executing each test case separately. Here’s an example of a test using the Run function:

func TestProcess(t *testing.T) {
  t.Run("test case 1", func(t *testing.T) {
    // Test case 1 logic.
  })
  t.Run("test case 2", func(t *testing.T) {
    // Test case 2 logic.
  })
}

In the preceding example, we created two subtests by calling the Run function twice, one time for each subtest.

To achieve more fine-grained control over subtests, you can use the following options:

  • Each subtest, whether passing or failing, is shown separately in the output when you run the go test command with the -v argument
  • You can run an individual subtest by passing the -run argument to the go test command, as shown in the example below
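A single subtest can be selected with a slash-separated pattern of test and subtest names. Note that Go replaces spaces in subtest names with underscores, so the first test case from the example above would be selected like this:

go test -v -run 'TestProcess/test_case_1'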

There is one other interesting benefit of using the Run function. Let’s imagine that you have a function called Process that takes seconds to complete. If you have a table test with lots of test cases and you execute them sequentially, the execution of the entire test may take a lot of time. In this case, you could let the Go test runner execute tests in parallel mode by calling the t.Parallel() function. Let’s illustrate this in the following example:

func TestProcess(t *testing.T) {
    tests := []struct {
        name  string
        input string
        want  string
    }{
        {name: "empty", input: "", want: ""},
        {name: "dog", input: "animal that barks", want: "dog"},
        {name: "cat", input: "animal that meows", want: "cat"},
    }
    for _, tt := range tests {
        tt := tt // Capture the range variable (required before Go 1.22).
        t.Run(tt.name, func(t *testing.T) {
            t.Parallel()
            assert.Equal(t, tt.want, Process(tt.input), fmt.Sprintf("Process(%v)", tt.input))
        })
    }
}

In our example, we call the t.Run function for each test case, passing the test case name and the function to be executed. Then, we call t.Parallel() to make each test case execute in parallel. This optimization would significantly reduce the execution time in the case that our Process function is very slow.

Skipping

Imagine that you want to execute your Go tests after each change on your computer, but you have some slow tests that take a long time to run. In that case, you would want to find a way to skip running tests under certain conditions. The Go testing library has built-in support for this – the Skip function. Let’s take this test function as an example:

func TestProcess(t *testing.T) {
  if os.Getenv("RUNTIME_ENV") == "development" {
    t.Skip("Skipping a test in development environment")
  }
  ...
}

In the preceding code, we skip the test execution if there is a RUNTIME_ENV runtime environment variable with the development value. Note that we also provide the reason for skipping it inside the t.Skip call so that it is logged on test execution.

The skipping feature can be particularly useful for bypassing the execution of long-running tests, such as tests performing slow I/O operations or doing lots of data processing. To support this, the Go testing library provides a short mode, which is enabled by passing the -short flag to the go test command (the go command forwards it to the test binary as -test.short):

go test -short

With the -short flag, you let the Go test runner know that you want to run tests in short mode — a testing mode in which only the fast tests are executed. You can add the following logic to all long-running tests to exclude them in short mode:

func TestLongRunningProcess(t *testing.T) {
  if testing.Short() {
    t.Skip("Skipping a test in short mode")
  }
  ...
}

In the preceding example, the test is skipped when the -short flag is passed to the test command.

Using the short testing mode is useful when some of your tests are much slower than others and you need to run tests very frequently. Skipping the slow tests and executing them less frequently could significantly increase your development speed and make your development experience much better.

You can get familiar with the other Go testing features by checking out the official documentation for the testing package: https://pkg.go.dev/testing. We are now going to proceed to the next section and focus on the details of implementing unit tests for our microservices.

Unit tests

We have covered many useful features for automated testing of Go applications and are now ready to illustrate how to use them in our microservice code. First, we are going to start with unit tests — tests of individual units of code, such as structures and individual functions.

Let’s walk through the process of implementing unit tests for our code using the metadata service controller as an example. Currently, our controller file looks like this:

package metadata

import (
    "context"
    "errors"

    "movieexample.com/metadata/internal/repository"
    "movieexample.com/metadata/pkg/model"
)

// ErrNotFound is returned when the requested record is not found.
var ErrNotFound = errors.New("not found")

type metadataRepository interface {
    Get(ctx context.Context, id string) (*model.Metadata, error)
}

// Controller defines a metadata service controller.
type Controller struct {
    repo metadataRepository
}

// New creates a metadata service controller.
func New(repo metadataRepository) *Controller {
    return &Controller{repo}
}

// Get returns movie metadata by id. Repository-level not-found errors
// are mapped to the controller-level ErrNotFound.
func (c *Controller) Get(ctx context.Context, id string) (*model.Metadata, error) {
    res, err := c.repo.Get(ctx, id)
    if err != nil && errors.Is(err, repository.ErrNotFound) {
        return nil, ErrNotFound
    }
    return res, err
}

Let’s list what we would like to test in our code:

  • A Get call when the repository returns ErrNotFound
  • A Get call when the repository returns an error other than ErrNotFound
  • A Get call when the repository returns metadata and no error

So far, we have three test cases to implement. All test cases need to perform operations on the metadata repository and we need to simulate three different responses from it. How exactly should we simulate the responses from our metadata repository in the test? Let’s explore the powerful technique that allows us to achieve this with our testing code.

Mocking

The technique of simulating responses from a component is called mocking. Mocking is often used in tests to simulate various scenarios, such as returning specific results or errors. There are multiple ways of using mocking in Go code. The first one is to implement the fake version of components, called mocks, manually. Let’s illustrate how to implement these mocks using our metadata repository as an example. Our metadata repository interface is defined in the following way:

type metadataRepository interface {
    Get(ctx context.Context, id string) (*model.Metadata, error)
}

The mock implementation of this interface could look like this:

type mockMetadataRepository struct {
    returnRes *model.Metadata
    returnErr error
}
func (m *mockMetadataRepository) setReturnValues(res *model.Metadata, err error) {
    m.returnRes = res
    m.returnErr = err
}
func (m *mockMetadataRepository) Get(ctx context.Context, id string) (*model.Metadata, error) {
    return m.returnRes, m.returnErr
}

In our example mock of the metadata repository, we let callers set the values to be returned by upcoming calls to the Get function via the setReturnValues function. The mock could be used to test our controller in the following way:

m := &mockMetadataRepository{}
m.setReturnValues(nil, repository.ErrNotFound)
c := New(m)
res, err := c.Get(context.Background(), "some-id")
// Check res, err.
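The final check could, for example, verify that the controller translates the repository error into its own ErrNotFound (a sketch, assuming the error mapping shown in the controller code earlier):

if res != nil || !errors.Is(err, ErrNotFound) {
  t.Errorf("Get() = %v, %v; want nil, %v", res, err, ErrNotFound)
}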

Manual implementation of mocks is a relatively simple way to test calls to various components that are outside of the scope of the package being tested. The downside of this approach is that you need to write mock code by yourself and update its code on any interface changes.

The other way of using mocks is to use libraries that generate mocking code. An example of this kind of library is https://github.com/golang/mock, which contains a mock generation tool called mockgen. You can install it by running the following command:

go install github.com/golang/mock/mockgen@latest

The mockgen tool can then be used in the following way:

mockgen -source=foo.go [options]

Let’s illustrate how to generate mock code for our metadata repository. Run the following command from the src directory of our project:

mockgen -package=repository -source=metadata/internal/controller/metadata/controller.go

You should get the contents of a mock source file as the output. The contents would be similar to this:

// MockmetadataRepository is a mock of metadataRepository 
// interface
type MockmetadataRepository struct {
    ctrl     *gomock.Controller
    recorder *MockmetadataRepositoryMockRecorder
}
// NewMockmetadataRepository creates a new mock instance
func NewMockmetadataRepository(ctrl *gomock.Controller) *MockmetadataRepository {
    mock := &MockmetadataRepository{ctrl: ctrl}
    mock.recorder = &MockmetadataRepositoryMockRecorder{mock}
    return mock
}
// EXPECT returns an object that allows the caller to indicate
// expected use.
func (m *MockmetadataRepository) EXPECT() *MockmetadataRepositoryMockRecorder {
    return m.recorder
}
// Get mocks base method.
func (m *MockmetadataRepository) Get(ctx context.Context, id string) (*model.Metadata, error) {
    ret := m.ctrl.Call(m, "Get", ctx, id)
    ret0, _ := ret[0].(*model.Metadata)
    ret1, _ := ret[1].(error)
    return ret0, ret1
}

The generated mock code implements our interface and allows us to set the expected responses to our Get function in the following way:

ctrl := gomock.NewController(t)
defer ctrl.Finish()
m := NewMockmetadataRepository(ctrl)
ctx := context.Background()
id := "some-id"
m.EXPECT().Get(ctx, id).Return(nil, repository.ErrNotFound)

The mock code generated by the gomock library provides some useful features that we have not implemented in our manually created mock version. One of them is the ability to set the expected number of times that the target function should be called using the Times function:

m.EXPECT().Get(ctx, id).Return(nil, repository.ErrNotFound).Times(1)

In the preceding example, we limit the number of times the Get function is called to one. The gomock library verifies these constraints at the end of the test execution and reports whether the function was called a different number of times. This mechanism is pretty useful when you want to make sure the target function has definitely been called in your test.
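The gomock library also provides argument matchers, such as gomock.Any(), which come in handy when the exact argument values are irrelevant to the test. For example, the following expectation matches a Get call with any context and any id, any number of times:

m.EXPECT().Get(gomock.Any(), gomock.Any()).Return(nil, repository.ErrNotFound).AnyTimes()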

So far, we have shown how to use mocks in two different ways, and you may ask what the preferred way of using them is. Let’s compare the two approaches to find out the answer.

The benefit of implementing mocks manually is the ability to do so without using any external libraries, such as gomock. However, the downsides of this approach would be the following:

  • Manual implementation of mocks takes time
  • Any changes to the mocked interfaces would require manual updates to the mock code
  • Harder to implement extra features that are provided by libraries such as gomock, such as call count verification

Using a library such as gomock for providing mock code would be beneficial for the following reasons:

  • Higher code consistency when all mocks are generated in the same way
  • No need to write boilerplate code
  • An extended mock feature set

In our comparison, automatic mock code generation seems to provide more advantages, so we will follow the gomock-based approach for automatic mock generation. In the next section, we are going to demonstrate how to do this for our services.

Implementing unit tests

We are going to illustrate how to implement controller unit tests using the generated gomock code. First, we will need to find a good place in our repository to put the generated code. We already have a directory called gen that is shared among the services. We can create a sub-directory called mock that we can use for various generated mocks. Run the mock generation command for the metadata repository again:

mockgen -package=repository -source=metadata/internal/controller/metadata/controller.go

Copy its output to the file called gen/mock/metadata/repository/repository.go. Now, let’s add a test for our metadata service controller. Create a file called controller_test.go in its directory and add to it the following code:

package metadata

import (
    "context"
    "errors"
    "testing"

    "github.com/golang/mock/gomock"
    "github.com/stretchr/testify/assert"

    gen "movieexample.com/gen/mock/metadata/repository"
    "movieexample.com/metadata/internal/repository"
    "movieexample.com/metadata/pkg/model"
)

Then, add the following code, containing the test cases in a table format:

func TestController(t *testing.T) {
    tests := []struct {
        name       string
        expRepoRes *model.Metadata
        expRepoErr error
        wantRes    *model.Metadata
        wantErr    error
    }{
        {
            name:       "not found",
            expRepoErr: repository.ErrNotFound,
            wantErr:    ErrNotFound,
        },
        {
            name:       "unexpected error",
            expRepoErr: errors.New("unexpected error"),
            wantErr:    errors.New("unexpected error"),
        },
        {
            name:       "success",
            expRepoRes: &model.Metadata{},
            wantRes:    &model.Metadata{},
        },
    }

Finally, add the code to execute our tests:

    for _, tt := range tests {
        t.Run(tt.name, func(t *testing.T) {
            ctrl := gomock.NewController(t)
            defer ctrl.Finish()
            repoMock := gen.NewMockmetadataRepository(ctrl)
            c := New(repoMock)
            ctx := context.Background()
            id := "id"
            repoMock.EXPECT().Get(ctx, id).Return(tt.expRepoRes, tt.expRepoErr)
            res, err := c.Get(ctx, id)
            assert.Equal(t, tt.wantRes, res, tt.name)
            assert.Equal(t, tt.wantErr, err, tt.name)
        })
    }
}

The code that we just added implements three different test cases for our Get function using the generated repository mock. We let the mock return the specific values by calling the EXPECT function and passing the desired values. We organized our test in a table-driven way, which we described earlier in the chapter.

To run the tests, use the regular command:

go test

If you did everything correctly, the output of the test should include ok. Congratulations, we have just implemented the unit tests and demonstrated how to use mocks! We will let you implement the remaining tests for the microservices yourself — it’s going to be a fair amount of work, but this is always a great investment for ensuring the code remains tested and reliable.

In the next section, we are going to work on another type of test – integration tests. Knowing why and how to write integration tests in addition to regular unit tests for your microservices will help you to write more stable code and make sure all services work well in integration with each other.

Integration tests

Integration tests are automated tests that verify the correctness of integrations between the individual units of your services and the services themselves. In this section, you are going to learn how to write integration tests and how to structure the logic inside them, as well as get some useful tips that will help you write your own integration tests in the future.

Unlike unit tests that test the individual pieces of code, such as functions and structures, integration tests help ensure that the combinations of individual pieces still work well together.

Let’s provide an example of an integration test, taking our rating service as an example. The integration test for our service would instantiate both the service instance and the client for it and ensure that client requests would produce the expected results. As you remember, our rating service provides two API endpoints:

  • PutRating: Writes a rating to the database
  • GetAggregatedRating: Retrieves the ratings for a provided record (such as a movie) and returns the aggregated value

Our integration test for the rating service could have the following sequence of calls:

  • Writes some data using the PutRating endpoint
  • Verifies the data using the GetAggregatedRating endpoint
  • Writes new data using the PutRating endpoint
  • Calls the GetAggregatedRating endpoint and checks that the aggregated value reflects the latest rating update

In microservice development, integration tests usually test individual services or combinations of them – developers can write tests that target an arbitrary number of services.

Unlike unit tests—which generally reside together with the code being tested and can access some internal functions, structures, constants, and variables—integration tests often treat the components being tested as black boxes. Black boxes are logical blocks for which the implementation details remain unknown and can only be accessed through publicly exposed APIs or user interfaces. This way of testing is called black box testing – testing of a system using a public interface, such as an API, instead of calling individual internal functions or accessing internal components of the system.

Microservice integration tests are often performed by instantiating service instances and performing requests either by calling service APIs or via asynchronous events in case the system handles requests in an asynchronous fashion. The structure of an integration test usually follows a similar pattern:

  • Set up the test: Instantiate the components being tested and any clients that can access their interfaces
  • Perform test operations and verify the correctness of results: Run an arbitrary number of operations and compare the outputs from the system being tested, such as a microservice, to the expected values
  • Tear down the test: Gracefully terminate the test by tearing down the components instantiated in the setup, closing any clients if needed

To illustrate how to write an integration test, let’s take three microservices from the previous chapters – metadata, movie, and rating services. To set up our test, we would need to instantiate six components – a server and a client for each microservice. To make it easier to run the test, we can instantiate servers using in-memory implementations of service registries and repositories.

Before you write the test, it’s often helpful to write down the set of operations to be tested and determine the expected outputs for each step. Let’s write down the plan for our integration test:

  1. Write metadata for an example movie using the metadata service API (the PutMetadata endpoint) and check that the operation does not return any errors.
  2. Retrieve the metadata for the same movie using the metadata service API (the GetMetadata endpoint) and check it matches the record that we submitted earlier.
  3. Get the movie details (which should only consist of metadata) for our example movie using the movie service API (the GetMovieDetails endpoint) and make sure the result matches the data that we submitted earlier.
  4. Write the first rating for our example movie using the rating service API (the PutRating endpoint) and check the operation does not return any errors.
  5. Retrieve the initial aggregated rating for our movie using the rating service API (the GetAggregatedRating endpoint) and check that the value matches the one that we just submitted in the previous step.
  6. Write the second rating for our example movie using the rating service API and check that the operation does not return any errors.
  7. Retrieve the new aggregated rating for our movie using the rating service API and check that the value reflects the last rating.
  8. Get the movie details for our example movie and check that the result includes the updated rating.

Having this kind of plan makes it easier to write the code for the integration test and brings us to the last step — actually implementing it:

  1. Create a test/integration directory and add the file called main.go with the following code:
    package main

    import (
        "context"
        "log"
        "net"

        "github.com/google/go-cmp/cmp"
        "github.com/google/go-cmp/cmp/cmpopts"
        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"

        "movieexample.com/gen"
        metadatatest "movieexample.com/metadata/pkg/testutil"
        movietest "movieexample.com/movie/pkg/testutil"
        "movieexample.com/pkg/discovery"
        "movieexample.com/pkg/discovery/memory"
        ratingtest "movieexample.com/rating/pkg/testutil"
    )
  2. Let’s add some constants with service names and addresses that we can use later in the test to the file:
    const (
        metadataServiceName = "metadata"
        ratingServiceName   = "rating"
        movieServiceName    = "movie"
        metadataServiceAddr = "localhost:8081"
        ratingServiceAddr   = "localhost:8082"
        movieServiceAddr    = "localhost:8083"
    )
  3. The next step is to implement the setup code to instantiate our service servers:
    func main() {
        log.Println("Starting the integration test")
        ctx := context.Background()
        registry := memory.NewRegistry()
        log.Println("Setting up service handlers and clients")
        metadataSrv := startMetadataService(ctx, registry)
        defer metadataSrv.GracefulStop()
        ratingSrv := startRatingService(ctx, registry)
        defer ratingSrv.GracefulStop()
        movieSrv := startMovieService(ctx, registry)
        defer movieSrv.GracefulStop()

Note the deferred calls to the GracefulStop function of each server — this code is part of the tear-down logic of our test, terminating all servers gracefully.

  4. Now, let’s set up the test clients for our services:
        opts := grpc.WithTransportCredentials(insecure.NewCredentials())
        metadataConn, err := grpc.Dial(metadataServiceAddr, opts)
        if err != nil {
            panic(err)
        }
        defer metadataConn.Close()
        metadataClient := gen.NewMetadataServiceClient(metadataConn)
        
        ratingConn, err := grpc.Dial(ratingServiceAddr, opts)
        if err != nil {
            panic(err)
        }
        defer ratingConn.Close()
        ratingClient := gen.NewRatingServiceClient(ratingConn)
        movieConn, err := grpc.Dial(movieServiceAddr, opts)
        if err != nil {
            panic(err)
        }
        defer movieConn.Close()
        movieClient := gen.NewMovieServiceClient(movieConn)

Now, we are ready to implement the sequence of our test commands. The first step is to test the write and read operations of the metadata service:

    log.Println("Saving test metadata via metadata service")
    m := &gen.Metadata{
        Id:          "the-movie",
        Title:       "The Movie",
        Description: "The Movie, the one and only",
        Director:    "Mr. D",
    }
    if _, err := metadataClient.PutMetadata(ctx, &gen.PutMetadataRequest{Metadata: m}); err != nil {
        log.Fatalf("put metadata: %v", err)
    }
    log.Println("Retrieving test metadata via metadata service")
    getMetadataResp, err := metadataClient.GetMetadata(ctx, &gen.GetMetadataRequest{MovieId: m.Id})
    if err != nil {
        log.Fatalf("get metadata: %v", err)
    }
    if diff := cmp.Diff(getMetadataResp.Metadata, m, cmpopts.IgnoreUnexported(gen.Metadata{})); diff != "" {
        log.Fatalf("get metadata after put mismatch: %v", diff)
    }

You may notice that we used the cmpopts.IgnoreUnexported(gen.Metadata{}) option inside the call to the cmp.Diff function — this tells the cmp library to ignore the unexported fields of the gen.Metadata structure. We have added this option because the gen.Metadata structure, generated by the Protocol Buffers code generator, includes some unexported fields that we want to exclude from the comparison.
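As a side note, if your tests compare Protocol Buffers messages in many places, the protocmp.Transform() option from the google.golang.org/protobuf/testing/protocmp package is an alternative that teaches cmp to compare proto messages directly, without listing each generated type:

    // Alternative to cmpopts.IgnoreUnexported for proto messages.
    if diff := cmp.Diff(m, getMetadataResp.Metadata, protocmp.Transform()); diff != "" {
        log.Fatalf("get metadata after put mismatch: %v", diff)
    }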

The next test in our sequence would be to retrieve the movie details and check that the metadata matches the record that we submitted earlier:

    log.Println("Getting movie details via movie service")
    wantMovieDetails := &gen.MovieDetails{
        Metadata: m,
    }
    getMovieDetailsResp, err := movieClient.GetMovieDetails(ctx, &gen.GetMovieDetailsRequest{MovieId: m.Id})
    if err != nil {
        log.Fatalf("get movie details: %v", err)
    }
    if diff := cmp.Diff(getMovieDetailsResp.MovieDetails, wantMovieDetails, cmpopts.IgnoreUnexported(gen.MovieDetails{}, gen.Metadata{})); diff != "" {
        log.Fatalf("get movie details after put mismatch: %v", err)
    }

Now, we are ready to test the rating service.

Let’s implement two tests – one for writing a rating and one for retrieving the initial aggregated value, which should match the first rating:

    log.Println("Saving first rating via rating service")
    const userID = "user0"
    const recordTypeMovie = "movie"
    firstRating := int32(5)
    if _, err = ratingClient.PutRating(ctx, &gen.PutRatingRequest{
        UserId:      userID,
        RecordId:    m.Id,
        RecordType:  recordTypeMovie,
        RatingValue: firstRating,
    }); err != nil {
        log.Fatalf("put rating: %v", err)
    }
    log.Println("Retrieving initial aggregated rating via rating service")
    getAggregatedRatingResp, err := ratingClient.GetAggregatedRating(ctx, &gen.GetAggregatedRatingRequest{
        RecordId:   m.Id,
        RecordType: recordTypeMovie,
    })
    if err != nil {
        log.Fatalf("get aggreggated rating: %v", err)
    }
    if got, want := getAggregatedRatingResp.RatingValue, float64(5); got != want {
        log.Fatalf("rating mismatch: got %v want %v", got, want)
    }

The next part of the test would be to submit the second rating and check that the aggregated value was changed:

    log.Println("Saving second rating via rating service")
    secondRating := int32(1)
    if _, err = ratingClient.PutRating(ctx, &gen.PutRatingRequest{
        UserId:      userID,
        RecordId:    m.Id,
        RecordType:  recordTypeMovie,
        RatingValue: secondRating,
    }); err != nil {
        log.Fatalf("put rating: %v", err)
    }
    log.Println("Saving new aggregated rating via rating service")
    getAggregatedRatingResp, err = ratingClient.GetAggregatedRating(ctx, &gen.GetAggregatedRatingRequest{
        RecordId:   m.Id,
        RecordType: recordTypeMovie,
    })
    if err != nil {
        log.Fatalf("get aggreggated rating: %v", err)
    }
    wantRating := float64(firstRating+secondRating) / 2 // Avoid integer division when averaging.
    if got, want := getAggregatedRatingResp.RatingValue, wantRating; got != want {
        log.Fatalf("rating mismatch: got %v want %v", got, want)
    }

We are almost done with our main function – let’s implement the last check:

    log.Println("Getting updated movie details via movie service")
    getMovieDetailsResp, err = movieClient.GetMovieDetails(ctx, &gen.GetMovieDetailsRequest{MovieId: m.Id})
    if err != nil {
        log.Fatalf("get movie details: %v", err)
    }
    wantMovieDetails.Rating = wantRating
    if diff := cmp.Diff(getMovieDetailsResp.MovieDetails, wantMovieDetails, cmpopts.IgnoreUnexported(gen.MovieDetails{}, gen.Metadata{})); diff != "" {
        log.Fatalf("get movie details after update mismatch: %v", err)
    }
    log.Println("Integration test execution successful")
}

Our integration test is almost ready. Let’s add the functions for initializing the servers for our services below the main function. First, add the function for creating the server for a metadata service:

func startMetadataService(ctx context.Context, registry discovery.Registry) *grpc.Server {
    log.Println("Starting metadata service on " + metadataServiceAddr)
    h := metadatatest.NewTestMetadataGRPCServer()
    l, err := net.Listen("tcp", metadataServiceAddr)
    if err != nil {
        log.Fatalf("failed to listen: %v", err)
    }
    srv := grpc.NewServer()
    gen.RegisterMetadataServiceServer(srv, h)
    go func() {
        if err := srv.Serve(l); err != nil {
            panic(err)
        }
    }()
    id := discovery.GenerateInstanceID(metadataServiceName)
    if err := registry.Register(ctx, id, metadataServiceName, metadataServiceAddr); err != nil {
        panic(err)
    }
    return srv
}

You may notice that we call the srv.Serve function inside a goroutine — this way, it doesn’t block the execution and allows us to immediately return from the function.

Let’s add a similar implementation for the rating service server to the same file:

func startRatingService(ctx context.Context, registry discovery.Registry) *grpc.Server {
    log.Println("Starting rating service on " + ratingServiceAddr)
    h := ratingtest.NewTestRatingGRPCServer()
    l, err := net.Listen("tcp", ratingServiceAddr)
    if err != nil {
        log.Fatalf("failed to listen: %v", err)
    }
    srv := grpc.NewServer()
    gen.RegisterRatingServiceServer(srv, h)
    go func() {
        if err := srv.Serve(l); err != nil {
            panic(err)
        }
    }()
    id := discovery.GenerateInstanceID(ratingServiceName)
    if err := registry.Register(ctx, id, ratingServiceName, ratingServiceAddr); err != nil {
        panic(err)
    }
    return srv
}

Finally, let’s add a function for initializing the movie server:

func startMovieService(ctx context.Context, registry discovery.Registry) *grpc.Server {
    log.Println("Starting movie service on " + movieServiceAddr)
    h := movietest.NewTestMovieGRPCServer(registry)
    l, err := net.Listen("tcp", movieServiceAddr)
    if err != nil {
        log.Fatalf("failed to listen: %v", err)
    }
    srv := grpc.NewServer()
    gen.RegisterMovieServiceServer(srv, h)
    go func() {
        if err := srv.Serve(l); err != nil {
            panic(err)
        }
    }()
    id := discovery.GenerateInstanceID(movieServiceName)
    if err := registry.Register(ctx, id, movieServiceName, movieServiceAddr); err != nil {
        panic(err)
    }
    return srv
}

Our integration test is ready! You can run it by executing the following command:

go run test/integration/*.go

If everything is correct, you should see the following output:

2022/07/16 16:20:46 Starting the integration test
2022/07/16 16:20:46 Setting up service handlers and clients
2022/07/16 16:20:46 Starting metadata service on localhost:8081
2022/07/16 16:20:46 Starting rating service on localhost:8082
2022/07/16 16:20:46 Starting movie service on localhost:8083
2022/07/16 16:20:46 Saving test metadata via metadata service
2022/07/16 16:20:46 Retrieving test metadata via metadata service
2022/07/16 16:20:46 Getting movie details via movie service
2022/07/16 16:20:46 Saving first rating via rating service
2022/07/16 16:20:46 Retrieving initial aggregated rating via rating service
2022/07/16 16:20:46 Saving second rating via rating service
2022/07/16 16:20:46 Retrieving new aggregated rating via rating service
2022/07/16 16:20:46 Getting updated movie details via movie service
2022/07/16 16:20:46 Integration test execution successful

As you may notice, the structure of our integration test precisely matches the sequence of test operations that we defined earlier. We implemented our integration test as an executable command and added enough log messages to help you with debugging – if any step fails, it is therefore easier to understand at which step the failure occurred and which operations preceded that step.

It is important to note that we used the in-memory versions of the metadata and rating repositories in our integration test. An alternative approach would be to set up an integration test that stores the data in some persistent databases, such as MySQL. However, there are some challenges with using existing persistent databases in integration tests:

  • Integration test data should not interfere with user data. Otherwise, it may cause unexpected effects on existing service users.
  • Ideally, test data should be cleaned up after test execution so that the database does not get filled with unnecessary, temporary data.

In order to avoid interference with the existing user data, I would suggest running integration tests on non-production environments, such as staging. Additionally, I would suggest always generating random identifiers for your test records to make sure that individual test executions don’t affect each other. For example, you can use the github.com/google/uuid library to generate new identifiers using the uuid.New() function. Lastly, I would recommend always including cleanup code at the end of each integration test that uses persistent data storage to clean up the created records, whenever this is possible.
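For example, generating a collision-free record identifier with the uuid library takes a single line (the test-movie- prefix is just an illustrative naming convention):

// Each test run gets a unique record id, so repeated or parallel
// executions do not interfere with each other.
movieID := "test-movie-" + uuid.New().String()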

Now, the question is when we should write integration tests. It is always up to you; however, I do have some general suggestions:

  • Test critical flows: Make sure you test entire flows, such as user signups and logins
  • Test critical endpoints: Test the most critical endpoints that your services provide to your users

Additionally, you may want integration tests to be executed after each code change. Systems such as Jenkins provide these kinds of features and allow you to plug in custom logic that is executed on each update of your code. We won’t cover the Jenkins setup in this book, but you can familiarize yourself with its documentation on the official website (https://www.jenkins.io).

As we have illustrated how to write both unit and integration tests, let’s proceed to the next section of the chapter, describing some of the best practices of Go testing.

Testing best practices

In this section, we are going to list some additional useful testing tips that are going to help you to improve the quality of your tests.

Using helpful messages

One of the most important aspects of writing tests is providing enough information in error logs so that it is easy to understand exactly what went wrong and which test case triggered the failure. Consider the following test case code:

if got, want := Process(tt.in), tt.want; got != want {
  t.Errorf("Result mismatch")
}

The error log includes neither the expected nor the actual value received from the function being tested, making it harder to understand what the function returned and how it differed from the expected value.

The better log line would be as follows:

t.Errorf("got %v, want %v", got, want)

This log line includes the expected and the actual returned value of the function and provides much more context to you when you debug the test.

Important note

Note that in our test logs, we log the actual value first and the expected one second. This order is recommended by the Go team as the conventional way of logging values in tests and is followed throughout the standard library. Follow the same order in your logs for consistency.

An even better error message would be as follows:

t.Errorf("YourFunc(%v) = %v, want %v", tt.in, got, want)

This error log message includes some additional information – the function being called and the input argument that was passed to it.

To standardize the code for your test cases, you can use the github.com/stretchr/testify library. The following example illustrates how to compare the expected and the actual value and log the name of the function being tested, as well as the argument passed to it:

assert.Equal(t, want, got, fmt.Sprintf("YourFunc(%v)", tt.in))

The assert package of the github.com/stretchr/testify library prints both the expected and the actual value of the test result, as well as providing the details about the test case (the fmt.Sprintf result, in our case).

Avoiding the use of Fatal in your logs

The built-in Go testing library includes different functions for logging errors, including Error, Errorf, Fatal, and Fatalf. The last two functions print the logs and interrupt the execution of the tests. Consider this test code:

if err := Process(tt.in); err != nil {
  t.Fatalf("Process(%v): %v, want nil", tt.in, err)
}

The call to the Fatalf function interrupts the test execution. This is often undesirable because it means fewer tests get executed, leaving the developer with less information about the remaining failing test cases. Fixing one error at a time and re-running the whole suite is a slow feedback loop, so it is often better to continue the test execution whenever possible.

The previous example can be re-written as follows:

if err := Process(tt.in); err != nil {
  t.Errorf("Process(%v): %v, want nil", tt.in, err)
}

If you use this code in a loop, you can add continue after the Errorf call to proceed to the next test cases.
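Inside a table-driven loop, that looks as follows (a sketch assuming a tests slice like the ones shown earlier):

for _, tt := range tests {
  if err := Process(tt.in); err != nil {
    t.Errorf("Process(%v): %v, want nil", tt.in, err)
    continue // Proceed to the next test case instead of aborting.
  }
  // Further checks for this test case.
}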

Making comparisons using the cmp library

Imagine that you have a test that compares the Metadata structure that we defined in Chapter 2:

want := &model.Metadata{ID: "123", Title: "Some title"}
id := "123"
if got := GetMetadata(ctx, "123"); got != want {
  t.Errorf("GetMetadata(%v): %v, want %v", id, got, want)
}

The code here would not work for structure pointers – in our code, the want variable holds a pointer to the model.Metadata structure, so the != operator compares addresses and will report a mismatch even for structures with identical field values, as long as they are created separately.

A comparison of structure pointers can be made in Go using the reflect.DeepEqual function:

if got := GetMetadata(ctx, id); !reflect.DeepEqual(got, want) {
  t.Errorf("GetMetadata(%v): %v, want %v", id, *got, *want)
}

However, the output of the test may not be easy to read. Consider that you have lots of fields inside the Metadata structure – if only one field is different, you will need to scan through both structures to find the difference. There is a convenient library that simplifies comparison in tests called cmp (https://pkg.go.dev/github.com/google/go-cmp/cmp).

The cmp library allows you to compare arbitrary Go structures in the same way as with reflect.DeepEqual, but it also provides human-readable output. Here’s an example of using the function:

if diff := cmp.Diff(want, got); diff != "" {
  t.Errorf("GetMetadata(%v): mismatch (-want +got):\n%s", tt.in, diff)
}

If the structures don’t match, the diff variable will be a non-empty string, including the printable representation of the differences between them. Here’s an example of this kind of output:

GetMetadata(123) mismatch (-want +got):
  model.Metadata{
      ID:    "123",
-     Title: "Title",
+     Title: "The Title",
  }

Note how the cmp library highlighted the differences between both structures using the - and + prefixes. Now, it is easy to read the test output and notice the differences between the structures — this kind of optimization will save you lots of time during debugging.

This summarizes our short collection of Go testing best practices — you can find more tips by reading the documents mentioned in the Further reading section. Make sure to familiarize yourself with the official recommendations and the comments for the testing package to learn how to write tests in a conventional way and leverage all the features provided by the built-in Go testing library.

Summary

In this chapter, we covered multiple topics related to Go testing, including the common features of the Go testing library and the basics of writing unit and integration tests for your code. You have learned how to add tests to your microservices, optimize test execution in various cases, create test mocks, and maximize the quality of your tests by following the best testing practices. The knowledge you gained from reading this chapter should help you to increase the efficiency of your testing logic and increase the reliability of your microservices.

In the next chapter, we will move to a new topic that covers the main aspects of service reliability and describes techniques for making your services resilient to different types of failures.

Further reading
