Excessive test coverage

Another problem that could arise is excessive test coverage. Yes, you read that right: it is possible to write too many tests. Programmers, being the technically minded folks that we are, love metrics, and unit-test coverage is one such metric. While it is possible to achieve 100% test coverage, chasing that goal is a huge time sink, and the resulting test code can be rather terrible. Consider the following code:

func WriteAndClose(destination io.WriteCloser, contents string) error {
    defer destination.Close()

    _, err := destination.Write([]byte(contents))
    if err != nil {
        return err
    }

    return nil
}

To achieve 100% coverage, we would have to write a test in which the destination.Close() call fails. We can totally do this, but what would it achieve? What would we be testing? All it would give us is another test to write and maintain. If this line of code didn't work, would you even notice?
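For illustration, here is a minimal sketch of what such a test might look like. The failingCloser type, the test name, and the package name are all made up for this example, and it assumes WriteAndClose lives in the same package:

package plants // placeholder package name; assumes WriteAndClose is defined here

import (
    "errors"
    "testing"
)

// failingCloser is a hypothetical io.WriteCloser whose Close always fails.
type failingCloser struct{}

func (failingCloser) Write(p []byte) (int, error) { return len(p), nil }

func (failingCloser) Close() error { return errors.New("close failed") }

func TestWriteAndCloseWithFailingClose(t *testing.T) {
    // The deferred Close error is simply discarded, so all we can assert is
    // that the function still returns nil -- the failure is invisible to the caller.
    if err := WriteAndClose(failingCloser{}, "hello"); err != nil {
        t.Fatalf("unexpected error: %v", err)
    }
}

The test passes, the Close failure goes unnoticed, and all we have gained is another file to maintain. How about this example: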

func PrintAsJSON(destination io.Writer, plant Plant) error {
    bytes, err := json.Marshal(plant)
    if err != nil {
        return err
    }

    destination.Write(bytes)
    return nil
}

type Plant struct {
    Name string
}

Again, we could write a test that forces json.Marshal to fail, but what would we really be testing? In this case, we'd be testing that the encoding/json package in the Go standard library works as it's supposed to. External SDKs and packages should ship with their own tests so that we can simply trust that they do what they claim. If that is not the case, we can always write tests for them and contribute them back to the project. That way, the entire community benefits.
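If we care about this function at all, the test worth writing is the one that checks what actually reaches the destination. Here is a minimal sketch, again with a placeholder package and test name, assuming PrintAsJSON and Plant are defined in the same package:

package plants // placeholder package name; assumes PrintAsJSON and Plant are defined here

import (
    "bytes"
    "testing"
)

func TestPrintAsJSON(t *testing.T) {
    var destination bytes.Buffer

    if err := PrintAsJSON(&destination, Plant{Name: "fern"}); err != nil {
        t.Fatalf("unexpected error: %v", err)
    }

    // With a Plant that holds nothing but a string, json.Marshal cannot fail,
    // so the error branch in PrintAsJSON is effectively unreachable from a test.
    if got, want := destination.String(), `{"Name":"fern"}`; got != want {
        t.Fatalf("got %q, want %q", got, want)
    }
}

This verifies our own behavior, namely the bytes we hand to the destination, rather than re-verifying what encoding/json already guarantees.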
