Table of Contents

Copyright

Brief Table of Contents

Table of Contents

Preface

Acknowledgments

About this book

About the author

About the cover illustration

1. The bigger picture

Chapter 1. The goal of unit testing

1.1. The current state of unit testing

1.2. The goal of unit testing

1.2.1. What makes a good or bad test?

1.3. Using coverage metrics to measure test suite quality

1.3.1. Understanding the code coverage metric

1.3.2. Understanding the branch coverage metric

1.3.3. Problems with coverage metrics

1.3.4. Aiming at a particular coverage number

1.4. What makes a successful test suite?

1.4.1. It’s integrated into the development cycle

1.4.2. It targets only the most important parts of your code base

1.4.3. It provides maximum value with minimum maintenance costs

1.5. What you will learn in this book

Summary

Chapter 2. What is a unit test?

2.1. The definition of “unit test”

2.1.1. The isolation issue: The London take

2.1.2. The isolation issue: The classical take

2.2. The classical and London schools of unit testing

2.2.1. How the classical and London schools handle dependencies

2.3. Contrasting the classical and London schools of unit testing

2.3.1. Unit testing one class at a time

2.3.2. Unit testing a large graph of interconnected classes

2.3.3. Revealing the precise bug location

2.3.4. Other differences between the classical and London schools

2.4. Integration tests in the two schools

2.4.1. End-to-end tests are a subset of integration tests

Summary

Chapter 3. The anatomy of a unit test

3.1. How to structure a unit test

3.1.1. Using the AAA pattern

3.1.2. Avoid multiple arrange, act, and assert sections

3.1.3. Avoid if statements in tests

3.1.4. How large should each section be?

3.1.5. How many assertions should the assert section hold?

3.1.6. What about the teardown phase?

3.1.7. Differentiating the system under test

3.1.8. Dropping the arrange, act, and assert comments from tests

3.2. Exploring the xUnit testing framework

3.3. Reusing test fixtures between tests

3.3.1. High coupling between tests is an anti-pattern

3.3.2. The use of constructors in tests diminishes test readability

3.3.3. A better way to reuse test fixtures

3.4. Naming a unit test

3.4.1. Unit test naming guidelines

3.4.2. Example: Renaming a test toward the guidelines

3.5. Refactoring to parameterized tests

3.5.1. Generating data for parameterized tests

3.6. Using an assertion library to further improve test readability

Summary

2. Making your tests work for you

Chapter 4. The four pillars of a good unit test

4.1. Diving into the four pillars of a good unit test

4.1.1. The first pillar: Protection against regressions

4.1.2. The second pillar: Resistance to refactoring

4.1.3. What causes false positives?

4.1.4. Aim at the end result instead of implementation details

4.2. The intrinsic connection between the first two attributes

4.2.1. Maximizing test accuracy

4.2.2. The importance of false positives and false negatives: The dynamics

4.3. The third and fourth pillars: Fast feedback and maintainability

4.4. In search of an ideal test

4.4.1. Is it possible to create an ideal test?

4.4.2. Extreme case #1: End-to-end tests

4.4.3. Extreme case #2: Trivial tests

4.4.4. Extreme case #3: Brittle tests

4.4.5. In search of an ideal test: The results

4.5. Exploring well-known test automation concepts

4.5.1. Breaking down the Test Pyramid

4.5.2. Choosing between black-box and white-box testing

Summary

Chapter 5. Mocks and test fragility

5.1. Differentiating mocks from stubs

5.1.1. The types of test doubles

5.1.2. Mock (the tool) vs. mock (the test double)

5.1.3. Don’t assert interactions with stubs

5.1.4. Using mocks and stubs together

5.1.5. How mocks and stubs relate to commands and queries

5.2. Observable behavior vs. implementation details

5.2.1. Observable behavior is not the same as a public API

5.2.2. Leaking implementation details: An example with an operation

5.2.3. Well-designed API and encapsulation

5.2.4. Leaking implementation details: An example with state

5.3. The relationship between mocks and test fragility

5.3.1. Defining hexagonal architecture

5.3.2. Intra-system vs. inter-system communications

5.3.3. Intra-system vs. inter-system communications: An example

5.4. The classical vs. London schools of unit testing, revisited

5.4.1. Not all out-of-process dependencies should be mocked out

5.4.2. Using mocks to verify behavior

Summary

Chapter 6. Styles of unit testing

6.1. The three styles of unit testing

6.1.1. Defining the output-based style

6.1.2. Defining the state-based style

6.1.3. Defining the communication-based style

6.2. Comparing the three styles of unit testing

6.2.1. Comparing the styles using the metrics of protection against regressions and feedback speed

6.2.2. Comparing the styles using the metric of resistance to refactoring

6.2.3. Comparing the styles using the metric of maintainability

6.2.4. Comparing the styles: The results

6.3. Understanding functional architecture

6.3.1. What is functional programming?

6.3.2. What is functional architecture?

6.3.3. Comparing functional and hexagonal architectures

6.4. Transitioning to functional architecture and output-based testing

6.4.1. Introducing an audit system

6.4.2. Using mocks to decouple tests from the filesystem

6.4.3. Refactoring toward functional architecture

6.4.4. Looking forward to further developments

6.5. Understanding the drawbacks of functional architecture

6.5.1. Applicability of functional architecture

6.5.2. Performance drawbacks

6.5.3. Increase in the code base size

Summary

Chapter 7. Refactoring toward valuable unit tests

7.1. Identifying the code to refactor

7.1.1. The four types of code

7.1.2. Using the Humble Object pattern to split overcomplicated code

7.2. Refactoring toward valuable unit tests

7.2.1. Introducing a customer management system

7.2.2. Take 1: Making implicit dependencies explicit

7.2.3. Take 2: Introducing an application services layer

7.2.4. Take 3: Removing complexity from the application service

7.2.5. Take 4: Introducing a new Company class

7.3. Analysis of optimal unit test coverage

7.3.1. Testing the domain layer and utility code

7.3.2. Testing the code from the other three quadrants

7.3.3. Should you test preconditions?

7.4. Handling conditional logic in controllers

7.4.1. Using the CanExecute/Execute pattern

7.4.2. Using domain events to track changes in the domain model

7.5. Conclusion

Summary

3. Integration testing

Chapter 8. Why integration testing?

8.1. What is an integration test?

8.1.1. The role of integration tests

8.1.2. The Test Pyramid revisited

8.1.3. Integration testing vs. failing fast

8.2. Which out-of-process dependencies to test directly

8.2.1. The two types of out-of-process dependencies

8.2.2. Working with both managed and unmanaged dependencies

8.2.3. What if you can’t use a real database in integration tests?

8.3. Integration testing: An example

8.3.1. What scenarios to test?

8.3.2. Categorizing the database and the message bus

8.3.3. What about end-to-end testing?

8.3.4. Integration testing: The first try

8.4. Using interfaces to abstract dependencies

8.4.1. Interfaces and loose coupling

8.4.2. Why use interfaces for out-of-process dependencies?

8.4.3. Using interfaces for in-process dependencies

8.5. Integration testing best practices

8.5.1. Making domain model boundaries explicit

8.5.2. Reducing the number of layers

8.5.3. Eliminating circular dependencies

8.5.4. Using multiple act sections in a test

8.6. How to test logging functionality

8.6.1. Should you test logging?

8.6.2. How should you test logging?

8.6.3. How much logging is enough?

8.6.4. How do you pass around logger instances?

8.7. Conclusion

Summary

Chapter 9. Mocking best practices

9.1. Maximizing mocks’ value

9.1.1. Verifying interactions at the system edges

9.1.2. Replacing mocks with spies

9.1.3. What about IDomainLogger?

9.2. Mocking best practices

9.2.1. Mocks are for integration tests only

9.2.2. Not just one mock per test

9.2.3. Verifying the number of calls

9.2.4. Only mock types that you own

Summary

Chapter 10. Testing the database

10.1. Prerequisites for testing the database

10.1.1. Keeping the database in the source control system

10.1.2. Reference data is part of the database schema

10.1.3. Separate instance for every developer

10.1.4. State-based vs. migration-based database delivery

10.2. Database transaction management

10.2.1. Managing database transactions in production code

10.2.2. Managing database transactions in integration tests

10.3. Test data life cycle

10.3.1. Parallel vs. sequential test execution

10.3.2. Clearing data between test runs

10.3.3. Avoid in-memory databases

10.4. Reusing code in test sections

10.4.1. Reusing code in arrange sections

10.4.2. Reusing code in act sections

10.4.3. Reusing code in assert sections

10.4.4. Does the test create too many database transactions?

10.5. Common database testing questions

10.5.1. Should you test reads?

10.5.2. Should you test repositories?

10.6. Conclusion

Summary

4. Unit testing anti-patterns

Chapter 11. Unit testing anti-patterns

11.1. Unit testing private methods

11.1.1. Private methods and test fragility

11.1.2. Private methods and insufficient coverage

11.1.3. When testing private methods is acceptable

11.2. Exposing private state

11.3. Leaking domain knowledge to tests

11.4. Code pollution

11.5. Mocking concrete classes

11.6. Working with time

11.6.1. Time as an ambient context

11.6.2. Time as an explicit dependency

11.7. Conclusion

Summary

Chapter Map

Index

List of Figures

List of Tables

List of Listings
