List of Figures

Chapter 1. The goal of unit testing

Figure 1.1. The difference in growth dynamics between projects with and without tests. A project without tests has a head start but quickly slows down to the point that it’s hard to make any progress.

Figure 1.2. The difference in growth dynamics between projects with good and bad tests. A project with badly written tests exhibits the properties of a project with good tests at the beginning, but it eventually falls into the stagnation phase.

Figure 1.3. The code coverage (test coverage) metric is calculated as the ratio between the number of code lines executed by the test suite and the total number of lines in the production code base.

Figure 1.4. The branch coverage metric is calculated as the ratio between the number of code branches exercised by the test suite and the total number of branches in the production code base.

Figure 1.5. The method IsStringLong represented as a graph of possible code paths. The test covers only one of the two code paths, thus providing 50% branch coverage.

Figure 1.6. Hidden code paths of external libraries. Coverage metrics have no way to see how many of them there are and how many of them your tests exercise.

Chapter 2. What is a unit test?

Figure 2.1. Replacing the dependencies of the system under test with test doubles allows you to focus on verifying the system under test exclusively, as well as split the otherwise large interconnected object graph.

Figure 2.2. Isolating the class under test from its dependencies helps establish a simple test suite structure: one class with tests for each class in the production code.

Figure 2.3. Isolating unit tests from each other entails isolating the class under test from shared dependencies only. Private dependencies can be kept intact.

Figure 2.4. The hierarchy of dependencies. The classical school advocates for replacing shared dependencies with test doubles. The London school advocates for the replacement of private dependencies as well, as long as they are mutable.

Figure 2.5. The relation between shared and out-of-process dependencies. An example of a dependency that is shared but not out-of-process is a singleton (an instance that is reused by all tests) or a static field in a class. A database is shared and out-of-process—it resides outside the main process and is mutable. A read-only API is out-of-process but not shared, since tests can’t modify it and thus can’t affect each other’s execution flow.

Figure 2.6. End-to-end tests normally include all or almost all out-of-process dependencies in the scope. Integration tests check only one or two such dependencies—those that are easier to set up automatically, such as the database or the file system.

Chapter 3. The anatomy of a unit test

Figure 3.1. Multiple arrange, act, and assert sections are a hint that the test verifies too many things at once. Such a test needs to be split into several tests to fix the problem.

Figure 3.2. A typical application exhibits multiple behaviors. The greater the complexity of the behavior, the more facts are required to fully describe it. Each fact is represented by a test. Similar facts can be grouped into a single test method using parameterized tests.

Chapter 4. The four pillars of a good unit test

Figure 4.1. A test that couples to the SUT’s algorithm. Such a test expects to see one particular implementation (the specific steps the SUT must take to deliver the result) and therefore is brittle. Any refactoring of the SUT’s implementation would lead to a test failure.

Figure 4.2. The test on the left couples to the SUT’s observable behavior as opposed to implementation details. Such a test is resistant to refactoring—it will trigger few, if any, false positives.

Figure 4.3. The relationship between protection against regressions and resistance to refactoring. Protection against regressions guards against false negatives (type II errors). Resistance to refactoring minimizes the number of false positives (type I errors).

Figure 4.4. A test is accurate insofar as it generates a strong signal (is capable of finding bugs) with as little noise (false alarms) as possible.

Figure 4.5. False positives (false alarms) don’t have as much of a negative effect in the beginning. But they become increasingly important as the project grows—as important as false negatives (unnoticed bugs).

Figure 4.6. End-to-end tests provide great protection against both regression errors and false positives, but they fail at the metric of fast feedback.

Figure 4.7. Trivial tests have good resistance to refactoring, and they provide fast feedback, but such tests don’t protect you from regressions.

Figure 4.8. Brittle tests run fast and they provide good protection against regressions, but they have little resistance to refactoring.

Figure 4.9. It’s impossible to create an ideal test that would have a perfect score in all three attributes.

Figure 4.10. The best tests exhibit maximum maintainability and resistance to refactoring; always try to max out these two attributes. The trade-off comes down to the choice between protection against regressions and fast feedback.

Figure 4.11. The Test Pyramid advocates for a certain ratio of unit, integration, and end-to-end tests.

Figure 4.12. Different types of tests in the pyramid make different choices between fast feedback and protection against regressions. End-to-end tests favor protection against regressions, unit tests emphasize fast feedback, and integration tests lie in the middle.

Chapter 5. Mocks and test fragility

Figure 5.1. All variations of test doubles can be categorized into two types: mocks and stubs.

Figure 5.2. Sending an email is an outcoming interaction: an interaction that results in a side effect in the SMTP server. A test double emulating such an interaction is a mock. Retrieving data from the database is an incoming interaction; it doesn’t result in a side effect. The corresponding test double is a stub.

Figure 5.3. In the command query separation (CQS) principle, commands correspond to mocks, while queries are consistent with stubs.

Figure 5.4. In a well-designed API, the observable behavior coincides with the public API, while all implementation details are hidden behind the private API.

Figure 5.5. A system leaks implementation details when its public API extends beyond the observable behavior.

Figure 5.6. The API of User is not well-designed: it exposes the NormalizeName method, which is not part of the observable behavior.

Figure 5.7. User with a well-designed API. Only the observable behavior is public; the implementation details are now private.

Figure 5.8. A typical application consists of a domain layer and an application services layer. The domain layer contains the application’s business logic; application services tie that logic to business use cases.

Figure 5.9. A hexagonal architecture is a set of interacting applications—hexagons.

Figure 5.10. Tests working with different layers have a fractal nature: they verify the same behavior at different levels. A test of an application service checks to see how the overall business use case is executed. A test working with a domain class verifies an intermediate subgoal on the way to use-case completion.

Figure 5.11. There are two types of communications: intra-system (between classes inside the application) and inter-system (between applications).

Figure 5.12. Inter-system communications form the observable behavior of your application as a whole. Intra-system communications are implementation details.

Figure 5.13. The example in listing 5.9 represented using the hexagonal architecture. The communications between the hexagons are inter-system communications. The communication inside the hexagon is intra-system.

Figure 5.14. Communications with an out-of-process dependency that can’t be observed externally are implementation details. They don’t have to stay in place after refactoring and therefore shouldn’t be verified with mocks.

Chapter 6. Styles of unit testing

Figure 6.1. In output-based testing, tests verify the output the system generates. This style of testing assumes there are no side effects and the only result of the SUT’s work is the value it returns to the caller.

Figure 6.2. PriceEngine represented using input-output notation. Its CalculateDiscount() method accepts an array of products and calculates a discount.

Figure 6.3. In state-based testing, tests verify the final state of the system after an operation is complete. The dashed circles represent that final state.

Figure 6.4. In communication-based testing, tests substitute the SUT’s collaborators with mocks and verify that the SUT calls those collaborators correctly.

Figure 6.5. CalculateDiscount() has one input (a Product array) and one output (the decimal discount). Both the input and the output are explicitly expressed in the method’s signature, which makes CalculateDiscount() a mathematical function.

Figure 6.6. A typical example of a function in mathematics is f(x) = x + 1. For each input number x in set X, the function finds a corresponding number y in set Y.

Figure 6.7. The CalculateDiscount() method represented using the same notation as the function f(x) = x + 1. For each input array of products, the method finds a corresponding discount as an output.

Figure 6.8. Method AddComment (shown as f) has a text input and a Comment output, which are both expressed in the method signature. The side effect is an additional hidden output.

Figure 6.9. In functional architecture, the functional core is implemented using mathematical functions and makes all decisions in the application. The mutable shell provides the functional core with input data and interprets its decisions by applying side effects to out-of-process dependencies such as a database.

Figure 6.10. Hexagonal architecture is a set of interacting applications—hexagons. Your application consists of a domain layer and an application services layer, which correspond to a functional core and a mutable shell in functional architecture.

Figure 6.11. The audit system stores information about visitors in text files with a specific format. When the maximum number of entries per file is reached, the system creates a new file.

Figure 6.12. Tests covering the initial version of the audit system would have to work directly with the filesystem.

Figure 6.13. Tests can mock the filesystem and capture the writes the audit system makes to the files.

Figure 6.14. Persister and AuditManager form the functional architecture. Persister gathers files and their contents from the working directory, feeds them to AuditManager, and then converts the return value into changes in the filesystem.

Figure 6.15. ApplicationService glues the functional core (AuditManager) and the mutable shell (Persister) together and provides an entry point for external clients. In the hexagonal architecture taxonomy, ApplicationService and Persister are part of the application services layer, while AuditManager belongs to the domain model.

Figure 6.16. A dependency on the database introduces a hidden input to AuditManager. Such a class is no longer purely functional, and the whole application no longer follows the functional architecture.

Chapter 7. Refactoring toward valuable unit tests

Figure 7.1. The four types of code, categorized by code complexity and domain significance (the vertical axis) and the number of collaborators (the horizontal axis).

Figure 7.2. Refactor overcomplicated code by splitting it into algorithms and controllers. Ideally, you should have no code in the top-right quadrant.

Figure 7.3. It’s hard to test code that couples to a difficult dependency. Tests have to deal with that dependency, too, which increases their maintenance cost.

Figure 7.4. The Humble Object pattern extracts the logic out of the overcomplicated code, making that code so humble that it doesn’t need to be tested. The extracted logic is moved into another class, decoupled from the hard-to-test dependency.

Figure 7.5. The functional core in a functional architecture and the domain layer in a hexagonal architecture reside in the top-left quadrant: they have few collaborators and exhibit high complexity and domain significance. The functional core is closer to the vertical axis because it has no collaborators. The mutable shell (functional architecture) and the application services layer (hexagonal architecture) belong to the controllers’ quadrant.

Figure 7.6. Code depth versus code width is a useful metaphor to apply when you think of the separation between the business logic and orchestration responsibilities. Controllers orchestrate many dependencies (represented as arrows in the figure) but aren’t complex on their own (complexity is represented as block height). Domain classes are the opposite of that.

Figure 7.7. The initial implementation of the User class scores highly on both dimensions and thus falls into the category of overcomplicated code.

Figure 7.8. Take 2 puts User in the domain model quadrant, close to the vertical axis. UserController almost crosses the boundary with the overcomplicated quadrant because it contains complex logic.

Figure 7.9. User has shifted to the right because it now has the Company collaborator. UserController firmly stands in the controllers quadrant; all its complexity has moved to the factories.

Figure 7.10. Hexagonal and functional architectures work best when all references to out-of-process dependencies can be pushed to the edges of business operations.

Figure 7.11. A hexagonal architecture doesn’t work as well when you need to refer to out-of-process dependencies in the middle of the business operation.

Figure 7.12. There’s no single solution that satisfies all three attributes: controller simplicity, domain model testability, and performance. You have to choose two out of the three.

Figure 7.13. A map that shows communications among components in the CRM and the relationship between these communications and observable behavior.

Chapter 8. Why integration testing?

Figure 8.1. Integration tests cover controllers, while unit tests cover the domain model and algorithms. Trivial and overcomplicated code shouldn’t be tested at all.

Figure 8.2. The Test Pyramid represents a trade-off that works best for most applications. Fast, cheap unit tests cover the majority of edge cases, while a smaller number of slow, more expensive integration tests ensure the correctness of the system as a whole.

Figure 8.3. The Test Pyramid of a simple project. Such a project's limited complexity requires fewer unit tests than in a normal pyramid.

Figure 8.4. Communications with managed dependencies are implementation details; use such dependencies as-is in integration tests. Communications with unmanaged dependencies are part of your system’s observable behavior. Such dependencies should be mocked out.

Figure 8.5. Treat the part of the database that is visible to external applications as an unmanaged dependency. Replace it with mocks in integration tests. Treat the rest of the database as a managed dependency. Verify its final state, not interactions with it.

Figure 8.6. The use case of changing the user’s email. The controller orchestrates the work between the database, the message bus, and the domain model.

Figure 8.7. End-to-end tests emulate the external client and therefore test a deployed version of the application with all out-of-process dependencies included in the testing scope. End-to-end tests shouldn’t check managed dependencies (such as the database) directly, only indirectly through the application.

Figure 8.8. Integration tests host the application within the same process. Unlike end-to-end tests, integration tests substitute unmanaged dependencies with mocks. The only out-of-process components for integration tests are managed dependencies.

Figure 8.9. Various application concerns are often addressed by separate layers of indirection. A typical feature takes up a small portion of each layer.

Figure 8.10. You can get away with just three layers: the domain layer (contains domain logic), the application services layer (provides an entry point for external clients and coordinates the work between domain classes and out-of-process dependencies), and the infrastructure layer (works with out-of-process dependencies; database repositories, ORM mappings, and SMTP gateways reside in this layer).

Figure 8.11. With an interface, you remove the circular dependency at compile time, but not at runtime. The cognitive load required to understand the code doesn’t become any smaller.

Figure 8.12. Structured logging decouples log data from renderings of that data. You can set up multiple renderings, such as a flat log file, JSON, or CSV file.

Chapter 9. Mocking best practices

Figure 9.1. IBus resides at the system’s edge; IMessageBus is only an intermediate link in the chain of types between the controller and the message bus. Mocking IBus instead of IMessageBus achieves the best protection against regressions.

Chapter 10. Testing the database

Figure 10.1. Having a dedicated instance as a model database is an anti-pattern. The database schema is best stored in a source control system.

Figure 10.2. The migration-based approach to database delivery emphasizes the use of explicit migrations that transition the database from one version to another.

Figure 10.3. The state-based approach makes the state explicit and migrations implicit; the migration-based approach makes the opposite choice.

Figure 10.4. Wrapping each database call into a separate transaction introduces a risk of inconsistencies due to hardware or software failures. For example, the application can update the number of employees in the company but not the employees themselves.

Figure 10.5. The transaction mediates interactions between the controller and the database and thus enables atomic data modification.

Figure 10.6. A unit of work executes all updates at the end of the business operation. The updates are still wrapped in a database transaction, but that transaction lives for a shorter period of time, thus reducing data congestion.

Figure 10.7. There’s no need for a domain model in reads. And because the cost of a mistake in reads is lower than it is in writes, there’s also not as much need for integration testing.

Figure 10.8. Repositories exhibit little complexity and communicate with the out-of-process dependency, thus falling into the controllers quadrant on the types-of-code diagram.
