Table of Contents

Copyright

Brief Table of Contents

Table of Contents

List of Figures

List of Tables

List of Listings

Early Praise for The Art of Unit Testing

Foreword

Preface

Acknowledgments

About this book

About the cover illustration

Part 1. Getting started

Chapter 1. The basics of unit testing

1.1. Unit testing—the classic definition

1.1.1. The importance of writing “good” unit tests

1.1.2. We’ve all written unit tests (sort of)

1.2. Properties of a good unit test

1.3. Integration tests

1.3.1. Drawbacks of integration tests compared to automated unit tests

1.4. Good unit test—a definition

1.5. A simple unit test example

1.6. Test-driven development

1.7. Summary

Chapter 2. A first unit test

2.1. Frameworks for unit testing

2.1.1. What unit-testing frameworks offer

2.1.2. The xUnit frameworks

2.2. Introducing the LogAn project

2.3. First steps with NUnit

2.3.1. Installing NUnit

2.3.2. Loading up the solution

2.3.3. Using the NUnit attributes in your code

2.4. Writing our first test

2.4.1. The Assert class

2.4.2. Running our first test with NUnit

2.4.3. Fixing our code and passing the test

2.4.4. From red to green

2.5. More NUnit attributes

2.5.1. Setup and teardown

2.5.2. Checking for expected exceptions

2.5.3. Ignoring tests

2.5.4. Setting test categories

2.6. Indirect testing of state

2.7. Summary

Part 2. Core techniques

Chapter 3. Using stubs to break dependencies

3.1. Introducing stubs

3.2. Identifying a filesystem dependency in LogAn

3.3. Determining how to easily test LogAnalyzer

3.4. Refactoring our design to be more testable

3.4.1. Extract an interface to allow replacing underlying implementation

3.4.2. Inject stub implementation into a class under test

3.4.3. Receive an interface at the constructor level (constructor injection)

3.4.4. Receive an interface as a property get or set

3.4.5. Getting a stub just before a method call

3.5. Variations on refactoring techniques

3.5.1. Using Extract and Override to create stub results

3.6. Overcoming the encapsulation problem

3.6.1. Using internal and [InternalsVisibleTo]

3.6.2. Using the [Conditional] attribute

3.6.3. Using #if and #endif with conditional compilation

3.7. Summary

Chapter 4. Interaction testing using mock objects

4.1. State-based versus interaction testing

4.2. The difference between mocks and stubs

4.3. A simple manual mock example

4.4. Using a mock and a stub together

4.5. One mock per test

4.6. Stub chains: stubs that produce mocks or other stubs

4.7. The problems with handwritten mocks and stubs

4.8. Summary

Chapter 5. Isolation (mock object) frameworks

5.1. Why use isolation frameworks?

5.2. Dynamically creating a fake object

5.2.1. Introducing Rhino Mocks into your tests

5.2.2. Replacing a handwritten mock object with a dynamic one

5.3. Strict versus nonstrict mock objects

5.3.1. Strict mocks

5.3.2. Nonstrict mocks

5.4. Returning values from fake objects

5.5. Creating smart stubs with an isolation framework

5.5.1. Creating a stub in Rhino Mocks

5.5.2. Combining dynamic stubs and mocks

5.6. Parameter constraints for mocks and stubs

5.6.1. Checking parameters with string constraints

5.6.2. Checking parameter object properties with constraints

5.6.3. Executing callbacks for parameter verification

5.7. Testing for event-related activities

5.7.1. Testing that an event has been subscribed to

5.7.2. Triggering events from mocks and stubs

5.7.3. Testing whether an event was triggered

5.8. Arrange-act-assert syntax for isolation

5.9. Current isolation frameworks for .NET

5.9.1. NUnit.Mocks

5.9.2. NMock

5.9.3. NMock2

5.9.4. Typemock Isolator

5.9.5. Rhino Mocks

5.9.6. Moq

5.10. Advantages of isolation frameworks

5.11. Traps to avoid when using isolation frameworks

5.11.1. Unreadable test code

5.11.2. Verifying the wrong things

5.11.3. Having more than one mock per test

5.11.4. Overspecifying the tests

5.12. Summary

Part 3. The test code

Chapter 6. Test hierarchies and organization

6.1. Having automated builds run automated tests

6.1.1. Anatomy of an automated build

6.1.2. Triggering builds and continuous integration

6.1.3. Automated build types

6.2. Mapping out tests based on speed and type

6.2.1. The human factor of separating unit from integration tests

6.2.2. The safe green zone

6.3. Ensuring tests are part of source control

6.4. Mapping test classes to code under test

6.4.1. Mapping tests to projects

6.4.2. Mapping tests to classes

6.4.3. Mapping tests to specific methods

6.5. Building a test API for your application

6.5.1. Using test class inheritance patterns

6.5.2. Creating test utility classes and methods

6.5.3. Making your API known to developers

6.6. Summary

Chapter 7. The pillars of good tests

7.1. Writing trustworthy tests

7.1.1. Deciding when to remove or change tests

7.1.2. Avoiding logic in tests

7.1.3. Testing only one thing

7.1.4. Making tests easy to run

7.1.5. Assuring code coverage

7.2. Writing maintainable tests

7.2.1. Testing private or protected methods

7.2.2. Removing duplication

7.2.3. Using setup methods in a maintainable manner

7.2.4. Enforcing test isolation

7.2.5. Avoiding multiple asserts

7.2.6. Avoiding testing multiple aspects of the same object

7.2.7. Avoiding overspecification in tests

7.3. Writing readable tests

7.3.1. Naming unit tests

7.3.2. Naming variables

7.3.3. Asserting yourself with meaning

7.3.4. Separating asserts from actions

7.3.5. Setting up and tearing down

7.4. Summary

Part 4. Design and process

Chapter 8. Integrating unit testing into the organization

8.1. Steps to becoming an agent of change

8.1.1. Be prepared for the tough questions

8.1.2. Convince insiders: champions and blockers

8.1.3. Identify possible entry points

8.2. Ways to succeed

8.2.1. Guerrilla implementation (bottom-up)

8.2.2. Convincing management (top-down)

8.2.3. Getting an outside champion

8.2.4. Making progress visible

8.2.5. Aiming for specific goals

8.2.6. Realizing that there will be hurdles

8.3. Ways to fail

8.3.1. Lack of a driving force

8.3.2. Lack of political support

8.3.3. Bad implementations and first impressions

8.3.4. Lack of team support

8.4. Tough questions and answers

8.4.1. How much time will this add to the current process?

8.4.2. Will my QA job be at risk because of this?

8.4.3. How do we know this is actually working?

8.4.4. Is there proof that unit testing helps?

8.4.5. Why is the QA department still finding bugs?

8.4.6. We have lots of code without tests: where do we start?

8.4.7. We work in several languages: is unit testing feasible?

8.4.8. What if we develop a combination of software and hardware?

8.4.9. How can we know we don’t have bugs in our tests?

8.4.10. My debugger shows that my code works: why do I need tests?

8.4.11. Must we do TDD-style coding?

8.5. Summary

Chapter 9. Working with legacy code

9.1. Where do you start adding tests?

9.2. Choosing a selection strategy

9.2.1. Pros and cons of the easy-first strategy

9.2.2. Pros and cons of the hard-first strategy

9.3. Writing integration tests before refactoring

9.4. Important tools for legacy code unit testing

9.4.1. Isolate dependencies easily with Typemock Isolator

9.4.2. Find testability problems with Depender

9.4.3. Use JMockit for Java legacy code

9.4.4. Use Vise while refactoring your Java code

9.4.5. Use FitNesse for acceptance tests before you refactor

9.4.6. Read Michael Feathers’ book on legacy code

9.4.7. Use NDepend to investigate your production code

9.4.8. Use ReSharper to navigate and refactor production code

9.4.9. Detect duplicate code (and bugs) with Simian

9.4.10. Detect threading issues with Typemock Racer

9.5. Summary

Appendix A. Design and testability

A.1. Why should I care about testability in my design?

A.2. Design goals for testability

A.2.1. Make methods virtual by default

A.2.2. Use interface-based designs

A.2.3. Make classes nonsealed by default

A.2.4. Avoid instantiating concrete classes inside methods with logic

A.2.5. Avoid direct calls to static methods

A.2.6. Avoid constructors and static constructors that do logic

A.2.7. Separate singletons and singleton holders

A.3. Pros and cons of designing for testability

A.3.1. Amount of work

A.3.2. Complexity

A.3.3. Exposing sensitive IP

A.3.4. Sometimes you can’t

A.4. Alternatives to designing for testability

A.5. Summary

Appendix B. Extra tools and frameworks

B.1. Isolation frameworks

B.1.1. Moq

B.1.2. Rhino Mocks

B.1.3. Typemock Isolator

B.1.4. NMock

B.1.5. NUnit.Mocks

B.2. Test frameworks

B.2.1. Microsoft’s Unit Testing Framework

B.2.2. NUnit

B.2.3. MbUnit

B.2.4. Gallio

B.2.5. xUnit

B.2.6. Pex

B.3. IoC containers

B.3.1. StructureMap

B.3.2. Microsoft Unity

B.3.3. Castle Windsor

B.3.4. Autofac

B.3.5. Common Service Locator Library

B.3.6. Spring.NET

B.3.7. Microsoft Managed Extensibility Framework

B.3.8. Ninject

B.4. Database testing

B.4.1. Use integration tests for your data layer

B.4.2. Use rollback attributes

B.4.3. Use TransactionScope to roll back

B.5. Web testing

B.5.1. Ivonna

B.5.2. Team System Web Test

B.5.3. NUnitAsp

B.5.4. Watir

B.5.5. WatiN

B.5.6. Selenium

B.6. UI testing

B.6.1. NUnitForms

B.6.2. Project White

B.6.3. Team System UI Tests

B.7. Thread-related testing

B.7.1. Typemock Racer

B.7.2. Microsoft CHESS

B.7.3. Osherove.ThreadTester

B.8. Acceptance testing

B.8.1. FitNesse

B.8.2. StoryTeller

Appendix: Test Review Guidelines

Reviewing General Tests

Reviewing Mocks and Stubs

Index
