1 The basics of Dependency Injection: What, why, and how

In this chapter

  • Dispelling common myths about Dependency Injection
  • Understanding the purpose of Dependency Injection
  • Evaluating the benefits of Dependency Injection
  • Knowing when to apply Dependency Injection

You may have heard that making a sauce béarnaise is difficult. Even among people who regularly cook, many have never attempted to make one. This is a shame, because the sauce is delicious. (It’s traditionally paired with steak, but it’s also an excellent accompaniment to white asparagus, poached eggs, and other dishes.) Some resort to substitutes like ready-made sauces or instant mixes, but these aren’t nearly as satisfying as the real thing.

A sauce béarnaise is an emulsified sauce made from egg yolk and butter that’s flavored with tarragon, chervil, shallots, and vinegar. It contains no water. The biggest challenge to making it is that its preparation can fail. The sauce can curdle or separate, and, if either happens, you can’t resurrect it. It takes about 45 minutes to prepare, so a failed attempt means that you may not have time for a second try. On the other hand, any chef can prepare a sauce béarnaise. It’s part of their training and, as they’ll tell you, it’s not difficult.

You don’t have to be a professional cook to make sauce béarnaise. Anyone learning to make it will fail at least once, but after you get the hang of it, you’ll succeed every time. We think Dependency Injection (DI) is like sauce béarnaise. It’s assumed to be difficult, and, if you try to use it and fail, it’s likely there won’t be time for a second attempt.

Despite the fear, uncertainty, and doubt (FUD) surrounding DI, it’s as easy to learn as making a sauce béarnaise. You may make mistakes while you learn, but once you’ve mastered the technique, you’ll never again fail to apply it successfully.

Stack Overflow, the software development Q&A website, features an answer to the question, “How to explain Dependency Injection to a 5-year-old?” The most highly rated answer, by John Munsch, provides a surprisingly accurate analogy targeted at the (imaginary) five-year-old inquisitor:1

When you go and get things out of the refrigerator for yourself, you can cause problems. You might leave the door open, you might get something Mommy or Daddy doesn’t want you to have. You might even be looking for something we don’t even have or which has expired.

What you should be doing is stating a need, “I need something to drink with lunch,” and then we will make sure you have something when you sit down to eat.

What this means in terms of object-oriented software development is this: collaborating classes (the five-year-old) should rely on infrastructure (the parents) to provide necessary services.

This chapter is fairly linear in structure. First, we introduce DI, including its purpose and benefits. Although we include examples, overall, this chapter has less code than any other chapter in the book. Before we introduce DI, we discuss the basic purpose of DI — maintainability. This is important because it’s easy to misunderstand DI if you aren’t properly prepared. Next, after an example (Hello DI!), we discuss benefits and scope, laying out a road map for the book. When you’re done with this chapter, you should be prepared for the more advanced concepts in the rest of the book.

To most developers, DI may seem like a rather backward way of creating source code, and, like sauce béarnaise, there’s much FUD involved. To learn about DI, you must first understand its purpose.

1.1 Writing maintainable code

What purpose does DI serve? DI isn’t a goal in itself; rather, it’s a means to an end. Ultimately, the purpose of most programming techniques is to deliver working software as efficiently as possible. One aspect of that is to write maintainable code.

Unless you only write prototypes, or applications that never make it past their first release, you find yourself maintaining and extending existing code bases. To work effectively with such code bases, in general, the more maintainable they are, the better.

An excellent way to make code more maintainable is through loose coupling. As far back as 1994, when the Gang of Four wrote Design Patterns, this was already common knowledge:2 

Program to an interface, not an implementation.

This important piece of advice isn’t the conclusion, but, rather, the premise of Design Patterns. Loose coupling makes code extensible, and extensibility makes it maintainable. DI is nothing more than a technique that enables loose coupling. Moreover, there are many misconceptions about DI, and sometimes they get in the way of proper understanding. Before you can learn, you must unlearn what (you think) you already know.

1.1.1 Common myths about DI

You may never have come across or heard of DI before, and that’s great. Skip this section and go straight to section 1.1.2. But, if you’re reading this book, it’s likely you’ve at least come across it in conversation, in a code base you inherited, or in blog posts. You may also have noticed that it attracts a fair number of strong opinions. In this section, we’re going to look at four of the most common misconceptions about DI that have appeared over the years and why they aren’t true. These myths include the following:

  • DI is only relevant for late binding.
  • DI is only relevant for unit testing.
  • DI is a sort of Abstract Factory on steroids.
  • DI requires a DI Container.

Although none of these myths are true, they’re prevalent nonetheless. We need to dispel them before you can start to learn about DI.

Late binding

In this context, late binding refers to the ability to replace parts of an application without recompiling the code. An application that enables third-party add-ins (such as Visual Studio) is one example. Another example is standard software that supports different runtime environments.

Suppose you have an application that runs on more than one database engine (for example, one that supports both Oracle and SQL Server). To support this feature, the rest of the application talks to the database through an interface. The code base provides different implementations of this interface to access Oracle and SQL Server, respectively. In this case, you can use a configuration option to control which implementation should be used for a given installation.

It’s a common misconception that DI is only relevant for this sort of scenario. That’s understandable, because DI enables this scenario. But the fallacy is to think that the relationship is symmetric. The fact that DI enables late binding doesn’t mean that it’s only relevant in late-binding scenarios. As figure 1.1 illustrates, late binding is only one of the many aspects of DI.

Figure 1.1 Late binding is enabled by DI, but to assume that it’s only applicable in late-binding scenarios is to adopt a narrow view of a much broader vista.

If you thought that DI was only relevant for late-binding scenarios, this is something you need to unlearn. DI does much more than enable late binding.

Unit testing

Some people think that DI is only relevant for supporting unit testing. This isn’t true, either, although DI is certainly an important part of support for unit testing. To tell you the truth, our original introduction to DI came from struggling with certain aspects of Test-Driven Development (TDD). During that time, we discovered DI and learned that other people had used it to support some of the same scenarios we were addressing.

Even if you don’t write unit tests (if you don’t, you should start now), DI is still relevant because of all the other benefits it offers. Claiming that DI is only relevant for supporting unit testing is like claiming that it’s only relevant for supporting late binding. Figure 1.2 shows that although this is a different view, it’s a view as narrow as figure 1.1. In this book, we’ll do our best to show you the whole picture.

Figure 1.2 Perhaps you’ve been assuming that unit testing is the sole purpose of DI. Although that assumption is a different view than the late-binding assumption, it, too, is a narrow view of a much broader vista.

If you thought that DI was only relevant for unit testing, unlearn this assumption. DI does much more than enable unit testing.

An Abstract Factory on steroids

Perhaps the most dangerous fallacy is that DI involves some sort of general-purpose Abstract Factory that you can use to create instances of the Dependencies needed in your applications.

In the introduction to this chapter, we wrote that “collaborating classes should rely on infrastructure to provide necessary services.” What were your initial thoughts about this sentence? Did you think about infrastructure as some sort of service you could query to get the Dependencies you need? If so, you aren’t alone. Many developers and architects think about DI as a service that can be used to locate other services. This is called a Service Locator, but it’s the exact opposite of DI.

A Service Locator is often called an Abstract Factory on steroids because, compared to a normal Abstract Factory, the list of resolvable types is unspecified and possibly endless. It typically has one method allowing the creation of all sorts of types, much like in the following:

public interface IServiceLocator
{
    object GetService(Type serviceType);
}
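
To see why this is the opposite of DI, consider a hedged sketch of a consumer built around such a Service Locator. The SomeClient and ISomeService names are hypothetical, invented for illustration; the point is that the class’s real Dependency is hidden inside a method body instead of being stated up front:

public interface ISomeService
{
    void Execute();
}

public class SomeClient
{
    private readonly IServiceLocator locator;

    public SomeClient(IServiceLocator locator)
    {
        this.locator = locator;
    }

    public void DoWork()
    {
        // The need for ISomeService is invisible to callers; they only see
        // a dependency on the locator itself.
        var service = (ISomeService)this.locator.GetService(typeof(ISomeService));
        service.Execute();
    }
}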

DI Containers

Closely associated with the previous misconception is the notion that DI requires a DI Container. If you held the previous, mistaken belief that DI involves a Service Locator, then it’s easy to conclude that a DI Container can take on the responsibility of the Service Locator. This might be the case, but it’s not at all how you should use a DI Container.

A DI Container is an optional library that makes it easier to compose classes when you wire up an application, but it’s in no way required. When you compose applications without a DI Container, it’s called Pure DI. It might take a little more work, but other than that, you don’t have to compromise on any DI principles.

We have yet to explain exactly what a DI Container is, and how and when you should use it. We’ll go into more detail on this at the end of chapter 3; part 4 is completely dedicated to it.

You may think that, although we’ve exposed four myths about DI, we have yet to make a compelling case against any of them. That’s true. In a sense, this book is one big argument against these common misconceptions, so we’ll certainly return to these topics later. For example, in chapter 5, section 5.2 discusses why Service Locator is an anti-pattern.

In our experience, unlearning is vital because people often try to retrofit what we tell them about DI and align it with what they think they already know. When this happens, it takes time before it finally dawns on them that some of their most basic assumptions are wrong. We want to spare you that experience. If you can, read this book as though you know nothing about DI.

1.1.2 Understanding the purpose of DI

DI isn’t an end goal — it’s a means to an end. DI enables loose coupling, and loose coupling makes code more maintainable. That’s quite a claim, and although we could refer you to well-established authorities like the Gang of Four for details, we find it only fair to explain why this is true.

To get this message across, the next section compares software design and several software design patterns with electrical wiring. We’ve found this to be a powerful analogy. We even use it to explain software design to non-technical people.

We use four specific design patterns in this analogy because they occur frequently in relation to DI. You’ll see many examples of three of these patterns — Decorator, Composite, and Adapter — throughout this book. (We cover the fourth, the Null Object pattern, in chapter 4.) Don’t worry if you’re not that familiar with these patterns: you will be by the end of the book.

Software development is still a rather new profession, so in many ways we’re still figuring out how to implement good architecture. But individuals with expertise in more traditional professions (such as construction) figured it out a long time ago.

Checking into a cheap hotel

If you’re staying at a cheap hotel, you might encounter a sight like the one in figure 1.3. Here, the hotel has kindly provided a hair dryer for your convenience, but apparently they don’t trust you to leave the hair dryer for the next guest: the appliance is directly attached to the wall outlet. The hotel management decided that the cost of replacing stolen hair dryers is high enough to justify what’s otherwise an obviously inferior implementation.

Figure 1.3 In a cheap hotel room, you might find a hair dryer wired directly into the wall outlet. This is equivalent to using the common practice of writing tightly coupled code.

What happens when the hair dryer stops working? The hotel has to call in a skilled professional. To fix the hardwired hair dryer, the power to the room will have to be cut, rendering it temporarily useless. Then, the technician must use special tools to disconnect the hair dryer and replace it with a new one. If you’re lucky, the technician will remember to turn the power to the room back on and go back to test whether the new hair dryer works — if you’re lucky. Does this procedure sound at all familiar?

This is how you would approach working with tightly coupled code. In this scenario, the hair dryer is tightly coupled to the wall, and you can’t easily modify one without impacting the other.

Comparing electrical wiring to design patterns

Usually, we don’t wire electrical appliances together by attaching the cable directly to the wall. Instead, as in figure 1.4, we use plugs and sockets. A socket defines a shape that the plug must match.

In an analogy to software design, the socket is an interface, and the plug with its appliance is an implementation. This means that the room (the application) has one or (hopefully) more sockets, and the users of the room (the developers) can plug in appliances as they please, potentially even a customer-supplied hair dryer.

Figure 1.4 Through the use of sockets and plugs, a hair dryer can be loosely coupled to a wall outlet.

In contrast to the hardwired hair dryer, plugs and sockets define a loosely coupled model for connecting electrical appliances. As long as the plug (the implementation) fits into the socket (implements the interface), and the appliance can handle the supplied voltage and frequency (obeys the interface contract), we can combine appliances in a variety of ways. What’s particularly interesting is that many of these common combinations can be compared to well-known software design principles and patterns.

First, we’re no longer constrained to hair dryers. If you’re an average reader, we would guess that you need power for a computer much more than you do for a hair dryer. That’s not a problem: you unplug the hair dryer and plug a computer into the same socket (figure 1.5).

Figure 1.5 Using a socket and a plug, you can replace the original hair dryer from figure 1.4 with a computer. This corresponds to the Liskov Substitution Principle.

You can unplug the computer if you don’t need to use it at the moment. Even though nothing is plugged in, the room doesn’t explode. That is to say, if you unplug the computer from the wall, neither the wall outlet nor the computer breaks down.

With software, however, a client often expects a service to be available. If you remove the service, you get a NullReferenceException. To deal with this type of situation, you can create an implementation of an interface that does nothing. This design pattern, known as Null Object, corresponds to having a children’s safety outlet plug (a plug without a wire or appliance that still fits into the socket). And because you’re using loose coupling, you can replace a real implementation with something that does nothing without causing trouble. This is illustrated in figure 1.6.

Figure 1.6 Unplugging the computer causes neither room nor computer to explode when replaced with a children’s safety outlet plug. This can be roughly likened to the Null Object pattern.
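
In code, a Null Object is simply an implementation that satisfies the interface but does nothing when invoked. Here’s a minimal sketch, using the IMessageWriter interface that appears later in this chapter (the NullMessageWriter name is ours, not part of the example):

public class NullMessageWriter : IMessageWriter
{
    public void Write(string message)
    {
        // Intentionally does nothing -- the software equivalent of the safety plug.
    }
}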

There are many other things you can do, as well. If you live in a neighborhood with intermittent power failures, you may want to keep the computer running by plugging it into an uninterruptible power supply (UPS). As shown in figure 1.7, you connect the UPS to the wall outlet and the computer to the UPS.

Figure 1.7 A UPS can be introduced to keep the computer running in case of power failure. This corresponds to the Decorator design pattern.

The computer and the UPS serve separate purposes. Each has a Single Responsibility that doesn’t infringe on the other unit. The UPS and computer are likely to be produced by two different manufacturers, bought at different times, and plugged in separately. As figure 1.5 demonstrated, you can run the computer without a UPS, and you could also conceivably use the hair dryer during blackouts by plugging it into the UPS.

In software design, this way of intercepting one implementation with another implementation of the same interface is known as the Decorator design pattern.5  It gives you the ability to incrementally introduce new features and Cross-Cutting Concerns without having to rewrite or change a lot of existing code.

Figure 1.8 A power strip makes it possible to plug several appliances into a single wall outlet. This corresponds to the Composite design pattern.

Another way to add new functionality to an existing code base is to combine several existing implementations of an interface into a new one. When you aggregate several implementations like this, you use the Composite design pattern.6  Figure 1.8 illustrates how this corresponds to plugging diverse appliances into a power strip.

The power strip has a single plug that you can insert into a single socket, and the power strip itself provides several sockets for a variety of appliances. This enables you to add and remove the hair dryer while the computer is running. In the same way, the Composite pattern makes it easy to add or remove functionality by modifying the set of composed interface implementations.
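
In code, a Composite is an implementation of the interface that forwards each call to a collection of other implementations. Here’s a hedged sketch using the IMessageWriter interface from this chapter’s example (the CompositeMessageWriter name is ours):

public class CompositeMessageWriter : IMessageWriter
{
    private readonly IMessageWriter[] writers;

    public CompositeMessageWriter(params IMessageWriter[] writers)
    {
        this.writers = writers;
    }

    public void Write(string message)
    {
        // Forward the call to every "appliance" plugged into this power strip.
        foreach (IMessageWriter writer in this.writers)
        {
            writer.Write(message);
        }
    }
}

Adding or removing a writer only changes how the Composite is composed; none of the individual implementations needs to change.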

Here’s a final example. You sometimes find yourself in situations where a plug doesn’t fit into a particular socket. If you’ve traveled to another country, you’ve likely noticed that sockets differ across the world. If you bring something like the camera in figure 1.9 along when traveling, you’ll need an adapter to charge it. Appropriately, there’s a design pattern with the same name.

Figure 1.9 When traveling, you often need to use an adapter to plug an appliance into a foreign socket (for example, to recharge a camera). This corresponds to the Adapter design pattern. Sometimes, translation is as simple as changing the shape of the plug, or as complex as changing the electric current from alternating current (AC) to direct current (DC).

The Adapter design pattern works like its physical namesake.7  You can use it to match two related, yet separate, interfaces to each other. This is particularly useful when you have an existing third-party API that you want to expose as an instance of an interface your application consumes. As with the physical adapter, implementations of the Adapter design pattern can range from simple to extremely complex.

What’s amazing about the socket and plug model is that, over decades, it’s proven to be an easy and versatile model. Once the infrastructure is in place, it can be used by anyone and adapted to changing needs and unanticipated requirements. What’s even more interesting is that, when we relate this model to software development, all the building blocks are already in place in the form of design principles and patterns.

The advantage of loose coupling is the same in software design as it is in the physical socket and plug model: Once the infrastructure is in place, it can be used by anyone and adapted to changing needs and unforeseen requirements without requiring large changes to the application code base and infrastructure. This means that ideally, a new requirement should only necessitate the addition of a new class, with no changes to other already-existing classes of the system.

This concept of being able to extend an application without modifying existing code is called the Open/Closed Principle. It’s impossible to get to a situation where 100% of your code will always be open for extensibility and closed for modification. Still, loose coupling does bring you closer to that goal.

And, with every step, it gets easier to add new features and requirements to your system. Being able to add new features without touching existing parts of the system means that problems are isolated. This leads to code that’s easier to understand and test, allowing you to manage the complexity of your system. That’s what loose coupling can help you with, and that’s why it can make a code base much more maintainable. We’ll discuss the Open/Closed Principle in more detail in chapter 4.

By now you might be wondering how these patterns will look when implemented in code. Don’t worry about that. As we stated before, we’ll show you plenty of examples of those patterns throughout this book. In fact, later in this chapter, we’ll show you an implementation of both the Decorator and Adapter patterns.

The easy part of loose coupling is programming to an interface instead of an implementation. The question is, “Where do the instances come from?” In a sense, this is what this entire book is about: it’s the core question that DI seeks to answer.

You can’t create a new instance of an interface the same way that you create a new instance of a concrete type. Code like this doesn’t compile:

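(In the following sketch, IMessageWriter stands for the interface defined in section 1.2, but the same holds for any interface.)

IMessageWriter writer = new IMessageWriter();    // Compiler error: an interface can't be instantiated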

An interface contains no implementation, so this isn’t possible. The writer instance must be created using a different mechanism. DI solves this problem. With this outline of the purpose of DI, we think you’re ready for an example.

1.2 A simple example: Hello DI!

In the tradition of innumerable programming textbooks, let’s take a look at a simple console application that writes “Hello DI!” to the screen. Note that the full code is available as part of the download for this book, as mentioned in the section “Code conventions and downloads” at the beginning of this book.

In this section, we’ll show you what the code looks like and briefly outline some key benefits without going into details. In the rest of the book, we’ll get more specific.

1.2.1 Hello DI! code

You’re probably used to seeing Hello World examples that are written with a single line of code. Here, we’ll take something that’s extremely simple and make it more complicated. Why? We’ll get to that shortly, but let’s first see what Hello World would look like with DI.

Collaborators

To get a sense of the structure of the program, we’ll start by looking at the Main method of the console application. Then we’ll show you the collaborating classes; but first, here’s the Main method of the Hello DI! application:

private static void Main()
{
    IMessageWriter writer = new ConsoleMessageWriter();
    var salutation = new Salutation(writer);
    salutation.Exclaim();
}

Because the program needs to write to the console, it creates a new instance of ConsoleMessageWriter that encapsulates that functionality. It passes that message writer to the Salutation class so that the salutation instance knows where to write its messages. Because everything is now wired up properly, you can execute the logic via the Exclaim method, which results in the message being written to the screen.

The construction of objects inside the Main method is a basic example of Pure DI. No DI Container is used to compose the Salutation and its ConsoleMessageWriter Dependency. Figure 1.10 shows the relationship between the collaborators.

Figure 1.10 Relationship between the collaborators of the Hello DI! application

Implementing the application logic

The main logic of the application is encapsulated in the Salutation class, shown in listing 1.1.

Listing 1.1 Salutation class encapsulates the main application logic

public class Salutation
{
    private readonly IMessageWriter writer;

    public Salutation(IMessageWriter writer)    ①  
    {
        if (writer == null)    ②  
            throw new ArgumentNullException("writer");  ②  

        this.writer = writer;
    }

    public void Exclaim()
    {
        this.writer.Write("Hello DI!");    ③  
    }
}

The Salutation class depends on a custom interface called IMessageWriter (defined next). It requests an instance of it through its constructor. This practice is called Constructor Injection. A Guard Clause verifies that the supplied IMessageWriter isn’t null by throwing an exception if it is.8  And, finally, you use the previously injected IMessageWriter instance inside the implementation of the Exclaim method by calling its Write method. This sends the Hello DI! message to the IMessageWriter Dependency.

To speak in DI terminology, we say that the IMessageWriter Dependency is injected into the Salutation class using a constructor argument. Note that Salutation has no awareness of ConsoleMessageWriter. It interacts with it exclusively through the IMessageWriter interface. IMessageWriter is a simple interface defined for the occasion:

public interface IMessageWriter
{
    void Write(string message);
}

It could have had other members, but in this simple example, you only need the Write method. It’s implemented by the ConsoleMessageWriter class that the Main method passes to the Salutation class:

public class ConsoleMessageWriter : IMessageWriter
{
    public void Write(string message)
    {
        Console.WriteLine(message);
    }
}

The ConsoleMessageWriter class implements IMessageWriter by wrapping the Console class of the .NET Base Class Library (BCL). This is a simple application of the Adapter design pattern that we talked about in section 1.1.2.

1.2.2 Benefits of DI

You may be wondering about the benefit of replacing a single line of code with two classes and an interface, resulting in 28 lines total. You could easily solve the same problem as shown here:

private static void Main()
{
    Console.WriteLine("Hello DI!");
}

DI might seem like overkill, but there are several benefits to be harvested from using it. How is the previous example better than the usual single line of code you normally use to implement Hello World in C#? In this example, DI adds an overhead of 2800%, but, as complexity increases from one line of code to tens of thousands, this overhead diminishes and all but disappears. Chapter 3 provides a more complex example of applied DI. Although that example is still overly simplistic compared to real-life applications, you should notice that DI is far less intrusive.

We don’t blame you if you find the previous DI example to be over-engineered, but consider this: by its nature, the classic Hello World example is a simple problem with well-specified and constrained requirements. In the real world, software development is never like this. Requirements change and are often fuzzy. The features you must implement also tend to be much more complex. DI helps address such issues by enabling loose coupling. Specifically, you gain the benefits listed in table 1.1.

Table 1.1 Benefits gained from loose coupling. Each benefit is always available but will be valued differently depending on circumstances.
Benefit | Description | When is it valuable?
Late binding | Services can be swapped with other services without recompiling code. | Valuable in standard software, but perhaps less so in enterprise applications where the runtime environment tends to be well defined.
Extensibility | Code can be extended and reused in ways not explicitly planned for. | Always valuable.
Parallel development | Code can be developed in parallel. | Valuable in large, complex applications; not so much in small, simple applications.
Maintainability | Classes with clearly defined responsibilities are easier to maintain. | Always valuable.
Testability | Classes can be unit tested. | Always valuable.

We listed the late-binding benefit first because, in our experience, this is the one that’s foremost in most people’s minds. When architects and developers fail to understand the benefits of loose coupling, it’s most likely because they never consider the other benefits.

Late binding

When we explain the benefits of programming to interfaces and DI, the ability to swap out one service with another is the most conspicuous benefit for most people, so they tend to weigh the advantages against the disadvantages with only this benefit in mind. Remember when we suggested that you may need to unlearn before you can learn? You may say that you know your requirements so well that you know you’ll never have to replace, say, your SQL Server database with anything else. But requirements change.

In section 1.2.1, you didn’t use late binding because you explicitly created a new instance of IMessageWriter by hard coding the creation of a new ConsoleMessageWriter instance. You can, however, introduce late binding by changing this single line of code:

IMessageWriter writer = new ConsoleMessageWriter();

To enable late binding, you might replace that line of code with something like the following.

Listing 1.2 Late binding an IMessageWriter implementation

IConfigurationRoot configuration = new ConfigurationBuilder()
    .SetBasePath(Directory.GetCurrentDirectory())
    .AddJsonFile("appsettings.json")
    .Build();

string typeName = configuration["messageWriter"];
Type type = Type.GetType(typeName, throwOnError: true);

IMessageWriter writer = (IMessageWriter)Activator.CreateInstance(type);

By pulling the type name from the application configuration file and creating a Type instance from it, you can use reflection to create an instance of IMessageWriter without knowing the concrete type at compile time. To make this work, you specify the type name in the messageWriter application setting in the application configuration file:

{
  "messageWriter":
    "Ploeh.Samples.HelloDI.Console.ConsoleMessageWriter, HelloDI.Console"
}

Loose coupling enables late binding because there’s only a single place where you create the instance of IMessageWriter. Because the Salutation class works exclusively against the IMessageWriter interface, it never notices the difference. In the Hello DI! example, late binding would enable you to write the message to a different destination than the console; for example, a database or a file. It’s possible to add such features — even though you didn’t explicitly plan ahead for them.
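
For instance, a hedged sketch of a file-based implementation could look like the following. The FileMessageWriter name and the output path are ours, not part of the book’s example, and the class needs a using directive for System.IO:

public class FileMessageWriter : IMessageWriter
{
    public void Write(string message)
    {
        // Appends the message to a text file instead of writing to the console.
        File.AppendAllText("messages.txt", message + Environment.NewLine);
    }
}

Pointing the messageWriter configuration value at this class (in whatever assembly contains it) would then switch the destination without touching Salutation.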

Extensibility

Successful software must be able to change. You’ll need to add new features and extend existing features. Loose coupling lets you efficiently recompose the application, similar to the way you have flexibility when working with electrical plugs and sockets.

Let’s say that you want to make the Hello DI! example more secure by only allowing authenticated users to write the message. Listing 1.3 shows how you can add that feature without changing any of the existing features — you simply add a new implementation of the IMessageWriter interface.

Listing 1.3 Extending the Hello DI! application with a security feature

public class SecureMessageWriter : IMessageWriter    ①  
{
    private readonly IMessageWriter writer;
    private readonly IIdentity identity;

    public SecureMessageWriter(
        IMessageWriter writer,    ②  
        IIdentity identity)
    {
        if (writer == null)
            throw new ArgumentNullException("writer");
        if (identity == null)
            throw new ArgumentNullException("identity");

        this.writer = writer;
        this.identity = identity;
    }

    public void Write(string message)
    {
        if (this.identity.IsAuthenticated)    ③  
        {
            this.writer.Write(message);    ④  
        }
    }
}

Besides an instance of IMessageWriter, the SecureMessageWriter constructor requires an instance of IIdentity. The Write method is implemented by first checking whether the current user is authenticated, using the injected IIdentity. If this is the case, it allows the decorated writer field to Write the message. The only place where you need to change existing code is in the Main method, because you need to compose the available classes differently than before:

IMessageWriter writer =
    new SecureMessageWriter(    ①  
        new ConsoleMessageWriter(),
        WindowsIdentity.GetCurrent());

Notice that you wrap or decorate the old ConsoleMessageWriter instance with the new SecureMessageWriter class. Once more, the Salutation class is unmodified because it only consumes the IMessageWriter interface. Similarly, there’s no need to modify or duplicate the functionality in the ConsoleMessageWriter class. You use the System.Security.Principal.WindowsIdentity class to retrieve the identity of the user on whose behalf this code is being executed.10 

As we’ve stated before, loose coupling enables you to write code that’s open for extensibility, but closed for modification. The only place where you need to modify the code is at the application entry point. SecureMessageWriter implements the security features of the application, whereas ConsoleMessageWriter addresses the user interface. This enables you to vary these aspects independently of each other and compose them as needed. Each class has its own Single Responsibility.

Parallel development

Separation of concerns makes it possible to develop code in parallel. When a software development project grows to a certain size, it becomes necessary to have multiple developers work in parallel on the same code base. At a larger scale, it’s even necessary to separate the development team into multiple teams of manageable sizes. Each team is often assigned responsibility for an area of the overall application. To demarcate responsibilities, each team develops one or more modules that will need to be integrated into the finished application. Unless the areas of each team are truly independent, some teams are likely to depend on functionality developed by other teams.

In the previous example, because the SecureMessageWriter and ConsoleMessageWriter classes don’t depend directly on each other, they could’ve been developed by parallel teams. All they would have needed to agree on was the shared interface IMessageWriter.

Maintainability

As the responsibility of each class becomes clearly defined and constrained, maintenance of the overall application becomes easier. This is a consequence of the Single Responsibility Principle, which states that each class should have only a single responsibility. We’ll discuss the Single Responsibility Principle in more detail in chapter 2.

Adding new features to an application becomes simpler because it’s clear where changes should be applied. More often than not, you don’t need to change existing code, but can instead add new classes and recompose the application. This is the Open/Closed Principle in action again.

Troubleshooting also tends to become less grueling, because the scope of likely culprits narrows. With clearly defined responsibilities, you’ll often have a good idea of where to start looking for the root cause of a problem.

Testability

An application is considered Testable when it can be unit tested. For some, Testability is the least of their worries; for others, it’s an absolute requirement. Personally, we belong in the latter category. In Mark’s career, he’s declined several job offers because they involved working with certain products that weren’t Testable.

The benefit of Testability is perhaps the most controversial of those we’ve listed. Some developers and architects still don’t practice unit testing, so they consider this benefit irrelevant at best. We, however, see it as an essential part of software development, which is why we marked it as “Always valuable” in table 1.1. Michael Feathers even defines the term legacy application as any application that isn’t covered by unit tests.11 

Almost by accident, loose coupling enables unit testing because consumers follow the Liskov Substitution Principle: they don’t care about the concrete types of their Dependencies. This means that you can inject Test Doubles into the System Under Test (SUT), as you’ll see in listing 1.4.

The ability to replace intended Dependencies with test-specific replacements is a by-product of loose coupling, but we chose to list it as a separate benefit because the derived value is different. Our personal experience is that DI is beneficial even during integration testing. Although integration tests typically communicate with real external systems (like a database), you still need to have a certain degree of isolation. In other words, there are still reasons to replace, Intercept, or mock certain Dependencies in the application being tested.

Depending on the type of application you’re developing, you may or may not care about the ability to do late binding, but we always care about Testability. Some developers don’t care about Testability but find late binding important for the application they’re developing. Regardless, DI provides options in the future with minimal additional overhead today.

Example: unit testing HelloDI logic

In section 1.2.1, you saw the Hello DI! example. Although we showed you the final code first, we developed it using TDD. Listing 1.4 shows the most important unit test.

Listing 1.4 Unit testing the Salutation class

[Fact]
public void ExclaimWillWriteCorrectMessageToMessageWriter()
{
    var writer = new SpyMessageWriter();
    var sut = new Salutation(writer);    ①  
    sut.Exclaim();
    Assert.Equal(
        expected: "Hello DI!",
        actual: writer.WrittenMessage);
}

public class SpyMessageWriter : IMessageWriter
{
    public string WrittenMessage { get; private set; }

    public void Write(string message)
    {
        this.WrittenMessage += message;
    }
}

The Salutation class needs an instance of the IMessageWriter interface, so you need to create one. You could use any implementation, but in unit tests, a Test Double can be useful — in this case, you roll your own Test Spy implementation.14 

Here, the Test Double is as involved as the production implementation. This is an artifact of how simple our example is. In most applications, a Test Double is significantly simpler than the concrete, production implementations it stands in for. The important part is to supply a test-specific implementation of IMessageWriter to ensure that you test only one thing at a time. Right now, you’re testing the Exclaim method of the Salutation class, so you don’t want a production implementation of IMessageWriter to pollute the test. To create the Salutation class, you pass in the Test Spy instance of IMessageWriter using Constructor Injection.

After exercising the SUT, you can call Assert.Equal to verify whether the expected outcome equals the actual outcome. If the IMessageWriter.Write method was invoked with the "Hello DI!" string, SpyMessageWriter will have stored it in its WrittenMessage property, and the Equal method succeeds. But if the Write method wasn’t called, or was called with a different value, the Equal method will throw an exception, and the test will fail.

Loose coupling provides many benefits: code becomes easier to develop, maintain, and extend, and it becomes more Testable. It’s not even particularly difficult. We program against interfaces, not concrete implementations. The only major obstacle is to figure out how to get hold of instances of those interfaces. DI surmounts this obstacle by injecting the Dependencies from the outside. Constructor Injection is the preferred method of doing that, though we’ll also explore a few additional options in chapter 4.

1.3 What to inject and what not to inject

In the previous section, we described the motivational forces that make one think about DI in the first place. If you’re convinced that loose coupling is a benefit, you may want to make everything loosely coupled. Overall, that’s a good idea. When you need to decide how to package modules, loose coupling proves especially useful. But you don’t have to abstract everything away and make it pluggable. In this section, we’ll provide some decision tools to help you decide how to model your Dependencies.

The .NET BCL consists of many assemblies. Every time you write code that uses a type from a BCL assembly, you add a dependency to your module. In the previous section, we discussed how loose coupling is important and how programming to an interface is the cornerstone. Does this imply that you can’t reference any BCL assemblies and use their types directly in your application? What if you’d like to use an XmlWriter that’s defined in the System.Xml assembly?

You don’t have to treat all Dependencies equally. Many types in the BCL can be used without jeopardizing an application’s degree of coupling — but not all of them. It’s important to know how to distinguish between types that pose no danger and types that may tighten an application’s degree of coupling. Focus mainly on the latter.

As you learn DI, it can be helpful to categorize your Dependencies into Stable Dependencies and Volatile Dependencies. Deciding where to put your Seams will soon become second nature to you. The next sections discuss these concepts in more detail.

1.3.1 Stable Dependencies

Many of the modules in the BCL and beyond pose no threat to an application’s degree of modularity. They contain reusable functionality that you can use to make your own code more succinct. The BCL modules are always available to your application, because it needs the .NET Framework to run, and, because they already exist, the concern about parallel development doesn’t apply to these modules. You can always reuse a BCL library in another application.

By default, you can consider most (but not all) types defined in the BCL as safe, or Stable Dependencies. We call them stable because they’re already there, they tend to be backward compatible, and invoking them has deterministic outcomes. Most Stable Dependencies are BCL types, but other Dependencies can be stable too. The important criteria for Stable Dependencies include the following:

  • The class or module already exists.
  • You expect that new versions won’t contain breaking changes.
  • The types in question contain deterministic algorithms.
  • You never expect to have to replace, wrap, decorate, or Intercept the class or module with another.

Other examples may include specialized libraries that encapsulate algorithms relevant to your application. For example, if you’re developing an application that deals with chemistry, you can reference a third-party library that contains chemistry-specific functionality.

In general, Dependencies can be considered stable by exclusion. They’re stable if they aren’t volatile.

1.3.2 Volatile Dependencies

Introducing Seams into an application is extra work, so you should only do it when it’s necessary. There can be more than one reason it’s necessary to isolate a Dependency behind a Seam, but those reasons are closely related to the benefits of loose coupling (discussed in section 1.2.1).

Such Dependencies can be recognized by their tendency to interfere with one or more of these benefits. They aren’t stable because they don’t provide a sufficient foundation for applications, and we call them Volatile Dependencies for that reason. A Dependency should be considered volatile if any of the following criteria are true:

  • The Dependency introduces a requirement to set up and configure a runtime environment for the application. It isn’t so much the concrete .NET types that are volatile, but rather what they imply about the runtime environment.

    Databases are good examples of such Volatile Dependencies, and the relational database is the archetypical example. If you don’t hide a relational database behind a Seam, you can never replace it with any other technology. It also makes it hard to set up and run automated unit tests. (Even though the Microsoft SQL Server client library is part of the BCL, its usage implies a relational database.) Other out-of-process resources like message queues, web services, and even the filesystem fall into this category. The symptoms of this type of Dependency are lack of late binding and extensibility, as well as disabled Testability.

  • The Dependency doesn’t yet exist, or is still in development.
  • The Dependency isn’t installed on all machines in the development organization. This may be the case for expensive third-party libraries or Dependencies that can’t be installed on all operating systems. The most common symptom is disabled Testability.
  • The Dependency contains nondeterministic behavior. This is particularly important in unit tests because all tests must be deterministic. Typical sources of nondeterminism are random numbers and algorithms that depend on the current date or time.

    Because the BCL defines common sources of nondeterminism, such as System.Random, System.Security.Cryptography.RandomNumberGenerator, or System.DateTime.Now, you can’t avoid having a reference to the assembly in which they’re defined. Nevertheless, you should treat them as Volatile Dependencies because they tend to destroy Testability.
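
One common way to turn such a nondeterministic Dependency into a Seam is to hide it behind an Abstraction that consumers receive through Constructor Injection. The following is a minimal sketch; the ITimeProvider and SystemTimeProvider names are our own and aren’t part of this chapter’s example:

public interface ITimeProvider
{
    DateTime Now { get; }
}

// Production implementation: delegates to the nondeterministic BCL member.
public class SystemTimeProvider : ITimeProvider
{
    public DateTime Now
    {
        get { return DateTime.Now; }
    }
}

A unit test can then inject an implementation that returns a fixed, deterministic time.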

Now that you understand the differences between Stable and Volatile Dependencies, you can begin to see the contours of the scope of DI. Loose coupling is a pervasive design principle, so DI (as an enabler) should be everywhere in your code base. There’s no hard line between the topic of DI and good software design, but to define the scope of the rest of the book, we’ll quickly describe what it covers.

1.4 DI scope

As we discussed before, an important element of DI is to break up various responsibilities into separate classes. One responsibility that we take away from classes is the task of creating instances of their Dependencies; this task is referred to as Object Composition.

We discussed this in our Hello DI! example, where the Salutation class was relieved of the responsibility of creating its Dependency. Instead, this responsibility was moved to the application’s Main method. The UML diagram is shown again in figure 1.11.

Figure 1.11 Relationship between the collaborators of the Hello DI! application (repeated)

As a class relinquishes control of Dependencies, it gives up more than the decision to select particular implementations. By doing this, we, as developers, gain some advantages. At first, it may seem like a disadvantage to let a class surrender control over which objects are created, but we don’t lose that control — we only move it to another place.

Object Composition isn’t the only dimension of control that we remove: a class also loses the ability to control the lifetime of the object. When a Dependency instance is injected into a class, the consumer doesn’t know when it was created, or when it’ll go out of scope. This should be of no concern to the consumer. Making the consumer oblivious to the lifetime of its Dependencies simplifies the consumer.

DI gives you an opportunity to manage Dependencies in a uniform way. When consumers directly create and set up instances of Dependencies, each may do so in its own way. This can be inconsistent with how other consumers do it. You have no way to centrally manage Dependencies and no easy way to address Cross-Cutting Concerns. With DI, you gain the ability to Intercept each Dependency instance and act on it before it’s passed to the consumer. This provides extensibility in applications.

With DI, you can compose applications while intercepting Dependencies and controlling their lifetimes. Object Composition, Interception, and Lifetime Management are three dimensions of DI. Next, we’ll cover each of these briefly; a more detailed treatment follows in part 3 of the book.

1.4.1 Object Composition

To harvest the benefits of extensibility, late binding, and parallel development, you must be able to compose classes into applications. This means that you’ll want to create an application out of individual classes by putting them together, much like plugging electrical appliances together. And, as with electrical appliances, you’ll want to easily rearrange those classes when new requirements are introduced, ideally, without having to make changes to existing classes.

Object Composition is often the primary motivation for introducing DI into an application. In fact, initially, DI was synonymous with Object Composition; it’s the only aspect discussed in Martin Fowler’s original article on the subject.16 

You can compose classes into an application in several ways. When we discussed late binding, we used a configuration file and a bit of dynamic object instantiation to manually compose the application from the available modules. We could also have used Configuration as Code with a DI Container. We’ll return to these options in chapter 12.
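
As an impression of what Configuration as Code can look like, here’s a hedged sketch that uses Microsoft.Extensions.DependencyInjection, one of several available DI Containers (the container examples in part 4 may use different libraries). It registers the Hello DI! types in code and lets the container compose them:

var services = new ServiceCollection();

// Map the Abstraction to an implementation in code rather than in a config file.
services.AddTransient<IMessageWriter, ConsoleMessageWriter>();
services.AddTransient<Salutation>();

ServiceProvider provider = services.BuildServiceProvider();

Salutation salutation = provider.GetRequiredService<Salutation>();
salutation.Exclaim();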

Many people refer to DI as Inversion of Control (IoC). These two terms are sometimes used interchangeably, but DI is a subset of IoC. Throughout the book, we consistently use the most specific term — DI. If we mean IoC, we refer to it specifically.

1.4.2 Object Lifetime

A class that has surrendered control of its Dependencies gives up more than the power to select particular implementations of an Abstraction. It also gives up the power to control when instances are created and when they go out of scope.

In .NET, the garbage collector takes care of these things for us. A consumer can have its Dependencies injected into it and use them for as long as it wants. When it’s done, the Dependencies go out of scope. If no other classes reference them, they’re eligible for garbage collection.

What if two consumers share the same type of Dependency? Listing 1.5 illustrates that you can choose to inject a separate instance into each consumer, whereas listing 1.6 shows that you can alternatively choose to share a single instance across several consumers. But from the perspective of the consumer, there’s no difference. According to the Liskov Substitution Principle, the consumer must treat all instances of a given interface equally.

Listing 1.5 Consumers getting their own instance of the same type of Dependency

IMessageWriter writer1 = new ConsoleMessageWriter();    ①  
IMessageWriter writer2 = new ConsoleMessageWriter();    ①  

var salutation = new Salutation(writer1);    ②  
var valediction = new Valediction(writer2);    ②  

Listing 1.6 Consumers sharing an instance of the same type of Dependency

IMessageWriter writer = new ConsoleMessageWriter();    ①  

var salutation = new Salutation(writer);    ②  
var valediction = new Valediction(writer);    ②  
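
The Valediction class isn’t shown in this chapter. For these listings, you can think of it as a second consumer analogous to Salutation; the following sketch is our guess at its shape, with the method name and message chosen for illustration:

public class Valediction
{
    private readonly IMessageWriter writer;

    public Valediction(IMessageWriter writer)
    {
        if (writer == null)
            throw new ArgumentNullException("writer");

        this.writer = writer;
    }

    public void Exclaim()
    {
        this.writer.Write("Goodbye DI!");
    }
}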

Because Dependencies can be shared, a single consumer can’t possibly control their lifetimes. As long as a managed object can go out of scope and be garbage collected, this isn’t much of an issue. But when Dependencies implement the IDisposable interface, things become much more complicated, as we’ll discuss in section 8.2. As a whole, Lifetime Management is a separate dimension of DI and important enough that we’ve set aside all of chapter 8 for it.

1.4.3 Interception

When we delegate control over Dependencies to a third party, as figure 1.12 shows, we also provide the power to modify them before we pass them on to the classes consuming them.

Figure 1.12 Intercepting a ConsoleMessageWriter

In the Hello DI! example, we initially injected a ConsoleMessageWriter instance into a Salutation instance. Then, modifying the example, we added a security feature by creating a new SecureMessageWriter that only delegates further work to the ConsoleMessageWriter when the user is authenticated. This allows you to maintain the Single Responsibility Principle. It’s possible to do this because you always program to interfaces; recall that Dependencies must always be Abstractions. In the case of the Salutation, it doesn’t care whether the supplied IMessageWriter is a ConsoleMessageWriter or a SecureMessageWriter. The SecureMessageWriter can wrap a ConsoleMessageWriter that still performs the real work.

This ability to Intercept Dependencies moves us along the path toward Aspect-Oriented Programming (AOP), a closely related topic that we’ll cover in chapters 10 and 11. With Interception and AOP, you can apply Cross-Cutting Concerns such as logging, auditing, access control, validation, and so forth in a well-structured manner that lets you maintain Separation of Concerns.
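
As a small illustration of a Cross-Cutting Concern applied through Interception, here’s a hedged sketch of a logging Decorator for the IMessageWriter interface. The LoggingMessageWriter name and its Console-based logging are ours, not part of the book’s example:

public class LoggingMessageWriter : IMessageWriter
{
    private readonly IMessageWriter writer;

    public LoggingMessageWriter(IMessageWriter writer)
    {
        if (writer == null)
            throw new ArgumentNullException("writer");

        this.writer = writer;
    }

    public void Write(string message)
    {
        Console.WriteLine("Writing message...");    // Concern applied before the call
        this.writer.Write(message);                 // The decorated writer does the real work
        Console.WriteLine("Message written.");      // ...and after the call
    }
}

Like SecureMessageWriter, this class can be wrapped around any IMessageWriter without the consumer noticing.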

1.4.4 DI in three dimensions

Although DI started out as a series of patterns aimed at solving the problem of Object Composition, the term has subsequently expanded to also cover Object Lifetime and Interception. Today, we think of DI as encompassing all three in a consistent way.

Object Composition tends to dominate the picture because, without flexible Object Composition, there’d be no Interception and no need to manage Object Lifetime. Object Composition has dominated most of this chapter and will continue to dominate this book, but you shouldn’t forget the other aspects. Object Composition provides the foundation, and Lifetime Management addresses some important side effects. But it’s mainly when it comes to Interception that you start to reap the benefits.

In part 3, we’ve devoted a chapter to each dimension briefly mentioned here. But it’s important to know that, in practice, DI is more than Object Composition.

1.5 Conclusion

Dependency Injection is a means to an end, not a goal in itself. It’s the best way to enable loose coupling, an important part of maintainable code. The benefits you can reap from loose coupling aren’t always immediately apparent, but they’ll become visible over time, as the complexity of a code base grows. An important point about loose coupling in relation to DI is that, in order to be effective, it should be everywhere in your code base.

A tightly coupled code base will eventually deteriorate into Spaghetti Code;18  whereas a well-designed, loosely coupled code base can stay maintainable. It takes more than loose coupling to reach a truly supple design,19  but programming to interfaces is a prerequisite.

DI is nothing more than a collection of design principles and patterns. It’s more about a way of thinking and designing code than it is about tools and techniques. The purpose of DI is to make code maintainable. Small code bases, like a classic Hello World example, are inherently maintainable because of their size. This is why DI tends to look like overengineering in simple examples. The larger the code base becomes, the more visible the benefits. We’ve dedicated the next two chapters to a larger and more complex example to showcase these benefits.

Summary

  • Dependency Injection is a set of software design principles and patterns that enables you to develop loosely coupled code. Loose coupling makes code more maintainable.
  • When you have a loosely coupled infrastructure in place, it can be used by anyone and adapted to changing needs and unanticipated requirements without having to make large changes to the application’s code base and its infrastructure.
  • Troubleshooting tends to become less taxing because the scope of likely culprits narrows.
  • DI enables late binding, which is the ability to replace classes or modules with different ones without the need for the original code to be recompiled.
  • DI makes it easier for code to be extended and reused in ways not explicitly planned for, similar to the way you have flexibility when working with electrical plugs and sockets.
  • DI simplifies parallel development on the same code base because the Separation of Concerns allows each team member or even entire teams to work more easily on isolated parts.
  • DI makes software more Testable because you can replace Dependencies with test implementations when writing unit tests.
  • When you practice DI, collaborating classes should rely on infrastructure to provide the necessary services. You do this by letting your classes depend on interfaces, instead of concrete implementations.
  • Classes shouldn’t ask a third party for their Dependencies. This is an anti-pattern called Service Locator. Instead, classes should specify their required Dependencies statically using constructor parameters, a practice called Constructor Injection.
  • Many developers think that DI requires specialized tooling, a so-called DI Container. This is a myth. A DI Container is a useful, but optional, tool.
  • One of the most important software design principles that enables DI is the Liskov Substitution Principle. It allows replacing one implementation of an interface with another without breaking either the client or the implementation.
  • Dependencies are considered Stable when they’re already available, have deterministic behavior, don’t require a configured runtime environment (such as a relational database), and don’t need to be replaced, wrapped, or intercepted.
  • Dependencies are considered Volatile when they are under development, aren’t always available on all development machines, contain nondeterministic behavior, or need to be replaced, wrapped, or intercepted.
  • Volatile Dependencies are the focal point of DI. We inject Volatile Dependencies into a class’s constructor.
  • By removing control over Dependencies from their consumers, and moving that control into the application entry point, you gain the ability to apply Cross-Cutting Concerns more easily and can manage the lifetime of Dependencies more effectively.
  • To succeed, you need to apply DI pervasively. All classes should get their required Volatile Dependencies using Constructor Injection. It’s hard to retrofit loose coupling and DI onto an existing code base.