9

Common Design Patterns

Design patterns have been a widespread topic in software engineering since they were popularized by the famous Gang of Four (GoF) book, Design Patterns: Elements of Reusable Object-Oriented Software. Design patterns help to solve common problems with abstractions that work for certain scenarios. When they are implemented properly, the general design of the solution can benefit from them.

In this chapter, we take a look at some of the most common design patterns, not from the perspective of tools to be applied under certain conditions (once the patterns have been devised), but rather analyzing how design patterns contribute to clean code. After presenting a solution that implements a design pattern, we will analyze how the final implementation is comparatively better than if we had chosen a different path.

As part of this analysis, we will see how to concretely implement design patterns in Python. As a result, we will see that the dynamic nature of Python implies some implementation differences with respect to statically typed languages, in which many of the design patterns were originally conceived. This means that there are some particularities about design patterns that you should bear in mind when it comes to Python and, in some cases, trying to apply a design pattern where it doesn't really fit is non-Pythonic.

In this chapter, we will cover the following topics:

  • Common design patterns
  • Design patterns that don't apply in Python, and the idiomatic alternative that should be followed
  • The Pythonic way of implementing the most common design patterns
  • Understanding how good abstractions evolve naturally into patterns

With the knowledge from previous chapters, we're now in a position to analyze code at a higher level of design and at the same time think in terms of its detailed implementation (how would we write it in a way that uses the features of Python most efficiently?).

In this chapter, we'll analyze how we can use design patterns to achieve cleaner code, starting with analyzing some initial considerations in the following section.

Design pattern considerations in Python

Object-oriented design patterns are ideas of software construction that appear in different scenarios when we deal with models of the problem we're solving. Because they're high-level ideas, it's hard to think of them as being tied to particular programming languages. They are instead more general concepts about how objects will interact in the application. Of course, they will have their implementation details, varying from language to language, but that doesn't form the essence of a design pattern.

That's the theoretical aspect of a design pattern, the fact that it is an abstract idea that expresses concepts about the layout of the objects in the solution. There are plenty of other books and several other resources about object-oriented design, and design patterns in particular, so in this book, we are going to focus on those implementation details for Python.

Given the nature of Python, some of the classical design patterns aren't actually needed. That means that Python already supports features that render those patterns invisible. Some argue that they don't exist in Python, but keep in mind that invisible doesn't mean non-existing. They are there, just embedded in Python itself, so it's likely that we won't even notice them.

Others have a much simpler implementation, again thanks to the dynamic nature of the language, and the rest of them are practically the same as they are in other platforms, with small differences.

In any case, the important goal for achieving clean code in Python is knowing what patterns to implement and how. That means recognizing some of the patterns that Python already abstracts and how we can leverage them. For instance, it would be completely non-Pythonic to try to implement the standard definition of the iterator pattern (as we would do in different languages), because (as we have already covered) iteration is deeply embedded in Python, and the fact that we can create objects that will directly work in a for loop makes this the right way to proceed.
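
To recap with a quick sketch (the Countdown class here is hypothetical), any object implementing the iterator protocol already plays the role that the classic pattern prescribes:

class Countdown:
    """A hypothetical iterable that counts down from a given number."""

    def __init__(self, start: int):
        self.current = start

    def __iter__(self):
        return self

    def __next__(self):
        if self.current <= 0:
            raise StopIteration
        value = self.current
        self.current -= 1
        return value


# Works directly in a for loop; no explicit iterator classes are needed
for number in Countdown(3):
    print(number)  # 3, 2, 1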

Something similar happens with some of the creational patterns. Classes are regular objects in Python, and so are functions. As we have seen in several examples so far, they can be passed around, decorated, reassigned, and so on. That means that whatever kind of customization we would like to make to our objects, we can most likely do it without needing any particular setup of factory classes. Also, there is no special syntax for creating objects in Python (no new keyword, for example). This is another reason why, most of the time, a simple function call will work just like a factory.

Other patterns are still needed, and we will see how, with some small adaptations, we can make them more Pythonic, taking full advantage of the features that the language provides (magic methods or the standard library).

Out of all the patterns available, not all of them are equally frequent, nor useful, so we will focus on the main ones, those that we would expect to see the most in our applications, and we will do so by following a pragmatic approach.

Design patterns in action

The canonical reference on this subject, written by the GoF, introduces 23 design patterns, each falling under one of the creational, structural, and behavioral categories. There are even more patterns, and variations of existing ones, but rather than learning all of them by heart, we should keep two things in mind. First, some of the patterns are invisible in Python, and we probably use them without even noticing. Second, not all patterns are equally common; some of them are tremendously useful, and so they are found very frequently, while others are for more specific cases.

In this section, we will revisit the most common patterns, those that are most likely to emerge from our design. Note the use of the word emerge here. We should not force the application of a design pattern to the solution we are building, but rather evolve, refactor, and improve our solution until a pattern emerges.

Design patterns are therefore not invented but discovered. When a situation that occurs repeatedly in our code reveals itself, the general and more abstract layout of classes, objects, and related components appears under a name by which we identify a pattern.

The name of a design pattern wraps up a lot of concepts. This is probably the best thing about design patterns: they provide a language. Through design patterns, it's easier to communicate design ideas effectively. When two or more software engineers share the same vocabulary, and one of them mentions strategy, the rest of the engineers in the room can immediately think of all the classes involved, how they would be related, what their mechanics would be, and so on, without having to repeat the explanation.

The reader will notice that the code shown in this chapter is different from the canonical or original envisioning of the design pattern in question. There is more than one reason for this. The first reason is that the examples take a more pragmatic approach, aimed at solutions for particular scenarios, rather than exploring general design theory. The second reason is that the patterns are implemented with the particularities of Python, which in some cases are very subtle, but in other cases, the differences are noticeable, generally simplifying the code.

Creational patterns

In software engineering, creational patterns are those that deal with object instantiation, trying to abstract away much of the complexity (like determining the parameters to initialize an object, all the related objects that might be needed, and so on), in order to leave the user with a simpler interface that should be safer to use. The basic form of object creation could result in design problems or added complexity to the design. Creational design patterns solve this problem by somehow controlling this object creation.

Out of the five creational patterns, we will mainly discuss the variants used to avoid the singleton pattern, replacing it with the Borg pattern (most commonly used in Python applications), and we will discuss their differences and advantages.

Factories

As was mentioned in the introduction, one of the core features of Python is that everything is an object, and as such, they can all be treated equally. This means that there are no special distinctions between what we can or cannot do with classes, functions, or custom objects. They can all be passed as parameters, assigned, and so on.

It is for this reason that many of the factory patterns are not usually needed. We can simply define a function that constructs a set of objects, and we can even pass the class that we want to create as a parameter.
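
For example, a minimal sketch of this idea (the parser classes are hypothetical) could look like this:

class JSONParser:
    def __init__(self, strict: bool = False):
        self.strict = strict


class XMLParser:
    def __init__(self, strict: bool = False):
        self.strict = strict


def create_parser(parser_cls, **kwargs):
    # The class to instantiate is just another argument
    return parser_cls(**kwargs)


parser = create_parser(JSONParser, strict=True)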

We saw an example of a sort of factory in action when we used pyinject as a library to help us with dependency injection, and the initialization of complex objects. In cases where we need to deal with a complex setup, and we want to make sure we are using dependency injection to initialize our objects without repeating ourselves, we can use libraries such as pyinject or come up with an analogous structure in our code.

Singleton and shared state (monostate)

The singleton pattern, on the other hand, is something not entirely abstracted away by Python. The truth is that most of the time, this pattern is either not really needed or is a bad choice. There are a lot of problems with singletons (after all, they are, in fact, a form of global variables for object-oriented software, and as such are a bad practice). They are hard to unit test, the fact that they might be modified at any time by any object makes them hard to predict, and their side effects can be really problematic.

As a general principle, we should avoid using singletons as much as possible. If in some extreme case they are required, the easiest way of achieving this in Python is by using a module. We can create an object in a module, and once it's there, it will be available to every part of the code that imports that module. Python itself makes sure that modules are already singletons, in the sense that no matter how many times they're imported, and from how many places, the same module is always the one that is going to be loaded into sys.modules. Therefore, an object initialized inside this Python module will be unique.
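
A minimal sketch of this idea (assuming a hypothetical settings module) could be:

# settings.py (a hypothetical module)
class _Settings:
    def __init__(self):
        self.database_url = None


settings = _Settings()  # created once, when the module is first imported

# Every other module runs: from settings import settings
# All importers get the same, unique object.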

Note how this is not quite the same as a singleton. The idea of a singleton is to create a class that no matter how many times you invoke it, will always give you the same object. The idea presented in the previous paragraph is about having a unique object. Regardless of how its class is defined, we create an object only once and then use the same object multiple times. These are sometimes called well-known objects; objects that don't need more than one of their kind.

We are familiar with these objects already. Consider None. We don't need more than one for the whole Python interpreter. Some developers claim that "None is a singleton in Python." I slightly disagree with that. It's a well-known object: something we all know, and we don't need another one. The same goes for True and False. It wouldn't make sense to try to create a different kind of boolean.

Shared state

Rather than forcing our design to have a singleton in which only one instance is created, no matter how the object is invoked, constructed, or initialized, it is better to replicate the data across multiple instances.

The idea of the monostate pattern (SNGMONO) is that we can have many instances that are just regular objects, without having to care whether they're singletons or not (seeing as they're just objects). The good thing about this pattern is that these objects will have their information synchronized, in a completely transparent way, without us having to worry about how this works internally.

This makes this pattern a much better choice, not only for its convenience, but also because it is less error-prone, and suffers from fewer of the disadvantages of singletons (regarding their testability, creating derived classes, and so on).

We can use this pattern on many levels, depending on how much information we need to synchronize.

In its simplest form, we can assume that we only need to have one attribute to be reflected across all instances. If that is the case, the implementation is as trivial as using a class variable, and we just need to take care of providing a correct interface to update and retrieve the value of the attribute.

Let's say we have an object that has to pull a version of some code in a Git repository by the latest tag. There might be multiple instances of this object, and when every client calls the method for fetching the code, this object will use the tag version from its attribute. At any point, this tag can be updated for a newer version, and we want any other instance (new or already created) to use this new tag when the fetch operation is called, as shown in the following code:

import logging

logger = logging.getLogger(__name__)


class GitFetcher:
    _current_tag = None  # class-level attribute, shared by all instances

    def __init__(self, tag):
        self.current_tag = tag

    @property
    def current_tag(self):
        if self._current_tag is None:
            raise AttributeError("tag was never set")
        return self._current_tag

    @current_tag.setter
    def current_tag(self, new_tag):
        # Write to the class, not the instance, so all objects see the update
        self.__class__._current_tag = new_tag

    def pull(self):
        logger.info("pulling from %s", self.current_tag)
        return self.current_tag

The reader can simply verify that creating multiple objects of the GitFetcher type with different versions will result in all objects being set with the latest version at any time, as shown in the following code:

>>> f1 = GitFetcher(0.1)
>>> f2 = GitFetcher(0.2)
>>> f1.current_tag = 0.3
>>> f2.pull()
0.3
>>> f1.pull()
0.3

In the case that we need more attributes, or that we wish to encapsulate the shared attribute a bit more, to make the design cleaner, we can use a descriptor.

A descriptor, like the one shown in the following code, solves the problem, and while it's true that it requires more code, it also encapsulates a more concrete responsibility, and part of the code is actually moved away from our original class, making it more cohesive and compliant with the single responsibility principle:

class SharedAttribute:
    """Descriptor holding a value shared by all instances of the owner class."""

    def __init__(self, initial_value=None):
        self.value = initial_value
        self._name = None

    def __get__(self, instance, owner):
        if instance is None:
            return self
        if self.value is None:
            raise AttributeError(f"{self._name} was never set")
        return self.value

    def __set__(self, instance, new_value):
        # Stored on the descriptor itself, hence shared by every instance
        self.value = new_value

    def __set_name__(self, owner, name):
        self._name = name

Apart from these considerations, it's also true that the pattern is now more reusable. If we want to repeat this logic, we just have to create a new descriptor object that would work (complying with the DRY principle).

If we now want to do the same but for the current branch, we create this new class attribute, and the rest of the class is kept intact, while still having the desired logic in place, as shown in the following code:

class GitFetcher:
    current_tag = SharedAttribute()
    current_branch = SharedAttribute()

    def __init__(self, tag, branch=None):
        self.current_tag = tag
        self.current_branch = branch

    def pull(self):
        logger.info("pulling from %s", self.current_tag)
        return self.current_tag

The balance and trade-off of this new approach should be clear by now. This new implementation uses a bit more code, but it's reusable, so it saves lines of code (and duplicated logic) in the long run. Once again, refer to the three or more instances rule to decide if you should create such an abstraction.

Another important benefit of this solution is that it also reduces the repetition of unit tests (because we only need to test the SharedAttribute class, and not all uses of it).

Reusing code here will give us more confidence in the overall quality of the solution, because now we just have to write unit tests for the descriptor object, not for all the classes that use it (we can safely assume that they're correct as long as the unit tests prove the descriptor to be correct).

The Borg pattern

The previous solutions should work for most cases, but if we really have to go for a singleton (and it has to be a really good exception), then there is one final, better alternative to it, although a riskier one.

This is the actual monostate pattern, referred to as the Borg pattern in Python. The idea is to create an object that is capable of replicating all of its attributes among all instances of the same class. The fact that absolutely every attribute is being replicated has to be a warning sign for undesired side effects. Still, this pattern has many advantages over the singleton.

In this case, we are going to split the previous object into two—one that works over Git tags, and the other over branches. And we are using the code that will make the Borg pattern work:

class BaseFetcher:
    def __init__(self, source):
        self.source = source


class TagFetcher(BaseFetcher):
    _attributes = {}  # shared storage for all TagFetcher instances

    def __init__(self, source):
        self.__dict__ = self.__class__._attributes
        super().__init__(source)

    def pull(self):
        logger.info("pulling from tag %s", self.source)
        return f"Tag = {self.source}"


class BranchFetcher(BaseFetcher):
    _attributes = {}  # a separate dictionary, so branches don't mix with tags

    def __init__(self, source):
        self.__dict__ = self.__class__._attributes
        super().__init__(source)

    def pull(self):
        logger.info("pulling from branch %s", self.source)
        return f"Branch = {self.source}"

Both objects have a base class, sharing their initialization method. But then they have to implement it again in order to make the Borg logic work. The idea is that we use a class attribute that is a dictionary to store the attributes, and then we make the dictionary of each object (at the time it's being initialized) use this very same dictionary. This means that any update on the dictionary of an object will be reflected in the class, which will be the same for the rest of the objects because their class is the same, and dictionaries are mutable objects that are passed as a reference. In other words, when we create new objects of this type, they will all use the same dictionary, and this dictionary is constantly being updated.

Note that we cannot put the logic of the dictionary on the base class, because this will mix the values among the objects of different classes, which is not what we want. This boilerplate solution is what would make many think it's actually an idiom rather than a pattern.

A possible way of abstracting this in a way that achieves the DRY principle would be to create a mixin class, as shown in the following code:

class SharedAllMixin:
    def __init__(self, *args, **kwargs):
        try:
            self.__class__._attributes
        except AttributeError:
            # First instantiation of this class: create its shared storage
            self.__class__._attributes = {}
        self.__dict__ = self.__class__._attributes
        super().__init__(*args, **kwargs)


class BaseFetcher:
    def __init__(self, source):
        self.source = source


class TagFetcher(SharedAllMixin, BaseFetcher):
    def pull(self):
        logger.info("pulling from tag %s", self.source)
        return f"Tag = {self.source}"


class BranchFetcher(SharedAllMixin, BaseFetcher):
    def pull(self):
        logger.info("pulling from branch %s", self.source)
        return f"Branch = {self.source}"

This time, we are using the mixin class to create the dictionary with the attributes in each class in case it doesn't already exist, and then continuing with the same logic.

This implementation should not have any major problems with inheritance, so it's a more viable alternative.
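
A quick interactive check (with hypothetical tag and branch names) shows the shared state in action, and that different classes don't mix their attributes:

>>> f1 = TagFetcher("release-1")
>>> f2 = TagFetcher("release-2")
>>> f1.source
'release-2'
>>> b1 = BranchFetcher("mainline")
>>> f2.source
'release-2'
>>> b1.source
'mainline'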

Builder

The builder pattern is an interesting pattern that abstracts away all the complex initialization of an object. This pattern does not rely on any particularity of the language, so it's just as applicable in Python as it is in any other language.

While it solves a valid case, it's usually also a complicated case that is more likely to appear in the design of a framework, library, or API. Similar to the recommendations given for descriptors, we should reserve this implementation for cases where we expect to expose an API that is going to be consumed by multiple users.

The high-level idea of this pattern is that we need to create a complex object, that is, an object that also requires many others to work with. Rather than letting the user create all those auxiliary objects, and then assign them to the main one, we would like to create an abstraction that allows all of that to be done in a single step. In order to achieve this, we will have a builder object that knows how to create all the parts and link them together, giving the user an interface (which could be a class method) to parametrize all the information about what the resulting object should look like.
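
A minimal sketch of the idea (the domain classes here are hypothetical) might look as follows:

class Engine:
    def __init__(self, horsepower: int):
        self.horsepower = horsepower


class Wheel:
    def __init__(self, size: int):
        self.size = size


class Car:
    def __init__(self, engine: Engine, wheels: list):
        self.engine = engine
        self.wheels = wheels


class CarBuilder:
    """Knows how to create all the auxiliary parts and link them together."""

    @classmethod
    def build(cls, horsepower: int = 100, wheel_size: int = 16) -> Car:
        # A single step for the user; all the wiring happens here
        return Car(Engine(horsepower), [Wheel(wheel_size) for _ in range(4)])


car = CarBuilder.build(horsepower=150)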

Structural patterns

Structural patterns are useful for situations where we need to create simpler interfaces or objects that are more powerful by extending their functionality without adding complexity to their interfaces.

The best thing about these patterns is that we can create more interesting objects, with enhanced functionality, and we can achieve this in a clean way; that is, by composing multiple single objects (the clearest example of this being the composite pattern), or by gathering many simple and cohesive interfaces.

Adapter

The adapter pattern is probably one of the simplest design patterns there are, and one of the most useful ones at the same time.

Also known as a wrapper, this pattern solves the problem of adapting interfaces of two or more objects that are not compatible.

We typically encounter a situation where part of our code works with a model or set of classes that are polymorphic with respect to a method. For example, if there are multiple objects for retrieving data, all with a fetch() method, then we want to maintain this interface so we don't have to make major changes to our code.

But then we come to a point where we need to add a new data source, and alas, this one won't have a fetch() method. To make things worse, not only is this type of object not compatible, but it is also not something we control (perhaps a different team decided on the API, and we cannot modify the code, or it is an object coming from an external library).

Instead of using this object directly, we adapt its interface to the one we need. There are two ways of doing this.

The first way would be to create a class that inherits from the one we need and create an alias for the method (if required, it will also have to adapt the parameters and the signature), which internally will adapt the call to make it compatible with the method we need.

By means of inheritance, we import the external class and create a new one that will define the new method, calling the one that has a different name. In this example, let's say the external dependency has a method named search(), which takes only one parameter for the search because it queries in a different fashion, so our adapter method not only calls the external one, but it also translates the parameters accordingly, as shown in the following code:

from _adapter_base import UsernameLookup


class UserSource(UsernameLookup):
    def fetch(self, user_id, username):
        user_namespace = self._adapt_arguments(user_id, username)
        return self.search(user_namespace)

    @staticmethod
    def _adapt_arguments(user_id, username):
        return f"{user_id}:{username}"

Taking advantage of the fact that Python supports multiple inheritance, we can use it to create our adapters (and even create a mixin class that's an adapter, as we have seen in previous chapters).

However, as we have seen many times before, inheritance comes with more coupling (who knows how many other methods are being carried from the external library?), and it's inflexible. Conceptually, it also wouldn't be the right choice because we reserve inheritance for situations of specification (an inheritance IS-A kind of relationship), and in this case, it's not clear at all that our object has to be one of the kinds that are provided by a third-party library (especially since we don't fully comprehend that object).

Therefore, a better approach would be to use composition instead. Assuming that we can provide our object with an instance of UsernameLookup, the code would be as simple as adapting the parameters and then redirecting the request to that instance, as shown in the following code:

class UserSource:
    ...

    def fetch(self, user_id, username):
        user_namespace = self._adapt_arguments(user_id, username)
        return self.username_lookup.search(user_namespace)

If we need to adapt multiple methods, and we can devise a generic way of adapting their signature as well, it might be worth using the __getattr__() magic method to redirect requests toward the wrapped object, but as always with generic implementations, we should be careful of not adding more complexity to the solution.

The use of __getattr__() might enable us to have a sort of "generic adapter"; something that can wrap another object and adapt all its methods by redirecting calls in a generic way. But we should really be careful with this because this method will create something so generic that it might be even riskier and have unanticipated side effects. If we want to perform transformations or extra functionality over an object, while keeping its original interface, the decorator pattern is a much better option, as we'll see later in this chapter.
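
For illustration only, and with the caveats just mentioned, such a generic adapter could be sketched as follows:

class GenericAdapter:
    """Wrap any object and redirect attribute lookups to it.

    Use with caution: errors will surface far away from their cause.
    """

    def __init__(self, wrapped):
        self._wrapped = wrapped

    def __getattr__(self, attr):
        # Only called for attributes not found on the adapter itself
        return getattr(self._wrapped, attr)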

Composite

There will be parts of our programs that require us to work with objects that are made out of other objects. We have base objects with well-defined logic, and then we have container objects that group a bunch of base objects, and the challenge is that we want to treat both of them (the base and container objects) uniformly, without noticing any differences.

The objects are structured in a tree hierarchy, where the basic objects are the leaves of the tree, and the composed objects are intermediate nodes. A client might want to call any of them to get the result of a method. The composite object, however, will also act as a client; it will pass the request along to all the objects it contains, whether they are leaves or other intermediate nodes, until they have all been processed.

Imagine a simplified version of an online store in which we have products. Say that we offer the possibility of grouping those products, and we give customers a discount per group of products. A product has a price, and this value will be asked for when the customers come to pay. But a set of grouped products also has a price that has to be computed. We will have an object that represents this group that contains the products, and that delegates the responsibility of asking the price to each particular product (which might be another group of products as well), and so on, until there is nothing else to compute.

The implementation of this is shown in the following code:

from typing import Union


class Product:
    def __init__(self, name: str, price: float) -> None:
        self._name = name
        self._price = price

    @property
    def price(self):
        return self._price


class ProductBundle:
    def __init__(
        self,
        name: str,
        perc_discount: float,
        *products: Union[Product, "ProductBundle"],
    ) -> None:
        self._name = name
        self._perc_discount = perc_discount
        self._products = products

    @property
    def price(self) -> float:
        total = sum(p.price for p in self._products)
        return total * (1 - self._perc_discount)

We expose the public interface through a property and keep the price value in a private attribute. The ProductBundle class uses this same property to compute the value with the discount applied, by first adding up the prices of all the products it contains.
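
A quick interactive check (with hypothetical products) shows how bundles compose, even when they contain other bundles:

>>> tv = Product("tv", 1000)
>>> soundbar = Product("soundbar", 300)
>>> home_cinema = ProductBundle("home-cinema", 0.1, tv, soundbar)
>>> home_cinema.price
1170.0
>>> mega = ProductBundle("mega-deal", 0.05, home_cinema, Product("cable", 10))
>>> mega.price
1121.0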

The only discrepancy between these objects is that they are created with different parameters. To be fully compatible, we would have had to mimic the same interface and add extra methods for adding products to the bundle, instead of the current interface, which creates complete objects in one go. Not needing those extra steps is an advantage that justifies this small difference.

Decorator

Don't confuse the decorator pattern with the concept of a Python decorator, which we have gone through in Chapter 5, Using Decorators to Improve Our Code. There is some resemblance, but the idea of the design pattern is quite different.

This pattern allows us to dynamically extend the functionality of some objects, without needing inheritance. It's a good alternative to multiple inheritance in creating more flexible objects.

We are going to create a structure that lets a user define a set of operations (decorations) to be applied over an object, and we'll see how each step takes place in the specified order.

The following code example is a simplified version of an object that constructs a query in the form of a dictionary from parameters that are passed to it (it might be an object that we would use for running queries to Elasticsearch, for instance, but the code leaves out distracting implementation details to focus on the concepts of the pattern).

In its most basic form, the query just returns the dictionary with the data it was provided when it was created. Clients expect to use the render() method of this object:

class DictQuery:
    def __init__(self, **kwargs):
        self._raw_query = kwargs

    def render(self) -> dict:
        return self._raw_query

Now we want to render the query in different ways by applying transformations to the data (filtering values, normalizing them, and so on). We could create decorators and apply them to the render method, but that wouldn't be flexible enough—what if we want to change them at runtime? Or if we want to select some of them, but not others?

The design is to create another object, with the same interface and the capability of enhancing (decorating) the original result through many steps, but that can be combined. These objects are chained, and each one of them does what it was originally supposed to do, plus something else. This something else is the particular decoration step.

Since Python has duck typing, we don't need to create a new base class and make these new objects part of that hierarchy, along with DictQuery. Simply creating a new class that has a render() method will be enough (again, polymorphism should not require inheritance). This process is shown in the following code:

class QueryEnhancer:
    def __init__(self, query: DictQuery):
        self.decorated = query

    def render(self):
        return self.decorated.render()


class RemoveEmpty(QueryEnhancer):
    def render(self):
        original = super().render()
        # Drop keys whose values evaluate to False
        return {k: v for k, v in original.items() if v}


class CaseInsensitive(QueryEnhancer):
    def render(self):
        original = super().render()
        # Normalize all values to lowercase
        return {k: v.lower() for k, v in original.items()}

The QueryEnhancer class has an interface that is compatible with what the clients of DictQuery expect, so they are interchangeable. This object is designed to receive a decorated one. It takes the values from it and transforms them, returning the modified version of the data.

If we want to remove from our original query all the values that evaluate to False, and also normalize the remaining values to lowercase, we can use the following schema:

>>> original = DictQuery(key="value", empty="", none=None, upper="UPPERCASE", title="Title")
>>> new_query = CaseInsensitive(RemoveEmpty(original))
>>> original.render()
{'key': 'value', 'empty': '', 'none': None, 'upper': 'UPPERCASE', 'title': 'Title'}
>>> new_query.render()
{'key': 'value', 'upper': 'uppercase', 'title': 'title'}

This is a pattern that we can also implement in different ways, taking advantage of the dynamic nature of Python, and the fact that functions are objects. We could implement this pattern with functions that are provided to the base decorator object (QueryEnhancer), and define each decoration step as a function, as shown in the following code:

from typing import Callable, Dict


class QueryEnhancer:
    def __init__(
        self,
        query: DictQuery,
        *decorators: Callable[[Dict[str, str]], Dict[str, str]],
    ) -> None:
        self._decorated = query
        self._decorators = decorators

    def render(self):
        current_result = self._decorated.render()
        # Apply each decoration step, in order, over the previous result
        for deco in self._decorators:
            current_result = deco(current_result)
        return current_result

With respect to the client, nothing has changed because this class maintains the compatibility through its render() method. Internally, however, this object is used in a slightly different fashion, as shown in the following code:

>>> query = DictQuery(foo="bar", empty="", none=None, upper="UPPERCASE", title="Title")
>>> QueryEnhancer(query, remove_empty, case_insensitive).render()
{'foo': 'bar', 'upper': 'uppercase', 'title': 'title'}

In the preceding code, remove_empty and case_insensitive are just regular functions that transform a dictionary.
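
For example, they could be defined as direct functional translations of the classes shown earlier:

def remove_empty(original: dict) -> dict:
    return {k: v for k, v in original.items() if v}


def case_insensitive(original: dict) -> dict:
    return {k: v.lower() for k, v in original.items()}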

In this example, the function-based approach seems easier to understand. There might be cases with more complex rules that rely on data from the object being decorated (not only its result), and in those cases, it might be worth going for the object-oriented approach, especially if we really want to create a hierarchy of objects where each class actually represents some knowledge we want to make explicit in our design.

Facade

Facade is an excellent pattern. It's useful in many situations where we want to simplify the interaction between objects. The pattern is applied where there is a many-to-many relationship among several objects that we want to have interact. Instead of creating all of these connections, we place an intermediate object in front of many of them that acts as a facade.

The facade works as a hub or a single point of reference in this layout. Every time a new object wants to connect to another one, instead of needing N interfaces for all N possible objects it might talk to (requiring O(N^2) connections in total), it will instead just talk to the facade, which will redirect the request accordingly. Everything that's behind the facade is completely opaque to the rest of the external objects.

Apart from the main and obvious benefit (the decoupling of objects), this pattern also encourages a simpler design with fewer interfaces and better encapsulation.

This is a pattern that we can use not only for improving the code of our domain problem but also to create better APIs. If we use this pattern and provide a single interface, acting as a single point of truth or entry point for our code, it will be much easier for our users to interact with the functionality exposed. Not only that, but by exposing a functionality and hiding everything behind an interface, we are free to change or refactor that underlying code as much as we want, because as long as it is behind the facade, it will not break backward compatibility, and our users will not be affected.

Note how this idea of using facades is not even limited to objects and classes; it also applies to packages (technically, packages are modules in Python, but still). We can use this idea of the facade to decide the layout of a package; that is, what is visible to the user and importable, and what is internal and should not be imported directly.

When we create a directory to build a package, we place the __init__.py file along with the rest of the files. This is the root of the module, a sort of facade. The rest of the files define the objects to export, but they shouldn't be directly imported by clients. The __init__.py file should import them and then clients should get them from there. This creates a better interface because users only need to know a single entry point from which to get the objects, and more importantly, the package (the rest of the files) can be refactored or rearranged as many times as needed, and this will not affect clients as long as the main API on the init file is maintained. It is of utmost importance to keep principles like this one in mind in order to build maintainable software.
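
As a minimal sketch (the package and file names are hypothetical):

# Layout of a hypothetical package:
#
#   mypackage/
#       __init__.py    <- the facade; the only supported entry point
#       _internal.py   <- implementation details, free to change
#
# Contents of mypackage/__init__.py:
from mypackage._internal import Client

__all__ = ["Client"]

# Client code imports only from the root of the package:
# from mypackage import Client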

There is an example of this in Python itself, with the os module. This module groups an operating system's functionality, but underneath, it uses the posix module on Portable Operating System Interface (POSIX) operating systems (the equivalent module is called nt on Windows platforms). The idea is that, for portability reasons, we shouldn't ever really import the posix module directly, but always the os module. It is up to this module to determine from which platform it is being called and expose the corresponding functionality.

Behavioral patterns

Behavioral patterns aim to solve the problem of how objects should cooperate, how they should communicate, and what their interfaces should be at runtime.

We mainly discuss the following behavioral patterns:

  • Chain of responsibility
  • Template method
  • Command
  • State

This can be accomplished statically by means of inheritance or dynamically by using composition. Regardless of what the pattern uses, what we will see throughout the following examples is that what these patterns have in common is the fact that the resulting code is better in some significant way, whether this is because it avoids duplication or creates good abstractions that encapsulate behavior accordingly and decouple our models.

Chain of responsibility

Now we are going to take another look at our event systems. We want to parse information about the events that happened on the system from the log lines (text files, dumped from our HTTP application server, for example), and we want to extract this information in a convenient way.

In our previous implementation, we achieved an interesting solution that was compliant with the open/closed principle and relied on the use of the __subclasses__() magic method to discover all possible event types and process the data with the right event, resolving the responsibility through a method encapsulated on each class.

This solution worked for our purposes, and it was quite extensible, but as we'll see, this design pattern will bring additional benefits.

The idea here is that we are going to create the events in a slightly different way. Each event still has the logic to determine whether or not it can process a particular log line, but it will also have a successor. This successor is a new event, the next one in the line, that will continue processing the text line in case the first one was not able to do so. The logic is simple—we chain the events, and each one of them tries to process the data. If it can, then it just returns the result. If it can't, it will pass it to its successor and repeat, as shown in the following code:

import re
from typing import Optional, Pattern


class Event:
    pattern: Optional[Pattern[str]] = None

    def __init__(self, next_event=None):
        self.successor = next_event

    def process(self, logline: str):
        if self.can_process(logline):
            return self._process(logline)
        if self.successor is not None:
            return self.successor.process(logline)

    def _process(self, logline: str) -> dict:
        parsed_data = self._parse_data(logline)
        return {
            "type": self.__class__.__name__,
            "id": parsed_data["id"],
            "value": parsed_data["value"],
        }

    @classmethod
    def can_process(cls, logline: str) -> bool:
        return (
            cls.pattern is not None and cls.pattern.match(logline) is not None
        )

    @classmethod
    def _parse_data(cls, logline: str) -> dict:
        if not cls.pattern:
            return {}
        if (parsed := cls.pattern.match(logline)) is not None:
            return parsed.groupdict()
        return {}


class LoginEvent(Event):
    pattern = re.compile(r"(?P<id>\d+):\s+login\s+(?P<value>\S+)")


class LogoutEvent(Event):
    pattern = re.compile(r"(?P<id>\d+):\s+logout\s+(?P<value>\S+)")

With this implementation, we create the event objects and arrange them in the particular order in which they are going to be processed. Since they all have a process() method, they are polymorphic for this message, so the order in which they are arranged is completely transparent to the client. Not only that, but the process() method always has the same logic: it tries to extract the information if the data provided is correct for the type of object handling it, and if not, it moves on to the next one in the line.

This way, we could process a login event in the following way:

>>> chain = LogoutEvent(LoginEvent())
>>> chain.process("567: login User")
{'type': 'LoginEvent', 'id': '567', 'value': 'User'}

Note how LogoutEvent received LoginEvent as its successor, and when it was asked to process something that it couldn't handle, it redirected to the correct object. As we can see from the type key on the dictionary, LoginEvent was the one that actually created that dictionary.

This solution is flexible enough and shares an interesting trait with our previous one—all conditions are mutually exclusive. As long as there are no collisions, and no piece of data has more than one handler, processing the events in any order will not be an issue.

But what if we cannot make such an assumption? With the previous implementation, we could still change the __subclasses__() call for a list that we made according to our criteria, and that would have worked just fine. And what if we wanted that order of precedence to be determined at runtime (by the user or client, for example)? That would be a shortcoming.

With the new solution, it's possible to accomplish such requirements because we assemble the chain at runtime so we can manipulate it dynamically as we need to.

For example, now we add a generic type that groups both the login and logout session events, as shown in the following code:

class SessionEvent(Event):
    pattern = re.compile(r"(?P<id>\d+):\s+log(in|out)\s+(?P<value>\S+)")

If for some reason, and in some part of the application, we want to capture this before the login event, this can be done by the following chain:

chain = SessionEvent(LoginEvent(LogoutEvent()))

By changing the order, we can, for instance, say that a generic session event has a higher priority than the login, but not the logout, and so on.

The fact that this pattern works with objects makes it more flexible with respect to our previous implementation, which relied on classes (and while they are still objects in Python, they aren't excluded from some degree of rigidity).

The template method

The template method is a pattern that yields important benefits when implemented properly. Mainly, it allows us to reuse code, and it also makes our objects more flexible and easier to change while preserving polymorphism.

The idea is that there is a class hierarchy that defines some behavior, let's say an important method of its public interface. All of the classes of the hierarchy share a common template and might need to change only certain elements of it. The idea, then, is to place this generic logic in the public method of the parent class that will internally call all other (private) methods, and these methods are the ones that the derived classes are going to modify; therefore, all the logic in the template is reused.

Astute readers might have noticed that we already implemented this pattern in the previous section (as part of the chain of responsibility example). Note that the classes derived from Event implement only one thing: their particular pattern. For the rest of the logic, the template is in the Event class. The process() method is generic, and relies on two auxiliary methods, can_process() and _process() (which in turn calls _parse_data()).

These extra methods rely on the pattern class attribute. Therefore, in order to extend this with a new type of object, we just have to create a new derived class and define its regular expression. After that, the rest of the logic will be inherited, taking this new attribute into account. This reuses a lot of code, because the logic for processing the log lines is defined once and only once in the parent class.

This makes the design flexible because preserving polymorphism is also easily achievable. If we need a new event type that for some reason needs a different way of parsing data, we only override this private method in that subclass, and compatibility will be kept, as long as it returns something of the same type as the original (complying with the Liskov substitution and open/closed principles). This is because it is the parent class that is calling the method from the derived classes.

This pattern is also useful if we are designing our own library or framework. By arranging the logic this way, we give users the ability to change the behavior of one of the classes quite easily. They would have to create a subclass and override the particular private method, and the result will be a new object with the new behavior that is guaranteed to be compatible with previous callers of the original object.

Command

The command pattern provides us with the ability to separate an action that needs to be done from the moment it is requested to the time it is actually executed. More than that, it can also separate the original request issued by a client from its recipient, which might be a different object. In this section, we are going to focus mainly on the first aspect of the pattern: the fact that we can separate how an order is to be run from when it actually executes.

We know we can create callable objects by implementing the __call__() magic method, so we could just initialize the object and then call it later on. In fact, if this is the only requirement, we might even achieve this through a nested function that, by means of a closure, creates another function to achieve the effect of delayed execution. But this pattern can be extended to ends that aren't so easily achievable.
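
For instance, if delayed invocation were the only requirement, a simple closure like the following sketch would do:

def schedule(func, *args, **kwargs):
    """Capture a call now, to be run later."""

    def deferred():
        return func(*args, **kwargs)

    return deferred


action = schedule(print, "hello")  # nothing runs yet
action()  # the call actually executes here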

The idea is that the command might also be modified after its definition. This means that the client specifies a command to run, and then some of its parameters might be changed, more options added, and so on, until someone finally decides to perform the action.

Examples of this can be found in libraries that interact with databases. For instance, in psycopg2 (a PostgreSQL client library), we establish a connection. From this, we get a cursor, and to that cursor, we can pass a SQL statement to run. When we call the execute method, the internal representation of the object changes, but nothing is actually run in the database. It is when we call fetchall() (or a similar method) that the data is actually queried and is available in the cursor.

The same happens with SQLAlchemy, the popular Object Relational Mapper (ORM). A query is defined through several steps, and once we have the query object, we can still interact with it (add or remove filters, change the conditions, apply an ordering, and so on), until we decide we want the results of the query. After calling each method, the query object changes its internal properties and returns self (itself).

These are examples that resemble the behavior that we would like to achieve. A very simple way of creating this structure would be to have an object that stores the parameters of the commands that are to be run. After that, it has to also provide methods for interacting with those parameters (adding or removing filters, and so on). Optionally, we can add tracing or logging capabilities to that object to audit the operations that have been taking place. Finally, we need to provide a method that will actually perform the action. This one can be just __call__() or a custom one. Let's call it do().
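
A minimal sketch of such an object (all names are illustrative, and the "execution" here just renders a string) could be:

class DatabaseCommand:
    """Accumulate parameters first; only do() performs the action."""

    def __init__(self, table: str):
        self._table = table
        self._filters: list = []

    def add_filter(self, condition: str) -> "DatabaseCommand":
        self._filters.append(condition)
        return self  # returning self allows chaining, as in the ORM example

    def do(self):
        # Only at this point is the command actually carried out
        where = " AND ".join(self._filters) or "1=1"
        return f"SELECT * FROM {self._table} WHERE {where}"


command = DatabaseCommand("users").add_filter("age > 21").add_filter("active")
result = command.do()  # execution happens here, not before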

This pattern can be useful when we're dealing with asynchronous programming. As we have seen, asynchronous programming has syntax nuances. By separating the preparation of a command from its execution, we can make the former still have the synchronous form, and the latter the asynchronous syntax (assuming this is the part that needs to run asynchronously, if, for example, we're using a library to connect to a database).

State

The state pattern is a clear example of reification in software design, making the concept of our domain problem an explicit object rather than just a side value (for example, using strings or integer flags to represent values or managing state).

In Chapter 8, Unit Testing and Refactoring, we had an object that represented a merge request, and it had a state associated with it (open, closed, and so on). We used an enumeration to represent those states because, at that point, they were just data holding a value (the string representation of that particular state). If they had to have some behavior, or the entire merge request had to perform some actions depending on its state and transitions, this would not have been enough.

The fact that we are adding behavior, a runtime structure, to a part of the code has to make us think in terms of objects, because that's what objects are supposed to do, after all. And here comes the reification—now the state cannot just simply be an enumeration with a string; it needs to be an object.

Imagine that we have to add some rules to the merge request, say that when it moves from open to closed, all approvals are removed (they will have to review the code again)—and that when a merge request is just opened, the number of approvals is set to zero (regardless of whether it's a reopened or brand-new merge request). Another rule could be that when a merge request is merged, we want to delete the source branch, and of course, we want to forbid users from performing invalid transitions (for example, a closed merge request cannot be merged, and so on).

If we put all that logic into a single place, namely in the MergeRequest class, we will end up with a class that has lots of responsibilities (a sign of a poor design), probably many methods, and a very large number of if statements. It would be hard to follow the code and to understand which part is supposed to represent which business rule.

It's better to distribute this into smaller objects, each one with fewer responsibilities, and the state objects are a good place for this. We create an object for each kind of state we want to represent, and, in their methods, we place the logic for the transitions with the aforementioned rules. The MergeRequest object will then have a state collaborator, and this, in turn, will also know about MergeRequest (the double-dispatching mechanism is needed to run the appropriate actions on MergeRequest and handle the transitions).

We define a base abstract class with the set of methods to be implemented, and then a subclass for each particular state we want to represent. Then the MergeRequest object delegates all the actions to state, as shown in the following code:

import abc
import logging

logger = logging.getLogger(__name__)


class InvalidTransitionError(Exception):
    """Raised when trying to move to a target state from an unreachable
    source state.
    """
class MergeRequestState(abc.ABC):
    def __init__(self, merge_request):
        self._merge_request = merge_request

    @abc.abstractmethod
    def open(self):
        ...

    @abc.abstractmethod
    def close(self):
        ...

    @abc.abstractmethod
    def merge(self):
        ...

    def __str__(self):
        return self.__class__.__name__


class Open(MergeRequestState):
    def open(self):
        self._merge_request.approvals = 0

    def close(self):
        # Closing an open merge request discards its approvals
        self._merge_request.approvals = 0
        self._merge_request.state = Closed

    def merge(self):
        logger.info("merging %s", self._merge_request)
        logger.info(
            "deleting branch %s",
            self._merge_request.source_branch,
        )
        self._merge_request.state = Merged


class Closed(MergeRequestState):
    def open(self):
        logger.info(
            "reopening closed merge request %s",
            self._merge_request,
        )
        self._merge_request.state = Open

    def close(self):
        """Current state."""

    def merge(self):
        raise InvalidTransitionError("can't merge a closed request")


class Merged(MergeRequestState):
    def open(self):
        raise InvalidTransitionError("already merged request")

    def close(self):
        raise InvalidTransitionError("already merged request")

    def merge(self):
        """Current state."""


class MergeRequest:
    def __init__(self, source_branch: str, target_branch: str) -> None:
        self.source_branch = source_branch
        self.target_branch = target_branch
        self._state = None
        self.approvals = 0
        self.state = Open

    @property
    def state(self):
        return self._state

    @state.setter
    def state(self, new_state_cls):
        # Instantiate the new state, passing this merge request to it
        self._state = new_state_cls(self)

    def open(self):
        return self.state.open()

    def close(self):
        return self.state.close()

    def merge(self):
        return self.state.merge()

    def __str__(self):
        return f"{self.target_branch}:{self.source_branch}"

The following list outlines some clarifications about implementation details and the design decisions that should be made:

  • The state is a property, so not only is it public, but there is also a single place with the definitions of how states are created for a merge request, passing self as a parameter.
  • The abstract base class is not strictly needed, but there are benefits to having it. First, it makes the kind of object we are dealing with more explicit. Second, it forces every substate to implement all the methods of the interface. There are two alternatives to this:
    • We could have not written the methods, letting an AttributeError be raised when trying to perform an invalid action, but this is not correct, and it doesn't express what happened.
    • Related to this point is the fact that we could have just used a simple base class and left those methods empty, but then the default behavior of not doing anything doesn't make it any clearer what should happen. If one of the methods in the subclass should do nothing (as in the case of merge), then it's better to let the empty method just sit there and make it explicit that for that particular case, nothing should be done, as opposed to forcing that logic to all objects.
  • MergeRequest and MergeRequestState have links to each other. The moment a transition is made, the previous state object will have no extra references to it and should be garbage-collected, so this relationship should always be 1:1. With some small and more detailed considerations, a weak reference might be used.

The following code shows some examples of how the object is used:

>>> mr = MergeRequest("develop", "mainline") 
>>> mr.open()
>>> mr.approvals
0
>>> mr.approvals = 3
>>> mr.close()
>>> mr.approvals
0
>>> mr.open()
INFO:log:reopening closed merge request mainline:develop
>>> mr.merge()
INFO:log:merging mainline:develop
INFO:log:deleting branch develop
>>> mr.close()
Traceback (most recent call last):
...
InvalidTransitionError: already merged request

The actions for transitioning states are delegated to the state object, which MergeRequest holds at all times (this can be any of the subclasses of MergeRequestState). They all know how to respond to the same messages (in different ways), so these objects will take the appropriate actions corresponding to each transition (deleting branches, raising exceptions, and so on), and will then move MergeRequest to the next state.

Since MergeRequest delegates all actions to its state object, we will find that the actions it needs to perform all take the form self.state.open(), and so on. Can we remove some of that boilerplate?

We could, by means of __getattr__(), as it is portrayed in the following code:

from typing import Type


class MergeRequest:
    def __init__(self, source_branch: str, target_branch: str) -> None:
        self.source_branch = source_branch
        self.target_branch = target_branch
        self._state: MergeRequestState
        self.approvals = 0
        self.state = Open

    @property
    def state(self) -> MergeRequestState:
        return self._state

    @state.setter
    def state(self, new_state_cls: Type[MergeRequestState]):
        self._state = new_state_cls(self)

    @property
    def status(self):
        return str(self.state)

    def __getattr__(self, method):
        return getattr(self.state, method)

    def __str__(self):
        return f"{self.target_branch}:{self.source_branch}"

Be careful when implementing this kind of generic redirection in your code, because it might harm readability. Sometimes it's better to keep a bit of boilerplate and be explicit about what our code does.

On the one hand, it is good that we reuse some code and remove repetitive lines. This also makes the abstract base class even more important: we want all possible actions documented and listed in a single place. That place used to be the MergeRequest class, but now those methods are gone, so the only remaining source of that truth is MergeRequestState. Luckily, the type annotation on the state attribute is really helpful for users who want to know where to look for the interface definition.

A user can simply take a look and see that everything MergeRequest doesn't have will be asked of its state attribute. From the annotation in __init__, we know that this is an object of the MergeRequestState type, and by looking at that interface, we will see that we can safely call the open(), close(), and merge() methods on it.

The null object pattern

The null object pattern is an idea that relates to the good practices that were mentioned in previous chapters of this book. Here, we are formalizing them, and giving more context and analysis to this idea.

The principle is rather simple—functions or methods must return objects of a consistent type. If this is guaranteed, then clients of our code can use the objects that are returned with polymorphism, without having to run extra checks on them.

In the previous examples, we explored how the dynamic nature of Python made things easier for most design patterns. In some cases, they disappear entirely, and in others, they are much easier to implement. The main goal of design patterns as they were originally thought of is that methods or functions should not explicitly name the class of the object that they need in order to work. For this reason, they propose the creation of interfaces and a way of rearranging the objects to make them fit these interfaces in order to modify the design. But most of the time, this is not needed in Python, and we can just pass different objects, and as long as they respect the methods they must have, then the solution will work.

On the other hand, the fact that objects don't necessarily have to comply with an interface requires us to be more careful about what is returned from such methods and functions. In the same way that our functions didn't make any assumptions about what they were receiving, it's fair to assume that clients of our code will not make any assumptions either (it is our responsibility to provide compatible objects). This can be enforced or validated with design by contract. Here, we will explore a simple pattern that will help us avoid these kinds of problems.

Consider the chain of responsibility design pattern explored in the previous section. We saw how flexible it is and its many advantages, such as decoupling responsibilities into smaller objects. One of the problems it has is that we never actually know what object will end up processing the message, if any. In particular, in our example, if there was no suitable object to process the log line, then the method would simply return None.

We don't know how users will use the data we passed, but we do know that they are expecting a dictionary. Therefore, the following error might occur:

AttributeError: 'NoneType' object has no attribute 'keys'

In this case, the fix is rather simple—the default value of the process() method should be an empty dictionary rather than None.

Ensure that you return objects of a consistent type.
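
To make the idea concrete, here is a small sketch of what the end of the chain could look like (the class and method names are illustrative and don't necessarily match the earlier example):

class BaseEvent:
    def __init__(self, successor=None):
        self.successor = successor

    def process(self, logline: str) -> dict:
        if self.can_process(logline):
            return self._process(logline)
        if self.successor is not None:
            return self.successor.process(logline)
        return {}  # end of the chain: an empty dict, never None

    def can_process(self, logline: str) -> bool:
        return False

    def _process(self, logline: str) -> dict:
        raise NotImplementedError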

But what if the method didn't return a dictionary, but a custom object of our domain?

To solve this problem, we should have a class that represents the empty state for that object and return it. If we have a class that represents users in our system, and a function that queries users by their ID, then in the case that a user is not found, it should do one of the following two things:

  • Raise an exception
  • Return an object of the UnknownUser type

But in no case should it return None. The value None doesn't represent what just happened, and the caller might legitimately try to call methods on it, which will fail with an AttributeError.

We have discussed exceptions and their pros and cons earlier on, so here we should mention that this null object should have the same methods as the original user and do nothing in each of them.
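
Here is a minimal sketch of that idea. The User class, its send_notification() method, and the fetch_user() function are all hypothetical, purely for illustration:

class User:
    def __init__(self, user_id: int, name: str) -> None:
        self.user_id = user_id
        self.name = name

    def send_notification(self, message: str) -> None:
        print(f"notifying {self.name}: {message}")


class UnknownUser(User):
    """Null object: same interface as User, but every action is a no-op."""

    def __init__(self) -> None:
        super().__init__(user_id=-1, name="unknown")

    def send_notification(self, message: str) -> None:
        """Deliberately do nothing: there is nobody to notify."""


def fetch_user(user_id: int, registry: dict) -> User:
    # Never return None; fall back to the null object instead.
    return registry.get(user_id, UnknownUser())

Callers can now invoke send_notification() on whatever fetch_user() returns, without checking for None first.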

The advantage of using this structure is that not only are we avoiding an error at runtime, but this object might also be useful. It could make the code easier to test, and it can even, for instance, help in debugging (maybe we could put logging into the methods to understand why that state was reached, what data was provided to it, and so on).

By exploiting almost all of the magic methods of Python, it would be possible to create a generic null object that does absolutely nothing, no matter how it is called, but that can be called from almost any client. Such an object would slightly resemble a Mock object. It is not advisable to go down that path because of the following reasons:

  • It loses meaning with the domain problem. Back in our example, having an object of the UnknownUser type makes sense, and gives the caller a clear idea that something went wrong with the query.
  • It doesn't respect the original interface. This is problematic. Remember that the point is that an UnknownUser is a user, and therefore it must have the same methods. If the caller accidentally asks for a method that is not there, it should raise an AttributeError exception, and that would be good. With the generic null object that can do anything and respond to anything, we would lose this information, and bugs might creep in. If we opt for creating a Mock object with spec=User, then this anomaly would be caught (as the sketch after this list shows), but using a Mock object to represent what is actually an empty state doesn't match our intention of providing clear, understandable code.
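
For reference, this is how the spec argument catches such a mistake, reusing the illustrative User class sketched above:

from unittest.mock import Mock

fake_user = Mock(spec=User)
fake_user.send_notification("hi")  # fine: part of the User interface
fake_user.get_name()  # raises AttributeError: not defined on User

The Mock restricts its attributes to those of User, so typos and interface drift surface immediately; but, as noted, a Mock in production code is still a poor substitute for a proper domain-specific null object.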

This pattern is a good practice that allows us to maintain polymorphism in our objects.

Final thoughts about design patterns

We have seen the world of design patterns in Python, and in doing so, we have found solutions to common problems, as well as more techniques that will help us achieve a clean design.

All of this sounds good, but it raises the question: how good are design patterns? Some people argue that they do more harm than good, that they were created for languages whose limited type system (and lack of first-class functions) makes it impossible to accomplish things we would normally do in Python. Others claim that design patterns force a design solution, creating a bias that limits a design that would have otherwise emerged, and that would have been better. Let's look at each of these points in turn.

The influence of patterns over the design

A design pattern cannot be good or bad by itself, but rather by how it's implemented, or used. In some cases, there is no need for a design pattern when a simpler solution would do. Trying to force a pattern where it doesn't fit is a case of over-engineering, and that's clearly bad, but it doesn't mean that there is a problem with the design pattern, and most likely in these scenarios, the problem is not even related to patterns at all. Some people try to over-engineer everything because they don't understand what flexible and adaptable software really means.

As we mentioned before in this book, making good software is not about anticipating future requirements (there is no point in doing futurology), but just solving the problem that we have at hand right now, in a way that doesn't prevent us from making changes to it in the future. It doesn't have to handle those changes now; it just needs to be flexible enough so that it can be modified in the future. And when that future comes, we will still have to remember the rule of three: wait for three or more instances of the same problem before coming up with a generic solution or a proper abstraction.

This is typically the point where the design patterns should emerge, once we have identified the problem correctly and can recognize the pattern and abstract accordingly.

Let's come back to the topic of the suitability of the patterns to the language. As we said in the introduction of the chapter, design patterns are high-level ideas. They typically refer to the relation of objects and their interactions. It's hard to think that such things might disappear from one language to another.

It's true that some patterns require less work in Python, as is the case with the iterator pattern (which, as discussed earlier in the book, is built into Python), or the strategy pattern (because we can just pass functions around like any other regular object; we don't need to encapsulate the strategy method into an object, as the function itself would be that object).
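
A quick sketch of that strategy-as-function idea (the data and the key functions are made up for illustration):

# Each "strategy" is just a plain function.
def by_name(user: dict) -> str:
    return user["name"]


def by_age(user: dict) -> int:
    return user["age"]


users = [{"name": "Ana", "age": 31}, {"name": "Bo", "age": 25}]

# The strategy is passed around like any other object; no class needed.
print(sorted(users, key=by_name))
print(sorted(users, key=by_age))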

But other patterns are actually needed, and they indeed solve problems, as in the case of the decorator and composite patterns. In other cases, there are design patterns that Python itself implements, and we just don't always see them, as in the case of the facade pattern that we discussed earlier in the chapter.

As for the concern that design patterns might lead our solution in the wrong direction, we have to be careful here. Once again, it's better if we start designing our solution by thinking in terms of the domain problem and creating the right abstractions, and then later see whether a design pattern emerges from that design. Let's say that it does. Is that a bad thing? The fact that there is already a solution to the problem we're trying to solve cannot be a bad thing. It would be bad to reinvent the wheel, as happens many times in our field. Moreover, the fact that we are applying a pattern, something already proven and validated, should give us greater confidence in the quality of what we are building.

Design patterns as theory

One interesting way I see design patterns is as software engineering theory. While I agree with the idea that the more naturally the code evolves, the better, that doesn't mean we should ignore design patterns completely.

Design patterns exist because there's no point in reinventing the wheel. If there's a solution that has already been devised for a particular kind of problem, it will save us some time to ponder that idea as we plan our design. In this sense (and to re-invoke an analogy from the first chapter), I like to think about design patterns as analogous to chess openings: professional chess players don't think about every combination in the early stages of a game. That's the theory. It's already been studied. It's the same as with a math or physics formula. You should understand it deeply the first time, know how to infer it, and incorporate its meaning, but after that, there's no need to develop that theory over and over again.

As practitioners of software engineering, we should use the theory of design patterns to save mental energy and come up with solutions faster. More than that, design patterns should become not only a shared language but building blocks as well.

Names in our models

Should we mention that we are using a design pattern in our code?

If the design is good and the code is clean, it should speak for itself. It is not recommended that you name things after the design patterns you are using for a couple of reasons:

  • Users of our code and other developers don't need to know the design pattern behind the code, as long as it works as intended.
  • Stating the design pattern undermines the intention-revealing principle. Adding the name of the design pattern to a class makes it lose part of its original meaning. If a class represents a query, it should be named Query or EnhancedQuery, something that reveals the intention of what that object is supposed to do. EnhancedQueryDecorator doesn't convey anything meaningful, and the Decorator suffix creates more confusion than clarity.

Mentioning the design patterns in docstrings might be acceptable because they work as documentation, and expressing the design ideas (again, communicating) in our design is a good thing. However, it should not be necessary; most of the time, we do not need to know that a design pattern is there.

The best designs are those in which design patterns are completely transparent to the users. An example of this is how the facade pattern appears in the standard library, making it completely transparent to users as to how to access the os module. An even more elegant example is how the iterator design pattern is so completely abstracted by the language that we don't even have to think about it.

Summary

Design patterns have always been seen as proven solutions to common problems. This is a correct assessment, but in this chapter, we explored them from the point of view of good design techniques, patterns that leverage clean code. In most of the cases, we looked at how they provide a good solution to preserve polymorphism, reduce coupling, and create the right abstractions that encapsulate details as needed—all traits that relate to the concepts explored in Chapter 8, Unit Testing and Refactoring.

Still, the best thing about design patterns is not the clean design we can obtain from applying them, but the extended vocabulary. Used as a communication tool, we can use their names to express the intention of our design. And sometimes, it's not the entire pattern that we need to apply; we might take a particular idea (a substructure, for example) of a pattern into our solution, and here, too, they prove to be a way of communicating more effectively.

When we create solutions by thinking in terms of patterns, we are solving problems at a more general level. Thinking in terms of design patterns brings us closer to higher-level design. We can slowly "zoom-out" and think more in terms of architecture. And now that we are solving more general problems, it's time to start thinking about how the system is going to evolve and be maintained in the long run (how it's going to scale, change, adapt, and so on).

For a software project to be successful in these goals, it requires clean code at its core, but the architecture has to be clean as well, which is what we are going to look at in the next chapter.

