13

Managing Coupling

Coupling is one of the most important ideas to grapple with when we start thinking about how to manage complexity.

Coupling is defined as “the degree of interdependence between software modules; a measure of how closely connected two routines or modules are; the strength of the relationships between modules.”1

Coupling is an essential part of any system, and in software we are often lax in discussing it. We often talk about the value of more loosely coupled systems, but let’s be clear: if the components of your software system are perfectly decoupled, then they can’t communicate with one another. This may, or may not, be helpful.

Coupling is not something that we can, or should, aim to always wholly eliminate.

Cost of Coupling

However, coupling is the thing that impacts most directly on our ability to reliably, repeatably, and sustainably create and deliver software. Managing the coupling in our systems, and in the organizations that create them, is front and center in our ability to create software at any kind of scale or complexity.

The real reason why attributes of our systems like modularity and cohesion and techniques like abstraction and separation of concerns matter is because they help us reduce the coupling in our systems. This reduction has a direct impact on the speed and efficiency with which we can make progress and on the scalability and reliability of both our software and our organizations.

1. Source: Wikipedia, https://en.wikipedia.org/wiki/Coupling_(computer_programming)

If we don’t take the issues and costs of coupling seriously, then we create big balls of mud in software, and we create organizations that find it impossible to make or release any change into production. Coupling is a big deal!

In the previous chapter, we explored how abstraction could help us break some of the chains that bind even tiny pieces of software together. If we decide not to abstract, then our code is tightly coupled, forcing us to worry about changes in one part of the system compromising the behavior of code in another.

If we don’t separate the concerns of essential and accidental complexity, then our code is tightly coupled and now we must worry about sometimes horribly complex ideas like concurrency, while also being comfortable that our account balance adds up correctly. This is not a nice way to work!

This does not mean that tight coupling is bad and loose coupling is good; I am afraid it is not that simple.

In general, though, by far the most common way for developers and teams to make a big mistake is in the direction of overly tight coupling. There are costs to “too loose coupling,” but they are generally much lower costs than the costs of “too tight coupling.” So, in general, we should aim to prefer looser coupling over tighter coupling, but also to understand the trade-offs that we make when we make that choice.

Scaling Up

Perhaps the biggest commercial impact of coupling is on our ability to scale up development. The message may not yet have reached everyone that it should, but we learned a long time ago that you don’t get better software faster by throwing people at the problem. There is a fairly serious limit on the size of a software development team before adding more people slows it down (refer to Chapter 6).

The reason for this is coupling. If your team and my team are developmentally coupled, we could maybe work to coordinate our releases. We could imagine tracking changes, and each time I change my code, you are informed of it in some way. That may work for a very small number of people and teams, but it quickly gets out of hand. The overhead of keeping everyone in step rapidly spirals out of control.

There are ways in which we can minimize this overhead and make this coordination as efficient as possible. The best way to do this is through continuous integration. We will keep all our code in a shared space, a repository, and each time any of us changes anything, we will check that everything is still working. This is important for any group of people working together; even small groups of people benefit from the clarity that continuous integration brings.

This approach also scales significantly better than nearly everyone expects. For example, Google and Facebook do this for nearly all of their code. The downside of scaling up in this way is that you have to invest heavily in the engineering around repositories, builds, CI, and automated testing to get feedback on changes quickly enough to steer development activities. Most organizations are unable or unwilling to invest enough in the changes necessary to make this work.2

You can think of this strategy as coping with the symptoms of coupling. We make the feedback so fast and so efficient that even when our code, and our teams, are coupled, we can still make efficient progress.

Microservices

The other strategy that makes sense is to decouple or at least reduce the level of coupling. This is the microservices approach. Microservices are the most scalable way to build software, but they aren’t what most people think they are. The microservice approach is considerably more complex than it looks and requires a fair degree of design sophistication to achieve.

As you may have gathered from this book, I am a believer in the service model for organizing our systems. It is an effective tool for drawing lines around modules and making concrete the seams of abstraction that we discussed in the previous chapter. It is important to recognize, though, that these advantages are true, independently of how you choose to deploy your software. They also predate, by several decades, the idea of microservices.

The term microservices was first used in 2011. There was nothing new in microservices: all of the practices and approaches had been used, often widely, before, but the microservice approach put a collection of these ideas together to define what a microservice was. There are a few different definitions, but this is the list that I use.

Microservices are as follows:

  • Small

  • Focused on one task

  • Aligned with a bounded context

  • Autonomous

  • Independently deployable

  • Loosely coupled

I am sure that you can see that this definition closely aligns with the way that I describe good software design.

2. My other book Continuous Delivery describes the practices that are necessary to scale up these aspects of software engineering. See https://amzn.to/2WxRYmx.

The trickiest idea here is that the services are “independently deployable.” Independently deployable components of software have been around for a long time in lots of different contexts, but now they are a central part of the definition of an architectural style.

This is the key defining characteristic of microservices; without this idea, they don’t introduce anything new.

Service-based systems were using semantic messaging from at least the early 1990s, and all of the other commonly listed characteristics of microservices were also in fairly common use by teams building service-based systems. The real value in microservices is that we can build, test, and deploy them independently of other services that they run alongside, and even of other services that they interact with.

Think what this means for a moment. If we can build a service and deploy it independently of other services, that means we don’t care what version those other services are at. It means that we don’t need to test our service with those other services prior to its release. This ability wins us the freedom to focus on the now simple module in front of us: our service.

Our service will need to be cohesive so that it is not too dependent on other services or other code. It needs to be very loosely coupled with respect to other services so that it, or they, can change without either one breaking the other. If not, we won’t be able to deploy our service without testing it with those other services before we release, so it isn’t independently deployable.

This independence, and its implications, are commonly missed by teams that think that they are implementing a microservice approach but have not decoupled their services sufficiently to trust that each service can be deployed without first testing it with the other services that collaborate with it.

Microservices is an organizational-scaling pattern. That is its advantage. If you don’t need to scale up development in your organization, you don’t need microservices (although “services” may be a great idea).

Microservices allow us to scale our development function by decoupling the services from one another and, vitally, decoupling the teams that produce those services from one another.3

Now your team can make progress at its own pace, irrespective of how fast or slow my team is moving. You don’t care what version my service is because your service is sufficiently loosely coupled to allow you not to care.

There is a cost to this decoupling. The service itself needs to be designed to be more flexible in the face of change in its collaborators. We need to adopt design strategies that insulate our service from change in other places. We need to break developmental coupling so that we can work independently of one another. This cost is the reason that microservices may be the wrong choice if you don’t need to scale up your team.

Independent deployability comes at a cost, like everything else. The cost is that we need to design our service to be better abstracted, better insulated, and more loosely coupled in its interactions with other services. There are a variety of techniques that we can use to achieve this, but all of them add to the complexity of our service and to the scale of the design challenge that we undertake.

3. In 1967, Melvin Conway made the observation now known as Conway’s law: “Any organization that designs a system (defined broadly) will produce a design whose structure is a copy of the organization's communication structure.”

Decoupling May Mean More Code

Let’s try to pick some of these costs apart so that we can better understand them. As ever, there is a cost to pay for the decisions that we make. That is the nature of engineering; it is always a game of trade-offs. If we choose to decouple our code, we are almost certainly going to write more code, at least to start with.

This is one of the common design mistakes that many programmers make. There is an assumption that “less code is good” and “more code is bad,” but that is not always the case, and here is a key point at which that is decidedly not the case. Let’s revisit once again the trivial example that we have used in previous chapters. Listing 13.1 shows once again the code to add an item.

Listing 13.1 One Cohesion Example (Yet Again)

def add_to_cart1(self, item):
    self.cart.add(item)
    conn = sqlite3.connect('my_db.sqlite')
    cur = conn.cursor()
    cur.execute('INSERT INTO cart (name, price) VALUES (?, ?)', (item.name, item.price))
    conn.commit()
    conn.close()
    return self.calculate_cart_total()

Here we have eight lines of code, if we ignore the blank lines. If we make this code better by abstracting a method, I hope that we’d all agree that it is better, but we do need to add some more lines of code.

In Listing 13.2, the reduction in coupling, improved cohesion, and better separation of concerns has cost us two additional lines of code. If we took the next step—of introducing a new module or class that we passed as a parameter—we’d add several more lines to further improve our design.

Listing 13.2 Reducing Coupling

def add_to_cart1(self, item):
    self.cart.add(item)
    self.store_item(item)
    return self.calculate_cart_total()

def store_item(self, item):
    conn = sqlite3.connect('my_db.sqlite')
    cur = conn.cursor()
    cur.execute('INSERT INTO cart (name, price) VALUES (?, ?)', (item.name, item.price))
    conn.commit()
    conn.close()
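As a sketch of that next step, we could pass the storage concern in as a collaborator, so that the cart no longer knows anything about SQLite at all. The class names here are mine, not part of the book’s running example, and the code is deliberately simplified:

```python
import sqlite3
from dataclasses import dataclass


@dataclass
class Item:
    name: str
    price: float


class SqliteItemStore:
    """Hides every persistence detail behind a one-method interface."""

    def __init__(self, db_path='my_db.sqlite'):
        self.conn = sqlite3.connect(db_path)
        self.conn.execute('CREATE TABLE IF NOT EXISTS cart (name TEXT, price REAL)')

    def store_item(self, item):
        # 'with' wraps the insert in a transaction and commits on success.
        with self.conn:
            self.conn.execute('INSERT INTO cart (name, price) VALUES (?, ?)',
                              (item.name, item.price))


class ShoppingCart:
    def __init__(self, store):
        self.store = store      # any object with a store_item(item) method
        self.items = []

    def add_to_cart(self, item):
        self.items.append(item)
        self.store.store_item(item)
        return self.calculate_cart_total()

    def calculate_cart_total(self):
        return sum(i.price for i in self.items)
```

More lines again, but the cart is now coupled only to a tiny interface: anything that offers `store_item` will do, which is exactly the kind of loosening this chapter argues for.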

I have heard programmers reject the approach to design that I describe in this book, and I have heard others reject the use of automated testing because “I have to type more.” These programmers are optimizing for the wrong things.

Code is a means of communication, and it is primarily a means of communication to other human beings, not to computers.

Our aim is to make our lives and the lives of other humans who interact with our code easier. This means that readability isn’t an effete, abstract property of code that is only meaningful for people who nerd out about style and aesthetics. Readability is a fundamental property of good code. It has a direct economic impact on the value of that code.

So taking care that our code and systems are understandable is important. It’s more than that, though. Taking a dumb, naive approach to evaluating efficiency by counting the characters that we type is ridiculous. The kind of unstructured, coupled code in Listing 13.1 may be fewer lines of code when we are looking at eight lines. If this function were 800 lines, though, it is much more likely that there would be duplication and redundancy. Managing the complexity of our code is important for many reasons, but one of those reasons is that it significantly helps us spot redundancy and duplication and remove them.

In real systems, we end up with less code by thinking carefully, designing well, and communicating clearly through code, not by counting how many characters we type.

We should optimize for thinking, not for typing!

Loose Coupling Isn’t the Only Kind That Matters

Michael Nygard4 has an excellent model to describe coupling. He divides it into a series of categories (see Table 13.1).

Table 13.1 The Nygard Model of Coupling

  Type           Effect
  Operational    A consumer can’t run without a provider
  Developmental  Changes in producers and consumers must be coordinated
  Semantic       Change together because of shared concepts
  Functional     Change together because of shared responsibility
  Incidental     Change together for no good reason (e.g., breaking API changes)

This is a useful model, and the design of our systems has an impact on all of these types of coupling. If you can’t release your changes into production unless I am finished with mine, then we are developmentally coupled. We can address that coupling by the choices we make in our design.

4. Michael Nygard is a software architect and author of Release It. He has presented his model of coupling in an excellent talk at several conferences: https://bit.ly/3j2dGIP.

If my service can’t start unless yours is already running, then our services are operationally coupled, and, once again, we can choose to address that through the design of our systems.
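For example, operational coupling can be loosened by designing the consumer to degrade gracefully when its provider is unavailable, rather than refusing to run at all. A minimal sketch, with hypothetical names and a deliberately simplified fallback strategy:

```python
def get_exchange_rate(provider, fallback_rate=1.0):
    """Ask the provider service, but don't die with it.

    'provider' is any callable that returns a rate; on a connection
    failure we fall back to a cached/default rate instead of failing.
    """
    try:
        return provider()
    except ConnectionError:
        return fallback_rate        # degraded answer, but still running


def price_in_euros(amount, provider):
    # The consumer keeps working whether or not the provider is up.
    return amount * get_exchange_rate(provider)
```

The consumer still prefers the provider’s answer when it is available; it simply no longer *requires* the provider to be running, which is the essence of reducing operational coupling.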

Recognizing these different kinds of coupling is a good step forward. Thinking about them and deciding which to address and how to address them is another.

Prefer Loose Coupling

As we have seen, loose coupling comes at a cost, and the cost of more lines of code can also end up being a cost in performance.

Coupling Can Be Too Loose

Many years ago I did some consultancy for a large finance company. They had a rather serious performance problem with an important order-management system that they had built. I was there to see if I could help them improve the performance of the system.

The architect responsible for the design was very proud of the fact that they had “followed best practice.” His interpretation of “best practice” was to reduce coupling and increase abstraction, both good things in my opinion, but one of the ways that the team had done this was to create a completely abstract schema for their relational database. The team was proud of the fact that they could store “anything” in their database.

What they had done was, in essence, create a “name-value pair” store mixed with a kind of custom “star schema” that used a relational database as the store. More than that, though, each element in a “record” as far as their application was concerned was a separate record in the database, along with links that allowed you to retrieve sibling records. This meant that it was highly recursive.

The code was very general, very abstract, but if you wanted to load almost anything, it involved hundreds, and sometimes thousands, of interactions with the database to pull the data out before you could operate on it.

Too much abstraction and too much decoupling can be harmful!

It is important then to be aware of these potential costs and not take our abstraction and decoupling too far, but as I said earlier, the vastly more common failure is the inverse. Big balls of mud are much more common than overly abstract, overly decoupled designs.

I spent the latter part of my career working in very high-performance systems, so I take performance in design seriously. However, it is a common mistake to assume that high-performance code is messy and can’t afford too many function or method calls. This is old-school thinking and should be dismissed.

The route to high performance is simple, efficient code, and these days, for most common languages and platforms, it’s simple, efficient code that can be easily and, even better, predictably, understood by our compilers and hardware. Performance is not an excuse for a big ball of mud!

Even so, I can accept the argument that, within high-performance blocks of code, we should tread a little carefully with the level of decoupling.

The trick is to draw the seams of abstraction so that high-performance parts of the system fall on one side of that line or another so that they are cohesive, accepting that the transition from one service, or one module, to another will incur additional costs.

At these interfaces between services, prefer looser coupling, to the extent that each service hides its details from the others. These interfaces are the more significant points in the design of your system; they should be treated with more care and allowed to come at a somewhat higher cost in terms of runtime overhead as well as lines of code. This is an acceptable trade-off and a valuable step toward more modular, more flexible systems.

How Does This Differ from Separation of Concerns?

It may seem that loose coupling and separation of concerns are similar ideas, and they are certainly related. However, it is perfectly reasonable to have two pieces of code that are tightly coupled, but with a very good separation of concerns or loosely coupled with a poor separation of concerns.

The first of these is easy to imagine. We could have a service that processes orders and a service that stores the orders. This is a good separation of concerns, but the information that we send between them may be detailed and precise. It may require that both services change together. If one service changes its concept of an “order,” it may break the other, so they are tightly coupled.
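A tiny, hypothetical sketch of that first case, with two functions standing in for the two services. The separation of concerns is clean, but both sides depend on the exact shape of the message, so a change to one forces a change to the other:

```python
def process_order(order_lines):
    """Order-processing 'service': builds the message sent to storage."""
    # Both services must agree on these exact keys. Rename 'sku' here
    # without changing store_order below, and storage breaks: the two
    # services must change together, so they are tightly coupled.
    return [{'sku': sku, 'qty': qty} for sku, qty in order_lines]


def store_order(order_message):
    """Order-storage 'service': relies on the very same keys."""
    return {line['sku']: line['qty'] for line in order_message}
```

Each function has a single, well-separated concern, yet the shared message format semantically couples them.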

The second, loosely coupled but with a poor separation of concerns, is probably a little more difficult to imagine in a real system, though easy enough to think of in the abstract.

We could imagine two services that manage two separate accounts of some kind and one account sending money to credit the other. Let’s imagine that our two accounts exchange information asynchronously, via messages.

Account A sends message “Account A Debited by X, Credit Account B.” Sometime later, Account B sees the message and credits itself with the funds. The transaction here is divided between the two distinct services. What we want to happen is that money moves from one account to the other. That is the behavior, but it is not cohesive; we are removing funds in one place and adding them in another, even though there needs to be some sense of overall “transaction” going on here.

If we implemented this as I have described, it would be a very bad idea. It’s overly simplistic and doomed to failure. If there was a problem in transmission somewhere, money could vanish.

We’d definitely need to do more work than that: perhaps establish some kind of protocol to check that the two ends of the transaction were in step. Then we could confirm that if the money was removed from the first account, it certainly arrived in the second. We could still imagine doing this in a way that was loosely coupled, technically if not semantically.
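One minimal sketch of such a protocol, with entirely hypothetical names and mechanics: the sender keeps a transfer “pending” until it is acknowledged, so unconfirmed transfers can be resent, and the receiver deduplicates by message ID so that resends are safe:

```python
import uuid


class AccountA:
    def __init__(self, balance):
        self.balance = balance
        self.pending = {}           # transfer_id -> amount, until confirmed

    def send(self, amount):
        transfer_id = str(uuid.uuid4())
        self.balance -= amount
        self.pending[transfer_id] = amount  # not "done" until acknowledged
        return {'id': transfer_id, 'amount': amount}

    def on_ack(self, transfer_id):
        self.pending.pop(transfer_id)       # both ends now agree

    def unconfirmed(self):
        return list(self.pending)           # candidates for resend


class AccountB:
    def __init__(self, balance):
        self.balance = balance
        self.seen = set()           # dedupe, so resent messages are safe

    def on_message(self, msg):
        if msg['id'] not in self.seen:      # idempotent receipt
            self.seen.add(msg['id'])
            self.balance += msg['amount']
        return msg['id']                    # the acknowledgment
```

The two accounts remain loosely coupled, exchanging only messages and acknowledgments, yet no money can silently vanish: a lost message simply stays in `pending` until it is resent and confirmed.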

DRY Is Too Simplistic

DRY is short for “don’t repeat yourself.” It is a shorthand description of our desire to have a single canonical representation of each piece of behavior in our system. This is often good advice, but not always. As ever, it is more complex than that.

DRY is excellent advice within the context of a single function, service, or module. Beyond that, I would extend it at most to the scope of a version control repository or a deployment pipeline. It comes at a cost, though, and sometimes a very significant one, when applied between services or modules, particularly if they are developed independently.

The problem is that having one canonical representation of any given idea across a whole system increases coupling, and the cost of coupling can exceed the cost of duplication.

This is a balancing act.

Dependency management is an insidious form of developmental coupling. If your service and my service share the use of a library of some kind and you are forced to update your service when I update mine, then our services and our teams are developmentally coupled.

This coupling will have a profound impact on our ability to work autonomously and to make progress on the things that matter to us. It may be a problem for you to hold your release until you have changed to consume the new version of the library that my team imposed upon you. Or it may be a pain because you were in the middle of some other piece of work that this change now makes more difficult.

The advantage of DRY is that when something changes, we need to change it in only one place; the disadvantage is that every place that uses that code is coupled in some way.

From an engineering standpoint, there are some tools that we can use to help us. The most important one is the deployment pipeline.

In continuous delivery, a deployment pipeline is meant to give us clear, definitive feedback on the releasability of our systems. If the pipeline says “everything looks good,” then we are safe to release with no further work. That implicitly says something important about the scope of a deployment pipeline; it should be “an independently deployable unit of software.”

So, if our pipeline says all is good, we can release; that gives us a sensible scope to use for DRY. DRY should be the guiding principle within the scope of a deployment pipeline but should be actively avoided between pipelines.

So if you are creating a microservice-based system, with each service being independently deployable, and each service having its own deployment pipeline, you should not apply DRY between microservices. Don’t share code between microservices.

This is interesting and sort of foundational to the thinking that prompted me to write this book. It is not random chance or an accident that my advice on coupling is related to something that may seem distant. Here is a line of reasoning that starts from a fairly basic idea in computer science, coupling, and links it, through design and architecture, to something that seemingly has to do with how we build and test our software: a deployment pipeline.

This is part of the engineering philosophy and approach that I am attempting to describe and promote here.

If we follow a line of reasoning, from ideas like the importance of getting great feedback on our work, creating efficient, effective approaches to learning as our work proceeds, and dividing our work into parts that allow us to deal with the complexity of the systems that we create and of the human systems that allow us to create them, then we end up here.

By working so that our software is always in a releasable state, the core tenet of continuous delivery, we are forced to consider deployability and the scope of our deployment pipelines. By optimizing our approach so that we can learn quickly and fail fast if we make a mistake, which is the goal of the first section of this book, then we are forced to address the testability of our systems. This guides us to create code that is more modular, more cohesive, has better separation of concerns, and has better lines of abstraction that keep change isolated and loosely coupled.

All of these ideas are linked. All reinforce one another, and if we take them seriously and adopt them as the foundations for how we approach our work, they result in us creating better software faster.

Whatever software engineering is, if it doesn’t help us create better software faster, it doesn’t count as “engineering.”

Async as a Tool for Loose Coupling

The previous chapter discussed the leakiness of abstractions. One of those leaky abstractions is the idea of synchronous computing across process boundaries.

As soon as we establish such a boundary, whatever its nature, any idea of synchrony is an illusion, and that illusion comes at a cost.

The leakiness of this abstraction is most dramatic when thinking about distributed computing. If service A communicates with service B, consider all the places where this communication can fail if a network separates them.

The illusion, the leaky abstraction, of synchrony can exist, but only to the point where one of these failures happens—and they will happen. Figure 13.1 shows the places where a distributed conversation can go wrong.

Figure 13.1
Failure points in synchronous communications

  1. There may be a bug in A.

  2. A may fail to establish a connection to the network.

  3. The message may be lost in transmission.

  4. B may fail to establish a connection to the network.

  5. There may be a bug in B.

  6. The connection to the network may fail before B can send a response.

  7. The response may be lost in transmission.

  8. A may lose the connection before it has the response.

  9. There may be a bug in A’s handling of the response.

Apart from 1 and 9, each of the points of failure listed is a leak in the abstraction of synchronous communications. Each adds to the complexity of dealing with errors. Nearly all of these errors could leave A and B out of step with one another, further compounding the complexity. Only some of these failures are detectable by the sender, A.

Now imagine that A and B are communicating about some business-level behavior as though this conversation was synchronous. At the point that something like a connection problem or a dropped message on the network happens, this technical failure intrudes into the business-level conversation.

This kind of leak can be mitigated significantly by more closely representing what is really going on. Networks are really asynchronous communications devices; communication in the real world is asynchronous.

If you and I converse, my brain doesn’t freeze awaiting a response after I have asked you a question; it carries on doing other things. A better abstraction, closer to reality, will leak in less unpleasant ways.

This is not really the place to go into too much detail of specific approaches to design, but I am a believer in treating process boundaries as asynchronous and communicating between distributed services and modules via only asynchronous events. For complex distributed systems, this approach significantly reduces the impact of abstraction leaks and reduces the coupling to the underlying accidental complexity that sits beneath our systems.

Imagine for a moment the impact of a reliable, asynchronous messaging system on the list of failure points in Figure 13.1. All of the same failures can occur, but if Service A only sends asynchronous messages, and some time later receives only a new asynchronous message in return, then Service A doesn’t need to worry about any of the failures after step 2. If a meteorite has hit the data center that contains Service B, then we can rebuild the data center, redeploy a copy of Service B, and resubmit the message that Service A sent originally. Although rather late, all the processing continues in precisely the same way as though the whole conversation had taken only a few microseconds.
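The shape of that interaction can be sketched with an ordinary in-process queue standing in for real, durable messaging middleware (a deliberate simplification; the processing step is a stand-in too):

```python
from queue import Queue

outbox = Queue()                    # stands in for durable messaging middleware


def service_a_submit(payload):
    """A only sends; it never waits on B and never sees B's failures."""
    outbox.put(payload)             # once accepted, A's part is done


def service_b_drain():
    """B processes whenever it is (back) up; timing doesn't change results."""
    results = []
    while not outbox.empty():
        results.append(outbox.get() * 2)    # stand-in for real processing
    return results
```

Whether B drains the queue microseconds or days after A submits, the results are identical, which is precisely the property the meteorite example relies on.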

This chapter is about coupling, not asynchronous programming or design. My intent here is not to convince you of the merits of asynchronous programming, though there are many, but rather to use it as an example of the smart use of reduced coupling: in this case, between the accidental complexity of networks and remote communication and the essential complexity of the business functions of my services. That decoupling lets me write one piece of code that works both when the system is working well and when it is not. This is a well-engineered answer to a particular class of problem.

Designing for Loose Coupling

Yet again, striving for testable code will provide a useful pressure on our design that encourages us, if we pay attention, to design more loosely coupled systems. If our code is hard to test, it is commonly as a result of some unfortunate degree of coupling.

So we can react to the feedback from our design and change it to reduce the coupling, make testing easier, and end up with a higher-quality design. This ability to amplify the quality of our code and designs is the minimum that I would expect of a genuine engineering approach for software.

Loose Coupling in Human Systems

I have grown to think of coupling, in general, as being at the heart of software development. It is the thing that makes software difficult.

Most people can learn to write a simple program in a few hours. Human beings are extremely good at languages, even weird, grammatically constrained, abstract things like programming languages. That isn’t the problem. In fact, the ease with which most people can pick up the few concepts that allow them to write a few lines of code is a different kind of problem altogether, in that it is sufficiently simple to lull people into a false sense of their own capabilities.

Professional programming isn’t about translating instructions from a human language into a programming language. Machines can do that.5 Professional programming is about creating solutions to problems, and code is the tool that we use to capture our solutions.

5. GPT3 is a machine learning system trained on the Internet, all of it. Given instructions in English, it can code simple apps. See https://bit.ly/3ugOpzQ.

There are a lot of things to learn about when learning to code, but you can get started quickly and, while working on easy problems on your own, make good progress. The hard part comes as the systems that we create, and the teams that we create them with, grow in size and complexity. That is when coupling begins to have its effect.

As I have hinted, this is not just about the code, but vitally, it is about coupling in the organizations that create it, too. Developmental coupling is a common, expensive problem in big organizations.

If we decide to solve this by integrating our work, then however we decide to deal with that, the integration will come at a cost. My other book, Continuous Delivery, is fundamentally about strategies to manage that coupling efficiently.

In my professional life, I see many large organizations hamstrung by organizational coupling. They find it almost impossible to release any change into production, because over the years they have ignored the costs of coupling, and now making the smallest change involves tens, or hundreds, of people to coordinate their work.

There are only two strategies that make sense: you take either a coordinated approach or a distributed approach. Each comes with costs and benefits. This is, it seems, part of the nature of engineering.

Both approaches are, importantly, deeply affected by the efficiency with which we can gather feedback, which is why continuous delivery is such an important concept. Continuous delivery is built on the idea of optimizing the feedback loops in development to the extent that we have, in essence, continuous feedback on the quality of our work.

If you want consistency across a large, complex piece of software, you should adopt the coordinated approach. In this you store everything together, build everything together, test everything together, and deploy everything together.

This gives you the clearest, most accurate picture, but at a cost: you need to be able to do all of these things quickly and efficiently. I generally recommend that you strive to get this kind of feedback multiple times per day, which can mean a significant investment in time, effort, and technology.

This doesn’t prevent multiple teams from working on the system, nor does it imply that the systems that the teams create this way are tightly coupled. Here we are talking about the scope of evaluation for a production release. In this case, that scope is an entire system.

Where separate teams are working semi-independently, they coordinate their activities through the shared codebase and a continuous delivery deployment pipeline for the whole system.
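The shape of such a whole-system pipeline can be sketched as a single sequence of gates that every commit, from every team, must pass. This is a toy model, not a real CI tool's API; the stage names are illustrative assumptions:

```python
# A toy model of a single, whole-system deployment pipeline.
# Every team's commit flows through the same gates; the stage
# names here are invented for illustration only.

def commit_stage(change):
    # Compile everything and run fast unit tests (minutes, not hours).
    return f"built:{change}"

def acceptance_stage(artifact):
    # Evaluate the entire system together, as it will be released.
    return f"accepted:{artifact}"

def deploy(artifact):
    # One releasable unit: the whole system, judged as a whole.
    return f"deployed:{artifact}"

def pipeline(change):
    """The scope of evaluation is the entire system: a change is
    releasable only once everything has passed every stage."""
    return deploy(acceptance_stage(commit_stage(change)))

print(pipeline("team-a/fix-123"))  # deployed:accepted:built:team-a/fix-123
```

The important property of the sketch is that there is exactly one path to production, shared by every team, which is what makes the feedback authoritative for the whole system.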

This approach allows teams working on code, services, or modules that are more tightly coupled to make good progress, with minimal feedback costs, but, I repeat, you have to work hard to make it fast enough.

The distributed approach is currently more in favor: this is the microservices approach. In microservices organizations, decision-making is intentionally distributed. Microservice teams work independently of one another, each service is independently deployable, and there is no direct coordination cost between teams. There is, though, an indirect cost, and that cost comes in terms of design.

To reduce organizational coupling, it is important to avoid the need to test services together later in the process. If services are independently deployable, they must be tested independently too; how else can we judge deployability? If we test version 4 of one service with version 6 of another and find that they work together, are we really then going to release version 4 alongside version 17 without testing that combination? If not, then the services aren't independent after all.
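One common way to test services independently, without ever running them together, is a consumer-driven contract test: the consumer publishes the shape of the responses it relies on, and the provider verifies that shape in its own pipeline. A minimal sketch, in which the `get_order` endpoint, its fields, and the contract format are all hypothetical:

```python
# A minimal sketch of a consumer-driven contract check.
# The endpoint, payload, and contract shape are invented for
# illustration; real tools (and real contracts) are richer.

def provider_get_order(order_id):
    # The provider's real handler lives in its own codebase;
    # it is stubbed here so the example is self-contained.
    return {"id": order_id, "status": "SHIPPED", "total_pence": 1499}

# The consumer records only the fields and types it depends on.
CONSUMER_CONTRACT = {"id": int, "status": str, "total_pence": int}

def verify_contract(response, contract):
    """Check that every field the consumer depends on is present
    with the expected type. Extra provider fields are ignored, so
    the provider can evolve without breaking this consumer."""
    for field, expected_type in contract.items():
        if field not in response:
            return False
        if not isinstance(response[field], expected_type):
            return False
    return True

# Run in the provider's pipeline: no running consumer required.
assert verify_contract(provider_get_order(42), CONSUMER_CONTRACT)
```

Because the provider runs every consumer's contract before it releases, a failure tells the provider team which consumer they are about to break, while the two services remain independently testable and therefore independently deployable.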

A microservice approach is the most scalable strategy for software development. You can have as many teams as you want, or at least as many as you can find people to staff and funds to pay for.

The cost is that you give up on coordination, or at least reduce it to the simplest, most generic terms. You can offer centralized guidance, but you can’t enforce it, because enforcement will incur coordination costs.

Organizations that take microservices seriously consciously loosen control; in fact, a microservices approach makes little or no sense in the absence of that loosening of control.

Both of these approaches, the only two that make any real sense, are strategies to manage the coupling between teams. When coupling is high, you manage it by increasing the frequency with which you check for mistakes; when coupling is low, you don't check together at all, at least prior to release.

There are costs to this either way, but there is no real middle ground, though many organizations mistakenly attempt to forge one.

Summary

Coupling is the monster at the heart of software development. Once the complexity of your software extends beyond the trivial, then getting the coupling right, or at least working to manage whatever level of coupling you have designed into it, is often the difference between success and failure.

If your team and mine can make progress without needing to coordinate our activities, the “State of DevOps” reports say that we are more likely to be supplying high-quality code more regularly.

We can achieve this in three ways. We can work with more coupled code and systems but through continuous integration and continuous delivery get fast enough feedback to identify problems quickly. We can design more decoupled systems that we can safely, with confidence, change without forcing change on others. Or we can work with interfaces that have been agreed on and fixed so that we never change them. These are really the only strategies available.

You ignore the costs of coupling, both in your software and in your organization, at your peril.
