Chapter 6. Technical Debt and Architecture

In this chapter, we explain how to recognize technical debt at the architectural level. We introduce lightweight structural analysis techniques that you can apply to the code or the design to help identify and understand design decisions that lead to technical debt.

Beyond the Code

In Chapter 5, “Technical Debt and the Source Code,” we showed how the accumulation of small deficiencies in the code can lead to a substantial amount of technical debt, which can in turn make forward progress harder, more costly, and more error prone. But there is increasing evidence that the most expensive technical debt is related to the architecture of the software system—and it is harder to pay back. The effective management of technical debt must therefore extend beyond coding issues and consider the architecture of the system.

One common example of this type of technical debt is created when a development team, pressed for time, designs an initial system with little modularity for its first release. This lack of modularity affects development time for subsequent releases. Additional functionality can be added later only by doing extensive refactoring, and this refactoring impacts future timelines and introduces additional defects. In this category, which we will call “architectural debt,” we find not only the structure of the system—organization, decomposition, and interfaces—but also the choice of key technologies, from operating systems to programming languages and from selection of frameworks to open-source components.

Compared to code-level debt, architectural debt is more likely to be intentional. It follows from decisions made in the early phases of a project, often because the development team did not understand how the system would evolve in the future or because the business context significantly changed. Architectural debt can also be an unintentional consequence of what we called the technological gap in Chapter 2, “What Is Technical Debt?”: The original design was fine at the time it was made, but technology evolved over the years, turning the original choice into technical debt. For example, perhaps you designed a system with a local database, but 10 years later, having all your data in the cloud would be a better choice, and your local database now represents technical debt.

In Chapter 5, we explained how tools can assist in spotting most of your code-level technical debt. For architectural debt, these tools are less helpful. Some tools can expose the structural issues of a system, such as circular dependencies, high coupling between modules, and classes that have too much responsibility. These and other structural deficiencies result in unmaintainable and hard-to-modify systems that require significant rework later in development; hence they accumulate technical debt. But there are aspects of architectural debt that cannot simply be detected by tools. This type of debt must be dug out of the heads of the people most familiar with it: its designers. No tool will tell you that you should have used a NoSQL database instead of a relational database. Architectural constructs and decisions are in many cases only conventions used in further design and implementation.
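Still, the structural portion of architectural debt is mechanically checkable. As an illustration of what such tools look for, here is a minimal sketch of a circular-dependency check over a module dependency graph. It is not a real tool: the dependency map is hand-built, and the module names are invented for the example.

```java
import java.util.*;

// Minimal sketch of one structural check a dependency-analysis tool performs:
// finding circular dependencies among modules. Module names are hypothetical.
public class CycleCheck {

    // Depth-first search; a dependency that leads back to a module already
    // on the current path closes a cycle.
    static boolean hasCycle(String module, Map<String, List<String>> deps,
                            Set<String> onPath, Set<String> done) {
        if (onPath.contains(module)) return true;   // back edge: cycle found
        if (done.contains(module)) return false;    // already fully explored
        onPath.add(module);
        for (String dep : deps.getOrDefault(module, List.of())) {
            if (hasCycle(dep, deps, onPath, done)) return true;
        }
        onPath.remove(module);
        done.add(module);
        return false;
    }

    public static void main(String[] args) {
        Map<String, List<String>> deps = Map.of(
                "billing", List.of("accounts"),
                "accounts", List.of("reporting"),
                "reporting", List.of("billing"),   // closes the cycle
                "ui", List.of("billing"));
        for (String m : deps.keySet()) {
            if (hasCycle(m, deps, new HashSet<>(), new HashSet<>())) {
                System.out.println("Dependency cycle reachable from: " + m);
            }
        }
    }
}
```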

There is a direct relationship between a well-thought-out architecture that also guided the implementation of the system and a manageable accumulation of technical debt. For example, if the goal is for the system to be sustained for decades and to respond to changing technology, the architecture of the system must enable separation of concerns, use decoupled technology layers for ease of upgrading, and ensure that change is localized for ease of adding new functionality. These are important architecture concerns that should drive the design reviews as well as manifest themselves in the codebase, not only at the beginning of the system’s development but throughout its lifecycle. The system should be designed and monitored for quality attributes, or architecturally significant requirements, such as requirements about how reliable, secure, or maintainable the system is. Quality attributes help focus attention on cross-cutting aspects of the system, such as how it performs under different conditions, how data flows and is managed, and how it depends on other software such as databases, user interface and backend frameworks, middleware, and so on.

We can supplement the limited functionality of tools in uncovering architectural debt by assessing specific quality attributes. Again, these assessments will likely mostly reveal symptoms of technical debt; designers will have to identify the actual architectural elements that are subject to debt as technical debt items. For example, when scaling from a few hundred users to 10,000 simultaneous users, the drop in performance is a symptom of technical debt: A key quality attribute is affected. The symptom is caused by the large number of remote procedure calls between two subsystems—the debt item itself—which was not a problem when the system had only a few hundred users.

Here is an example of architectural debt voiced by a developer of the Phoebe project:

There were some problems in the infrastructure code where there was originally an architecture in place, but it wasn’t followed consistently. So, thought had been given to the architecture, but in the implementation, shortcuts were taken, and dependencies were not clean. This shows up as increased complexity and coupling in the codebase.

This phenomenon is called architectural drift: The intended architecture is poorly or inconsistently implemented throughout the system. This example emphasizes that this kind of technical debt accumulates slowly over the life of the project, which gradually drifts into debt. It is not a sudden, visible event that could trigger corrective action. Now Phoebe developers know the areas of the codebase where the increased complexity has become overwhelming, and their best course of action going forward is to concretely specify the highly complex areas. With some strategic thinking, code analysis can help you uncover such accumulating architectural issues.

Paradoxically, too much early focus on architecture and evolvability may lead to technical debt, too. The developers of Phoebe complain:

The original design had lots of options and flexibility, which in the end we were never able to exploit. But as a result, many of the interfaces to key components are very heavy, complex, hard to use (especially by newcomers in the project), and error prone. This is now slowing us down, with no real benefit yet to the project.

There are several strategies you can use to uncover technical debt in the architecture of a system as you iterate through the activities of the technical debt analysis (as described in Chapter 4, “Recognizing Technical Debt”). You can ask the designers about the general health of the system or start with a problem. You can examine the architecture itself or the code and other software artifacts to get insight into the architecture. Typically, the best approach is a combination of these activities:

  • Ask the designers about the health of the system or a problem.

  • Examine the architecture.

  • Examine the code to get insight into the architecture.

We’ll review these options in this chapter. The starting point, the line of investigation, and the analysis differ among these three approaches, but the objective is the same: to identify architectural technical debt items in the context of key business goals.


Ask the Designers

Ask the people who know the system best, the designers themselves, about the current state and history of the system. Ask the designers about the general health of the system or start with an important problem.

Here is a sketch of a strategy to inquire about the general health of a system and start locating technical debt items:

  • Identify the people who have been involved in the project as software architects, technical leads, or experienced developers.

  • Secure some time to meet with them individually or in small groups of two or three. A one-hour interview should give you enough information.

  • Explain clearly the objective of the meeting and define the term technical debt. Stress that the focus is not on major defects of the system that are already known and visible in the project issue tracker. To better focus the interview, you may also explain some of the ultimate goals: flexibility, shorter release cycle, higher dependability, and so on.

  • Ask questions such as these:

    • In retrospect, what design decisions did you or others make about the system that you regret now?

    • Why do you regret that decision now? (What are the negative consequences?)

    • Was there an alternative at the time?

    • Is this alternative still feasible today?

    • Can you envision another alternative that would remediate the situation?

  • Focus only on the software, not on the people who made the not-quite-right decision, or who pushed the team to do so, to avoid blaming anyone.

  • Rephrase the concern to express the technical debt items—the software artifacts affected, causes, and consequences.

  • Break down generic, high-level concerns into several smaller technical debt items.

  • As you conduct a sequence of individual interviews, expect to hear repeated references to technical debt items you have already identified; acknowledge them and move on to new ones.

  • Quickly move on when you encounter what appears to be a matter-of-taste issue: “For this kind of system, I much prefer Java over Ruby. Our original choice of Ruby was a mistake!”

Doing individual interviews has some advantages and some drawbacks: On one hand, it is more costly and time consuming. On the other hand, it allows Designer 1 to express concerns about a decision made by Designer 2, who may be his or her supervisor or a much more senior person. Honesty might be harder to express in a group setting, depending on the culture of the organization.

Some of the findings from these interviews may have to be validated by inspecting the design and code. On very large systems that have evolved over time, or if the interviewee has not worked on the project recently, some technical debt items may have already been repaid. You may be told, for example, that “we removed MySQL and replaced it with Neo4J for Release 7 about three months ago.”

This interview strategy will bring out the elephant in the room, the technical debt that everyone is aware of but does not want to express for a variety of reasons:

  • Protecting the person who made the decision that resulted in technical debt, who may be a key player in the organization

  • A fatalistic feeling that nothing can change the system now, or it would be too costly, so why bother

  • Cultural and social dynamics issues, such as losing face

  • Familiarity with the current situation and fear of the unknown (uncertainty avoidance)

The Five Whys is an iterative interrogative technique used to explore the cause-and-effect relationships underlying a particular problem. The primary goal of the technique is to determine the root cause of a defect or problem by repeating the question “Why?” Each answer forms the basis of the next question. When multiple causes are suspected, they can be represented as a fishbone, or Ishikawa, diagram. Here is an example of inquiring about an observed symptom that involves asking “Why?”:

“This type of update takes too long to make.”

“Why?”

“Because the code to update is in six different places.”

“Why is the code in six different places?”

“Because of the strict decomposition of classes to realize the domain-neutral component pattern we picked.”

“Why are we using this pattern?”

The outcome of this activity is the addition of technical debt items to your technical debt registry. These new technical debt items must be investigated by inspecting the design or the code.
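The shape of such a registry entry can be sketched as a simple data holder. The fields below mirror how items are documented later in this chapter (see Tables 6.1 and 6.2); the field names are our own illustration, not a prescribed schema.

```java
// Minimal sketch of a technical debt registry entry. The fields mirror how
// items are documented later in this chapter; the names are illustrative only.
public record TechnicalDebtItem(
        String name,                // e.g., "Phoebe #420: Locked-in architectural choices"
        String summary,             // affected artifacts and the decision behind them
        String consequences,        // observed symptoms and predicted long-term cost
        String remediationApproach, // candidate ways to repay the debt
        String reporterAssignee) {

    public static void main(String[] args) {
        TechnicalDebtItem item = new TechnicalDebtItem(
                "Phoebe #420: Locked-in architectural choices in adapter/gateway separation",
                "Gateway and adapter responsibilities are not well separated.",
                "Project struggles to add new functionality; velocity is slowing.",
                "Define responsibilities; refactor to separate the two components.",
                "Design team");
        System.out.println(item);
    }
}
```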

Examine the Architecture

A number of analysis techniques have proven useful for examining the architecture as it is being designed and used throughout the software development lifecycle:

  • Thought experiments and reflective questions: Conducting thought experiments and asking reflective questions can augment analysis. People think differently when they are solving problems than when they are reflecting. Asking reflective questions can challenge the decisions people have made and prompt them to examine their biases. Ask questions such as these: What are the risks that certain events will happen? How do the risks influence the solution? Is the risk acceptable?

  • Checklists: Use a checklist to guide your analysis. A checklist is a detailed set of questions developed based on much experience evaluating systems. Checklists can come from taxonomies of quality attributes and associated architectural tactics that cover the space of design possibilities for managing the quality attribute. For example, architectural means for controlling the properties of modifiability are concerned with coupling and cohesion. Ask questions such as these: What is the cost of modifying a single feature? Does the system consistently support increasing semantic coherence? Does the system consistently encapsulate functionality? Does the system restrict dependencies between modules in a systematic way? Does the system design regularly defer binding of important functionality so that it can be replaced later in the lifecycle, perhaps even by users? Checklists can also be based on experience with particular technology choices or specific domains.

  • Scenario-based analysis: A scenario is a short description of an interaction with the system from the point of view of one of its stakeholders. A stakeholder may pose a change scenario to see how costly it would be to modify the system, given its architecture. Analysts can use quality attribute scenarios to examine whether and how a scenario can be satisfied.

  • Analytic models: Well-established models can be used to predict properties of a system such as performance or availability (a small worked example follows this list).

  • Prototypes and simulations: The creation of prototypes or simulations complements the more conceptual techniques for analyzing the architecture. Prototypes provide a deeper understanding of the system but with added cost and effort.
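To make the analytic-models bullet concrete, here is a minimal sketch of a simple availability model: components composed in series must all be up, while redundant replicas fail only if every replica fails. The structure and the availability figures below are invented for illustration, not drawn from the Phoebe project.

```java
// Sketch of a simple analytic availability model. Components in series must
// all be up; redundant replicas fail only if every replica fails.
// The component availabilities are invented for illustration.
public class AvailabilityModel {

    // Availability of components in series: the product of the availabilities.
    static double series(double... availabilities) {
        double a = 1.0;
        for (double ai : availabilities) a *= ai;
        return a;
    }

    // Availability of redundant replicas: 1 minus the chance all are down.
    static double parallel(double... availabilities) {
        double unavail = 1.0;
        for (double ai : availabilities) unavail *= (1.0 - ai);
        return 1.0 - unavail;
    }

    public static void main(String[] args) {
        double gateway = 0.999;
        double adapter = 0.995;
        double db = parallel(0.99, 0.99);          // two database replicas
        double system = series(gateway, adapter, db);
        System.out.printf("Predicted system availability: %.5f%n", system);
        // 0.999 * 0.995 * 0.9999 ~= 0.99391, roughly 2.2 days of downtime/year
    }
}
```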

A risk is an indicator of poor architectural health. These analysis techniques can bring to light architectural risks: potentially problematic design decisions whose consequences jeopardize the achievement of system requirements and business goals. Over time, if overlooked, these risks can create large amounts of technical debt. Design issues in conjunction with evidence of accumulating rework could result in adding a new technical debt item to the registry or conducting additional analysis to confirm whether there is a risk.

Examine the Code to Get Insight into the Architecture

Even if you do not have a description of the architecture to work with, you can still get insight into the architecture by examining the code with the help of a tool that understands dependencies and structures in the code.

Tools that support code analysis are becoming increasingly sophisticated and now often also support dependency analysis. Quantitative techniques apply a tool or measurement to a software artifact to answer specific questions about specific system properties. Many of the quantitative measures used on code can be applied to the implementation structure or module view to assess the state of the architecture. Some tools can extract this module view directly from the code. Other tools can represent the module view as designed and compare it with the code structure to check that the code conforms to the architecture.
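This kind of conformance check, often called a reflexion model, can be sketched in a few lines: compare the as-designed dependencies with those extracted from the code and report the differences. All module names and dependency edges below are hypothetical; a real tool would extract the actual set from the codebase rather than hard-code it.

```java
import java.util.*;

// Sketch of an architecture-conformance (reflexion-model style) check:
// compare intended module dependencies with those extracted from the code.
// All module names and edges are hypothetical.
public class ConformanceCheck {

    record Edge(String from, String to) {
        public String toString() { return from + " -> " + to; }
    }

    public static void main(String[] args) {
        Set<Edge> intended = Set.of(
                new Edge("ui", "services"),
                new Edge("services", "domain"),
                new Edge("domain", "persistence"));

        Set<Edge> actual = Set.of(
                new Edge("ui", "services"),
                new Edge("services", "domain"),
                new Edge("ui", "persistence"),         // shortcut taken in the code
                new Edge("services", "persistence"));  // bypasses the domain layer

        // Divergences: dependencies in the code that the design forbids.
        for (Edge e : actual) {
            if (!intended.contains(e)) System.out.println("Divergence: " + e);
        }
        // Absences: intended dependencies the code never realizes.
        for (Edge e : intended) {
            if (!actual.contains(e)) System.out.println("Absence: " + e);
        }
    }
}
```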

Code measures have been adapted to code and design elements of increasing scale. For example, cyclomatic complexity has been adapted to code and design elements such as methods, classes, packages, modules, and subsystems of large systems; complexity can serve as a starting point for understanding how a system is structured. Some tools also include rules to check for well-established architecture-relevant patterns—for example, decoupling business logic from SQL statements (Model-View-Controller) or checking for conformance to framework usage. Run-time measures bring to the surface other architectural concerns that have close relationships to how the code is structured—for example, how services are decomposed and interact with each other, how responsive the system is, and how data is handled.

To understand the impact of a change, developers need to identify the modules of a system that are the focus of a change and follow the dependencies to the dependent modules that will be affected by the change. Relevant techniques for analyzing individual elements and their dependencies include the following:

  • Complexity of individual software elements: Lines of code, module size uniformity, cyclomatic complexity

  • Interfaces of software elements: Dependency profiles identifying hidden, inbound, outbound, and transit modules; state access violation; API function usage

  • Interrelationships among the software elements: Coupling, inheritance, cycles

  • System-wide properties: Change impact, cumulative dependencies, propagation, stability (see the sketch after this list)

  • Interrelationships between software elements and stakeholder concerns: Concern scope, concern overlap, concern diffusion over software elements
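Here is a minimal sketch of two of the measures above: fan-in/fan-out per module, and the change-impact set obtained by following reverse dependencies transitively. As before, the dependency map and module names are hypothetical; a real analysis would extract them from the code.

```java
import java.util.*;

// Sketch of two dependency measures from the list above: fan-out/fan-in per
// module, and change impact (all modules transitively depending on a changed
// one, found by walking reverse dependencies). Module names are hypothetical.
public class DependencyMeasures {

    public static void main(String[] args) {
        Map<String, List<String>> deps = Map.of(
                "ui", List.of("services"),
                "services", List.of("domain", "gateway"),
                "gateway", List.of("adapter"),
                "adapter", List.of(),
                "domain", List.of());

        // Fan-out is the size of a module's dependency list; fan-in counts
        // how many other modules list it as a dependency.
        for (String m : deps.keySet()) {
            long fanIn = deps.values().stream()
                    .filter(list -> list.contains(m)).count();
            System.out.printf("%-8s fan-out=%d fan-in=%d%n",
                    m, deps.get(m).size(), fanIn);
        }

        // Change impact of "adapter": breadth-first walk over reverse edges.
        Set<String> impacted = new HashSet<>();
        Deque<String> toVisit = new ArrayDeque<>(List.of("adapter"));
        while (!toVisit.isEmpty()) {
            String current = toVisit.pop();
            for (var e : deps.entrySet()) {
                if (e.getValue().contains(current) && impacted.add(e.getKey())) {
                    toVisit.push(e.getKey());
                }
            }
        }
        System.out.println("Changing adapter impacts: " + impacted);
    }
}
```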

In using these techniques, it is important to focus not only on the results but also on the assumptions under which a measurement was taken. Not all measures are applicable, but there are a number of useful measures to draw from. Those you select will depend on a number of criteria. What part of the system are you measuring? Account for external dependencies, libraries, and frameworks. What is being measured? Tools often produce different results for seemingly simple measures such as lines of code. How is the system represented? For example, propagation measures make assumptions about data and control flow using an abstract model of the code that makes trade-offs in the fidelity of the results (for example, accuracy and precision). How are results combined? Some tools roll up technical measures into a single economic measure of health. The underlying measures can still be useful. For these reasons, it is helpful to look at the dependencies among the measures and understand whether the assumptions apply to your situation. But looking at the code is not ideal: Having different repositories or technologies makes spotting the many interactions and dependencies very difficult.

These measures, whether qualitative or quantitative, can be compared with industry trends or the project’s own data to establish thresholds. Exceeding a threshold is an indicator of poor architectural health that could result in adding a new technical debt item to the registry or conducting additional analysis to confirm whether there is a risk.

The Case of Technical Debt in the Architecture of Phoebe

In Chapter 5, we looked at examples of strategies Team Phoebe employed to uncover debt. Phoebe started with an observed symptom of increasing defects and worked to get to the root cause. The first step was for the project manager to ask the developers, who pointed to the spaghetti code. Then a quality objective was elicited that set the context for examining the code. The team identified two technical debt items in the code: “Remove empty Java packages” and “Remove duplicate code.”

Team Phoebe continues to monitor the system for symptoms, iterating through the steps of the technical debt analysis to see what additional information the architecture analysis will uncover. The team focuses on the following activities:

  1. Understand the key business goals.

  2. Identify key concerns/questions about the Phoebe system related to these business goals.

  3. Define observable qualitative and quantitative criteria related to their questions and goals.

  4. Select and apply one or more techniques or tools to analyze the software for the criteria defined.

  5. Document the issues uncovered as technical debt items and add them to the registry.

  6. Iterate through activities 2 to 5 as needed.

Team Phoebe plans to switch focus between code and design as issues are uncovered. Related issues in the code could lead to an overarching design issue. Issues in the architecture could point to hotspots worth analyzing in depth in the detailed design and code. When team members perform activity 4, they now have in their toolbox the three new techniques we just described: ask the designers about the health of the system or a problem, examine the architecture, and examine the code to get insight into the architecture.

Understand Key Business Goals and Concerns/Questions

The key business goals were defined in the first iteration. One business goal driving the Phoebe project is “Create an easy-to-evolve product.” The development team has already looked at this goal from a code perspective. Another related business goal is “Increase market share.” There is growing concern over security breaches that are causing users to have lower confidence in the system. These breaches are another pain point and have been traced to security-related bugs such as a crash due to an out-of-bounds number. The developers discuss possible solutions. One offers, “We could just fend off out-of-bounds numbers near the crash site, or we can dig deeper to find out how this is happening.”

Another developer notes, “Time permitting, I’m inclined to want to know the root cause. My sense is that if we patch it here, it will pop up somewhere else later.”

Given the urgency of the issue, the team makes a quick fix and closes the issue, only to have to open it again. A team member records the rationale as a comment in the ticket associated with this issue: “Hmm…reopening. The test case crashes a debug build. I have confirmed that the original source code does crash the production build, so there must be multiple things going on here.”

The team members turn their attention to the two business goals to understand technical debt in the architecture. The architecture design is now the artifact of interest to complement the concerns and questions about the source code. The team tries to answer more questions: How do we understand whether or not the design is messy? How is the architecture related to the areas of the code that are messy?

The team also tries to answer questions about the new attribute of concern: How much time have we spent patching the code in response to the breaches? Do these patches get to the root cause, or is there an underlying design issue? Are the breaches related to each other? Are they related to the messy design?

Define the Architecture Measurement Criteria

From the questions and concerns, team members define the criteria that provide a measure of the architecture to see if they are on track to achieve key business goals. Maintainability, as defined in the ISO/IEC 25010 standard, comes from a collection of subattributes: modularity, reusability, analyzability, modifiability, and testability.

Modifiability may be related to adding new capability, a change in technology (which we call the technological gap in the technical debt landscape), or the evolution of other operational quality attribute scenarios to handle more stringent demands as the system grows over time. Modifiability can be cast as a quality attribute scenario:

The developer wishes to change the user interface by modifying the code at design time. The modifications are made and unit tested, with no side effects within three hours.
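Quality attribute scenarios follow a standard six-part form: source, stimulus, artifact, environment, response, and response measure. Encoding the modifiability scenario above in that form makes the response measure explicit and checkable. The record below is our own illustration of that form, not a prescribed schema.

```java
// Sketch of the standard six-part quality attribute scenario form, encoding
// the modifiability scenario above. The record is illustrative, not a tool.
public record QualityAttributeScenario(
        String source,          // who or what generates the stimulus
        String stimulus,        // the condition to respond to
        String artifact,        // the part of the system stimulated
        String environment,     // conditions under which the stimulus occurs
        String response,        // the activity after the stimulus arrives
        String responseMeasure) // how the response is measured or tested
{
    public static void main(String[] args) {
        var modifiability = new QualityAttributeScenario(
                "Developer",
                "Wishes to change the user interface",
                "Code",
                "At design time",
                "Modification is made and unit tested",
                "No side effects, within three hours");
        System.out.println(modifiability);
    }
}
```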

The response measure of the modifiability scenario (no side effects within three hours) can be analyzed in terms of system quality measures (properties of the software development process) such as cost-effectiveness in avoiding or eliminating defects. Or it might be analyzed in terms of design measurement criteria (properties of the architecture) such as module design complexity, module independence, complexity in interrelations, and concern scope, overlap, and diffusion. The latter overlaps with the code measurement criteria that the team employed earlier. Some code grouping constructs such as classes and packages can give insight into design elements.

Next Team Phoebe defines the criteria for security. Security as defined in the quality standard ISO/IEC 25010 is a collection of subattributes including confidentiality, integrity, non-repudiation, authenticity, and accountability. Security can be cast as a quality attribute scenario:

An attacker from a remote location attempts to access private data during normal operations of the system. The system maintains an audit trail, the data is kept private, and the source of the tampering is identified.

The response measure of the security scenario (how much data is vulnerable to a particular attack; how much time passes before an attack is detected) can be analyzed in terms of system quality measures (properties of the software development process) such as cost-effectiveness in avoiding or eliminating vulnerabilities. Or it might be analyzed in terms of design measurement criteria (properties of the architecture) such as adherence to secure design standards. If the response measure cannot be met, then the ease of supporting this requirement can be considered a growth scenario that has implications for modifiability.

Select and Apply Architecture Analysis Techniques to Get to the Artifact

Realizing that there is only so much that can be learned from the code, the Phoebe project brings in an external team to conduct an architecture evaluation. During the evaluation, all the business goals and quality attributes are considered to discover risks and trade-offs throughout the system. Qualitative reviews of the design uncover risks to meeting Team Phoebe’s quality attribute goals. The analysis from the architecture review shows what business drivers are at risk.

The Phoebe team identified risks related to the adapter/gateway separation of their architecture. Their architecture concept had a common gateway component that presents a transaction service interface to the integrated enterprise systems and applications while hiding the external resource interface. It also had a customized adapter component to bridge the incompatible interfaces of the enterprise systems and applications. The concerns they identified included the following:

  • The reference implementation for the adapter is not production quality.

  • The gateway has evolved to include operations not needed by all users and defers some common operations, such as audit and logging, to the adapter. These dependencies make it difficult, if not impossible, to separate the two components.

  • For use cases that require interaction with multiple endpoints, an application can orchestrate multiple transactions itself or allow the gateway to handle the request fan-out. The responsibilities of the gateway and adapter are not well defined, leading to implementations with different performance, robustness, security, and other quality-of-service characteristics.

The design review also provided details about the problem of crashes. They were not caused by a local problem, as the developers had suspected. Tracing interconnections in the Phoebe design revealed a dependency on an external library maintained by another group. Figure 6.1 shows these causes and their effect as a fishbone diagram (also called an Ishikawa diagram).

Figure 6.1 Exploring the cause-and-effect relationships underlying the problem of unexpected crashes

To complement the architecture review, the team used automated software analysis measures to uncover the fact that the system is becoming difficult to maintain. Risks from the review provided context for scoping the code analysis to gain insight into the design by measuring the complexity and change propagation of the architecture. A number of methods, classes, and packages demonstrated high complexity, measured with a combination of metrics such as method and class size, cyclomatic complexity, and fan-in and fan-out. The analysis also showed a rise in system cyclicity.

Document the Technical Debt Items

As team members apply the methods and tools, they document the analysis outcome as the starting point of comparison with the project’s key concerns. The sample technical debt item in Table 6.1 shows analysis of both the design and the code to get insight into the maintainability of the architecture.

Table 6.1 Techdebt on architectural choices

Name: Phoebe #420: Locked-in architectural choices in adapter/gateway separation

Summary: Phoebe is based on service-oriented architecture design principles and web service interfaces. The architecture is broken down into two sections: a gateway and an adapter. The gateway handles communication between different organizations’ health information systems. The adapter adapts the gateway to an organization’s backend system. Phoebe has evolved to reflect a more complete architecture but was stymied by increasing complexity and locking in architectural choices that later proved limiting.

Consequences: Immediate benefit is implementing a solution within schedule constraints. Review of the feature matrix by each release shows that the project is struggling to add new functionality. Most releases are preoccupied with dealing with integration, security, and other quality-related issues.

Long-term cost is predicted to be slowing velocity due to accumulation of debt that requires extra work to add more capabilities. Analysis of the artifact indicates the risks and areas of rework:

  • A major risk theme surfaced by the architecture review is adapter/gateway separation.

  • Static analysis of code provides insight into areas of the architecture of major complexity and change propagation based on dependency information.

Remediation approach: Better define responsibilities of the adapter and gateway; refactor to better separate the two components.

Reporter/assignee: Design team.

As shown in Table 6.2, the team also documented a technical debt item to record the design issue at the root of the unexpected crashes.

Table 6.2 Techdebt on unexpected crashes

Name: Phoebe #421: Screen spacing creates unexpected crashes due to API incompatibility.

Summary: The source code uses a very large negative letter-spacing in an attempt to move the text offscreen. The system handles up to −186 em fine but crashes on anything larger. A similar issue #432 was fixed with a patch, but there was another similar report. Time permitting, I’m inclined to want to know what the root cause of this is. My sense is that if we patch it here, it will pop up somewhere else later.

Consequences: We already had 28 reports from seven clients. And it definitely leaves the software vulnerable. Finding the root cause of this crash would be timely.

Remediation approach: The quick and easy solution is to write a patch, but we already seem to have done this twice. The responsible thing to do is to first find the root cause and create a patch at the source. I have a feeling the external web client and our software have an API incompatibility. The course of action I would take is to:

  • Verify where the root of this is.

  • See if we can fix it on our side, but I am tempted to believe the external web client team needs to fix it, so we would need to negotiate.

Reporter/assignee: I need to discuss this with Brant as the fix may be more involved than we think.

Service the Debt

After selecting analysis criteria, conducting the analysis, and inspecting the design, Team Phoebe has a handful of technical debt items. Some of these items pertain to code conformance issues. The code does not conform to the architecture. Understanding the architecture as designed provides the context for refactoring the as-is architecture embodied in the code. Other items pertain to design verification issues. The architecture does not support the business goals and needs to be re-architected, which in turn triggers corresponding changes in the code. We will say more on this topic in Chapter 9, “Servicing the Technical Debt.”

What Can You Do Today?

It is important to communicate the goals and the design approaches chosen for the project to your team. These activities may be useful:

  • Get clarity on the yardstick by which you measure design and architecture, at a minimum by clearly identifying architecturally significant requirements, including their measurable, testable completion criteria

  • Review the architecture. If it is not documented, glean insights from team knowledge, source code, and the issues being tracked

  • Make reviewing architectural concerns a regular part of iteration/sprint reviews and retrospectives

  • Use your knowledge of architectural risk to guide automated analysis of the source code

  • When fixing a defect or adding a new feature request, look beyond the immediate implementation to see if there are longer-term design issues leading to technical debt

Look for the presence of technical debt during these activities and respond by recording the technical debt items you find in the registry.

For Further Reading

If you are not familiar with the concept of software architecture, start with the Wikipedia definition (2018). Ian Gorton’s book Essential Software Architecture (2006) is a fast and easy read, and if you are coming from an agile perspective, Simon Brown’s Software Architecture for Developers (2018) is for you. For a more thorough treatment of the topic of software architecture, our colleagues at the Software Engineering Institute have evolved the reference opus Software Architecture in Practice (Bass et al. 2012) over 10 years. This book also provides more information about quality attribute scenarios and architectural tactics. Just Enough Software Architecture: A Risk-Driven Approach focuses on the risks that prevent development progress (Fairbanks 2010). A continuous architecting approach to system development and sustainment is essential for avoiding unintentional technical debt.

The Architecture Tradeoff Analysis Method (ATAM) is a method for evaluating software architectures relative to quality attribute goals to expose architectural risks that could potentially inhibit an organization’s achievement of its business goals (Clements et al. 2001). Knodel and Naab (2016) introduce architecture evaluations in the context of continuous architecting. Designing Software Architectures, by Humberto Cervantes and Rick Kazman (2016), provides more information about lightweight analysis techniques during design, and the appendix contains tactics questionnaires.

An architecture description language (ADL) could be used to describe a software architecture. The appendix of Documenting Software Architectures: Views and Beyond by Clements and colleagues (2011) provides an overview of AADL, SysML, and UML. These three ADLs are representative of the range of formal or semiformal descriptive languages, textual and/or graphical languages, and associated tools. The benefit of using an ADL is the support it provides in design and analysis activities.

Design Rules introduces design structure matrices to understand dependencies between product elements and how to decouple them for effective evolution (Baldwin & Clark 2000). Researchers and tool vendors have applied the ideas from this book to software to provide tool support. For example, Tornhill (2018) and Kazman and colleagues (2015) put such an analysis in the context of technical debt.

Ford, Parsons, and Kua (2017) introduce the idea of an executable “fitness function” in their book Building Evolutionary Architectures. This is one way of trying to spot architectural debt when it occurs, though only some kinds of architectural constraint are amenable to being checked like this.
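To make the fitness-function idea concrete: in the Java ecosystem, a library such as ArchUnit lets you phrase a structural constraint as an ordinary unit test that fails the build when the constraint is violated. A minimal sketch follows; the package names are hypothetical, and only constraints expressible over the code structure can be checked this way.

```java
// Sketch of an executable architectural fitness function using the ArchUnit
// library: the test fails whenever the gateway grows a dependency on the
// adapter. Package names are hypothetical.
import com.tngtech.archunit.core.domain.JavaClasses;
import com.tngtech.archunit.core.importer.ClassFileImporter;
import com.tngtech.archunit.lang.ArchRule;
import org.junit.jupiter.api.Test;

import static com.tngtech.archunit.lang.syntax.ArchRuleDefinition.noClasses;

class ArchitectureFitnessTest {

    @Test
    void gatewayMustNotDependOnAdapter() {
        JavaClasses classes =
                new ClassFileImporter().importPackages("com.example.phoebe");

        ArchRule rule = noClasses()
                .that().resideInAPackage("..gateway..")
                .should().dependOnClassesThat().resideInAPackage("..adapter..");

        rule.check(classes);  // fails the test (and the build) on violation
    }
}
```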
