Foreword by Bill Curtis: Managing Systems Qualities through Architecture

Dr. Bill Curtis, Senior Vice President and Chief Scientist at CAST, Fort Worth, Texas

As I begin writing this foreword, computers at the NASDAQ stock exchange have been crashing all too frequently over the past several weeks. Airline reservation system glitches have grounded airlines numerous times over the past several years. My friends in Great Britain are wondering when their banks will next separate them from their money for days on end. These disruptions account for a tiny fraction of the financial mayhem that poor software quality is inflicting on society. With the software disasters at RBS and Knight Trading running between a quarter and a half billion dollars, software quality is now a boardroom issue.

I hear executives complaining that it takes too long to get new products or business functionality to the market and that it is more important to get it delivered than to thoroughly test it. They tend to temper these opinions when it is their quarter billion dollars that just crashed. Nevertheless, they have a point about the need to deliver new software quickly to keep pace with competitors. We need faster and more thorough ways to evaluate the quality of software before it is delivered. Sometimes speed wins, and sometimes speed kills.

We are pretty good at testing software components and even at testing within layers of software written in the same language or technology platform. That is no longer the challenge. Modern business applications and many products are composed from stacks of technologies. Just because a component passes its unit tests and appears to be well-constructed does not guarantee that it will avoid disastrous interactions that the developer did not anticipate. Many of the worst outages are caused by faulty interactions among components with good structural characteristics. Our problems lie not so much in our components as in our architectures and their interconnections.

The complexity of our systems has now exceeded the capacity of any individual or team to comprehend their totality. Developers may be experts in one or two technologies and languages, but few possess expert knowledge in all the languages and technologies integrated into modern products and systems. Consequently, they make assumptions about how different parts of the system will interact. Generally, their assumptions are correct. However, their incorrect assumptions can create flaws through which small glitches become system outages. All too frequently it is not the small glitch itself, but the chain of events it instigates, that leads to disaster.

The challenge of modern software systems brings us ultimately to their architecture. As systems become larger and more complex, their architectures assume ever greater importance in maintaining their integrity and coherence. When architectural integrity is compromised, the probability of serious operational problems increases dramatically. Interactions among layers and subsystems become increasingly difficult to understand. Assessing unwanted side effects before implementing changes becomes more laborious. Making the changes themselves becomes more intricate and tedious. Consequently, the verification of functional and structural quality becomes less thorough when speed is the priority. Architectural integrity enables safe speeds to be increased. Architectural disarray makes any speed unsafe.

Maintaining architectural quality across a continuing stream of system enhancements and modifications is critical for at least five reasons. First, it decreases the likelihood of injecting new defects into the system, some of which could be disastrous. Second, it reduces the time required to understand the software, which studies report to be 50% of the effort in making changes. Third, it shortens the time to implement changes because fewer components need to be touched if an optimal coupling-cohesion balance has been sustained. These two points combine to shorten the time it takes to release new products or features.

Fourth, it allows the system to scale, regardless of whether the scaling is driven by new features or increased load. Fifth, it allows the life of a system to be extended. Once the quality of an architecture can be described as “sclerotic,” the system becomes a candidate for an expensive overhaul, if not an even more expensive replacement. Given that seriously degraded architectures are extremely hard to analyze, overhauls and replacements are usually fraught with errors and omissions that make you wonder whether the devil you knew wasn’t better than the new devil you created.

Maintaining architectural integrity is not easy. Architectural quality requires investment and discipline. First, system architects must establish a set of architectural principles that guide the original construction and subsequent enhancement of the system. Second, managers must enforce a disciplined change process with specific practices to ensure the architecture does not degrade over time. Third, architects must have access to representations that support modeling and envisioning the architecture to be constructed, as well as updated as-is representations of the architecture throughout the system’s lifetime. Fourth, automated tools should be used to analyze the system and provide insight into structural quality issues that are obscure to individual developers. Finally, management must be willing to invest in revising or extending the architecture when new uses or technologies require it. To this latter point, sustainable architectures can be transformed; degraded architectures leave organizations trapped in antiquity.

To meet these requirements for sustainable architectural quality, theoretical computer scientists must continue their exploration of architectural methods, analysis, and measurement. Experimental computer scientists must continue prototyping powerful new tools that make these advances available to architects and developers. Empirical computer scientists must continue evaluating how these new concepts and technologies work in practical applications at industrial scale. The editors of this book have undertaken these challenges. While it is tempting to call such work “academic,” we are doomed by the complexity of our systems unless at least some of these efforts produce the next generation of architectural technologies. Our thirst for size and complexity is unending. Our search to simplify and manage it must keep pace.

September 3, 2013

About the Author

Dr. Bill Curtis is a senior vice president and chief scientist at CAST. He is best known for leading development of the Capability Maturity Model (CMM), which has become the global standard for evaluating the capability of software development organizations. Prior to joining CAST, Curtis was a cofounder of TeraQuest, the global leader in CMM-based services, which was acquired by Borland. Prior to TeraQuest, he directed the Software Process Program at the Software Engineering Institute (SEI) at Carnegie Mellon University. Prior to the SEI, he directed research on intelligent user interface technology and the software design process at MCC, the fifth generation computer research consortium in Austin, Texas. Before MCC he developed a software productivity and quality measurement system for ITT, managed research on software practices and metrics at GE Space Division, and taught statistics at the University of Washington. Bill holds a Ph.D. from Texas Christian University, an M.A. from the University of Texas, and a B.A. from Eckerd College. He was recently elected a Fellow of the Institute of Electrical and Electronics Engineers for his contributions to software process improvement and measurement.
