3.1. Software Architecture Paradigm Shift

Unless you program telecommunications systems, video games, mainframe operating systems, or rigorously inspected software (e.g., CMM Level 5), almost every piece of software you will ever encounter is riddled with defects and, at least in theory, doesn't really work. It only appears to work—until an unexpected combination of inputs sends it crashing down. That is a very hard truth to accept, but experienced architects know it to be the case. In commercial software, nothing is real. If you don't believe this, invite a noncomputer user to experiment with your system. It won't take long for them to lock up one or more applications and possibly invoke the Blue Screen of Death.

In order to cope with this uncertain terrain, you need to begin thinking about software as inherently unreliable, defect ridden, and likely to fail unexpectedly. In addition, you need to confront numerous issues regarding distributed computing that aren't taught in most schools or training courses.

We have many things to learn and unlearn as we go to war. We begin by recognizing a key paradigm shift that leads to a deeper understanding of distributed computing and its pervasive consequences.

Traditional System Assumptions

The essence of the paradigm shift revolves around system assumptions. Traditional system assumptions are geared toward nondistributed systems—for example, departmental data processing systems. Under these assumptions, the system comprises a centrally managed application in which most processing is local, communications are predictable, and global states are readily observable. We further assume that the hardware/software suite is stable and homogeneous and fails infrequently and absolutely: either the system is up or the system is down. Traditional system assumptions are the basis for the vast majority of software methodology and software engineering.

Traditional system assumptions are adequate for a world of isolated von Neumann machines (i.e., sequential processors) and dedicated terminals. The traditional assumptions are analogous to Newton's laws of physics in that they are reasonable models of objects that are changing slowly with respect to the speed of light.

Distribution Reverses Assumptions

However, the von Neumann and Newtonian models are no longer adequate descriptions of today's systems. Systems are becoming much less isolated and increasingly connected through intranets, extranets, and the Internet. Electromagnetic waves move very close to the speed of light in digital communications. With global digital communications, the Internet, and distributed objects, today's systems are operating more in accord with Einstein's relativity model. In large distributed systems, there is no single global state or single notion of time; everything is relative. System state is distributed and accessed indirectly through messages (an object-oriented concept). In addition, services and state may be replicated in multiple locations for availability and efficiency. Chaos theory is also relevant to distributed object systems. In any large, distributed system, partial failures are occurring all the time: network packets are corrupted, servers generate exceptions, processes fail, and operating systems crash. The overall application system must be fault-tolerant to accommodate these commonplace partial failures.
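The need to tolerate commonplace partial failures can be made concrete with a retry-with-backoff sketch. This is a minimal illustration, not from the original text; the service, failure rate, and function names are hypothetical stand-ins for any remote call that may fail intermittently.

```python
import random
import time

class TransientFailure(Exception):
    """Stands in for a dropped packet, a server exception, or a crashed process."""

def unreliable_remote_call(payload):
    # Hypothetical remote service: fails on a fraction of calls,
    # mimicking the partial failures described above.
    if random.random() < 0.3:
        raise TransientFailure("simulated partial failure")
    return f"ack:{payload}"

def call_with_retries(payload, attempts=5, backoff_s=0.01):
    """Retry a remote call, backing off between attempts.

    A fault-tolerant client cannot assume the remote end is simply
    'up' or 'down'; it must plan for intermittent, partial failure.
    """
    last_error = None
    for attempt in range(attempts):
        try:
            return unreliable_remote_call(payload)
        except TransientFailure as exc:
            last_error = exc
            time.sleep(backoff_s * (2 ** attempt))  # exponential backoff
    raise RuntimeError("remote call failed after retries") from last_error

random.seed(0)  # seed only so this sketch is reproducible
print(call_with_retries("order-42"))
```

The design point is that failure handling moves from an afterthought into the client's normal control flow: every remote interaction is written on the assumption that it may fail part of the time.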

Multiorganizational Systems

Systems integration projects that span multiple departments and organizations are becoming more frequent. Whether created through business mergers, business process reengineering, or business alliances, multiorganizational systems introduce significant architectural challenges, including hardware/software heterogeneity, autonomy, security, and mobility. For example, individually developed systems each have their own autonomous control model; integration must address how these models interoperate and cooperate, possibly without changing the assumptions of any one of them.

Making the Paradigm Shift

Distributed computing is a complex programming challenge that requires architectural planning in order to be successful. If you attempt to build today's distributed systems with traditional systems assumptions, you are likely to spend much of your budget battling the complex, distributed aspects of the system.

The difficulty of implementing distributed systems usually leads to fairly brittle solutions, which do not adapt well to changing business needs and technology evolution.

The important ideas listed below can help organizations transition through this paradigm shift and avoid the consequences of traditional system assumptions:

  1. Proactive Thinking Leads to Architecture. The challenges of distributed computing are fundamental, and an active, forward-thinking approach is required to anticipate causes and manage outcomes. The core of a proactive IT approach involves architecture. Architecture is technology planning that provides proactive management of technology problems. The standards basis for distributed object architecture is the Reference Model for Open Distributed Processing (RM-ODP).

  2. Design and Software Reuse. Another key aspect of the paradigm shift is avoidance of the classic antipattern: "Reinventing the Wheel." In software practice there is continual reinvention of basic solutions and fundamental software capabilities. Discovery of new distributed computing solutions is a difficult research problem that is beyond the scope of most real-world software projects. Design patterns are a mechanism for capturing recurring solutions, and many useful distributed computing solutions have already been documented as patterns. While patterns address design reuse, object-oriented frameworks are a key mechanism for software reuse. Effective use of design patterns and frameworks can be crucial to developing distributed systems successfully.

  3. Tools. The management of complex systems architecture requires the support of sophisticated modeling tools. The Unified Modeling Language makes these tools infinitely more useful because we can expect the majority of readers to understand the object diagrams (for the first time in history). Tools are essential to provide both forward and reverse engineering support for complex systems. Future tools will provide increasing support for architecture modeling, design pattern reuse, and software reuse through OO frameworks.
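The design reuse described in item 2 can be illustrated with one of the classic documented patterns for distributed computing, the Remote Proxy: a local surrogate that presents the same interface as a remote object while hiding the mechanics of reaching it. The sketch below is illustrative only; the interface, class, and account names are hypothetical, and the "remote" object is simulated in-process.

```python
class AccountService:
    """Common interface shared by the real object and its proxy."""
    def balance(self, account_id: str) -> int:
        raise NotImplementedError

class RealAccountService(AccountService):
    """The actual service, which in a distributed system would live
    in another process or on another machine."""
    def __init__(self):
        self._accounts = {"alice": 100}

    def balance(self, account_id: str) -> int:
        return self._accounts[account_id]

class AccountServiceProxy(AccountService):
    """Remote Proxy: hides connection setup, marshalling, and caching
    behind the same interface the client already uses."""
    def __init__(self, real: RealAccountService):
        self._real = real   # in practice: a network stub, not a direct reference
        self._cache = {}

    def balance(self, account_id: str) -> int:
        if account_id not in self._cache:  # avoid repeated "remote" calls
            self._cache[account_id] = self._real.balance(account_id)
        return self._cache[account_id]

service = AccountServiceProxy(RealAccountService())
print(service.balance("alice"))  # → 100
```

Because the client codes against `AccountService`, the surrogate can later be replaced with a genuine network stub without touching client code—exactly the kind of recurring, already-documented solution the pattern literature captures.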

The software architecture paradigm shift is driven by powerful forces, including the physics of relativity and chaos theory, as well as changing business requirements and relentless technology evolution. Making the shift requires proactive architecture planning, pattern/framework reuse, and proper tools for defining and managing architecture. The potential benefits include: development project success, multiorganizational interoperability, adaptability to new business needs, and exploitation of new technologies. The consequences of not making the paradigm shift are well documented; for example, 5 out of 6 corporate software projects are unsuccessful. Using architecture to leverage the reversed assumptions of distributed processing can lead to a reversal of misfortunes in software development.
