4.3. Systems Integration

We extend our discussion of architectural issues related to client-server systems integration by covering a number of additional areas from which important questions arise. Handling tough questions about your architecture is one of the key skills we hope you will learn in our drill school. You may have detected an attitude of skepticism in some of the previous remarks; we believe this skepticism is appropriate to a mature understanding of technology capabilities and how they apply to system development. Object-oriented architects are responsible for developing technology plans that manage these underlying technologies in a way that supports the full system life cycle, which may run as long as 15 years for systems in the public sector.

The key concepts of technology management allow us to predict that today's technologies will evolve into new ones that may obsolesce many current interfaces and infrastructure assumptions. One approach to mitigating this inevitable commercial technology change is to define application software interfaces which the architect controls and maintains, isolating application software from the bulk of the commercial infrastructure that is subject to rapid innovation. We have covered these concepts, and the details of how to implement them, in some of the authors' other writings; please refer to the bibliography.
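As a minimal sketch of this idea, the following Java fragment shows an application-defined interface that the architect owns, with a thin adapter mapping it onto a commercial product. All of the names (DocumentStore, VendorXRepository) are hypothetical; the point is only that application code depends on the stable interface, never on the vendor API.

import java.util.HashMap;
import java.util.Map;

// Stand-in for a commercial repository API; hypothetical, for illustration only.
class VendorXRepository {
    private final Map<String, byte[]> data = new HashMap<>();
    void put(String id, byte[] content) { data.put(id, content); }
    byte[] get(String id) { return data.get(id); }
}

// The architect-controlled application interface. Application subsystems
// program against this interface and remain unaffected when the vendor
// product changes or is replaced.
interface DocumentStore {
    void store(String id, byte[] content);
    byte[] retrieve(String id);
}

// Adapter that maps the stable application interface onto the vendor product.
// A vendor interface change is absorbed here, in one place.
class VendorXDocumentStore implements DocumentStore {
    private final VendorXRepository repository = new VendorXRepository();

    public void store(String id, byte[] content) { repository.put(id, content); }
    public byte[] retrieve(String id) { return repository.get(id); }
}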

"Use open architectures. You will need them once the market begins to respond" [Rechtin 97].

Taking a somewhat cynical view of open systems technologies, one can conclude that the developers of standards, in both formal and consortium organizations, represent the interests of technology suppliers. There are significant differences in quality between specifications created for the general information technology market, which account for the vast majority of object technology specifications, and specifications created for particular mission-critical markets such as telecommunications. Many general information technology standards do not support testing; in fact, only about 5 or 6 percent of formal standards have readily available test suites, and the majority of testable standards are language compilers, such as FORTRAN and Pascal compilers. The CORBA technology market has moved to resolve this issue, at least for the base CORBA specifications. Significant additional work must occur before general information technology standards truly meet the needs of object-oriented architects.

What about the Internet? The integration of Internet technologies is a high-priority capability in many organizations, and the use of intranets and extranets is becoming mission-critical for large and medium-sized enterprises. There has been substantial research and development in this domain. Figure 4.1 shows some of the interfaces that have been created to integrate object technologies with the Internet. Commercial products that tie CORBA technologies directly to Internet protocols such as HTTP are readily available, and ORB technologies have been implemented in an Internet-ready fashion, for example as Java language-based ORBs integrated with browser environments. The use of object-oriented middleware is an important and revolutionary step in the evolution of the Internet: it provides the ability to rapidly create new types of services and dynamically connect to new types of servers. These capabilities go well beyond what is currently feasible with technologies like HTTP and Java Remote Method Invocation (RMI), which is a language-specific distributed computing capability.
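To make the RMI comparison concrete, here is a minimal Java RMI sketch that compresses server and client into one program for brevity. The service name and method are hypothetical; the example assumes the default registry port.

import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

// A hypothetical remote interface: RMI ties the distributed contract to Java itself.
interface QuoteService extends Remote {
    String getQuote(String symbol) throws RemoteException;
}

class QuoteServiceImpl implements QuoteService {
    public String getQuote(String symbol) { return symbol + ": 42.00"; }
}

public class RmiSketch {
    public static void main(String[] args) throws Exception {
        // Server side: export the object and register it under a well-known name.
        QuoteService stub =
                (QuoteService) UnicastRemoteObject.exportObject(new QuoteServiceImpl(), 0);
        Registry registry = LocateRegistry.createRegistry(1099);
        registry.rebind("QuoteService", stub);

        // Client side: look up the stub and invoke it as if it were a local object.
        QuoteService remote = (QuoteService) LocateRegistry.getRegistry().lookup("QuoteService");
        System.out.println(remote.getQuote("ACME"));
    }
}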

Figure 4.1. Integration of Multiple Technology Bases


Figure 4.2 addresses the integration of Microsoft technologies with other object-oriented open systems capabilities. Based upon industry-adopted standards, it is now possible to integrate shrink-wrapped applications into distributed object environments supporting both CORBA and COM+. Application architectures can implement this capability in several ways. One approach is to adopt the shrink-wrapped product's own interfaces directly into the application software architecture; in this way the application's subsystems become directly dependent upon proprietary control interfaces, which may be obsolesced at the vendor's discretion. The alternative approach is to apply object wrappers that encapsulate the complexity of the shrink-wrap interfaces and isolate the proprietary interfaces from the majority of the application subsystem interactions. The same level of interoperability can be achieved with either approach, but the architectural benefits of isolation can prove significant.
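A sketch of the wrapping approach follows, with hypothetical names throughout: the wrapper exposes only the small, stable set of operations the application actually needs, and keeps the broad proprietary control interface out of the rest of the system.

// Hypothetical stand-in for a broad, proprietary shrink-wrap control interface.
class ProprietarySpreadsheetControl {
    void openWorkbook(String path) { /* vendor-specific behavior */ }
    void selectSheet(int index) { /* vendor-specific behavior */ }
    void setCell(String ref, String value) { /* vendor-specific behavior */ }
    void recalc() { /* vendor-specific behavior */ }
    // ...dozens of additional vendor-specific operations omitted...
}

// The narrow, application-defined wrapper interface that other subsystems see.
interface ReportWriter {
    void writeFigure(String cellRef, double value);
}

// Object wrapper: confines every proprietary call to a single class.
class SpreadsheetReportWriter implements ReportWriter {
    private final ProprietarySpreadsheetControl control = new ProprietarySpreadsheetControl();

    SpreadsheetReportWriter(String workbookPath) {
        control.openWorkbook(workbookPath);
        control.selectSheet(0);
    }

    public void writeFigure(String cellRef, double value) {
        control.setCell(cellRef, Double.toString(value));
        control.recalc();
    }
}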

Figure 4.2. Systems Integration with Object Wrapping


What about security? Computer security is a challenging requirement that is becoming a necessity because of the increasing integration and distribution of systems, including intranets and the Internet itself. One reason security is so challenging is that it has frequently been supplied to the market as a niche or nonstandard capability. For example, the COM+ technology and its ActiveX counterparts lack a security capability: when one downloads an ActiveX component from the Internet, that component has access to virtually any resource in the operating-system environment, including data on disk and system resources that could be used for destructive purposes. The implication is that it is unwise to use ActiveX and COM+ for Internet-based transactions and information retrieval. The Object Management Group has addressed this issue in response to end-user questions about how this capability can be supplied; it adopted the CORBA Security Service, which defines a standard mechanism by which multiple vendors can provide security capabilities in their infrastructure implementations. Computer security has been implemented in selected environments, and an understanding of the CORBA Security Service and how to apply it will be important in enabling organizations to satisfy this critical requirement.
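The following is not the CORBA Security Service API; it is only a hypothetical Java sketch of the general idea a standardized security service supports, namely interposing an access-control decision at the object boundary before a sensitive operation is performed.

import java.util.Set;

// Hypothetical credential and policy abstractions; this is not the CORBA
// Security Service API, only an illustration of interposed access control.
class Credentials {
    final String principal;
    final Set<String> roles;
    Credentials(String principal, Set<String> roles) {
        this.principal = principal;
        this.roles = roles;
    }
}

interface AccessPolicy {
    boolean isAllowed(Credentials caller, String operation);
}

interface RecordService {
    String readRecord(Credentials caller, String recordId);
}

// Guarded implementation: every invocation is checked before the sensitive
// operation runs, instead of granting callers blanket access to resources.
class GuardedRecordService implements RecordService {
    private final AccessPolicy policy;

    GuardedRecordService(AccessPolicy policy) { this.policy = policy; }

    public String readRecord(Credentials caller, String recordId) {
        if (!policy.isAllowed(caller, "readRecord")) {
            throw new SecurityException("caller not authorized to read records");
        }
        return "record " + recordId;   // placeholder for the real retrieval
    }
}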

What about performance? Object-oriented technology has suffered criticism with respect to performance. Because object technology provides more dynamic capability, certain overheads come with it. In the case of the OMG and CORBA specifications, it is fair to say that the CORBA architecture itself has no particular performance consequences, because it is simply a specification of interface boundaries, not of the underlying infrastructure. In practice, CORBA implementations have similar underlying behaviors, with a few exceptions. In general, a CORBA implementation can be thought of as managing a lower-level protocol stack, in many cases a socket-level or TCP/IP layer. Because the CORBA mechanisms provide a higher level of abstraction that simplifies programming, the ORB infrastructure must intelligently establish communications between the client program and the server program when an initial invocation occurs. For that initial invocation, some additional overhead and handshaking are required; without this infrastructure, the handshaking would have to be programmed manually by the application developer.
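Here is a hedged Java sketch of the kind of connection management an ORB performs on the application's behalf; the host, port, and framing are hypothetical. The first invocation pays the connection-setup cost, and later invocations reuse the established link.

import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.net.Socket;

// Hypothetical client-side proxy: connects lazily on the first call,
// then reuses the socket for subsequent calls.
class LazyEchoProxy {
    private Socket socket;
    private DataOutputStream out;
    private DataInputStream in;

    String echo(String message) throws IOException {
        if (socket == null) {
            // First invocation: the expensive part (connection establishment).
            socket = new Socket("server.example.com", 9000);
            out = new DataOutputStream(socket.getOutputStream());
            in = new DataInputStream(socket.getInputStream());
        }
        // Subsequent invocations: just send and receive over the open link.
        out.writeUTF(message);
        out.flush();
        return in.readUTF();
    }
}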

Once the ORB establishes the lower-level communication link, it can pass messages efficiently through the lower-level layer. In benchmarks of ORB technologies, some researchers have found that CORBA technologies are actually faster in some applications than comparable programs written using remote procedure calls. Part of the reason is that all of the middleware infrastructures are evolving and becoming more efficient as technology innovation progresses. On second and subsequent invocations in an ORB environment, performance is comparable to remote procedure calls and in some cases faster. The primary performance distinction between ORB invocations and custom programming at the socket layer lies in the marshaling algorithms, which are responsible for taking application data passed as parameters in an ORB invocation and flattening it into a stream of bytes that can be sent through the network by lower-level protocols. Machine-generated marshaling code cannot be quite as efficient as custom marshaling written by a programmer who knows how to tailor it for a specific application. Because of the increasing speed of processors, however, the performance of marshaling algorithms is a fairly minuscule consideration overall compared to other factors such as the actual network communication overhead.
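As an illustration of what marshaling code does, here is a minimal hand-written Java sketch that flattens a hypothetical request (an operation name, an account ID, and an amount) into a byte stream and reads it back; a generated stub would produce equivalent code automatically.

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class MarshalingSketch {
    // Marshal: flatten the parameters of one invocation into bytes.
    static byte[] marshal(String operation, String accountId, long amountCents) throws IOException {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buffer);
        out.writeUTF(operation);
        out.writeUTF(accountId);
        out.writeLong(amountCents);
        out.flush();
        return buffer.toByteArray();
    }

    // Unmarshal: rebuild the parameters from the byte stream on the other side.
    static void unmarshal(byte[] wire) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(wire));
        String operation = in.readUTF();
        String accountId = in.readUTF();
        long amountCents = in.readLong();
        System.out.println(operation + "(" + accountId + ", " + amountCents + ")");
    }

    public static void main(String[] args) throws IOException {
        unmarshal(marshal("credit", "ACCT-42", 1500L));
    }
}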

Properly designed distributed object infrastructures give you additional options for managing performance. Because these infrastructures have the property of access transparency, it is possible to substitute alternative protocol stacks underneath the generated programming interfaces. Once the application developer understands and stabilizes the required interfaces, alternative protocol stacks can be programmed to provide various qualities of service. This is conformant with best practice in benchmarking and performance optimization: first determine a clean architecture for the application interaction, next identify the performance hot spots, and only then compromise the architecture as needed to perform optimizations. Within a single object-oriented program, compromising the architecture is one of the few options available. In a distributed object architecture, because the actual communication mechanisms are hidden by access transparency, it is possible to optimize the underlying communications without directly compromising the application software structure. In this sense, distributed object computing has some distinct advantages for performance optimization that are not available under ordinary programming circumstances.
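A hedged sketch of the substitution idea: the application programs against a stable interface, and a different transport can be slotted in underneath without touching application code. The interface and both implementations are hypothetical.

// Stable application-facing interface; application code depends only on this.
interface OrderChannel {
    void submit(String orderText);
}

// Default transport: a plain TCP-style send (details elided in this sketch).
class TcpOrderChannel implements OrderChannel {
    public void submit(String orderText) {
        // open a socket, marshal, send ... omitted for brevity
        System.out.println("TCP send: " + orderText);
    }
}

// Alternative transport offering a different quality of service, for example
// a batching stack; it can replace the TCP version transparently.
class BatchingOrderChannel implements OrderChannel {
    private final StringBuilder batch = new StringBuilder();
    public void submit(String orderText) {
        batch.append(orderText).append('\n');   // accumulate and flush later
    }
}

// Application code is unaware of which transport it was given.
class OrderEntry {
    private final OrderChannel channel;
    OrderEntry(OrderChannel channel) { this.channel = channel; }
    void placeOrder(String orderText) { channel.submit(orderText); }
}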

What about reliability? Reliability is a very important requirement when multiple organizations are involved in various kinds of transactions. It is not acceptable to lose money during electronic funds transfers or to lose critical orders during mission-critical interactions. The good news is that distributed object infrastructures, because of their increasing level of abstraction from the network, do provide some inherent benefits in the area of reliability. Both COM+ and CORBA support automatic activation of remote services. CORBA provides this in a completely transparent manner, called persistence transparency, whereas COM+ requires the explicit allocation of an interface pointer, a programmed operation that also manages activation of the service once it completes. If a program providing CORBA services fails, CORBA implementations are obligated to attempt to restart the application automatically. In a COM+ environment, one would have to allocate a new interface reference and reinitiate communications.
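A hypothetical Java sketch of the client-side pattern this last case implies: if an invocation fails because the remote service is gone, reacquire the reference and retry once. The lookup mechanism and service interface are assumptions, not a real COM+ or CORBA API.

// Hypothetical service interface and resolver; illustration only.
interface InventoryService {
    int stockLevel(String sku) throws RemoteFailure;
}

class RemoteFailure extends Exception {}

interface ServiceResolver {
    InventoryService resolve();   // e.g., a naming-service lookup in a real system
}

// Client-side wrapper: on failure, re-resolve the reference and retry once.
class ReconnectingInventoryClient {
    private final ServiceResolver resolver;
    private InventoryService service;

    ReconnectingInventoryClient(ServiceResolver resolver) {
        this.resolver = resolver;
        this.service = resolver.resolve();
    }

    int stockLevel(String sku) throws RemoteFailure {
        try {
            return service.stockLevel(sku);
        } catch (RemoteFailure e) {
            service = resolver.resolve();   // reacquire the reference
            return service.stockLevel(sku); // single retry; give up if it fails again
        }
    }
}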

An important capability for ensuring reliability is the use of transaction monitors. The Object Management Group has standardized the interfaces for transaction monitors through the Object Transaction Service, and this interface is available commercially from multiple suppliers today. Transaction monitors support the so-called ACID properties: atomicity, consistency, isolation, and durability. They provide these properties independent of how the application software is distributed. The use of middleware technologies with transaction monitors provides a reasonably reliable level of communications for many mission-critical applications; other niche-market capabilities that go beyond this level can be discovered through cursory searches of the Internet. In conclusion, what is needed from commercial technology to satisfy application requirements is quality support for user capabilities: quality specifications that meet the user's needs, and products that meet the specifications.
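A hedged sketch of transaction demarcation through a transaction monitor follows. It uses the Java Transaction API (JTA) as a stand-in for the Object Transaction Service and assumes a container or transaction manager that exposes UserTransaction under the standard JNDI name; the Account type is hypothetical.

import javax.naming.InitialContext;
import javax.transaction.UserTransaction;

// Hypothetical application type.
interface Account {
    void debit(long amountCents);
    void credit(long amountCents);
}

public class FundsTransfer {
    public void transfer(Account from, Account to, long amountCents) throws Exception {
        // Assumes a JTA-capable container providing the standard JNDI binding.
        UserTransaction tx = (UserTransaction)
                new InitialContext().lookup("java:comp/UserTransaction");
        tx.begin();
        try {
            from.debit(amountCents);
            to.credit(amountCents);
            tx.commit();     // both updates take effect, or neither does
        } catch (Exception e) {
            tx.rollback();   // atomicity: undo any partial work
            throw e;
        }
    }
}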

In order to ensure that these capabilities are supported, new kinds of testing and inspection processes are needed that can keep pace with the rapid technology innovation occurring in consortia and among proprietary vendors today. End users need to play a larger role in driving the open systems processes in order to realize these benefits. In terms of application software development, each development team needs one or more object-oriented architects who understand these issues and are able to structure the application to take advantage of commercial capabilities while mitigating the risks of commercial innovations that may result in maintenance costs. The use of application profiles, at the system profile level for families of systems and at the functional profile level for individual domains, should be considered when application systems are constructed. It is also important for software managers to be cognizant of these issues and to support their staffs in the judicious design, architecture, and implementation of new information systems.
