Chapter 5. NDC Protocols

The advent of NDC made communication protocols the essential underpinning of computing models. As all computing models, regardless of theoretical equivalence, must recognize and adjust to the implications and realities of communication, communication protocols must form part of the basis for any computing model.

Datacom protocols are the essence of the Internet. Understanding datacom at a rudimentary level should be as important to NDC developers as any language, development environment, or deployment platform consideration.

Conceptual Background

The Church-Turing thesis states that all computing models, known or unknown, are effectively equivalent. What this means is that no matter which theoretical computing model you may choose, it will give rise to equivalent results, provided the method in question meets certain criteria, namely (where M is a method or procedure):[1]

  1. M is set out in terms of a finite number of exact instructions with each instruction expressed by means of a finite number of symbols;

  2. M will, if carried out without error, always produce the desired results in a finite number of steps;

  3. M can (in principle) be carried out by a human being unaided by any mechanical device save paper and pencil;

  4. M demands no insight or ingenuity on the part of the human being in question.

Turing formally approached computability from the perspective of the Turing machine, claiming that whenever there is an effective method for obtaining values from a mathematical function, that function can be computed with a Turing machine. The Church-Turing thesis extends the metaphor, asserting that any model for computation, the use of which is effective or mechanical with respect to a method as delineated above, is equivalent to any other computational model. This remains, in fact, a thesis—more than pure conjecture but less than a rigorously proven theorem. While the implications for NDC may not be immediately obvious, they are nonetheless real.

Given Kauffman's observations with respect to innovation and fitscapes, the inevitability of innovation in computing models should be assumed, despite the Church-Turing thesis. This means that Deutsch's Eighth Fallacy (the network is homogeneous) is itself subject to innovative pressures; the network will paradoxically become more heterogeneous over time rather than less so, despite good intentions, standardization efforts, and market-domination strategies to the contrary. That is not to say the heterogeneity will not be expressed in new and different fashions; indeed, over time a “sedimentation” occurs from a functional perspective, as what was once a differentiator becomes the must-have-to-compete item of tomorrow, which includes computing interfaces and document structures. From a theoretical perspective, whether it be a finite-state machine, a pushdown automaton, a standard Turing machine, or a Swarms computing model, the Church-Turing thesis pretty well guarantees that new networked arrays of various compute devices will continue to emerge as new problem spaces become available for innovation. The importance of interoperability, which is founded on communication protocols that must therefore be standardized, cannot be overstated.

NDC interoperability, which includes protocol as well as data and document format issues, is covered later in this book. Network protocols, which serve as the communication underpinnings for interoperability, are important to understand, from both a theoretical and an implementation perspective. The history of Internet protocols is an interesting study as well.

A Brief History: From ARPANET to the Modern Internet

The U.S. government's Advanced Research Projects Agency (ARPA) commissioned the ARPANET project in 1968, during the heyday of the Cold War. ARPA itself was formed after the 1957 launch of Sputnik, the first human-made satellite to orbit the earth. Sputnik was a creation of the now-defunct Soviet Union, and since the United States was highly reactive to Soviet developments at the time, the competitive pressures between the two nations gave rise to considerable research in many areas, including computer science. The objectives of the original ARPANET were to assist U.S. institutions in gaining experience in interconnecting computers and to improve computer science research productivity through a better sharing of resources and ideas. Military needs were also reflected in the original plans, as were scientific considerations.

From the beginning of ARPA to the first implementation of the ARPANET, considerable debate on a variety of issues, technical as well as political and organizational, delayed the ultimate awarding of the first contract to actually build such a network. At length, academic institutions were selected on the basis of either their network support capabilities or other unique resources that could be mustered. Technical capabilities were also a consideration since the development of protocols that would allow communications between a variety of types of computers was deemed a necessity for the ARPANET. The first four sites selected met the needs of that aboriginal network:

  • Stanford Research Institute (SRI)

  • University of California at Los Angeles (UCLA)

  • University of California at Santa Barbara (UCSB)

  • University of Utah (UTAH)

The original communication protocol called for an interface message processor (IMP) to be located at each site. The thinking at the time was that facilitation of communication between a wide variety of computer systems could be managed by one communication system, requiring deployment of a standard communication processor to handle the interface to the ARPANET. Thus, each site would have to write only one interface to one standard IMP. The beginnings of communication standards for the Internet were embodied in that early system, which utilized a modified Honeywell DDP-516 computer, a refrigerator-sized unit with a massive 12 KB of memory, one of the most powerful minicomputers on the market at the time. IMP-1 was delivered to UCLA two weeks before Labor Day in 1969. A few weeks later, SRI had its own IMP installed and was ready to test the first connection of the ARPANET. Using telephone lines for network communication, an undergraduate student at UCLA named Charley Kline began a coordinated login process. The first successful message, “LO,” was followed by another first: the network crashed before the G could be successfully transmitted.

From those inauspicious beginnings, the ARPANET slowly grew. Once two IMPs existed, the pioneers had to implement a working communication protocol. The initial set of host protocols included a remote login for interactive use (telnet) and a way to copy files between remote hosts (File Transfer Protocol), ancestors of similar capabilities that are still widely used every day on the Internet.

An asymmetric, client-server communication protocol was all that was initially available. Over the next few months, a symmetric host-host protocol was defined. An abstract implementation of the protocol became known as NCP (from the host-run network control program that had to be hacked into each operating system in machine-specific assembler language). That, along with telnet and FTP, seemed to be enough for a while. A seven-layer gestalt had yet to emerge in the collective consciousness of computer science.

Over the following decade, the expansion of hosts on the ARPANET was slow but steady. By 1981, 213 hosts exchanged data, with a new host added approximately every 20 days. In 1982, when TCP/IP became the communications protocol of choice for connected hosts, the term “Internet” was used for the first time, and the modern packet-switching network was truly born. That same year, coincident with the adoption of TCP/IP, came the formation of a small private company called Sun Microsystems, whose first product featured a built-in TCP/IP communications stack. Since then, every product sold by Sun has been Internet-ready (no other company can make that claim for as long or for as many systems as can Sun). Sun's IPO came several years later, but the die had been cast.

From a governance perspective, the growth of the Internet has been a case study in cooperative anarchy. In 1979, the Internet Control and Configuration Board (ICCB) was formed, chartered with oversight functions of design and deployment of protocols within the connected Internet. In 1983, the ICCB was rechristened the Internet Activities Board (IAB), with an original charter similar to that of the ICCB. The IAB evolved into a de facto standards organization that effectively ratified Internet standards.

In 1986, the IAB morphed again to provide an oversight function for a number of subsidiary groups. Two primary groups emerged: the Internet Research Task Force (IRTF) to supervise research activities related to TCP/IP Internet architecture, and the IETF (the Internet Engineering Task Force) to concentrate on short- to medium-term engineering issues related to the Internet. Both are still in existence. The IETF today still serves as the global clearing-house for RFCs (requests for comments), which were inherited from the aboriginal ARPANET project. RFCs are the means by which de facto Internet standards are adopted. IRTF and IETF are made up of engineers, enthusiasts, and representatives from like-minded organizations, representing virtually all global cultures today. The IETF and its supervisory body, the Internet Engineering Steering Group (IESG); the IRTF and its supervisory body, the Internet Research Steering Group (IRSG); and the IAB all operate under the auspices of the Internet Society (ISOC). The ISOC is an international nonprofit, nongovernmental, professional membership organization that focuses on Internet standards, public policy, education, and training.

By 1984, when William Gibson coined the term “cyberspace,” the number of Internet hosts exceeded 1,000. Three years later, it was 10,000. On November 1, 1988, when a malicious program called the “Morris Internet Worm” (which took advantage of an easily patched design flaw in the sendmail daemon) caused widespread havoc, there were approximately 60,000 Internet hosts. In 1995, the year the Network Age began, the Internet grew from roughly 4.8 to 9.5 million hosts, and URLs started appearing on television commercials. Email addresses, once the exclusive domain of computer scientists, programmers, and geeks, became more important than zip or area codes for upwardly mobile aspirational achievers, led primarily by the commercial sensibility of an under-30 generation and whisperings of dotcom riches to come. Since then, we have all learned a lot about irrational exuberance, but the Internet, despite pitfalls, is still growing.

As I write this, there are maybe 150,000,000 host systems on the Internet. An Internet host is one with a registered IP address; my systems at home, for example, all share one IP address that we multiplex through our home router. My wife and I have at least six and sometimes seven systems connected by one IP address most of the time, so the number of nodes on the Internet is considerably higher than the number of hosts would indicate, given the growing popularity of home LANs. Estimates for the number of Internet users worldwide today range from 550 to 650 million, or roughly 10 percent of all living human beings. From the perspective of Metcalfe's law, that may not be enough; given the events of September 11, 2001, it may be too many. The network metaphor works both ways, for good and for evil, another paradox in an age so riddled with paradox. But capitalism, if it is to be viable, must foster growth, which requires innovation, which demands increasing productivity and ephemeralization. As such, the great potential of the Internet and NDC is as yet unrealized.

Back at the Stack: OSI 7

Any honest examination of NDC today would not be complete without reference to existing datacom structures, for example, the stack. TCP/IP, which provided the datacom genesis of the Internet, was introduced in 1982. That same year, the International Organization for Standardization (ISO) published the Open Systems Interconnect (OSI) 7-layer conceptual model of data communications. The OSI 7 model wasn't adopted as a standard until 1984, so the implementation of TCP/IP predates the OSI 7 model by a few years. The OSI 7 model was designed for use with mainframes, identifying protocols necessary for those systems to communicate with devices such as modems and terminals. Although the OSI 7 model never saw the kind of wildly successful direct implementation that TCP/IP did, understanding the concepts behind OSI 7 helps put both TCP/IP and current NDC protocol research and development into better perspective.

The ISO OSI 7-layer model provides the means whereby we can meaningfully discuss the nature of datacom and how it can be conceptually approached. In the OSI 7 model, communication functions are partitioned into seven layers as shown in Figure 5.1.

Figure 5.1. ISO OSI 7-layer conceptual model

Each layer has a specific set of functions; there is a minimization of interlayer chatter; functionality is not evenly distributed across the layers. The rationale for taking this approach was to allow layers of technology to move forward without breaking everything else. For example, as new means of physically connecting nodes are engineered, it would be a shame to have to toss out all investments in software simply to accommodate that new means. Were it not for a conceptual datacom model like the OSI 7 stack, going from twisted pair to coax to optics to wireless would mean that everything that relies on datacom would have to be designed anew; all NDC applications would have to be rewritten. While that, in and of itself, may not necessarily be a bad thing, given the considerable investments made and anticipated in NDC, a monolithic approach was considered unthinkable by the standards body.

In the OSI 7 model, each layer provides services to the next higher layer, including primitives and associated data. Each layer therefore relies on the next lower layer. The OSI 7 stack identifies seven distinct layers for the facilitation of datacom, as shown in Figure 5.2.

Figure 5.2. ISO OSI 7-layer separation

At the bottom, the Physical layer represents physical circuits and transmission of bits. Here, electrons or photons or the chirping songs of modems are simply transmitted. An understanding receiver is implied. At the Data Link layer, error detection and correction, a measure of flow control, and the assembly of bits into discrete “frames” or “packets” take place. The Network layer, too, contributes to flow control; it's a complex layer that facilitates the routing and transfer of data across a network, the topology of which can change in an instant.

The Transport layer provides for a logical connection—a virtual circuit. This layer can either wait and ensure reliable transfer of data, or it can proceed without doing so; transmission speed is always a tradeoff with reliability, and the Transport layer negotiates that deal.

The Session layer is where the user virtual circuit is established. The session begins, the session ends; there is work that must be done to construct and deconstruct the virtual circuit.

The next layer up is the Presentation layer, at which data transformation can occur. A change from big-endian to little-endian, for example, can be handled at this conceptual layer.
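
As a minimal sketch of that kind of transformation, the Java fragment below (using the standard ByteBuffer class) writes the same 32-bit value once in network byte order and once in little-endian order. Nothing here is tied to any real Presentation layer protocol; it simply shows the byte-order conversion the layer is conceptually responsible for.

    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;

    public class EndianSketch {
        public static void main(String[] args) {
            int value = 0x01020304;

            // Encode the same 32-bit value in big-endian (network) order and in
            // little-endian order -- the sort of data transformation assigned
            // conceptually to the Presentation layer.
            byte[] big = ByteBuffer.allocate(4).order(ByteOrder.BIG_ENDIAN).putInt(value).array();
            byte[] little = ByteBuffer.allocate(4).order(ByteOrder.LITTLE_ENDIAN).putInt(value).array();

            System.out.printf("big-endian:    %02x %02x %02x %02x%n",
                    big[0] & 0xFF, big[1] & 0xFF, big[2] & 0xFF, big[3] & 0xFF);
            System.out.printf("little-endian: %02x %02x %02x %02x%n",
                    little[0] & 0xFF, little[1] & 0xFF, little[2] & 0xFF, little[3] & 0xFF);
        }
    }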

And finally, the Application layer; the place where user programs would conceptually play. The circuit is complete at this layer; input is as expected in a format that requires no massaging or error correction, freeing NDC applications to work without concern for routing, transmission errors, or packet assembly.

Part of the conceptual model is the idea of peer layer interaction—that one peer layer talks to another peer layer, but peer boundaries are not crossed. For example, the Transport layer on node A would not speak directly with the Data Link layer on node B. Layered functions must exist on both nodes for such a model to be valid. Peer layers communicate according to a set of rules, or protocols, that control data and dictate its format, coordination, and timing. Flow control is also implied.
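
The idea of peer layers, and of services offered to the layer above, can be sketched in a few lines of Java. The two toy layers below are purely illustrative (no real header formats are involved): each layer wraps whatever the layer above hands it, and only its peer on the receiving node removes that wrapper again.

    // A toy illustration of layered services and peer protocols. Each layer
    // wraps the payload handed down from above in its own marker, and its
    // peer on the receiving node strips exactly that marker off again.
    // All names and formats here are illustrative, not real protocols.
    interface Layer {
        String send(String payload);    // service offered to the layer above
        String receive(String wire);    // peer-side inverse of send
    }

    class TransportLayer implements Layer {
        public String send(String payload) { return "[transport]" + payload; }
        public String receive(String wire) { return wire.substring("[transport]".length()); }
    }

    class NetworkLayer implements Layer {
        public String send(String payload) { return "[network]" + payload; }
        public String receive(String wire) { return wire.substring("[network]".length()); }
    }

    public class PeerLayerSketch {
        public static void main(String[] args) {
            Layer transportA = new TransportLayer(), networkA = new NetworkLayer();
            Layer transportB = new TransportLayer(), networkB = new NetworkLayer();

            // Node A descends its (abbreviated) stack; the resulting string is
            // what would be handed to the Physical layer for transmission.
            String onTheWire = networkA.send(transportA.send("user data"));
            System.out.println("on the wire: " + onTheWire);

            // Node B ascends its stack; each layer undoes only its peer's wrapper.
            String delivered = transportB.receive(networkB.receive(onTheWire));
            System.out.println("delivered:   " + delivered);
        }
    }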

Direct communication takes place only at the Physical layer; all other layers are indirect. If we were to look at node A and node B over an arbitrary network, there could be an indeterminate number of nodes in between that also host a Network stack, as shown in Figure 5.3. Note that each such intermediate stack belongs to a node that presumably hosts software complying with the same conceptual model; note also that in order to function as a routing node, all that node must do is ascend the stack to the Network layer in order to facilitate the further transmission of data.

Figure 5.3. ISO OSI 7: Network stack

Effectively, end-to-end communication is facilitated from the Transport layer up in the OSI 7 conceptual model. OSI 7 provides a means whereby we can reasonably consider evolution of datacom going forward, but implementations of that model are another matter. In the OSI 7 model, user data, data transfer, and links between host and network are clearly delineated. In the TCP/IP reality of the world, that delineation is not so clear.

TCP/IP

The structure of TCP/IP can be visualized as shown in Figure 5.4.

Figure 5.4. TCP/IP structure

The user exchange of data is bounded by Application, Presentation, and Session, the three top layers of the conceptual stack. The bottom three handle the network-dependent exchange of data. The Transport layer, the layer that evenly divides the conceptual stack, is interesting to explore, given the implementation of TCP/IP and the reality of the world. At that layer we find the transmission control protocol (TCP), which is made of two parts: the interface (such as a socket) and the “on the wire” protocol (that which is actually delivered and managed by vendors).
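
The interface half of that split is what an NDC programmer actually touches. Below is a minimal sketch in Java, assuming a reachable web host (the host name and the raw HTTP request are placeholders); everything on the wire (segments, acknowledgments, retransmission) is handled by the vendor-supplied stack beneath the socket.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.io.PrintWriter;
    import java.net.Socket;

    public class TcpInterfaceSketch {
        public static void main(String[] args) throws Exception {
            // The socket is the programmer-visible interface to TCP; the on-the-wire
            // protocol (sequencing, acknowledgment, retransmission) runs beneath it.
            try (Socket socket = new Socket("example.org", 80)) {   // hypothetical endpoint
                PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
                BufferedReader in = new BufferedReader(
                        new InputStreamReader(socket.getInputStream()));

                // The application supplies its own message format (here, a bare
                // HTTP request); TCP itself only moves the bytes reliably.
                out.print("HEAD / HTTP/1.0\r\n\r\n");
                out.flush();

                String line;
                while ((line = in.readLine()) != null) {
                    System.out.println(line);
                }
            }
        }
    }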

In the Internet protocol suite (whence TCP and UDP are derived), the responsibilities of the top three conceptual layers—Application, Presentation, and Session—are assumed by the programs that operate at those levels. As shown in Figure 5.5, this means that those programs must take responsibility for the tasks that have conceptually been assigned to those layers.

Figure 5.5. Internet protocol suite

At the Transport layer, both TCP and UDP operate (along with other protocol options, discussed in later chapters); Transport is where the virtual circuit is made. Think of it as the warehouse for a shipping company. Packages arrive, packages leave; the responsibility of the warehouse is to ensure that “repackaging” takes place. All items from one truck from one location are unloaded and marked for a particular location and then repacked into another truck. The warehouse supervisor and the processes he oversees are analogous to the Transport layer. Packages that require signed receipts for delivery are akin to TCP; those requiring no signature and no assurance of error-free delivery, like bulk snail mail, are more analogous to UDP (User Datagram Protocol).

This brief discussion of the OSI 7-layer conceptual model and the TCP/IP implementation will serve as a basis for protocol discussions throughout the remaining chapters. Much of NDC is now defined and shaped by datacom protocols; indeed, the emergence of Web Services, Jini network technology, the JXTA Project, wireless and mobile computing, and much more is very much a story of protocol evolution.

Email

Once the Internet hit its first plateau of critical mass, it started to dawn on people that it might be nice if the Internet could do something useful beyond swapping interesting research data. Two of the earliest protocols to open the Internet for use outside of strictly technical applications were RFCs 0821 (Simple Mail Transfer Protocol: SMTP) and 0822 (Standard for the format of ARPA Internet text messages). Both were introduced in August 1982 and both have since been made obsolete by subsequent definitions, but they paved the way for the beginnings of email, which remains one of the most useful, and most widely used, applications on the Internet. Reflecting the sensibilities of its name, the first SMTP protocol was simple. It focused on doing one thing really well—sending 7-bit plain text messages between client and server over IP (Internet Protocol), the Network layer protocol of the Internet protocol suite. Since then, SMTP has evolved to accommodate modern messaging requirements.
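
What “simple” meant in practice was a short, human-readable command dialogue over a TCP connection. The sketch below is a minimal, hypothetical Java client: it assumes a mail transfer agent listening on localhost port 25, uses placeholder addresses, reads only one reply line per command, and skips the error handling a real client would need.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.io.PrintWriter;
    import java.net.Socket;

    public class SmtpSketch {
        public static void main(String[] args) throws Exception {
            // Assumes a mail transfer agent listening on localhost:25 (hypothetical).
            try (Socket s = new Socket("localhost", 25);
                 BufferedReader in = new BufferedReader(new InputStreamReader(s.getInputStream()));
                 PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {

                System.out.println(in.readLine());          // 220 greeting from the server
                send(out, in, "HELO client.example");       // identify the sending host
                send(out, in, "MAIL FROM:<alice@example.org>");
                send(out, in, "RCPT TO:<bob@example.org>");
                send(out, in, "DATA");                      // server replies 354: go ahead
                out.print("Subject: Hello\r\n\r\nA 7-bit plain text message.\r\n.\r\n");
                out.flush();
                System.out.println(in.readLine());          // 250 message accepted
                send(out, in, "QUIT");
            }
        }

        // Send one SMTP command and echo one line of the server's reply
        // (multi-line replies are not handled in this sketch).
        private static void send(PrintWriter out, BufferedReader in, String cmd) throws Exception {
            out.print(cmd + "\r\n");
            out.flush();
            System.out.println(in.readLine());
        }
    }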

In recent years, the evolution of SMTP has accelerated, reflecting the broadening use of the Internet. Extended SMTP (ESMTP) and Multipurpose Internet Mail Extension (MIME) are two major advances that have enabled more interesting instances of spam in the modern age. ESMTP allows vendors to extend SMTP in order to manage broader classes of messages. MIME established rules for the labeling and transmission of data types beyond plain text within messages; MIME tells mail systems how to process parts of the message body so recipients see (theoretically) exactly what the sender intended. (Hence, the more interesting spam.) MIME also serves as one approach for the transmission of streaming data (audio or video).
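
A MIME message body is still just labeled text. The fragment below builds a small multipart/alternative body by hand, with no mail library, purely to show the Content-Type labeling that MIME adds; the boundary string and the content are placeholders.

    public class MimeSketch {
        public static void main(String[] args) {
            String boundary = "----=_example_boundary";   // arbitrary part separator

            // A two-part MIME body: plain text plus an HTML alternative. The
            // Content-Type headers tell the receiving mail system how to process
            // each part -- the labeling role MIME plays beyond 7-bit plain text.
            String message =
                  "MIME-Version: 1.0\r\n"
                + "Content-Type: multipart/alternative; boundary=\"" + boundary + "\"\r\n"
                + "\r\n"
                + "--" + boundary + "\r\n"
                + "Content-Type: text/plain; charset=us-ascii\r\n"
                + "\r\n"
                + "Hello in plain text.\r\n"
                + "--" + boundary + "\r\n"
                + "Content-Type: text/html; charset=us-ascii\r\n"
                + "\r\n"
                + "<p>Hello in <b>HTML</b>.</p>\r\n"
                + "--" + boundary + "--\r\n";

            System.out.println(message);
        }
    }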

Once email was established on the nascent Internet, some companies began to see the potential. Sun Microsystems was one of a handful of pioneers. Other early adopters (those companies that would later provide TCP/IP ports) included Digital Equipment Corporation (DEC, later acquired by Compaq), Apollo Computer (later acquired by Hewlett-Packard), Sperry-Univac (merged with Burroughs to form Unisys), IBM, Bull, Nixdorf (later acquired by Siemens), ICL (later acquired by Fujitsu), and a host of early UNIX players, most of whose names are now lost to the fickle memory of history. Along with email, phase I of Internet adoption technologies also included FTP, gopher (an indexed, early step toward HTML to facilitate searching), and news groups.

The first release of the Mosaic browser, the precursor of Netscape, wouldn't come until 1993. So Internet adoption phase I, from 1982 to 1993, was pretty well limited to academic institutions, geeks, computer scientists, programmers, and other computer enthusiasts. And yet by early 1989, in the wake of the Morris Worm, the critical need for one of the earliest agent-based (and therefore visionary) NDC application models found its fulfillment in the Simple Network Management Protocol (SNMP). There were finally enough nodes, LANs, and WANs to merit investment in systems and network management tools and infrastructures.

At this juncture, early adopters could be classified as “commercial” in every sense of the word and had learned that resource management, the lack of which was estimated to have cost $15.5 million (arguably because of the Morris Worm), was simply too costly to ignore any longer; the Internet had reached a commercial level of respectability if only due to maintenance costs, not to mention national strategic value from the perspective of the U.S. government. Paradoxically, we may have Robert Morris to thank for that early wake-up call. In the following months, Internet management software was born. Note that a standard protocol was the basis for the evolution.

Systems and Network Management Before Protocols

What specifically is management software? While some applications might have an obvious management function, such as the ability to react to and recover from a fault on an arbitrary network node, other applications might not. As local programming is intrinsically different from NDC, applications behave differently with respect to Deutsch's Eight Fallacies—some may be network aware, others may not, depending on the network-awareness of the NDC developer and the tools used during development. Any node may be host to a mixed bag of applications from a network-awareness perspective.

Based on traditional approaches, and, consequently, the capabilities expressed by existing or legacy infrastructures, the high-level functional areas for system and network management historically have been the following.

  • Configuration management: inventory, configuration, and provisioning

  • Fault management: reactive and proactive network fault management

  • Performance management: number of packets dropped, timeouts, collisions, CRC errors

  • Security management: not traditionally covered by SNMP-based applications

  • Accounting management: cost management and charge-back assessment

  • Asset management: equipment, facility, and administration personnel statistics

  • Planning management: trend analysis to help justify a network upgrade or bandwidth increase

Management applications built since the adoption of TCP/IP tended to address network challenges in one or more of these areas. Noticeably absent from this functional list is the general area of storage management. The need for the management of data itself came during a later phase of Internet adoption, after the PC's stealth invasion of the corporate desktop, which began in earnest in the early 1990s.

Sun Microsystems was one of the pioneers in management software; the SunNet Manager (SNM), first shipped by Sun in the late 1980s, was one of the first market entries in the wake of the Morris Worm. SNM provided a cost-effective, extensible product that could expand to meet the needs of businesses in the early 1990s, when the networking of smaller systems moved from strategic goal to commercial reality. In November 1997, Sun announced that no further development work would be funded for SNM, though echoes of it still remain in various instantiations, perhaps even now a cash cow too lucrative to slaughter.

Novell, a pioneer in networking itself, was also one of the early entrants in the management space, although its focus on a more proprietary network protocol (SPX/IPX) instead of TCP/IP proved to be an evolutionary dead end. That, along with the unfortunate karma of having its technology targeted by Microsoft, left Novell decimated compared to its once enviable position in early PC LANs.

Today, a number of players exist in the management space, most of whom have provided point-to-point solutions within proprietary frameworks (although the advent of Web Services investments has once again changed the landscape). Computer Associates, IBM (Tivoli), Hewlett-Packard (OpenView), and BMC represented the bulk of the big business players when it came to enterprise-scale management framework packages during the late 90s and early 00s. Other providers included Platinum, Cisco, Microsoft, and, of course, Sun Microsystems with its Enterprise Manager products.

The point-to-point approach gets a little dicey in a heterogeneous environment. Consider the network in Figure 5.6. Each node in the network—each network appliance, whether it be an end-user system, a server, an array of storage devices, a printer, a wireless Internet appliance, or a network enabler, such as a hub or a router—requires some form of management during its service cycle. Each appliance, coming from a different vendor, has its own interface. As such, point-to-point players in the management space must provide a solution for each possible connection within the network. Just as a network's value increases exponentially with the number of nodes, so do the management headaches. Web Services will make matters either better or worse, depending on the evolution of the management fitscape in the wake of emerging NDC innovation pressures.

Figure 5.6. Point-to-point systems and network management

At this juncture, several players in the management fitscape are scrambling to plug a hole in the Web Services story: the promise of composable components exposes something of a transaction-assurance nightmare if examined in the cold light of day. Given this problem, one healthy first step toward ensuring a viable yet interoperable NDC infrastructure that can actually evolve beyond an N-tier application model is a systems and network management infrastructure that is Web Services aware.

SNMP and UDP

The systems and network management fitscape is not without standards. As alluded to earlier, SNMP was the first standard to emerge from the IETF to facilitate management activities.

The odd story of SNMP's ascendancy didn't begin with the Morris Worm. Internet standards take a bit longer to mature than the few months that elapsed between November 1988 and February 1989, when the first two SNMP RFCs were accepted. The genesis of SNMP was earlier, in the second half of the 1980s, when the task force concluded that, simply due to its size, the rapidly growing Internet could no longer be managed on an ad hoc basis, and decided to use OSI's Common Management Information Protocol (CMIP). The Open Systems Interconnect Reference Model (whence OSI 7 hails) had been proposed in 1974 by ISO (the International Organization for Standardization) to address problems in networking that arise when proprietary approaches prevail. To fit CMIP into the TCP/IP-based Internet, a number of minor changes were needed; the modified protocol was called CMOT (Common Management Over TCP/IP). But despite the apparent good fit, the development of OSI management took quite some time.

Because the IETF didn't want to just sit and wait for results, it decided to further develop the already existing Simple Gateway Monitoring Protocol (SGMP), which was defined in 1987 by an RFC to manage the ever-expanding router network on the Internet, and use that modified protocol, which would become SNMP, as a stopgap solution. SGMP was quite short-lived, dealing only with router management. But it did provide a basis for a broader management protocol, and thus SNMP traces its roots to SGMP. As a short-term solution, SNMP seemed to do the trick.

The task force intended to eventually replace SNMP with a structural solution based on OSI's CMIP. But the IETF was surprised, to say the least. Standards can become established in many ways, market adoption among them (even without monopoly at the helm); SNMP was the right solution at the right time, and adopters emerged in droves. Within a few years, SNMP demonstrated that it could satisfy the protocol demands of many managed applications and thus dealt with the majority of devices linked to the Internet at that time. As a result, today's producers of many datacom devices still incorporate SNMP by default. In short, the protocol has become one of the most important standards for network management. The unexpected success of SNMP was complete in 1992, when the IETF dropped its original plan of replacing SNMP with CMOT. Given the subsequent slow acceptance of CIM (the Common Information Model from the Distributed Management Task Force), it now seems unlikely that OSI will ever be used for network management.

SNMP uses UDP as the transport mechanism for SNMP messages, as diagrammed in Figure 5.7.

Figure 5.7. Transport mechanism for SNMP messages

In the TCP/IP suite, UDP allows application programs to send a datagram to other application programs on a remote machine. Basically, UDP is an unreliable transport service; delivery and duplicate detection are not guaranteed, as they are with TCP. UDP uses IP to transport a message between the machines, as do TCP and a lot of other Transport layer protocols. It is connectionless and does not use acknowledgments, establish flow control, or control the order of arrival. As a consequence, UDP messages sometimes get lost. But the tradeoff at the Transport layer is reliability versus cost (overhead); thus, the least intrusive choice for a management protocol is UDP as the transport mechanism. There are clear implications for NDC developers who play at the Transport layer: SNMP messages, like any others that utilize UDP, can be lost.

Clearly, the designers of SNMP could have elected to use TCP rather than UDP if data loss were a serious enough problem for management applications. But management issues are generally not that time critical. If an agent raises an alarm and does not hear back from a known manager in an arbitrary amount of time, it can simply raise another alarm. By the same token, if a manager polls an agent and does not hear back, another effort to poll the agent can easily be made without significant impact on either the manager or the agent, depending upon the nature of the relationship and the resources involved. For most management issues that rely on SNMP, UDP does the job and has the least overall impact.
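
That poll-and-retry pattern is easy to sketch with Java's datagram classes. The payload below is a placeholder rather than a real BER-encoded SNMP message, and the agent address is a documentation address; only the timeout-and-resend behavior is the point.

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;
    import java.net.SocketTimeoutException;
    import java.nio.charset.StandardCharsets;

    public class UdpPollSketch {
        public static void main(String[] args) throws Exception {
            byte[] request = "poll".getBytes(StandardCharsets.US_ASCII); // placeholder, not real SNMP
            InetAddress agent = InetAddress.getByName("192.0.2.10");     // hypothetical agent address
            int port = 161;                                              // conventional SNMP port

            try (DatagramSocket socket = new DatagramSocket()) {
                socket.setSoTimeout(2000);                   // wait at most two seconds for a reply

                for (int attempt = 1; attempt <= 3; attempt++) {
                    socket.send(new DatagramPacket(request, request.length, agent, port));
                    DatagramPacket reply = new DatagramPacket(new byte[512], 512);
                    try {
                        socket.receive(reply);               // blocks until a datagram arrives or times out
                        System.out.println("reply received on attempt " + attempt);
                        return;
                    } catch (SocketTimeoutException lost) {
                        // UDP offers no delivery guarantee; the request (or the reply)
                        // may simply have been lost, so poll again.
                        System.out.println("no reply on attempt " + attempt + ", retrying");
                    }
                }
                System.out.println("agent unreachable after 3 attempts");
            }
        }
    }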

SNMP succeeded thanks to a simple set of attractive features:

  • It can be used to manage (almost) every node linked to the Internet.

  • Implementation costs are minimal.

  • Its management capabilities can easily be extended by defining new managed objects.

In addition, SNMP is fairly robust. Despite the occasional lost datagram, management applications using SNMP proved effective in solving early network growth problems as the modern Internet-dependent enterprise evolved. Even in the event of a partial failure of the network, well-programmed management frameworks can often continue functioning.

And yet, even with a standard like SNMP, the basic problems of managing a heterogeneous network environment remain. Granted, every vendor on the network can agree to communicate via TCP/IP and to use SNMP when it comes to exposed management interfaces. But fundamentally, without common agreement on exposed managed-object properties, gnarly point-to-point solutions will naturally evolve, just as they have over the past decade. As mentioned earlier, the recent upswing in Web Services investment compounds matters further, perhaps creating opportunities for profitable problem solving.

Early Network Agents

Protocols form the basis for any meaningful evolution of the Internet. Systems and network management frameworks and applications are just an early example of the relationship between protocols, problems, and profitable opportunities. That same pioneering approach to an application framework gave rise to what was arguably the first Internet agent-based model as well.

Agents, as discussed in Chapter 3, are separate entities that act on behalf of other entities. Early systems and network management implementations, of necessity, pioneered approaches to NDC agents. Consider the problems inherent simply in backing up, or making copies of, documents that are distributed over a local heterogeneous environment; once critical documents leave the domain of the central server, the need for an agent technology arises.

From the perspective of a backup server or that central place in which an organization wants to maintain timely copies of critical documents, it is necessary to dispatch an agent to each node to act on behalf of the server. Those agents need to accomplish much in the way of information collection, analysis, distillation, and state determination, and report that information back to the central server. The agents are subsequently responsible for scheduling and supervising the transfer of only the appropriate documents across the network for central server backup. Inherent questions include the following:

  1. Which documents (or portions of documents) are candidates for backup?

  2. If all documents are always candidates for backup, what is the cost in network overhead, and can all nodes take this approach to ensure adequate backup?

  3. What is the local node environment (which by definition will be different than that of the central server) with respect to file systems, data and time representation, file access characteristics (including permissions and namespace rules), and available resources?

  4. What translation mechanisms are needed to preserve document integrity during and after backup?

  5. What are the document restore implications?

  6. How often are backups warranted? When should they be scheduled? Are means of backup scheduling available on the node in question, or must scheduling be the domain of the backup server?

  7. How are backups initiated? In what format are documents transmitted? What actions are taken when backups fail?

Questions like these must be answered in any design of an NDC application that would offer backup services in an arbitrary heterogeneous LAN, which is just one example of the kinds of services to be considered in the systems and network management space. And this is assuming that proper protocols (such as TCP/IP and SNMP) are already in place. A generalized approach to agent technology was not part and parcel of the SNMP work, nor was it intended to be. Systems and network management frameworks therefore needed to create agent mechanisms in order to effectively provide services—and they did. But those frameworks are very much like the point-to-point example given above. Each agent was specifically crafted for the species of node to which it would be applied.
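
One way to picture what such node-resident agents must promise the central server is a narrow contract like the hypothetical Java interface below. Every name in it is illustrative rather than any real API, and each method maps loosely onto the questions listed above.

    import java.util.List;

    // A hypothetical contract between a central backup server and the agent it
    // dispatches to each node; the names are illustrative, not a real API.
    interface BackupAgent {

        /** Survey the local file system and report which documents have changed
         *  since the last backup (question 1 above). */
        List<String> candidateDocuments();

        /** Describe the local environment -- file system, time representation,
         *  available resources -- so the server can plan any needed translation
         *  (questions 3 and 4). */
        NodeProfile describeNode();

        /** Whether this node can schedule its own backups, or whether the
         *  server must drive the schedule (question 6). */
        boolean supportsLocalScheduling();

        /** Transfer one document to the server in an agreed wire format, returning
         *  false so the server can reschedule on failure (question 7). */
        boolean transfer(String documentPath, String wireFormat);
    }

    /** Minimal description of the node an agent runs on (illustrative only). */
    final class NodeProfile {
        final String fileSystem;
        final String timeZone;
        final long freeBytes;

        NodeProfile(String fileSystem, String timeZone, long freeBytes) {
            this.fileSystem = fileSystem;
            this.timeZone = timeZone;
            this.freeBytes = freeBytes;
        }
    }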

Early management frameworks were constrained to some workable subset of node types in the modern LAN. For example, if I'm a framework vendor, I may provide an agent to manage a SPARC/Solaris 2.7 or greater system, but not one for any earlier version, despite the fact that some nodes running earlier versions may remain on some networks. Once a node is devoted to a given task, which it does well, why disturb it? The adage “If it ain't broke, don't fix it” applies fully to NDC application provisioning. As such, the vast number of node configurations on the Internet tends to exceed even the most ambitious point-to-point type solutions.

Generalized agent frameworks, while imaginable with the Java platform and promised by the Semantic Web, remain the stuff of Internet fiction. Systems and network management implementations have had to solve some small part of the problem in order to provide services of sufficient value to carve out a profitable market niche. But in the end, pioneers only mark rough trails that others may one day pave.

Later Players at the OSI 7 Transport Layer

In the context of systems and network management, there is much more to say in the way of implementations and protocols. Discussions of the CIM and relationships with SNMP implementations have not been included. From the perspective of network protocols, the SNMP segue serves to illustrate an important point regarding protocols, problems, and profits.

The point was made earlier that the Transport layer of the OSI 7 conceptual model was one of considerable interest (SNMP plays at the TCP/IP Application layer, which covers all three of the top OSI 7 layers, or the user data area). Much work is done or not done at the Transport layer, depending on application needs. It's interesting to note that the only standard datacom layer that is specifically cited in Deutsch's fallacies is the Transport layer (transport cost is zero). Like the shipping warehouse described earlier, or a hub in the hub-and-spoke configuration of a modern airline, the Transport layer eliminates the need for point-to-point networks, which must be avoided if the Internet is to be viable. But as Deutsch has asserted, this flexibility does not come without cost.

IP plays at the Network layer. IP is a connectionless protocol featuring some type-of-service options (which differ between IPv4 and IPv6), but much of the action—and therefore cost—occurs at the Transport level. One 8-bit field in the IP header identifies the Transport layer protocol encapsulated in a given packet. TCP is protocol number 6 from the IP perspective; UDP is Transport protocol 17. An 8-bit field can specify up to 256 different protocols, so there is plenty of room for others at Transport.
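
As a minimal sketch, assuming a raw 20-byte IPv4 header sitting in a Java byte array, reading that 8-bit protocol field is a one-line operation:

    public class IpProtocolField {
        // Transport protocol numbers as carried in the 8-bit protocol field
        // of the IPv4 header (byte offset 9).
        static final int TCP = 6;
        static final int UDP = 17;

        /** Return the encapsulated Transport protocol number of an IPv4 header. */
        static int protocolOf(byte[] ipv4Header) {
            // A single unsigned byte, so values 0 through 255 are possible --
            // room for many Transport protocols beyond TCP and UDP.
            return ipv4Header[9] & 0xFF;
        }

        public static void main(String[] args) {
            byte[] header = new byte[20];   // a blank 20-byte IPv4 header, for illustration only
            header[9] = (byte) UDP;         // mark the payload as UDP
            System.out.println("protocol number: " + protocolOf(header));   // prints 17
        }
    }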

A number of other protocols beyond TCP and UDP have been standardized over the years, many of which are available in the Internet protocol suite. Many purposes are served at the Transport layer by options like Transport Multiplexing (TMux, protocol 18), Host Monitoring Protocol (HMP, protocol 20), and Multicast Transport Protocol (MTP, protocol 92).

The following protocols are currently assigned an IP Protocol number and can therefore play at the OSI 7 Transport layer:

  • AH: IP Authentication Header
  • AX25: Internet Protocol Encapsulation of AX25 Frames
  • CBT: Core Based Trees
  • EGP: Exterior Gateway Protocol
  • ESP: Encapsulating Security Payload
  • GGP: Gateway to Gateway Protocol
  • GRE: Generic Routing Encapsulation
  • HMP: Host Monitoring Protocol
  • ICMP: Internet Control Message Protocol
  • ICMPv6: Internet Control Message Protocol for IPv6
  • IDPR: Inter-Domain Policy Routing Protocol
  • IFMP: Ipsilon Flow Management Protocol
  • IGMP: Internet Group Management Protocol
  • IP: IP Encapsulation (useful for Wireless/Mobile hosts)
  • IPPCP: IP Payload Compression Protocol
  • IRTP: Internet Reliable Transaction Protocol
  • MEP: Minimal Encapsulation Protocol
  • MOSPF: Multicast Open Shortest Path First
  • MTP: Multicast Transport Protocol
  • NARP: NBMA Address Resolution Protocol
  • NETBLT: Network Block Transfer
  • NVP: Network Voice Protocol
  • OSPF: Open Shortest Path First Routing Protocol
  • PGM: Pragmatic General Multicast
  • PIM: Protocol Independent Multicast
  • PTP: Performance Transparency Protocol
  • RDP: Reliable Data Protocol
  • RSVP: Resource ReSerVation Protocol
  • SCTP: Stream Control Transmission Protocol
  • SDRP: Source Demand Routing Protocol
  • SKIP: Simple Key Management for Internet Protocol
  • ST: Internet Stream Protocol
  • TCP: Transmission Control Protocol
  • TMux: Transport Multiplexing Protocol
  • UDP: User Datagram Protocol
  • VMTP: Versatile Message Transaction Protocol
  • VRRP: Virtual Router Redundancy Protocol

Each Transport layer protocol features attributes germane to solving certain specific NDC application tasks, providing a rationalized basis for growth of Internet usefulness and scope. But our examination of Transport layer needs is far from complete. Indeed, many current NDC standardization efforts are reexamining Transport layer needs and costs, as the network metaphor extends further and further toward the edges of innovation. We'll come back to Transport issues and potential with comparisons of competing NDC frameworks, as Transport assumptions and needs must be meaningfully addressed by real implementations.
