8

_______________

A Marriage Made in Heaven: Computers and Communications

 

 

Introduction—The Computer Comes of Age

On February 14, 1996, Vice President Al Gore flew into Philadelphia to celebrate the fiftieth anniversary of the ENIAC, or Electronic Numerical Integrator and Computer, the world's first general-purpose digital computer. (The claim, as we will see, is disputable.) The thirty-ton machine, constructed under contract to the United States government and actually put to use before the 1946 unveiling, was built to meet the needs of the U.S. Army Ordnance Corps to compute all army and air corps artillery firing tables. While in some respects ENIAC was a beginning, in other respects it was a culmination of many steps, frequently traced to Joseph-Marie Jacquard's 1801 invention of a loom programmed by a continuous belt of punched cards that controlled the woven patterns and allowed them to be changed. Among the important contributions made before 1876, when the telephone was invented, clearly the most important was the work of Charles Babbage (1791–1871). His two difference engines were the first automatic calculating machines. At the same time, he developed the use of punched cards for computation and as a storage device—a prophetic achievement.1

As advances continued after the invention of the telephone, developments were focused largely on the process of rapidly and accurately calculating—as the eventually adopted term computer implies. For example, in 1911 Elmer Sperry invented an analog computer that made automatic corrections to his gyrocompass. Five years later Sperry produced another analog computer that plotted both a warship's and its target's course and speed.2 Sperry's ingenuity, like that of so many others before and after, was focused on the economic exploitation of his ideas; he had founded the Sperry Gyroscope Company in 1910 to exploit his past and future achievements. That company was one of the ancestors of computer giant Unisys Corp. In similar fashion IBM can trace its lineage to Herman Hollerith, who won a Census Bureau contest with a method to collect, count, and sort 1890 census data. Hollerith developed the idea of storing the data on punched cards in which holes represented the number of people in the family, occupation, and so on. The cards were placed in a card-reading machine that used an electric current to determine whether holes were punched in each entry. The machine tallied each entry and displayed the results on dials on the front of the machine. In 1896 Hollerith formed the Tabulating Machine Company, one of the ancestors of International Business Machines—IBM.3

The path from calculators to the computer—a system consisting of a central processor, memory, and input-output peripheral equipment (such as disk drives) that is capable of manipulating data in many forms (such as numbers, symbols, graphics, and so on)—was a long one marked by many scientific and technological breakthroughs. We will now consider some of the most important early breakthroughs that had a significant bearing on the integration of computing and telecommunications. Some of these developments occurred as abstract contributions, while others were in response to economic needs.

We begin in 1929, a year that saw both the crest of the post–World War I boom and the onset of the Great Depression. No industry more characterized that boom than electricity. Lamps, refrigerators, radios, and a host of other appliances boosted the demand for electricity dramatically in the 1920s. In turn this called for the design and construction of huge generating stations and transmission networks, which created complex problems of balancing the supply and demand of electricity in a network. Engineers had to know how load changes in any part of the network would affect other parts of the network as well as the connections between adjacent networks. The differential equations involved grew so complex that electrical engineers could not solve them. Given the anticipation that electric-power consumption would continue to grow, solving these mathematical problems became a very high priority. In 1929 General Electric Company and the Massachusetts Institute of Technology introduced the A-C Network Analyzer, a special-purpose machine capable of solving the complex differential equations raised by electricity distribution. Based on the efforts to produce the A-C Network Analyzer, MIT's Vannevar Bush built the Bush Differential Analyzer, an all-purpose machine that could solve long, intricate differential equations previously considered unsolvable. Bush's machine, which he conceived as the precursor of larger ones that could solve even more complex differential equations, was an analog device that performed its calculations largely with mechanical parts while using vacuum tubes as storage devices. It triggered further interest in the development of machines that could solve complex mathematical problems, especially those associated with military applications.

Bush and his group were not the only team working on the development of a computer to solve economic problems in the pre–World War II period. AT&T, too, faced mathematical issues that impeded growth, just as the electricity industry had. AT&T's major problem concerned relays, which are small, glass-encapsulated electromechanical switching devices that close in response to the number dialed and send pulses through coils wound around the relay capsules. Relays automatically convert phone numbers into routing instructions. In the mid-1930s, the mathematics concerned with relays and transmission had become sufficiently complex that telephone engineers found ordinary desktop calculators inadequate to perform the calculations necessary to satisfy the company's growth expectations. In 1937, George Stibitz, a Bell Labs mathematician, experimented with relays that he brought home from an AT&T scrap heap. His fundamental insight was that the ones and zeroes used in the binary numbering system correspond to the two-state quality—on and off—of the relay. The homemade device that Stibitz constructed attracted the attention of his Bell Labs superiors, who in 1938 commissioned a team to construct a machine based on Stibitz's ideas. The Complex Number Calculator, fully operational in 1940, did not have a stored program but could rapidly add, subtract, multiply, and divide complex numbers. More important for the development of the computer-telecommunications interface was that the Complex Number Calculator was the first machine to be used from a remote location. In 1940 Stibitz, attending a meeting of the American Mathematical Society in Hanover, New Hampshire, arranged for a telephone connection between a teletype machine in the meeting area and the calculator in New York. Conference participants entered problems and the machine answered in less than one minute. Later variants of the Bell Labs machine could also engage in multitasking.4

While practical problems triggered research at various locations, two mathematicians made theoretical contributions that paved the way for the remarkable progress that coalesced various programs during World War II. In 1937 Claude E. Shannon earned his master's degree from MIT by using Boolean algebra to describe the behavior of circuits. George Boole (1815–64) had developed an algebra of logic in which different classes can be characterized by the absence or presence of the same property—that is, by zeroes and ones. Shannon drew a parallel between switching circuits and the algebra of logic: true and false values are analogous to open and closed circuits. Similarly, the binary system can correspond to high or low voltage, a punched hole or an unpunched area on a tape, or any other duality that corresponds to the simple operations of a machine. The superiority of digital techniques over analog ones for directing information through switches and circuits became apparent as a result of Shannon's work; building a reliable machine whose circuits could tell the difference between a one and a zero was clearly much easier than designing circuits that had to distinguish among a larger number of values. For the same reason, a digital machine would be far more accurate. Shannon's conception "that information can be treated like any other quantity and be subjected to the manipulation of a machine" had a major influence on the general-purpose computers that were soon designed.5
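Shannon's parallel is easy to make concrete. The short sketch below is my illustration, not drawn from the text, and it adopts one conventional mapping: a closed relay contact is treated as true and an open contact as false, so that contacts wired in series behave like the Boolean AND and contacts wired in parallel like the Boolean OR.

```python
# A minimal sketch of Shannon's correspondence between switching circuits and
# Boolean algebra (illustrative convention: closed contact = True, open = False).

def series(*contacts: bool) -> bool:
    """Current flows through contacts in series only if every contact is closed (AND)."""
    return all(contacts)

def parallel(*contacts: bool) -> bool:
    """Current flows through contacts in parallel if any contact is closed (OR)."""
    return any(contacts)

a, b, c = True, False, True       # closed, open, closed
print(series(a, c))               # True  -> the circuit conducts
print(series(a, b))               # False -> the open contact breaks the path
print(parallel(b, c))             # True  -> an alternate closed path exists
```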

In 1937 British mathematician Alan Turing published another seminal paper that shaped the conception of the computer. Working in the area of mathematical logic, Turing sought to prove that there was no mechanical process by which the provability of every mathematical assertion could be decided, then a topic of considerable debate among mathematical logicians. In the course of his proof, Turing invented what is called a "Turing Machine," an abstract machine that "could recognize and process symbols, including arithmetic, according to a 'table of behavior' with which it was programmed."6 Using the "machine" to compute numbers, Turing showed that there was a universal machine that, with appropriate programming, could do the work of any machine designed for special-purpose problem solving. The machine processed not numbers but symbols, and was therefore capable of processing any kind of information for which instructions could be written. Turing, in a word, invented a theoretical computer.
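The idea of a "table of behavior" can be illustrated with a toy simulator. The example below is entirely hypothetical (the state names and the bit-inverting table are mine, not Turing's notation); it shows only the essential cycle the text describes: read a symbol, consult the table, write, move the head, and change state.

```python
# A toy Turing-machine simulator: the "table of behavior" maps (state, symbol)
# to (symbol to write, head movement, next state).

def run(tape, table, state="start", blank="_"):
    cells = dict(enumerate(tape))
    head = 0
    while state != "halt":
        symbol = cells.get(head, blank)
        write, move, state = table[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    lo, hi = min(cells), max(cells)
    return "".join(cells.get(i, blank) for i in range(lo, hi + 1))

# A table of behavior that inverts a binary string and halts at the first blank.
INVERT = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run("10110", INVERT))   # prints 01001_ (the trailing blank marks the halt)
```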

By about 1940 many of the intellectual and practical elements were in place for the development of the computer. Considering these developments will help us understand the later interface of computing and telecommunications and the important role that digital technology has played, and will continue to play, in that marriage. While computers would eventually have been built based on the foregoing developments, there is no doubt that World War II clearly forced the pace. As we saw, the U.S. military faced major trajectory problems that could not be solved with machines available in the early 1940s. In 1942 the U.S. Army Ordnance Corps commissioned a group of scientists and engineers centered at the University of Pennsylvania to create a machine that could rapidly solve trajectory problems. Dr. John Mauchly and his chief assistant, J.P. Eckert, Jr., headed the group and collected the requisite information and talented people to do the job. One of their early decisions was to construct a machine with as many electronic parts (and as few mechanical parts) as possible. John Atanasoff, a professor at Iowa State University who had designed large-scale calculating machines, influenced Mauchly's vacuum-tube design. By the time the ENIAC was completed in February 1946, it was the largest and most complicated electronic device ever built. It contained 18,000 vacuum tubes of sixteen basic types, 70,000 resistors, 10,000 capacitors, and so on. The machine was eight feet high, three feet wide, and almost one hundred feet long. It weighed thirty tons and consumed 140 kilowatts of power. By the time the machine was constructed, the total cost was almost $487,000.

Because vacuum tubes generate considerable heat, draw enormous amounts of power, and have a high failure rate, they imposed a serious limitation on the postwar utility of computers. At this point center stage was once again occupied by AT&T, for in 1947 three Bell Labs scientists invented the point-contact transistor—the first kind of transistor. AT&T had long undertaken research that it hoped would develop technologies to supplant the vacuum tube for the reasons cited above. The research focused on semiconductors, which have characteristics different from both insulators (such as glass), which resist the flow of electricity, and conductors (such as copper), which allow electricity to flow. Semiconductors, such as silicon, can be altered to act as either a conductor or an insulator. Transistors are semiconductor devices designed to perform the functions of vacuum tubes, but they are much more efficient, reliable, and compact than vacuum tubes. In addition, they switch on and off much faster than vacuum tubes, an obviously important need for computers. In contrast to vacuum tubes, transistors stand up well to mechanical shock and do not require a warm-up period. Transistors can also serve as memory devices. It is no exaggeration to assert that computers would not have developed as a large-scale business without the advent of the transistor. Indeed, the power requirements alone of vacuum tubes would have precluded it.

While the transistor's potential importance was recognized quickly, it was not until 1954 that Texas Instruments, under license from AT&T, produced the first commercial silicon transistor. In that same year Texas Instruments designed the first marketable transistor radio.7 By 1960 all significant computers used transistors for their logic functions. But in the meantime ENIAC, even though based on vacuum tubes, signaled the beginning of the new computer industry. And important events occurred quickly that would have major implications for the connections between computers and telecommunications. In June 1948 a team at Manchester University in England produced the first computer with a true stored-program capability, a conception for which American mathematician John von Neumann was partly responsible. Because rapid developments and technological breakthroughs were reasonably anticipated (and in fact occurred), and because the National Bureau of Standards promised orders, Eckert and Mauchly, ENIAC's two leading figures, established the Electronic Control Company in September 1946 and set up shop over a clothing store in Philadelphia. Other orders and promises followed as Eckert and Mauchly developed an all-purpose machine that they called the Universal Automatic Computer, or Univac. Nevertheless, financial and engineering difficulties (especially with vacuum tubes) as well as missed deadlines compelled the founders to sell their company to Remington Rand in March 1950.

At the time of the buyout by Remington Rand (which became Sperry-Rand after a 1955 merger), the Univac machine was under construction. In 1951 the Univac was completed for the Bureau of the Census, and in 1954 the first commercial installation took place at a General Electric plant in Louisville, Kentucky. But it was earlier, in 1952, that Univac became a household word, when CBS used it to predict the outcome of that year's presidential election.

It was not Sperry-Rand, however, that was destined to dominate the mainframe computer industry and to move the product to new uses, including telecommunications. The company that came to dominate the mainframe industry was, of course, IBM. Shortly after communist armies invaded South Korea on June 25, 1950, IBM wrote a letter to President Harry S. Truman indicating the company's intention to support the American war effort in any way possible. The theme that persistently reached the company was that the military desperately needed more computing power. IBM had assisted a Harvard University team, led by Howard Aiken, on a computer project during World War II, but it was initially reluctant to enter the fledgling industry on the theory that few customers would be found to lease such a complex machine for a large rental fee. Notwithstanding IBM's success in developing computers that served well during the Korean War, the company's attitude was that "computers afforded limited opportunities and were a sideshow when compared to the punched card accounting machines … to meet the needs of … the ordinary businessman."8 Nevertheless, as the 1950s advanced, it became apparent to IBM's leadership that the days of the accounting machine were coming to an end and that it would be supplanted by the solid-state (all-transistor) computer. IBM's engineering talent, thus, became focused on the development of computers.

One of IBM's objectives was moving from building one-of-a-kind machines to fabricating many at a time. The company also focused on automating fabrication. Finally, it sought to establish a decisive advantage over its competitors—an endeavor it christened Project Stretch. Although it had considerable success beforehand, IBM's introduction in 1964 of its breakthrough product, the System/360—the research and development costs of which were more than $1 billion—allowed the company to attain its objectives. The System/360 was not a single computer but a family of compatible machines that allowed customers to move from smaller to larger ones as their needs dictated. At the same time, IBM introduced the 1403 printer, another decisive breakthrough, which allowed customers to print at the previously unthinkable rate of eleven hundred text lines per minute. IBM's domination of the mainframe market, although challenged, continued as it introduced each new generation of products.9 That, of course, meant that when computers became integrated with telecommunications, the world would witness a contest between what appeared to be two sumo wrestlers—IBM versus AT&T.

Computers Meet Communications

If the distinction between computing—the processing of information—and communicating could have been maintained as easily in practice as it can conceptually, the history of telecommunications would have been dramatically different. But technological advances tended to break down the barriers between one sector that was to be governed by the principles of open competition and the other sector that was governed by the public service principle and was heavily regulated by national and state agencies. Because the two markets eventually came to converge, policymakers were compelled to address the issues, first, of how to draw the line and, then, of what principles should govern the large border area of computing and communications. AT&T, which already had many adversaries, now came into conflict with IBM and other computer companies. The problem was complicated by the fact that computer hardware devices with communicating abilities were only one type of interconnecting customer premises equipment (CPE). Consequently, the many interconnection issues raised in Hush-A-Phone and Carterfone were fused with the computer boundary issues. Since the interconnection issue was tied to one concerning a crucial technology, the F.C.C. had to treat it with heightened sensitivity. The agency had a public-interest mandate to facilitate the progress of the computer and its applications. AT&T had to fight not only more traditional interconnecting companies but also computer companies. AT&T's traditional CPE rivals now had powerful allies, and their claim of standing for the public interest gained more substance.

As we saw, IBM's 1964 introduction of System/360 marked its rise to preeminence in the mainframe industry. IBM's bold innovation, which allowed a single computer to perform both office management and engineering functions, and users to move upward in computers without the costly burden of developing new programs, paid off rapidly. From 1968 through the 1970s IBM held more than 65 percent of the world's general-purpose mainframe market.10 Notwithstanding IBM's preeminence in computers, it would be a mistake to view the problem of delineating the respective terrains of computing and communications as simply a clash between IBM and AT&T. Clearly, there were differences in position between the two firms, but the 1956 consent decree into which IBM had entered precluded it from engaging in the service-bureau business, except through a completely separate subsidiary. And it was precisely in the service-bureau part of the business that the initial difficulties arose. Service-bureau activities are those in which customer data are manipulated or changed. While IBM was able, in small ways, to circumvent this restriction, its major business was computer mainframes and other hardware. Nevertheless, it strongly supported the views of its many service-bureau customers.

The 1956 Western Electric consent decree appeared to prohibit AT&T from entering any industry except the regulated telephone industry, and Western Electric from manufacturing anything (with a few exceptions) other than equipment for telephone operations. Nevertheless, AT&T moved forward in the development of computers, peripherals, and software for its internal use. Computing capacity in the Bell system during the early 1960s grew exponentially, with a doubling time of less than two years. By 1969 AT&T had developed the UNIX operating system, a major advance that allowed, among other advantages, programs to operate together smoothly. By the mid-1960s, AT&T was in an anomalous position—although a leading firm in virtually all phases of computer development, it was precluded from entering the business because computing was not a regulated public service. However, when researchers at MIT's Lincoln Laboratory developed a system in the 1950s for transmitting digital signals between air defense sites over analog telephone lines, a conflict was inevitable.11

AT&T and others engaged in programs to develop and improve modems that would be able to transmit and receive data over telephone lines through paths originally or principally devised for voice transmission. As early as November 1956, AT&T decided to begin the development of a commercial data service. An experimental program called Dataphone began in February 1958, demonstrating the feasibility of sending data at various speeds over telephone lines. In this period, IBM, too, undertook important developments in the field of data communications. As the 1950s came to a close, data transmission over private lines and through the switched network constituted a rapidly expanding market.12 For this reason, AT&T became attentive to all matters that might establish a general principle. In particular, it remembered Hush-A-Phone, and the interconnection battles lost to small companies. The incident that triggered the F.C.C.’s major computer inquiries, which wrought such extraordinary changes in American telecommunications, was a relatively minor affair. The Bunker-Ramo Corporation, an early service bureau, had developed an information service for stockbrokers called Telequote III. Pursuant to this system, undertaken through arrangements with the New York and other exchanges, Bunker-Ramo gathered, updated, and stored in regionally located computers information important to stockbrokers. For example, a broker dialing a Bunker-Ramo computer could instantly obtain information about the last price sold, last price offered, and so on for any stock traded on the exchanges. The common carriers had no objection to Telequote III; indeed, one may surmise that they viewed it as the progenitor of many other lucrative dial-in services to computers.13

Telequote IV, which Bunker-Ramo sought to introduce in 1965, however, presented a problem to the carriers because it added a message-switching capacity that allowed any of the offices of the subscribing brokerage firms to communicate information to any other. Telequote IV, of course, took advantage of the fact that the same computer could not only store and transmit information but could switch it as well. Obviously, the speed with which information can be obtained and transmitted is critical to the brokerage business.

Before we consider the underlying reasons that Western Union, Bunker-Ramo's principal carrier, refused to provide Bunker-Ramo with private-line service for Telequote IV, we should first note that the refusal put Western Union and the other common carriers supporting the action on the defensive. Western Union, by refusing to supply private lines for Telequote IV, appeared to be blocking progress. Computers in 1965, as now, were held in high regard because of their extraordinary capacity to solve innumerable problems and make our lives better. Moreover, Western Union had taken on not only the computer service-bureau industry but, more importantly, the entire financial industry (including brokers and banks), which even then could foresee major uses for message switching. The distinction between message switching and circuit switching should be noted. The latter involves a carrier providing a customer with exclusive use of an open channel for direct and immediate electrical connection between two or more points. The traditional telephone system typifies circuit switching, whereas the telegraph industry uses message switching, which is indirect in the sense that information is temporarily delayed or stored before being forwarded to its destination. While the F.C.C. conceived of circuit switching by a computer as communications, message switching by a computer was the focus of controversy because it inevitably involved some processing by a computer.
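The contrast between the two modes can be sketched schematically. The example below is a hypothetical illustration (the class and method names are mine, not drawn from any carrier's system): a circuit switch gives the parties a dedicated, immediate channel, while a message switch temporarily stores each message and forwards it later.

```python
# Schematic contrast between circuit switching (dedicated live channel) and
# message switching (store-and-forward with a temporary delay).
from collections import deque

class CircuitSwitch:
    def connect(self, a, b):
        self.channel = (a, b)                        # exclusive end-to-end channel
    def send(self, text):
        a, b = self.channel
        print(f"{a} -> {b} (live channel): {text}")  # direct, immediate delivery

class MessageSwitch:
    def __init__(self):
        self.store = deque()                         # messages wait here
    def accept(self, sender, recipient, text):
        self.store.append((sender, recipient, text))
    def forward_all(self):
        while self.store:
            sender, recipient, text = self.store.popleft()
            print(f"{sender} -> {recipient} (forwarded): {text}")

circuit = CircuitSwitch()
circuit.connect("Branch office", "Head office")
circuit.send("market opening")

switch = MessageSwitch()
switch.accept("Branch office", "Exchange", "sell order")
switch.forward_all()                                 # delivery happens after storage
```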

Western Union and its ally AT&T must have been aware of the risks they faced, the potential affront to major communications users, and the fact that Bunker-Ramo would not abandon such a potentially lucrative service without a fight before the F.C.C. and, perhaps, the courts as well. It is therefore necessary to consider carefully the arguments of the public service companies. Western Union advanced two basic arguments to justify its refusal: (1) Bunker-Ramo was not entitled to switched service under the carriers' tariffs; and (2) if Bunker-Ramo engaged in Telequote IV, it would be subject to F.C.C. regulation under the Communications Act. Under the then prevailing conception of private lines, they could not be connected to the general exchange system, nor could switching be permitted between the stations. In 1942, the New York Public Service Commission stated: "Communication may be had between several stations and a central point but the different stations are not connected one with the other."14 Thus, the switching of information through Bunker-Ramo's computers would be considered a common-carrier activity, in violation of existing tariffs, and subject to F.C.C. regulation. Further, the switching would be used by Bunker-Ramo to obtain compensation from customers—precisely the activity carved out for communications public service companies. Bunker-Ramo would, according to Western Union, become a retailer of common-carrier services, with Western Union forced to become, against its will, a wholesaler. To follow the logic of Western Union's argument, there would be no obstacles preventing any large enterprise from leasing what purported to be a private line from the telephone company for computer services, and then adding switching capacity so that all customers in the network could communicate with each other. This issue had become a practical one when computer technology permitted, first, the installation of remote terminals that could transmit data to and receive data from computers, and, second, time sharing. Prior to the advent of time sharing in late 1961, a computer user had to deliver his or her problem to a computer's managers and then wait hours or days for an answer that would take the machine only a few seconds to generate. Until time sharing, the computer worked on one problem at a time. In 1959, British physicist Christopher Strachey proposed methods that would allow a computer to work on several problems simultaneously. Following through on Strachey's suggestions, MIT's computation center developed a time-sharing system in 1961, which contained many direct connections to the computer. Moreover, the MIT system was connected into the Bell and Western Union systems so that access to the MIT computer could be had from terminals anywhere in the United States and abroad.15

The time-sharing concept virtually begs for connection not only between users and the computer but also between the users themselves. Users can carry on communication among themselves through the machine, cooperatively examining a set of problems and sharing information. Although there were early difficulties in large-scale time sharing, by the mid-1960s it was clear that it was growing, in part because Bell Labs endorsed the concept in 1964 by ordering General Electric computers that featured time sharing. The boom in time sharing lowered computing costs and drew more companies into computer use, which, in turn, stimulated the rapid development of service bureaus, some of which developed specialized software for a variety of business and scientific uses.
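The core of the time-sharing idea can be captured in a few lines. The round-robin scheduler below is a simplified, hypothetical illustration (it is not the MIT or Bell Labs design): each user's job receives a short slice of the machine in turn, so several problems appear to advance at once on a single computer.

```python
# A simplified round-robin time-sharing sketch: jobs take turns in short slices.
from collections import deque

def time_share(jobs, slice_units=1):
    """jobs: mapping of user -> units of work remaining; returns the service order."""
    queue = deque(jobs.items())
    order = []
    while queue:
        user, remaining = queue.popleft()
        worked = min(slice_units, remaining)
        order.append(f"{user} runs {worked} unit(s)")
        if remaining > worked:
            queue.append((user, remaining - worked))   # rejoin the back of the line
    return order

for step in time_share({"broker": 2, "physicist": 3, "student": 1}):
    print(step)
```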

The Telequote controversy was settled through negotiation in February 1966. Bunker-Ramo devised a modified Telequote IV service that it transmitted over the telephone. But Bunker-Ramo’s complaint to the F.C.C. and the more general questions that it raised about the appropriate terrains of computing and telecommunications, unregulated and regulated activities, cried out for comprehensive treatment. The F.C.C. hoped that by addressing the issues raised by the computer-communications interface early in the development of the field, it could resolve many of them before they became full-blown controversies. Little did the agency realize, when it adopted a notice of inquiry into the interdependence of computers and communications in November 1966, that major questions would still be unresolved into the 1990s.

After a staff investigation, the compilation of a lengthy record, an extensive study by the Stanford Research Institute (SRI) commissioned by the agency, and voluminous industry comments on the SRI study, the F.C.C. issued its long-awaited tentative decision (known as the Computer I tentative decision) on April 3, 1970. Essentially, the agency concluded that computer-communication services should be governed by the principles of free competition except in circumstances of natural monopoly or where other factors "are present to require governmental intervention to protect the public interest because a potential for unfair practice exists."16 Not only would the F.C.C. not impose regulation on most computer services, but the existing common carriers (with the exception of AT&T, because of the 1956 consent decree) would be free to offer data-processing services, for three reasons. First, they would add competition and, possibly, innovation. Second, they might exploit economies resulting from integrated operation. Third, computer services might afford an opportunity for Western Union to diversify in the face of its declining message telegraph business. However, to prevent the ills that could result from admixing regulated and unregulated businesses (including subsidizing unregulated business with the profits of regulated business, or disregarding the primary responsibility of the regulated activity), the F.C.C. required strict separation of the two sets of activities and nondiscriminatory treatment between the separate subsidiary and other customers.

AT&T was barred from computer services not only because of the 1956 decree but also because of the F.C.C.’s policy of favoring smaller competitors in newer technologically based fields. The agency’s policy made the issue of what constitutes a “computer service” far more consequential than a matter of definitional craftsmanship. The niceties of definitional hairsplitting were now potentially worth billions of dollars, for if an activity was on the communications side of the boundary AT&T might be allowed to engage in it, but if on the computer side it could not. The agency invented the phrase “hybrid service” to describe the close cases, those that combined data processing and message switching to form a single integrated service. In such hybrid services where message switching was offered as an integral part of a package that was “primarily” data processing, it would be treated as data processing. But where the data processing feature or function was an integral part of and incidental to message switching, the entire service would be treated as a communications service subject to regulation. The Bell system was warned to stay on the correct side of the line. The presumption in a close case would be against allowing the Bell system to engage in a service. But could this decision rule be applied in practice?

In March 1971 the F.C.C. spoke again, generally endorsing the framework it had developed in the tentative decision, but going further. Communications common carriers would now be barred from buying data-processing services from their own affiliates. This harsh provision was adopted because the agency felt that it would be virtually impossible to investigate every dealing between a carrier and its data-processing affiliate. The maze of ambiguities guaranteed that appeals would be made. In February 1973 the Court of Appeals for the Seventh Circuit provided an anticlimax, holding that the F.C.C. lacked the authority to regulate data processing; therefore, the rules restricting dealings between common carriers and their data-processing subsidiaries were invalid. On the other hand, the rules requiring strict separation were upheld since these covered the structure of the common carriers only. The agency, however, could not concern itself with the structure or desirable behavior of the data-processing sector.17 The commission amended its rules in March 1973 to reflect the appeals court's decision—almost seven years after the inquiry began. Although the agency still had not been able to arrive at a formulation clearly distinguishing computing from communications, something far more important had occurred. Because of the F.C.C.'s new thinking about interconnection in the post-Carterfone era that began in 1968, computers came to be considered as only one kind of interconnection—albeit a very important one—into the telephone network, and the rules applying to them had to parallel those for other interconnection devices.

The F.C.C.'s first important step in anticipating the post-Carterfone controversies was to commission in June 1969 the National Academy of Sciences (NAS), through its Computer Science and Engineering Board, to study the technical factors of customer-provided interconnection. The NAS report, issued in June 1970, concluded that uncontrolled interconnection could cause harm to telephone company personnel, network performance, and telephone equipment. It determined that harm might arise as a result of hazardous voltages, excessive signal-power levels, line imbalances, or improper network-control signaling. Finding that the electrical criteria in AT&T's tariffs relating to signal amplitude, waveform, and spectrum were technically based and valid, the NAS concluded that two approaches were acceptable to provide the required degree of network protection: (1) common carriers could own, install, and maintain connecting arrangements and assure adherence to the tariff-specified signal criteria; and (2) a program certifying appropriate standards for equipment and for safety and network protection could be instituted. The NAS warned that "No certification program … will work unless proper standards have been established. In the case of telephone interconnection, standards must be developed to cover certification for installation and maintenance of equipment and facilities, as well as for equipment manufacture, since all of these combine to determine the net effectiveness of the program."18 In 1971 and 1972, the F.C.C. announced the establishment of advisory committees that would attempt to develop technical standards for protecting the network from harms that could result from customer-provided CPE. The committees included representatives from NARUC, the F.C.C., the common carriers, independent CPE manufacturers, suppliers and distributors, and others. Under the auspices of the committees, progress was made toward establishing standards agreeable to the various interests.

After appointing the advisory committees, the F.C.C. convened a joint board of state and federal regulators to report to the agency on whether customers should be allowed to furnish their own network-control signaling units and connecting arrangements and, if so, what rules the F.C.C. should institute. This body would provide crucial support for the direction that regulators would take at both the state and federal levels on interconnection issues, including those pertaining to computers. Between April 1975, when the joint board issued its report, and November, when the F.C.C. issued its First Report and Order, the agency clearly indicated the direction it would follow in interconnection. It would gradually break down the distinctions among types of interconnection equipment, leaving no exclusive preserve to the public service companies. Full-scale competition in all CPE would not take place all at once but gradually. Thus, in June 1975 the commission issued an order in which it rejected the distinction that it had earlier made between a substitution for telephone company equipment and an add-on to such equipment. The F.C.C. concluded that the distinction made no sense in terms of implementing the basic Carterfone policy. In view of the increasing complexity and integration of equipment within one shell, the distinction between substitution and add-on would have become unworkable.

In November 1975, the F.C.C. issued its Computer I First Report and Order. Complaining that the common carriers had failed to devise an acceptable interconnection program in the seven years that had elapsed since the Carterfone decision, the commission adopted a registration program for carrier- and customer-provided terminal equipment other than PBXs, KTSs (key telephone systems), main stations, party-line equipment, and coin telephones, which would be considered separately. The agency indicated, however, that it saw no reason ultimately to exclude such devices, even though the joint board had done so, because the technical concerns the joint board had raised about these classes of equipment had been rendered moot. The F.C.C. adopted a simple decision rule in the First Report and Order: if equipment was not shown to cause harm to the network, tariff provisions could not limit the customer's right to make reasonable use of the services and facilities furnished by the common carriers. Any registered device could be used by a subscriber, and registration was to be based on "representations and test data [that] … are found to comply with specific interface criteria and other requirements."19

During this period a technological advance that began quietly in 1958 would have a major impact on the eventual breakdown in the ability of regulators to sharply demarcate computing from telecommunications and, indeed, from other interconnecting devices as well. The decision rule quoted above would become extremely difficult to apply. This advance was the development of the integrated circuit, the importance of which would not be realized until the 1970s. One critical step was Fairchild Semiconductor's development of the planar process, which made transistors cheaper to manufacture. Another came in 1958, when Jack Kilby, while at Texas Instruments (TI), became the first person to conceive of integrating transistors and other components on a single chip. Kilby, however, did not develop a chip on which the devices could be interconnected except by hand. In 1959 Robert Noyce, then at Fairchild Semiconductor, led a team that conceived and developed the separation and interconnection of transistors and other circuit elements electrically, rather than physically. In 1961 the United States Patent Office granted a patent to Noyce, inaugurating a ten-year battle between the two companies and a contest for the honor of first development between the two men. Eventually both men were accorded the honor.20 By the late 1960s, large-scale integration (LSI)—the placement of more than one hundred transistors on a single chip—had become possible.

One critical development to which integrated circuits led was the microprocessor, which in turn led to the personal computer revolution. The enormous impact of these developments will be considered later. At present, however, we are concerned with the impact of integrated circuits on the dull, dumb telephone that Western Electric had been manufacturing for AT&T. In 1967 TI invented the electronic handheld calculator. By 1971, TI was able to announce the release of electronic calculators at relatively low prices; these small calculators could fit into a pocket. The consumer electronics boom, largely dominated by Japanese firms, was based on the integrated circuit and its ability to miniaturize what were previously bulkier products. The introduction in 1972 of digital watches and smart 35 mm cameras triggered the equally dramatic growth of these industries as well, as prices came tumbling down.21

If integrated circuits could revolutionize performance in a host of other small appliances, why could they not do the same in the telephone and its peripheral equipment? The thought, obviously, was not lost on AT&T and entrepreneurs seeking opportunities in customer premises equipment. Planning, of course, came earlier than development, but the central theme that drove the movement to use integrated circuits to "smarten" telephones was articulated by ITT telecommunications expert Leonard A. Muller: "And when you put intelligence into devices, you begin to do things you never dreamed possible…. When you place in the hands of a homeowner a device that has electronic intelligence and can communicate with other electronic intelligence elsewhere only God knows what the applications could be."22 Without minimizing the engineering difficulties that had to be overcome in order to develop such features as speed dialing, last-number recall, call forwarding, call waiting, and other more advanced features that chips within a telephone could perform, it is important to appreciate that the vision of such capabilities occurred during the period that the F.C.C.'s first computer inquiry was drawing to a close. New notions—that integrated circuits could allow the telephone to embrace computer functions, that the telephone was becoming a heterogeneous device capable of performing a mix of many functions, and that the possibilities opened up were limitless—had a major impact on the F.C.C.'s thinking from that period forward. At the very least, these technological possibilities thoroughly undermined the idea that AT&T should own or strongly control the installation or design of customer premises equipment. This also encouraged the agency to open the interconnection market as wide as possible.

Computer II

As the foregoing shows, it became increasingly difficult to determine in concrete situations whether computing or communications was the dominant activity in many applications. Thus, one of the central distinctions made in Computer I broke down. Further, as we have seen, post-Computer I computing equipment was capable of performing data processing and communications functions simultaneously: "Computer networks no longer followed the neat pattern of first processing information, and subsequently sending it over communications lines. Remote computer users could now receive raw or partially processed data at their locations and complete the processing themselves. In addition 'smart terminals' which were capable of performing some data processing functions were being developed."23 Computer II also grew out of a changing conception of the office during the 1970s. The office of the future would be equipped with such things as CPE that could exercise control functions for activities outside the office, and devices that allowed more kinds of information to be economically transmitted over existing telecommunications distribution facilities (such as wires, microwave, and satellite) and new kinds of facilities (such as optical fibers). Thus, control of robots in distant factories, electronic funds transfers, transmission and analysis of medical readings, and the appropriate CPE were components of the office of the future. The widespread transmission not only of data but of higher-quality video information and facsimile, holographs, electronic mail, and even complex engineering blueprints offers other examples. Although not all of these advances would occur rapidly, the actors had to ready themselves for them.24

Among the many factors contributing to the onset of what the F.C.C. knew would be a massive inquiry in Computer II was AT&T's Dataspeed 40/4 filing in November 1975. The Dataspeed 40/4 terminal was a smart remote access device that could not only transmit messages but also store, query, and examine data. Errors that were detected could be corrected locally without the need to interact with a mainframe computer. IBM, the Computer Industries Association (CIA), and the Computer and Business Equipment Manufacturers Association (CBEMA) petitioned to have the Dataspeed 40/4 tariff revisions rejected on the ground that the service constituted data processing rather than communications, and was therefore not permitted under the Computer I rules. The Common Carrier Bureau agreed that Dataspeed should be rejected because it was a data-processing service. Arguing before the F.C.C., AT&T pointed out that the Common Carrier Bureau's views would effectively remove the company "as a provider of data terminal services, whenever customers wish to update their service in order to communicate more efficiently with a computer without the need for an intervening operator."25 That is, the Common Carrier Bureau's position would have required AT&T to be technologically backward and, therefore, uncompetitive. The F.C.C. rejected the Common Carrier Bureau's recommendation, holding that Dataspeed 40/4 was primarily a communications service. The agency also recognized that the existing rules were becoming increasingly inadequate, since the capacity of terminal devices to engage in data processing had increased markedly since the Computer I rules were established. Accordingly, in 1976, during the pendency of the Dataspeed 40/4 proceeding, the F.C.C. launched its second computer inquiry. At this time, mini- and microcomputers as well as other devices that could compute at a user's premises and be readily interconnected into telephone lines had clearly rendered the old definitions and conceptions obsolete. Further complicating the issues was the ability of the common carriers to use some of their facilities to allow terminals to converse with each other. In addition, the common carriers had become capable of offering performance features that would otherwise be located in a smart terminal, including automatic call forwarding, restricted and abbreviated dialing, and special announcements. Accordingly, the F.C.C. proposed in its notice of inquiry a new set of definitions, which it hoped would make distinctions superior to those made in Computer I. A supplemental notice raised the possibility of common carriers having a data-processing subsidiary separate from the regulated entity.

After compiling a massive record and the filing of numerous corporate statements, including sharply conflicting ones from IBM and AT&T, the F.C.C. issued its tentative decision in Computer II in May 1979. Reversing the rules of Computer I, the new set of definitions recognized that technological advances had made the problem of defining the boundary between communications and data processing unworkable. The new framework focused "on the nature of various categories of services and the structure under which they are provided."26 The F.C.C.'s tentative decision employed three basic categories: voice, basic nonvoice, and enhanced nonvoice services. Voice service was defined simply as the electronic transmission of the human voice, such that one person is able to converse with another. An enhanced nonvoice service was defined as "any non-voice service which is more than the 'basic' service, where computer processing applications are used to act on the form, content, code, protocol, etc. of the inputted information." Finally, basic nonvoice service was defined as "the transmission of subscriber inputted information or data where the carrier: (a) electronically converts originating messages to signals which are compatible with a transmission medium, (b) routes those signals through the network to an appropriate destination, (c) maintains signal integrity in the presence of noise and other impairments to transmission, (d) corrects transmission errors and (e) converts the electrical signals to usable form at the destination."27 The central distinction that the basic nonvoice definition sought to convey was that the original information was not transformed in content. However, the definitions applied to basic and enhanced nonvoice services were sufficiently complex as to invite controversy and difficulty in application. Essentially, the new definitions would allow the public service companies to offer enhanced nonvoice services only through a separate subsidiary, which would lease telecommunications lines on the same terms and conditions available to information-processing firms without a common-carrier subsidiary.

While AT&T was not pleased with the F.C.C.’s new definitions or with the strict separate-subsidiary requirement, it was pleased with the F.C.C.’s discussion of the 1956 consent decree. Using Section V(g) of the decree, which permitted AT&T to provide services and products incidental to communications services, the F.C.C. decided that many enhanced nonvoice services may fall within the “incidental” category. Noting AT&T’s technological prowess, the F.C.C. observed that the public interest would not be served if AT&T had to restrict internally developed computer hardware and software to the Bell system only. Accordingly, the commission tentatively decided to permit AT&T to market such incidental products and services through a strictly separate subsidiary in situations “where market forces promise to be adequate and where full regulation is therefore not required but the offering … would be in the public interest.”28 Thus, for the first time AT&T could enter the door of the unregulated computer business.

The F.C.C. conceded that its new general definitions were subject to reevaluation if necessary, that the new AT&T rules in particular would require case-by-case analysis, and that many issues remained unresolved. For these reasons, the F.C.C. deliberately framed its decision as a tentative one and called for further comments from interested parties.

Released on May 2, 1980, the F.C.C.'s final decision hardly sounded final. At virtually every turn the divided agency promised to review and reconsider its conclusions and rules. Six separate statements accompanied the 122-page decision, and the changes and supplements to the tentative decision were substantial. Instead of three categories of service, only two remained: basic transmission service and enhanced services. Basic transmission service was "limited to the common carrier offering of transmission capacity for the movement of information." Enhanced services were defined as offerings over a telecommunications network that add computer-processing applications and "act on the content, code, protocol and other aspects of the subscriber's information."29 Conceding that its prior definitions were faulty, the F.C.C. was now satisfied that it had constructed workable categories that coincided with those used in the marketplace. It believed that an underlying carrier would now have clear guidelines on which services it could provide directly and which required a separate subsidiary. But the service distinctions, while having the merit of simplicity, would not definitively determine on which side of the boundary all of the new service offerings would fall, especially those involving information storage.

Having decided that AT&T should form separate subsidiaries, the F.C.C.’s next step was to elaborate a complicated scheme of the activities that could be conducted jointly on behalf of parent and subsidiary and those that had to be separated. The subsidiary would have its own operating, marketing, installation, and repair personnel. Certain kinds of information could be shared, other kinds could not. To assure arm’s-length dealings, the parent and subsidiary could not share space, and the implication was clear that any fraternization would be risky. The rules were so detailed and complex that they were tantamount to a deeper regulatory presence in the day-to-day operation of both businesses than had ever occurred. The final blow dealt to AT&T followed from the series of post-Carterfone interconnection decisions. Not just computer communications devices but all CPE, including the basic telephone, were to be deregulated pursuant to calendar schedules. Subscribers would now be able to own or lease any CPE—and pay for the cost of repairs. CPE would be removed from tariff regulation (detariffed) and would have to be offered through the separate subsidiary. In this way, CPE provision and transmission would be unbundled.

Almost every major participant in Computer II filed notices of appeal with the U.S. Court of Appeals for the District of Columbia Circuit. The court upheld the Computer II rules in 1982. But the story is far from finished, for the F.C.C. then had to consider the Computer II rules in view of the massive AT&T divestiture decision entered in that same year. This led to the third computer inquiry, launched in 1985. The focus now would be on the separate-subsidiary requirement and the services that should be regulated.

Computer III

Recall that the events with which we are concerned here began when Bunker-Ramo sought to introduce Telequote IV in 1965. Thirty-two years later, in 1997, the dust had not fully settled. During this period, of course, enormous technological progress occurred in both the telecommunications and the computer fields, and their integration has continued apace, in no small part because of the continuing development of integrated circuitry and semiconductors. Additionally, the AT&T breakup played a significant role and, more recently, the 1996 Telecommunications Act (which will be discussed in the next chapter) has raised still further considerations. While many factors contributed to the opening of the Computer III proceedings in August 1985, the most important was the F.C.C.'s growing doubts about the efficacy of the separate-subsidiary requirement that was the centerpiece of the Computer II rules, especially after the dramatic restructuring of the old AT&T after January 1, 1984.

In order to understand the dramatic changes that the Computer III rules have wrought, we begin with the unarguable conclusion that the old AT&T, largely through the extraordinary efforts of Bell Labs, had compiled an amazing record of scientific and technological progress. Much of it had been in the field of pure science, for which important applications were found a considerable time after the scientific discoveries. As we have seen, some of the most important developments in the computer and related fields, including the UNIX operating system, the transistor, computer movies, and a variety of programming languages, occurred in the Bell system. The 1984 breakup thrust AT&T into a new competitive environment in both long-distance and the various equipment markets. While Bell Labs remained a component of AT&T, many observers feared that the new short-run profit considerations would undermine the scientific and theoretical work of Bell Labs, the resources of which would be largely directed toward marginal product improvements. At the same time, part of Bell Labs was spun off to the regional Bell operating companies in the form of Bellcore, further eroding what most observers held to be a national treasure in the form of the old Bell Labs. If one accepted these premises—as many observers did—it became necessary to keep AT&T and the RBOCs occupied in technologically advanced markets. Impediments to their competing in leading-edge technologies and enhanced services should be removed. It is within this context that we must understand the dramatic changes in the Computer II rules that Computer III sought to bring.

AT&T and the RBOCs, not surprisingly, sought to capitalize on these anxieties in an attempt to secure relief from the Computer II structural-separation requirements, under which the separated services had to be conducted in separate physical locations and with separate computer facilities. Their argument was based on the economies-of-scope conception employed unsuccessfully in U.S. v. AT&T. The Computer II rules, they claimed, imposed unnecessary costs on them by requiring duplication of staff, facilities, hardware, and functions. Further, the transaction costs in the form of data and paperwork imposed on the arm's-length related firms could be substantially reduced if the requirement were lifted. Moreover, the rules deterred these firms from realizing their potential for innovation through sharing technological information and integrating it into novel service offerings. The rules, they argued, also denied consumers the opportunity to engage in one-source shopping. All of the foregoing, they urged, injured consumers by unnecessarily reducing their options. "All the reasons [for separation] are gone. It's time to catch up with our customers," argued Alfred C. Partoll, an AT&T executive vice president.30

The focus of the campaign was, of course, in F.C.C. proceedings. At first, the RBOCs and AT&T sought waivers from the Computer II rules, which the F.C.C. granted in virtually every instance. As this process developed, some of the commissioners felt that the waiver process was too cumbersome, slow, and uncertain. Virtually every other sector involved in the computer-communications interface opposed the waivers. When the accumulated impact of the waivers led to a call for reexamining structural separation, they generally opposed that as well.31 The opposition of every segment of the computer industry, as well as the segments of the telecommunications business that would face additional competition from relaxing the Computer II rules, has been a consistent theme from the inception of the proceedings to the present day. Additionally, one must appreciate that the information-provision business was in its infancy when the Computer III proceedings began. But even then, CompuServe, one of the oldest on-line services, vigorously fought AT&T and its progeny.32

The pressures and dissatisfactions led to the F.C.C.’s August 1985 Notice of Proposed Rulemaking. The notice was a lengthy, complex document that, in essence, criticized the separate-subsidiary requirement, asserting that it imposed unnecessary costs on the RBOCs, to the ultimate detriment of consumers. Accordingly, the agency proposed that regulatory arrangements other than separate subsidiaries would better serve the public interest. Maintaining the long-standing distinction between basic and enhanced services, the commission nevertheless observed that there could be enhanced services that could most effectively be offered when integrated into the network. Since a basic service is conceived as traditional voice telephone service and enhanced services “use the telephone network to deliver services that provide more than a basic voice transmission offering,”33 protocol conversion and voice messaging might be included within the basic-service category, even though technically they belong to the latter. Voice messaging, which was the topic of one of the AT&T Computer II waivers, allows voice messages to be stored in the network and delivered later at the caller’s or recipient’s option. Protocol conversion addresses the fact that computers and computerlike devices do not necessarily speak the same language in the way that you and I both communicate in English. Protocols are sets of standards for exchanging information between two computers or computerlike devices. Protocol conversion involves the processing that permits communication between terminals or networks with different protocols. Thus, the notice devised three categories of services that might be governed by different rules. Withal, competition was to be preferred wherever possible.
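To make the idea of protocol conversion concrete, the sketch below shows one hypothetical conversion: messages framed in a simple newline-delimited text protocol are repackaged into length-prefixed binary frames that a differently designed terminal could parse. Both framings, the field layout, and the stock-quote content are invented purely for illustration; they do not correspond to any protocol at issue in the proceedings.

```python
import struct

def to_length_prefixed(text_stream):
    """Convert newline-delimited messages into length-prefixed binary frames."""
    frames = bytearray()
    for message in text_stream.split(b"\n"):
        if not message:
            continue
        frames += struct.pack(">H", len(message))  # 2-byte big-endian length header
        frames += message
    return bytes(frames)

def from_length_prefixed(frame_stream):
    """Recover the original messages from length-prefixed frames."""
    messages, offset = [], 0
    while offset < len(frame_stream):
        (length,) = struct.unpack_from(">H", frame_stream, offset)
        offset += 2
        messages.append(frame_stream[offset:offset + length])
        offset += length
    return messages

if __name__ == "__main__":
    legacy = b"STOCK=IBM;BID=120\nSTOCK=T;BID=35\n"       # invented legacy framing
    framed = to_length_prefixed(legacy)
    assert from_length_prefixed(framed) == [b"STOCK=IBM;BID=120", b"STOCK=T;BID=35"]
```

The conversion changes only the framing, not the content, which is why the F.C.C. could plausibly treat it as close to basic transmission even though processing is involved.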

In June 1986 the F.C.C. released its lengthy Report and Order in the Computer III matter.34 The decision retained the basic/enhanced definitional distinction to determine whether a service should be regulated. Second, the agency decided that the Computer III regulatory scheme should apply only to AT&T and the RBOCs. Thus, GTE and all other telephone companies were exempted from the elaborate scheme that the decision devised. Third, the F.C.C. preempted the states from imposing their own separate-subsidiary or tariff requirements inconsistent with those on which the F.C.C. decided. Fourth, the agency decided to abandon the separate-subsidiary requirements in favor of a new set of conceptions that constituted the most important Computer III contribution to resolving the computer-communications issues. The two concepts are open network architecture (ONA) and comparably efficient interconnection (CEI). ONA was intended to allow enhanced-service providers to interconnect with the telephone network on a technically equal basis with AT&T’s and the RBOCs’ own operations. CEI requires that the telephone company offer enhanced-service providers (ESPs) use of its basic services that is as efficient as its own. If the telephone company offers an enhanced service, it must offer equal network interconnection (called collocation) to competing or other ESPs. Lists of documents containing interface information and technical characteristics must also be provided.35

The key to the F.C.C.’s view that ONA could replace the separate-subsidiary idea and prevent abuse of monopoly power was the requirement that the RBOCs unbundle their network services into individual cost-based elements that the ESPs could order as needed. The F.C.C. conceived unbundling (fragmenting) basic service “building blocks” into separate components as the centerpiece of the ONA plan. Unbundled services must be offered to ESPs on the same terms as to the RBOCs’ own operations. In its order the F.C.C. required the RBOCs to file ONA plans indicating how they would comply with the requirements. Thus, the RBOCs had to show how ESPs could purchase various switching and transmission services so that a level playing field was created for each of the enhanced services. Until the acceptance of these plans, the RBOCs had to maintain separate subsidiaries. The drastic overhaul of the Computer II structural-separation rules was based on the RBOC arguments that they were hampered in introducing new services and that many new advanced services that had not appeared in the public network, such as voice messaging, were being employed in private networks operated by large businesses. Notably, in 1987 AT&T successfully argued that because it faced considerable competition and did not control the switching facilities that an ESP had to use, it should be exempt from most ONA requirements.36

While the F.C.C. thought that it had successfully shaped policies that promoted RBOC efficiency while at the same time encouraging innovative service providers, dissatisfaction among the RBOC opponents, who sought to impose higher costs on the telephone companies, was widespread. Appeals and further proceedings were as inevitable as night following day. The RBOCs had to overcome not just this hurdle but the additional one of Judge Greene’s prohibition in the Modification of Final Judgment against providing “information services,” which partly overlapped the F.C.C.’s “enhanced services” concept. But the stakes were huge and well worth the fight. Enhanced services based on ONA were uniformly considered to be a major telecommunications growth area. Among the earlier network services on which ESPs can base offerings are automatic callback, automatic recall, calling-number identification, selective call rejection, selective call acceptance, selective call forwarding, distinctive ringing, customer-originated trace, and so on.37

While such network services have clearly offered lucrative opportunities, these pale before the possibilities raised by three other ONA technologies undergoing development and refinement. Common channel signaling (CCS) is a network architecture under which call setup and billing data are transmitted through separate network facilities from those that transmit the actual communications between two (or more) parties. The services mentioned in the last paragraph are available through CCS. But the newer innovation is nonassociated CCS, which greatly reduces call setup time and enhances network flexibility because traffic resources are not committed until signaling indicates that the call can, in fact, be completed. Since network capacity is becoming more and more taxed, especially during emergencies and through increased Internet use, the advantages of nonassociated CCS are eminently clear. Second, the integrated services digital network (ISDN) is not so much a product as a guideline for offerings that permit a single network to handle simultaneous voice, data, video, and other services, in contrast with the separate arrangements now required for such simultaneous transmission. ISDN, thus, can greatly facilitate communications between local area networks, on-line business transactions, desktop conferencing, work at home, and a host of other applications. The intelligent network (IN)—the most advanced of these network services—involves moving the software housed in every switch to fewer centralized databases. INs facilitate the rapid creation of new services because software modifications are made only to the centralized databases, rather than to every switch in the network. In addition, IN promises the eventual adoption of personalized communications—a personal telephone number wherever one happens to be located—rather than the current system of terminal-assigned telephone numbers.38

While the F.C.C. has been occupied with proceedings focused on unbundling and ONA issues with respect to these and other technologies, it has also been involved in court proceedings aimed at derailing the Computer III program and reverting to the Computer II separate-subsidiary rules. An appeal of the Computer III rules was made to the Court of Appeals for the Ninth Circuit by the California PUC and others. The court held in 1990 that the F.C.C. abused its discretion in abandoning structural separation by failing to show that (1) its new program adequately protected against cross-subsidization of enhanced services by the RBOCs, and (2) the sweeping preemption of state regulation was necessary to achieve F.C.C. goals. The preemption, in short, was too broad.39 Computer III was, therefore, remanded to the agency for further proceedings consistent with the court’s conclusion. After additional F.C.C. and court proceedings revising the original Computer III order, the commission modified but retained the basic ONA/CEI system that it originally devised. In April 1994 the agency decided to apply the framework to the GTE Corporation. But the commission received another court of appeals rebuff in the 1994 California III decision.40 Eight years after the Computer III rules were announced, the court of appeals accepted the F.C.C.’s preemption language but faulted the agency for failing to adjust its cost-benefit analysis. The court held that the agency still had not sufficiently explained its conclusion that “totally removing structural separation requirements was in the public interest given that… ONA requirements no longer called for ‘fundamental unbundling’ of the BOC networks.”41 The Ninth Circuit, one might add, is the most interventionist court in the federal system.

And so the Computer III proceedings drag on, to the benefit of the innumerable lawyers involved. But in the period from the end of the Computer II proceedings in 1980 to the present day, technology has not stood still. Developments would further integrate telecommunications and computing in ways then undreamed of. The microprocessor would spawn the personal computer (PC) revolution, which in turn would spawn the Internet, the World Wide Web, and the multimedia revolution.

The Microprocessor and the PC

How important was the invention of the microprocessor in 1971? Clearly it is one of the most important inventions of the twentieth century. Netscape Communications vice president Marc Andreessen asserts, “When we add human vision, innovation, insight, knowledge and wisdom in the form of software to the microprocessor, we can see that [the impact has been] so much more than even the wheel. And we’ve only begun to scratch the surface of what the microprocessor makes possible.”42 Perhaps this is an overstatement; only time will tell. Nevertheless, there is no question that the telecommunications-computer interface has changed dramatically since its invention. When Intel launched the 4004, the first commercial microprocessor, in 1971, only three years after the company’s founding, it boasted that the invention would usher in a new age in microelectronics. The boast, even at that primitive stage of microprocessor development, was fully justified. The 4004, costing about two hundred dollars and occupying twelve square millimeters, offered approximately the same performance that the ENIAC did in 1946. The extraordinary achievement involved incorporating two remarkable innovations. First, most of the transistors in a computer’s logic circuits were placed on a single chip. Second, the chip was programmable: it could be controlled by software and could, therefore, perform numerous functions. In the words of Intel CEO Andrew S. Grove, “Microprocessors are the brains of the computer; they calculate while memory chips merely store.”43 Microprocessors (the central processing units, or CPUs, of personal computers) have been frequently analogized to brains, the functions of which include receiving input from one’s senses through nerve connections (buses, in the case of microprocessors) and then determining a response based on the information stored in one’s memory. In the case of the microprocessor, the response is based on the control software stored in the main memory.
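Grove’s division of labor, processors calculate while memory merely stores, can be suggested with a deliberately toy fetch-and-execute loop. The miniature instruction set below (LOAD, ADD, STORE, HALT) is invented for the example and does not model the 4004 or any real Intel chip; it only shows a processor repeatedly fetching instructions and data from a passive memory, computing, and writing results back.

```python
# Toy illustration (not any real instruction set): a "processor" that only calculates,
# while a separate "memory" list merely stores the program and its data.
MEMORY = [
    ("LOAD", 10),    # copy the value at address 10 into the accumulator
    ("ADD", 11),     # add the value at address 11 to it
    ("STORE", 12),   # write the result back to address 12
    ("HALT", 0),
    None, None, None, None, None, None,
    7, 35, 0,        # addresses 10-12: two operands and a slot for the result
]

def run(memory):
    accumulator, pc = 0, 0                     # pc: which instruction to fetch next
    while True:
        op, addr = memory[pc]                  # fetch the next instruction from memory
        pc += 1
        if op == "LOAD":
            accumulator = memory[addr]
        elif op == "ADD":
            accumulator += memory[addr]
        elif op == "STORE":
            memory[addr] = accumulator
        elif op == "HALT":
            return memory

if __name__ == "__main__":
    print(run(MEMORY)[12])   # prints 42: the processor calculated, the memory stored
```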

The road from the invention of the microprocessor to its use in the personal computer and the subsequent dramatic transformation in telecommunications is a long and fascinating one.44 Every observer agrees that these developments changed the world, but from the perspective of this book, the most important focus is on how they changed telecommunications by both enlarging the uses of conventional channels and creating new ones—most significantly, the various components of the Internet. In the former case, greatly increased use of fax machines, dial-up information services, modems, telemedicine devices, and so on has dramatically increased traffic on conventional telephone lines. On the other hand, the explosion of communication through electronic mail, wireless modems, and more recently Internet telephones poses a long-term revenue threat to telephone carriers. One should appreciate, however, that the microprocessor’s impact on modern products is hardly limited to the realms of telecommunications and computing. New cars use microprocessors that monitor and control engine operations, antilock brakes, air bags, and other facets of automobile travel. Cameras, ovens, air conditioners, watches, VCRs, video games, and a host of other devices also employ microprocessors (often called microcontrollers when they are designed so that their primary function is to manage ongoing physical events rather than provide intelligence).

Considering how microprocessors have found their most important use in computers, it is surprising how long it took for that application to come to fruition. The story begins when Busicom, a Japanese calculator company, in 1969 approached Intel, then a small start-up firm, to design and manufacture a customized chip that would put all of the calculator’s functions on a single chip using the new metal-on-silicon (MOS) manufacturing technology. Instead Ted Hoff, Intel’s twelfth employee, began working on a new general-purpose calculator chip architecture. The principal difficulty was in translating the architecture into a working chip design. By October 1970 Intel’s Federico Faggin had resolved the problems, and the 4000 family of microprocessors went into prototype fabrication. In March 1971, Intel shipped the first 4000 chip sets to Busicom, and the microprocessor revolution had begun. But Intel would not have clear sailing, for in the same year Texas Instruments would also develop a single-chip microprocessor, dubbed “the calculator on a chip.”45 The showdown would occur not so much in technological innovation as in envisioning the market possibilities—with a little bit of luck thrown in.

By the middle of 1972, the electronics industry was cognizant of the microprocessor’s potential. During the 1970s, Intel and a number of competitors sold microprocessors for many embedded applications, including calculators, digital watches, home appliances, and machine tools. In the face of competition Intel developed improved models, most importantly in 1972 the 8008, the first eight-bit microprocessor. This model, in turn, was improved and simplified to create the 8080 in 1974, a microprocessor capable of addressing 64 kilobytes of memory. Intel did not then know it, but the 8080 planted the seeds of the forthcoming PC revolution. That event began quietly when Ed Roberts, an electronic hobbyist, built a PC in early 1975 using the 8080, a primitive PC by today’s standards, but one with slots for additional memory and devices. That PC, termed the Altair, sold well in the electronic hobbyist community and created a market for add-on devices. Within months competitors entered the market, and Gary Kildall had written CP/M, the first important operating system, triggering considerable applications software. Altair’s success spurred a vision among some firms that the market could reach beyond hobbyists, and in 1977 the new machines included the Apple II, built around the MOS Technology 6502 microprocessor. While Apple Computer was one of the fastest-growing American companies in 1978, cofounder Steve Jobs’s vision extended beyond satisfying the business, scientific, and enthusiast markets. In 1979 Jobs conceived the PC as a household device that could be used by the average person.

Jobs’s ambitious project was based on a visit he had made to Xerox’s PARC research center in Palo Alto. On that visit and subsequent ones, Jobs observed graphics-oriented computers with sharp display images, on-screen icon controls, and a hand-operated mouse. Overcoming opposition within the Apple organization, Jobs was intent on creating a PC incorporating the Xerox research group’s innovations with a powerful Motorola microprocessor. Continuing to release PCs without these features, Apple prepared the ground for the eventual introduction of the Macintosh on Super Sunday 1984 through earlier moves, including the introduction of a printer in 1979; the establishment of a school educational program to overcome computer fear; the supplying of applications software (made by other firms) for people who could not write their own programs; the use of interface cards allowing the computer to be linked to scientific and technical instruments; the addition of a hard disk; and the production of various peripheral devices, including modems for the transmission of computer-generated information. This last was in response to a 1979 F.C.C. announcement encouraging the use of PCs for electronic mail.46

As important as Apple’s innovations were, in some ways they were overshadowed by IBM’s introduction of the 5150 PC in August 1981. Featuring a 4.77 MHz Intel 8088 microprocessor, 64 kilobytes of random access memory, and Microsoft’s MS-DOS operating system, it sold for approximately three thousand dollars.47 More importantly, the entry of IBM, which dominated the world mainframe market, into the PC arena legitimized the new industry for much of the world. Many observers confidently expected IBM to soon dominate the new market as it had dominated most other markets that it had entered since the company’s founding. IBM, it should be noted, had failed in earlier attempts in the 1960s and 1970s to sell low-end computers, but Apple’s successes persuaded IBM executives that the time was now ripe. IBM selected the Intel chip over other microprocessors not only because the 8088 was perceived to be the superior microprocessor, but also because Intel produced support chips and made a long-term commitment to the 8086 microprocessor family (of which the 8088 was a version). Notwithstanding the enormous introductory success of the IBM PC, IBM made a strategic error from its perspective, although not from the consumer’s. IBM decided that its PC should be open, allowing anyone to design hardware or software for it. Its purpose was to encourage hardware and software competition that would provide the best possible products. This led to the development of an enormous clone market, and it allowed Intel and Microsoft to become economic powerhouses as these firms supplied the microprocessors and operating systems for the exploding PC market.

From the perspective of telecommunications, the competition expanded the developments in which Apple led the way. Since almost any kind of information could be transformed into digital form, computers and peripherals could generate information, transmit it through transmission channels, modulate it (convert digital signals to analog form), and demodulate it (the reverse process). The information content could be not only data—the traditional content—but also voice, still pictures, motion images, music, text, and so on. The computer, in a word, was on its way to transforming discrete markets into a single one, the product of which is information. Telecommunications was in the process of being transformed into hypercommunications, as any digital device or source convertible into digital information could create a market for a computer peripheral. Thus, photo, audio, and video discs, or any other information-storage device, and sound and television broadcasts could theoretically be input into and transformed by a PC. Moreover, two or more of these media could be brought into play at once—the so-called multimedia revolution. Lighting, mechanical animation, and sound could also be added. Thus, the PC was not simply a smaller computer but was the vehicle of a revolutionary transformation in telecommunications.48

All of these developments, of course, took time to unfold; indeed, they are still unfolding. Two significant steps were required to point the way: networking and perfecting the modem. Xerox once again led the way in networking in the early 1970s, allowing local resources to be linked to serve work groups by connecting machines more reliably and faster than before. The development of such networking would obviously have wider implications for communications outside the local area network. In 1980 Intel, Digital Equipment Corporation (DEC), and Xerox joined hands to standardize Ethernet, which would become the dominant PC networking technology. In this way PCs within a limited area can exchange information, share expensive peripherals, and draw on the resources of a vast storage unit called a file server. The first Ethernet standard, and subsequent ones designed for faster communication, received the imprimatur of the prestigious Institute of Electrical and Electronics Engineers (IEEE) and have become widely used in large organizations. Networking through Ethernet and its competitors, in short, not only encouraged the wider deployment of PCs, but also fostered the development of communications between computers.49

The second important device that enlarged the use of computer communications was the modem. In the early 1960s, Bell Labs engineers invented a method to convert the computer’s digital data into a form that could be carried over ordinary telephone lines. Their device, the modem (modulator-demodulator), converted data into a series of tones that travel over telephone lines to another modem where they are reconverted into digital data. The modem allowed businesses to avoid expensive, specialized leased lines to carry data—they could now use ordinary telephone lines. But several entrepreneurs saw the modem in a new way—the device that would permit PC users to communicate with each other and bring a vast amount of information into the home through the PC. “Communications will break open the home market,” said Michael Preston, a New York microcomputer securities analyst.50 As database services such as CompuServe, The Source, and Dow Jones News Retrieval sprang up in the early 1980s, they encouraged start-up modem manufacturers, most importantly Hayes Microcomputer Products, to fashion faster, more accurate, and cheaper modems for the PC market.51 But it was once again IBM’s endorsement of the information-service market through a joint venture with Sears in 1988 called Prodigy that marked the arrival of the modem. Prodigy, unlike its predecessors, would use colorful graphics that required faster modems than the 300 bits per second then prevalent for text-only services. Prodigy teamed up with Hayes, which then produced the Hayes Personal Modem 1200, designed specifically for the home market.52 Prodigy and Hayes would eventually fall on hard times, but the launching of the new service would forever reshape the marriage of computers and telecommunications. A little-known government agency, the Advanced Research Projects Agency (ARPA), would trigger the next phase.
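The modulation and demodulation that a modem performs can be suggested in a few lines. The sketch below maps each bit to one of two audio tones (the 1,070 and 1,270 Hz “space” and “mark” frequencies associated with early 300-bit-per-second modems) and recovers the bits by asking which tone carries more energy in each bit-sized window. The sampling rate and window length are chosen purely for convenience, and a real modem adds framing, error handling, and far more sophisticated signal processing.

```python
import cmath
import math

SAMPLE_RATE = 9600                  # samples per second (assumed for the sketch)
BAUD = 300                          # bits per second, as in early low-speed modems
SAMPLES_PER_BIT = SAMPLE_RATE // BAUD
FREQ = {0: 1070.0, 1: 1270.0}       # "space" and "mark" tones in hertz

def modulate(bits):
    """Turn a bit sequence into audio samples, one tone per bit."""
    samples = []
    for bit in bits:
        f = FREQ[bit]
        for n in range(SAMPLES_PER_BIT):
            samples.append(math.sin(2 * math.pi * f * n / SAMPLE_RATE))
    return samples

def demodulate(samples):
    """Recover bits by asking which tone carries more energy in each bit window."""
    bits = []
    for i in range(0, len(samples), SAMPLES_PER_BIT):
        window = samples[i:i + SAMPLES_PER_BIT]
        def energy(f):
            return abs(sum(s * cmath.exp(-2j * math.pi * f * n / SAMPLE_RATE)
                           for n, s in enumerate(window)))
        bits.append(0 if energy(FREQ[0]) > energy(FREQ[1]) else 1)
    return bits

if __name__ == "__main__":
    message = [1, 0, 1, 1, 0, 0, 1, 0]
    assert demodulate(modulate(message)) == message   # tones survive the round trip
```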

The Internet

In contrast to the telephone network, the structure of which was framed largely by commercial considerations, the Internet network (hereafter the Internet) initially developed largely because of defense considerations and the efforts of the United States Defense Department. Later, universities came to play a major role in shaping civilian Internet applications, but even then the initial defense phase stamped the shape of the Internet. Only since the development of information services such as CompuServe and, more recently, the growth of the World Wide Web have commercial considerations played a considerable role. But to this day the defense phase determines the basic structure, which is far more decentralized than the telephone network. Because of this decentralization, the Internet is composed of a variety of very different services—electronic mail, Gopher, UseNet, the World Wide Web, and so on. Many people are content to use but one of these services (usually electronic mail), while others will use several. Because of its variety of services, the Internet is difficult to define. Perhaps the best definition of the Internet is “a worldwide network of computer networks.”53 Another definition is, “It’s a network of networks, all freely exchanging information.”54

The origins and underlying purpose provide a better sense of what became the Internet than these definitions.55 In 1957 the Soviet Union launched Sputnik, the first artificial earth satellite. In panicked response, the government established ARPA within the Defense Department to work on highly sophisticated defense-related projects. One of the key problems on which ARPA focused was maintaining the defense communications system in the event of hostilities. The great fear was that the telecommunications network (essentially the AT&T network) could easily be disabled in case of an enemy strategic attack. The network was based on a hierarchical system of centralized switches, so that a concerted attack on high-level switches would completely disable the network. Unless a substitute telecommunications system could be found, the entire command and control system that a modern military establishment requires would collapse. The development of such a system, therefore, became a high priority, not only within ARPA but also among military contractors such as the Rand Corporation, where Paul Baran, who more than anyone else was responsible for the eventual shape of the Internet, became interested in the survivability of communications systems under nuclear attack.

Baran, basing his ideas on the ability of the human brain to bypass dysfunctional regions to send messages, proposed in 1962 a twofold solution to the problem. First, he proposed a distributed network instead of a centralized one. Under this concept the network would be composed of many nodes without central command points but in which all surviving points would be able to reestablish contact in the event of attack on any one or several points (see Figure 8.1). He devised the notion of redundancy—the number of interconnections between neighboring nodes. A redundancy of one meant that there was only a single link to a node, implying a low probability of survival in case of an attack. The concept of redundancy, therefore, sharply focused on the issue of comparing redundancy levels with the probability of survival. From our contemporary perspective, however, the distributed network idea was the germ of the highly fragmented nature of the Internet and of its architecture, which provides many paths to information when any one path is blocked or busy. It is also one of the reasons that there is no huge international firm providing connections to the various Internet users throughout the world. As the Internet has grown and the issue of survival has receded in importance, there are, of course, hierarchy and backbones (networks through which other networks are connected), but the distributed network idea is still the heart of the Internet.
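Baran’s redundancy idea lends itself to a small thought experiment. The sketch below builds a hypothetical six-node distributed network, reports its redundancy level (the average number of links per node), and checks whether the surviving nodes can still reach one another after an “attack” removes two of them. The topology and the attack are invented purely to illustrate the concept.

```python
import random

# A toy distributed network: each node is linked to several neighbors,
# so no single node acts as a central switching point (hypothetical topology).
NETWORK = {
    "A": {"B", "C"}, "B": {"A", "C", "D"}, "C": {"A", "B", "E"},
    "D": {"B", "E", "F"}, "E": {"C", "D", "F"}, "F": {"D", "E"},
}

def redundancy(net):
    """Baran's redundancy level: the average number of links per node."""
    return sum(len(neighbors) for neighbors in net.values()) / len(net)

def still_connected(net, destroyed):
    """True if every surviving node can still reach every other surviving node."""
    survivors = set(net) - destroyed
    if not survivors:
        return False
    start = next(iter(survivors))
    reached, frontier = {start}, [start]
    while frontier:
        node = frontier.pop()
        for neighbor in net[node] - destroyed:
            if neighbor not in reached:
                reached.add(neighbor)
                frontier.append(neighbor)
    return reached == survivors

if __name__ == "__main__":
    print("redundancy level:", redundancy(NETWORK))
    hits = random.sample(sorted(NETWORK), 2)       # simulate an attack on two nodes
    print("destroyed:", hits, "network survives:", still_connected(NETWORK, set(hits)))
```

Raising the redundancy level, adding more links per node, makes it more likely that the connectivity check still succeeds after an attack, which is exactly the trade-off Baran was quantifying.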


Figure 8.1 Network Types

Baran’s second extraordinary idea was packet switching. In packet switching a digital or digitized analog message is broken into smaller packets. Each packet is appended with the destination address of the entire message as well as the sequence number of each packet so that it can be reassembled at the destination in the same sequence in which it was sent. In this way each packet can be sent over a different route, depending on availability. The destination computer holds the packets until all arrive. The sequence in which they arrive, thus, becomes irrelevant. Packet switching also allows the receiving computer to send a message back so that lost packets can be identified and re-sent. Notwithstanding AT&T’s skepticism about the feasibility of packet switching, ARPA produced a design paper in 1967 on a proposed Arpanet. By 1969 the Defense Department was prepared to establish a network linking university sites that were engaged in defense-contract work. By 1971 fifteen nodes (with twenty-three hosts) were connected to the Arpanet. In this way, the forerunner to the Internet was born. At this stage no one saw these networks as a threat to traditional telephone networks.
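The mechanics just described can be captured in a short sketch: a message is split into numbered packets carrying the destination address, the packets may arrive in any order (or not at all), and the receiving side either reassembles the message or names the sequence numbers it needs re-sent. The packet size, field names, and sample message are arbitrary choices for the illustration.

```python
import random

PACKET_SIZE = 8   # bytes of payload per packet (arbitrary for the sketch)

def packetize(message, destination):
    """Break a message into packets, each carrying the destination and a sequence number."""
    chunks = [message[i:i + PACKET_SIZE] for i in range(0, len(message), PACKET_SIZE)]
    return [{"dest": destination, "seq": n, "total": len(chunks), "data": chunk}
            for n, chunk in enumerate(chunks)]

def reassemble(received):
    """Hold packets until all have arrived, then restore the original order."""
    total = received[0]["total"]
    by_seq = {p["seq"]: p["data"] for p in received}
    missing = [n for n in range(total) if n not in by_seq]
    if missing:
        return None, missing          # ask the sender to retransmit these
    return b"".join(by_seq[n] for n in range(total)), []

if __name__ == "__main__":
    packets = packetize(b"Packets may each travel a different route.", "node-17")
    random.shuffle(packets)                    # packets arrive out of order
    _, missing = reassemble(packets[1:])       # pretend one packet was lost en route
    print("retransmit sequence numbers:", missing)
    message, _ = reassemble(packets)           # once all packets are in hand
    print(message.decode())
```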

The next important step, the development of electronic mail (E-mail), which now constitutes a major threat to telephone company revenues, came about almost casually as a minor byproduct of the scientific and technical work of the few persons with ready access to the Arpanet.56 J.C.R. Licklider, one of ARPA’s top officials, had been an advocate of humanizing the computer since he became associated with the project in 1962. Ray Tomlinson, an engineer at an ARPA contractor, sent the first E-mail in 1971 to himself. The second message, sent out to others, announced E-mail’s availability and provided instructions on how to address users on other machines. In July 1972 Abhay Bhushan, a programmer, suggested a way of getting E-mail to run on the ARPA network by using the file transfer protocol (the rules governing the error-free transmission of program and data files). That was only the beginning, as later programmers developed software that allowed subject indexing, deleting, message forwarding, and so on. Because E-mail rapidly became popular among the members of the Arpanet community, other researchers wrote programs that made it more and more user-friendly. Comparisons to the telephone were inevitable, and E-mail’s advocates argued that “among the advantages of the network message services over the telephone were … the message services produced a preservable record, and that the sender and receiver did not have to be available at the same time.”57

The inevitable next step was the development of mailing lists directed to persons on the Arpanet with similar interests. Most of the early subjects were, of course, scientific in nature. But as E-mail’s popularity increased, topics such as wine tasting, science fiction, and so on proliferated. At first system administrators cautioned against the overuse of such lists, but they were overwhelmed by the flood. This eventually led to the development of UseNet in 1979, when two Duke University graduate students came up with the idea of distributing information of interest to UNIX operating system users. In 1981 a Berkeley graduate student and a nearby high school student added features that could handle a far larger number of postings than the Duke students’ small-scale plan had envisioned. The idea spread like wildfire, eventually leading to the thousands of newsgroups that flourish today.

The necessary prelude to the proliferation and enlargement of networks beyond the limited community of defense-related researchers was the establishment of protocols that linked the various machines and networks. The problem was acute in the military itself because the army, air force, and navy had accepted bids on very different computers. How could the army’s DEC computers speak to the air force’s IBMs and the navy’s Unisys? The Defense Department commissioned a project that would link different networks designed by different suppliers into a network of networks—an Internet. Work begun in the 1970s resulted in the transmission control protocol (TCP) and the Internet protocol (IP), commonly known as TCP/IP, becoming established in 1982. TCP is responsible for verifying the correct delivery of data from client to server and retransmitting until the correct data is completely received. IP moves packets of data from node to node based on a destination address.58 TCP/IP thus permitted the creation of the Internet from the variety of networks created in the wake of Arpanet, the most important of which were UseNet, CSnet, and Bitnet. Bitnet, established in 1981, was a network of cooperating universities that provided E-mail between persons associated with the universities. CSnet (computer science network), established in the same year through seed money provided by the National Science Foundation, provided networking services to scientists with no Arpanet access.
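The division of labor between IP (moving packets to a destination address) and TCP (guaranteeing complete, ordered delivery) is what ordinary socket programming exposes today. The minimal sketch below opens a TCP connection over the local loopback address and sends a line of text; the port number is an arbitrary assumption, and a real application would add error handling.

```python
import socket
import threading

PORT = 5050   # an arbitrary, assumed-free local port for the demonstration

def run_server(ready):
    """A tiny TCP server: IP routes the packets here, TCP hands us the bytes in order."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)   # SOCK_STREAM selects TCP
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", PORT))
    srv.listen(1)
    ready.set()                          # tell the client the server is listening
    conn, _ = srv.accept()
    data = b""
    while True:
        chunk = conn.recv(1024)
        if not chunk:                    # an empty read means the sender closed the connection
            break
        data += chunk
    print("server received:", data.decode())
    conn.close()
    srv.close()

if __name__ == "__main__":
    ready = threading.Event()
    server = threading.Thread(target=run_server, args=(ready,))
    server.start()
    ready.wait()
    client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    client.connect(("127.0.0.1", PORT))  # the IP address and port identify the destination
    client.sendall(b"TCP delivers this text complete and in order, however IP routes it.")
    client.close()
    server.join()
```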

Slowly, then, the Internet was being formed, expanding from a small community of defense scientists and engineers to the larger university community. Numerous other nets were formed in the 1980s, the most important of which was NSFnet, created in 1986 with five supercomputer centers to provide widespread access to high-powered computing.

The next significant development requires us to shift focus to Switzerland. This was the 1992 release of the World Wide Web (WWW) by CERN, the European particle physics laboratory located near Geneva. Prior to the WWW, there had been few commercial Internet activities, the most important of which were the 1990 relay between MCI Mail and the Internet through the clearinghouse for Networked Information, and the 1991 establishment of the Commercial Internet Exchange to facilitate packet exchange among commercial service providers. To appreciate the enormous role the WWW has played in spawning commercial use of the Internet, consider only that the number of hosts grew from approximately 376,000 in January 1991, the year before the WWW release, to almost thirteen million in 1996.59 Much of that explosive growth has been spurred by the attractiveness of the WWW. Innumerable companies and other organizations have created Web sites, in many cases very complex ones; Netscape and Microsoft have created Web browsers; and DEC, Sun Microsystems, and others have established search engines that make it far easier to find information. By 1994 the WWW had become the second most popular service, after E-mail, on the Internet.

The heart of the WWW is hypertext, by which information is organized as a series of documents with links for search and retrieval of text, images, sound, and video.60 Thus, hypertext incorporates multimedia as part of its basic conception. Hypertext is also nonsequential. In contrast to a book, where page three is necessarily read after page two, hypertext allows the reader to choose his or her own options in the sequence desired. For example, you may choose option D, then B, skipping C and E entirely. Another person’s options and sequence may be different. The central idea of hypertext can be traced to Vannevar Bush, who feared in 1945 that we would be drowned in an explosion of information and sought ways in which to make it more accessible. One answer was hypertext, which Bush called “associative trails.” In the early 1980s, physicist Tim Berners-Lee, working at CERN, began his project on hypertext links. In 1989 he proposed such a program, but it was initially met with skepticism on the ground that a hypertext program would be too complicated. Gradually he won converts, and in 1990 he wrote the first Web browser and Web server programs. By the end of 1990 the CERN phone book became the first hypertext file. When MIT and CERN signed a pact in July 1994 to further develop and standardize WWW software, the new service attained the respectability that would gain it worldwide success.
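The nonsequential character of hypertext is easy to model. In the sketch below, each “page” is just text plus named links to other pages, and two hypothetical readers starting from the same page follow the links in different orders and therefore traverse different trails. The page names and link labels are invented for the example.

```python
# Hypothetical documents: each page is plain text plus named links to other pages.
PAGES = {
    "home": {"text": "Welcome.",              "links": {"physics": "cern", "history": "bush"}},
    "cern": {"text": "Particle physics lab.", "links": {"web": "www", "back": "home"}},
    "bush": {"text": "Associative trails.",   "links": {"web": "www", "back": "home"}},
    "www":  {"text": "A web of documents.",   "links": {"back": "home"}},
}

def browse(start, choices):
    """Follow links in whatever order the reader chooses: hypertext is nonsequential."""
    page, trail = start, [start]
    for choice in choices:
        page = PAGES[page]["links"][choice]
        trail.append(page)
    return trail

if __name__ == "__main__":
    # One reader jumps straight to the Web page; another detours through history first.
    print(browse("home", ["physics", "web"]))
    print(browse("home", ["history", "web", "back"]))
```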

The WWW, unlike the other Internet services, has become a commercial success. Webmasters have proliferated, designing extraordinarily attractive Web sites through which small and large companies not only provide information but also sell products and services. In every city and town, large and small firms have become Internet providers, allowing a user paying a monthly fee to access the Web and enjoy a sophisticated E-mail service like Qualcomm’s Eudora. Software and hardware manufacturers have produced innumerable products, including high-quality graphics cards and sound cards. Nevertheless, one company above all others stands out for making the WWW as popular as it has become. That is Netscape Communications, which provides the Navigator, the most popular family of browsers, and the Netsite Commerce Server, a purportedly secure method of paying for goods and services. Founded in April 1994, the company provided the earlier versions of Navigator free in order to become the industry leader. While it has been challenged by Microsoft’s Explorer browser, there is no question that its ease of use has been a valuable instrument in making large numbers of people comfortable in searching the WWW. The combination of easy-to-use browsers and inexpensive access through the fierce competition among Internet providers has been instrumental in the vast expansion of Internet use.

This same expansion has in one way benefited the telephone companies through vastly increased use, as many people now commonly stay connected to the WWW all evening. But lurking in the background is a threat, perhaps more serious than E-mail. For not only are text, data, and attractive graphics among the multimedia features of the WWW; so, potentially, is voice-grade telephony, especially for long-distance calls. Internet technology readily permits voice transmission.61 Users must have a microphone, speakers, a plug-in card that converts speech to data and back, and phone software stored in their PCs, and both parties must agree to make the call at the same time. The voice is encoded to a file using software developed for this purpose and then decoded back to sound at the other end. Often the software is provided free over the WWW or by file transfer protocol. Since the call is almost free, Internet telephony, although in its infancy, is viewed by telephone companies as a very serious threat. One estimate is that there will be sixteen million Internet telephone users by 1999. Accordingly, the American Carriers Telecommunications Association (ACTA) filed a petition with the F.C.C. seeking to ban the sale of Internet telephone software. Local telephone companies are also seeking to make Internet providers, now exempt from paying access charges under F.C.C. rules, begin paying.62 While it is not clear from the present perspective whether Paul Baran’s or Alexander Graham Bell’s and Theodore Vail’s network vision—or both—will prevail, the impact of the computer on communications is only beginning. From the perspective of any firm, this provides one more reason to cover all bases, either through internal expansion or through links with others.
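The encode-and-decode step at the heart of Internet telephony can be suggested with a companding sketch loosely modeled on the mu-law idea used in telephone voice coding: each audio sample is compressed into a single byte before transmission and expanded back to an approximate sample at the far end. The constants and rounding scheme here are simplified for illustration and do not reproduce the exact G.711 standard or any particular Internet telephone product.

```python
import math

MU = 255.0   # the companding constant associated with North American mu-law coding

def encode(sample):
    """Compress one audio sample in the range -1.0..1.0 into a single byte."""
    compressed = math.copysign(math.log1p(MU * abs(sample)) / math.log1p(MU), sample)
    return int(round((compressed + 1.0) / 2.0 * 255))

def decode(byte):
    """Expand a received byte back into an approximate audio sample."""
    compressed = byte / 255 * 2.0 - 1.0
    return math.copysign((math.pow(1.0 + MU, abs(compressed)) - 1.0) / MU, compressed)

if __name__ == "__main__":
    # A short burst of a 400 Hz tone sampled 8,000 times per second.
    voice = [math.sin(2 * math.pi * 400 * n / 8000) for n in range(40)]
    wire = bytes(encode(s) for s in voice)           # what actually crosses the network
    restored = [decode(b) for b in wire]
    print(max(abs(a - b) for a, b in zip(voice, restored)))   # small reconstruction error
```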

_______________

* In key telephone systems (KTSs) a number of telephone lines are connected to each telephone set in a system, and a line is selected by pushing a button corresponding to one line.
