3


Honest Policy Wonks

Well, I will be offering—I’ll be offering my vision when my campaign begins. And it will be comprehensive and sweeping. And I hope that it will be compelling enough to draw people toward it. I feel that it will be. But it will emerge from my dialogue with the American people. I’ve traveled to every part of this country during the last six years. During my service in the United States Congress, I took the initiative in creating the Internet. I took the initiative in moving forward a whole range of initiatives that have proven to be important to our country’s economic growth and environmental protection, improvements in our educational system. During a quarter century of public service, including most of it long before I came into my current job, I have worked to try to improve the quality of life in our country and in our world. And what I’ve seen during that experience is an emerging future that’s very exciting, about which I’m very optimistic, and toward which I want to lead.

Al Gore, March 9, 1999.1

What did Al Gore invent and when did he invent it? The topic became a guaranteed laugh line during the presidential campaign of 2000. The joke was so well known that even the candidate could poke fun at himself and expect a response. On September 26, 2000, with the election less than six weeks away, Gore participated in a town hall meeting as part of MTV’s Choose or Lose series. Students attended the event at the Media Union at the University of Michigan. Gore quipped, “I invented the environment.” The friendly audience erupted in laughter.2


FIGURE 3.1 Al Gore, US senator and vice president (photo from January 1, 1994)


FIGURE 3.2 Rick Boucher, representative from Virginia’s 9th District, 1983–2011 (official 109th Congress photo, taken July 28, 2007)

How did a senator’s involvement in Internet policy in the 1980s evolve into a joke? What distance lay between joke and fact, and what did that distance say about the development of the Internet under government sponsorship?

One element of this situation appears in the quote at the start of this chapter. The interview occurred in March 1999, well before the presidential race heated up. Out of all the muddled sentences, one would eventually garner the most attention: I took the initiative in creating the Internet.

Initially the interview received little public attention, but one reporter made a story of it, and political insiders used it as fodder for a few jokes.3 Senate majority leader Trent Lott, no political ally of Gore’s, issued a press release ridiculing the interview. Lott sarcastically claimed to have invented the paper clip, adding cheekily that “paper clips binded the nation together.” Former vice president Dan Quayle, poking fun at his own less-than-stellar orthographical reputation, said, “If Gore invented the Internet, then I invented Spell-Check.”4

Professional advisors to politicians stress the importance of controlling the message. In this instance, Gore gradually lost control. He fought back with quips5 and self-deprecating humor,6 but neither slowed the onslaught of mockery. Other public figures picked up the meme about invention, and gradually it gained traction as a source of humor. With frequent retelling, Gore’s awkward phrase about creation morphed into a ripe and exaggerated claim about invention. For example, late night television entertainers Jay Leno and David Letterman worked humor into their monologues about other things Gore invented. In Letterman’s case, his writers also created a “Top Ten List” of things invented by Gore. Journalists eventually began to quote the exaggerated claim as if it were fact, and George W. Bush campaign television commercials used it as well.7 By the end of his presidential campaign most audiences assumed Gore had not merely fumbled over his words in a television interview, but had actually claimed to be the inventor of the Internet.

What does the evolution of this joke illustrate? For reasons explained in later chapters, by the time Gore ran for president, the origins of the Internet had become buried in all the flotsam and jetsam the blossoming of the commercial Internet generated. While it was not readily apparent to the man on the street who had invented what and when, it also was obvious that no single individual could take credit for inventing the Internet. Such a claim came across as absurdly funny, and from a politician it came across as an egotistical fantasy, the delusions of someone who overestimates their own power and significance. More to the point, it was a powerful slogan, and persuasive to the uninformed.

Perhaps more remarkable, except to a cynic about public discourse, once the joke about Al Gore’s exaggerated claims gained credence, the actual words of the interview and the actual facts of the Internet’s invention did not play a role in popular conversation. That observation raises a deeper question: If the actual facts had been widely known, would they have helped or hurt Gore? The answer is not so clear. Looking closely, more accuracy might have suggested Gore had had good intentions, but it also might have suggested an ambiguous interpretation of Gore’s actions. That ambiguity also touches on a theme in this chapter, so it merits understanding.

Gore did have a record to stand on, and that seemed to work in his favor. Although congressional records are typically pockmarked by many more symbolic initiatives than actual accomplishments, Gore’s interest in the Internet was not in dispute, and neither were his legislative accomplishments. As early as 1986, for example, then-Senator Gore made speeches on the Senate floor supporting basic research in computer networking. He compared it with public support for highways. He referred to the information superhighway, a metaphor Gore would use frequently over the next few years. For example, inserted into the congressional record is the following from 1986 (excerpted):

In order to cope with the explosion of computer use in the country, we must look to new ways to advance the state-of-the-art in telecommunications—new ways to increase the speed and quality of the data transmission. Without these improvements, the telecommunications networks face data bottlenecks like those we face every day in our crowded highways.

The private sector is already aware of the need to evaluate and adopt new technologies. One promising technology is the development of fiber optic systems for voice and data transmission. Eventually we will see a system of fiber optic systems being installed nationwide.

America’s highways transport people and materials across the country. Federal freeways connect with state highways which connect in turn with county roads and city streets. To transport data and ideas, we will need telecommunications highway connecting users coast to coast, state to state, city to city.8

Gore eventually would become chairman of the Senate Subcommittee on Science, Technology, and Space. He would conduct hearings in 1989 about upgrading the Internet backbone. Those hearings, in turn, would lead Gore to sponsor a piece of legislation in November of 1991: the High-Performance Computing Act of 1991, passed by both houses and signed by President Bush.9 The act provided a framework for funding the National Science Foundation’s efforts to upgrade the Internet backbone and support use of supercomputers in universities. These efforts directly produced a key component of the Internet, its first long-distance high-speed lines. It also indirectly funded other inventions discussed in the next chapter, the NCSA (National Center for Supercomputing Applications) Mosaic browser and the inventions that would become the most popular web server. As several later chapters describe, these inventions became essential for catalyzing growth of electronic commerce.

Gore’s record also was not in dispute a decade later. Not many other contemporary national politicians in the 1980s had shared Gore’s interest in large-scale networking. It gave Gore a distinctive legacy.10

Why all the fuss over such a legacy? Political opponents sought to undermine any gains Gore might accrue from having such distinctive tastes. Of course, part of the motivation arose from naked political desires to have George W. Bush elected president. In addition, the debate about Gore’s record also served as a proxy war for a big and long-standing ideological fight about the role of the government in subsidizing invention and technical advance, particularly when many aspects of that advance, such as its market value, could not be predicted or anticipated with certainty. The computer industry offered one of the best cases for such government involvement in an industry that faced unpredictability about its value. Hence, undermining the view that government played a salutary role in this specific industry also undermined the broader assertions about government’s positive role.

Long before the actions of the NSFNET grabbed the attention of the US electorate, the US computer industry and federal government coexisted in a mutually beneficial relationship. The US computer industry employed many inventors, as did many US research laboratories and universities. The US computer industry commercialized those inventions, and the federal government bought frontier computers and applications at uncommonly high prices. NASA, the Department of Defense, the US Census, the Federal Aviation Administration, and the National Security Agency all had served as the lead buyers for frontier computing of the 1950s, 1960s, and 1970s. Both sides could see plenty of benefit from the relationship. Through its procurement of frontier computing, the US government indirectly subsidized a part of the innovation in the US computing industry. US agencies gained advantages in new capabilities, new military strategies, and new types of information.11

US firms gained from the government’s demand for frontier computing. This demand also gave them a leg up in learning how to meet the private demand that arose not long after government demand. The learning could have great strategic importance, because private demand tended to grow exponentially, eventually overwhelming any government demand. The pattern had played itself out in numerous technologies: timesharing, client/server computing, graphics, LANs, workstations, graphical user interfaces, VLSI design, RISC processors, relational databases, parallel databases, data mining, parallel computing, RAID/disk servers, portable communications, speech recognition, and broadband.12

By the time Gore ran for president, the Internet appeared to be yet another example in a long string of examples where government subsidies played a role in fostering frontier developments—except that this one differed in an important respect. In a short time—by 1999—it was obvious that nobody a decade earlier, not even the most optimistic visionary, could have forecasted the direction the Internet took in commercial markets. After privatization, too many contingencies and surprises shaped outcomes, making the ultimate economic outcome fundamentally unknowable in advance. It strained credibility to believe one politician or one government policy had had much foresight about the specific consequences that would result from subsidizing invention of the Internet. Any statement suggesting such foresight invited ridicule.

This chapter begins to set the record straight. What actions had government managers taken, and when had they taken them? Privatization involved more than just privatizing the Internet backbone, as this chapter describes. The actual record contains complications and requires a nuanced understanding. Broadening use of the Internet occurred with the blessing of government managers, but certainly not with complete foresight about the consequences of their policies.

The Symbiosis between Government and Industry

The happy symbiosis in computing contrasted with the results from many other federal programs to subsidize large-scale innovative activities. The list of unsatisfying experiences with federal subsidy of technology was long. The supersonic transport, the communication satellite, the space shuttle, the breeder reactor, the synthetic fuels program, and the photovoltaics commercialization program, to name a few, had been funded with high hopes and big promises.13 While large-scale innovative activities funded at the federal level might yield big scientific demonstrations, the projects yielded mixed results for commercial success.

The details differed, but the general character of the problem did not; something disorderly always emerged. Once the industry became dependent on funding from the federal government, it became difficult to untie the relationship, and attempts to do so came after enormous political efforts. Sometimes simply managing the relationship placed unusually large strains on all parties. The profits and stockholder value of commercial firms became tied to the nuances of policy decisions in direct ways. The employment of workers in politicians’ districts depended on whether technological projects reached intermediate bureaucratic benchmarks. That induced wrangling and negotiating over the use of public dollars and assets, not to mention self-dealing in the setting and design of benchmarks for policy decisions, which could not be divorced from the self-interested actions of firms and employees. Scandals and the hint of corruption made headlines, often overshadowing good intentions and technical accomplishments.

Those unsatisfying experiences framed many questions prior to the privatization of the Internet. Would privatization of the National Science Foundation’s (NSF’s) backbone resemble most of the experiences in computing, helping the industry generally, or would it lead to a disorderly process, as occurred in other large-scale technologies when government handed assets to private firms? As it would turn out, events would contain a bit of everything. Privatization initially involved disorderliness and eventually gave industry a tremendous boost.

Disorderliness during the privatization of the NSFNET did not arise from lack of planning or poor engineering. The planning itself was orderly: starting with the meetings in 1989, the National Science Foundation conducted robust conversations with a wide variety of stakeholders, such as the IETF and regional network operators.14 The engineering was orderly too: throughout the first half of the 1990s, users woke up every day, logged on, sent electronic mail, and the Internet continued to work well.

The disarray and confusion that did arise came from a combination of political, administrative, and economic issues. As this chapter explains, extracting the NSF from the Internet involved greater complications than anyone in NSF management had anticipated. By mid-1992 it was clear that Stephen Wolff, who managed the NSFNET program at the NSF, could not merely turn off the NSFNET or just hand it over to IBM, MCI, or any other organization operating the Internet backbone. Several other firms also had staked out commercial positions, and they did not want the transition to provide IBM or any other insider with a competitive advantage in the soon-to-arrive competitive world where the NSF no longer played a role.

As it turned out, the resolution of these issues changed the plans for privatization. Those changes left irreversible marks on the structure of the commercial industry, and generally for the better. The chapter provides the long explanation for how that happened, but a short summary will give a sense of where the detail leads. Stated succinctly, after taking some questionable actions that received considerable scrutiny, Wolff listened to the advice of many honest policy wonks and put their advice into practice.

What was an honest policy wonk? Describing an honest policy wonk is a bit like describing an elephant for someone who has never seen one. A description in words will sound odd. An honest policy wonk was someone who worked in government and sincerely aspired to make policy decisions as part of serving a reasoned conception of the public interest. It was easier to say what an honest policy wonk was not: one could not be bribed; did not suppress information, even to save someone in a powerful position from embarrassment; did not decide new rules behind closed doors after consulting with a few executives; did not toe a demagogic line merely because the party in power desired it; did not abandon ideals or dreams for mere material reward or short-term payoff.

Most honest policy wonks arose in less-than-glamorous places within government service: at the FCC, at the NSF, at DARPA, and elsewhere within the broader technology policy community in Washington, DC. The Internet inspired honest policy wonks. It appealed to the utopian ideal of a global communication network. Many could speak of a vision, working for a better day, vaguely in the future, when networks could connect the world and permit participants from far reaches of the earth to communicate with one another.15

Getting a Kick out of Acceptable Use

The NSF operated under a charter—granted by the US Congress several decades earlier—that most of its employees found inspiring. The charter gave the NSF a grand purpose: to stretch the frontier of science and aid the institutions of discovery. Most employees believed in the power of science to make the world a better place. Notwithstanding the higher ideals, some words in the NSF charter put limits on the scope of the NSF’s mission, and did not inspire such love. While the NSF’s mission allowed the NSF to directly help research in the United States, it did not give the NSF permission to start an industry that could compete with others already in existence.16

The charter embodied a convenient fiction. The NSF occasionally subsidized industry without admitting to it directly. If scientists conducted science as an end in itself, then the NSF’s mission remained safely distant from the interests of commercial firms. Scientists would make discoveries with the aid of public moneys and firms would use what they considered valuable. However, the separation of science from industry was a fiction in areas where scientific discovery could have immediate pragmatic applications, as in some parts of frontier computing. In these cases, the NSF would pay for discovery that firms could put to use. Often at the same time the NSF would subsidize training of computer scientists that industry could put to work on the same technologies. In a few well-known cases, the NSF had funded research that had led to inventions that made venture capitalists rich.

The growth of the Internet under NSF management put strain on that fiction and exposed its weakness. More to the point, the charter did not give the NSF permission to build the Internet with taxpayer money so firms could conduct commerce over that network. The precedent had been set some time ago: if the NSF ever stepped over those bounds, it could expect a congressional hearing to bring the agency back in line. But there was little precedent for the situation the NSF faced: what if the technology could do two types of activities—help scientists conduct scientific research and help private users conduct commercial transactions?

The acceptable use policy, or AUP for short, was the policy for addressing this type of question. The policy issue did not present itself in a clean way in practice. A common situation from the late 1980s illustrates one aspect of the issue: could students at a university hold a flea market online, one selling, say, tickets for a college football game to another? If it was done informally over electronic mail, there was not much anybody could do about that transaction. However, if the owner of the tickets advertised their existence on a university-owned and -operated server, then it potentially violated the AUP, as the advertisement used university resources to support commerce. University administrators might take the post down and advise the student to post outside the Internet, in a privately operated bulletin board network. The student might complain, and with some merit, that most of the potential buyers—other students—were not using the privately operated bulletin board network. Requiring all transactions to move off the Internet was not in the interests of either buyer or seller.

Carriers faced a more complex set of questions, and these arose from the same origins. If a network carried traffic from the NSF-sponsored Internet, was it acceptable to carry traffic from anyone else over the same lines? At first the answer was absolutely not. If the US government paid to set up the infrastructure, as it did with the NSFNET, carrying commercial traffic could be interpreted as a violation of the NSF’s charter.

Most engineers despised this policy for no reason other than its pointless rigidity. Engineers saw quite clearly that economies of scale would be easier to achieve by putting as much traffic as possible over the backbone lines. Anything else—such as separating lines into NSF traffic and non-NSF traffic—would be inefficient, a ridiculous limitation on achieving an optimal use of expensive resources.

All could see that once the government no longer owned and governed the NSFNET, questions about the AUP did not have to stand in the way of sharing traffic. That was partly what motivated the plan to privatize in the first place. However, it also introduced a hitch. What policies would apply during the transition, the time between the announcement that the NSF would privatize the Internet and the moment that it actually happened?

This was not a trivial question. Discussion about privatization began quietly in 1989 but had become public by 1990. In the meantime, the NSFNET had to operate, make upgrades to accommodate growing traffic, and provide service for academic users. Foresighted entrepreneurial firms had to keep earning revenue and covering as much of their operating costs as possible. Yet the NSF did not—could not—act quickly. As delays began to creep into the schedule for privatization, firms looked after their own interests. Disorderliness lay around the corner.

Entrepreneurs Taking Action

In 1989 three entrepreneurs saw an opportunity and acted: PSINet and UUNET started their businesses. PSINet (PSI = Performance Systems International) had been founded by William Schrader and Martin Schoffstall in late 1989 by privatizing the lines that supported the NYSERNET, the regional network that Schrader had managed until that point. UUNET was founded at roughly the same time by Rick Adams.

As participants in operating parts of the network, Schrader, Schoffstall, and Adams were familiar with the plans to privatize the NSFNET and thought about commercial issues the same way. They went into the business of carrying TCP/IP-based traffic in advance of the growth of the market, and viewed themselves as foresighted entrepreneurs. They anticipated a big growing market in the near term. They also could see that many Internet insiders did not care to start businesses, nor did many established firms, such as telephone companies. Unlike these established players, these entrepreneurs forecast that the new market would offer a good future for an entrepreneurial firm.17 They located their firms near each other in northern Virginia, a stone’s throw from Washington, DC, proximity they regarded as necessary for the lobbying that might protect their young firms.

How did they each start their firms? Rick Adams merely changed the status of his organization. He turned the nonprofit organization by the same name—a hosting site that facilitated Usenet, e-mail, and text exchanges between bulletin boards—into a profit-oriented firm that provided backbone services independent of government backbone assets. Schrader and Schoffstall cut a sweetheart deal. They bought assets from the state of New York and NYSERNET, then turned around and rented services back to the very same customers. The university users liked the deal because they began getting services from a large, growing private firm that realized greater efficiencies and passed some of the savings to the users. The state government of New York liked it because it could claim that its public expenditure had spawned private enterprise. For PSINet the situation was as close to ideal as any entrepreneurial company could desire. The company gained working assets on day one, as well as a set of paying customers.18

The founding of two entrepreneurial firms changed the chess game for IBM. IBM no longer could anticipate being the sole beneficiary of privatization. IBM would face competition from these upstarts, and potentially others that followed their example, adding to what IBM anticipated from others (such as Sprint). The entrepreneurs also would add their voices to the policy discussion, pushing for policies IBM did not favor.

As delays mounted in the privatization plan, IBM devised a strategy for the period before privatization was complete.19 It established Advanced Network Services (ANS) as its commercial division for the private Internet. IBM’s managers sensed that the company could gain an advantage for ANS’s services by interpreting the AUP in a self-serving and narrow way, one that yielded strategic gains during the transition. As figure 3.3 illustrates, the NSFNET provided national coverage. De facto, so too would ANS. IBM’s management believed that ANS’s geographic scope would provide it with advantages when bidding for new business. It believed it could keep the advantage to itself if it did not allow any interconnection with a competitor.


FIGURE 3.3. NSFNET Backbone Service, 1993 (available at the Atlas of Cyber Spaces, maintained by Martin Dodge, http://personalpages.manchester.ac.uk/staff/m.dodge/cybergeography//atlas/geographic.html, accessed March 2004)

IBM’s lawyers wrote a self-serving legal opinion. The opinion stated, in effect, that IBM and its newly established division, ANS, could not interconnect with others because it would lead to mixing of traffic from research-oriented and non-research-oriented sources, violating the lawyers’ interpretation of the AUP’s application to the NSFNET. Since they did not want to violate the law, the opinion said, IBM’s managers had no choice but to refuse interconnection to the NSFNET to any other firm carrying traffic from sources other than the research community. That meant practically everybody would be denied interconnection, especially PSINet and UUNET, since much of their new growth was coming from outside the research community.

Through additional maneuvers IBM tried to tip the scales in its own favor. Its lawyers split ANS into two firms, one for profit and the other not for profit. When ANS carried commercial traffic, it did so through the for-profit company. When it carried NSF traffic, it did so through the not-for-profit organization. Of course, in either case, the traffic traveled over the very lines that ANS refused to connect with others carrying commercial traffic. However, through this legal technicality, the traffic was not mixed.

There was one more aspect, and it infuriated observers who cared about Internet policy.20 IBM’s split gave it a way to interconnect with other regional networks already in the NSF network. When this was announced, it was quite controversial, especially because the NSF seemed to initially accept the proposal. ANS’s managers put forward a settlement plan for interconnection with the for-profit part of ANS. In this plan, other firms paid ANS an interconnection fee based on the volume of traffic their networks put on ANS’s lines. These actions were designed to leave ANS as the biggest operator of the national backbone, with no close rival, and in a position to collect money from others.

Outraged observers cried foul.21 Bill Schrader gained attention by accusing ANS of gaining a competitive advantage against PSINet with the help of the NSF, violating the NSF’s charter. It eventually would lead to an investigation from the Office of the Inspector General and a congressional hearing.

With the benefit of hindsight it is possible to see why ANS’s proposals and behavior generated outrage. First, it raised issues about the different conduct appropriate for assets when they are both public and private. In the eyes of many participants of the research-oriented Internet, IBM’s assets were supposed to be operated in the public interest until the NSFNET was privatized, not prior to it. A second issue arose over the unilateral and nontransparent way in which IBM reached these decisions. ANS’s actions were not conducted in broad daylight. The Internet community was accustomed to participatory decision making for changes in the use of public assets. Instead, these actions were taken without consulting the broader Internet community, consulting only Wolff. Wolff had been involved because the cooperative agreement between IBM and the NSF had to be modified.22

As it turned out, one congressman was paying attention and decided to take action. Rick Boucher of Virginia chaired the subcommittee that oversaw NSF’s activities. His subcommittee eventually would conduct hearings in March 1992. A number of people testified, including Schrader, Wolff, and others. These hearings highlighted the confrontation and would change the shape of the commercial Internet, leaving an indelible mark.

Organizing against IBM

Before Boucher held his hearing, one other significant event had profound long-term consequences for the Internet. PSINet and UUNET took action designed to bypass ANS’s maneuvers, and, in doing so, PSINet and UUNET created their own interconnection policy. Starting from conversations in late 1990,23 and involving Susan Estrada, who operated CERFNET (California Education and Research Federation Network), based in San Diego, this team established the Commercial Internet eXchange, or CIX for short.24 The conversations came to fruition in the summer of 1991. CIX solved the interconnection problem all three of them faced.

The engineering behind CIX was straightforward. CIX placed a router and a series of interconnecting lines from the three providers in Washington, DC. Users of one firm’s network could send direct messages to users of another firm’s network. CIX resolved interconnection issues with an innovative contracting solution. All firms who joined the CIX brought their traffic to the router at the center of the exchange and agreed to take traffic from every other firm who deposited traffic at the same location. Each member of CIX paid a flat fee to support the cost of the equipment and maintenance, and each agreed not to charge each other on the basis of the volume of traffic they delivered.

Soon thereafter CIX opened another router in Santa Clara (eventually moving it next door to Palo Alto), and another in Chicago a year later. CIX quickly covered the needs of the burgeoning private industry in the entire country. CIX’s actions offered a solution to support interconnection between networks. CIX’s solution put all the costs in fixed fees and left none of them to volume pricing. It contrasted with ANS’s model, which charged in proportion to volume of traffic.
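The contrast between the two pricing models can be made concrete with a stylized sketch. The figures below are invented for illustration; they are not historical CIX or ANS fees:

```python
# Illustrative comparison of the two interconnection pricing models.
# All numbers are hypothetical, not historical CIX or ANS fees.

def cix_cost(months, flat_monthly_fee):
    """CIX model: members split fixed equipment and maintenance costs
    through a flat fee and exchange traffic at no per-unit charge."""
    return months * flat_monthly_fee

def ans_cost(months, gb_per_month, fee_per_gb):
    """ANS-style settlement: an interconnecting network pays in
    proportion to the volume of traffic it delivers."""
    return months * gb_per_month * fee_per_gb

# A small carrier whose traffic doubles every year:
traffic = 50.0  # GB per month, hypothetical
for year in range(1, 5):
    flat = cix_cost(12, flat_monthly_fee=1000.0)
    metered = ans_cost(12, gb_per_month=traffic, fee_per_gb=5.0)
    print(f"year {year}: flat ${flat:,.0f} vs. metered ${metered:,.0f}")
    traffic *= 2  # growth leaves the flat fee behind
```

Under rapid growth the flat fee leaves a carrier’s marginal cost of exchanging traffic at zero, while volume-based settlement taxes exactly the growth the entrants were counting on. That difference goes some way toward explaining why the entrants rallied around the CIX model.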

As it would turn out, the CIX model became the predominant model for interconnection for the next five years. Very quickly other commercial providers of services, notably Sprint, signed up. From there, the momentum kept building, with agreements with small Internet service providers as well as some of the regional networks. Just a little less than a year later, CIX essentially had everyone except ANS.

By the time Boucher held his hearing, ANS had become isolated, substantially eroding its negotiating leverage with others. By June 1992 ANS’s settlement proposals no longer appeared viable. In a very public surrender of its strategy, it agreed to interconnect with the CIX on a seemingly short-term basis, retaining the right to leave on a moment’s notice. In fact, it never rescinded the decision.25 From then on every provider “peered” with every other in CIX at no cost.

This confrontation was more than a mere hiccup on the path to privatization. Out of the confrontation with ANS were born the institutions that defined the relationship between the multiple providers of data services in the US national network. Once it was formed, CIX held together for several years. No firm had an economic incentive to depart from this solution, as no carrier was large enough to go its own way without cooperation from others. Peering in this way would continue to be practiced until the NSF put forward a plan that imitated CIX’s ideas and rendered CIX’s contribution less essential.

Why was this arrangement so crucial? It held together long enough to get the industry off to a competitive start, nowhere close to a monopoly in its backbone. As it would turn out, this feature did not change. The backbone market remained competitive for most of the 1990s. Indeed, nothing would threaten to change that until the largest backbone providers of Internet services, MCI and WorldCom, proposed to merge, generating a government-mandated divestiture as a condition to achieve the merger. Similar issues arose, once again, when MCI/WorldCom tried to merge with Sprint near the end of the decade. That proposal would collapse over antitrust issues.

Did this disorderly crisis with ANS result in a healthy, privately operated commercial Internet? In the long run it did. With a touch of irony and the sense of relief that comes with retrospective hindsight, it is possible to say, “Thank goodness for IBM’s self-serving behavior!” Without it, perhaps no firm would have been motivated to explore solutions to the hard questions about how firms would interconnect. And without it, perhaps NSF would not have taken a new approach to peering.

The Setting, Planning, and Anticipation

The episode with ANS and CIX had an irreversible effect on the privatization of the NSFNET. Wolff found himself facing more scrutiny than any government administrator would prefer. The NSF needed to reassess, figure out what was possible, and revise its plan. Pragmatically speaking, how does a government agency respect such processes and still transfer the backbone to private parties? To make a long story short, the NSF, with Wolff as the leader on these initiatives, held meetings and hearings, adopted a plan, obtained comments, and revised the plan while simultaneously responding to commentary from Internet administrators and policy wonks all over the country, who—to say it charitably—willingly provided advice, kibitzing every step of the way.26 All the honest policy wonks—and more!—had their say.

The final plan emerged in May 1993.27 It had two broad goals: First, it proposed a timetable for turning off the NSFNET, effectively transferring the assets to IBM and MCI. Second, it established institutions, modeled after CIX and described below, that helped the commercial Internet operate as a competitive market after the NSFNET shut down. The second goal supported competitive interconnection in the newly privatized Internet and provided a significant lasting legacy from the privatization of the NSFNET. Much of this became implemented more than a year later, when firms were selected to implement various elements of the plan.

The shutdown date for the NSFNET finally was set for the spring of 1995. The process behind the shutdown was genuinely complex. That made it hard for anyone outside of the most experienced Internet insiders to understand the consequences of what the NSF had proposed.

Stepping back from details of events and seen in retrospect, NSF’s actions raise a puzzle. The United States was the first country to privatize a carrier industry for data using TCP/IP protocols, so the NSF had no precedent to follow. Every action and policy had to be invented from scratch, without any clear consensus on how to proceed. Despite being the forerunner in developing a commercial Internet, the US experience did not set a precedent for other countries. In contrast to the United States’ competitive data carrier business, carrier services in many other countries were handed over to each country’s telephone company, resulting in monopoly provision. Why did the United States take such a unique path?

As chapter 2 implied, the answer has much to do with timing. Had commercialization in the United States occurred in a setting resembling the market structure several decades earlier, the new communications technology would have been handed to the largest communications company in the country, AT&T. It would have been an exclusive franchise (if AT&T’s management had cared to commercialize it, which they may not have). Instead, the United States got a competitive data carrier business because it was possible to take advantage of the recently created competitive telephone industry. Credit for that should go to the United States’ unique set of antitrust laws, which had broken up AT&T.

Divestiture created a market structure where a dozen different firms possessed the ability to perform commercial data carrier services. More specifically, in late 1992 there were seven regional Bell operating companies,28 three major long-distance services plus many small firms,29 a large geographically dispersed local telephone company,30 many local telephone firms in several medium-sized cities,31 and a small but growing cellular telephone market.32 In addition, at least two entrepreneurial entrants, UUNET and PSINet, had made their presence known. Finally, IBM and its division, ANS, stood poised to offer service. Soon there would be others who would try. In short, many firms possessed the necessary technical and commercial assets to reach the technological frontier and offer a viable service. If they did not possess such assets, it was possible to quickly assemble them by combining existing assets with those they purchased from markets.

Other high-tech markets such as the computing and telecom equipment markets had a similar feature. The ability of multiple actors to take action on the technical frontier was sometimes called divided technical leadership. Although familiar in computing, it was a structural feature novel to the communications carrier sector—and one the trustbusters had artificially nurtured. Industries with divided technical leadership innovate more quickly than monopolies. Monopolies typically prefer a quieter life of controlled experimentation to one of unrestrained experimentation in the service of competitive goals. With divided technical leadership, in contrast, no single firm controlled the direction of technical change, nor could any single firm block a new initiative. NSF operated in the midst of completely new territory, fostering private initiatives.

What to Do with Divided Technical Leadership?

In June 1992, Boucher introduced a small amendment to an existing bill. It was a deft legal solution. It changed the charter of the NSF, altering the legal definition of NSF’s mission just enough to permit an alteration of the NSFNET’s AUP. That change in the AUP permitted others to share traffic. The key change was seemingly small. It read:

Section 3 of the National Science Foundation Act of 1950 (42 U.S.C. 1862) is amended by adding at the end the following new subsection: (g) In carrying out subsection (a)(4), the Foundation is authorized to foster and support the development and use of computer networks which may be used substantially for purposes in addition to research and education in the sciences and engineering, if the additional uses will tend to increase the overall capabilities of the networks to support such research and education activities.

That slight change in words made all the difference. It meant that the NSF did not violate its charter when it shared lines with private users because sharing those lines lowered the costs to researchers in the sciences and engineering fields.33 Additional uses actually did increase the overall capabilities of the networks, as the statute said.

The amendment passed in October. Once it passed, IBM’s lawyers could not plausibly offer a narrow interpretation of the AUP as an excuse for not interconnecting. It also meant that other firms could bypass ANS altogether, using the CIX method to interchange traffic. That did not enable the Internet to grow right away—it would take time for private firms to appreciate what they could do with the Internet—but it was a crucial step.

Introducing competition had one drawback. It added a high degree of unpredictability to the privatization efforts. Even knowledgeable observers of communications markets were not quite sure what features the final industry structure would have and how it would operate. While there was some experience with competitive telephony and some of the digital technologies that supported it, there simply was not much experience with the competitive supply of data carrier services organized around a packet switching technology, such as one where all computers used a TCP/IP network. Only a few companies had ever offered similar services, and only at a smaller scale. The NSF’s plan ultimately used the division of technical leadership to nurture a market structure that began to head in a direction that most observers of communications markets had never before seen.

The consequences of the NSF plan were hard to understand for another reason: there were many compromises embedded in the details. It was not an ideal plan in many respects. Rather, it was tailored to the NSF’s strengths and weaknesses in the role of sponsoring federal agency.

The NSF normally subsidized research at US universities. From time to time, NSF funded research that directly and quickly led to new products, or saved costs in production of existing products. Most of these gains to private industry happened in a piecemeal fashion, and out of NSF’s control. NSF was not normally in the business of launching new industries or regulating ongoing industries. It did not possess the best set of tools for the task. It also did not possess the legal authority to regulate economic activity other than the conduct of research. In this case, it lacked authority to regulate the commercial Internet once it grew out of its research orientation.

Regulatory agencies, such as the FCC, have two broad tools, which might be simplified as either carrots or sticks. Carrots come in many forms, and the NSF did have several carrots when it used its budget judiciously. For example, NSF’s managers knew how to use solicitation of bids, and through solicitation the NSF could subsidize the establishment of an activity that might not otherwise have occurred. Sticks also come in many forms—but they usually involve a lawsuit to make someone cease and desist from performing an activity. The NSF, however, had few sticks and few staff with relevant experience. In short, the NSF plan relied on carrots because those were the tools it possessed.

At a broad level, the launch of the private and commercial Internet involved four related actions. First, technical planning for the Internet had to move out of government hands. This was substantially accomplished by removing sponsorship of the IAB, which acted as the organizational home for standard-setting activity at the IETF. It moved the groups to private not-for-profit sponsorship, and called the new organization The Internet Society. It evolved on a track independent of other actions, but like the others, it was essentially a government gift.

Second, the NSFNET had to cease to operate, and the retirement date for the NSFNET was announced as April 1995. This date brought up an additional distraction and controversy because it extended IBM and MCI’s contract a second time beyond the original granting date.34 These concerns partly reflected a lack of experience with the recent change in the Acceptable Use Policy. As it turned out, this extension generated less controversy than feared. While it mildly benefited ANS directly, as long as all networks interconnected, this extension did not hurt the other commercial networks. The announcement of the date also anticipated that private firms would step in to provide service after that time.

Third and fourth, there needed to be additional access points for interchanging data, and privatization of domain name systems had to occur. These parts of the plan had very distinct experiences. The former encouraged the development of competitive markets, while the latter resulted in monopoly. After the fact, the former received few complaints, while the latter became the focus of enormous amounts of attention.

The Plan for Network Interconnection

The NSF’s plan altered the structure of data exchange among backbone firms. This was the most technically challenging piece of the plan, and, in comparison to the casual discussions prior to 1991, also the aspect that underwent the largest change. It was also the most unpredictable piece. There was no precedent for operating a commercial carrier service on a national scale with many data-exchange points. It had not been technically feasible for most of the 1980s due to the routing protocols in use, which assumed a single backbone. The stage was set by the move to a new protocol, known as “best-effort routing,”35 which allowed for multiple backbone providers.
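A toy example can illustrate what the new protocol made possible. The sketch below is a deliberately simplified stand-in for the routing protocols of the era, not an implementation of any of them; the network names are invented. Under the old single-backbone assumption, every destination had one path; with multiple backbones, a router chooses among several advertisements and can route around a failed provider:

```python
# Toy model of route selection with multiple backbones. Names and hop
# counts are invented; this is not an implementation of a real protocol.
routes = {
    # destination: advertisements of (backbone, path length in hops)
    "university-a": [("backbone-1", 3), ("backbone-2", 5)],
    "university-b": [("backbone-2", 2), ("backbone-3", 4)],
}

def best_route(dest, down=frozenset()):
    """Pick the backbone advertising the shortest path that is still up."""
    candidates = [(hops, bb) for bb, hops in routes[dest] if bb not in down]
    if not candidates:
        raise RuntimeError(f"no backbone can reach {dest}")
    return min(candidates)[1]

print(best_route("university-a"))                       # backbone-1
print(best_route("university-a", down={"backbone-1"}))  # backbone-2
```

The essential point is in the data structure: once a destination can carry advertisements from more than one provider, no single backbone is indispensable.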

While no serious observer was 100 percent certain how well the NSF’s plan would work in practice, the uncertainty should not be exaggerated. The federally sponsored network had several data-exchange points. While the NSF was planning the shutdown, the CIX was already operating its precedent-setting servers. Most of the uncertainty concerned economic conduct: how would firms behave?

The plan for network interconnection underwent a noticeable revision because of the controversy over ANS and the AUP. Chided by the negative attention, the NSF attempted to be more transparent and put out its plan for comment from many of the Internet’s participants. It received suggestions to support multiple data-exchange points in multiple locations, as well as encouragement to support a competitive market. The attention bore fruit; rather than anticipating one commercial backbone, the NSF’s 1993 plan anticipated and accommodated multiple backbone providers.36

The plan relied on two stylized versions of textbook economics and one leap of faith in competitive incentives. The subsidization of the establishment of several network access points, or NAPs, was the first strategy that relied on a version of textbook economics—it aspired to avoid bottlenecks. If there had been only one place for backbones to interconnect, then it would have been easier for a single entity to erect a bottleneck to new entrants and thereby control a monopoly. Instead, there were multiple points of interconnection, so such bottlenecks could not arise.

For the second version of textbook economics, the NSF’s plan set the rights to operate a NAP out for bids in an auction. In theory the highest bid would come from the potential owner who foresaw the lowest costs. While each NAP operated on its own and with a distinct owner, the NSF anticipated a grander geographic logic to this structure. Every NAP stood at the center of a regional collection of backbones and networks operated by many other firms, and served as a single place to collect and exchange traffic. The geographic structure of the Internet backbone and regional networks followed the limits and pathways determined by the geographic dispersion of the US university system. Universities aggregated into regional alliances to support sending traffic between campuses, and those alliances followed rather natural geographic groups.37
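The logic of the auction can also be put in stylized terms. Assume, purely for illustration, that every bidder expects the same revenue from operating a NAP and differs only in operating costs; willingness to pay is then revenue minus cost, so the lowest-cost operator submits the highest bid:

```python
# Stylized auction logic: the highest bid comes from the lowest-cost
# operator. All figures are invented for illustration.
expected_revenue = 10.0  # identical for every bidder, by assumption
operating_costs = {"firm-a": 6.0, "firm-b": 4.5, "firm-c": 7.5}

# Each bidder's willingness to pay for the right to operate the NAP:
bids = {firm: expected_revenue - cost for firm, cost in operating_costs.items()}
winner = max(bids, key=bids.get)
print(winner, bids[winner])  # firm-b 5.5
```

In practice, of course, bidders’ expected revenues differed too, which is part of why the plan also leaned on competitive discipline among NAPs, as described below.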

The leap of faith relied on the owner of a NAP to do everything necessary to keep a NAP operational after it was established, such as manage it, collect revenue to cover costs, respond to unanticipated crises, and so on. Thus in one of its more questionable choices, the NSF did not place any additional restriction on the operations of NAPs. Bidders were not required to operate with rules similar to CIX. While it would have been reasonable to assume the NAPs would operate in a manner similar to CIX in 1992, that presumption was not made explicit.

The plan relied on competitive discipline to keep quality at an acceptable minimum. The reasoning went as follows: If multiple backbone providers had choices among multiple NAPs, then it put pressure on NAP owners. NAP owners would have incentive to make sure users of the NAP did not move their data elsewhere. In brief, each NAP competed with the other for the fees provided by interconnecting carriers. That generated incentives for each of them to lower cost and raise quality.

Consider the conditions under which such discipline would be effective. It could be most effective when all carriers participated in NAPs, since then every NAP would be competing for the connection fees of every other carrier. That also implies the opposite: this discipline could be less effective if some carriers found reasons to bypass the NAPs altogether. Indeed, if backbone carriers abandoned NAPs, exchanging their traffic elsewhere, it was clear what would happen to NAPs—they would wither.

The NAPs were put out to bid and awarded in 1994. The NSF held several different auctions for the right to be the NAP provider in different designated locations as potential data-exchange points. Some exchanges were called Metropolitan Area Exchanges, and the acronym was pronounced as MAE, as in Mae West, the actress. In total four NAPs arose in four locations: San Francisco (operated by Pacific Bell), Washington, DC (called MAE East, operated by Metropolitan Fiber Systems), Chicago (operated by Ameritech), and just outside New York (operated by Sprint).

Even though the financial support for NAPs would not be strong, the backbone market was born as a competitive market, and that competitiveness would persist long after birth, even after NAPs diminished in importance. Later events would show the wisdom of starting the industry off with a structure that fostered multiple access firms and competitive supply. That wisdom would become apparent even though NAPs would not retain a central place in the long term.

The Soap Opera over Domain Names

Almost from the outset a conflict emerged around the privatization of the domain name system, or DNS for short. The following provides a brief overview of why the privatization of the DNS began to interfere with the privatization of the Internet.38

For all intents and purposes, the large-scale domain name system came into existence in 1987. To make a long story short, a small team of researchers—with the help of DARPA funding and directives—invented the domain name system in order to address the incompatibilities across distinct networks in the United States (whose origins were discussed in chapter 2). The incompatibilities shaped almost every action, even the simplest, such as sending e-mail. It was possible to send e-mail between the Internet and the other networks in the country—Bitnet, UUCP, or CSNET—but it required the sender to specify the path for the data to take. It took so much technical knowledge that only the most technically skilled sender could do it.39
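To get a feel for the difference the domain name system made, consider the contrast between a source-routed address, in which the sender spells out every intermediate machine, and a domain-style address, in which the sender names only the destination. The hostnames below are invented for illustration:

```python
# Contrast between pre-DNS source routing and domain-style addressing.
# Hostnames are invented for illustration.

# UUCP-style "bang path": the sender lists each intermediate machine,
# which requires knowing the topology of the network.
bang_path = "gateway!relay1!relay2!deptvax!alice"
parts = bang_path.split("!")
hops, user = parts[:-1], parts[-1]
print(f"deliver to {user} via {' -> '.join(hops)}")

# Domain-style address: the sender names only the destination, and the
# mail system consults the name system to find the route.
address = "alice@cs.example.edu"
user, domain = address.split("@")
print(f"deliver to {user}; look up {domain} to find the mail host")
```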

Networks had “gateways,” protocols that allowed messages to pass between mail servers at individual locations and one of the networks. The domain name system was part of a solution to allow individual sites to move from CSNET/BITNET/UUCP to the Internet by reconfiguring only the mail server to deliver mail via TCP/IP on the Internet.

The new design was modular, allowing for different ways for data to move in and out of a gateway. That unified all e-mail from different networks. It also enabled new building blocks to be added. In the short run it accomplished its primary goal of unifying all communication. In the long run it would have profound consequences. After privatization, this design would enable a multiplicity of e-mail programs to thrive at households and business establishments.

The unification had an asymmetric effect on the four different networks it brought together. It allowed all to survive, and each network could grow at whatever pace its users preferred. Because the TCP/IP network was the largest and most widely supported among them, the unification favored its further growth. In turn, the growing user base encouraged further improvements, which further encouraged more users.

The privatization of the domain name system occurred in a two-step process. First, in 1991, the US government reasserted its authority over the domain name system, taking control of part of it from Jon Postel, a well-known Internet pioneer, who had been among those who helped to invent the DNS. After its inception he had almost single-handedly managed and operated the system.40 On one level removing Postel from operations was straightforward: he worked under a contract from DARPA to manage the domain name system, and when the old contract expired, the Defense Department sent it out for new bids. Second, and relatedly, a private firm won the right to manage domain name services. Authority over the root server was transferred to a company called Network Solutions, which was based in Virginia, just outside of Washington, DC.

Neither step was particularly well received by the Internet community at large. Every experienced administrator knew Postel, and many insiders had known him for more than a decade or two. Postel was known for his extraordinary intelligence, lack of self-serving behavior, and hardheaded yet gentle persistence and stubbornness, and virtually every longtime participant trusted his conduct. In contrast, the managers at Network Solutions, as well as their motives, were unknown. Trust would have to be earned. No amount of prospective efficiency gains from privatization would wipe away the suspicion inherent in this transition. No abstract argument about the need to routinize and scale the domain name system for prospective growth of the Internet could eliminate Jon Postel’s personal authority, or reduce the sense that something was being taken away from a mainstay of the old guard.

The transition was jarring for another reason. The domain name system was an essential function for operating the Internet, and it had operated with a mixture of formal and informal principles. This transition began steps toward the elimination of that informal element and replaced it with a process managed at a firm. For example, prior to this change an Internet administrator could simply send an e-mail to Postel or buttonhole him at a conference and casually raise issues. It also potentially altered the habits of every Internet administrator and designer connected to Postel—every major contributor to the Internet up until that point.

Privatizing the domain name system underscored an unsettled question among many Internet participants—namely, once a government or private firm operates the domain name system, what policies should govern an operation that many others depend on? Should it depend on Jon Postel’s judgment, or should it depend on institutional rules founded in government decisions? No law had compelled any participant in the Internet to cooperate with Postel, and everyone did so voluntarily. Why should everyone remain so cooperative in a formal system? No satisfying answer to this question was found over the next decade. Indeed, there could not possibly be one, since many inconsistent opinions existed and any action necessarily left many insiders unhappy with the outcome.

The transfer of authority over the root server to Network Solutions set the terms for this debate and drove the transition forward by rolling two functions into one. The first function was the management of a database that associated domain names underneath com, edu, net, gov, and org with appropriate IP addresses. The second function involved notifying other participants on the Internet about new names and the IP addresses associated with them.41 The latter also enabled the transfer of existing names and eventually would set the bounds for operating a secondary market in domain names.

The computer science behind this registry was straightforward. It involved maintaining the registry, updating it, and providing this information to others. However, as the Internet grew these processes became much more involved than anticipated, because the central registry also had to delete old registered names from those who no longer wanted to keep a name or (later, after their establishment) failed to pay their fees. Network Solutions needed to make available those names to others, as well as make it feasible for new applicants to apply for new names. From the outset there were clearly many open questions concerning the registry’s pricing for services, its processes, and its involvement in the secondary markets. Critics began to worry that Network Solutions could (and would) employ its controlling position to gain self-serving ends in other operations within the Internet.42
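The registry function just described can be sketched in a few lines. This is a minimal conceptual sketch, assuming nothing about Network Solutions’ actual systems; the names and addresses are invented:

```python
# Minimal sketch of a domain-name registry of the kind described above.
# It assumes nothing about Network Solutions' actual systems.

class Registry:
    TLDS = {"com", "edu", "net", "gov", "org"}

    def __init__(self):
        self._table = {}  # domain name -> IP address

    def register(self, name, ip):
        tld = name.rsplit(".", 1)[-1]
        if tld not in self.TLDS:
            raise ValueError(f"unsupported top-level domain: {tld}")
        if name in self._table:
            raise ValueError(f"{name} is already taken")
        self._table[name] = ip

    def release(self, name):
        # Deleting lapsed names returns them to the available pool --
        # the recycling step the text notes was cumbersome in practice.
        self._table.pop(name, None)

    def resolve(self, name):
        return self._table.get(name)  # None if unregistered

reg = Registry()
reg.register("example.com", "192.0.2.10")
print(reg.resolve("example.com"))  # 192.0.2.10
reg.release("example.com")
print(reg.resolve("example.com"))  # None
```

The bookkeeping is simple; what made the arrangement contentious was not the computer science but the monopoly position of whoever ran it.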

In retrospect, many aspects of that choice are easy to question.43 For example, why was the contract for five years and not shorter? Why grant a sole provider without embedding in the contract a provision for regular review? Why did the contract not include any specification of the minimal quality of services offered?44 Why was com not given to one firm, edu to another, net to yet another, and so on, so that some form of competition between these names could exist from the outset?

The contract also was incomplete in some essential business matters, such as the pricing of domain names. Perhaps that made sense in 1992, since there had been no charge for registering names in the informally operated, research-oriented Internet. In 1995, however, this policy was changed at the request of the owner—a decision that was quite unpopular with the existing Internet community.45 Once again, such an action touched on core policy philosophies surrounding an essential asset. Should a firm be allowed to charge for this service and profit from it? If so, should its actions be held in check by any regulatory mechanism? What form should that mechanism take?

By the standard norms of network economics, the governance of domain names lacked solid foundations. The manager of the top-level domain names occupied a monopoly position and an essential function inside the network's operations. That combination required some sort of regular oversight to prevent Network Solutions from self-dealing and from using its monopoly position to subvert the competitive process in other areas related to domain names. Oversight would have placed limitations on the firm that performed the function; it might have limited its pricing or its ability to achieve other self-serving goals. However, the NSF did not put in place any oversight of the private firm. Later observers became outraged at the lack of oversight.46

The reason behind the absence of oversight goes back to carrots and sticks. The NSF had only carrots at its disposal; how was it supposed to build a regulatory institution with those? Designing regulatory institutions is normally the purview of the US Congress. If the US Congress had wanted to regulate the DNS, it could have done so at any time. It just did not.

Stepping back from the soap opera's details, perhaps Congress did not act because there was no sufficient crisis. The most important economic goal for the transition was avoiding disaster, and that was achieved. The transition to a privately managed domain name system did not cripple the commercial Internet, nor did it particularly handicap its global operations. Long after the privatization, prices stayed low. The cup was partially empty as well: the domain name system was not a blindingly efficient system in 1995. Recycling retired names involved a particularly cumbersome process, for example, and trademark holders did not have an easy way to address complaints of infringement.47

In short, there were no major delays or costs preventing other parts of the Internet from growing. CNN found a way to own CNN.com, and IBM found a way to own IBM.com. Even Encyclopedia Britannica owned both encyclopediabritannica.com and eb.com and began experimenting with them. An active secondary market for domain names also began to emerge, in which some names became quite valuable as the potential of the Internet became more apparent.

In many respects the initial actions in the early 1990s kicked the policy issues down the road and turned any little event into an engaging sideshow for insiders. After privatization Postel remained active and continued to tinker with improvements to the Internet. Network Solutions still operated under contract. Not surprisingly, therefore, tension did not diminish after privatization, and it received considerable attention in Internet forums and conferences. The Internet community sensed the gap in governance and made attempts to assert its independence from government control. Perhaps egged on by his friends, or frustrated by the lack of progress, Postel eventually made one last attempt to reassert his control, but it failed.48 In the greater scheme of things, none of these events fundamentally changed the commercial domain name system, which continued to operate under the contracts it had.

In 1997 and 1998, in response to the perceived need, the Clinton administration set up ICANN (Internet Corporation for Assigned Names and Numbers) under the auspices of the Department of Commerce, with an understanding that the organization would move out of US government control sometime in the next decade. This creation was contentious, and those difficulties made for salacious stories among Internet insiders and compelled many observers to wonder why nothing had been done sooner.49

For all intents and purposes, the establishment of ICANN sealed the issues within the boundaries of one institution. While many squabbled over the behavior of ICANN and its core principles, the Internet continued to function as it always had.

A New Industry Structure

Years later, after the commercial Internet grew to a gargantuan size, a myth arose that the policies behind the privatization of the Internet succeeded because the stakes were too low to attract attention. That myth is misleading. The privatization cannot be characterized as a quiet walk in the park. The stakes were high enough to attract action from many concerned firms, inventors, and entrepreneurs. The countervailing stakes of different parties attracted the attention of congressional representatives on oversight committees, and eventually attention from the investigative arm of the US government.

There is also a myth, popular among cynics, that all regulatory decisions eventually become “captured” by private interests with high economic stakes in swaying decisions in their favor. Again, this view is misleading. The privatization of the Internet involved complex decisions in which a few very large firms had very high stakes in the outcome. According to the standard theory, the situation was ripe for capture, and, indeed, IBM tried. Yet capture does not characterize events accurately either. Although the process was disorderly and messy, the outcome was not captured by any single firm.

Appeals to higher ideals motivated actions and had an enormous influence over decisions. The ideals of many honest policy wonks came to the center of the conversations about the policies for privatizing the Internet. These ideals generated a reaction at the NSF, whose managers took very costly and inconvenient actions to meet them, hardly the path of least resistance. By the time the 1993 plan was developed, it no longer focused only on cost savings from privatization. It also aspired to nurture conditions that would allow the commercial Internet to flourish—such as supporting multiple backbone firms and developing interconnection policies that would enable multiple firms to thrive.

Wisdom did not prevail in every policy action associated with privatizing the Internet, and it would be saying too much to claim that government policy makers always got it right. DARPA and the NSF privatized the domain name system without fully addressing long-term governance issues. While this part of privatization achieved its minimal functional goals, the initial clumsiness generated complaints and resentment, and the issues continued to fester for many years thereafter.

More to the point, though honest policy wonks had their day, there was no consensus forecast among them, and participants knew only one thing for certain: after privatization every firm would have a fighting chance to compete. Nobody had ever seen a market structure like the one the NSF proposed, and the lack of experience stood in the way of making any grounded forecast about the features competition would display. It also interfered with forecasting the scale of adoption and the best strategic actions for capturing value from a large-scale commercial service. As later chapters will explain, that also set the stage for something largely unforeseen at the time: the emergence of innovation from the edges.

1 The quote came in response to a question from Wolf Blitzer on CNN’s Late Edition. He asked: “I want to get to some of the substance of domestic and international issues in a minute, but let’s just wrap up a little bit of the politics right now. Why should Democrats, looking at the Democratic nomination process, support you instead of Bill Bradley, a friend of yours, a former colleague in the Senate? What do you have to bring to this that he doesn’t necessarily bring to this process?” See Wiggins (2000), McCullagh (2000), and Agre (2000).

2 Recounted in Wiggins (2000).

3 McCullagh (2000) and Agre (2000).

4 Wiggins (2000).

5 The quip by Trent Lott about paper clips was returned with the volley, “It’s no surprise that Senator Lott and his fellow Republicans are taking credit for an invention that was created a long time ago. After all, they’re the party whose ideas will take us back to the Dark Ages.”

6 “I was pretty tired when I made that comment because I had been up very late the night before inventing the camcorder,” Gore told the Democratic National Committee on Saturday, March 20, 1999.

7 In a January 2000 written column, Steven Roberts and Cokie Roberts reported on a series of person-in-the-street exchanges. They started with: “When Gore does try to assert himself, it often backfires—witness his claim that he helped invent the Internet.” The announcer on a Bush commercial states, “If Al Gore invented the Internet, then I invented the remote control.” Recounted in Wiggins (2000).

8 Wiggins (2000).

9 That does not complete the privatization of the Internet, however. The next fall Gore ran as the vice presidential running mate to presidential candidate Bill Clinton, and could not take steps in Congress. This chapter describes the last steps, sponsored by representative Rick Boucher.

10 Several participants in the government-sponsored Internet, especially those involved in building and governing the network, remembered Gore’s interest, and during his presidential run would affirm that Gore made significant contributions, highlighting the 1991 legislation. This included quotes from Vint Cerf in Time Magazine, June 14, 2000, and a widely quoted Internet mailing from Bob Kahn and Vint Cerf on September 28, 2000. See additional observations about attempts to set the facts straight in public discourse in Agre (2000).

11 See, e.g., the comprehensive accounting of these relationships in Flamm (1988), and a summary of some of the transfers of inventions from universities to industry in National Research Council of the National Academies (2003).

12 See, e.g., pages 6 and 7 of National Research Council of the National Academies (2003). It covers a range of computing technologies, including those mentioned above and the Internet and World Wide Web. In some cases industry laboratory work precedes university work, but in most cases university work precedes work in industry laboratories and spills into it.

13 For analysis of many experiences in the 1960s and 1970s, see Cohen and Noll (1991).

14 See Brian Kahin, Request for Comment 1192, Commercialization of the Internet, Internet Engineering Task Force, http://tools.ietf.org/html/rfc1192, accessed February 2014.

15 A great deal has been written about the ideal visions that propelled the networking community. See, e.g., Edwards (1997), Hafner and Lyon (1998), Oxman (1999), Castells (2001), Markoff (2005), and Turner (2006).

16 This is described in Abbate (1999) and Kesan and Shah (2001).

17 Bill Schrader, private conversation, July 2008.

18 Bill Schrader, private conversation, July 2008.

19 Cook (1992–94) provides a detailed account of these steps.

20 This is described in detail in Cook (1992–94) and Kesan and Shah (2001).

21 The nontransparency of the legal maneuvering magnified the outrage. Stephen Wolff was notified, but no others knew about the actions. See Cook (1992–94) for a summary.

22 The outrage shows up in the accounts of Cook (1992–94) and Kesan and Shah (2001).

23 See Hussain (2003a, b, c), and, in particular, see the CIX Router Timeline (2003a).

24 Hussain (2003a), footnote 1 for the CIX Router Timeline, makes an interesting point about CERFNET, which, like many of the other regional networks, helped to connect the San Diego Supercomputer Center. It was a for-profit organization and, as such, had a more natural alignment of interests with PSINet and UUNET. Also see the discussion in Oxman (1999).

25 The agreement was forged by Mitch Kapor, who became chairman of the CIX after Marty Schoffstall stepped down. The rivalry between PSINet and ANS made it difficult for Schoffstall to act both as CIX chair and as PSINet executive, while forging a compromise with ANS.

26 This included an audit from the Office of the Inspector General, principally looking at whether changes in the contracting arrangement between the NSF and MERIT had violated federal law. For a summary, see the Cook Report (1992–94).

27 That simplifies a long planning process that stretched over many years. The basic elements of the redesign were first proposed by the NSF in late 1991, reacting to proposals by others. In June 1992 the NSF released a draft solicitation for public comment, receiving a wide array of comments. The final draft—what was called the “revised solicitation”—was released May 6, 1993. The bid winners and the final shape of the plan were announced in September 1994. A detailed summary of these steps is provided in Kesan and Shah (2001).

28 NYNEX in New York, New England, and environs, Bell Atlantic in the mid-Atlantic states, BellSouth in the Southeast, Southwestern Bell in the south-central region, Ameritech in the Midwest, US West in the Rocky Mountain states and environs, and Pacific Telesis in the far west. However, due to ongoing debate about the permissible lines of business for local telephone firms within Judge Harold Greene’s court, it was unclear how many different business lines each local telephone firm could enter.

29 The major long-distance companies for home and business long distance included AT&T, MCI, and Sprint.

30 General Telephone & Electronics (GTE) owned and operated a large number of geographically dispersed franchises all over the United States.

31 This list included local franchises, such as Rochester Telephone, Southern New England Telephone, Cincinnati Bell, and many rural co-ops.

32 At this point, every major city in the United States had cellular services provided by a duopoly, in which one provider was the local telephone monopoly and the other an independent firm.

33 The hearings took place in the House Science, Space, and Technology Committee’s science subcommittee, chaired by representative Rick Boucher (D-VA), which had jurisdiction over NSF policy. Boucher held hearings on March 12, 1992, after reporter Brock Meeks published several articles about procedural irregularities in Communications Daily (on February 4, 1992; February 6, 1992; and February 21, 1992), according to Hauben and Hauben (1997), chapter 12. The resulting modification to the NSF’s charter was introduced in the 102nd Congress as an amendment to the National Science Foundation (NSF) Act of 1950, H.R. 5344, introduced June 9, 1992, “to authorize the National Science Foundation to foster and support the development and use of certain computer networks.” See http://www.lib.ncsu.edu/congbibs/house/102hdgst2.html, accessed July 2009. It was passed by the House on June 29 and was signed into law October 23.

34 The original contract had been set to expire in 1992. It was extended twice before the NSFNET was retired in 1995. See Kesan and Shah (2001) for a timeline.

35 In part this was due to work on the border gateway protocol (BGP), which accommodated multiple backbone providers. The first RFC for BGP was published in June 1989 (RFC 1105), replaced a year later by RFCs 1163 and 1164. It continued to evolve over the next few years.

36 This also was anticipated in Aiken, Braun, and Ford (1992), “NSF Implementation Plan for Interim NREN.” Insiders appreciated that the technical experience with exchanging data over the previous few years had helped spur new thinking about how to design more efficient points for exchanging data. Already there were plans for what would become a Metropolitan Area Exchange, or MAE. These plans called for a MAE East and a MAE West. The number of network access points (NAPs) did not remain fixed at two; eventually, the NSF increased the number.

37 In an intriguing footnote, Abbate (1999, p. 239) states that this structure was probably inspired by the structure of the telephone industry at the time, which had a mix of regional networks and national interconnection. It is unclear how much of this inspiration was even loosely “conceptual.” In fact, the resemblance was coincidental, not causal: the data networks followed their own regional logic, based on the alliances formed in the mid-level networks and on geographic constraints. Stephen Wolff, private conversation, September 2008.

38 These events have received extensive treatment elsewhere. See, e.g., Mueller (2004), Kesan and Shah (2001), and Goldsmith and Wu (2006) for a full description.

39 See Partridge (2008). Some of this text also comes from Partridge, private communication, 2008.

40 See Mueller (2004), and Goldsmith and Wu (2006) for a full description.

41 Mueller (2004) provides a comprehensive history of this function.

42 See Mueller (2004).

43 Indeed, many observers have done so. See, e.g., Mueller (2004) or Kesan and Shah (2001).

44 For a discussion, see Mueller (2004).

45 This occurred after Network Solutions Incorporated was purchased by Science Applications International Corporation.

46 Kesan and Shah (2001) summarize these positions.

47 For a discussion, see Mueller (2004).

48 This is a well-known story, and is recounted in both Mueller (2004) and Goldsmith and Wu (2006).

49 This is a longer story. See also the accounts in Ceruzzi ([2006] 2008); Aspray (2004), chapter 3; and Mueller (2004).
