7


Platforms at the Core and Periphery

The remarkable social impact and economic success of the Internet is in many ways directly attributable to the architectural characteristics that were part of its design. The Internet was designed with no gatekeepers over new content or services.

Vinton Cerf, codesigner of TCP/IP1

In 1994 a company called Spry Technologies Inc. developed an endearing little product called “Internet-in-a-box.” Quaint by modern standards, it appeared to be little more than a CD with a program that established an Internet connection through a dial-up ISP, including a licensed version of Mosaic. In fact, it was a clever attempt to create and package everything a user needed to get on the web. It turned a complex process into something simpler, filling in the technical pieces a user needed to access the web. The box was designed to appear familiar, which was thought to be reassuring to the mass-market user. It looked like packaged software, just like any other application software for personal computers, and it was sold on the same shelves in retail outlets.

The product generated $40 million in revenue in its first year, which was enough to suggest Spry was on to something. That early success in the market attracted the attention of CompuServe, which bought the firm in the middle of 1995 and used Spry’s assets to develop its Internet strategy.2


FIGURE 7.1 David D. Clark, chief protocol architect, Internet Architecture Board, 1981–89 (photo by Garrett A. Wollmann, March 19, 2009)

Those events illustrate a phenomenon that would repeat itself in many forms in the commercial Internet. As many chapters will discuss, the commercial form of the Internet and web permitted many small-scale businesses to enter and generate revenue in excess of their costs. Many of them aimed to simplify the Internet for users, making it more useful. Just as the modular organization of the Internet in the academic setting had enabled specialization in invention, its commercial arrangement permitted a few specialists (and eventually a flood of them), in this case in commercial form. These specialists worried primarily about their own business application and could take for granted the workings of the rest of the system.

Stepping back from this specific example, another observation leads to a deeper lesson. Spry sold the Internet to users in a format that referenced familiar features of the commercial PC software industry, even though the Internet was nothing like it. The Internet was not a corporeal good. It could not be touched, unpacked, or shipped in a turnkey form. This rendered the business proposition for the Internet rather inchoate in comparison to the PC, even though users wanted it to be easy to use, like the PC. Users needed the technology to work easily, and they needed a low price in a standardized format. Just as in the packaged software business, many users wanted firms to offer products, promise to provide service for them if they did not work, and back up those promises.

The value chain for the Internet differed in a marked way from the value chain for the personal computer. A value chain is a set of interrelated activities that together produce a final product whose value exceeds the sum of the values of its parts. All mature industries have a value chain. In 1995, the value chain for the commercial Internet was quite young and still evolving, while the PC market was almost two decades old and largely settled.

The value chain for the PC involved a wide set of firms. The typical PC used a microprocessor (usually from Intel or AMD) and a motherboard descended from IBM’s 1981 design. It contained several standard upgrades to the memory (from one of many firms), an input/output bus (redesigned by Intel), an operating system (typically from Microsoft), and an internal memory device (from one of several hard-drive firms), as well as many other parts. These were assembled together by one of several firms (Compaq, Dell, Gateway, IBM, and so on) and distributed to users along with other compatible peripheral components, such as printers (most often from Hewlett-Packard, but also from many other companies). Users acquired the products either directly from the assembler (Dell, Gateway) or from a third-party retailer (Best Buy, Circuit City). Standard software applications included office-oriented applications (from Microsoft, Intuit), games (from a variety of firms), and several utilities (Norton, McAfee). Any PC also could accommodate thousands of niche applications.

In comparison, by early 1995, the commercial Internet’s value chain had resolved some open questions, and a few remained. The best-laid plans at NSF had come to fruition. Not only had the commercial Internet survived its transition from public funding to privatization, but a regular routine for exchanging data and settling accounts between private parties had also continued and made a transition into a marketplace. With the release of the web, the prospects for growth looked promising. The market prospects for Internet services were improving, and awareness was growing outside of the cognoscenti into a mass market.

The two markets did have many economic similarities. Both the Internet and the PC had platforms at their core—namely, a standard bundle of components that users employed together as a system to deliver services regularly. In North America, the Internet and the PC also both drew on a similar knowledge base and similar pools of programming talent who shared lessons with one another, sometimes reluctantly. The most talented developers and programmers moved between multiple platforms, whether Unix-oriented network software or C++ for a PC.3

The PC and Internet also shared another feature, discretion to make modifications after purchase. The IBM PC (and its descendants) had allowed for user modification ever since its introduction, through user additions of electronic cards, peripherals, or software applications. The Internet also gave considerable discretion to users. If any administrators or users found existing code unsatisfactory, they were at liberty to propose a new addition or retrofit another.

One key difference struck all observers. Unlike the PC market, there was no profit-oriented organization providing platform leadership for the commercial Internet in mid-1995. That difference received considerable attention from contemporaries. The PC bundle was developed and sold in a proprietary environment, while developers called the Internet more open. There was only one problem with that label. Open was inherently vague. To a contemporary, it elicited about as many definitions as an Inuit had for snow.4

Looking behind the scenes more deeply, there were meaningful differences in the processes for decision making. Two commercial firms in the PC market, Microsoft and Intel, retained and guarded their right to make unilateral decisions about the pervasive standards embedded within the most common configuration of the PC platform. Microsoft’s processes were proprietary, resulting in limited rights for other market participants. Only the platform leaders had unrestricted access to information. The Internet, in contrast, employed a consensus process for determining the design of pervasive standards and protocols. The processes employed documented standards and did not restrict access to these documents or their use by any participant in the Internet.

If pressed, many inside the Internet argued that proprietary platform leadership represented the antithesis of what they aspired to achieve. Yet on the surface it was unclear why Microsoft inspired so much opprobrium. Many of Microsoft’s practices were not appreciably different from those found at other well-known and successful software firms, such as Oracle, Apple, SAP, or IBM. Moreover, Microsoft was astonishingly efficient at supporting a wide and diverse group of application developers. Its managers knew how to reach the organizational frontier for production of large-scale mass-market software in ways few others did.5 Shouldn’t that have inspired either reluctant or approving admiration?

As the Internet began to commercialize in 1995, this topic often generated more heat than light and obscured the plain economics. Both processes had strengths, and both platforms could support innovative outcomes, often with only minor differences in performance. However, proprietary platforms possessed a major strength that open platforms lacked: the ability to execute a “big deliberate push” toward a new frontier, especially when coordinated inside a well-managed firm. Open platforms, in contrast, had an extraordinary strength that proprietary platforms lacked. Openness nurtured radical exploration around unanticipated and underanticipated value, and that could lead to growth around a platform at a rapid pace. That was especially so if existing firms had been reluctant to pursue the unanticipated value, and many entrepreneurs perceived the opportunity.

Both strengths shaped outcomes in the commercial Internet at a watershed moment in 1995. Microsoft was in the midst of demonstrating the principal strength of proprietary software development during its rollout of Windows 95. At about the same time, the birth of the commercial web would demonstrate the principal strengths of open governance, and its ability to enable radical transformation. Markets rarely allow for such direct comparisons of the relative strengths of starkly alternative organizational forms at the same time, but in this case events conspired to sharpen that comparison. Those similarities and differences are worth exploring, and that is the purpose of this chapter. It will frame topics found in many later chapters, such as why the Internet diffused so quickly, and why this contrast lay at the heart of the largest market conflicts of this era.

Computing Platforms and the Internet

To disentangle the consequences of the economic differences between proprietary and open platforms, it is essential to view the functions of platform leadership broadly. According to Bresnahan and Greenstein (1997),

A computing platform is a reconfigurable base of compatible components on which users build applications. Platforms are most readily identified with their technical standards, i.e., engineering specifications for compatible hardware and software.

Well-designed platforms encourage successful innovation from multiple parties in settings where no single firm can easily develop and provide a majority of the applications; the failure or degraded performance of any of the essential activities can lead to inferior outcomes. All platforms—whether open or proprietary—share four functions, and all platform leaders aspire to provide them:

•  Designing a standard bundle of technical implementations that others use in their applications;

•  Operating processes to alter those standards and inform others about the alteration;

•  Establishing targets and roadmaps to coordinate complementary developer and user investment;

•  Providing tools and alternative forms of assistance to others who want to build applications using their technical standards.

Unilateral and consensus processes perform these functions in very different ways. In addition, proprietary and open processes employ distinct policies for releasing information.

The Proprietary Side: Windows 95

In early 1995, the PC platform was in the midst of an upgrade to Windows 95, which was due in the summer of that year. The upgrade was the culmination of a major transition away from DOS as the predominant compatible system for IBM PCs. A text-based operating system, DOS (along with its several upgrades) had served in that role since 1981.

Yet getting from blackboard conceptions into mass-market products had vexed many firms. By early 1995 it was apparent that Microsoft had found a way to make that transition for users of descendants of the IBM PC, and simultaneously keep the Redmond-based firm at the center of the industry.6

What had Microsoft done? In short, Windows 3.0 had been an addition residing on top of DOS, later improved as Windows 3.1. Microsoft also had lined up every major application firm around a new development, later to be named Windows 95. The new operating system would resemble a more polished version of Windows 3.1 in its user interface but would include many improvements. In addition, Windows 95 would place this graphical environment at the core of the operating system rather than on top of DOS, which would yield numerous technical benefits and functional improvements.7

The project imposed a set of rules and policies on application developers. Microsoft defined a technical layer, innovated on one side of it, and enabled peripheral developments on the other side. It was a base for developers to build upon. The boundaries between the layers were called APIs (application programming interfaces). The use of APIs was typical for a proprietary platform. Other proprietary software firms (for example, Apple, Oracle, and SAP) used similar practices to organize a community of outside developers. If anything was novel, it was the scale of the community involved in Windows 95, extending to tens of thousands of firms, many of them very small. The number of participants in the PC community in the mid-1990s exceeded anything ever organized by Apple, IBM, or SUN, for example, or by Microsoft itself, for that matter.
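
To make the idea of an API boundary concrete, consider a minimal sketch of a program calling one documented Win32 function. It is a modern illustration (Python with ctypes rather than the C toolchain of the era, and it assumes it runs on a Windows machine); the point is only that an application relies on the published interface while the operating system’s internals stay hidden.

```python
import ctypes

# Load the user-interface library that the Windows platform exposes to
# applications (this sketch assumes it is run on a Windows machine).
user32 = ctypes.windll.user32

# MessageBoxW(window_handle, text, caption, type) is a documented Win32 API call.
# The application asks the platform to display a dialog box; how the operating
# system actually draws and manages the window stays hidden behind the API.
user32.MessageBoxW(None, "Hello from an application", "A Win32 API call", 0)
```

Developers of the era faced the same arrangement in C: build against the interfaces Microsoft documented, call technical support when those interfaces behaved unexpectedly, and let the platform supply the rest.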

This strategy produced many challenges in execution. Microsoft’s employees worked hard to provide inviting pervasive standards on which others built their applications. This required good designs that supported a developer’s needs, which had to be backed up with technical support staff. The firm also considered subsidizing the costs of widely used tools for developers, encouraging practices that helped PC users.

None of this arose solely through altruism. Helping a mass of developers also helped Microsoft. If a tool helped many developers, then the costs of its development were defrayed over a large base. The development of more applications for Windows 95 also would help Microsoft through its sales of operating systems.

Routine financial calculations stood at the center of Microsoft’s actions. It gave away tools if it thought it recouped the investment with more developer actions or more valuable applications. It invested in designs if the returns looked financially strong. It invested in an organization to provide technical support because developers made the operating system more valuable for a greater range of applications.

The resulting operating system was quite an astonishing combination of old and new. Windows 95 was backward compatible with many applications from Windows 3.0, but the system included many features that were bigger, faster, more varied, more complex, and more efficient for many tasks. That embodied a promise for a high payoff. Most developers anticipated selling more applications, and Microsoft anticipated selling more operating systems. For that reason alone virtually every vendor acquiesced to Microsoft’s design choices for most of the key standards.

That experience, along with related actions from Intel that had industry-wide benefits, fueled the view that platform leadership helped computing markets grow.8 Yet growing antipathy toward the entangling quid pro quos embodied in less accessible information and greater restrictions on the use of technology held that enthusiasm in check.

A Strategy for Pervasive Standards

Microsoft did not aspire to control all frontier technologies. Rather, its strategy stressed having an exclusive position as the provider of pervasive technologies on which applications were built. That exclusive commercial position was valuable to Microsoft for several reasons.

Obviously, it made life easier for Microsoft’s own employees. Microsoft’s application designers could contact other Microsoft employees to have their needs addressed. Microsoft also could retain the right to change any feature if it yielded strategic benefits, and it could do so without coordinating with any other firm.

Indeed, outsiders frequently accused Microsoft of using its position to make its own life easier, such as documenting code for Microsoft’s use but not necessarily for any others, or leaving code undocumented so that Microsoft could alter it to its advantage. These accusations also could be exaggerated. For example, Allison (2006), who takes a developer’s perspective, sees the merits of multiple viewpoints. When discussing why Microsoft did not document one part of the internal subsystems for Win32, he states:

Why do this, one might ask? Well, the official reasoning is that it allows Microsoft to tune and modify the system call layer at will, improving performance and adding features without being forced to provide backward compatibility application binary interfaces.… The more nefarious reasoning is that it allows Microsoft applications to cheat, and call directly into the undocumented Win32 subsystem call interface to provide services that competing applications cannot. Several Microsoft applications were subsequently discovered to be doing just that, of course.… These days this is less of a problem, as there are several books that document this system call layer. But it left a nasty taste in the mouths of many early Windows NT developers (myself included).9

While Microsoft’s technological leadership reinforced the value of the PC, its actions also shaped how large a fraction of the value from each PC sold it could capture, particularly when those actions reinforced the lack of substitutes for its operating system. For example, it contributed value to the PC by supporting new applications, growing user demand, or motivating more frequent purchases. It also improved its ability to capture a larger fraction of value by enhancing its bargaining position with developers, often by including features in the operating system that substituted for another firm’s product, one for which that firm had charged users.10

Microsoft protected assets important to making the standards by using intellectual property, such as patents, copyright, or trade secrets, and by excluding others from imitating its designs. Microsoft also prevented others from gaining information about a standard’s features, how it operated, and how it would change, necessitating that others negotiate with Microsoft to gain access to that information. These actions could and did bring about conflicts with others in the PC business whose plans clashed with Microsoft’s.

As a practical matter, most application firms liked the improvements but objected to how the governance of the platform had changed around them. They perceived many entanglements, the use of too many carrots and sticks by Microsoft. What were the carrots? Those might be the tools Microsoft provided to developers to make Windows 95 easier to use, for example. What were the sticks? Those might be actions punishing unfriendly developers, such as withholding information about near-term changes, or merely not returning phone calls promptly. An alternative carrot or stick involved offering or withholding lucrative marketing deals or discounts to friendly or unfriendly firms.

Of course, one thing mattered above the others. If developers wanted to reach users of PCs, they had to work with Microsoft. They had to call for support services, at a minimum to find out how to make the technical aspects of their applications work well with the operating system. Microsoft’s management knew this—indeed, they had set it up this way deliberately. In circumstances where speed to market was valuable or margins were tight, offering relevant information sooner could be a very effective inducement to cooperation, and withholding it an equally effective punishment.11

Many vendors could perceive that they had become dependent on Microsoft’s proprietary standards. Microsoft’s technical staff held important information about how the APIs worked inside the operating system. This dependency arose at the beginning of product development, in the middle, and sometimes even just prior to launching a new product. It left developers vulnerable to getting held up at crucial moments in an urgent development process or marketing campaign. Microsoft’s management recognized the situation too, and increasingly the use of technical support came along with reminders that only developers with friendly relationships with Microsoft got the best service.12

Dependency also came with a danger in the near future. Many software executives wondered if they could trust ambitious Microsoft employees with sensitive information. Executives at application firms who had seen the innovative features of prior software show up as features in later versions of Microsoft’s products wondered if an employee’s conversations with Microsoft’s technical staff would contribute to seeding a future competitor. Years earlier the danger had seemed to be an exaggeration, but by early 1995 the experience was quite common and well known.

Adding to the distrust, Microsoft seemed able to expand in almost any direction. For instance, it could enter a software market whenever its ambitious executives thought it profitable to do so. By the mid-1990s Microsoft had entered a wide range of product areas, such as network software, games, encyclopedias (Encarta), and online services (MSN), as well as announcing its intent to produce a wider range of application software—most typically through self-development, but potentially also through purchase of small entrepreneurial firms. In fact, only the opposition of the antitrust division at the Department of Justice stopped Microsoft from buying Intuit, the leading developer of tax and accounting software.

The public discussion had become heated over these practices. SUN’s CEO, Scott McNealy, for example, relished making public remarks using sharp language that denigrated Microsoft’s lack of openness in comparison to his firm’s. For example:

We had openness. In other words, nobody should own the written and spoken language of computing … nobody owns English, French, or German. Now, Microsoft might disagree and think they ought to own the written and spoken language of computing and charge us all a $250 right-to-use license to speak English or Windows or what they happen to own.13

By 1995, virtually all computer industry participants were familiar with these practices. Microsoft retained control over information about the standards that composed its platform as part of a broader strategy to enhance its bargaining position with others, to deter entry, and to support its ability to generate revenue. Yet most developers tolerated it because some of them prospered. There also was no other avenue for selling PC software to the large number of buyers of IBM-compatible PCs.

Standard Protocols

Viewed broadly, the commercial Internet had comparable processes in place for making a standard bundle of protocols and altering those standards, but the details were quite different from the PC. For making and altering standards, the bulk of activity took place in the Internet Engineering Task Force, or IETF. Whereas Microsoft made decisions about standards unilaterally, all decisions about Internet standards emerged from consensus. In addition, the processes were much looser about establishing roadmaps and targets and about providing tools and assistance to others.

As chapters 2 and 3 discussed, the IETF had been established during the NSF era. It became part of the (not-for-profit) Internet Society after 1992, which was established during privatization. The IETF inherited a process for designing and endorsing protocol standards, still called Requests for Comments (RFCs) from the earlier days.14 The standards-making processes had, for the most part, not changed except in one respect described below, nor, ostensibly, had the technical skill of those involved in working groups. However, the number of participants had increased, which altered the composition of participation, involving more employees of profit-making companies.15

After 1992 the IETF differed from Microsoft in one important way. It had altered the assignment of authority at the top, devolving it from the older hierarchy in which the members of the IAB had the final say over approval of proposals for standards. By the mid-1990s, such authority rested with the members of the Internet Engineering Steering Group (IESG), a larger group comprising different area heads. The area heads appointed and coordinated with working groups and tried to impose a semblance of structure on an otherwise loosely organized voluntary activity. In the mid-1990s many of the members of the IESG had been appointed by the very members of the IAB who had previously held authority. Uncertainty about the transition was largely gone by late 1994 and early 1995. As will be described below, routines had emerged.16

In contrast to Microsoft, the IETF produced nonproprietary standards. Like efforts in other industry-wide standard-setting organizations, the IETF asked working group leaders to develop protocols and standards that did not use patents or other forms of proprietary technology, if possible.17 Working group leaders were discouraged from endorsing proprietary technologies unless the owner agreed to license them at reasonable and nondiscriminatory rates.18

Nothing in the commercial Internet contained subsidies after the NSF withdrew its support. Yet the IETF leadership did its best to invest in several actions to aid the process it governed. First, it tried to provide editorial guidance and support for the entire process. That resulted in remarkably clear and comprehensive documentation, even from contributors who were not practiced at clarity and thoroughness. It also helped coordinate and sponsor plugfests, where vendors could test whether their software implementations interoperated. In principle, these fests were used to verify the existence of running code before advancing a proposal for an RFC to a final draft.

The Internet did not lack what programmers called “tools and libraries.” These were partly a legacy of years of development with widespread university support. To be fair, while RFCs could be found in one central location, the same could not be said for many tools, which resided at many sites dispersed across the Internet. In addition, the IETF leadership and many participants from the numerous young firms in the commercial Internet continued to develop new protocols.

The situation for roadmaps and targets did not resemble commercial norms in the least. There was no figure comparable to Bill Gates at Microsoft or Andy Grove at Intel, who stood at the center of the commercial Internet providing executive vision and discipline, or, for that matter, settling disputes between inconsistent visions. The IETF had limits. It was not the NSF. NSF had provided planning in the past; however, for all intents and purposes, those decisions were made and implemented in 1993 and 1994. NSF stepped away from planning after it privatized the backbone. By the norms of commercial computing, that left a void in planning as of 1994 and beyond.

Why So Little Centralization?

It would not have been odd to embed some coordination in processes for building Internet protocols. After all, that was the model in other successful commercial platforms. So why did the IETF take an approach at the outset of privatization that emphasized so much decentralization? As chapter 2 began to hint, if a rationale existed, it was simple: the membership—university computer scientists, equipment firm engineers, and many independent contractors with long-standing interests in the technology—wanted it that way. They had long sought to avoid the centralized control endemic to AT&T and other established telecommunications carriers, and their practices contrasted with those of international standardization efforts, such as OSI. Like OSI, the IETF encouraged enthusiastic participation from a wide variety of innovative contributors, but the composition of contributors differed sharply, weighted toward those who knew the workings of the Internet and had helped it in the recent past.

The result emerged gradually and was self-reinforcing. Enthusiastic supporters of the Internet volunteered their time to get the IETF established in the 1980s. Where they had choices they chose decentralized processes, which inspired enthusiastic participation from others. In addition, several parochial tugs and pulls inherited from the precommercial era coalesced into a unique set of processes and policies.

First and broadly, lack of centralization and looseness about roadmaps and targets was consistent with many other decisions made at the IETF and elsewhere. Beginning with the earliest conversations, subsequent legislation, and the final drafts of the NSF privatization plan, there had been plenty of opportunities to establish central institutions. Only in rare cases were those opportunities used for such a purpose, such as with the establishment of the domain name system. In large part, many participants in the NSF-sponsored Internet did not view centralization favorably.

The looseness about roadmaps also arose, in part, from the norms governing academic debate. That is, virtually no argument was ever settled in principle, so any contributor could make a proposal if it had technical merit. In other words, there was a preference for open-ended debate almost for its own sake. That accommodated proposals from many corners.

An open-ended process also was thought to accommodate the realities of developing a communications technology whose user base was geographically dispersed, where there were many administrators or intense users of the technology, each with a different experience that informed their contribution. It accumulated incremental advances in publicly documented ways, which far-flung contributors could independently access and review. As noted in earlier chapters, that had worked well in evolving applications that coordinated end-to-end processes with many contributors, such as e-mail. It also could accommodate a swarm of standards, that is, multiple proposals for potentially solving the same problem.19

While documenting decision making at intermediate and final stages was necessary to support open-ended debate, it also had another essential benefit: it gave public credit for individual effort.20 The IETF’s RFC process gave academics concrete results to which they could point for credit within their universities.21 It was also useful because so many tools and contributions came voluntarily, and contributors needed to establish their historical credentials to gain rewards from prior accomplishments.

As in any large organization, there were multiple motives behind each practice, and despite all these efficacious reasons, numerous noneconomic motives also played a role. In particular, some participants expressed a strong distaste for the insular bureaucratic hierarchies that had historically shaped the flow of information and decision making in regulated communications markets.

As discussed in chapter 2, there was tension between the decentralization of the IETF’s protocol making (or upgrading) processes, which many participants preferred, and the appointment of the leadership at the IAB, which inherited its top-down and (largely) unchanging leadership from the legacy of TCP/IP’s origins at DARPA. Privatization necessarily put pressure on this tension, pushing it toward more decentralization. That pressure would only grow throughout the mid-1990s, as new constituencies joined the conversation and brought a new set of views about the improvements needed to support commercial buyers.

As it happened, however, decentralization came to the leadership after 1992. A phrase from a 1992 speech by David Clark, the IAB chair from 1981 to 1989, encapsulates the transition:

We reject: kings, presidents, and voting. We believe in: Rough consensus and running code.

Later many members shortened this to the alliterative credo, “rough consensus and running code.”22 For many participants after 1992 this became the slogan for the IETF.

A computer scientist by background and training, and one of the few elite cognoscenti of the Internet, Clark knew the boundary between the pragmatic and long-term ideal visions for the Internet. He also had a reputation for being a voice of reason during a chaotic debate. If that was not enough, he had a knack for being associated with catchy phrases at the boundary of technology and governance—for example, he had been one of the authors of the phrase, end-to-end. Catchiness aside, what did rough consensus and running code mean in practice?

Seen against a broad perspective, rough consensus and running code was meant to highlight the organization’s recognizable identity and shared sense of norms, and how its participants chose among the various options for organizing its processes. In his overhead slides Clark referred to the trade-offs between processes. A standards organization could be flexible and responsive to current events, meeting unanticipated requirements. Or an organization could give slower and more deliberate consideration to various options. Clark referred to a firm headquartered in New Jersey, a not-so-veiled reference to AT&T, whose corporate hierarchical practices were held in low esteem by many participants. At the time that Clark coined the phrase, he also contrasted the processes in the IETF and IAB with those of the OSI, which were more deliberative and much slower.

The phrase emerged during a crucial argument in the summer of 1992. A few members of the IAB were considering whether to coordinate a specific proposal (for upgrading the address system) with those behind the design of the OSI. This would have required making parts of the TCP/IP and OSI designs compatible, and some coordination between the two efforts behind each standard. After long debate, this proposal, which was favored by some members of the IAB, was abandoned in the face of a general uproar from many participants in the IETF.23 In other words, the IAB bowed to the wishes of the majority, even though the leadership favored an alternative course of action. It was in this talk that Clark declared that rough consensus was the preferred option over executive declaration from above.24

Ultimately, Clark’s speech also was remembered because it occurred at the same time that the “wise old men” of the IAB agreed to a process for regularly turning over the leadership of the IAB. That solidified the preference for “bottom-up” processes in all aspects of Internet standardization. Those norms carried over to the Internet Society, which became the oversight body for the IETF.

That did not take long to become routine. By early 1995 the norm of open-endedness was embedded in the identity of the community. In their own materials the IETF leaders drew attention to decentralization and its emphasis on running code. This comes out clearly in Bradner (1999), for example. After summarizing many of the processes for newcomers, he states,

In brief, the IETF operates in a bottom-up task creation mode and believes in “fly before you buy.”

The preference for open-endedness came with a large potential risk—namely, lack of coordination between the technical proposals of distinct groups. No participant wanted to see the growth of families of mutually incompatible protocols from distinct subgroups within the IETF. Yet such an outcome was not only technically possible, but also likely as the commercial Internet grew and participation at the IETF grew with it.

Rather than abandon the bottom-up processes for the sake of coordination, however, the IETF adopted a structure to accommodate decentralization. The costs of this process were explicitly placed on the shoulders of working groups and their area coordinators within the IESG. They were charged with the responsibility of identifying and coordinating development across distinct working groups with overlapping agendas, even requiring area coordinators to do so before assessing whether rough consensus had emerged for a proposal.25

Hence the IETF did not lack a planning process or de facto roadmap at a broad level. The collective efforts of the IESG at planning and assigning work between working groups provided the guidance for developments in the short run. They could only do that job if working groups documented their progress at every stage of decision making. Then a conversation with one of the members of the IESG could serve the same function as a roadmap, providing a member with a sense of where their efforts fit into the broader collection of ongoing work.

That solution worked as long as the community of researchers would share information about their technical inventions with each other through e-mail, postings in RFCs, and conversations at meetings. In other words, making accessible all documentation of decision making supported other principles of importance to the community.

In summary, by early 1995 the IETF had moved beyond its experiments with accommodating many points of view. It had established an organization that did not restrict participation or restrict access to its results. It remained blithely transparent in order to accommodate its unrestrictive participation rules. For many participants the transparency was a goal in itself. Most of its progress became documented online, even at intermediate stages of development. In addition, it placed no restrictions on use of information after publication. Although nobody had any stated intent to foster innovation from the edges, these processes would have profound consequences for its occurrence.

Openness and Innovation

A few years later, as the Internet boom got underway, many enthusiasts for openness credited it with creating much innovation. What parts of these claims are myth and what claims stand up to scrutiny? The question cannot be addressed with a simple answer because the answer must be comparative. If openness had not existed, what was the alternative? Many innovations would have arisen with or without openness, and with almost any set of rules. A more nuanced question has to be asked: When did openness matter in comparison to a plausible alternative, and why did it matter?

The remainder of this chapter offers evidence that begins to form the answer, and other chapters in the book will return to the question and complete it. In outline the remainder of this chapter says the following: Up until 1994, a skeptic had plenty of reason to think that openness played little role in fostering commercially important innovation. During the Internet gold rush, however, openness played a key role at two crucial and irreversible moments. It shaped outcomes in the fight between Microsoft and Netscape, and it shaped the relationship between Tim Berners-Lee and others at the IETF. Before getting to the heart of these two episodes, it is important to recognize their significance as counterexamples to a skeptical view.

The skeptical view had considerable merits. For example, though open processes had played a role in facilitating some of the entrepreneurial entry among carriers—for example, at PSINet or Netcom, as well as a plethora of equipment firms—it was possible to argue that this was a small factor in determining outcomes. Relatedly, it was plausible to argue that the supply conditions were a by-product of the academic origins of the Internet, and, as chapter 4 stressed, not unusual for inventions inside universities. That explanation also shifts emphasis away from openness and toward the core economic condition that gave rise to the earliest entrepreneurs and fostered their incentives to innovate—namely, growth in demand for Internet services.

By early 1995, moreover, the potential drawbacks to openness were becoming more apparent. The discretion to make hardware and software upgrades resided with private parties, which left many unanswered questions about the future structure of operations. The NSF privatization was reaching its completion, which also raised budgetary uncertainty about long-term revenues from research and university clients, as well as private clients. The uncertainty in economic conditions did not encourage innovation from private firms, and firms with questions about the future saw no other large established firm in a leading position, ready to take a call and provide an answer.

Openness also did not help resolve other unaddressed questions about the near future. The IETF used nonproprietary standards, which eliminated points of differentiation. That potentially contributed to turning data carrier services and equipment design into a commodity service or product. Lack of coordination also cast uncertainty over the plans of every participant, and nobody had sufficient experience to forecast whether the commercial markets would be very profitable or not. None of the major carriers or equipment makers found it attractive to serve a market with so much potential uncertainty.

As it turned out, lack of a central planner was not a crucial gap at this moment. Carriers and equipment makers did not really have any alternative but to live with the openness. Experienced router and network equipment firms existed prior to privatization (for example, Cisco, IBM, 3Com, and many smaller firms), and they existed after as well. On the whole, in 1995 the Internet contained a healthy set of infrastructure operations built around TCP/IP-compatible operations. Market relations between carriers were functioning well, and data managed to find their way across all the networks without incident.

There was no magic to it. A skeptic would emphasize the basic economics at work. Existing firms stayed in the market after NSF withdrew support because they had sunk investments to enter this market and they continued to see an opportunity to profit. Those were comparatively straightforward business calculations. Openness had little to do with them.

Openness also had little role at Microsoft, the most successful software firm at the time. Microsoft’s process of using incentives and subsidies to encourage incrementally innovative applications had a couple of seeming advantages over the openness of the Internet. The commercial Internet did not encourage anything with subsidies.

To be sure, this comparison was not a slam dunk. Microsoft provided tools for many applications and often subsidized their development. But the commercial Internet did not lack supplies of tools and libraries for sharing, often coming from university contributors and researchers in a wide variety of settings. The IETF also could accommodate an innovative commercial application, just as any platform leader would have.

A skeptic also could have made one additional observation: Microsoft’s organized platform appeared to have an innovative advantage over an open one. A large platform leader, such as Microsoft, had skills at organizing an upgrade requiring a big push, one that simultaneously altered the programming environment for many applications and that supported it with technical help. The upgrade to Windows 95 was an illustration of that innovative advantage. Nothing equivalent took place at the IETF, which tended to focus on incremental improvements to existing protocols, as developed by small teams of contributors to a working group. In short, before the commercial Internet had shown it could appeal to mass-market users, it was reasonable to argue that commercial platform leadership had an edge in pushing big, technically substantial projects into mass markets.

Despite the merits of a skeptic’s views, skepticism about openness is not fully persuasive in retrospect. Why? Openness turned out to have one important consequence for structural change. It permitted radical change to reach the market when it otherwise might have encountered roadblocks at private firms. However, this was less obvious until circumstances exposed it. The key features at the IETF were its lack of restriction on the development of new protocols that challenged established platforms, and its lack of restriction on access to the specifications for pervasive standards. Tim Berners-Lee’s experience illustrates this point.

How the Web Grew

The World Wide Web first diffused through the initiatives of Tim Berners-Lee on shareware sites, starting in the early 1990s.26 Initially, Berners-Lee expected an individual installation to adopt software to support the three components of the web: the hypertext markup language (HTML), the hypertext transfer protocol (HTTP), and what eventually would be called the uniform resource locator (or URL), which Berners-Lee preferred to call the universal resource identifier (or URI).27 Berners-Lee also diffused a browser, but very quickly the online community began to put forward better versions throughout 1992 and 1993. As discussed in chapter 4, this eventually led to Mosaic and Netscape.
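
To see how the three components fit together, here is a minimal sketch in Python (a modern illustration, not period software; example.com stands in as a placeholder host): the URL names a host and a path, HTTP is the plain-text protocol spoken over the connection, and HTML is the format of the document that comes back.

```python
import socket

# The URL http://example.com/index.html breaks into a scheme (http),
# a host (example.com), and a path (/index.html).
host, path = "example.com", "/index.html"

# HTTP is the wire protocol: open a TCP connection to port 80 and send
# a plain-text GET request for the path.
with socket.create_connection((host, 80)) as conn:
    conn.sendall(f"GET {path} HTTP/1.0\r\nHost: {host}\r\n\r\n".encode("ascii"))
    response = b""
    while chunk := conn.recv(4096):
        response += chunk

# HTML is the document format carried in the response body, after the headers.
headers, _, body = response.partition(b"\r\n\r\n")
print(headers.decode("ascii", errors="replace"))
print(body[:200].decode("utf-8", errors="replace"))
```

Early browsers did essentially this, then rendered the returned HTML and turned its hyperlinks into further URLs to fetch.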

Even before the web began to explode into large-scale use, Berners-Lee worried about how he would standardize his creation. He believed that standards would facilitate activities by others. By 1992, numerous installations had experience using the core technologies. In the summer of 1992, Berners-Lee took the initiative to start standardizing several aspects of the web within an IETF-sponsored subgroup.

From Berners-Lee’s perspective a standards process had several benefits. It would permit the web to evolve in a unified way, that is, add new functionality without fragmenting into versions that could not interoperate. He also believed such standardization would contribute to making the web more pervasive, if users perceived less uncertainty about the management of its future.28

When he first approached the IETF, Berners-Lee naively assumed his experience would go well. He had running code, and he had followed Bradner’s maxim, as stated earlier, approaching the IETF only after his code could fly. As he learned to his frustration, however, the existence of his running code and spreading adoption did not imply the emergence of a rapid consensus. The lack of centralization manifested in Berners-Lee’s experience as an absence of decisive resolution.

Berners-Lee’s proposals led to a contentious debate at the working group within the IETF. Problems in developing hypertext had generated intellectual interest for decades in computer science. Numerous camps for distinct theoretical and philosophical viewpoints had formed long ago. The existence of working software did not change viewpoints. Deliberations became mired in many long-standing debates. About the meetings from 1992 through 1994, Berners-Lee said:

Progress in the URI working group was slow, partly due to the number of endless philosophical rat holes down which technical conversations would disappear. When years later the URI working group had met twelve times and still failed to agree on a nine-page document, John Klensin, the then IETF Applications Area director, was to angrily disband it.29

By Berners-Lee’s account, the debate too often became focused on issues he did not consider important, such as the meaning of “universal” when used as a label for a protocol. He gave in on several of these issues in the hope of reaching a resolution, but he did not give in on anything that had to do with his design.

Almost certainly the debates partly reflected the social strains taking place at the IETF at the time. Berners-Lee was one of hundreds of comparatively new participants, and one with a particularly ambitious agenda.30 While he had the confidence that came with the successful diffusion of his running code, his standing as a comparative newcomer was bound to interfere with persuading long-time acolytes.

Another issue initially clouded the debate. In 1992, Berners-Lee’s employer, CERN, still retained rights to the code. Participants at the IETF clearly were concerned about that, and Berners-Lee could appreciate why. That issue was not settled until, at Berners-Lee’s urging, CERN renounced all claims to intellectual property rights in web technologies in April 1993. After that, Berners-Lee pressed ahead, hoping CERN’s act would lead to consensus. Yet, once again, consensus eluded the working group.

Eventually, in mid-1994, Berners-Lee followed two paths simultaneously. He both worked with the IETF and established another institution to support the web’s standards. In effect, he decided to defer getting a consensus at the IETF, and, instead, he issued an informational RFC (RFC 1630) about what he had already designed. That effectively concluded the conversation in the short run. This RFC left too many issues unresolved to serve as an effective standard—namely, a completely finished design for a protocol endorsed by the IETF.

At the same time Berners-Lee began working to establish the World Wide Web Consortium (W3C). In February of 1994, he had a meeting of minds with Michael Dertouzos at MIT. It altered Berners-Lee’s vision for what type of institutional support for the web would help him achieve his goals. Berners-Lee established his consortium in mid-1994. By early 1995, his consortium was set up at MIT and in Europe, and he was able to elicit cooperation from many firms in the industry—in terms of both financial contributions and representation at consortium discussions. In short, Berners-Lee started on his way to standardizing the web in his own organization while bypassing existing processes at the IETF.

A brief comparison between the processes at the IETF and those adopted by Berners-Lee at the W3C shows what lesson Berners-Lee drew from his experience at the IETF. Berners-Lee stated that he had wanted a standardization process that worked more rapidly than the IETF, but otherwise shared many of its features, such as full documentation and no restrictions on how others used the standards.

In contrast to the IETF, the W3C would not be a bottom-up organization with independent initiatives, nor would it have unrestricted participation. Berners-Lee would act in a capacity to initiate and coordinate activities. To fund some of these activities, his consortium would charge companies for participating in efforts and for the right to keep up to date on developments. Notably, Berners-Lee retained the right to act decisively.

Rivalry played a surprisingly muted role in this situation. In spite of all these misunderstandings at the IETF, Berners-Lee continued to have cordial relationships with many active Internet pioneers and participants. Why was cordiality noteworthy? The establishment of the W3C removed the IETF from a position of technological leadership in the development of an important new infrastructure for Internet users and developers. And it rendered concrete what was commonly stated: IETF’s leaders remained open to a variety of proposals to improve the Internet. They did not presume to be the exclusive institutional progenitor or technological source for all Internet-related activities.31

To be sure, most initiatives ended up coming back to the IETF processes in one form or another because it was useful for them to do so. Protocol writers found it helpful to use and reuse existing processes. Indeed, Berners-Lee also returned to the IETF to eventually get the RFCs he had sought all along, just much later than he had initially aspired to, and much later than would have been useful.32 Eventually the primary dispute between the IETF and W3C would be over minor issues, such as the boundaries of expertise between the two groups for niche topics.

This experience illustrates a consequence of one aspect of openness: the lack of restrictions on the use of information. The working group participants at the IETF treated the ideas behind the web as an object for philosophical debate, and (initially) the leaders treated the W3C with benign neglect, particularly in 1994. Neither action was helpful to the web, but neither deterred it. More to the point, lack of control over the IETF standards effectively gave discretion to other individuals or organizations—in this case, Berners-Lee and the W3C—that were entrepreneurial enough to build on existing Internet protocols, even in ways that bypassed the existing process for making protocols and established new ones.

The Commercial Web and the PC

Tim Berners-Lee’s experience with the IETF would contrast with events in the PC market, although that was not apparent at the outset. Microsoft’s managers acted much like everyone else, incorrectly forecasting little technological and commercial success for the World Wide Web. Moreover, when Netscape first began, it was treated like any other application firm meeting a niche need. In early 1995, before Bill Gates changed his mind, Netscape was viewed as a firm making an application that helped raise demand for the PC platform. It was given access to software tools and to the usual technical information about Windows 3.0 and 3.1, as well as Windows 95. Netscape’s programmers would call for technical support, just like any other firm.

Events got more interesting in the spring of 1995. Bill Gates wrote “The Internet Tidal Wave” in May. As chronicled in chapter 6, Gates’s memo no longer cataloged Netscape as just an application. Not long after writing this memo, Gates tried to have Microsoft buy a stake in Netscape and gain a board seat. Those negotiations did not get very far.33

After further talks, the executives in Redmond concluded that Netscape would not cooperate on their terms, which led Microsoft to treat Netscape in as unfriendly a manner as it could in the summer of 1995, denying it access to technical information and withholding marketing deals. Coming as late as they did, for a product that had already been released, these actions only slowed Netscape down a little bit and came nowhere close to crippling it.

The confrontation escalated from there. Over the next few years, as later chapters will describe in detail, Gates went to enormous and deliberate lengths to prevent Netscape’s browser from becoming pervasive, particularly with new users. Microsoft put forward its own browser and undertook numerous defensive actions related to the distribution of browsers, such as making deals to prevent Netscape’s browser from becoming a default setting at ISPs or in the products PC assemblers shipped.

A Conspiracy to Offer a Comparison

After the Internet had blossomed into a large market in the late 1990s, many of the most vocal defenders of Internet exceptionalism began to argue that open standards were superior to proprietary standards. These unqualified opinions were misleading at best or just plain wrong. There was no economic law then—just as there is none now—that makes platforms built on open standards superior to others in the market, or vice versa. Both can succeed at meeting customer needs, depending on how they are designed, priced, and supported. Both will address many of the same incremental opportunities in much the same way and yield much the same outcomes.

Historical circumstances rarely provide a clean comparison of the consequences of such distinct market and organizational structures, but in 1995 circumstances conspired to do so. Here were two markets and organizations that provided just such a comparison. Only a few months apart, the same technology, the web, diffused into two settings with distinct processes for making decisions. One was open while the other was proprietary. In both cases the web initially received little help from established leaders, which motivated actions that did not rely much on those leaders. In both cases, the leadership eventually recognized the error in their perceptions and began to take action.

There the similarities end. The IETF eventually came to an understanding with the W3C, Netscape, and all the complements built on top of it. In contrast, the first firm to develop a product for the web, Netscape, invited many other firms to build complements, and met with active resistance from Microsoft, the firm providing proprietary technological leadership. Microsoft did not want competition for its position of leadership from Berners-Lee, Netscape, or its partners.

Economic incentives accounted for the difference. Microsoft was intent on protecting its position as the exclusive firm supporting pervasive standards in PCs, and the web interfered with its aspirations to acquire a similar position in servers for business enterprises. In contrast, the IETF had a set of principles regarding equal access to information, transparency, and participation, and the organization stuck to those principles.

That comparison permits a summary of the key questions motivating this chapter: Did openness matter for the commercial Internet? Did openness shape the arrival of innovation from the edges?

It certainly helped keep many longtime participants in the Internet motivated. Many participants preferred open institutions and processes for their own sake, and their participation helped keep the IETF vital. Openness by itself could not help entrepreneurial firms raise revenue or make payroll, but neither did it stop inventive specialists from exploring many new application and content markets. It enabled participation and incremental innovation from a wide variety of contributors.

The experience of Tim Berners-Lee also helped those entrepreneurs. No policy at the IETF prevented him from taking initiatives, which he did, to the benefit of others.

In contrast, the leader of the personal computing market, Microsoft, did not greet the emergence of the World Wide Web with glee. Microsoft actively tried to gain exclusive control over the deployment of pervasive standards.

The commercial development of the newly privatized Internet would have been different without open organizations at the IETF and the W3C. While its enthusiastic proponents exaggerated the importance of openness, there was more than a grain of truth underneath the rhetoric. The commercial Internet would have encountered many more challenges without such an open structure to enable the growth of innovation from the edges.

1 Vint Cerf and Robert Kahn are co-creators of TCP/IP. At the time of this quote Cerf held the title of chief Internet evangelist for Google. The quote comes from a letter to the US House of Representatives, Committee on Energy and Commerce, Subcommittee on Communications and Technology, November 9, 2005. See Cerf (2005).

2 Wilder (1995).

3 For more on moving between the Unix and Windows programming environments, see, e.g., DiBona, Cooper, and Stone (2006).

4 For an analysis of the wide range of meanings, see, e.g., West (2007).

5 See, e.g., Cusumano and Selby (1995).

6 This is a long story. The account in the text provides the basic outline, but does not provide much information about the alternatives, such as OS/2, DR-DOS, and so on. See, e.g., Cusumano and Selby (1995).

7 DOS became a layer within Windows 95, so old DOS users could face as few problems as possible when migrating their data and programs into the new operating system environment.

8 See, e.g., the discussion in Gawer and Cusumano (2002) about “growing the pie,” which features Intel’s sponsorship prominently.

9 Page 47 of Allison (2006).

10 See Bresnahan (2003), and Henderson (2001).

11 See discussions in Gawer and Cusumano (2002), Bresnahan and Greenstein (1999), Gawer and Henderson (2007), Bresnahan (2003), and Henderson (2001) for illustrations of the range of such actions.

12 Henderson (2001) stresses that this action was typically taken in context, as one of many ways for Microsoft to discourage other firms from taking action it deemed undesirable.

13 Segaller (1998), 235.

14 For a review of the process and how it has changed, as documented in RFCs, see http://www.ietf.org/IETF-Standards-Process.html. Other places that explain the drafting of standards are RFC 2026 and RFC 1602.

15 The IETF appoints working groups, and those have grown considerably over time. See, e.g., Abbate (1999), Russell (2006, 2014), or Simcoe (2006).

16 See Russell (2006, 2014) or Simcoe (2006).

17 Firms were required to disclose patent holdings if these were pertinent to the topic under discussion. For the present general guidelines, see https://datatracker.ietf.org/ipr/about/ and RFC 3979. Prior policies are largely spelled out in RFC 2026, and the anticipated processes and policies that pertained to the mid-1990s can be found in RFCs 1602 and 1310 (especially sections 5 and 6).

18 The IETF leadership chose a position that allowed it to retain its inherited functioning processes. It also continued to do as many other standards organizations did: it did not close the door on adopting a protocol that covered a private firm’s patent, as long as that firm agreed to license at a reasonable and nondiscriminatory rate.

19 The term “swarm of standards” is due to Updegrove (2007).

20 See Crocker (1993), Bradner (1999), or Simcoe (2006) for explanations of the role of “informational” RFCs, which often served as public documentation of a discussion of a new protocol or standard at an intermediate stage.

21 Assigning credit was useful in the era when NSF funded improvements and academics needed to document their contributions to the NSF. It continued to have value later.

22 See Russell (2006, 2014) for a full account of the developments behind this speech. It was delivered in July 1992 in Cambridge at an IETF meeting.

23 Numerous reasons motivated the opposition. In addition to the concerns about hierarchy and centralization of authority noted in the text, there were technical objections and issues about melding the processes. See Russell (2014).

24 The strong emphasis of the contemporary debate on process above result, and on remaining independent of the OSI efforts rather than intertwined with them, led other contemporaries to label this a “religious war” among standardization efforts. See, e.g., Drake (1993).

25 Once again, the leadership at the IETF was aware of these requirements, and area directors (ADs) from the Internet Engineering Steering Group (IESG) were to have frequent contact with working group chairmen. See RFC 4677 (titled “The Tao of the IETF: A Novice’s Guide to the Internet Engineering Task Force”).

26 For the challenges and difficulties, see Berners-Lee and Fischetti (1999) and Gillies and Cailliau (2000).

27 The URL is the term that emerged from the IETF deliberations. Berners-Lee preferred URI because it came closer to what he was trying to accomplish philosophically. Berners-Lee and Fischetti (1999), 62.

28 This is a frequent theme in the first few chapters of Berners-Lee and Fischetti (1999), especially before the formation of the W3C.

29 Berners-Lee and Fischetti (1999), 62.

30 The IETF started to experience growing participation at this time. Bradner (1999) states there were 500 participants in the March 1992 meeting, 750 in the March 1994 meeting, 1,000 in the December 1994 meeting, and 2,000 by the December 1996 meeting.

31 Moreover, cordiality was notable because it was not an inevitable outcome. It stood in contrast to the rivalry and strained relationships between many IETF participants and other organizations that claimed to design networking standards in the precommercial Internet, such as the OSI. See Drake (1993) and Russell (2006).

32 Berners-Lee did return to the IETF for further refinements and upgrades. This included RFC 1738 in December of 1994 (a proposal for URL), RFC 1866 in 1995 (a standard for HTML), RFC 1945 in May 1996 (informational about HTTP), RFC 2068 in January 1997 (a proposal about HTTP), RFC 2396 in August 1998 (a draft standard about URIs), RFC 2616 in June 1999 (a draft standard for HTTP), and RFC 3986 in January 2005 (a standard for URIs).

33 Cusumano and Yoffie (1998) provide a timeline and analysis of these events.
