4


A Taste of Champaign

DAVE: Open the pod bay doors, HAL.

HAL: I’m sorry, Dave. I’m afraid I can’t do that.

In 1968, Arthur C. Clarke published 2001: A Space Odyssey. Thanks to Stanley Kubrick’s movie, the oddly wired and soft-spoken computer HAL became a canonical cinematic figure. In the book, Clarke imagined a world in which HAL would be a midwesterner at birth, becoming operational at the HAL plant at the University of Illinois at Urbana-Champaign on January 12, 1997. In the movie, HAL was five years older, born in 1992.1

While enough time has passed to see that no computer resembling HAL has been created in either Urbana or Champaign, or in 1992, Clarke should not be regarded as a technological Nostradamus who missed the target. Clarke’s writings were somewhat prescient. He picked the right location and almost the right year for the emergence of a revolutionary invention in computing. Around 1992 the campus gave birth to an important prototype software application for the Internet called Mosaic. It was a browser.


FIGURE 4.1 Tim Berners-Lee, creator of the World Wide Web and founder of the World Wide Web Consortium (photo by Paul Clarke, 2014)

Mosaic was nothing like HAL, nor was the birth of the browser anything like HAL’s; the differences would shape how the browser was commercialized. Clarke had presumed premeditated development of a large computer system by a single, hierarchical, and deliberate organization. In other words, HAL could have been developed by the IBM of the 1960s, the most familiar archetype for a successful computer company. HAL was conceived as an instrument for a closed world, a spaceship. The central drama of 2001 revolves around HAL’s interactions with the crew within that closed space.2

Mosaic, in contrast, was a child of the Internet’s decentralized, cumulative, and unpredictable development. It emerged from a small project inside a research institute funded by the National Science Foundation, which, ironically, operated a frontier-pushing big-box computer known as a supercomputer. Mosaic built on top of, and openly imitated, many working prototypes created elsewhere by others. The project had goals that loosely connected to the primary purpose of the organization. Deliberate and hierarchical did not describe the manner of Mosaic’s invention. Nor was it an instrument for a closed world, unless the expanding, unpredictable, and fast-growing Internet were considered closed.


FIGURE 4.2 Robert Cailliau, who helped Tim Berners-Lee create the World Wide Web (photo by CERN, 2005)


FIGURE 4.3 Marc Andreessen, co-creator of Mosaic, and cofounder of Netscape (photo by Elisabeth Fall, September 24, 2013)

As one piece in a much bigger system, Mosaic was yet another software contribution from one set of developers adding to the end-to-end Internet. Mosaic was a piece of code for PCs and worked with other layers of the World Wide Web—namely, HTML (hypertext markup language), the hypertext transfer protocol (HTTP), and URLs (uniform resource locators). The web had been designed by Tim Berners-Lee to work with the domain name system that had become a standard piece of Internet infrastructure, all of it broadly based on TCP/IP.
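To make the layering concrete, here is a minimal sketch in Python (purely illustrative, not the historical Mosaic code) of the steps this paragraph describes: a URL names a resource, the domain name system maps its host name to an address, TCP/IP carries the bytes, HTTP frames the request, and the payload that comes back is an HTML document for the browser to render. The example URL is a placeholder.

```python
# Illustrative sketch of the layering described above: URL -> DNS -> TCP/IP -> HTTP -> HTML.
import socket
from urllib.parse import urlparse

def fetch(url: str) -> bytes:
    parts = urlparse(url)                         # URL: scheme, host name, path
    host = parts.hostname
    port = parts.port or 80
    address = socket.gethostbyname(host)          # DNS: host name -> IP address
    request = (f"GET {parts.path or '/'} HTTP/1.0\r\n"
               f"Host: {host}\r\n\r\n").encode("ascii")
    with socket.create_connection((address, port)) as sock:   # TCP/IP transport
        sock.sendall(request)                     # HTTP request
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks)                       # HTTP headers plus the HTML payload

if __name__ == "__main__":
    print(fetch("http://example.com/")[:300])     # placeholder URL for illustration
```

A browser such as Mosaic added the presentation layer on top of this exchange, turning the returned HTML into a formatted page on the user’s screen.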

Mosaic’s birth led to the birth of another piece of software, the Netscape browser, and many later chapters will discuss how Netscape catalyzed commercial events. The Mosaic browser was not the first browser; rather, it was the first whose use exceeded a million users. Mosaic was a catalytic commercial prototype for the entire Internet because it led to the founding of Netscape. Netscape’s browser was catalytic for commercial markets because it eventually caused virtually every participant in the commercial computing and communications market to alter their investment plans and strategic priorities.

How did Mosaic initially cross the boundary from research tool to commercial software? Some of the key decisions did take place in seemingly unlikely places. One was Champaign, Illinois, which, though merely HAL’s fictional home, was a likely source of real invention. It had a long and proud history as one of several leading academic centers of research in computer science in the United States. The second was a bit more unlikely: a lab in Switzerland called CERN, home to the largest collection of high-energy physicists in the world.3 Only the third could have been predicted with near certainty: many other key decisions occurred in Silicon Valley, where venture-funded firms predominated.

How did a European physics lab and a public university in the Midwest enable prototype software to transition from the research-oriented Internet to the commercial Internet? Even an inventive writer of science fiction such as Arthur C. Clarke could not have imagined the unlikely consequences of the combination of parochial events.

Looking more deeply, it was not just the coincident discoveries of a few blind squirrels finding their nuts. There was an underlying economic archetype at work, and one familiar to the Internet—namely, inventive specialization. The Internet of the early 1990s was reliable enough, dispersed enough, and modular enough that the network could support a range of new applications invented by specialists. Several different foresighted inventors considered different ways to take advantage of the potential and aspired to realize their visions, and, collectively, altered the value users gained from the Internet. Just as had occurred in the past, in the 1990s the Internet’s architecture could nurture decentralized invention and independent application development.

What was new? There was one key difference with the era in which NSF governed the Internet: privatization allowed inventions to move rapidly into private use and into commercial sale. More to the point, because the differences between the research-oriented network and the private Internet were small, an innovation like the browser—one seemingly from the edges of the network—could become available to commercial users and quickly become adopted. In this case, private firms had incentives to get it adopted, and that helped it move quickly into widespread use and catalyzed a set of actions that transformed the world.

Connecting Continents

Prior to 1989, the physics research community in Europe expressed impatience with its inability to communicate electronically with its North American colleagues, who had been using the research-oriented Internet for some time. In 1989 Europe contained very few users of the TCP/IP-based Internet. In that year a group of researchers successfully ignored all the cross-continental rivalry in the Internet/ISO debate and negotiated to establish RIPE, a Europe-wide network of connections to the Internet in North America.4

That connection alone did meet some of the needs of researchers, since it made sending electronic mail and transferring files possible. Yet researchers wanted more. The Internet in North America had evolved into a complex, interconnected set of computing facilities from a heterogeneous set of universities, research laboratories, and academic departments. That size and complexity highlighted an overwhelming problem for both newcomers and longtime users: how could one navigate the myriad sites all over the Internet?

Some software tools helped in searching on the Internet,5 but there was a long conversation among computer scientists about numerous ways to improve upon these tools. The bigger challenge was designing something that others on the research-oriented Internet would use. It had to be functional, intuitive to the user, easy to explain, and not too expensive for administrators to install.

Robert Cailliau and Tim Berners-Lee were colleagues at CERN, which had TCP/IP connectivity. Both had an interest in developing tools to aid physicists in the sharing of information. Both independently pursued their interests. Both were focused on improving the search and sharing tools for their constituent community, researchers who came up against these constraints regularly in their scientific pursuits.

But in 1989, Cailliau abandoned his own efforts, combining them with those of Berners-Lee, who was building key parts of what would become the World Wide Web. In addition, Berners-Lee later organized the pieces of code by establishing the World Wide Web Consortium (W3C), which standardized many protocols, with Cailliau helping to organize the W3C at its first conferences.

Berners-Lee was far from the first programmer to try to devise a hypertext system, and he was aware of prior efforts. At the outset, Berners-Lee’s goals were modest and were heavily informed by his own previous efforts as well as those of others. Berners-Lee had experimented with making computing tools for physicists in the past, and despite his best efforts had seen prior innovations go unused. That experience taught him the importance of compromising technical ambition in order to foster adoption: users adopted software when it was easier to use and its functionality was readily apparent. He also continually faced questions from his management about how his inventions would aid the physics research community.6 That kept his attention focused on meeting pragmatic goals, inventing software his user community would find valuable. While the project was not starved for resources, the setting also forced a certain efficiency and resourcefulness on the design and the process for building it.

Although there had been years of discussion within computer science about how to design such a system, one of Berners-Lee’s core insights was not to design a perfect system. Rather, he designed one that improved performance for his community, making them better off than they were, motivating them to try the software and use it regularly. Berners-Lee deliberately kept the complexity low in spite of his ambitions. In a conscious attempt to make his invention easy to install and use, he reduced the scope of functions his software could perform. He also made it backward compatible with some other tools in widespread use—other tools worked within the system designed by Berners-Lee. Backward compatibility was a concession to the habits of his constituency—once again, something he did for the sake of promoting adoption by offering a migration path from old processes to new that involved few frictions.7

The community at CERN turned out to have propitious attributes as lead users. They were technically oriented but were not computer gurus—that is, they were physicists who needed to send files easily to one another and make them available for downloading. Although they did not know all the ins and outs of each computer system, these users did not require the type of easy-to-use designs and instructions that computing needed to appeal to the mass market. The lead users at CERN did not fear technical difficulties, they could learn new procedures, and they would follow directions to achieve desired ends. They also tended to work in environments with technically skilled systems administrators, who could take care of a few installation difficulties.

In addition, this community was already accustomed to the idea of electronic communication, if not the actual process. Berners-Lee faced no difficulties explaining his goals to this user community. Rather than spending time demonstrating the concept, he just had to get something that worked for them. If he could get some to adopt an invention and see its value, those first users would motivate others to adopt as well.

Berners-Lee came up with a comparatively simple model for hypertext computing where all information, whether text or graphic, was presented to the user in one format. Navigation was accomplished via links between files, where the user could either follow a link or make a query in search of a page where lists of documents existed. The code aimed to do a lot with as few steps as possible.8 In summary, Berners-Lee’s narrow search for a new invention yielded a rather simple but elegant result, and one with unexpected and wide breadth.
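The navigation model just described can be sketched in a few lines of modern Python, again purely as an illustration rather than Berners-Lee’s own code: every document arrives in one format (HTML), and the reader moves between files by following the links embedded in the text. The class name and the starting URL below are assumptions made for the example.

```python
# Illustrative sketch of navigation by links: fetch one page, list the pages it points to.
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkCollector(HTMLParser):
    """Collect the href targets of <a> tags, i.e., the links a reader could follow."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def links_from(url: str):
    page = urlopen(url).read().decode("utf-8", errors="replace")
    parser = LinkCollector()
    parser.feed(page)
    # Resolve relative links against the page's own URL, as a browser would.
    return [urljoin(url, href) for href in parser.links]

if __name__ == "__main__":
    for target in links_from("http://example.com/"):   # hypothetical starting page
        print(target)
```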

Adoption was always a priority, and that led Berners-Lee to develop working prototypes for his own lab and others. For example, during the period of experimentation at CERN, one of Berners-Lee’s first illustrations was administrative, not scientific. He used his invention to make CERN’s telephone directory available. His administrators could see the value of that.9

After numerous trials at CERN, in 1991, Berners-Lee decided to make the key inventions, HTML, HTTP, and the URL, available for others on shareware sites.10 Together these formed a hypertext language and location labeling system that made it possible to transfer textual and nontextual files. Once installed on a host computer, these were well suited to Berners-Lee’s constituency in two specific senses: (1) They helped users organize transfers of previously known files, and (2) they helped make files available to others without a tremendous amount of searching prior to transferring.

The URL and HTML system was not the only application built on TCP/IP for organizing and sharing files, but it diffused quickly for several reasons. It appealed to system administrators who perceived its value for a problem many of them had in common, such as making large directories widely accessible on a local area network. The spread of the Internet itself also made it easy for many users to reach the sites where the shareware resided. Compatibility with preexisting search tools, such as WAIS (wide area information servers) and Gopher, appealed to many users, who could continue to use those tools as always or use them in conjunction with Berners-Lee’s invention. And in February 1993, the University of Minnesota, home of Gopher’s inventors, announced its intention to charge a licensing fee for Gopher’s use; that raised a concern among many insiders that the university intended to charge for all extensions as well, which motivated many administrators to abandon investing in Gopher and make use of HTML and the URL.11 Berners-Lee and Cailliau also put considerable effort into publicizing their invention and making it known, investing more energy in marketing and distribution than typically found among computer programmers in the academic world. Perhaps most importantly, the software enabled users to do something demonstrably useful right away—namely, it brought text to life, allowing users to add color, graphics, and sound. Because its value was easy to illustrate, lead users became advocates for its diffusion.

In due time the web would replace all the others, but it is easy to fall into retrospective bias; the diffusion of the web did not seem inevitable at the outset. With additional functionality the web and its hypertext would attract more users, and the entire system would become self-reinforcing. After privatization of the Internet, many commercial applications would be built with the tools of the web, and that would lead the other tools to fade into obscurity.12 None of the early users saw that coming, however. At the outset users spread the web by word of mouth.

Word of mouth worked like this: a pair of researchers at distinct installations would use HTML and the URL, like them, and then ask administrators at other installations to get involved. They would, and then one of them would tell another friend, who would tell another, and so on. In brief, successful use motivated an exponential increase in adoption and use, both across departments within a single location and between institutions across locations. Such word-of-mouth marketing brought about a seeming explosion of use over the next several months. The technical community liked Berners-Lee’s invention and began installing it to use within research-oriented computing.

The invention of the popular browser Mosaic helped spread HTML and URL even further throughout universities, a topic discussed at length later in this chapter. As they spread, Berners-Lee forecast the need for an organization to assemble and standardize pieces of code into a broad system of norms for operating in the hypertext world. He founded the World Wide Web Consortium (or W3C) in 1994 for this purpose.

While seeking to found the W3C, Berners-Lee gradually concluded that CERN would not be the most hospitable home. In part this simply reflected what CERN’s administrators told him: it was a physics laboratory and not a natural home for an organization to support worldwide use of software. In large part, this conclusion reflected Berners-Lee’s considerable experience with the global computer science community and his impression about how well different locations served as hospitable homes for development.13

Berners-Lee established the offices for the W3C in Cambridge, Massachusetts, at the Massachusetts Institute of Technology. This was a key event, which will receive more attention in a later chapter. The organization ultimately helped diffuse many of the software standards and tools that became important for operating on the commercial web, fostering growth around nonproprietary protocols endorsed by the consortium. The timing was propitious as well. This software—the URL, HTML, and HTTP—was established and institutionalized just as the privatization of the Internet was nearly completed. It was ripe for commercialization, as well as ripe to support the development of other new commercial applications.

Standing on the Shoulders of Giants

As with other inventions for the Internet, the technical path for the browser involved several researchers building on each other, using the institutions of shared science common to the research-oriented Internet. Specifically, several researchers had used the World Wide Web, devised improved versions of rudimentary browsers Berners-Lee had experimented with, and made them work on Unix operating systems. Many technically skilled Internet users understood Unix, so developing for these systems was the obvious first step to take. Throughout 1992, several browsers were in use among technically skilled programmers in the research community.14

The improvements accumulated on top of each other and set the stage for a pivotal development, the creation of Mosaic by a team at the University of Illinois. This team was situated at the National Center for Supercomputing Applications (NCSA), an NSF-funded research center that supported and housed supercomputers. Its founding in 1985 was one of the direct results of the initiatives to gain congressional support for the NSFNET. NCSA received national funding from general NSF research support for computing, as well as from the recently passed High Performance Computing and Communications Act of 1991, which Al Gore had sponsored. The NCSA housed supercomputers, to be sure, but it also housed many of the support facilities for using the Internet to access the supercomputers, a function that many other researchers used.

Larry Smarr, the enterprising director at NCSA, was a physicist by training but had not set up the NCSA solely for physics. Rather, the NCSA supported a large range of projects, principally involving a network of researchers who did a variety of frontier science in computing and networking. The center regularly built shareware software, used it, and made it available to others. It also employed many graduate students and undergraduates from top-ranked hard-science departments, as well as from social sciences in which mathematics and networking played a role. The undergraduates were predominantly midwesterners attending the Illinois flagship university.

The activity at the NCSA can be viewed in both a positive and negative light. On the one hand, it could be seen as part of a broad mandate to experiment, invent new software for the Internet and supercomputers, as well as make it available for others. On the other hand, Smarr’s pervasive push of technology at the center could be seen more cynically, as an attempt to invent uses for the overly abundant computing capacity and justify the existence of the NCSA in time for the next application for more funding. Entrepreneurial presentations by Smarr around campus aimed to recruit faculty for the center, and fueled both views.

A more measured assessment emanates from a different premise—it recognizes the value of sponsoring a portfolio of research activities in the presence of a range of risks over the payoff of any specific project. While many, or perhaps even a majority, of projects at the NCSA were not expected to yield high returns, a few with very high returns justified the expense on the whole portfolio. Indeed, as it turned out, one project by itself, Mosaic, resulted in gains to society that paid back the expenses at that center many times over.

Mosaic initially appeared to be a routine project making mundane and incremental progress, seeking to design an easy-to-use browser for nonresearchers. That is, it was an attempt to improve on one aspect of the shareware Berners-Lee had made available less than a year earlier and to which others had added improvements. The project gradually became anything but routine. Mosaic’s team of programmers included an undergraduate, Marc Andreessen, and Eric Bina, a recent master’s graduate who had joined the NCSA as a full-time employee in 1991. Andreessen and Bina had a talent for programming and design. The browser was called Mosaic to reflect the team’s aspiration that it would open up a world of different pictures to its users.

Building on the inventions of others was not unusual in the technically oriented Internet, which operated under the norms of academic computer science. As long as credit was given to earlier inventors, standing on their shoulders was acceptable. With a seemingly cavalier attitude, Mosaic’s earliest prototypes liberally borrowed from prior designs for browsers. Andreessen and Bina made no secret that they were borrowing, and, following norms for open science, communicated with many of those prior designers, who expressed no interest in further developing their version of their own software.15 That was the best of all possible worlds for Andreessen and Bina, since it meant there was little rivalry, no frictions to slow them down, and they possessed discretion to do as they pleased.

Aside from refining and improving the software, there were a couple of aspects that distinguished Mosaic from other NCSA projects. First, it was not a project pursued by just a single researcher. The NCSA’s institutional support helped the team design the software and diffuse it, as well as fund improvements to it over time. The institution paid for equipment and other support, and its endorsement also helped give Mosaic credibility with university administrators, which fostered adoption. Second, the NCSA browser eventually was built for several operating systems. Crucially, that would include a Windows-based system from Microsoft, at that time the most widely used operating system worldwide for PCs.16

While releasing a version for Windows might seem like an obvious thing to do in retrospect, until then it had not occurred to any designer in the technically adept community of Internet programmers to write a browser for a nontechnical user, save one. They had taken care of their own parochial needs first, for which Unix-based browsers were sufficient. More than invention played a role, as Berners-Lee later recalled:17

Marc and Eric did a number of very important things. They made a browser that was easy to install and use. They were the first one to get inline images working—to that point browsers had had varieties of fonts and colors, but pictures were displayed in separate windows. Most importantly, he followed up his and Eric’s coding with very fast 24 hour customer support, really addressing what it took to make the app easy and natural to use and trivial to install. Other apps had other things going for them. Viola, for example, was more advanced in many ways, with downloaded applets and animations way back then—very like HotJava.

Marc marketed Mosaic hard on the net, and NCSA hard elsewhere, trying to brand the WWW and “Mosaic”: “I saw it on Mosaic” etc. When Netscape started they of course capitalized on Mosaic as you know—and the myth that Mosaic was the first GUI browser was convenient.

The Mosaic browser for Unix-based systems was released in the spring of 1993, and it immediately began to receive attention and adoption. It was followed by the release for the Windows system in late fall of 1993. It too became available on shareware sites aimed at distributing software to other university users. NSF’s account of the browser says,18 “In less than eighteen months after its introduction [Mosaic] became the Internet ‘browser of choice’ for over a million users.”

Andreessen came to spend all his free time on the browser. Initially he had helped program it, but increasingly he was helping debug it and responding to requests and suggestions from users.19 Throughout the academic year of 1993, he continued in this role with others at NCSA, performing what a software firm might call “support” and “feature upgrades,” improving the browser’s design.

Two intricately related events next shaped the direction of Mosaic. First, Marc Andreessen graduated in December 1993, leaving the Midwest for a software-programming job in California. Second, the browser became available for ISPs in a variety of commercial formats. These two events were linked, as NCSA was on its way to licensing the software. The latter did not sit well with the programmers. Charles Ferguson, an entrepreneur aspiring to make tools for browsers, recounts what was widely believed at the time:

Smarr and his managerial team had moved to assert control over Mosaic. The development team got thousands of emails a day with fixes, complaints, and questions, which placed them at the very center of the ferment. Smarr decided to route the email to a generic response desk and then told the developers that they could not even see it, because it interfered with their work. When Andreessen graduated in December, he was offered a $50,000 salary to stay at NCSA—high by university standards—but Smarr would not let him manage Mosaic development. Andreessen quit and headed for California, where he got a job at EIT (Enterprise Integration Technologies), which was, however ineffectually, exploring commercial opportunities on the Internet.20

That account accords with the version of events Netscape later told about itself. It is, however, perhaps a bit unfair to the university and to Smarr in particular. In December of 1993 Andreessen was young, footloose, ambitious, and, by all accounts, headstrong. It is not obvious that any offer from the university would have kept this young talented programmer in Champaign, or for that matter, the state of Illinois.

Growing a Business

As Mosaic grew in popularity (measured by downloads), the managers at the NCSA realized this invention had commercial potential. While they anticipated that the browser would diffuse into popular use through shareware, which was free, they did not view that as sufficient to support a sustainable software business over time. The university administrators arranged for commercial licensing of the browser.

There were numerous justifications for initiating a licensing program. Any popular piece of software requires extensive support. One way to do that involves seeding a firm to perform that support. Commercial licensing of the software was a viable way to seed such a firm.

The University of Illinois initially tried to license the software itself and then decided to work through a known channel, an existing software firm, which was given a master license. That firm was Spyglass, a third party that had helped commercialize other inventions out of the NCSA in prior years; though none of them were as large as the browser, the company had handled the work capably in the past.

This effort began with good intentions. Universities with rich technical histories, such as the University of Illinois, are frequently pressured by state oversight committees to find ways to translate their faculty’s inventiveness into innovations that help society at large (and their state in particular). Other universities, such as MIT, had seemingly shown the way. Their licensing offices turned patents for faculty inventions into lucrative licensing deals. The arrangement with Spyglass appeared to be an answer to such a request: It was actively speeding the commercialization of software invented at the NCSA, and benefiting an Illinois-based firm as well.

One path was chosen and another was not. An active licensing program precludes an alternative—namely, leaving things to chance and shareware. Under that alternative, the university would put the underlying software on shareware sites and then passively wait for an enterprising software firm to imitate pieces of it and put it to use in commercial applications.

A licensing program also precludes another alternative, releasing the underlying software code, which would make it difficult to establish unique ownership over a piece of software. Accordingly, though the university had released earlier versions of Mosaic’s code for all to see, they did not do so on the last version.

The university administrators started the licensing program without anticipating what actually happened: the programmers left the state altogether, an action that did not benefit the state at all. As noted, first Andreessen left for California. A few months later, so would Bina and many others, as part of an effort to start a new firm.

Worse yet for the university, that same team built software that eventually competed with the university’s licensing program. Specifically, the newly graduated Marc Andreessen, who had moved to the area between San Francisco and San Jose popularly known as Silicon Valley, had struck up an e-mail conversation with Jim Clark. Clark had used Mosaic and was curious about it. As it happened, Clark had founded Silicon Graphics many years earlier and was well known in the industry. Clark and Andreessen hashed through a variety of predictions for the future of browsers, starting with interactive TV.21

Clark was tiring of his role at Silicon Graphics and its strategic fights and decided to step down as chairman of the board in February 1994. After that point he wanted to start another company. Not long thereafter, Clark and Andreessen’s relationship coalesced into a business plan in April 1994. After considering Internet television as a business application for browsers, they eventually settled on selling the browser alone, enabling surfing on the web. They called themselves the Mosaic Communications Company and sketched a plan to make money selling a browser and the servers to go with it.

Clark openly admitted that the business plan was sketchy, but viewed it as a by-product of the enormous opportunity in front of Mosaic Communications Company. Approximately two million copies of Mosaic had been downloaded by the spring of 1994, and millions more Internet users had never tried it. He reasoned that they could displace Mosaic with a new and better browser and generate millions of new users. Although he initially did not have a plan for generating revenue, Clark, never known for understatement, was sanguine. As he said later, he saw twenty-five million users on the Internet in April 1994, and he expected that to double by the time the company shipped a product. He expected the product to appeal to all of them, and, thus, a rather simple business plan was born:

You’ve got to be able to make money with fifty million users using your product.22

Eventually Clark and Andreessen’s company gave away their browser for free to households, but charged businesses for licenses and support. The free downloading was necessary to compete with Mosaic, which, as an academic program, also was free to students and households. Many enterprise customers were willing to pay the fees, however, as they began to build applications around browsing. Eventually this plan would expand far beyond the browser. It blossomed into an extensive business plan to support a range of complementary activities around their own browser, server tools, and range of services.

One of the notable features of this business plan was the absence of concerns about the underlying substrates of the Internet, such as the presence of the backbone, ISPs, and routers. Andreessen knew that structure well from his days in Champaign, and so did many of the other employees he and Clark would soon hire. They had confidence that their browser would work on a privatized Internet and with the commercial adaptations of the World Wide Web. They only had to build the browser, not any of the other pieces.

At the outset the newly formed team moved quickly. In part, as with any entrepreneurial firm, their urgency arose from the desire to get their product to market fast. Urgency also arose because this team was concerned about competing with others seeded by Spyglass’s licensing program, which held the University of Illinois’s master license.

Clark helped the business in a variety of ways. First, he put in as much as $4 million of his own money to finance the start-up, and had connections for collecting more. The team applied for and received venture funding from the same venture capitalists that had backed Clark’s earlier efforts—Kleiner Perkins Caufield & Byers, one of the premier venture capital firms on the West Coast. Founded in 1972, this firm was well known for its investments in information technology over many years, including firms such as Compaq, Sun Microsystems, and the predecessors to AOL.

While the financial backing was helpful, the endorsement and connections from L. John Doerr would be especially useful, particularly for recruiting new executive talent.23 Doerr was already well known, having funded a string of other start-ups, including Compaq, Intuit, and Clark’s prior firm, Silicon Graphics. In addition, Clark’s connections with the West Coast computing community and his stellar reputation also helped recruiting.

Throughout the spring of 1994, Clark, Andreessen, and Doerr started recruiting employees. Their first action was telling. Immediately after getting the funding in May 1994, they hired many of the same programmers who had worked at the NCSA in Champaign.24 With that one blow, they cornered most of the market for insider knowledge about the browser (outside of Spyglass’s programmers). Whether it actually mattered or not is an unanswerable question, but the appearances did make an impression. It looked like a very astute business move.

After that, the young company went on a crash course to become a large organization supporting worldwide use of its browser. Clark’s and Doerr’s energy and ability to interest world-class executive talent with experience made for eye-popping headlines among the executive insiders of Silicon Valley during the spring and summer of 1994. For example, they hired as chief operating officer Jim Barksdale, the CEO of AT&T Wireless (and, before that, at McCaw Cellular Communications, which AT&T bought out).25

Even before the young company shipped a product, others specializing in start-up markets began to take notice. This commercial start-up had financial backing from strong venture financing. Moreover, the enterprise had a famous founder, and he and his backers were actively recruiting world-class executive talent in addition to the programming talent.

An Early Confrontation

Back in Champaign, the movements of the university’s former students could not help but raise eyebrows. On the one hand, the university had arranged for diffusion, support, and further development of its inventions through two channels, shareware and licensing. These students had introduced a third and unexpected channel: an enterprise based on the West Coast, founded on several programmers’ deep familiarity with the inner workings of Mosaic.

Spyglass’s managers concluded (correctly) that the new venture aimed to make Spyglass’s actions less valuable and potentially obsolete in the marketplace. They responded as any manager in their shoes would have: since Spyglass had been given the right to license the trademarked name Mosaic, Spyglass’s management decided to defend its intellectual property. It had its lawyers contact Mosaic Communications Company with the intent of getting them to stop using the name Mosaic.

The lawyers’ actions had two effects. First, in November, Clark and Andreessen chose to end the problems by finding a new name for their firm, renaming it Netscape. Second, they did some additional programming, making certain their software did not overlap with the intellectual property owned by the university, eliminating any risk of such claims.26

A tussle over a name, by itself, would not be sufficient to deter any new enterprise, and it did not in this case. However, unsurprisingly, Netscape employees resented the actions—especially those who had recently graduated from the university. They did not blame Spyglass as much as they blamed their alma mater for licensing the software in the first place, rather than making it available as shareware, which was the normal practice. They viewed these actions as a clumsy nuisance, arriving at a moment when their time was precious and their commercial needs urgent.

It is not a foregone conclusion that every start-up will succeed, even those with such a strong set of advantages at a young age. Personalities can clash in unexpected ways under the stress of entrepreneurial life, for example, or technical issues can emerge that nobody foresaw. If this had been a weak start-up, perhaps this little tussle over a name would have mattered. As it turned out, it was a minor blip.

One early test of a start-up’s management is whether it can ship its first product on or close to its self-assigned shipping date. Netscape’s management gave itself a four-month timeline for shipping a beta browser. They came close to making that goal. The beta browser was released in November 1994, with its final release in December. Commercial versions were available by February 1995. In other words, the newly founded firm released its first beta just six months after founding, and its first product in less than a year.

This speed got the firm attention because it was fast by the norms of Silicon Valley start-ups. Looking behind the curtain, such speed was not magical. The firm succeeded in being so fast not only because it had a good programming team, but also because it had things few start-ups ever have: it was not creating a design from scratch, but rather was starting from a working prototype, and it employed the original designers of that prototype, who were given the opportunity to do a makeover (albeit under a deadline). Moreover, they did this with deep familiarity and near certainty about what features users wanted.

Although Andreessen and Clark already knew from experience that there would be a market for something similar to Mosaic’s product, there was considerable uncertainty about how large that market would be, and what form competition would take. In this setting, Netscape had one strategic advantage. Their biggest risk was Spyglass, but because Netscape’s staff members were the original designers of Mosaic, they could improve upon anything Spyglass did without any learning-curve lag time. This provided the firm with a competitive advantage in a race with Spyglass.

As it turned out, demand for Netscape’s product grew in spectacular fashion through the early winter of 1995, much of it coming from displacing the Mosaic browser, as Clark had forecast. Netscape gained market share and publicity throughout the winter. Andreessen’s and Clark’s initial strategy had been correct, and with every successful day the company embarked on an even more ambitious plan for growth.

A Host of Ironies

It is worthwhile to pause and observe several ironies in this brief tussle over a name. The form for diffusing innovations out of the nonprofit sector into commercial use has consequences for how much of society benefits, how quickly, and which firms reap the most benefits. These events illustrate that no licensing program is totally neutral in its consequences.

Most research-oriented universities make it a primary goal to invent new knowledge and diffuse it into widespread societal use. That is as true of the University of Illinois as it is of Stanford University and the University of California at Berkeley, the two large research universities located near the Silicon Valley. Many universities also pursue these goals with a parochial regional focus, if at all possible. This was also true of the three universities just mentioned.

The West Coast universities, however, had lived physically next door to a thriving industry for decades and evolved a set of norms that differed significantly from norms found at most other universities. Both Stanford and Berkeley had licensing programs, but they used comparatively light touches for enforcing them outside of the biological sciences. At both universities it had become quite common for former employees and graduate students to walk out with knowledge of innovations made on campus. The students could start new firms or contribute to the efforts of existing firms, sometimes without any university license at all. Often, however, these firms were within a short drive of campus, and the university retained a relationship with the former students in a variety of ways—through graduate advisors, other friends who remained on campus, and connections to other alumni.

As a former Stanford professor, Clark was familiar with that norm, and he had taken advantage of it in his prior firm. He was also far from alone. For example, Sun Microsystems involved the teaming of Andy Bechtolsheim, a PhD student at Stanford, and Bill Joy, a programmer at Berkeley. Although Sun’s founders built their firm directly on inventions made at the universities, neither Bechtolsheim nor Joy ever formally paid the universities for their part.

Did the universities eventually get paid? Yes, the money came back to Stanford and Berkeley through later donations and through supporting a local industry that hired its graduates.

Clark and Andreessen established their firm using the norms with which Clark was familiar. They never had any intention of directly paying the University of Illinois for anything invented while Andreessen had been at the NCSA, but something would come back to the university eventually.

The University of Illinois had taken a different path. Although it had taken a light touch in the past, by the early 1990s it was using a heavier hand by initiating a licensing program. That program, in turn, led the university, de facto, to use a commercial channel for diffusing the invention, for which it needed to establish clear property rights over the work done by its employees and students.27 In many respects this was an appropriate strategy because the primary user of the technology was not expected to be geographically near. The local economy could be helped through commercialization of technology, and raising revenue became the dominant priority.

The law of unintended consequences rebounded on the university. The university was explicitly encouraging a commercial channel in the hope it would further speed diffusion. Yet in order to do that properly, the university had to establish clear property rights—a step that angered Andreessen and his friends, who reputedly became irate that the university was not assigning credit to the programmers. To make matters worse, the university handed the property rights over to Spyglass, which had to take actions that actively discouraged another competitor even as it licensed Mosaic widely. In other words, Spyglass’s fight over the name of the Mosaic Communications Company was a by-product of the clash between the norms of the light touch and the heavy hand of active promotion. The university’s attempt to speed adoption therefore partly helped achieve its primary goal, but also eventually backfired.

A further irony only became apparent over time. Ultimately, Netscape became fabulously successful for a few years, making several alumni of the University of Illinois quite wealthy. Yet many of the university’s former students resented their experience tussling with the university over ownership of the browser. None of them made major donations back to the university.

One might conclude that society was fortunate that the dispute between the University of Illinois and Mosaic Communications Company only involved a naming right and not an issue that might have deterred or slowed Netscape further. Netscape’s actions would go on to become catalytic for Silicon Valley and other incumbent firms in commercial computing. That is, virtually every computing and communications company in the United States altered its investment and strategic plans as a consequence of actions Netscape took. Spyglass also did comparatively well, generating interest among many firms, but it simply did not have as catalytic an impact as Netscape throughout 1995 and later.28

What a set of ironies! The university’s administrators discouraged the very channel that eventually succeeded in having the most impact. Yet the university’s failure ultimately ended up having positive results: even Spyglass’s failure eventually helped the university achieve its broader aim—namely, diffusing the browser into wide use.

Champaign’s Second Gift

While the events at Netscape garnered much attention, Netscape was not the only software descendant from the NCSA at the University of Illinois. In fact, the NCSA gave society one other invention, a set of standards and protocols for the web server software that worked with the browser—the NCSA HTTPd server. This was the most widely used HTTP server software in the research-oriented Internet.

The server software was the yin to the browser’s yang. The latter would not have been catalytic without the former. The browser was useless without the server to support it. The team at Champaign had sensibly undertaken a project to design server software to work with their browser.

While it is not surprising that NCSA had some server software, that fact alone does not explain why this specific software, NCSA HTTPd, became the seed for the most widely used server software. There would be other server software, but none would become as popular as the descendants of the version written at NCSA.29 Moreover, in strong contrast to the browser, this invention leaked into society without much explicit push from the university. It developed along a path quite unlike that of the commercial browser, eventually becoming an open source project. How did that happen?

The server was a collection of technologies that supported browsing and use of web technologies. Along with it came a key invention, protocols for supporting CGI scripts, which moved data from browsers to servers and back again.30 As it would turn out, that tool would become an essential building block for electronic commerce.
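The mechanism can be illustrated with a minimal CGI-style script, written here as a modern Python sketch rather than the original NCSA code: the server hands the browser’s query data to an external program, and the program writes an HTML reply that the server returns to the browser. The field name used below is hypothetical.

```python
#!/usr/bin/env python3
# Illustrative CGI-style script: the web server invokes it for a request such as
# /cgi-bin/hello.py?name=Ada and passes the browser's data in the QUERY_STRING
# environment variable; whatever the script prints goes back to the browser.
import os
import sys
from urllib.parse import parse_qs

def main():
    # Data from the browser: "name=Ada" becomes {"name": ["Ada"]}.
    form = parse_qs(os.environ.get("QUERY_STRING", ""))
    name = form.get("name", ["world"])[0]          # "name" is a hypothetical form field

    # Data back to the browser: a header block, a blank line, then the HTML body.
    sys.stdout.write("Content-Type: text/html\r\n\r\n")
    sys.stdout.write(f"<html><body><p>Hello, {name}!</p></body></html>\n")

if __name__ == "__main__":
    main()
```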

From the outset the server software had been available for use as shareware, with the underlying code available to all. Many webmasters took advantage of it by adding improvements as needed or communicating with the lead programmer, Robert McCool. McCool, however, left the university along with all the others to work at Netscape in the middle of 1994. Throughout the fall of 1994 and the first quarter of 1995, therefore, there was minimal maintenance of the software at the university. Webmasters and web participants became frustrated with the lack of response to newly identified bugs or to suggestions for improving the design.

Netscape had just released its commercial browser, but had not yet built server technologies. It aspired to do so, and would eventually try, but the young company had stretched itself as far as it could go with its first product. (At this point, Microsoft did not even have plans to build any server programs for the Internet, a set of priorities that would change shortly.) Frustrated developers and impatient webmasters had nothing else to use except the descendants of the NCSA HTTPd.

The code was available on shareware sites, and many teams had improved it. At the time, there were eight distinct versions of the server in use, each with some improvements that the others did not include. These eight teams sought to coordinate further improvements to the software descended from the NCSA server. They combined their efforts, making it easier to share resources, share improvements, and build further improvements on top of the software.

The eight versions were combined and called Apache. The name was a play on words: the first effort, in February 1995, brought together a piece of software built from many “patches,” a colloquialism for additional code to repair problems. That led insiders to refer to their own project as “a patchy” piece of software. In a later interview Brian Behlendorf, one of the founders of Apache, acknowledged the pun, but claimed it did not motivate his initial thoughts about naming the project Apache. He also referred to the Native American tribe, stating “It just sort of connoted: ‘Take no prisoners. Be kind of aggressive and kick some ass.’”31

In April 1995, NCSA administrators tried to revive their support for server software by hiring new employees. Upon learning about the Apache effort, however, administrators quickly changed plans and cooperated with the team behind Apache.32

Apache worked well with browsers, and many webmasters adopted it, sending suggestions and improvements back to the Apache team. It became the most widely used server on the commercial Internet, eventually competing with, and besting, Netscape’s and Microsoft’s versions of similar software. Spyglass also had a version, which did not diffuse widely. Pieces of it did, however, end up in other software, most notably in server products from Oracle.

Apache played another role. Throughout the 1990s Apache became one of the most widely adopted open source projects, and became widely known as the first case of large-scale open source software that was not descended from Unix, as Linux, the most widely used open source software at the time, was. Apache became known as one of the “killer apps” for Unix, that is, an application on top of Unix whose value alone merited supporting the operating system.

With such widespread use, Apache’s organizers were able to influence others. As such, they became leading defenders of many nonproprietary protocols and standards for the commercial web. For example, CGI script became a widely used nonproprietary standard, and Apache insisted it stay that way.33

Becoming a champion of nonproprietary shareware, Apache led many other firms in the commercial computing industry to alter their strategies and investment priorities. It achieved such change because, throughout 1995 and beyond, Apache’s and Mosaic’s users demonstrated a working prototype of this novel approach to the organization of commercial computing.

Why did contemporaries consider the pair of server and application software novel? First, until then, personal computing was dominated by an interface known by the acronym WIMP, which stood for the predominant features of a personal computer: windows, icons, menus, and pointer. These all had proprietary code supporting them. Computing built around browsing appeared to differ significantly. It allowed users to open a window to other information, but potentially without the operating system, Windows, and with a new set of distinct icons and menus.

Second, the client-server interactions built between any browser and an Apache server did not involve proprietary code. They had the potential to let users avoid proprietary software in their client-server applications. To many industry participants, that opened the potential for an enormous number of possible futures. For example, it implied that users would not necessarily have to buy everything from IBM or Microsoft to ensure it worked together, but instead could purchase server software from one firm and applications for browsers from another, just so long as both worked with the nonproprietary standards and protocols in wide use. This potential grabbed the attention of many, including Bill Gates. As discussed in later chapters, he reversed his previous position about staying out of the browser business, writing a memo in May 1995 about the change in direction for Microsoft. It was called the “Internet Tidal Wave.”34

The lack of prices became essential to the operation and success of the project.35 The absence of pecuniary transactions first arose at the beginning of Apache’s existence, when the University of Illinois collected no licensing fees for its use as shareware. It continued as Apache relied upon donations and a community of users who provided new features for free. Apache eschewed standard marketing and sales activities, instead relying on word of mouth and other nonpriced communication online. Apache also did not develop large support and maintenance arms for its software, although users did eventually find ways to offer free assistance to each other via mailing lists and discussion boards. Eventually, a foundation was established to support Apache and to provide some of these functions.

Altogether, this added up to yet another irony. The university’s neglect of the server software at a crucial moment led to the development of multiple improvements, independently made by others. That later led to an effort to unify the software, which became named Apache. That fostered the retention of nonproprietary software as a crucial component of the commercial Internet. That example, in turn, contributed to an enormous institutional shift in the practices that supported the commercial web. Hence, it is not much of an exaggeration to say that one university’s neglectful behavior led to an institutional novelty for supporting widely used software in commercial markets.

Microsoft’s Action after Licensing

There was a further irony from the University of Illinois’s licensing program, and it arose from competitive events between Netscape and Microsoft, which later chapters will discuss in much more detail. To appreciate it requires a few more details about the University of Illinois’s role in Gates’s actions after May 1995. A fast-forward on future events will show the full picture.

Specifically, after Microsoft set on a course to offer a browser, its executives had to determine how to build one quickly. The fastest way to enter the browser business was through remodeling the browser Spyglass offered. Due to Gates’s late decisions about browsers, however, Microsoft’s entry into the browser business actually occurred in a somewhat roundabout way.

As part of an internal debate about the Internet, several Microsoft executives had arranged to license the browser from Spyglass in January 1995. They gained rights to fully access the code and modify it under their own brand. After May, as Gates authorized the change in direction, Microsoft added a few features, rebranded the browser, and unveiled what they called Internet Explorer 1.0 (IE 1.0) in August 1995 along with the release of Windows 95.

Spyglass’s licensed browser hastened the entry of Microsoft into the browser market, saving the Redmond-based company at least several months of development time by providing it with a working version of the software whose code it could examine and from which it could build. The value of that lead time was not apparent to others in January 1995, though it was to the team at Microsoft.36 It would be more apparent later, as Microsoft competed with Netscape from the position of an underdog. The Spyglass license was extraordinarily valuable to the Microsoft programming team in the second half of 1995. It gave them a working prototype as a starting point, just as Andreessen and Clark had had a working prototype in Mosaic.

IE 1.0 was not much more than what Microsoft had licensed from Spyglass, with many of the same features Spyglass had programmed into the browser, including some of the same bugs.37 Microsoft included it in a separate add-on, the “Plus!” pack that accompanied Windows 95. They did not make it central to the marketing surrounding the unveiling. In August 1995, IE 1.0 received little notice and made little market impact.

By the end of 1995, however, Microsoft began to devote enormous resources to improving the browser and supporting it.38 Eventually a thousand programmers would grow to four times that number across the entire Internet Products and Tools Division.39 Within two years, that investment would motivate all other licensees of the Spyglass browser to move their activities to Microsoft’s browser, effectively ending Spyglass’s ambition to be the primary supporter of this software category.

Seen in retrospect, it was as if events were fueled by a propensity for unexpected consequences. First, Netscape had not put Spyglass out of the browser business; Spyglass’s last licensee did. Second, the University of Illinois sought to speed the diffusion of the innovation into widespread use. It succeeded in doing so by licensing its invention to the world’s largest software company. As a by-product, the university unwittingly took sides in a brewing competitive fight, helping the very firm that would compete with Netscape, a company founded by the university’s own graduates.

Was the licensing deal a good one or a poor one? The January 1995 deal between Spyglass and Microsoft initially was for $2 million, and Microsoft also paid Spyglass a minimal quarterly fee for the Mosaic license. In a notable departure from its usual practices, Microsoft agreed to pay a royalty on Internet Explorer revenue. As it turned out, however, the royalty did not matter in practice, because IE 1.0 did not sell very well. Microsoft altered its strategy in December 1995, bundling IE 2.0 and later versions with the operating system and pricing the browser at zero. Hence, Microsoft paid Spyglass only the minimal quarterly fee after December 1995. After Spyglass threatened an auditing dispute in 1997, the companies settled for an additional $8 million.

Ten million dollars looked like a good deal in some respects. If the alternative were shareware and open access to code, which yielded the university fame but no fortune, then some money would have been better than none. In that sense it was a good financial deal. Moreover, at that time, it was one of the most lucrative deals ever for any invention out of the University of Illinois.

By the norms of Microsoft, in contrast, the amount of money was trivial, and the deal was a steal. The licensing deal helped its strategic interests in Windows, a business worth billions. It also accelerated the development of a project in a technology in which the firm had invested minimal resources, software that became a strategic priority to which the company would soon devote several thousand programmers. For a pittance, Microsoft bought the right to climb high and stand on the shoulders of others.

Does Public Funding Pay Off?

Even the best-operated government funding of R&D is, by definition, difficult to assess, because of two criteria common to subsidies for R&D: first, that the activity funded yields large benefits, and second, that the activity would not otherwise be undertaken by the private sector. The first criterion leads government funders to avoid funding research projects with low rates of return. This sounds good because it avoids wasting money. Combining it with the second criterion, however, has some nonobvious consequences. Private firms fund scientific R&D where the rate of return can be measured precisely, that is, where they can observe the return. What is left for government to fund? Governments fund activities where returns are imprecisely measured. In other words, governments tend to fund scientific research precisely in the areas where the returns are believed to be high but where there is little measurable data to confirm or refute the belief.

The NSF accomplished a great deal of impressive science. By 1989, after only a short period of NSF stewardship, the transfer of the Internet to private ownership was on the near horizon. More to the point, the technology and the operations had approached the level of maturity required for commercial markets. The Internet had acquired most of the attributes that eventually would lead to the transformation of every part of information and communications markets around the world, except one: software that compelled widespread use for any function other than electronic mail. In 1989 the software to support the web had not yet been built. Needless to say, all the web pages later built by hundreds of thousands of webmasters with Berners-Lee’s software were still more unimaginable.

Table 4.1 provides a sense of what had been accomplished before privatization, as well as what was about to occur after privatization due to the growth of the web. The table summarizes the growth of the Internet in terms of its “host” computers. A host is a physical node in the network; it has an IP address assigned to it and provides information to others on the network.40 This is one standardized way to measure growth over both the precommercial and postprivatization Internet. The table shows enormous growth rates, with the number of hosts roughly doubling from one year to the next.

Table 4.1 frames Stephen Crocker’s well-known comment in his August 1987 RFC. In his retrospective about the Internet’s growth he wrote: “Where will it end? The network has exceeded all estimates of its growth. It has been transformed, extended, cloned, renamed and reimplemented. I doubt if there is a single computer still on the network that was on it in 1971.”41

Crocker turned out to be right. The prior growth was remarkable, and it would not end.

This growth was accomplished at a remarkably small cost. The total cost to the government of creating the Internet is difficult to ascertain, but a ballpark estimate is well known. During NSF’s management (approximately 1985–95), the agency invested $200 million in building and managing the Internet.42 Since the Internet would grow into a multi-billion-dollar worldwide industry only a few years later, it is tempting to compare the result against that $200 million figure and assert that the US government earned an enormous rate of return on its investment.

TABLE 4.1. Number of hosts on the Internet, 1981–98

Date             Hosts
August 1981      213
May 1982         235
August 1983      562
October 1984     1,024
October 1985     1,961
November 1986    5,089
December 1987    28,174
October 1988     56,000
October 1989     159,000
October 1990     313,000
October 1991     617,000
October 1992     1,136,000
October 1993     2,056,000
October 1994     3,864,000
January 1996     9,472,000
January 1997     16,146,000
January 1998     29,670,000

Source: Coffmann and Odlyzko (1998).
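
The near-doubling in the table can be checked with simple arithmetic. The following is a minimal illustrative sketch in Python, not part of the original text; the first-of-the-month dates are an assumption used only to annualize intervals that are shorter or longer than a year.

    from datetime import date

    # Host counts from Table 4.1; dates are approximated as the first of the
    # month (an assumption made for illustration only).
    counts = [
        (date(1981, 8, 1), 213), (date(1982, 5, 1), 235),
        (date(1983, 8, 1), 562), (date(1984, 10, 1), 1_024),
        (date(1985, 10, 1), 1_961), (date(1986, 11, 1), 5_089),
        (date(1987, 12, 1), 28_174), (date(1988, 10, 1), 56_000),
        (date(1989, 10, 1), 159_000), (date(1990, 10, 1), 313_000),
        (date(1991, 10, 1), 617_000), (date(1992, 10, 1), 1_136_000),
        (date(1993, 10, 1), 2_056_000), (date(1994, 10, 1), 3_864_000),
        (date(1996, 1, 1), 9_472_000), (date(1997, 1, 1), 16_146_000),
        (date(1998, 1, 1), 29_670_000),
    ]

    # Annualized growth factor between successive observations.
    for (d0, h0), (d1, h1) in zip(counts, counts[1:]):
        years = (d1 - d0).days / 365.25
        factor = (h1 / h0) ** (1 / years)
        print(f"{d0} -> {d1}: x{factor:.2f} per year")

Aside from the earliest interval, the annualized factors implied by the table mostly fall between roughly 1.7 and 3, consistent with the characterization of hosts roughly doubling from one year to the next.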

Such a conclusion fosters the myth that governments can financially support new technological development on the cheap. That conclusion is not correct; above all, it is conceptually inaccurate. It presumes that the funding for basic research about the Internet arose out of a cost-benefit calculation designed to accelerate the arrival of economic gains. That grafts onto the participants a viewpoint that simply did not motivate events. Neither the cost of the Internet nor its future economic benefits shaped the aspirations of the government.43 The NSF invested in developing Internet technologies to meet its agency mission, not with the intent of producing large economic gains.

In terms of accounting, the $200 million figure is also too small. It does not include the funding that paid for most of the early invention in the 1970s and early 1980s. While the financial commitment to what became the Internet was undoubtedly considerable, no historian of these events has made a precise estimate of its size. The total expenditure of the Information Processing Techniques Office (IPTO), the agency within the Department of Defense that funded most of the Internet, did not exceed approximately $500 million over its entire existence (1963–86), and the funding for what became the Internet was but one of many IPTO projects.44

The cost tally for the Internet is further complicated by the distributed investments of others. The NSF paid for the backbone and for facilities for data exchange but offered only minimal support for installations at universities. Most universities invested heavily in their own computing facilities, paid for with university funds.

There is a deeper observation lurking behind this point. The Internet is more than merely one particular implementation of packet switching at a point in time. Much additional invention had to occur after 1973, and again after 1989. The NSF invested in improving the backbone and supporting high-volume exchange of data, and that led a set of administrators and researchers to invent ways to make the Internet easier to use. In 1993 one research organization contained a small team that designed software applications on top of Tim Berners-Lee’s inventions, and thus were born the browser and the web server.

The US economy got innovation from the edges, without the NSF fully intending that specific software to be an outcome of its investment. Browsing made it possible to commercialize services founded on the World Wide Web, thereby increasing the value of the Internet infrastructure the NSF had helped to develop. That began at the same time the NSF privatized the backbone, which gave commercial firms permission to build an entirely new commercial use for the Internet. Later events did not merely provide a payoff from one investment in invention. Instead, society got a return on an innovation that enabled a large range of additional innovations across the economy.

1 As Dave disconnects HAL, HAL resorts to his most basic programming. He states: “Good afternoon, gentlemen. I am a HAL 9000 computer. I became operational at the H.A.L. plant in Urbana, Illinois on the 12th of January 1992. My instructor was Mr. Langley, and he taught me to sing a song. If you’d like to hear it, I can sing it for you.” Dave says that he wants to hear the song. As HAL is being disconnected, HAL sings “Daisy Bell,” which is called “Daisy” in the movie. Arthur C. Clarke inserted that song as an homage to a synthesized version he heard at a technical demonstration at Bell Labs in 1962. It used an IBM 704.

2 See, e.g., Edwards (1997), 307–8, and 322–24.

3 CERN is located just outside Geneva. The acronym derives from the laboratory’s original French name, Conseil européen pour la recherche nucléaire; its formal name is now Organisation européenne pour la recherche nucléaire.

4 While not the first TCP/IP connection in Europe, RIPE was significant for bringing institutional cooperation to the spread of the Internet. RIPE had to overcome the national rivalry that had European and US researchers pursuing incompatible networking protocols.

5 For example, Gopher and WAIS were two such tools for indexing users and topics in a network. See Frana (2004).

6 This is described in somewhat different language and at some length in both Berners-Lee and Fischetti (1999), and Gillies and Cailliau (2000).

7 Backward compatible designs give users of the prior program a migration path to the new. For example, on his FAQ page for the World Wide Web, Berners-Lee goes to some lengths to show potential users that his creation would work with the most common tools in use, WAIS and Gopher. See the discussion in Frana (2004). Also see the historical documents, housed at http://www.w3.org/History/19921103-hypertext/hypertext/WWW/FAQ/WAISandGopher.html, accessed June 2008.

8 See, e.g., Berners-Lee and Fischetti (1999) and Gillies and Cailliau (2000).

9 See, e.g., Berners-Lee and Fischetti (1999).

10 In 1993 CERN agreed to make the code available as open source code.

11 Many fans of Gopher would blame the University of Minnesota for implementing a clumsy licensing program, which discouraged use of Gopher and interrupted momentum behind its diffusion. See Frana (2004).

12 See Frana (2004).

13 This decision is described in detail in Berners-Lee and Fischetti (1999), and Gillies and Cailliau (2000). CERN did not want to offer support, but MIT was willing to do so. CERN’s managers did not view providing institutional support for the W3C as within its domain, whereas the experience with supporting the consortium at MIT gave Berners-Lee a model for how to proceed.

14 See Gillies and Cailliau (2000).

15 See the account in Kesan and Shah (2004). A lengthy list of browser prototypes influenced Mosaic. Consider this brief account by Berners-Lee (quoted from Robert X. Cringely, 1998b):

I wrote the first GUI browser, and called it “World Wide Web” for NeXTStep. (I much later renamed the application Nexus to avoid confusion between the first client and the abstract space itself.) Pei Wei, a student at Stanford, wrote “ViolaWWW” for UNIX; some students at Helsinki University of Technology wrote “Erwise” for UNIX; and Tony Johnson of SLAC wrote “Midas” for UNIX. All these happened before Marc (Andreessen) had heard of the Web. Marc was shown ViolaWWW by a colleague (David Thompson?) at NCSA, Marc downloaded Midas and tried it out. He and Eric Bina then wrote their own browser from scratch. As they did, Tom Bruce was writing “Cello” for the PC which came out neck-and-neck with Mosaic on the PC.

16 This version was released many months after Mosaic had spread to Unix-based systems, so its release received little attention in the technical community. In fact, the browser was built for several different operating systems, such as Unix, Mac, and Windows.

17 Cringely (1998b).

18 See http://www.nsf.gov/about/history/nsf0050/internet/mosaic.htm, accessed September 2012.

19 Kesan and Shah (2004).

20 Ferguson (1999), 52.

21 Cusumano and Yoffie (1998), 20.

22 Interview in Cringely (1998a).

23 Cusumano and Yoffie (1998) stress this point.

24 As Sink (2007) recounts, Champaign’s computer science community was small enough that it became known immediately who had been asked to join and who had not, setting off jealousy, envy, and admiration.

25 See the account in Cusumano and Yoffie (1998), especially page 42.

26 Netscape’s management later claimed it would have reprogrammed the browser from the ground up in any event, because the software it was developing to support its long-run goals required starting from scratch. However, concerns about intellectual property made that goal a necessity rather than a luxury.

27 This explanation gets more emphasis in the account of Kesan and Shah (2004).

28 Spyglass sold a total of 108 licenses. See Cusumano and Yoffie (1998), Kesan and Shah (2004).

29 Spyglass also released a server program in 1995 and had some commercial success, for example, selling licenses to Oracle, which resold the product under its own brand.

30 CGI stands for Common Gateway Interface. It was first introduced in December 1993 (see RFC 3875) and became a common element in implementing dynamic web pages.
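
For readers unfamiliar with the mechanism, a CGI program is simply an executable that the web server runs for each matching request, relaying whatever the program writes to standard output. Below is a minimal, hypothetical sketch in Python; the example is not from the original text, and it assumes a server configured to execute scripts from a cgi-bin directory.

    #!/usr/bin/env python3
    # Minimal CGI response: an HTTP-style header block, a blank line, then the body.
    import datetime

    print("Content-Type: text/html")
    print()
    print(f"<html><body><p>Generated at {datetime.datetime.now()}</p></body></html>")

Because the body is generated anew on every request, pages built this way could change dynamically, which is the sense in which CGI became a common element of dynamic web pages.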

31 McMillan (2000).

32 See Mockus, Fielding, and Herbsleb (2005).

33 See, e.g., Behlendorf (1999), Mockus, Fielding, and Herbsleb (2005).

34 See Gates (1995).

35 The Apache Software Foundation argues that the lack of price encourages the commitment of the community, and this community would likely fall apart if its products were not free. “Why Apache Software Is Free,” http://httpd.apache.org/ABOUT_APACHE.html, accessed July 11, 2011.

36 Private communication, Ben Slivka, October 2008. Slivka headed the programming team for Internet Explorer 1.0, 2.0, and 3.0.

37 See Sink (2007).

38 See Sink (2007). He argues that Spyglass’s management was quite elated to get Microsoft as a licensee, which implies that management did not forecast this possibility. After Microsoft licensed the browser, it increasingly positioned itself as the primary firm supporting other application development. All the other licensees eventually chose to get support from either Microsoft or Netscape; none continued with Spyglass.

39 Bresnahan, Greenstein, and Henderson (2012).

40 Although printers and other devices may have IP addresses assigned to them, they tend not to be hosts in this sense. In addition, many hubs and switches may not be designated as nodes in the network and thus will not serve as hosts.

41 Request for Comment 1000, http://www.faqs.org/rfcs/rfc1000.html, accessed August 2010.

42 See Leiner et al. (2003), and the longer explanation in Greenstein and Nagel (2014).

43 This is especially so for the period prior to NSF’s stewardship. DARPA quite explicitly did not use economic rationales to fund projects. DARPA funded high-risk projects that “dealt with the development of fundamental or enabling technologies with which to reach newly defined DOD objectives.… When DARPA judged success, it applied technical rather than economic measurement standards.” Norberg, O’Neill, and Freedman (1996).

44 The cost of the Internet would also include the substantial number of failures that were part of a broad portfolio of investments in computer science more generally. For example, it would include funding for a range of computer science efforts that did not work out as well as planned, such as artificial intelligence. Nor do the figures in the text include a range of other experiments in computer science that NSF paid for and from which the general community of researchers learned. Norberg, O’Neill, and Freedman (1996).
