13

The Paradox of the Prevailing View

We chose our system name, Google, because it is a common spelling of googol, or 10^100 and fits well with our goal of building very large-scale search engines.

Sergey Brin and Lawrence Page, founders of Google1

The National Science Foundation claimed a parent’s share of the credit for Sergey Brin’s and Larry Page’s success. The claim had considerable merit because the NSF helped fund the fundamental research that eventually led Brin and Page to found Google.2 Moreover, Google became the most profitable business of the first decade and a half of the commercial web.

It is useful to enumerate what NSF did and did not claim, and why. The NSF never claimed to have had perfect foresight, nor did it claim to sponsor the precise invention Brin and Page made. NSF only sponsored the general technical area in which Brin and Page worked. NSF’s claims also fit the facts. When they began their studies at Stanford in 1994 and 1995, Brin and Page aspired to do precisely what NSF had sponsored them to do: get PhDs in computer science by making frontier inventions. Founding a company was not their primary goal at that point, nor was it an explicit goal when the NSF first began to fund their work. Seen in retrospect, however, there is no doubt the NSF funding played an essential role in helping Brin and Page make the inventions that led to Google.

FIGURE 13.1 Larry Page and Sergey Brin, cofounders of Google (February 8, 2002)

Brin and Page did not have the same type or level of support from NSF, and this turned out to be a distinction without a difference. Brin arrived with an NSF fellowship, while Page arrived with a promise of tuition and stipend support from Stanford’s computer science department.

Brin’s NSF fellowship was quite difficult to get, and, therefore, prestigious. It provided full financial support for a graduate education, and it afforded the recipient enormous freedom in his studies.3 Recipients of NSF fellowships, such as Brin, were given discretion to pursue any project. The money came with few strings attached other than a requirement that the students continue to enroll in their studies. As it turned out, Brin chose to work with Professor Terry Winograd.

In contrast, Page received a different deal, one that is more common for PhD students at elite research universities in the United States. His support came with a quid pro quo, in exchange for working as a research assistant for Professor Hector Garcia-Molina. As it happened, Garcia-Molina had promised NSF that he would do research on frontier issues in digital library science and train many graduate students in those issues. Garcia-Molina had made this promise in a grant application to NSF, which was written with Professor Terry Winograd. NSF had awarded the money to both Stanford professors as part of its Digital Library Initiative, NSF’s giant enterprise that included, among other things, the goal to improve the science of large-scale information retrieval and storage.4 Page could expect support as long as he remained in good standing. Good standing was earned by pursuing research on topics consistent with the scope of the NSF grant. In most cases earning good standing was the routine outcome from studiousness, perseverance, and disciplined work habits.

In practice, Brin and Page experienced virtually the same situation because Winograd and Garcia-Molina worked together and supervised a bevy of graduate students.5 Brin and Page both had discretion to pursue new ideas. Brin’s discretion came with no formal strings attached, but it did come with strong social norms of the technical meritocracy. He could expect that at some time near the end of his graduate school experience others within the computer science community would ask him to account for his time, and others would evaluate whether he performed well. Page’s obligations came in conjunction with his work with his advisors, Winograd and Garcia-Molina. They also had to justify their actions in a final report, which would be made available within NSF the next time the researcher applied for funding. That report could play a material role in whether they received funding again. Once again, therefore, a researcher could expect to have to account for their actions within the computer science community at some later date, and they could expect to have their performance assessed by others.

These facts about the discretion of a couple of young graduate students are more than a pedantic detail. While Garcia-Molina and Winograd could not change the scope of the NSF-funded project, they retained discretion to change the precise details of a line of experiments if they perceived a new opportunity, and the opportunity was consistent with the goals of the initial proposal. Many developed countries fund R&D (research and development), but few give recipients the flexibility NSF does. In this case, it did matter, and it especially mattered for Larry Page’s experience.

Professor Garcia-Molina and Winograd’s original proposal to the NSF did not promise—and could not have promised—the specific idea Page eventually pursued.6 The commercial web was quite young when Garcia-Molina and Winograd’s proposal began in the fall of 1994.7 The professors’ proposal was written and evaluated even before the release of the first beta for the Netscape browser. By the start of his studies, however, the growing use of the web had accelerated and caught the attention of many, including Larry Page. He came into the lab one day and proposed to examine information on private web pages.8 It was a conceptually interesting shift. Page shifted the focus from inventing tools to examine the digital content of libraries to inventing tools to examine the commercial web—similar tools, but a different setting. The supervisors recognized the potential scientific novelty of the proposal and let him go ahead and explore the possibility. That exact question had not been proposed in the NSF grant, because it could not have been asked. However, the professors knew they could authorize the change without worry.

A rigid system for funding science would have prevented both supervisors from allowing their students to address this new opportunity. NSF deserves credit for lacking such rigidity, for giving its researchers the discretion to pursue new opportunities related to those it had funded. The system accommodated an unanticipated opportunity.

As it turned out, another member of the lab, Sergey Brin, eventually joined Page in the activity. What did Brin and Page develop? Using some advanced mathematics, they developed a method for measuring the links across websites by ranking a website more highly when other sites linked to it. Oversimplifying, the algorithm measured a web page’s popularity. That ranking provided a better answer to queries seeking to find informative web pages. The technology was called a search engine, because it helped users search for useful information. Indeed, a search engine built around Page’s invention was more informative than any other search engine at the time. Brin and Page called their algorithm Page-Rank,9 in a play on words, referring both to web pages and Larry Page’s last name.
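
The idea behind Page-Rank can be sketched in a few lines of code. This is a minimal illustration of the principle, not Google’s implementation; the four-page link graph and the damping factor of 0.85 are illustrative assumptions.

```python
# A page ranks higher when other well-ranked pages link to it: each page
# repeatedly passes a share of its rank to the pages it links to.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        # every page keeps a small baseline rank regardless of links
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outgoing in links.items():
            share = rank[page] / len(outgoing)  # split this page's rank among its links
            for target in outgoing:
                new_rank[target] += damping * share
        rank = new_rank
    return rank

web = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
ranks = pagerank(web)
# "C" collects links from A, B, and D, so it ends up ranked highest.
```

Run on this tiny graph, the iteration converges to a fixed point in which the most linked-to page carries the highest score, the "popularity" measure described above.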

Page-Rank led to Google, but not by a direct path. Page and Brin did not drop out of their studies right away, or quickly sacrifice their aspirations for a PhD. They took classes for two years, worked in the lab, and completed all the requirements for a PhD except the last one, writing a dissertation. They even made progress on that last requirement by implementing and refining their algorithm.10 They implemented Page-Rank in a search engine they initially called Backrub, which became available on a server at Stanford. They wrote several papers about it.

The popularity of their search engine with campus users encouraged Brin and Page to seek some revenue in new licensees (which they would share with Stanford). Stanford took a patent out on Page-Rank and, following standard policies for the campus, licensed it to the inventor, Larry Page.11

Things did not go according to plan. Despite their proximity to Silicon Valley, at first these young entrepreneurs did not find any takers for their algorithm. That is not a misprint. Odd as it might seem in retrospect for an invention that eventually became the basis for a multi-billion-dollar company, Brin and Page were not able to find any existing firms who wanted to license Page-Rank. To say it simply, their invention did not stand out because they were merely a couple of smart kids trying to license an algorithm to revolutionize searching the Internet. In the middle to late 1990s there were a lot of smart kids claiming to be torchbearers for the next revolution, and plenty of other approaches to search. Brin and Page did not look any more distinguished.

More to the point, Brin and Page happened to have the dumb luck to live in a geographic location, Silicon Valley, at a time of intense entrepreneurial activity. In most other universities and in many other technical eras they might have been big fish in small ponds, just not against that cacophony of activity in the valley in the mid- to late 1990s. The prototype built at Stanford worked well, but did little to convince others of the value of the algorithm. Everyone seemed to be a skeptic. The prevailing view dismissed their search engine and characterized its approach as not valuable.

Location did play a role in the next key event, however. Brin and Page found an angel investor in Andy Bechtolsheim, or, perhaps, it is more appropriate to say that Bechtolsheim found Brin and Page. Bechtolsheim had dropped out of his Stanford PhD studies many years earlier to become a cofounder of Sun Microsystems and over the years had retained a liking for the quixotic pursuits of PhD students. The timing was important. Had they stuck to their initial plan, both Brin and Page would have been doing the research to inform the writing of their dissertations. Instead, Bechtolsheim invested in the venture founded by Brin and Page, nurturing them through the next delicate steps of founding Google, which Page and Brin did in September 1998, just a few years after they showed up on the Stanford campus.

The new firm moved off the university’s servers, establishing an office in a few rooms in a friend’s house in Menlo Park, and set about scaling its search engine into something larger and more pervasive. At that point an observer could be forgiven for not seeing much that distinguished Google from the thousands of other start-ups at the time that aspired to profit from the growing web. The valley’s VCs did not see them as anything special and did not regard a facile search engine as the first technical step on the path toward industry dominance or uncommon profitability.

This recounting of Google’s origins touches the surface of the paradox of the prevailing view, the theme of this chapter. Every technology market has a prevailing view, and it helps guide collective behavior. The paradox of the prevailing view is this: While the prevailing view guides the activities of many firms in one technical and commercial direction, one consistent with the prevailing view, it also can simultaneously motivate others to pursue another technical and commercial direction, one that is inconsistent with the prevailing view. Said succinctly, the prevailing view tends to encourage the majority of innovation in a very specific direction, but—simultaneously and paradoxically—it can encourage something else, exploration of innovation aimed at activity that would alter the prevailing view.

When can the prevailing view play such distinct roles? It is likely to play these dual roles when the economic conditions tolerate a variety of business approaches, giving rise to exploration in many directions. For example, as this chapter will show, Google used a different set of principles for creating value than its near rivals did. It had the opportunity to demonstrate that value because no single dominant player stopped it, either by acting as a gatekeeper, by slowing experimentation with added frictions, or by outcompeting it before the technical meritocracy of the market went to work.

Advertising Supported Commerce

The importance of advertising arose from a fundamental feature of the web, its disorganization. Better said, users required help in navigating the material on so many web pages because Tim Berners-Lee had deliberately kept the web simple, designing it with minimal central control. Word of mouth, informal recommendations, and the links on web pages provided some guidance, but the sheer number of pages on the commercial web soon outran the functionality of those mechanisms.

The specific steps to succeed eluded many firms, however. Early on, some firms hoped to make money by charging directly for subscriptions to organized web-based information, much as indexed databases had done in the past, such as Lexis and Nexis. These efforts largely failed to generate much revenue.12

Attention largely turned to making money by one of two alternative routes. One approach was fundamentally technical. It oriented itself toward providing better technical solutions to finding information on the web. That led to a variety of engineered solutions to online searching that could be made profitable in two ways: by selling advertising shown alongside results, or by licensing the search service to other firms. While this approach always yielded some revenue, especially for navigating internal corporate networks, the revenue from advertising promised a much larger potential scale, and so received much more attention from startups.13

The second approach looked for shared collective interests. It collected users in groups, showed them specialized content, and then showed them advertising. Many firms took this approach, assembling a wide variety of content and services, and the most successful among them grew into portals. Yahoo, AOL, MSN, and many other firms aspired to become portals.14

Two aspects of advertising underlay the promise of value from portals: (1) better measurement of the results and (2) more precise targeting of the audience. In practice, inventors promised both but did not realize those promises over the late 1990s. In retrospect, these forecasters wrote as if the new world would arrive quickly and friction free, and accused the pessimists of being too unimaginative about how much could change in half a decade.

The problems became readily apparent with experience. And they began to shape investing at the beginning of the new millennium when enthusiasm for dot-com firms began to wane and investors asked the skeptical questions discussed in the prior chapter. In a stroke of good timing, Google’s approach emerged around this time, and it improved upon measurement and targeting, while creating considerable value.

What was the promise of more precise targeting? The problem arose very early in the diffusion of the web, years before Google’s founding. It can be understood through the history of the cookie.

The cookie originated with the ninth employee of Netscape, Lou Montulli. He was twenty-four years old in June 1994, fresh out of college. Montulli called his tool a “cookie” to relate it to an earlier era of computing, when systems would exchange data back and forth in what programmers called “magic cookies.” While there were multiple technical ways to address the lack of history in a browser, the cookie sent information back and forth between a user’s browser and a web site, letting the merchant know the user had returned, and letting the vendor store small bits of information on the user’s browser for repeated use. Users could see it work, too, without having to change their settings, or even being alerted to its operations. It saved users time and provided functionality for nontechnical users so they could complete their online transactions with greater ease.15
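
The exchange Montulli designed can be sketched with Python’s standard library. The header names are the real HTTP ones; the "visitor_id" key and its value are illustrative.

```python
# First visit: the site's server attaches a Set-Cookie header to its response;
# the browser stores the small bit of data and sends it back on later requests.
from http.cookies import SimpleCookie

server_cookie = SimpleCookie()
server_cookie["visitor_id"] = "abc123"          # illustrative name and value
set_cookie_header = server_cookie.output()      # 'Set-Cookie: visitor_id=abc123'

# Next visit: the browser echoes the stored value in a Cookie header,
# and the server recognizes the return visitor without the user doing anything.
browser_cookie = SimpleCookie()
browser_cookie.load("visitor_id=abc123")
returning = browser_cookie["visitor_id"].value == "abc123"
```

The whole mechanism fits in two headers, which is why it spread so quickly once both major browsers supported it.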

Microsoft’s browser included cookies because Microsoft did not want its browser to be any less functional than Netscape’s. With the two most widely used browsers holding identical functionality, the web-building community could depend on a ubiquitous and universal presence of cookies in all browsers. Third-party tracking of cookies built on top of that ubiquity. Although cookies had been designed to let one firm track one user at a time, nothing prevented one firm from placing ads on multiple sites and, through arrangements with the owners of those websites, tracking what users did as they moved from one site to another.

Third-party tracking was not one of Montulli’s design goals, as it violated his notion of privacy. “We didn’t want cookies to be used as a general tracking mechanism,” said Montulli. Very soon many Internet technologists at the IETF thought so as well. Just as quickly, firms that benefited from cookies began to defend third-party tracking. Attempts to regulate its use met opposition throughout the late 1990s.16

Cookies particularly benefited firms with large amounts of content—for example, Yahoo, which assembled a great deal of information in one site. They also benefited firms with the ability to track users across multiple sites, such as DoubleClick. Such a firm, in principle, could assemble a survey of a user’s preferences by aggregating the insights made about one user across many properties. In principle, that permitted more effective targeting. Only a few years after the cookie was invented, Yahoo, AOL, and DoubleClick were well on the way to such an implementation, while making many deals to expand their offerings and match those of any other large site.

By the time Brin and Page founded Google, many of these firms had made progress but still faced difficult challenges matching their offerings to users with accuracy. At the end of the decade rarely did ads match with any precision.17 And that was so for both portals and specialty sites.

Reality fell short of the promise in one other aspect. The potential to place a “clickable/trackable ad” on a web page generated excitement.18 Owners of a large amount of content could keep track of surfers and which ads they clicked on. Since advertising defied strict measurement in many other media, the advance of the web held the promise of measuring whether ad campaigns had direct effects on users or not. Although banner ads made money,19 the vast majority were displayed to readers and sold in proportion to the number of times they were displayed, that is, they were priced on a “price-per-impression,” not on any reliable measure of their impact on user behavior.20 That left buyers and sellers with no certainty about the value of impressions, and it motivated sellers to find a better measure of impact.

Enter Google

Google eventually would improve on measurement and precision, but that was far from apparent in 1998 when Brin and Page founded the firm. At first Google offered its search service separately, with nothing else other than a bar for a user to input a keyword, just as it had appeared on the Stanford server. That generated a set of results, which the firm returned to the user. The design’s simplicity gave it a look distinct from the crowded portals, and that also drew attention to it as a specialized service.21

Google was distinctive in another respect. The search engine declared its commercial neutrality with respect to advertising, which the founders believed made their search engine more user friendly. Hints of this neutrality could be found in the academic papers Brin and Page wrote when they were graduate students. Their first academic paper included an appendix with several paragraphs describing the potential conflict between receiving compensation from advertisers and achieving search results that best met user needs, free of any commercial influence on the answers to queries. It included this declaration:

we expect that advertising funded search engines will be inherently biased towards the advertisers and away from the needs of the consumers … [and] … we believe the issue of advertising causes enough mixed incentives that it is crucial to have a competitive search engine that is transparent and in the academic realm.22

In short, for many years Page and Brin did not focus on advertising. They focused instead on attracting users with results that were relevant and quickly delivered. Speedy results became a well-known obsession of Larry Page. The focus on making search relevant to users—and doing so above all other considerations—could be called a strategic principle, but it is more accurate to call it an early outlook, a preference and inclination of the founders. It morphed into a strategic principle when the founders chose to retain it in spite of the short-run costs.23

In 1998, supply and demand did not seem to favor Google. Supply was high: Google was one of many search engines trying to find a licensee or buyer of its services.24 Demand was low: The prevailing view at the leading portals did not see much value in search, and did not anticipate creating value from improving search. In the prevailing view, search engines helped match a few niche concerns with a few idiosyncratic web pages. That activity, at best, catered to a few niche tastes that the portal otherwise could not satisfy with its listings. Moreover, the search engine took the user away from the portal, because it sent users to the links uncovered by the search engine. That reduced the potential to sell advertising, which portals could do more easily when users remained on their websites. Encouraging users to leave was regarded as an undesirable feature for a portal. Accordingly, all portals included a search function, but none featured it prominently.

Initially lacking any interest from licensees, Google made its engine available to any searcher with a query. Its market share of searches grew, and it began making a little revenue from showing simple display or banner ads—ads that displayed for a fixed period and did not tailor their content to the user—alongside the search results. These were distinguished from what soon came to be called the “organic listings”—namely, the results from the search engine.

Even at this early moment, Google honored the firm’s commitment to keep search results uninfluenced by advertising. At the outset the firm made a clear separation of ads from search results.

Google’s approach differed from the prevailing view. Aside from differing with portals, Google also differed with experiments with “pay-to-list” strategies followed by others at the time.25 Pay-to-list will be described more below, because it yielded lessons. Google did not win praise from analysts, since it attracted users but sacrificed revenue by not aggressively selling advertising, whereas other experiments, such as pay-to-list, focused explicitly on making more money.

Google’s web page did have one other positive consequence: it advertised Google’s capabilities to other portals, which occasionally rethought their contracts for search services. Google got its first big break when it found its first paying customer in June 2000, in a licensing deal with the web portal Yahoo.26 The deal was not worth very much money to Google, and at that stage, Yahoo primarily used search technologies as a backup for queries that went beyond its own directories.27 Nevertheless, the deal had great symbolic importance. Yahoo chose Google over Inktomi, the perceived leader in search at the time, which had provided search services for Yahoo during the prior two years.28

The arrangement was straightforward and similar to what Yahoo had done with Inktomi. Google provided the search service for Yahoo in exchange for a license fee. Yahoo sent queries to Google, while Google provided the answers.

This arrangement accomplished three things at once. First, it saved Yahoo from investing in search services, an activity Yahoo’s management considered peripheral to its main services as a portal. Second, the deal gave Google revenue and legitimacy, which mattered to a young company founded by a couple of precocious graduate students. Third, it directed a large number of actual search queries to Google, which it could use to refine its algorithm.

The symbolic importance turned out to matter a great deal more than Google could have imagined. It also brought attention to Google, which soon signed similar deals with other firms, reaching 130 over the next two years.

Many years later, in multiple acts of Monday morning quarterbacking, commentators questioned why Yahoo and other portals chose Google over the alternative search engines available at the time. No single answer could be correct by itself. For one, there was no obvious early sign that these contracts would begin to lead to Google’s dominance. The space appeared capable of supporting many firms racing to become technically better than one another.29 Also, the contracts were short term and could be terminated if another leader emerged. For another, the deal made considerable business sense in light of the prevailing view of the time. Page-Rank worked well at finding relevant answers, and it was fast, the result of many improvements over the prior two years.30 Many users lauded it in online discussion groups, and its growing use drew attention to its ability to satisfy users with unusual questions. Yahoo easily could justify its actions as in the interest of users with niche concerns.

In retrospect, one additional aspect deserves notice—the way Google masked its long-term aspirations by adopting what later became characterized as a “stealth strategy.” At the outset it did not appear as if Google could substitute for any portal, because it specialized in one activity, while the portals had many additional lines of business, such as e-mail, hosting of groups, and basic news. Indeed, when asked about their aspirations, the firm’s officers denied having any plans to expand the range of offered services.31 Overall, therefore, Google appeared to pose no immediate competitive threat as a substitute, so existing players did not hesitate to cooperate with it.

Evolving into a Platform

Google’s evolution into a platform for online advertising did not happen all at once. Like many young firms, Google’s founders planned to improve in specific ways, but they also reacted to new opportunities and threats. Because their approach began with the unique perspective of finding relevant answers for users without regard to the commercial needs of advertisers, they sought to match, but not imitate precisely, the features of others. They adapted new ideas in ways consistent with their unique strategy and strategic principles.

Like any other firm, Google studied other firms that provided complementary and substitute services, and sought to learn from their mistakes and triumphs. Google learned from another start-up called GoTo.com, which offered pay-to-list to advertisers. GoTo.com—later called Overture, and much later bought by Yahoo—pioneered the idea of a word auction linked to a search engine.

At first GoTo/Overture implemented what became known as “paying for placement.” It held a first-price auction for the right to list ads at the top of its searches, mixing those with the organic listings without informing users which were listings and which were ads. Overture’s auction attracted attention from analysts and investors because it was novel.

Overture’s management argued that vendors bid to provide relevant answers, so mixing bidding and organic listings helped users find what they wanted. That was a good sound bite when the experiment started. Closer scrutiny eventually revealed that the practical outcomes varied from this ideal. First, bidders manipulated the pricing of the first-price auction by gaming the system; that is, they bid a lot to show ads that users did not perceive as relevant. Second, advertisers could not measure the success of their ads. Those two factors led the search engine to an unsustainable strategy: collecting revenue in the short run while not measuring whether advertisers got any return on their ads.

A third, related problem led to the downfall of pay-for-placement. Sites with deceptive motives annoyed many users, especially when the returned answers did not match their needs. This was especially apparent in the extreme case, when firms selling salacious or dubious services took advantage of users.32 For example, simple requests using innocent phrases could lead to unwanted material, or a poor user experience. After becoming annoyed, many users did not return to the pay-for-placement services.

GoTo’s management learned about auctioning from this experience and altered their business model. One day they announced that they would cease auctioning the listing. Instead, they auctioned off the ad next to a search result.

In the fall of 2001, two employees at Google learned from watching Overture’s behavior, and sought to improve on it.33 Google implemented aspects of Overture’s approach on its website, remaining consistent with Google’s philosophy of giving users what they wanted. Google never changed its organic listings to reflect the needs of advertisers.

Google identified the ads by listing them above and to the right, and identifying them as paid.34 That was in keeping with its philosophy to be transparent and focus on the needs of users.

Google’s next decision ran against the prevailing view, and it had tremendous implications. Google’s auction designers eventually decided to use a second-price auction, not a first-price auction, for the right to have the first position, second position, and third position in the ads. A first-price auction requires bidders to pay what they bid, while a second-price auction requires the winner to pay the price bid by the next-highest bidder. While the first-price auction was more intuitive and more familiar to potential ad buyers, Google’s experience taught it that the second-price auction worked better. It helped stabilize the bidding behavior that determined prices.
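
The difference between the two payment rules can be shown in miniature; the bid amounts are illustrative.

```python
# Two payment rules for a sealed-bid auction over one position.

def first_price(bids):
    # The winner pays exactly what it bid.
    return max(bids)

def second_price(bids):
    # The winner pays the next-highest bid instead.
    return sorted(bids, reverse=True)[1]

bids = [4.00, 2.50, 1.00]
first_price(bids)   # the top bidder pays its own bid: 4.00
second_price(bids)  # the top bidder pays the runner-up's bid: 2.50
```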

How did the second-price auction stabilize bidding behavior? It addressed an issue that arises from running the same auction with the same bidders day after day. If the top bidder wins day after day, then it has an incentive to discover whether it can still win the auction with a lower price that saves money. Google’s auction designers observed that the highest bidder would win, then change its bid on a later occasion, shading its price, as if trying to discover whether it could have saved money by bidding less. Because bidders returned for many sessions, they had strong incentives to save money on all their future bids. This type of behavior became known as “price exploration.”

Price exploration made the whole auction less than the sum of its parts—namely, less than what the auction could have done for everyone. A sophisticated bidder could write a program to do the price search in an automated way. That worked well for one bidder, who would save money by searching for better prices. However, that exploration only worked if all the other bidders did not change their bidding behavior. If several bidders deployed an automated program to do the price search, then the auction might not settle on one set of prices. In other words, such exploratory bidding made prices less predictable day after day, and it frustrated bidders, who wanted to manage ad campaigns with reasonable forecasts about the likely process and outcome.

In other words, the auction could run if one bidder explored the prices. It would become unstable if more than one did so.

How to stop such exploratory behavior? Google’s team rediscovered a well-known principle from the study of auctions: in a repeated first-price auction, the winning bidder keeps shading its bid until it just exceeds the second-highest bid. The auctioneer could prevent the exploration of pricing by charging that amount from the outset, that is, by announcing a second-price auction.35
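
The dynamic can be sketched in a few lines. This is an illustration of the principle, not Google’s auction; the valuations and the shading step are assumed for the example, with amounts in cents to keep the arithmetic exact.

```python
# Repeated first-price bidding with "price exploration": the winner keeps
# shading its bid downward until any further cut would lose the auction.

def explore_first_price(my_value, rival_bid, step=5):
    """All amounts in cents; rival_bid is the static second-highest bid."""
    bid = my_value
    while bid - step > rival_bid:
        bid -= step  # won last round, so try paying a little less next time
    return bid

rival_bid = 250  # the runner-up bids $2.50
winning_bid = explore_first_price(my_value=400, rival_bid=rival_bid)
# The exploration grinds down to 255 cents, just above the runner-up's 250,
# the amount a second-price auction would have charged from the first round.
```

When several bidders run this search at once, the "rival bid" keeps moving and the process need not settle, which is the instability described above.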

Use of the second-price auction was an important invention, albeit easy to misunderstand and difficult to explain to bidders. Although confusing, it was technically easy to implement, and, importantly, like any digitized process, it could be automated and every auction could be tailored to the unique set of bidders for any keyword.

A third innovation had a technical dimension to it, but it was also largely aimed at improving the service: Google charged advertisers for user clicks, not impressions. That gave advertisers the ability to measure what consequence their ads generated, that is, whether a user clicked on the ad. It also promised advertisers they did not have to pay if the ad failed to attract any user clicks. Remarkably, this ideal had been discussed in online circles long before Google implemented it, but many firms had not bothered to implement it, even some of the very large firms. Google’s implementation became the largest ever.36

It also created a problem, because a pay-per-impression ad and a pay-per-click ad should be priced to deliver equivalent value to the advertiser, even though seemingly different things are being valued.37 As Google’s chief economist, Hal Varian, later wrote, Google “wants to sell impressions, but the advertiser wants to buy clicks.”38 One hundred people might view an ad, but only five of them may click on it. That means if a buyer were willing to pay $10 for one hundred impressions, then it should be willing to pay $10 for five clicks.

The solution also was seemingly straightforward. Google needed an “exchange rate” between the two types of ads.39 The most obvious “exchange rate” is the expected click-through rate for an ad. This exchange rate became the origin of a “quality ranking” inside Google’s auctions.
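The “exchange rate” arithmetic can be made concrete; the numbers below mirror the hypothetical example in the text:

```python
# Illustrative arithmetic for the impression-to-click "exchange rate":
# an advertiser's per-impression value converts to a per-click price
# via the expected click-through rate (CTR).

def price_per_click(value_per_100_impressions, expected_ctr):
    # Value of one impression, divided by the chance that any one
    # impression yields a click, gives the equivalent per-click price.
    value_per_impression = value_per_100_impressions / 100
    return value_per_impression / expected_ctr

# $10 per 100 impressions, with 5 clicks expected per 100 impressions
# (CTR = 5%): $2 per click, i.e., $10 for 5 clicks, as in the text.
print(price_per_click(10.0, 0.05))  # 2.0
```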

At first Google used the history of clicking behavior of users who entered specific words (on which the advertiser bid) to generate the quality ranking. If users clicked on an ad then Google’s system ranked its quality as higher. In practice, designing the quality ranking required considerable effort. Google experimented with better models of click-through rates over time.

A quality rating based on click-through rates broadly punished advertisers who showed ads that users did not click on. The mechanism and its importance can be illustrated by the experience of ads coming from sites with dubious motives. In an unweighted auction a user could get ads that did not meet their needs, annoying users. In Google’s quality-weighted auction, in contrast, such ads received a low quality ranking and were disfavored. They could not bid their way to the top. Hence, users avoided these annoyances, and most of these ads never appeared at all.40
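A minimal sketch of a quality-weighted ranking, using hypothetical bids and quality scores of the kind described in note 43 (this is a toy illustration, not Google’s actual formula):

```python
# Toy quality-weighted ranking: each dollar bid is scaled by a quality
# score, and ads are ranked by the resulting effective bid. All names
# and numbers here are hypothetical.

def rank_ads(bids):
    """bids: dict of advertiser -> (dollar bid, quality score)."""
    effective = {ad: bid * quality for ad, (bid, quality) in bids.items()}
    order = sorted(effective, key=effective.get, reverse=True)
    return order, effective

order, effective = rank_ads({
    "good_match": (3.00, 1.2),   # treated as $3.60
    "fair_match": (3.00, 1.1),   # treated as $3.30
    "poor_match": (9.00, 0.1),   # treated as $0.90 despite the high bid
})
print(order)  # ['good_match', 'fair_match', 'poor_match']
```

Note how the low-quality bidder cannot bid its way to the top: even a $9.00 bid with a 0.1 quality score ranks below a $3.00 bid with a good match.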

As Google learned, paying for a click-through, by itself, does not generate appropriate incentives to economize on impressions. Excessive impressions impose costs on users. Advertisers had incentives to get free impressions by showing ads on unrelated queries—for example, showing a soda ad on a search for a restaurant.41 That potentially allowed a soda company to gain a large number of impressions while paying for only a few clicks.

Eventually Google formulated the ranking by mixing in other assessments of the landing page and other aspects of the site.42 In this way Google’s ranking lowered the price of ads for firms that advertised services that were high-quality matches for keywords and raised the price of ads for poor matches.43

Google also took one more key step. It banned ads from pornographers and from sellers of tobacco and spirits. This was equivalent to giving these ads a quality of zero. Once again, it prevented these advertisers from buying a place in the ads, and reduced user annoyance.

Seen in broad perspective, quality ratings acquired a function with broad implications for ad auctions. They altered ads from being a negative feature of a search experience. Ads potentially became a positive feature, when the ad matched the needs of the user. That improved match benefited both user and advertiser and created value where there previously had not been as much.

Importantly, that positive experience was not followed by a negative one. Because Google placed the ads off to the right, many users simply ignored them, which made the ads neither positive nor negative. Similarly, when Google started placing ads in boxes at the top, users did not get annoyed if the ads were clearly marked.

Google’s pervasiveness in 2002 and later enhanced the importance of quality ranking. As Google’s approach to quality gained popularity with users and advertisers, a feedback loop developed, putting Google at the center of a virtuous cycle. Websites faced incentives to earn higher quality rankings from Google, especially if they intended to bid in the ad auctions. Consequently, websites began to tailor their designs to raise their quality scores. That added to the incentive they already faced to tailor their sites to raise their ranking in Google’s organic listings. Enhancement in one led to enhancement in the other.

Years later a myth developed about Google’s success, viewing it solely as a technical achievement. There is a grain of truth in this myth. No other firm matched the quality ranking in the short run, because Google actually had to invent some enormously difficult computer science to implement it. The computer science of an automated quality-ranking algorithm for the entire web is extremely technically challenging and impossible for many organizations.44

Yet that is too narrow an understanding of how Google behaved. It ignores the broader context, and the nurturing features of the setting. Google’s experience reflected yet another tale of a successful inventive specialist, and in this case the specialist made the jump from an academic setting to a commercial one while continuing its specialist activities. The founders had focused on addressing one problem and on doing it well. Like many other inventive specialists on the Internet, they took for granted the other functions of the network and the commercial web.

The prevailing view also served a nurturing role. Although Google invested against the tide of the prevailing view, it appeared to offer only a minor complement to portals, and posed no apparent threat as a substitute for the leading portals. Thus these leading firms initially cooperated in sending traffic to Google.

Google accomplished something extraordinary in the long run. It became the rare case of a specialist that others in the ecosystem had to accommodate. Because its search algorithm could determine a high fraction of the traffic received at a website, many other participants in the commercial web began to modify their websites in ways that favored them in the organic listings. Because Google’s quality ranking generated a reaction from other participants on the web, other websites had to begin to pay attention to their quality. Their prosperity depended on how well their websites interacted with Google’s ad services.

Extending the Auction

Google’s leadership in advertising was further cemented in 2003, when Google began to offer one more service—namely, placing ads in designated places inside blogs, usually in small rectangular windows. This service, known as AdSense, used an auction, but a different version of the mechanism.45 Google split the money from the ads with the site showing the ad.

Once again, the computer science involved a number of technical challenges. AdSense used automated linguistic analysis to identify the content of sites,46 and it took entrepreneurial imagination to build an organization behind the service.

Google did not invent the computer science behind AdSense, nor did it imitate someone else’s. Rather, Google bought Applied Semantics, a leading company in the area. After the purchase Google adapted to the processes at Applied Semantics, and renamed the service Google AdSense.47

The purchase was important from a competitive standpoint. Yahoo had worked closely with Applied Semantics, and Yahoo had no alternative partnership available after Google’s purchase. Thus Google’s competitive improvement came directly at the expense of its competitors. Moreover, Yahoo had no comparable auction to marry to such a service, and would have had to invent one from scratch.

Google did not displace others overnight, and for a number of reasons. The early adopters were firms with a high fraction of their business online, such as eBay and Amazon. The auction met with considerable resistance among other firms, such as auto companies and packaged-goods providers, which used web advertising to shape offline behavior. The process did not resemble ad purchases at newspapers or magazines, and it confused marketing departments accustomed to buying space at a discount.

Google’s key invention, the second-price quality-weighted auction, also met with resistance because it confused all but the most sophisticated ad buyer. It had to be experienced to be understood.

Scaling was also an issue for Google. The aforementioned features of the auction ended up determining many of the features of Google as an organization as it tried to grow. Google could automate the processes behind the auction, such as the submission of bids and the determination of ad placement. That automation enabled the auction to scale, that is, to grow with demand for its use without imposing additional costs on users or advertisers.

Other parts of the business did not scale, however; that is, they did not grow without additional cost. Google had to hire personnel to help ad buyers learn how to select appropriate keywords, bid appropriately, and mount a campaign. Google had to develop a sales department to explain the process and educate the ad buyer. It had to invest in tools to help buyers understand the value of its bids and manage large campaigns, and that required talented programmers. Google also had to invest in simulations of the auctions, so buyers could anticipate how to arrange large and complex campaigns that extended over thousands of keywords. Accordingly, as Google enjoyed increasing success in the market, Google, the organization behind the engine, began to hire thousands of new employees.

Advertiser resistance to the auction eventually declined for one principal reason: users returned to keyword search. It offered an excellent way to reach them during their online sessions, at the moment they asked a question. That permitted advertisers to target very specific users with very specific needs. It was an efficient process for matching a potential buyer with a potential supplier.

Google’s rise solidified the irreversible movement away from the web’s innocent roots and toward a pervasive commercialization of surfing. As every page became a potential poster for a relevant ad, every action took on an additional potential meaning as information about a user’s preferences, or as a statistical indicator about how other like-minded surfers would behave next.

Summarizing, the economic model of ads and search had eluded others because it had been easy to misunderstand. The two sources—ads and organic listings—were substitutes in any given search, and that would have seemed to give incentives to the search engine to invest in hosting as many ads as possible.

Search services depend on repeated use, however. In many cases the user gets an answer from the organic listing and not the ad. A relevant organic listing is essential for attracting users back to the site in the future. Something similar applies to ads: the likelihood that a user gets an answer from an ad rises with the quality and relevance of the ads.

Incentives for Improving

In a few short years, Google had transformed itself from an obscure firm built around a graduate school project. The quality-weighted second-price keyword position auction for ads had become a competitive advantage for Google, enabling it to create value that Yahoo or any other portal could not immediately match. Users came to Google with a question and got their answers from one of two sources, either the organic search results or the ads. In the latter case Google got paid.

With no close substitute, a virtuous cycle emerged. Websites invested in features that raised their standing in organic listings. Ad buyers invested in their websites, which improved in quality. Users continued to use the search engine because it yielded relevant answers. Ad buyers only paid when users clicked on the ads, and gained experience about the value of ads.

That virtuous cycle generated strong incentives at Google to continue to simultaneously improve search and ad quality as complements to each other for purposes of attracting users to return.48 Defensive motives—preventing users from ever going to another search engine—also provided incentives for continual improvements. Whatever the motivation, the result was the same: Google’s management adopted a strategic principle consistent with continual improvement.49

With strong internal funding, Google began investing in a wide set of improvements. Some of these improvements benefited many users. For example, they included an innovation in spell checking. Informed by many user misspellings, the search engine asked, “Did you mean …?” Spell checking by itself nearly doubled traffic.50
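A toy version of a “Did you mean …?” suggestion can be built from the standard library’s fuzzy matcher. Google’s actual system was informed by logs of user misspellings, which this sketch, with its hypothetical query list, does not attempt to model:

```python
# Toy "Did you mean ...?" sketch using similarity matching from the
# Python standard library. The known-query list is hypothetical.
import difflib

KNOWN_QUERIES = ["britney spears", "restaurant", "advertising"]

def did_you_mean(query):
    # Return the closest known query if it is similar enough, else None.
    match = difflib.get_close_matches(query, KNOWN_QUERIES, n=1, cutoff=0.7)
    return match[0] if match else None

print(did_you_mean("restaraunt"))  # restaurant
```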

Google went on to make innovations in news aggregation (that is, informed by user clicks on articles), filtering (that is, eliminating pornography with “safe search”), faster response (that is, due to large investments in server farms), and many more features. Eventually Google made investments in services that mimicked the broad-based offerings of portals, such as electronic mail and community support. All of these improvements were responses to the incentive to bring users back to search, and maintain the virtuous cycle.

Not all these incentives were unambiguous, however. The incentive to attract users placed Google in an unusual place when it came to online pornography. On the one hand it sought to reduce the influence of the pornographers in listings that annoyed its users. In addition, Google invested in tools for filtering pornography, meeting the needs of users who desired to avoid salacious material. Yet Google also accommodated the search terms, and did not filter pornography for a user who wanted it. Google helped users find websites that offered users what they wanted. In effect, the very tool that made it easier to filter away undesired photographs also enabled users to find any photograph, salacious ones included.51

The ambiguity of incentives fueled additional potential controversy. For example, Google faced incentives to take an expansive approach to fair use of online text. Fair use is a legal doctrine that permits a writer to quote copyrighted material verbatim for purposes of criticism, news reporting, teaching, and research, without the need for permission from or payment to the copyright holder. Not asking for permission was crucial for the search engine, as it quoted the most recent postings and content without transacting for it each time. Google faced no clear legal definition for how much of a website’s article a search engine should quote. Google did not want to quote so much text that it filled up the page, since a well-designed listing of organic results would include more than one option. However, could it quote more than one or two sentences of text? What principles defined the limits? This question would remain open and haunt Google’s actions as it expanded into more countries, where local firms could raise the issues within their legal system.

For related reasons, Google acquired a tense relationship with news and content sites. Google could move considerable traffic to those sites by making their content visible in its search engine. Yet a simple statement within the organic listing—a sports score of a recent event, a phone number for a business, or an address for a store, for example—could satisfy a user and obviate the need to check through to the original site. This tension was inescapable. Google’s prowess placed content sites in a dependent relationship with the pervasive search engine. Google was both the source of friendly referrals and a potential substitute. In the latter case Google was using the sites’ own content.

The situation also gave Google incentives to turn a blind eye toward violations of copyright by users and hosting sites. For example, Google had incentives to satisfy the query of a user who turned its search toward illegally pirated software or music. Accordingly, it had incentives to evade responsibility for user action and place legal responsibility on the sites that hosted the pirated software or those users that looked for it. Not surprisingly, Google would be dogged by this tension as it grew bigger and spread to more countries.

Google’s rise led many copyright holders to rue the decline of portals. Portals had incentives to contract for all their content, and, thus, stay within the bounds of copyright law, as interpreted by copyright holders signing a deal. The user orientation of Google gave it incentives to organize all of the web’s content, whatever the user wanted, even its illicit content. In short order, Google’s rise would be one of several factors enabling piracy to play a more prominent role in the web. The legal issues would persist for many years, as participants interpreted the many provisions of the Digital Millennium Copyright Act, which defined the legal liabilities of Internet intermediaries.52

By the middle of the decade the service had spread widely. “Ads by Google” became a common statement all over the web. Many small and large sites began using Google’s ad services, paying only for clicks. It was convenient, especially for small and medium-sized businesses, and it generated enough revenue to support an inexpensive website. AdSense also helped change the ecosystem of the web. The long tail of an ad-supported web began to depend on it. Many niche sites, blogs, and aggregators of content expanded accordingly.

What Was Accomplished?

As noted earlier, as Google grew, a myth arose that viewed its success solely through its technical accomplishments, as if the firm was merely an extension of Page-Rank, a lab project from a couple of graduate students that grew beyond its earliest aspirations. That myth interprets Google’s accomplishments as an outgrowth of its technical prowess and deemphasizes how much of the firm’s services evolved in response to commercial circumstances and incentives. It is very misleading.

Google’s success was more than merely the sum of several technical inventions. It involved a complementary set of innovative commercial processes, which reflected the unique perspectives of its founders, as well as the discoveries of many employees within the organization as they responded to the challenges of deploying an auction for advertising. Once the auction was deployed, the firm responded to the economic incentives to bring users to the search engine and continually improve the service for more advertisers, which fueled a virtuous cycle among users, advertisers, and many web pages.

In some respects, therefore, at the outset Google’s experience resembled other innovations from the edges. Google demonstrated a fresh perspective on a combination of a business process and new technical approaches, and these differed from those at the core of the industry. The prevailing view made it harder for Google to raise funds and get started, but entry was not blocked. Like many an entrepreneur before them, Google’s founders were given the opportunity to implement a prototype of their alternative, and to scale it with customers.

There also was something different about Google’s experience, and that difference illustrates how the commercial Internet had evolved by the late 1990s. Brin and Page were never truly outsiders. Google’s founders had the luxury of a challenging but nurturing setting, with modest initial financial backing from the National Science Foundation, and then backing from a sympathetic financial angel. Their university had experience with moving inventions to industry and had put in place policies to accommodate such movement, easing the transition for Brin and Page. Within a couple of years the founders also made deals with other industry players as they tried to grow their business and learn about resolving new challenges. The cooperation arose, in part, because existing players adopted a prevailing view that interpreted Google’s services as a complement to theirs. Summarizing, Brin and Page had status as both insider and outsider, reflecting both the novelty of their invention and the familiar motif of their aspiration for commercialization, working with existing players while also pursuing a unique perspective.

That dual status also reflected the timing of Google’s rise. It came at the end of the 1990s, at a crucial moment in the history of the prevailing view for businesses built on the commercial web, when a widely imitated entrepreneurial approach—the dot-com firm—began to show that it would not come close to realizing the most optimistic promises. Google walked into the scene at the same time the support for these dot-com firms declined. Google accomplished part of what prior participants had promised but had not delivered, achieving the first large-scale tailoring of advertising to the unique features of the web surfer.

Long after the dot-com bust, Google continued to experience success delivering and improving its service. As it encountered more challenges, and resolved them, it improved the Internet experience for the vast majority of users and advertisers, as well as their content-supplying web partners. This set Google on an extraordinary long-run path, one in which Google’s innovations would touch every participant in the commercial web.

1 Page and Brin (1998).

2 Thanks to Jeannette M. Wing, former assistant director of NSF’s Computer and Information Science and Engineering (CISE) Division, for pointing out the connection between NSF and Google, and piquing my curiosity. See http://www.tvworldwide.com/events/pcast/100902/default.cfm, accessed July 2012. See http://www.nsf.gov/discoveries/disc_summ.jsp?cntn_id=100660&org=NSF, and also http://ilpubs.stanford.edu:8091/diglib/pub/projects.shtml, accessed July 2012.

3 See http://www.google.com/about/company/facts/management/#sergey, and also http://www.nsfgrfp.org/why_apply/fellow_profiles/sergey_brin, accessed July 2012.

4 See http://www.nsf.gov/discoveries/disc_summ.jsp?cntn_id=100660&org=NSF, and also http://ilpubs.stanford.edu:8091/diglib/pub/projects.shtml, accessed July 2012.

5 Professor Hector Garcia-Molina, private communication, July 2012.

6 Professor Hector Garcia-Molina, private communication, July 2012.

7 The proposal has an official start date of September 1, 1994, and was written many months before, around the time of Netscape’s founding. Professor Hector Garcia-Molina, private communication, July 2012. See also http://www.nsf.gov/awardsearch/showAward.do?AwardNumber=9411306, accessed July 2012.

8 Professor Hector Garcia-Molina, private communication, July 2012.

9 Page et al. (1999) and Page and Brin (2008).

10 This originally could be accessed at www.google.stanford.edu. Professor Hector Garcia-Molina, private communication, July 2012. In fact, Page and Brin were not the only search engine inventors to have a similar insight. RankDex, established in 1996 by Robin (Yanhong) Li, also used an algorithmic approach to links between sites to rank relevance. See US patent 5,920,859. Li worked for Infoseek for a time, and went on to found the Chinese search engine Baidu. In addition, another pioneer for search engines at the time, Eric Brewer, was an assistant professor at UC Berkeley, and a participant in another NSF grant from the same program about digital libraries. Along with a graduate student, Paul Gauthier, he founded Inktomi, using a distinct approach to rank web pages. Dave Peterschmidt became CEO, with Brewer acting as the chief scientist and Gauthier as the chief technology officer.

11 Larry Page and Stanford University were granted patent 6,285,999, with Page listed as the sole inventor, and Stanford as the assignee. The US patent database also includes several updates to the original filing.

12 For a history of the variety of approaches, see Haigh (2008).

13 For a history of some of these attempts, see Haigh (2008).

14 For a history of portals, see Haigh (2008).

15 Schwartz (2001).

16 See Schwartz (2001). In addition, there are alternative methods for tracking user surfing, such as web bugs, though European Commission rules banned them, for example. See Goldfarb and Tucker (2011).

17 As Haigh (2008) archly points out, both firms were quite far away from any finely grained targeting. Many of the ads sold in this era were sold at flat rates and with crude matching mechanisms, at best.

18 Donaldson (2008).

19 Although the history of the banner ad predates the growth of the web, its growth did not accelerate dramatically until the commercial web began to grow along with the browser. See Donaldson (2008). The BBS Prodigy appears to be among the first firms to deploy the equivalent to the click-through banner ad. http://nothingtohide.us/wp-content/uploads/2008/01/dd_unit-1_online_advertsing_history.pdf, accessed July 2012.

20 Also see the skepticism in Haigh (2008).

21 Marissa Mayer, vice president of search product and user experience, states that this design was a by-product of the limited HTML programming skills of the founders, and their desire for a fast-loading page. They decided to retain the simplicity after learning that many users expected more, and only came to understand the benefits of simplicity long after Google inserted the copyright notice at the bottom. See http://www.youtube.com/watch?v=soYKFWqVVzg, and http://alan.blog-city.com/an_evening_with_googles_marissa_mayer.htm.

22 See appendix A, Page and Brin (1998). They identify several conflicts, stressing conflicts over the top listing, conflicts over the order in which results are listed, conflicts between the transparency of the decisions about how to order results and hiding favoritism, conflicts due to exclusion of companies with whom the sponsor has a conflict, and erosion of search engine incentives to improve quality when advertisers address user needs.

23 Also see Schmidt, Rosenberg, and Eagle (2014), 78–81.

24 Among the other prominent search engines at the time were AltaVista, Lycos, FindWhat, GoTo, Excite, Infoseek, RankDex, WebCrawler, Ask Jeeves, and Inktomi.

25 GoTo’s founders had taken out a patent on the pay-to-list method, receiving US patent 6,269,361, claiming to cover a broad class of paid-for-click advertising in a search engine. The claims were novel and legally untested but potentially could lead to a suit of any search engine that used anything similar. Google’s approach to advertising at the time would have given them considerable merit to claim they were not infringing on these patents.

26 This was not Google’s first deal. Among Google’s first deals was a deal to help search for Netscape’s users. It had generated so much traffic, Google had to temporarily disable its site. See Schmidt, Rosenberg, and Eagle (2014), 84.

27 Google licensed the technology and also sold an undisclosed part of the company to Yahoo. See Hu (2000). Also see Angel (2002), 245. Yahoo continued to use Google until February 2004.

28 New reports at the time stressed the symbolic importance of the change. Hu (2000).

29 There had been considerable turnover in leadership over the prior half decade, involving the firms Excite, WebCrawler, Lycos, Infoseek, AltaVista, Ask Jeeves, and Inktomi, among others. See, e.g., Danny Sullivan’s wistful recollections of search engines that had come and gone at Sullivan (2003). Also see the brief account of Mark Knowles at http://www.thehistoryofseo.com/The-Industry/Short_History_of_Early_Search_Engines.aspx.

30 Very early Google began investing in large-scale arrangements of server hardware to support quick processing.

31 Google initially denied in public that it had any ambitions to substitute for Yahoo’s portal service. For example, in an April 2001, article, the following quote appeared. “The fact is that we have 130 customers that we power search for,” said Omid Kordestani, Google’s vice president of business development and sales. “They don’t feel we’re competing with them, and we’re comfortable with that model. I use my favorite portals for sending e-mails, instant messaging, tracking stock portfolios—all these things Google isn’t doing.” Festa (2001).

32 The canonical example is the plumber who bids highest, gets the phone call, then bills a high amount merely for the visit before providing any service.

33 Varian (2006) identifies the two employees as Salar Kamangar and Eric Veach.

34 Did this implementation violate GoTo’s patents? GoTo eventually would claim it did, though the merits of that claim are not obvious in retrospect. To avoid an extended lawsuit Google eventually settled out of court with Yahoo, which had acquired GoTo/Overture, selling a stake in Google to Yahoo. Yahoo sold its stake in Google after Google’s IPO. Kuchinskas (2004).

35 Simon Wilkie, private communication, September 2012. Also see the longer explanation in Hansen (2009) and Varian (2006).

36 See Hansen (2009).

37 Varian (2006; 2010).

38 Varian (2010), 4.

39 Varian (2010), 4.

40 Extremely low quality matches could be excluded while extremely good matches could be included. Through such a mechanism the pornography sites could be excluded from appearing on ads no matter how high they bid.

41 This incentive is stressed by Schmidt, Rosenberg, and Eagle (2014), 70.

42 Simon Wilkie, private communication, September 2012. See also the explanation from Hal Varian, chief economist at Google, available at http://www.youtube.com/watch?v=1vWp2-QMOz0.

43 The quality ranking stayed the same whether the ad appeared in the first, second, third, or fourth position, and, therefore, worked rather mechanically. Every bid came with a quality, such as 1.1, 1.2, etc. Higher numbers represented higher quality. Hence, if two firms bid three dollars, the firm with a ranking of 1.1 would be treated as if it bid $3.30, while the firm with a ranking of 1.2 would be treated as if it bid $3.60. The quality ranking rewarded better matches. Note that it could also punish firms for very poor matches. An extremely low quality, such as 0.1, would cause a three-dollar bid to be treated as $0.30.

44 GoTo did not have anything equivalent to Page-Rank, so it lacked the ability to initially rank web pages for relevance as a basis for the quality ranking, and Google would not license Page-Rank to them. GoTo could only employ a user’s click-through experience, but that required sufficient data, which it lacked. After GoTo was sold to Yahoo, determining a quality ranking also became a priority at Yahoo. Simon Wilkie, private communication, September 2012.

45 AdSense uses a Vickrey-Clarke-Groves mechanism, which is a sealed-bid auction that charges a bidder for the “harm” it causes others in the auction.

46 Kawamoto and Olsen (2003).

47 Kawamoto and Olsen (2003).

48 This is one way to read the discussion in Schmidt, Rosenberg, and Eagle (2014), 80, which emphasizes attracting users in order to spur large volumes of users, and not let short-term revenue objectives overtake the long-term objective of attracting returning users.

49 Schmidt, Rosenberg, and Eagle (2014) stress the organizational features at Google that supported continual improvement.

50 See http://www.youtube.com/watch?v=soYKFWqVVzg, and http://alan.blog-city.com/an_evening_with_googles_marissa_mayer.htm.

51 See, in particular, Schmidt, Rosenberg, and Eagle (2014), 75–77, which emphasizes that the invention of safe search also generated additional tools for users.

52 The DMCA was passed in October 1998, as part of a broad initiative to coordinate copyright law around the world. For a description and explanation of the many provisions, see Nuechterlein and Weiser (2005).
