10

Capital Deepening and Complements

The universal reach and connectivity of the Internet were enabling access to information and transactions of all sorts for anyone with a browser and an Internet connection. Any business, by integrating its existing databases and applications with a Web front end, could now reach its customers, employees, suppliers and partners at any time of the day or night, no matter where they were. Businesses were thus able to engage in their core transactional activities in a much more productive and efficient way. And they could start very simply, initially just Web-enabling specific applications and databases.

Irving Wladawsky-Berger, first general manager for IBM Internet Division1

IBM suffered unenviable circumstances at the outset of the commercial Internet boom. Although IBM had played an important role in the National Science Foundation Network (NSFNET) from 1987 to 1995, its participation had done little to alter the practices or perspective of IBM’s top management. By the mid-1990s, IBM’s mainframe business had declined significantly from its peak in the late 1980s, and the PC division had lost much market share over the same half decade. Perceiving IBM to be in a free fall, the board had removed the CEO. Then the board broke with precedent in hiring a new CEO from outside the firm in 1993. That is how Lou Gerstner Jr. came to IBM.

FIGURE 10.1 Lou Gerstner, CEO IBM, 1993–2002 (photo by Kenneth C. Zirkel, 1995)

FIGURE 10.2 Irving Wladawsky-Berger, first general manager for IBM Internet Division (photo 2014)

Gerstner undertook a long-term review of every part of IBM, and after considerable study came to two key decisions: he chose not to break IBM into distinct business units for sale, and he decided to reorganize many of the existing business units around a service-oriented strategy that focused on helping clients to implement IT in more effective ways.2 The strategy included some restructuring, such as—eventually—selling IBM’s networking business and, instead, purchasing networking services from others. It also stressed investing in business services, especially those related to implementing all aspects of IT used in business operations. While IBM had done some of these activities for years, there was only one obvious problem with this approach in 1995: IBM did not provide every service its clients wanted. In particular, it had no services related to the Internet.3

L. L. Bean was one client that wanted such services. The company had long been one of the world’s largest catalog companies, and the rapid rise of the commercial Internet had put numerous worries on the minds of its management. While the situation was not yet dire in 1995, the potential rise of electronic retailing threatened to take a large fraction of revenue in the near future. The management also took seriously the boasting of Internet enthusiasts who foresaw a transformed sales experience, selling directly in Internet users’ homes. The new services were forecast to use faster and cheaper Internet technology, and to do so with more vivid web pages than L. L. Bean could offer in its mailings.

The management at L. L. Bean sought a strategy for developing its own electronic services and meeting the anticipated competition head on. L. L. Bean already knew how to fulfill an order that came over the phone, but the company could not perform a comparable service over the web. It needed the equivalent of a reliable and scalable electronic process, one that used the Internet, triggered the fulfillment process, and did so at a large scale. It also wanted to get this quickly and with as little money as possible, so it preferred not to start from scratch. That meant it had to find a way to make the new front end for electronic retailing work with its existing order-fulfillment process.

Why did L. L. Bean turn to IBM, its longtime computer supplier, for a new approach? The management of the two companies were tied together by a working relationship reinforced over many years. IBM had always been more than just a supplier of computer hardware. It helped build reliable and scalable processes, and its employees had deep familiarity with L. L. Bean’s needs and with standard industry practices. L. L. Bean also had little reason to go to the Internet enthusiasts with their entrepreneurial ventures. Many of the entrepreneurs focused on establishing businesses that potentially competed with L. L. Bean. As one of the largest catalog retailers in the world with an established brand, L. L. Bean had little room for the experimentation of an unknown start-up. It needed to satisfy its customers as soon as possible, and it trusted IBM to do that with as few errors as possible.

As it turned out, IBM and L. L. Bean’s experiments together came at a propitious moment for both. IBM’s solutions gave L. L. Bean what it wanted: a way to maintain its existing order-fulfillment process for catalogs and make it work with a new electronic retailing channel. It would make L. L. Bean one of the early leaders among old-line catalog firms and give it a viable path to respond to the new entrants. As this chapter describes, IBM also got what it wanted. It learned a great deal from this situation and applied those lessons elsewhere. Drawing on its traditional strength, enterprise IT, and making it work with web technologies, IBM’s employees learned how to create bridges between L. L. Bean’s IT processes and a new web front end. They called this bridge middleware. IBM regarded it as a definable enterprise IT project, with many elements that had value in many settings, and one in which IBM provided all the pieces to adapt the mainframe system to the web and Internet.

A speech to the Internet Commerce Expo on April 9, 1997, by Irving Wladawsky-Berger, the first general manager for the IBM Internet Division, made clear how well the middleware solution had worked for both parties. It came less than two years after IBM initiated its new Internet strategy, and the talk was titled “E-Business and Killer Apps.” Taking issue with the common definition of “killer app,” Wladawsky-Berger said:

If you mean one or two major applications that kill off the competition and leave a handful of IT firms with architectural control of this new era, you won’t find them, and I doubt they will emerge in the future. But from the customer’s point of view “killer” apps are appearing all over the place. These “killer” apps are the ones that help them do what they have always done … only faster and more efficiently, and with a global reach that two years ago was a dream.4

That set up the latter half of the speech, in which Wladawsky-Berger gave many examples in which IBM provided the middleware. Last of all, he mentioned L. L. Bean’s experience over the 1996 Christmas shopping season.5 In the speech L. L. Bean served as the poster child for successful adoption of electronic retailing by an established firm. Without revealing specific data, Wladawsky-Berger let it be known that traffic at the site had exceeded all expectations and forecasts.

IBM’s first two years’ experience with L. L. Bean illustrates a pattern that would be played out over many years as the commercial Internet began to grow. IBM emerged from its near-death experience with a healthy strategy and a lucrative line of services, and it did so by adapting the Internet to new circumstances. This strategy emerged because many of its clients—large enterprises—needed help integrating the commercial Internet into their business processes. Integration turned out to be productive because it reused existing capital for new purposes instead of requiring firms to build processes and operations from scratch. That approach yielded a cheaper solution in less time, and it preserved plenty of valuable processes these large enterprises had perfected in prior years. Hence, the buyer of IBM’s services made out well, and so did IBM.6 Understanding why illustrates many general lessons about the value of installing business processes related to the Internet.

There is one additional reason to examine IBM’s experience. Events illustrate a larger economic logic behind why the commercial Internet yielded productivity advances. First, as noted, the most common responses among established firms contained many of the same elements: retrofitting existing processes with new technology.

A second reason is more subtle. Retrofitting existing business processes did not happen in isolation from other investment in the economy; growing investment in large enterprises generated demand for a supply of complementary services. One type of service firm acted as an intermediary, helping a buyer improve its existing business processes and adapt many elements of new technology. A very different set of participants, data carriers, responded to that demand by investing in their operations, propelling the network to a larger scale of activity. In turn, greater network capabilities enabled additional commercial investment in a virtuous cycle, which became self-reinforcing.

The Rebirth of IBM

When the commercial Internet began to take off, Gerstner appointed a task force, which concluded that IBM needed a new Internet division, and the new division should not be in the classic mold. It should not have a product and its own profit and loss statement, engage in product development, or compete in a race to develop new features. Rather, it should coordinate activities across divisions in IBM.7 Why? Because, said Irving Wladawsky-Berger, who became head of the Internet division in late 1995, the “Internet touches on all parts of the company.… [You] cannot gather it together in one place.”

The essence of IBM’s strategy emerged in 1996, during his first year.8 Wladawsky-Berger’s team began talking with existing customers with whose business processes IBM’s staff had gained deep familiarity. Many of these were among the largest firms in the world; many had recently bought mainframes and related applications from IBM. More to the point, many of their managers had heard the outsized claims of the new-economy entrepreneurs and did not dismiss them. Many viewed their own firms as under threat and wanted IBM’s help in developing a solid strategic response. In that sense the market opportunity fell into the lap of Wladawsky-Berger’s team; many of IBM’s longtime clients had a need and were willing to pay for substantial services that addressed it.

Wladawsky-Berger’s team queried IBM’s technical staff and sales force. The team discovered that while a large number of IBM experts were studying the right set of problems, they had not constructed prototypes that buyers wanted. In the recent past, employees at IBM had developed network applications such as news aggregation, yellow pages, hosted electronic commerce, and shopping sites. These prototypes addressed many of the potential issues Wladawsky-Berger’s team heard about from customers. Yet the vast majority of prototypes in IBM’s laboratories used proprietary components and approaches, not Internet software and web protocols.9 In short, IBM’s prototypes had part of the vision right in a lab, but all of them implemented solutions that did not appeal to users. They were too cumbersome and far too expensive in comparison to what the web was making possible.10

At first Wladawsky-Berger’s team concluded that these prototypes gave IBM no comparative advantage in developing solutions for clients. Any competent HTML programmer could design a web page as easily as IBM, after all. The first impression was misleading, however, and Wladawsky-Berger’s views evolved as he came to appreciate that no other large supplier understood the business processes of large enterprise buyers as well as IBM’s staff did. IBM could do something unique: it could bring in programmers who had worked with a client in the past and understood what the middleware needed to accomplish. That made IBM well placed to address the buyer’s needs as well as preserve some of the existing processes.

IBM also could take on a role as a technological intermediary, aiding the client’s move from its unique situation to a distant technical frontier. Why did IBM have a comparative advantage at that task? Because IBM’s employees also had a vision of what the new technology could accomplish in a large enterprise, and they understood from their own prototypes what types of services could be built. IBM’s employees also already had familiarity with IT-related issues in security, firewalls, and preservation of brand value. That knowledge could be very valuable if married to an effective vision about how to implement the new frontier using new Internet and web technologies.

IBM could get there if it made only one large change to its attitude about using open systems. Many years later, looking back on it, Wladawsky-Berger described that critical shift:

IBM, like many large businesses, used to be very inward-looking, preferring to do everything by ourselves if at all possible. Embracing the Internet, its open standards, and overall inside-out approach turned out to be much more than a technology change for us. I think it had a very big impact on the overall culture in IBM, as it did in many other companies. It truly made us much more open—e.g., embracing new technology ideas from external communities, as we did with Apache, Linux, and Grid.

Principles for a strategy emerged comparatively quickly and employed these three elements:

•  A vision of the future prototype that created value by delivering new services and meeting the client’s need to match competitive threats;

•  A commitment to adapt to and respect the client’s existing business processes that already delivered value to the client’s users;

•  A preference for the open technologies of the Internet and web.

IBM had two early experiences that became canonical for the company and were referenced frequently. The introduction to this chapter gave an overview of the experience with L. L. Bean, which showed how the catalog firm opened new channels using web technology. The other example comes from United Parcel Service (UPS), which would both replace and enhance some of its activities. Rather than merely match a competitive threat, this enhancement would create an entirely new service for users.

UPS operated one of the largest shipping services in the world, moving packages from almost any location to any other. UPS sought ways to use the web to help customers keep track of these shipments. As with any shipping company, UPS already tracked packages using internal processes, and used that information to correct routing problems when they emerged. UPS sought to make that information more readily available.

Once again, IBM’s employees were quite familiar with the computing resources that supported internal processes. After all, IBM had installed the software to support such logistics. IBM’s staff concluded that it was possible—indeed, not technically difficult—to attach a web server to the transactional system, which had a mainframe computer at its core, and support a website with standard Internet functionality. What would that site do? It would answer a query from a user about the location of a package, sending information directly from the mainframe to the user.
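
The arrangement lends itself to a minimal sketch. The Python fragment below is purely illustrative: the actual UPS system ran on a mainframe behind IBM middleware, and every name here (TRACKING_RECORDS, query_backend, the sample tracking numbers) is hypothetical. It shows only the shape of the pattern the text describes, a web server answering each query by consulting a back-end tracking store.

```python
# Illustrative sketch of a web front end answering package-location queries
# from a back-end tracking system. All names and data are hypothetical.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

# Stand-in for the tracking records the mainframe's transactional system kept.
TRACKING_RECORDS = {
    "1Z999": "Departed sorting facility, Louisville, KY",
    "1Z123": "Out for delivery",
}

def query_backend(tracking_number: str) -> str:
    """In the real deployment, middleware translated the web request into a
    transaction against the mainframe; here a dictionary lookup stands in."""
    return TRACKING_RECORDS.get(tracking_number, "No record found")

class TrackingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Parse a request such as /?tracking=1Z999
        params = parse_qs(urlparse(self.path).query)
        number = params.get("tracking", [""])[0]
        body = f"Package {number}: {query_backend(number)}".encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # A user queries http://localhost:8000/?tracking=1Z999 and receives the
    # package's last recorded checkpoint, with no human operator involved.
    HTTPServer(("localhost", 8000), TrackingHandler).serve_forever()
```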

After it was installed, many users went to the automated system repeatedly, watching their packages as they passed through various checkpoints. This had not been possible previously with human operators, as it would have been prohibitively expensive. According to Wladawsky-Berger, users loved the insight into the operations: “People thought these were magical.”

Both parties benefited from the deal. To IBM, it looked again like an enterprise IT project with middleware, where IBM provided all the pieces to adapt the mainframe system to the web. To UPS it looked like a cost-saving measure, because the website bypassed an operator. In addition, it offered a convenient way for the user to find information, enhancing the value of the company’s services.

IBM found similar issues at many of its clients. Integrating aspects of traditional IT with the web became a large business for IBM. Because many firms in many industries used similar processes, IBM’s solutions for L. L. Bean and UPS generalized to a variety of settings and processes. For example, developing an online presence was valuable for hotels needing to fill rooms, or large retailing firms eager to reach more customers. All made use of back-office computing in innovative ways, typically by integrating a web-based application with an existing mainframe-supported business process so that users could reach the process over the Internet. As Wladawsky-Berger summarized, “We discovered the strategy in the market place, what our (longtime) clients wanted and what others were not doing.”11

Once the Internet division had articulated a strategy, the next step was to implement it inside IBM. In prior years that could have encountered major hindrances, but, as it turned out, virtually every IBM division manager cooperated with its execution. The “near-death” crisis at IBM in the early 1990s had helped soften the resistance to open technology, paving the way toward using open systems throughout IBM. Moreover, no division manager wanted to oppose the CEO, who backed the new strategy and was prepared to implement it across the organization.12

Many division managers were persuaded that the new strategy was in their self-interest, because they saw that their divisions could receive more sales if they made their products and services compatible with TCP/IP and the web. Although such conversion could be an enormous task, the growth potential was also apparent. Accordingly, every IBM division began the engineering efforts to add designs that reflected the Internet strategy to existing products.

The tough times of the early 1990s thus became a hidden blessing during the implementation of the Internet strategy. They helped silence the loudest defenders of the older, proprietary approaches inside IBM, defenders who had been quite vocal prior to Gerstner’s appointment and even at the start of his tenure. The mainframe division had defended proprietary solutions, for example, but had suffered serious declines in revenues in the 1990s, and under Gerstner no longer held veto over firm strategy.13

As IBM gained one successful example after another, Wladawsky-Berger recognized early that IBM had a comparative advantage in providing large IT projects with these characteristics. Few other firms possessed the same combination of client relationships, intimate knowledge of existing processes, and familiarity with the prototypes of electronic commerce. Few other firms had sufficient staff to handle large and complex projects. This orientation also was consistent with the customer-oriented strategy Gerstner was implementing throughout the company.14 The strategy sustained itself throughout the latter part of the 1990s and beyond.

IBM’s unique position fostered an additional strategic element—to provide intellectual leadership and associate IBM with that leadership. Despite the publicity that came with such leadership, IBM’s near-death experience did hurt its reputation with financial analysts. Although IBM did not face problems selling services in the boom atmosphere of the late 1990s, its approach did not impress many financial analysts at the time. To an analyst who subscribed to Internet exceptionalism, IBM appeared to be merely profiting from helping old firms stave off extinction for another day. The value of IBM’s approach became widely appreciated only much later, after the collapse of the dot-coms. After 2001 IBM was still selling services to clients and did not suffer large drops in demand as a result of the dot-com bust.15

What overall lesson does this illustration provide? IBM found ways to recombine the new and old into a complementary whole rather than build new IT systems from scratch. It helped pioneer an approach to changing the face of selling on the web and the Internet, altered the value of electronic commerce, and eventually adjusted the conception of where the bulk of value lay. As such, it is also an example of a leading firm altering its previous conception and adopting the vision of those who brought innovation from the edges. It even made a business from explaining that vision to others and selling implementations of it.

Moreover, IBM’s experience partly explains how and why the first wave of pure dot-com entrepreneurial businesses did not wholly—or even substantially—replace established large firms. As it turned out, a number of traditional business processes could be improved with complementary inputs from appropriate implementations of web-based technology. That altered business processes in ways that preserved value in existing brands, relationships with customer bases, operations and order-fulfillment processes, and relationships with suppliers. More to the point, it gave established firms a potential pathway to competing with new entrants, one that took advantage of declining costs of Internet access and of the potential for adding new capabilities affiliated with web-based technologies.

Demand for Enhanced Business Processes

IBM’s experience illustrates the most important factor shaping business investment in the commercial Internet in the late 1990s—the economics of business process innovations. Indeed, it is not much of an exaggeration to say that a large part of the failure of Internet exceptionalism was due to the blindness of many pure-play entrepreneurs to the unglamorous success of investing in business process innovations that worked well with existing business processes.

Business process innovations were easily misunderstood during the early stages of the commercial Internet. They were not the standard fare of undergraduate courses in economics or MBA courses in business development. Business process innovations involved both new processes and products, and the payout was generally not known with any certainty at the outset. Because important business process innovations in enterprise IT occurred on a large scale, they typically involved a range of investments, both in computing hardware and software, and in communications hardware and software. They also involved retraining employees and redesigning organizational architecture, such as its hierarchy, lines of control, compensation patterns, and oversight norms. Total costs and benefits varied, depending on circumstances and the resolution of unexpected problems that emerged.

A number of misunderstandings shaped common perceptions of business process innovations, and these in turn created confusion about how the Internet would yield productivity gains. For example, there was a myth that new IT hardware or software yielded the vast majority of productivity gains by itself. In fact, business process innovations were not often readily interchangeable with older products or processes, meaning that the initial investment often did not generate a substantial productivity gain until after complementary investments, adaptations, and organizational changes. Many of these necessary changes were made long after the initial adoption. Hence, it was common for a business process innovation to have zero or negative returns in the short run before it yielded positive returns.
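
A stylized worked example clarifies the point; the numbers are invented for illustration and come from no study cited here. With a discount rate r and net cash flow R_t in year t, the value of the investment is

\[
\mathrm{NPV} = \sum_{t=0}^{T} \frac{R_t}{(1+r)^t}.
\]

Suppose r = 0.10, an upfront cost of 100, disruption losses of 10 in each of the first two years while complementary changes are made, and gains of 60 in each of years 3 through 5:

\[
\mathrm{NPV} = -100 - \frac{10}{1.1} - \frac{10}{1.1^{2}} + \frac{60}{1.1^{3}} + \frac{60}{1.1^{4}} + \frac{60}{1.1^{5}} \approx 6 > 0.
\]

Measured over its first two years the project loses money, yet over its life it pays off. Judging such an investment by its short-run returns alone understates its value.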

Among the functions altered by electronic commerce, for example, use of electronic retailing generated many changes to routine processes, such as altering customer/supplier interactions, and these took time to smooth out. That explains why adding electronic commerce to something as basic as parts supply yielded productivity gains eventually, but occasionally long after the initial rollout.

That observation relates to another common misunderstanding: the planning myth. Although the installation of any substantial business process innovation required planning—that is, administrative effort by an enterprise in advance of installation to coordinate complementary activities—such planning alone rarely ended the administrative tasks required to generate productivity gains. Administrative effort did not cease after installation, or even necessarily settle into a routine set of procedures. Rather, planning continued throughout implementation and morphed into reacting to unexpected issues. Hiring and training personnel generated use of new hardware, software, and procedures. New users in new settings then noticed unanticipated problems, which generated new insights about how to address them.

Consider the example of adding web-enabled processes to the regular routines behind a simple business process, the procurement of parts. Electronic methods made delivering an order more convenient, potentially made the billing process smoother, and reduced the number of errors in orders due to misspellings or poor handwriting. But not all orders go according to plan, and poorly designed software could make adjusting an order more difficult when a component was out of stock or weather interfered with shipping parts on schedule. Unanticipated issues typically required considerable effort from users, because they had to tailor newly installed software while simultaneously managing changes with multiple suppliers. If users were changing complex processes, then the challenges could be very difficult.16

That relates to a third common misunderstanding: the shrink-wrap myth. Installing business process innovations was not equivalent to installing shrink-wrap software for a PC, which worked instantly or immediately after training staff. Instead, coinvention was often necessary: the postadoption invention of complementary business processes and adaptations aimed at making adoption useful.17 As in the first myth, the initial investment in IT was not sufficient for ensuring productivity gains. Those gains depended on later inventive activity, that is, on whether employees in the adopting organization found new uses that took advantage of the new capabilities and invented new processes for the many unanticipated problems. Once again, many users failed to anticipate this inventive activity and retained the wrong staff, or did not employ the right consultant. This too could shape the immediate payoff. Until issues of coinvention were addressed, in many instances there was a strong potential for delayed, if any, payoff.18

Indeed, that problem was illustrated by the challenges of adding electronic commerce to consumer relationships. Catalog companies such as L. L. Bean had to change what they showed customers, but did not change the order-fulfillment process much. Order fulfillment involved tracking and billing, moving goods from warehouse to shipping, and, in the event of error, additional intervention. As it turned out, catalog companies had a comparatively easier experience adopting electronic commerce than did pure-play firms because their order fulfillment processes required less coinvention after the adoption of electronic commerce.19

Misunderstandings about the necessity of coinvention sometimes generated a fourth myth—namely, the expectation that the entire cost of investment was incurred as monetary expense. In fact, nonmonetary costs—unexpected issues causing delay, hassles for employees, or foregone opportunities—comprised a significant share of the risk of installing a business process innovation, and this was the most substantial issue established firms faced. Once again, this could create delays too.

The issues became particularly challenging when an enterprise could not shut down while installing the software. In such settings, issues had to be resolved and bugs removed as they appeared.20 Moreover, interruptions to ongoing operations generated large opportunity costs in foregone services. These costs could be substantially mitigated with internal resources (for example, development of middleware by in-house IT staff) for which there might be no market price or, for that matter, no potential for resale.21

A final myth arose in some organizations facing these issues: the go-it-alone myth. Many organizations failed to recognize that they shared issues with many others, and that they did not have to face the growth of electronic commerce alone. As with other enterprise IT, third-party consulting services could be hired on a short-term basis from the local market and could attenuate these costs by sharing the learning. The incentives around utilization and investment also could change considerably over time due to restructuring of the organization’s hierarchy and operational practices, rendering old internal staffing levels inappropriate for the evolving needs of the organization. Once again, third-party consulting services could fill those gaps.

The presence of coinvention had two key implications for the deployment of the Internet inside enterprises. First, there was a visible relationship between investment in the Internet to enhance enterprise IT and local conditions within a metropolitan area. Large cities had thicker labor markets for complementary services and specialized skills, such as consultants who could program in both the language of mainframes and the language of the web. The presence of those labor markets for technical talent, greater input sharing of complex IT processes, and greater knowledge spillovers increased the benefits of adopting frontier technologies in big cities relative to other locations.22

Second, enterprises with existing IT facilities could expect lower coinvention costs than establishments without extensive operations, which shaped costs around the time of adoption. Having more resources elsewhere in the organization meant access to lower-cost resources and loans between an organization’s projects. Programmers with IT experience could reduce development costs if they were able to transfer lessons learned from one project to another. Prior work on other IT projects could create learning economies and spillovers that decreased the costs of adapting general purpose IT to organizational needs, reducing the importance of external consultants and local spillovers. Existing support in firms for PCs also made the transition easier, making support for Internet access and the web an additional routine for the same staff.

In 1996 the technology was new and untested in a wide set of circumstances. Existing businesses needed something, but each faced different economic circumstances that arose from differences in local output market conditions, quality of local infrastructure, labor market talent levels, quality of firm assets, and competitive conditions in output markets. In short, every business faced a different adaptation problem.

The Market for Coinventive Activity

There were many ways for a market to organize coinvention activity. Firms in need of coinvention could do it themselves or hire third parties to do it for them. Firms that supplied services to help users reduce the costs of coinvention could specialize in problems that many buyers shared. Such specialists could choose to address problems common to an industry, a location, or an organizational activity. Being part of a crowd was the best situation for a buyer, as a crowd generally induced a considerable supply of services and allowed one coinvention experience to generate lessons for another. Being unique was more expensive for a buyer, as it left the buyer unable to take advantage of the services third parties offered.

As it played out, providing coinvention services in the late 1990s became an enormous business opportunity. The gap between the needs of business and the technical frontier, as defined by the potential gains from using the commercial Internet, was large and persistent. This gap created business opportunities for others to help existing firms advance from where they found themselves to where they aspired to go. Simply stated, many established businesses faced similar coinvention issues, and consequently innovated in similar ways. As illustrated above, IBM took advantage of this opportunity. It was not alone. Andersen Consulting (later Accenture), Booz-Allen, and many others grew client bases under these conditions.

While many dot-com entrants received substantial public attention, many other companies began the process of upgrading their IT installations in order to implement enhancements and thereby increase their productivity from the web-enabled Internet. Because the costs of basic participation and enhancement differed, the second layer of business investment—enhancement—followed a trajectory distinct from basic participation.

Just as Wladawsky-Berger stated in his speech, there was no single “killer app” generating large investment in enhancement. Instead, the most complex and productive adaptations of the Internet followed the path laid out by the most innovative IT enterprises. Investment was directed toward automating functional activity or business processes within an organization, such as bill collection, inventory replenishment, or point-of-sale tracking.23 These activities were essential for operating an organization. Such rearrangement of functions had the potential to lead to, for example, the reorganization of supply chains in many specific markets over the long run.24

An additional factor accelerated investment in enhancement activity. The entry of entrepreneurial firms provided motivation for many existing firms to act. The rapid success of firms such as HoTMaiL fostered the perception that some entrepreneurial firms would quickly find a competitive advantage against established businesses. The best chance for existing firms was to avoid losses, and the fear of large losses motivated implementing quick change through large investment.25 In addition, accelerated rates of replacement and upgrades of existing systems arose from several other factors: constant improvement in the quality of PCs, falling prices along the entire range of IT equipment,26 general improvement in networking and the Internet, and the need to address potential Y2K issues.27

Enhancement of IT was not a minor activity in many firms, and it required deliberate change in large organizations. Even in the most adventurous organizations, these changes would not come without a large amount of analysis to legitimize any departure from refined and tested processes. Books such as Shapiro and Varian (1999) or Hanson (2000) were often the earliest economic and marketing analyses of the Internet read by many American senior executives. Because these books used a familiar language, and came from faculty at traditional universities, they influenced thousands of companies that finally got on the Internet in the latter part of the boom.

The experiences of lead users—pioneers who make investments in new technology in order to seek competitive advantage and achieve dramatic leaps in productivity—can illustrate these observations. A survey of lead users of enhancement at the end of 2000 showed the variety of industries the Internet affected.28

The two biggest lead adopters in using the Internet for enhancement were not a surprise. They came from two very different areas: management of companies and enterprises; and media, telecommunications, and data processing. The former represented the financial side of Internet use, including corporate headquarters for multidivisional firms, securities firms, and financial holding companies, all of which were longtime users of frontier computing. The latter included publishing firms, thus representing the change the Internet brought to media. It also included information and data processing services, an industry with firms such as AOL and other Internet-access providers. These too were frontier users.29

The second tier of lead users also represented a wide mix of industries that had a history of pushing out the frontier of computers. These were finance and insurance, professional and scientific services, utilities, and wholesale trade. The latter two included heavy use of sophisticated applications, which combined database software with communication technologies. The next tier of lead users included a large part of manufacturing, notably computer and electronic manufacturing, and printing and related support activities. Another important lead user of this category was oil and gas extraction, a longtime user of computation-intensive modeling. Lastly, many lead users employed logistical information for greater efficiency, such as water transportation, pipelines, motor vehicle and parts dealers, electronics and appliance stores, sporting goods, and nonstore retailers. The last four were leaders in consumer e-commerce.30

The activities of lead adopters coincided with geographic variation in the propensity to adopt. Establishments in MSAs (metropolitan statistical areas) with more than one million people were nearly 50 percent more likely to adopt enhancement than establishments in MSAs with fewer than 250,000 people. The strong urban bias in the adoption of advanced Internet applications resulted from the fact that lead-user industries tended to be located in urban areas. Related work suggests that small establishments in nonurban locations might have been unable to take advantage of Internet opportunities due to the lack of thick labor markets for technical talent.31

Overall, the growth rates in real IT investment were extraordinarily high in the latter part of the 1990s. Investment in software reached 9.5 percent growth per year from 1990 through 1995, and 14.2 percent from 1995 through 2000. Computing equipment growth rates reached, respectively, 13.5 percent and 7.1 percent per year for the first and second halves of the decade. Communications equipment reached 7.2 percent and 15.5 percent growth rates. All of these exceeded rates of growth in non-IT capital, which reached 6.8 percent and 4.9 percent per year over the same periods.32 The productivity gains from this enhancement ultimately became one of the lasting legacies of this period’s investment.

Economy-wide investment in IT continued to grow at high rates for all of the late 1990s. Investment in IT, pictured in figure 10.3, continued to grow at rates well above 20 percent a year. Market demand for new goods and services was as high as any participant had ever experienced. Suppliers of equipment, business software, and business services had never had it so good. Indeed, as it would turn out, the good times would last until the beginning of the new millennium.

Capital Deepening of the Network

Capital deepening refers to investment in existing processes, where that investment aims to increase the scale of existing activity or, at most, change it in incremental ways. Generally speaking, capital deepening lowers costs by preserving existing business processes while it increases scale. As it turned out, the growth of enhanced business processes motivated investments that led to capital deepening in the network that supplied data.

FIGURE 10.3. Real IT investment growth rates, 1990–2004, year-over-year % change (Doms 2004)

Capital deepening in the network was essential for everyone, as it would permit doubling the scale of traffic handled by the network with far less than double the scale of investment. If the Internet was to support a wide array of applications for a dispersed set of users, society would need tenfold and one-hundred-fold increases in traffic, and those increases had to be available at less than tenfold and one-hundred-fold increases in expense.

Although cost reductions in data carrier services were precisely the gains Stephen Wolff had envisioned when he initiated the privatization of the Internet backbone, there was little precedent for forecasting the next generation of investment to achieve those gains. Growing demand certainly would motivate carriers to respond with investments that attempted to generate efficiencies. What would those investments look like? Would they be more of the same, but on a larger scale? Would those investments require fundamental restructuring of the supply of services by infrastructure firms? Would new increases in traffic put enormous strains on the Internet’s existing infrastructure?

The shadow of recent history hung over the open question. The network had been built for researchers sending e-mail and files, not households and business users sending web traffic. How would the privatized network handle the new type of traffic? Would more traffic merely lead to more investment in the same activity, or would it require a fundamental rethinking of the structure of processes supporting the network? There was no historical basis for addressing the question.

The first test of these concerns came early, as web-based traffic began to grow. Electronic mail had been the dominant application prior to the privatization of the Internet and invention of the web. It continued to be popular with adopters after 1995. All surveys showed all new Internet adopters making use of e-mail,33 which generated the most Internet traffic in 1995. Lotus, Microsoft, HoTMaiL, and even Netscape provided widely used services. Yet no sensible observer expected this dominance to last. Everyone expected traffic affiliated with the web to overtake e-mail traffic. Indeed, in retrospect it was clear that it did by the middle of 1996.34

Capital deepening enabled the growth in web traffic. At the time, it was quite challenging to get solid facts about how fast traffic grew. In the latter part of the 1990s advocates for the new economy were fond of claiming that traffic doubled every year, or at even faster rates. Firms that benefited from this assertion, such as WorldCom, made similar claims.

These assertions were rarely supported by actual data, and rarely met with skeptical analysis from contemporaries. In retrospect, the facts seem not to support the most optimistic claims.35 While traffic volumes doubled every few months in the early years (the early 1990s), this growth rate was an artifact of the low base from which traffic grew. Although traffic growth could vary across locations due to local influences, generally speaking, by the late 1990s traffic was growing at a rate of 50 percent per year.36 Estimates can be seen in table 10.1.

After 1997, a structure began to take shape around this increasing traffic. It was part mesh and part hierarchical, using “tiers” to describe a hierarchy of suppliers.37 The first group of backbone providers in the United States (MCI; Sprint; UUNET; Bolt, Beranek and Newman [BBN]) had been the largest carriers of data in the NSF network. In 1995 and 1996, any regional ISP could exchange traffic with them. At that time, the backbone of the US Internet resembled a “mesh,” with every large firm both interconnecting with each other and exchanging traffic with smaller firms.38 Some of these firms owned their own fiber (for example, MCI) and some ran their backbones on fiber rented from others (for example, UUNET).

TABLE 10.1. US Internet traffic, 1990–2003, estimates at end of each year (in PetaBytes/month)

1990    0.001
1991    0.002
1992    0.004
1993    0.008
1994    0.016
1995    0.15
1996    1.5
1997    2.5 to 4
1998    5 to 8
1999    10 to 16
2000    20 to 35
2001    40 to 70
2002    80 to 140
2003    130 to 210

Source: Odlyzko (2003).

A hierarchy began to take shape with tier 1 suppliers acting as national providers of backbone services who charged a fee to smaller firms to interconnect. The small firms were typically ISPs that ranged in size and scale from wholesale regional firms to local ISPs handling a small number of dial-in customers. Tier 1 firms did most of what became known as “transit” data services, passing data from one ISP to another ISP, or passing data from a content firm to a user. In general, money flowed from customers to ISPs, which treated their interconnection fees with backbone firms as a cost of doing business.

Tier 1 firms adopted a practice known as peering, which was a self-propagating way of reinforcing a tier 1 firm’s status. Specifically, peering involved the removal of all monetary transfers at a point where two tier 1 providers exchanged traffic. Peering acknowledged the fruitlessness of exchanging money for bilateral data traffic flows of nearly equal magnitude at a peering point. Hence, it lowered transaction costs for the parties involved. However, because the locations and features of peering points were endogenous, and because large firms that denied peering to smaller firms would demand payment instead, many factors shaped negotiations. As a result, the practice became controversial. It was impossible to tell whether peering reflected a more efficient transaction for large-scale providers or the market power of large suppliers, a status that smaller, non-tier-1 firms could not acquire.39

Another new feature of Internet infrastructure also began to emerge at this time. Most households received considerably more data (from content firms) than they sent out to others. Such asymmetric traffic put more strains on the network, which had to find ways to deliver large amounts of data to households quickly.

That asymmetry led to the rise of third-party caching services. Caching was an enhancement to the operations of the network backbone. For reasons explained in a moment, it accomplished what capital deepening would have been unable to accomplish except at a very high cost.

Starting in 1998 Akamai—a pioneering caching company—began to locate its servers within key points of an ISP’s network, sometimes paying the ISP a nominal fee for the inconvenience, but most often paying nothing. Aspiring to reduce delays for users, content providers and other hosting companies then paid the caching companies to place copies of their content on such servers in locations geographically close to users. Users were directed to those servers instead of the content provider’s home site, so the data traveled a shorter distance and households received faster responses to queries.
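
A minimal sketch, in Python with hypothetical hostnames and regions, conveys the routing logic the text describes: direct each user to a nearby cache when it holds a copy of the requested content, and to the origin server otherwise. Real services such as Akamai’s implemented this inside DNS and with far more sophistication; nothing below describes their actual systems.

```python
# Illustrative content-delivery routing: serve from a nearby cache when
# possible, otherwise fall back to the origin. All names are hypothetical.
CACHE_SERVERS = {
    "us-east": "cache-nyc.example.net",
    "us-west": "cache-sf.example.net",
    "europe": "cache-ams.example.net",
}
ORIGIN = "www.contentprovider.example.com"

# Content currently held at each cache (hypothetical state).
CACHED_CONTENT = {
    "cache-nyc.example.net": {"/catalog.html", "/logo.gif"},
    "cache-sf.example.net": {"/catalog.html"},
    "cache-ams.example.net": set(),
}

def resolve(user_region: str, path: str) -> str:
    """Return the host a user should fetch `path` from: a geographically
    close cache if it holds a copy, otherwise the origin server."""
    cache = CACHE_SERVERS.get(user_region)
    if cache and path in CACHED_CONTENT.get(cache, set()):
        return cache  # short round trip; backbone congestion bypassed
    return ORIGIN  # cache miss: fetch across the backbone from the origin

if __name__ == "__main__":
    print(resolve("us-east", "/catalog.html"))  # served from the NYC cache
    print(resolve("europe", "/catalog.html"))   # no local copy; origin serves
```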

Capital deepening in the backbone of the network would have been very expensive in the absence of caching services. Both the ISP and the content firm benefited from caching, as it bypassed potential congestion in other parts of the network. In general, these services became known as overlays, because they were not part of the original design of the noncommercial Internet.40

These changes raised an open question about whether the market for intermediate data transmission was still competitive. For much Internet service in urban areas, the answer appeared to be yes. ISPs could choose from multiple backbone providers and multiple deliverers of transit IP services, and many ISPs “multihomed” to get faster services from a variety of backbone providers. ISPs also had multiple options among cache and content delivery network (CDN) services. For Internet service outside of urban areas, the answer appeared to be no. ISPs did not have many options for “middle mile” transit services, and users did not have many options for access services. The high costs of supply made it difficult to change these conditions.41

Changes in National Policy

Capital deepening of the US data network would have occurred under almost any policy regime, but its rate was sensitive to the policy regime of the country. As it was, in many ways both large and small, the Federal Communications Commission (FCC) tried to foster the Internet’s growth from 1992 through much of the decade.42 At the same time, the statutory regime changed with the passage of the Telecommunications Act of 1996. It was the first major piece of federal legislation for telecommunications since the 1934 act that established the FCC. While the Telecom Act contained many complex features, several facets shaped the capital deepening over the next few years.43

The act did not impose a financial burden on the young commercial activity. ISPs did not have to pay the universal service fees that telephone companies had to pay. The act also exempted cable companies, and—because of the asymmetric burden placed on telephone companies—that legal exemption became especially important to regulators when cable companies began aspiring to convert their lines for carrying Internet traffic to homes.

Relatedly, the act also contained provisions for the “E-rate program,” which had both real and symbolic significance. The E-rate program, aimed at alleviating inequities in the provision of the Internet, was proposed as a funding scheme for bringing the Internet to disadvantaged users, particularly at schools and libraries. Although delayed by legal challenges, the E-rate program eventually raised over two billion dollars a year from long-distance telephone bills, and it continued indefinitely. Closely identified with the ambitions of Vice President Al Gore, who had made fostering next-generation information technology a special interest, the E-rate program was labeled the “Gore tax” by opponents.44

Internet infrastructure received an additional implicit and explicit subsidy with the passage in October 1998 of the Internet Tax Freedom Act. It placed a federal moratorium on taxing the provision of Internet access. Unlike several other communications technologies, such as cellular telephony or landline telephony, Internet access was free from local attempts to tax the service (except those that were grandfathered in prior to October 1, 1998). The market’s young status justified the law, according to supporters, who worried that excessive local taxation could deter growth for the new nationwide applications of electronic commerce in retailing.

Probably most important, the 1996 Telecom Act tried to encourage competition in telephony and data networks by formalizing national legal definitions for competitive local exchange companies (CLECs).45 The new definition formalized a broad framework governing the access of CLECs to facilities from incumbent local exchange companies, or ILECs, which was the formal name for local telephone companies. These provisions were intended to advance competitive local telephony, but they also fostered the growth of CLECs, which supported ISP growth throughout the United States.46 This will be discussed in more detail in a later chapter about the telecom meltdown, and for now a simple statement will do: Eventually CLECs presented an acute issue for the intersection of US telephone and Internet policy. While the growth of CLECs supported growth of the Internet, an unintended subsidy had been built into the system, transferring money from telephone companies to CLECs. As it would turn out, these implicit subsidies would be removed in 1999.47

These policies mattered in another way. By 1998, many local telephone firms had begun offering basic Internet service. In addition, the long-distance and data carrier AT&T emerged as the largest retail Internet provider to businesses; AT&T already had a data carrier business with firms, and that business grew larger when IBM sold the company its operation in 1997. Likewise, WorldCom was not far behind AT&T, having acquired UUNET and other firms. Nevertheless, as a result of inheriting so many policies from competitive telephony, the independent dial-up Internet market had worked out a set of processes for doing business with local telephone companies. In many countries an independent dial-up industry never got off the ground because government policies did not encourage it; in contrast, in the United States government policies did not get in the way, and many independent firms thrived.

Seen in retrospect, the growth of the dial-up network was quite an achievement. The national network grew without heavy-handed government intervention to make firms coordinate with one another. Symptoms of this achievement arose in many places: independent dial-up firms competed with large firms, and everyone interconnected; backbone providers (MCI, Sprint, and such) were the largest carriers of data, and many of these firms also provided retail ISP services to consumers while renting facilities to other ISPs, who rented rights to resell use of their modem banks; interconnection disputes also did not interfere with new entry. For example, Level 3, which had raised considerable funds, announced plans to enter at a national scale with a network architecture well suited to TCP/IP traffic; it began building its network in 1998 and did not require government intervention to resolve interconnection issues.

Some restructuring did encounter roadblocks from policy makers, but it was rare and arose only when consolidation threatened to eliminate competition. When MCI and UUNET became part of WorldCom in 1998, WorldCom became the largest backbone provider and a large reseller of national POPs to other firms. The divestiture of some Internet backbone became a condition for government approval of the MCI-WorldCom merger. The proposed merger between Sprint and WorldCom also generated opposition. European regulators were the first to oppose it, and the US Department of Justice (DOJ) almost certainly would have opposed the merger too, at least in part, but the merger was called off before official DOJ action. Relatedly, in late 1998, Bell Atlantic proposed a merger with GTE, forming Verizon. As a condition for approval, GTE spun off its backbone as a separate entity, forming Genuity.

Viewed in the light of WorldCom’s later accounting fraud, these government reviews appear to have been prescient. The government’s actions helped to partially deconcentrate ownership of basic infrastructure assets and spread investment decisions into multiple hands. They did not allow a dominant firm to dictate terms of interconnection to any other, or to impose its vision for the architecture on others. The US backbone, therefore, did not depend solely on the actions of a single firm, or on the (in)competence of that firm’s management.

A Sense of Urgency

From the middle of the 1990s and onward for the next half decade, capital deepening extended far and wide throughout the network economy. Large enterprises, such as L. L. Bean and UPS, made investments aimed at developing new services, reducing costs, and responding to competitive threats. Network suppliers, such as WorldCom, MCI, and Level 3, made new investments aimed at carrying a large increase in traffic, enabling new users to realize productivity gains. Third parties, such as IBM, made a business of helping the investment work with existing equipment so that it generated the maximum productivity advance. All of this investment extended the Internet into a vast array of business activities throughout the economy.

At a broad level, such a combination of expansion in sales, variety in approaches, and increasing standardization in operations was not unusual for a young entrepreneurial market. Yet to contemporaries the specific accomplishments appeared remarkable. The scale of investment was huge, and no contemporary communication network had ever been as entrepreneurial or had expanded as fast.

Participants chose to act without spreading out the investment over time. This urgency arose for multiple reasons. For one, competitive pressures motivated actions sooner rather than later. Many established firms perceived a viable threat to revenue from entrepreneurial entrants and, eventually, from their traditional competitors, who were making productive investments in electronic commerce. Second, growth of browser-based applications motivated more adoption of Internet access in homes and businesses, which generated more investment from suppliers of data services and in infrastructure to support it. Meeting this genuine growth in demand also motivated investments sooner rather than later.

An additional factor, and the most unusual one, was that the simultaneous growth of investment in all of these activities became self-reinforcing. Better infrastructure generated better services, which motivated new users and new application development, deepening capital in every related process.

This positive reinforcement often went by another shorthand label, and was called a “network effect.” In dry abstract terms a “network effect” arises if the value of participating in a community rises with the scale of participation in that community. In this instance, more participants wanted to join a whole array of complementary TCP/IP-based activities. Said another way, the simultaneity of capital deepening investment was symptomatic of many commercial firms facing the incentives to invest now (instead of later) to take advantage of the network effect they all experienced at the same time.
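
One common stylized formalization, offered here only as an illustration and not as a model the chapter itself commits to, is Metcalfe’s law: if every pair of participants can derive value from interacting, a network of n participants supports n(n-1)/2 connections, so

\[
V(n) \propto \frac{n(n-1)}{2}, \qquad \frac{V(2n)}{V(n)} = \frac{2(2n-1)}{n-1} \approx 4 \ \text{for large } n.
\]

Doubling participation roughly quadruples the value of joining, which helps explain why firms that expected others to invest had an incentive to invest sooner rather than later.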

This positive reinforcement contributed to impatient investment, and, unrecognized by many at the time, connected the beginning of the boom to its later fate. As later chapters describe, in the latter part of the 1990s some investors took the positive reinforcement for granted, anticipating its appearance.

That anticipation could be either correct or incorrect. When correct, it allowed for additional investments in advance of the use of the capacity. The positive experience in the first few years of the boom—1996, 1997, and 1998—largely rewarded those who took the reinforcement for granted or, equivalently, presumed an optimistic outcome. Many new services met with quick uptake and use.

That anticipated demand motivated many to continue investing into 1999 and 2000. Nobody planned for the Internet's investment boom to cease, and it made no sense to plan for calm. The investments were urgent and required attention; their inevitable end was something to worry about later. With everyone taking each other's investment for granted, however, it was only a matter of time before somebody overshot—either because they presumed too much about the actions of others, jumped out in front too quickly to gain a competitive advantage that never materialized, or let their optimism overrule good business sense.

Such overshooting lay in the future, too far away to be a concern just yet. Contemporaries talked as if they were experiencing a once-in-a-lifetime event, and many acted accordingly. Their impatience was visible in a wide range of economic activities—in cavalier attitudes toward equipment purchases, in stock valuations that presumed sales would grow exponentially in response to business investment, and in the use of unethical competitive tactics that their purveyors presumed were too expensive to police. Managers at firms that had never been big players in IT faced issues they had never considered, and many chose to invest.

1 See Wladawsky-Berger (2005).

2 See Wladawsky-Berger, private communication, August 2010.

3 See Gerstner (2002) for an extensive description of the state of the company when Gerstner arrived, and the rationale for his approach to turning it around.

4 See http://patrickweb.com/points_of_view/presentations/audio-and-text-transcripts/ice-los-angeles-1997/, accessed April 2013.

5 While advertising the benefits of their standards to support secure transactions, other firms mentioned in this speech included the global engineering company Asea Brown Boveri; the transportation firm Caterpillar Incorporated; Japan Airlines; the Bank of Montreal; Smith Barney; Charles Schwab; MasterCard and Visa; the Swiss Federal Railway; and Acxiom Corporation, among others.

6 Reflecting that success, IBM's stock price rose from $14.12 at the end of 1993 to $120.96 at the end of 2001.

7 Gerstner had Denny Welsh, head of the services division for IBM, organize a large task force. Wladawsky-Berger, private communication, August 2010. Also see Wladawsky-Berger (2005). The independent division also avoided the political issues involved in assigning the Internet strategy to one existing unit, which would have made it challenging to get other units to buy into the organization's strategy. See Gerstner (2002).

8 Interestingly, the strategy emerged at the same time as the removal of the 1956 IBM consent decree, which had forbidden IBM from entering some complementary markets, such as consulting services. See CNET (1996).

9 Wladawsky-Berger, private communication, August 2010.

10 Also see Gerstner (2002) for an extensive discussion about the use of proprietary or nonproprietary technologies, and the attitudes he encountered when trying to move the firm to nonproprietary approaches.

11 Wladawsky-Berger, private communication, August 2010.

12 Wladawsky-Berger, private communication, August 2010. Also see Gerstner (2002) for a similar point.

13 Perhaps one of the most prominent symbols of this change came from the experience of Ellen Hancock. In the early 1990s, Hancock, then head of the networking division, had staunchly defended selling exclusively proprietary equipment. Gerstner, however, spun off the division, and Hancock moved on to other prominent executive positions in high tech. Hancock left in 1995, prior to the development of the Internet strategy, so spinning off the networking division appears to have been a propitious move rather than one that fully anticipated the open Internet strategy. See Gerstner (2002).

14 See Gerstner (2002).

15 Wladawsky-Berger, private communication, August 2010.

16 See McElheran (2011) and Bresnahan, Brynjolfsson, and Hitt (2002).

17 See Bresnahan and Greenstein (1997). Also see Forman and Goldfarb (2006).

18 Delays arose from one of several different causes. First, delays could arise because of the technical necessity of investing in one stage of a project only after another was completed—e.g., the client could not be modified until the servers worked as designed. Second, investments could be nonconvex and lumpy, and not usable in operations until entirely installed—e.g., all the wiring had to be installed before the communications routines could be tested. Third, as hinted in the planning myth, cognitive limits could cause delays—e.g., staff did not anticipate idiosyncratic issues until a new process was at their fingertips, and only after observing it did an issue emerge and receive attention. It often was challenging to forecast which of these would be most problematic in practice.

19 Also see Hanson (2000).

20 See Forman and Goldfarb (2006).

21 Forman (2005).

22 Forman, Goldfarb, and Greenstein (2005) provide extensive evidence for this statement.

23 The investor in IT seeks the same process at lower cost, or the same process at the same cost but with improved features, such as lower error rates, more timely supply of inventory, or better real-time decision support for firms in rapidly changing market environments. Cortada (2003) argues that the appropriate model varies between industries and even within industries over different eras.

24 Cortada (2003) argues that only three innovations have spurred such investments: the UPC code, EDI, and the Internet.

25 Wladawsky-Berger, private communication, August 2010.

26 The consumer price index for "Information technology, hardware and services" is dominated by quality adjustments in the price for personal computers. This price index dropped from 63.8 in 1995 to 21.3 in 2001, a 66.6 percent decline. A subcomponent for PCs was made public only beginning in 1998; it declined from 875.1 to 330.1 between 1998 and 2001, or by 62.3 percent.
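As a worked check of the arithmetic in this note, computing each decline as the drop relative to the starting index value gives

$$\frac{63.8 - 21.3}{63.8} \approx 0.666, \qquad \frac{875.1 - 330.1}{875.1} \approx 0.623,$$

that is, declines of roughly 66.6 percent and 62.3 percent, respectively.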

27 The Y2K problem arose in old systems, particularly mainframes, that stored years with two digits and so could not correctly handle dates on or after January 1, 2000. Many firms with such old systems replaced them and used the moment to upgrade all of their IT facilities.

28 See Forman, Goldfarb, and Greenstein (2003a, 2003b).

29 Management of companies and enterprises is NAICS 55 in the North American Industry Classification System (NAICS). Media, telecommunications, and data processing is NAICS 51, which also includes information and data processing services, NAICS 514.

30 Computer and electronic manufacturing refers to NAICS 334. Printing and related support activities refer to NAICS 323. The rest of the sentence refers to oil and gas extraction (NAICS 211), water transportation (NAICS 483), pipelines (NAICS 486), motor vehicle and parts dealers (NAICS 441), electronics and appliance stores (NAICS 443), sporting goods (NAICS 451), and nonstore retailers (NAICS 454). Low adopters at the NAICS three-digit level also existed and would not have surprised any longtime observer of computing. These include transit and ground passenger transportation (NAICS 485), food services and drinking places (NAICS 722), social assistance (NAICS 624), and amusement (NAICS 713).

31 See Forman, Goldfarb, and Greenstein (2008).

32 For a review of studies of Internet investment by business, see Forman and Goldfarb (2006) and also Doms (2004).

33 Clemente (1998).

34 Estimates vary for when HTML traffic exceeded e-mail, file transfers, and other traffic, but every estimate points to sometime in 1996. For example, see table 2 of Odlyzko (2003).

35 Odlyzko (2003).

36 See the ongoing discussion on http://www.dtc.umn.edu/mints/igrowth.html, accessed July 2012.

37 See Frieden (2001, 2002).

38 The term “mesh” first appeared in Besen et al. (2001).

39 See Besen et al. (2001) and Laffont et al. (2001, 2003) for analysis of incentives to sign peering agreements.

40 For more on overlays, see Clark et al. (2006).

41 See Strover (2001) or Downes and Greenstein (2002). A summary can be found in Greenstein and Prince (2006). Also see the discussion in the Federal Communications Commission (2010).

42 Hundt (2000).

43 This is discussed in more detail in Greenstein (2005).

44 It survived several court challenges and regulatory lobbying efforts after its passage. This money was administered by the FCC. In 1998 this program was just getting under way.

45 CLECs bore some resemblance to the competitive access providers of the recent past. These were firms that connected with the existing telephone networks and offered services that competed with the telephone network.

46 In his book recounting his years as FCC chairman between 1993 and 1997, Reed Hundt (2000) gives his staff at the FCC credit for interpreting the act in such a way as to foster the growth of entry instead of protecting incumbent telephone monopolies. Referring to how his agency implemented the 1996 act, he says (154–55):

The conference committee compromises had produced a mountain of ambiguity that was generally tilted toward the local phone company’s advantage. But under principles of statutory interpretation, we had broad authority to exercise our discretion in implementing the regulations.… The more our team studied the law the more we realized our decisions could determine the winners and losers in the new economy.

47 This is described in more detail in chapter 12. A number of CLECs set up businesses to receive ISP calls from households but send very few, effectively billing other telephone companies for "reciprocal compensation."
