Chapter 8

Efficient: Amidst Massive Technology Waste

“It was the meeting where our IT council gave me the green light to proceed with a major global project to consolidate 80 different HR systems across the world. We had settled on our existing ERP vendor's software,” says David Smoley, CIO of Flextronics, one of the world's largest electronics contract manufacturers.

“I thanked them for approving such a large project and then I surprised them. I told them I may come back to them in the next month with a ‘better, faster, cheaper’ option.

“Mind you, this was after we had already spent months evaluating options and systems integrators, and building budgets and business cases. But we have a ‘fail faster’ culture here at Flextronics, so they were more intrigued by what I might come back with than surprised.”

The Courage to Be More Efficient

In the month prior to the meeting described above, Smoley had been introduced to Workday, then a startup. He says:

I was immediately impressed by the user interface. It was highly intuitive, attractive, and easy to use. Very visual, like Amazon or Google. At the time, usability was one of the biggest issues with the legacy HR packages. I met with the Workday team and got a demo. We discussed technology and I was again impressed by their approach. From my time working in venture capital I knew that some of the earliest and most successful SaaS companies were in the HR space.

After the IT council meeting, the due diligence around Workday grew in intensity. There was strong interest, along with strong resistance. Initially, the head of HR was intrigued because he received positive feedback from his team on the usability factor; however, he was concerned about the risk associated with such a small and early-stage company. The CFO was also concerned about their size and stage. The more I got to know the Workday team, the more I was convinced that while they were early and small, they had the vision, leadership, talent, and experience to deliver in this space. From my perspective, HR IT should be simple. It is basically a database of employees with workflow and reporting associated with it. The legacy HR packages were far too complicated and difficult to use. Workday was an opportunity to start fresh and take advantage of leading-edge technology along with a decade of HR IT lessons the Workday founders had learned at their previous company, PeopleSoft.

We debated back and forth on this and in the end, the tipping point was a Saturday morning meeting at Flextronics, where Aneel Bhusri and Dave Duffield, the co-founders of Workday, met with me and our CEO Mike McNamara. After about 20 minutes we walked out and Mike looked at me and said, ‘Those are the kind of guys I want to be in business with.’ Duffield and Bhusri had committed to a partnership where Flextronics would provide a benchmark for process and priorities. Flex would drive the development road map for Workday. In turn, Flextronics would draw upon the years of HR experience the Workday team had to shape and standardize its processes. It was a model software development partnership.

The proposal from Workday saved us more than 40 percent over the life of the project, with even larger savings in the first year. By going with Workday we were able to reduce our team size by more than 60 percent.

“Empty Calories” in Technology

What is remarkable is that Smoley had the courage to turn down a preapproved project and go with something far more efficient. Unfortunately, too much in technology gets funded year after year with “safe” choices. It becomes “entitlement spend.” The old adage used to be “No one got fired for buying from IBM.” Today, that saying has become a little broader—to include IBM, Verizon, SAP, Accenture, and other large vendors. Too many IT executives live in fear of one of their smaller vendors going out of business.

So we live in a world of ossified and outrageous technology costs. There are opportunities at every turn to deliver efficiencies in technology, if only we have the courage, as Smoley did, to look at the options.

Estimates vary, but most analysts agree that annual IT and telecom spend exceeds $3 trillion. That's more than the GDP of many nations, and, shockingly, much of it is “empty calories.”

Here are a handful of examples:

Software: Every year, software vendors send customers a bill for 15 to 25 percent of the “licensed value” of their software. This payment is supposed to cover support (bug fixes, help desk) and maintenance (periodic regulatory updates, enhancements). During implementation, few customers tax the software vendor's support lines. In fact, most customers additionally pay a systems integrator or the software vendor's consultants to provide on-site implementation help. Yet they are charged the full software support and maintenance fee. Typically, after a year of being “live” on the software, support needs drop off again. Yet companies continue to be charged full support and maintenance fees. Because the support fees are based on licensed value, customers are also charged for “shelfware”—software licenses the company originally bought but has not deployed. In the meantime, software vendors have been automating much of the support into knowledge bases so customers can self-serve. They are moving support for stable, older releases to offices in low-cost locations (or using offshore firms to support them). They are increasingly letting user communities handle routine queries. Yet little of this lowered cost has typically been passed along to customers.
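
To see how those percentages compound, here is a minimal back-of-the-envelope sketch. The license value, maintenance rate, and shelfware share below are illustrative assumptions, not figures from any particular vendor:

# Illustrative maintenance-fee arithmetic (all inputs are assumptions, not vendor quotes)
licensed_value = 10_000_000      # original license purchase, in dollars
maintenance_rate = 0.20          # annual support/maintenance fee: 15-25% is typical; assume 20%
shelfware_share = 0.30           # assume 30% of licenses were never deployed
years = 5

annual_fee = licensed_value * maintenance_rate
fee_on_shelfware = licensed_value * shelfware_share * maintenance_rate

print(f"Annual maintenance fee:            ${annual_fee:,.0f}")
print(f"  ...of which is paid on shelfware: ${fee_on_shelfware:,.0f}")
print(f"Total paid over {years} years:          ${annual_fee * years:,.0f}")
# Under these assumptions, the customer pays the full license value over again
# every five years in maintenance, including fees on software it never deployed.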

Technology infrastructure: The average corporate data center is grossly inefficient compared to the next-generation centers being built by Google, Amazon, and Apple, and certainly compared to the Facebook center discussed in the case study later in this chapter. It is not uncommon to see power usage effectiveness (PUE)—the ratio of total power entering a data center to the power used to run the computing infrastructure within it—at 2.0, when Facebook's Prineville center is at 1.07. So why can enterprises not walk away from their internal boat anchors? Many are locked into multiyear outsourcing contracts and face stiff early-termination penalties. Meanwhile, there is little in the legal language to force outsourcers to move to more efficient data centers. Walk into an Office Depot or a Currys and anyone can buy a terabyte of storage for less than $100, and it's a one-time payment. Yet many enterprises are paying $100 or more per gigabyte over a three-year useful life, when you amortize the cost of storage and the support for it. Granted, that is high-availability, enterprise-grade storage, but is it worth 1,000 times as much?
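
The 1,000-times figure comes from simple amortization arithmetic, sketched below with the round numbers quoted above (a $100 one-time terabyte at retail versus $100 per gigabyte over a three-year life):

# Rough storage cost comparison using the round figures quoted above
retail_price_per_tb = 100.0                      # one-time price for a consumer 1 TB drive
retail_cost_per_gb = retail_price_per_tb / 1000  # about $0.10 per GB, paid once

enterprise_cost_per_gb = 100.0                   # amortized enterprise cost per GB over a three-year life

print(f"Consumer storage:   ${retail_cost_per_gb:.2f} per GB, one-time")
print(f"Enterprise storage: ${enterprise_cost_per_gb:.2f} per GB over three years")
print(f"Premium: {enterprise_cost_per_gb / retail_cost_per_gb:,.0f}x")  # the 1,000x gap questioned above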

Technology services: As companies implement technology projects, they tend to hire systems integrators like Deloitte Consulting. Even though many consultant proposals proudly state they have done hundreds of SAP or Java development projects, they typically do not pass along much in the way of productivity or automation gains. In fact, since they are more experienced, they expect to be paid a premium. But even after companies pay premiums for specialized talent, IT projects still fail at unacceptably high rates. Time-to-completion metrics coming out of newer agile development methods often show a two- to three-times improvement opportunity compared to traditional delivery models. Another major expense is consultant travel, which often adds another 15 to 25 percent to base fees that are already high to begin with. The traveling consultants often have a Monday–Thursday on-site policy, which forces the entire project to adjust to a four-day workweek. Many consultants are implementing telepresence for internal communications, but unlike Cognizant, described later in this chapter, most have not shown much initiative in using it on client projects or in cutting back on project travel.

Telecommunications: Most companies pay a bewildering range of landline, conference calling, calling card, employee wi-fi, mobile, and other messaging fees. Phone companies are also creative with their fees, piling on shortfall, early-termination, and other charges. In most companies, the spend on telecommunications typically exceeds the cost of all other technology—hardware, software, technology services, internal staff—put together. Not surprisingly, there is plenty of waste. One of the most flagrant examples is international mobile calling charges. A study by Harris Interactive showed that the average U.S. employee spends $693 on international roaming calls per overseas trip, at between $1 and $4 a minute on local carriers; Skype would cost just pennies a minute.

Other technology costs: Many printer inks cost over $5,000 a gallon (yes, over 1,000 times the price of gasoline). Ink is far more expensive, ounce for ounce, than the most expensive perfumes or rare spirits. Worse, the ink gauges can be inaccurate and force consumers to discard cartridges before they are empty. Another major cost is technology strategy and market intelligence. Peer benchmarking and intelligence from boutique research firms like RedMonk (whose analysts are quoted in several chapters of this book), along with decent-quality but free blogs such as ZDNet, GigaOm, TechCrunch, and other sites quoted throughout this book, are helping companies lower the cost of IT market intelligence.

There is a screaming need for efficiency in technology, and it is gratifying to see enterprises that run their technology in a lean fashion.

Efficiency in Government

Nestled between a Target and a Lowe's in a Tampa, Florida, shopping center is a much smaller, nondescript office. During some hours, this office does more business than either of those two giant retailers. It is the Hillsborough County Tax Collector's branch office. It issues driver's licenses, collects property taxes, issues fishing licenses, and handles several other services. While there are separate queues for many of the services, the agency increasingly has employees who can handle multiple transactions in one session. Of course, for many of the transactions, you need not come in at all. You can do business with the agency by mail, by phone, or on the web.

Doug Belden's site says, “His goal as tax collector is simple: To save taxpayers money through consolidation and efficiency while improving service at the branch offices. His objective is to make the Hillsborough County Tax Collector's Office the most modern and efficient office in the state.”1

Walk into that branch, and there usually is a mass of humanity, and your ticket number may be in the 600s. Your heart sinks as you get ready to settle in for a few hours.

Twenty minutes later your number comes up. Five minutes after that the agent has scanned the four pieces of personal documentation you have brought. In another five minutes you have a polycarbonate driver's license with a digitized photo, state hologram, two-dimensional bar code, and magnetic strip. In those five minutes, the system has been running validations against various state and federal databases. These checks are required under the federal Real ID legislation, and licenses issued by the county are among the first in the country to carry the gold star that shows compliance.

The customer queuing system is from Q-Matic. Beyond balancing customer load with agent availability, it is smart enough to route Spanish-speaking customers to appropriate agents. It collects all kinds of metrics, which are then helpfully summarized, and before you leave home, you can check likely wait times at each of the agency's locations.

Beyond what is visible to the ordinary visitor, Kirk Sexton of the agency is happy to talk about other efficiencies. There are VoIP and virtualization efforts. At the back there is a microwave tower that makes communications cheaper than a T1 line. Sexton talks about a penny-a-page arrangement with SunPrint Management (described further in Chapter 11) for their many black-and-white copies. He describes buying mostly standard, off-the-shelf equipment to get volume breaks—but making adjustments as needed. The Samsung printers, for example, had their firmware adjusted to control their speed so they heat car license tags optimally.

No, this is not your run-of-the-mill local agency. In 2008 it won the Florida governor's prestigious Sterling Award. In 2010 it applied to be considered for the national Malcolm Baldrige Award. When you look at the applications the agency filed for consideration for both awards, you see the kinds of eye-popping metrics even private sector firms would kill for, focused on customer satisfaction, customer wait times, and economics.

The Tax Collector's office is an elected one, but Belden has not had much serious competition for 12 years. As we can see, he runs a tight ship with a continuous-improvement focus. With folks like Sexton, who joined the agency three years ago from the private sector, it is clearly a role model for efficiency and innovation in government.

Efficiencies Even When Things Are Going Well

Cognizant is a leading IT, consulting, and business-process outsourcing services provider. It was spun out of Dun & Bradstreet in the mid-1990s and has grown like a weed ever since. As its website succinctly summarizes, “A member of the NASDAQ100, the S&P 500, the Forbes Global 2000, and the Fortune 500, Cognizant is ranked among the top-performing and fastest growing companies in the world.”2

Given its phenomenal success, you might think it would not be as focused on efficiency. Two of its executives, in particular, exemplify its relentless focus on productivity and efficiency.

Gordon Coburn, its CFO (and COO and Treasurer), has plenty to keep him busy between analyzing and integrating acquisitions and handholding investors. On any given day, though, you will see him intensely involved in real estate negotiations, from square footage cost through space design and related accoutrements (like wall and carpet colors), and in software and other technology negotiations. He is a bulldog in such negotiations.

Sukumar Rajgopal wears two hats. He is the Chief Information Officer, worrying about infrastructure for hundreds of offices and over 100,000 employees worldwide. He is also the Chief Innovation Officer, constantly thinking about how to make the company's consultants dramatically more productive.

Cognizant's global footprint means hundreds of meetings every week among associates working from widely dispersed locations. Project teams are increasingly virtualized, with individual associates and client staff working on closely related tasks from different sites. So its investment in telepresence has been a major boon, allowing Cognizant teams to collaborate.

Cognizant had originally planned to equip 20 major offices with full-immersion, three-screen systems for internal meetings. They quickly learned that the full-immersion terminals were less frequently used than a single-screen setup in a conference room or on a desktop. So they shifted gears to scale telepresence swiftly across their global footprint with the smaller devices, many from Tandberg (now part of Cisco).

The initial deployment put units on the desktops of the senior management team, including in home offices. Coburn remarks that “telepresence has become the standard for how we do senior management calls.” It has also become the preferred approach for early-stage interviews with recruits. Cognizant has also started using telepresence for most meetings in its budgeting process. As Coburn explains, “It made a tremendous difference; it probably shortened the budget cycle by 25 percent, because there was far less confusion.”

More impressively, Cognizant has rolled telepresence out to major clients. That helps overcome at least some of the consultant travel costs and other issues mentioned earlier. Clients were generally skeptical at first, especially if they had previously used traditional videoconferencing, which was characterized by high cost, frequently dropped calls, and video and audio quality that ranged from poor to mediocre. But they have been much more accepting of telepresence, which delivers a vastly superior user experience.

Two areas with high payback have been during transition and training. Transition is the transfer of a client's operation (e.g., software maintenance or call center support) to Cognizant people and infrastructure. In the past, this typically required Cognizant specialists to be at a client site for several weeks. The start date was often determined by travel logistics, visa availability, and so on. But with telepresence, they can start the transition with less delay, involve more specialists, and do it at lower cost than before.

When Cognizant brings a client's process in-house, it often needs to conduct extensive training. This typically requires a trainer to travel, with all the same visa and logistics issues. With telepresence, it can conduct the training with less delay and at a lower cost than before.

The results show up in a significant reduction in Cognizant's emissions associated with business travel: from 35,964 metric tons of CO2e (carbon dioxide equivalent) in 2008 to 27,738 in 2009, a 23 percent reduction, even as business grew by 16 percent over that period. Although some of this was due to tighter controls on travel costs, telepresence was instrumental in driving these significant results.

Sukumar, on the other hand, is obsessed with what he calls “social design,” and his quest to deliver 500 percent productivity on Cognizant projects.

Think of avalanches. A snowball starts small, but then gathers mass and gradually turns into a massive avalanche. That is the basic inspiration behind Social Design. What we do individually should (positively) impact hundreds and thousands of others.

One of my favorite examples is CDDB (short for Compact Disc Database). Before CDDB, when an individual ripped tunes from a CD, you had to manually enter the names, artist, and so on. Every user around the world did that, each in their own format and with their own typos. CDDB started to track unique signatures of each tune on its servers and match them to album, artist, and other information. So when later users ripped the same tune, they could download that same information. Think of the massive productivity that delivered across millions of users.

Today's enterprise systems don't acknowledge, as CDDB did, that we live in a connected world and that what one user enters can be used to populate the same data for thousands of others. Our systems record; they don't think. When you enter your time sheet, the system should prefill a number of fields, and it should give you feedback like “you are the last one in your office to file it” or “you forgot to enter a few items that others in your group have entered.”

It would be nice to move to a “productivity income statement,” where we charge the IT department $1 for each data item users have to enter and, in return, charge users $1 for each data item the system prefills for them. Think how differently we would think about enterprise systems if we did that.
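
The CDDB mechanism Sukumar describes can be sketched in a few lines. The real CDDB computed a disc-level identifier from the track layout; the hash, the metadata store, and the sample disc below are simplified, hypothetical stand-ins meant only to show the “enter once, reuse for everyone” idea:

# Simplified CDDB-style lookup: one user's metadata entry serves every later user.
# The disc "signature" here is a toy hash of track offsets; real CDDB used its own disc-ID scheme.
import hashlib

def disc_signature(track_offsets_seconds):
    """Derive a repeatable signature from the track layout of a CD."""
    key = ",".join(str(o) for o in track_offsets_seconds)
    return hashlib.sha1(key.encode()).hexdigest()[:12]

metadata_db = {}  # server-side store: signature -> album metadata

def submit(track_offsets, album, artist, tracks):
    """The first ripper types the metadata in once..."""
    metadata_db[disc_signature(track_offsets)] = {
        "album": album, "artist": artist, "tracks": tracks}

def lookup(track_offsets):
    """...every later ripper of the same disc gets it for free."""
    return metadata_db.get(disc_signature(track_offsets))

# Hypothetical disc with three tracks starting at 0, 185, and 402 seconds
offsets = [0, 185, 402]
submit(offsets, "Example Album", "Example Artist", ["Track 1", "Track 2", "Track 3"])
print(lookup(offsets))   # any later user with the same disc gets the same metadata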

As we discussed in Chapter 2, Apple, in announcing its iTunes Match service as part of iCloud, provided another example of what Sukumar calls social design.

“We have 18 million songs in the music store. Our software will scan what you have, the stuff you've ripped, and figure out if there's a match.”3

Apple's site helpfully added, “And all the music iTunes Matches plays back at 256-Kbps iTunes Plus quality—even if your original copy was of lower quality.”4

The CDDB and Apple examples both involve personal productivity. Imagine if Sukumar can inspire Cognizant's consultants and clients to deliver similar productivity gains in corporate applications.

Conclusion

The technology elite don't just focus on innovation to improve the top line. They are also intensely focused on efficiencies. In the case study that follows, we will see the efficiencies Facebook has delivered in the Prineville data center it opened in 2011.

Case Study: Facebook's Hyperefficient Data Center

“Our server chassis is beautiful,” says Amir Michael, Manager, System Engineering at Facebook.

That's beautiful in a minimalist kind of way. “It's vanity-free—no plastic bezels, paint, or even mounting screws. It can be assembled by hand in less than eight minutes and thirty seconds.”

Michael is describing the design principles behind Facebook's data center, which opened in April 2011 in Prineville, Oregon (previous data centers were leased). The center occupies 150,000 square feet to start, with another 150,000 square feet in progress. It handles about half the processing demands of Facebook's staggering user base—over 700 billion minutes a month are spent on Facebook.5 Facebook says it has 750 million active users (as of July 2011), 70 percent of whom are outside the United States, and the site is accessed in over 70 languages.6

The servers, Michael explains, are almost six pounds lighter than the models Dell or HP (Facebook's suppliers for the first phase of the project) sell to other customers. That, of course, lowers shipping costs and eases the effort needed for technicians to move them around. The lack of covers also allows easier technician access to components and more direct cooling. That all adds up when you are talking about thousands of servers. Facebook says that in a typical data center this would save more than 120 tons of materials from being manufactured, transported, and ultimately discarded.

Efficiencies Galore

Everything in the center's design was driven from a cost and efficiency perspective, and most of the components were designed from the ground up to Facebook's specifications.

The chassis, for example, is a 1.5U form factor (2.63 inches tall), compared with a standard 1U (1.75 inches tall) chassis. That allows for larger heat sinks and 60 mm fans instead of 40 mm fans. Larger fans use less energy to move the same amount of air, so the fans take 2 to 4 percent of the server's energy, compared to 10 to 20 percent for typical servers. The heat sinks are spread across the back of the motherboard, so none of them receives air preheated by another heat sink, reducing the work required of the fans.

The racks are “triplet” enclosures with three openings, each housing 30 of the 1.5U Facebook servers. Each enclosure is also equipped with two rack-top switches to support high network port density.7

Prineville enjoys dry, cool desert weather at its altitude of 3,200 feet. In what is likely a good application of feng shui, the data center is oriented to take advantage of prevailing winds feeding outside air into the building. The data center thus takes advantage of free, natural cooling, and in winter, heat from the servers can be used to warm office space. On warmer days, when natural cooling is not enough, they use evaporative cooling: air from the outside flows over water-spray-moistened foam pads. There are no chillers on-site—a staple of most other data centers—saving significantly on capital and on the ongoing energy to run them.

Backup batteries, which keep servers running for up to 90 seconds before backup generators take over, are distributed among the server racks. This is more efficient because the batteries share electrical connections with the computers around them, eliminating the dedicated connections and transformers needed for one large battery store. This design loses only about 7 percent of the power fed into it, compared to around 23 percent lost in a more conventional, centralized battery store approach.

The motherboards are bare bones, devoid of features Facebook did not need, and designed to handle processors from both AMD and Intel. They have direct connections to the power supply, which itself was uniquely designed.

James Hamilton is considered a “Jedi Master” in data center circles, with experience at IBM, Microsoft, and now at Amazon Web Services. He was invited to visit the Facebook data center as it opened, and he analyzed its features on his blog. Here is what he wrote about the power supply and the related backup UPS (uninterruptible power supply) infrastructure:

In this design the battery rack is the central rack flanked by two triple racks of servers. Like the server racks, the UPS is delivered 480VAC 3-phase directly. At the top of the battery rack, they have control circuitry, circuit breakers, and rectifiers to charge the battery banks. What's somewhat unusual is the output stage of the UPS doesn't include inverters to convert the direct current back to the alternating current required by a standard server PSU. Instead the UPS output is 48V direct current, which is delivered directly to the three racks on either side of the UPS. This has the upside of avoiding the final invert stage, which increases efficiency.8

To net that out, Facebook believes this proprietary UPS design (for which it has applied for a patent) will save up to 12 percent in electricity usage.
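
A rough way to see why skipping that final inversion stage matters: every AC/DC or DC/AC conversion wastes a few percent, and the losses compound. The per-stage efficiencies in the sketch below are illustrative assumptions, not Facebook's measured figures; the 12 percent claim above is Facebook's, not the output of this calculation.

# Illustrative sketch: losses compound across each AC/DC or DC/AC conversion stage.
# The per-stage efficiencies below are assumptions for illustration only.
def chain_efficiency(stage_efficiencies):
    total = 1.0
    for eff in stage_efficiencies:
        total *= eff
    return total

# Conventional path: utility AC -> UPS rectifier -> UPS inverter -> server PSU (AC to DC)
conventional = chain_efficiency([0.96, 0.95, 0.92])

# Facebook-style path: utility AC -> rectifier to 48V DC -> server power supply (DC input)
no_inverter = chain_efficiency([0.96, 0.94])

print(f"Conventional chain: {conventional:.1%} of input power reaches the servers")
print(f"48V DC chain:       {no_inverter:.1%}")
print(f"Relative gain from skipping the inverter stage: {no_inverter / conventional - 1:.1%}")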

Adding up all the efficiencies it implemented in the data center, Facebook says it delivered 38 percent better energy efficiency and a 24 percent lower cost compared to comparable existing facilities. The data center is rated at a power usage effectiveness (PUE) of 1.07—one of the best in the industry, and much better than the 1.5 of its previous facilities. PUE is an indicator of data center energy efficiency—how much of the input power goes toward computing versus cooling and other overhead—and the lower the number, the more efficient the center.
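
To put those PUE numbers in perspective, here is a quick worked example; the 10 megawatts of IT load is an arbitrary illustrative figure:

# PUE = total facility power / power delivered to the IT equipment.
# Compare overhead at the old 1.5 PUE and Prineville's 1.07, for a hypothetical 10 MW IT load.
it_load_mw = 10.0

for pue in (1.5, 1.07):
    total_mw = it_load_mw * pue          # power drawn from the grid
    overhead_mw = total_mw - it_load_mw  # spent on cooling, power distribution, and other overhead
    print(f"PUE {pue}: {total_mw:.1f} MW from the grid, {overhead_mw:.1f} MW of overhead")

# At PUE 1.5, every watt of computing drags along half a watt of overhead;
# at PUE 1.07, the overhead shrinks to seven hundredths of a watt.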

The Open Compute Thinking

But Hamilton's next statement makes the center really stand out. “What made this trip (to the Prineville data center) really unusual is that I'm actually able to talk about what I saw.”

He continues:

In fact, more than allowing me to talk about it, Facebook has decided to release most of the technical details surrounding these designs publically. In the past, I've seen some super interesting but top secret facilities and I've seen some public but not particularly advanced data centers. To my knowledge, this is the first time an industry leading design has been documented in detail and released publically.9

Facebook calls it the Open Compute Project. They elaborate: “Facebook and our development partners have invested tens of millions of dollars over the past two years to build upon industry specifications to create the most efficient computing infrastructure possible. These advancements are good for Facebook, but we think they could benefit all companies. The Open Compute Project is a user-led forum, to share our designs and collaborate with anyone interested in highly efficient server and data center designs. We think it's time to demystify the biggest capital expense of an online business—the infrastructure.”10

Facebook is publishing specifications and mechanical designs for Open Compute Project hardware, including motherboards, power supplies, server chassis, and server and battery cabinets. In addition, Facebook is making available its data center electrical and mechanical construction specifications.

This is a remarkable move on the part of Facebook. As we will discuss in Chapter 13, Apple's iCloud data center was not even visible on Google Earth until a few days before the iCloud public announcement. A giant 500,000-square-foot facility was kept “hidden.” Google, Microsoft, Amazon, and others have also traditionally been secretive about their operations.

Frank Frankovsky, Facebook's director of hardware design, was quoted as saying, “Facebook is successful because of the great social product, not [because] we can build low-cost infrastructure. There's no reason we shouldn't help others out with this.”

Interestingly, Dell's Data Center Solutions business says it will design and build servers based on the Open Compute Project specification; presumably HP's equivalent unit will as well. Dell owns Perot Systems and HP owns EDS, and those outsourcers currently run data centers for many corporations, but those centers are nowhere near as efficient as what Facebook has built. In the meantime, Facebook is planning to source servers even more efficiently by going directly to the original design manufacturers that supply HP and Dell.

George Brady, Executive Vice President, Technology Infrastructure, Fidelity Investments, was quoted as saying, “Data centers provide the foundation for the efficient, high quality services our customers have come to expect. Facebook has contributed advanced reference designs for ongoing data center and hardware innovation. We look forward to collaborating with like-minded technology providers and partners as we seek ways to learn from and further advance these designs.”

The Green Question

One quibble that environmentalists had with the Facebook center is that it is fueled by the utility Pacific Power, which produces almost 60 percent of its electricity from burning coal. Greenpeace ran a campaign urging Facebook to “unfriend” coal. As we discuss in Chapter 17, Google, through its energy subsidiary, has negotiated several long-term wind-power agreements. It sells that energy on the open market at a loss but strips out the renewable energy credits and applies them against the conventional power it also uses to run its data centers. Toward the end of 2011, Greenpeace won a moral victory as Facebook promised to give preference to clean and renewable energy in picking future data center sites. Facebook has also recruited Bill Weihl, formerly Google's “Energy Czar.”

Barry Schnitt, Facebook's director of policy communications, provides an alternative perspective on clean energy: “As other environmental experts have established, the watts you never use are the cleanest and so our focus is on efficiency. We've invested thousands of people hours and tens of millions of dollars into efficiency technology and, when it is completed, our Oregon facility may be the most efficient data center in the world.”11

As Facebook's Michael would say, “Now that's a beautiful thing.”
