Chapter 6
Rule 3: Use Your Data as Currency

Data is the new oil. Companies that will win are using math.

—Kevin Plank, Under Armour CEO

As the world moves from analog to digital, data is the new currency. Data-rich companies are tapping into vast new revenue streams and profit pools. Data-poor ones are facing diminished market power, industry relevance, and profitability.

Back in Chapter 3 we showed you how securing a perpetual algorithmic advantage drives these winner-takes-most dynamics. Claiming a spot on the right-hand hump (Figure 3.1) of your industry’s future profit distribution requires focus across three areas: building your data balance sheet, valuing data optionality, and maximizing return on data.

Those three actions will add a new crown jewel to your arsenal. They will position your company to deliver the step-change customer outcomes you committed to under Rule 1 and accelerate the Big I and Little I innovations you prioritized under Rule 2.

Some Historical Context

Before we jump into how you can treat your data as currency, some historical context is in order. The lasting value of data is what makes the digital age fundamentally different from the industrial age and the agricultural age that preceded it.

During the agricultural age, control over arable land determined economic wealth as hunter-gatherer labor was gradually replaced with mechanized farming. While the Earth has a surface area of about 200 million square miles, just 57 million square miles consist of land and only 12 million square miles are suitable for growing crops. In medieval Europe, control of arable land was the basis of economic power, with lords controlling vast swaths of farmland (fiefs) and serfs exchanging their labor for a lifetime of basic sustenance.

In the industrial age, control of the means of production formed the basis of economic success, with human manufacturing labor gradually being replaced with mechanized assembly lines. Mechanization, automation, and steam power delivered a step-change improvement in the cost of production. Economies of scale and scope, combined with ready sources of investment capital, pulled manufacturing activity into larger and larger plants. The big got much bigger and many smaller manufacturers just went away.

In the digital age, computers, algorithms, and AI use data to augment human effort across virtually every sector of the economy. Let’s take two industries you likely know well as examples: entertainment and sports.

Entertainment Goes Left Brain

It used to be that new TV shows were developed by creative people spitballing, or throwing ideas against the wall, then using small test screenings to determine how audiences might respond. The process was long and nondeterministic, with most ideas being discarded along the way.

In the United States, NBC, CBS, ABC, and Fox were happy to have Netflix as a distributor of their content. “We get to reach whole new audiences. What’s not to like?” said many a network TV executive. But seemingly overnight, Netflix became a top-five content producer in direct competition with those established TV networks.

You see, Netflix viewed data as currency, and not just as an asset for its IT organization to manage. Netflix builds shows through a proprietary mix of right-brain creative talent and left-brain machine learning insights. It constantly taps granular data from over 100 million subscribers consuming 250 million hours of video per day to anticipate what content viewers will respond best to. That massive data advantage enables Netflix to produce shows that outperform those from other networks by a factor of two to one.

Moneyball Wins Games

As detailed in Michael Lewis’s popular book Moneyball, the Oakland Athletics made the Major League Baseball playoffs in the fall of 2002 with an annual payroll of $44 million. For context, the New York Yankees spent $125 million that same year.

Oakland discovered that letting the data speak was a better way to determine which players should play than relying on the gut feel of experienced coaches and managers. For baseball purists this was heresy, but the A’s came out of nowhere to go from laughingstock to pennant contender.

The Oakland A’s organization literally viewed its data as currency. The organization understood that the more games played in the playoffs, the more ticket, merchandise, concession, and TV revenue the team would receive. Once its data showed that on-base and slugging percentages were far better predictors of offensive success than the batting-average and runs-batted-in metrics other teams were using, the A’s were not shy about changing how they made lineup and in-game decisions.

Oakland’s disruptive innovation altered how every team in Major League Baseball is run. The impact has even been felt in other sports, such as basketball and football, where data science teams are growing in funding, size, and importance.

Just as mechanization substituted for physical labor in the agricultural and industrial ages, data is being substituted for mental labor in the digital age. This is causing every industry to fragment into the data rich and the data poor. Which are you going to be?

Build Your Data Balance Sheet

The core management tenet “What gets measured gets done” makes this first area of focus a critical one. Unless you measure the quantity and quality of data that your company owns outright or has contractual rights to, you are almost certain to be outflanked by the Davids of Silicon Valley. Those digital disruptors are building their businesses with a data-as-currency mindset.

If your goal is becoming data rich, then the first order of business is building your company’s data balance sheet. What data assets does your company control that will allow you to apply data science and machine learning for industry-shaping insights? What data liabilities do you have such that you “owe” data or payments associated with data to third parties? See Figure 6.1 for the data balance sheet that you and your team need to complete.

Illustration shows the two tables that make up the data balance sheet.
(a) The first table lists data assets in four columns: Data Set, Key Attributes, Data Rights, and Potential Value Gained. For each data set (one per row), Key Attributes cover Source, Size, Location, and Owner; Data Rights cover Exclusivity, Limits, Permissions, and Regulatory; and Potential Value Gained covers Outcome enabler, Big I enabler, and Little I enabler.
(b) The second table lists data liabilities in four columns: Data Set, Key Attributes, Data Obligations, and Potential Value Lost. For each data set, Key Attributes cover Source, Size, Location, and Owner; Data Obligations cover To whom?, For what?, and By when?; and Potential Value Lost covers value lost in data, in insights, and in dollars.

Figure 6.1 Your Data Balance Sheet
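
To make Figure 6.1 concrete, here is a minimal sketch in Python of how each row of the ledger might be captured. The field names mirror the figure; the class names and the example entry are purely illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class DataAsset:
    """One row on the asset side of the data balance sheet (Figure 6.1a)."""
    name: str
    # Key attributes
    source: str
    size: str
    location: str
    owner: str
    # Data rights
    exclusivity: str       # e.g., "exclusive", "shared", "licensed in"
    limits: str            # contractual limits on use
    permissions: str       # who may use it, and for what
    regulatory: str        # e.g., "GDPR personal data", "none"
    # Potential value gained
    outcome_enabler: bool = False
    big_i_enabler: bool = False
    little_i_enabler: bool = False

@dataclass
class DataLiability:
    """One row on the liability side of the data balance sheet (Figure 6.1b)."""
    name: str
    source: str
    size: str
    location: str
    owner: str
    # Data obligations
    owed_to: str           # To whom?
    owed_for: str          # For what?
    owed_by: str           # By when?
    # Potential value lost: in data, in insights, and in dollars
    value_lost: dict = field(default_factory=dict)

# A purely hypothetical asset entry
telemetry = DataAsset(
    name="Field equipment telemetry",
    source="RM&D sensors", size="2 TB/month", location="AWS S3",
    owner="Services business unit",
    exclusivity="exclusive", limits="customer contract, section 7",
    permissions="internal analytics only", regulatory="none",
    outcome_enabler=True, little_i_enabler=True,
)
```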

Let’s start with the easy one: the data assets side of the ledger. That is, how much of each type of digital data does your company have at its disposal? You’d think that would be an easy question to answer—just call up your CIO and ask.

Think Broadly about Your Data Assets

In our experience, it is not nearly that simple. Even in a small or midsized company, the data sets you’re looking to inventory are scattered across disconnected pockets of your business. Beyond the typical data residing in your IT systems, look for data sets that are:

  • Known only to some industrial automation guru in one of your manufacturing plants, where the data is “owned” by production operations, not IT
  • Buried in a past acquisition that was sponsored by one of your business units and no one talks about anymore
  • Part of a remote monitoring and diagnostics (RM&D) solution within your services business that is monitoring equipment operation at customer sites
  • Hidden in the installed base data management tool in your repair depots as part of service entitlement management
  • Aggregated in someone else’s cloud as part of a SaaS (software as a service) application, such as Salesforce for customer information or Workday for employee data
  • On a storage-as-a-service platform, such as Amazon S3 or Microsoft Azure Data Lake, that your marketing team chose for a major customer trial program
  • Embedded within the indecipherable log files on your servers that track user behavior across enterprise and Web applications
  • Available through your channel, marketing, service, or manufacturing partners as part of your contractual arrangements with them
  • Located within the instrumentation of your products themselves, but not yet cataloged, backhauled, and aggregated into a usable form

The list could go on, but you get the picture. You have access to more data assets than you think but have fewer under management than you hoped.

Aspiring Goliaths are increasing their focus on one particular area: data from their customers. For example, Waze enhances its own data sets through granular visibility to the activity of every user of its navigation app. This provides Waze with a cheap way to know the real-time speed of traffic in any market in which it has a critical mass of customers.

John Deere has integrated data from sensors in farmers’ fields and equipment to deliver cost-effective predictive maintenance and unlock the potential of precision farming to increase crop yields. The average farm is on its way from generating 190,000 data points per day in 2014 to a projected 4.1 million data points by 2020.

For companies such as Waze and Deere, access to granular customer data presents an enormous opportunity for cocreating new products and services. Ensure that your data asset inventory has sufficient focus on this rich source of potential data rights.

Understand Your Data Liabilities

On the right side of your data balance sheet, you probably have many more data liabilities than you’d expect. For example, you may be licensing data from third parties and incurring either in-cash or in-kind costs. Also, new data security, governance, privacy, and sovereignty regulations, such as GDPR in the European Union, make managing personally identifiable data more expensive and higher risk. In addition to Europe, governments in California, China, and India are becoming increasingly aggressive in this area.

Beyond data licensing and regulatory compliance, you almost certainly have data liabilities to your customers and suppliers in terms of what you can and cannot do with data that comes from your interactions with them. Just as with data assets, it is likely that your management of these data liabilities is far too fragmented today. That is becoming a high-risk approach.

Privacy breaches at brand-name companies, such as Equifax, Target, Saks Fifth Avenue, Panera Bread, Under Armour, and Facebook, have put data privacy front and center in the minds of customers. Companies like these are building high-value, hyperpersonalized experiences around granular, customer-identified data, so the stakes rise every time that data is compromised.

If you’re estimating the liability side of your data balance sheet, be clear about the true costs of acquiring and protecting that data. Catalog your obligations to be transparent about how data is used and secure the necessary data rights as you undertake your digital transformation. Aspiring Goliaths are pursuing new models of data anonymization and even providing customers with simple self-service portals that allow them to opt in or out at multiple levels of data sharing.

Rate the Quality of Each Data Asset

Now that you have your data balance sheet in place, it is time to rate each data asset in terms of quality. Company after company boasts about how big its big data is. “Our airplane produces multiple terabytes of sensor data per flight.” “We can see the condition of every bolt on our oil rigs.” “We have information on a hundred million customer purchases.”

While size does matter, it is just part of the story. The dirty little secret is that most of that data is unused—what we call “dark data.” It is sitting idle for a reason. Most of it is low quality. It lacks the attributes that make it useful in improving your core business and winning in adjacent markets.

Aspiring Goliaths are more sophisticated. They rate the assets on their data balance sheet against seven indicators of data quality (a simple scoring sketch in Python follows the list):

  1. Freshness. How recent is the data, and therefore, how predictive is it likely to be when applied to your current products, services, and operations?
  2. Duration. Does the data cover an extended time period that reflects the normal seasonality and business cycle variations inherent in your business?
  3. Context. Is the data tagged with metadata that makes the data valuable in understanding an end-to-end business process or customer interaction?
  4. Consistency. Is each data item collected in a consistent way over time to allow trending and correlation analyses to be run with high statistical validity?
  5. Attributability. Can the data in question be attributed to specific customers or machines, or has it been de-identified for data privacy reasons?
  6. Reach. Does your data reach beyond the installed base of your current customers, products, and services to be reflective of your industry as a whole?
  7. Exclusivity. Do you enjoy exclusive data rights, or is that data available to your current and potential competitors as well?
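
Here is one way the scorecard might look in Python. It assumes a simple 1-to-5 rating on each indicator and equal weights; your real weightings should reflect what matters most in your business.

```python
# Hypothetical scorecard: rate each data asset 1 (poor) to 5 (excellent)
# on the seven quality indicators, then average into a single score.
INDICATORS = [
    "freshness", "duration", "context", "consistency",
    "attributability", "reach", "exclusivity",
]

def quality_score(ratings):
    """Average the 1-to-5 ratings across all seven indicators."""
    missing = set(INDICATORS) - set(ratings)
    if missing:
        raise ValueError(f"Unrated indicators: {sorted(missing)}")
    return sum(ratings[i] for i in INDICATORS) / len(INDICATORS)

# Example: fresh, exclusive sensor data with limited industry reach
ratings = {"freshness": 5, "duration": 3, "context": 4, "consistency": 4,
           "attributability": 5, "reach": 2, "exclusivity": 5}
print(f"Quality score: {quality_score(ratings):.1f} out of 5")  # 4.0 out of 5
```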

Pay Special Attention to the Edge

Domo’s Data Never Sleeps analysis shows that 2.5 quintillion bytes of data are being produced each day. That is 2,500,000,000,000,000,000 bytes. Forbes reports that 90% of the data in the world today was produced in just the past two years. Gartner predicts that by 2022, 75% of enterprise-generated data will be created and processed outside of traditional data centers or cloud infrastructure. That is up from just 10% today.

Most of this data is being produced at the edge—machines, cars, phones, and sensor networks. Little of it is backhauled to traditional IT systems today. It dwarfs much of the structured data that you’ve inventoried in your data balance sheet.

A few examples. Self-driving cars harvest massive amounts of sonar, radar, and video sensor data to learn about their environments and make real-time decisions about how to drive safely. Nike has made significant investments to extend its Nike+ platform of connected sneakers and wearables to collect granular data on over seven million runners. Under Armour spent nearly $560 million to acquire MyFitnessPal and Endomondo, on top of its earlier purchase of MapMyFitness, to access data on 150 million digital fitness users, then launched Healthbox to extend its data collection into health and wellness. In industrial markets, platforms such as GE’s Predix and Hitachi’s Lumada collect and bring context to massive stores of machine and operational data.

As you finalize your data balance sheet, make sure that you have fully considered the future value of your equivalents of these edge data stores. In a digital future, you may find them more important than the ones you’ve traditionally run your business on.

Value Data Optionality

The “perpetual” in perpetual algorithmic advantage is based on the fact that more data today means better algorithms tomorrow, resulting in more data the day after that. It is a self-reinforcing cycle. The challenge is, how do you get that cycle jump-started?

How can your company gain access to enough data, fast enough, to secure an algorithmic advantage? Doing so means spending heavily now for a hard-to-model future return and, at minimum, some improvement in corporate longevity. It requires your organization to value the second- and third-order effects of assets in your data balance sheet, not just the first-order ones.

Few companies are good at valuing real options when it comes to allocating scarce human and financial capital. CFOs are notorious for bringing a show-me-the-money discipline to the annual planning and budgeting exercises that midsized and larger companies run. They expect a direct linkage between investment decisions today and near-term improvements in orders, revenue, costs, and margins.

This is the antithesis of a future defined by machine learning prowess. Artificial intelligence systems learn by being fed a large quantity of high-quality data over an extended period of time. Anyone who has driven a Tesla on autopilot can attest to this. There are no overnight successes in machine learning.

Aspiring Goliaths value the optionality of data and are willing to invest substantial capital today to secure data assets that will pay off over years or decades. A few saw this opportunity long ago.

As discussed in Chapter 3, in the 1990s, IRI and Nielsen offered sophisticated analytics and reporting to retailers for “free” so that they could aggregate the SKU-level point-of-sale data produced by the retailers’ normal checkout process. They each built billion-dollar businesses selling insights based on that data to consumer packaged-goods companies.

In the early 2000s, Microsoft’s Hotmail and Google’s Gmail both offered “free” email in return for the right to analyze every word you wrote. They sold insights based on those analytics back to advertisers in the form of highly tailored marketing campaigns. Facebook is the modern equivalent.

In the 2010s, Progressive Insurance “paid” for data on individual driver behavior by offering auto insurance discounts and “free” monitoring devices that customers inserted into their cars’ diagnostic ports. More recently, Monsanto acquired Precision Planting for $210 million and The Climate Corporation for $930 million to aggregate data on customer use of Monsanto products.

As you will see in Chapter 10, our survey shows that only 12% of large companies and 5% of small ones believe that they are using data extensively to power their innovations. Learning how to value the optionality of data is a critical step in joining that elite group.

Maximize Return on Data

When IBM’s Watson beat Ken Jennings on Jeopardy! in 2011 and Google’s AlphaGo beat Lee Sedol in Go in 2016, it might have seemed trivial at the time. Both are games, after all. However, for those who are part of the AI community, the events were watersheds. They demonstrated that machine learning had made the leap from rapidly retrieving the right information to creating entirely new insights that even the world’s best humans in those domains could not match.

Your company needs to master machine learning before it masters your industry. A robust data balance sheet and the willingness to grow it by valuing the optionality of data are useless if you cannot turn data into insights and insights into actions. The digital value stack, shown in Figure 6.2, is how you are going to maximize your return on data.

Illustration shows the three layers of the digital value stack.
The top row, Digital Customer Segments, includes four categories: Internal “Make It” Teams, Internal “Sell It” Teams, External supply chain partners, and External channel and customers.
The middle row, Continuous Machine Learning, includes five categories: Asset optimization, Operations optimization, Risk optimization, Pricing optimization, and Sales optimization.
The bottom row, Digital Data Aggregation, includes six categories: Assets, People, Processes, Purchases, Behavior, and Social.

Figure 6.2 The Digital Value Stack

We’ve already covered the bottom row, so let’s focus on the top two: understanding the customers of your digital value stack and the machine learning capability that can turn data into gold.

Digital Customer Segments

Your efforts to maximize your return on data need to focus on four customer segments: internal make-it and sell-it teams, as well as external supply chain partners and customers. Just like Watson or AlphaGo, your goal is to augment the expertise of the people in that segment in order to make better decisions faster and more often. You will be shifting your company from fact-free to fact-based operations.

First up are your internal make-it teams, which might work in functions such as product management, design, business incubation, R&D, engineering, manufacturing, and service delivery. They invent, design, develop, produce, and deliver the products and services that your customers purchase. Those teams have the potential to act as both suppliers of new data—by designing additional digital instrumentation into their offers—and consumers of algorithmic insights. Show them which capabilities customers most value and they can stop designing ones that customers do not use. Help them understand the real-life way that customers use your products and they can manufacture them to fail less often and require less service. Identify the unexpected ways that customers want to use your products and services and your make-it teams can dream up entirely new solutions to grow your business.

Second, your internal sell-it teams include functions such as sales, marketing, lead generation, pricing, and distribution channels. They work to extract the maximum gross margin from the maximum number of customers every week, month, and quarter. Show them how to calculate prices that optimize across gross margin percentage and revenue growth rate for maximum gross profit dollars. Help them optimize their digital and traditional marketing spend for the lowest cost per converted lead. Enable them to see future demand through the scoring of prospective customer activities—reading white papers, watching videos, subscribing to newsletters, attending webinars—that represent expressions of interest. Give every sales representative a next-best-action button that tells them which customer in their assigned accounts is most likely to buy and which offer is most likely to convert to a profitable sale. Propose get-well actions for end customers that have purchased an offer but have not yet consumed it. Identify the most important impediments to customers buying and consuming more, so that your sell-it teams can design out the commercial friction.
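
To make the demand-scoring idea concrete, here is a minimal sketch. The activity weights are invented for illustration; in practice you would fit them to your historical conversion data.

```python
# Hypothetical weights for expressions of interest; in practice you
# would fit these to historical conversion data, not hand-tune them.
ACTIVITY_WEIGHTS = {
    "read_white_paper": 5,
    "watched_video": 3,
    "subscribed_newsletter": 8,
    "attended_webinar": 15,
}

def lead_score(activities):
    """Sum the weights of a prospect's recorded activities."""
    return sum(ACTIVITY_WEIGHTS.get(a, 0) for a in activities)

prospect = ["watched_video", "attended_webinar", "attended_webinar"]
print(lead_score(prospect))  # 3 + 15 + 15 = 33
```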

Third, make sure your external supply chain partners can be reached via your procurement and supply chain planning teams. You might ask, “Why would I commit scarce algorithmic resources to help my suppliers?” Well, you are not doing it for free. Companies such as Walmart have become remarkably sophisticated at using data as currency in a literal way with their suppliers. Give them more granular visibility to demand so that they can avoid both excess inventories and out-of-stocks while balancing production in their factories efficiently. Provide them with detailed benchmarks about how their on-time, on-quality, on-quantity performance varies from their peers and what they can do to improve. Help them see through to end customer usage of their products so that they can design them for higher value, reliability, and performance.

Finally, external channel partners and end customers are increasingly dependent on your algorithmic insights. Help your channel partners see the most valuable ways to combine your products and services for maximum commercial velocity. Give your B2B customers visibility to the installed base of your products in their operations at a level of detail their own IT teams cannot match. Enable your best customers to evolve from time-based to condition-based maintenance for both higher uptime and lower overall maintenance costs. Provide your end customers with anonymized benchmarks for how other customers get more value from your offers more quickly, and help your channel partners target complementary products and services in those areas.

This is by no means an exhaustive list. It is merely a starting point in your efforts to prioritize the highest-value algorithmic insights within each of these four digital customer segments.

Master Machine Learning

Let’s turn our attention to the middle row of Figure 6.2—the algorithms that realize the value of your data for each of those four digital customer segments. It would not be hyperbole to say there has been an explosion of activity in this middle layer.

In the year 2000, there were just 570 patent applications that mentioned “algorithm” in their title or description. By 2015, there were 17,000 such applications, a number estimated to reach 500,000 by 2020. If you are not already focused on acquiring or building an algorithmic advantage, then you are falling behind.

Some of these algorithms are horizontal in nature and can often be sourced from third parties. Examples include remote asset monitoring solutions for services, propensity to buy models for sales, cyber intrusion detection for IT, and spam filtering for all of us.

However, many algorithms are industry specific and likely to require internal development. Examples include fraud detection for financial services, price elasticity models for packaged consumer goods, object identification for self-driving cars, predictive maintenance models for sophisticated industrial equipment, and emergency room throughput optimization in hospitals. New algorithms in healthcare are even outperforming certified radiologists in diagnosing acute conditions, such as pneumonia and stroke.

The range of algorithms is broad, but most companies focus on five algorithm categories: asset optimization, operations optimization, risk optimization, pricing optimization, and sales optimization. Note that the internal and external customer segments on the top row of Figure 6.2 seek algorithmic advantage from many of these categories.

These algorithms can be built in two different ways—by data science teams and by continuous machine learning. Both are still important, but many aspiring Goliaths are in the midst of shifting their focus from the first to the second.

Traditionally, companies seeking an algorithmic advantage have made a sizable investment in hiring a chief digital officer or chief data officer and staffing up a data science team. These highly trained (and compensated) statisticians, modelers, and programmers specialize in each of the five categories of algorithms in the middle row of Figure 6.2.

In fact, for the past five years data scientist has been one of the hottest jobs in Silicon Valley. Parents have stopped telling their kids to become doctors and lawyers so that they can focus their studies on math and statistics. As shown in data from McKinsey in Figure 6.3, demand for data scientists is dramatically exceeding supply.

Illustration shows three bars representing the demand, supply, and shortage of data scientists: estimated 2018 demand of 440,000 people, estimated 2018 supply of 300,000 people, and a resulting estimated shortage of 140,000 people.

Figure 6.3 Shortage of Data Scientists.

Sources: McKinsey; US Census Bureau; Dun & Bradstreet; Bureau of Labor Statistics.

For context, there were just 150,000 people employed in data science 10 years ago. Now the gap between supply and demand is almost that big. These individuals require a unique combination of a master’s- or PhD-level math background, advanced data modeling and programming skills, and domain knowledge of the business problem that the algorithm seeks to solve. While data science is not going away anytime soon, basing your future algorithmic advantage on the hiring of a big-data science team is going to take too long and cost too much.

The second approach has greater potential for long-term success. Continuous machine learning is, in many ways, the great equalizer. Given the broad availability of both on-premise and cloud-based machine learning tools, it is becoming possible for every small, midsized, and large company to let its data speak.

At its simplest, continuous machine learning is an automated way to find actionable patterns in large data sets. You might hear terms such as deep learning, neural networks, and cognitive models; these are specific approaches to how computers find nonintuitive patterns on their own.
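
As a toy illustration of letting the data speak, the sketch below uses scikit-learn (one of many available tools) to retrain a simple model each time a new batch of data arrives. The data is synthetic, and the pattern the model "discovers" is planted for demonstration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
model = RandomForestClassifier(n_estimators=50, random_state=0)

history_X, history_y = [], []
for batch in range(5):                       # e.g., one batch per day
    X = rng.normal(size=(200, 4))            # synthetic feature rows
    y = (X[:, 0] + X[:, 1] > 0).astype(int)  # the planted "pattern"
    history_X.append(X)
    history_y.append(y)
    # Retrain on everything seen so far: more data, better algorithm
    model.fit(np.vstack(history_X), np.concatenate(history_y))
    print(f"batch {batch}: accuracy {model.score(X, y):.2f}")
```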

While the basic math has been around for decades, an arms race is underway among Amazon, Microsoft, Google, Baidu, and Tencent to provide your company with a ready-to-deploy platform for continuous machine learning. These tech titans are taking advantage of step-function advances in computing horsepower through AI-optimized GPUs, the availability of massive integrated data sets to train new models, and ready access to AI development platforms, such as Google TensorFlow and Amazon AI.

This is great news for you and your company. The pace of innovation is rapid and accelerating. The cost of entry is coming down. If you are just starting out, self-directed platforms such as Google Cloud’s AutoML and Microsoft Azure Machine Learning allow your core developers to use your existing data to build algorithms without an army of data scientists.

As new methods such as reinforcement learning—machines training machines—continue to advance, you can expect this reduced reliance on data scientists to accelerate. The ultimate goal is to allow business people to ask the right questions of continuous machine learning platforms directly. Maybe the supply of doctors and lawyers in Silicon Valley will be safe after all.

For most aspiring Goliaths, investing in both data science and continuous machine learning is the best path to building your algorithmic advantage. Focus your data scientists on developing high-value algorithms, where their unique combination of math skills and domain knowledge is essential. In parallel, unleash continuous machine learning on those same areas, as well as the next tier of use cases for your four digital customer segments.

In the overlap areas, play king of the hill. That is, let the best approach win through ongoing A/B testing between data scientist–developed and continuous machine learning–derived algorithms. In the next-tier use cases, let time be your friend, as continuous machine learning solutions tend to get better with more time and bigger data.
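
Here is a minimal sketch of that king-of-the-hill contest: score both models on the same holdout data and let the winner serve. It assumes both models expose scikit-learn's predict_proba interface; the names and the metric are placeholders.

```python
from sklearn.metrics import roc_auc_score

def king_of_the_hill(hand_built, auto_ml, X_holdout, y_holdout):
    """Return whichever model scores higher on the same holdout set."""
    contenders = {"data-scientist model": hand_built,
                  "continuous-ML model": auto_ml}
    scores = {name: roc_auc_score(y_holdout,
                                  m.predict_proba(X_holdout)[:, 1])
              for name, m in contenders.items()}
    winner = max(scores, key=scores.get)
    print(scores, "->", winner, "stays on the hill")
    return contenders[winner]
```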

Now that you’ve built your data balance sheet, learned to value data optionality, and used the digital value stack to maximize your return on data, let’s turn to a case example of a company that is certainly using its data as currency.

The Weather Channel

New CEO David Kenny and CTO Bryson Koehler faced an enormous task—transforming The Weather Channel from a declining media business into a high-growth weather insights company. By 2013, it was too late to just build a mobile weather app that could help TWC grow beyond its cable TV roots. There were already 1,000 of those.

Kenny and Koehler settled on a bigger, hairier, more audacious goal—helping people around the world make better weather-related decisions. It would require TWC to forward integrate into industries with the most to lose when the real weather is different from what was forecasted.

TWC put a three-phase plan into action. Phase one focused on the bottom row of the digital value stack shown in Figure 6.2. TWC doubled down on its unique combination of data sets and scientific experts to create the most accurate weather forecasting engine on the planet. TWC’s army of 200 meteorologists classified 108 different forecastable weather patterns. To extend its data balance sheet, TWC acquired Weather Underground for its vast pool of crowdsourced microclimate data. Extending TWC’s forecasting prowess required rearchitecting its IT infrastructure, consolidating from 13 disparate data centers to a single cloud and big-data infrastructure. The hard work paid off with the cost of a million application programming interface (API) calls falling from $70 to just $1.

In phase two, TWC shifted its focus to the digital customer segments layer of the stack. Instead of internal and external user groups, Kenny and Koehler prioritized vertical industries based on the business risk those industries faced in getting weather predictions wrong. Aviation is a good example of one of the industries targeted, as predicting turbulence keeps passengers safe and minimizes the litigation risk that airlines face. To drive this, TWC brought in creative thinkers such as Chris Huff, who added deep domain knowledge about the retail and packaged consumer goods industries. Together the team instilled a culture of experimentation and innovation through employee-led hackathons and special incentive programs. This extended TWC’s algorithmic advantage into areas such as the integration of weather into health and fitness platforms.

In phase three, TWC opened up the data-rich platform it had created to a broad ecosystem of people interested in building algorithms and apps around the weather. Getting innovators excited was not very hard. Most people think working with weather is cool and the ability to touch the lives of a billion or more people around the globe is highly motivating. The success was remarkable. With over 25,000 partners making 26 billion daily calls to its APIs, TWC is one of the largest API platforms in the world. By experimenting with the highest-potential innovations inside its flagship weather app, TWC was able to test new concepts quickly with minimal up-front investment. One example was linking the app more tightly with the Apple Watch. This delivered value to Apple users but also added a new data asset to TWC’s data balance sheet—access to hundreds of millions of barometric pressure sensors within the Apple devices.

Koehler admits that it was an intense three-year journey: any faster and TWC would not have been able to see its Big I and Little I innovations achieve their promise; any slower and the new leadership team might have lost momentum and the necessary support from employees and shareholders.

TWC showed how using data as currency pays off. The business is now a growing and vibrant unit in IBM’s Watson division and is recognized as the leading weather-insights company, powering over 150,000 airline flights a day, providing energy-demand forecasts to utility providers, delivering important insights to global insurance companies, and helping billions of people plan their lives around the weather.1

Others are following in TWC’s footsteps. For example, Walmart’s Data Café has made the company’s 40 petabytes of retail sales data available to innovators across its business. Data Café leverages 200 internal and external data streams to cut the cycle time for data-driven solutions from three weeks to 20 minutes. Innovations incubated through Data Café include the Social Genome project, which predicts sales based on conversations on social media; Shopycat, which analyzes how shopping habits are influenced by friends; and Polaris, which analyzes search terms on websites.

Rule 3: Company and Career Readiness

With this case example of TWC in mind, it is time to assess your readiness to use data as currency.

Company Readiness Self-Assessment

To complete your company readiness self-assessment, thoroughly read the grid in Figure 6.4, then identify the level of capability that your company has demonstrated within each row.

Illustration shows the company readiness self-assessment grid for Rule 3: Use your data as currency.

Figure 6.4 Rule 3 Company Self-Assessment Grid.

Career Readiness Self-Assessment

Repeat the exercise above with your career in mind. What roles are you playing in helping your company use its data as currency? Mark your self-assessments on the grid shown in Figure 6.5.

Illustration shows the career readiness self-assessment grid for Rule 3: Use your data as currency.

Figure 6.5 Rule 3 Career Self-Assessment Grid.

Rule 3 Readiness Summary

Now that you’ve completed your company and career self-assessments for Rule 3, fill in your readiness summary in Figure 6.6. Again, if you are doing these self-assessments online at www.goliathsrevenge.com, this readiness summary will be produced for you automatically.

Illustration shows an example of the Rule 3 (use your data as currency) readiness summary: a two-column table for entering results, with column headers “Company Readiness” and “Career Readiness.”
The rows are: (1) High-quality big data assets; (2) Manageable data liabilities; (3) Place value on data optionality; (4) Broad customer segment focus; (5) Building data science team; (6) Mastering machine learning; (7) Overall Rule 3 readiness, each rated for both company and career readiness.

Figure 6.6 Rule 3 Readiness Summary.

You are halfway there—three of the six rules completed. Take a breath, and then it’s time to open up your innovation as TWC did using Rule 4: Accelerate through innovation networks.

Note
