Key Points and Questions
Now that you know how to get the data straight and how to visualize that data to arrive at answers, it is time to start building from there. Answering one strategic business question with the aid of great visualizations does not make you a hero instantly. So it is time to move on, increase the effort, and learn from your mistakes to make every new piece of actionable intelligence better, easier, and more visually appealing than the last one.
That is the importance of visualization. You can have important data all cleaned up, full, and complete. If nobody sees it, it has as much added value as an empty bag.
If the data is accurate and timely, as we saw in Chapter 3, then you can unlock its dormant value by making it accessible. Do this by—you guessed it—visualizing the data. Now, the definition of cutting-edge visualization changes every year. What is hip and modern today looks hopelessly outdated tomorrow. That's the reason you will not find images of visualizations in this chapter. Instead, you will find examples of how visualizations develop. And more important, you will learn what elements are in all good visualizations.
Visualizations need to answer “Where are we today?” “Are we happy or sad?” and “How do we win?”
After putting in place a solid data foundation and starting up our first project at Estée Lauder, we started to deliver rapid and iterative actionable intelligence. We used a methodology I termed the Project Vision SWAT Iteration Framework (see Figure 4.1).
This fast-paced methodology delivers quick results by asking and answering:
Does this work? Yes, says Jack Levis, director of process management at UPS and creator of the amazing ORION project.
Levis manages a team of mathematicians who built the algorithms that help UPS shave millions of miles off delivery routes—ORION, or On-Road Integrated Optimization and Navigation.
When he first brought up the project, his management was skeptical and so were the drivers. He set up competitions between the drivers and the system to refine the model and variables and to acquire more data.
Following an iterative approach and engaging the drivers paid off.
“Starting small shows the project is feasible, optimal, and, more important, implementable. So we created and tested lots of prototypes,” Levis told me.1
Levis also shared his three key steps to success with me:
Even with an efficient framework to bring about results, the ORION project was not without its challenges. One of the reasons management was skeptical was that the data seemed impossible to collect.
For example, only the drivers knew external information such as:
Despite the challenges, the ORION program has been successful. Some of the early benchmarks include:
The ORION program also provides some best practices that any company can apply:
Now that you have established a solid start on your data foundation and an understanding of business discovery, you should have the following support and capabilities in place:
With all of the above, we had enough ingredients for a successful intelligence project.
Inject speed into the process.
Earl Newsome, former chief technology officer, Estée Lauder
I was waiting for an elevator, and who walked up but Earl Newsome, a man who knows how to transform organizations fast! But at Estée Lauder, the IT department he was working to change was struggling.
Newsome shared with me how tough it was to reduce the red tape and get people moving. Then he said a phrase that has stuck with me for years: “We need to inject speed into the process!”
Speed in executing each step is critical. We established regular 45-minute ideation sessions to agree on visualizations or to review iterative answers. This framework helped deliver answers in hours and days, satisfying the business's need for speed.
The exciting part of intelligence work is the activity of turning data into easy-to-follow visual representations and then creating stories about how the business is performing today versus opportunities for better performance tomorrow.
Instead of looking at reams of spreadsheets, we delivered the ability to slice and dice intelligence. We brought together streams of data into a single tool to show opportunities for improvement with easy-to-see red, yellow, and green indicators.
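The logic behind such red/yellow/green indicators is simple to sketch. The snippet below is a hypothetical illustration, not the actual tool we built; the function name and the 10 percent yellow band are my own assumptions:

```python
def rag_status(actual, target, yellow_band=0.10):
    """Map a metric against its target to a red/yellow/green indicator.

    Illustrative thresholds (assumed, not from the original tool):
    green at or above target, yellow within 10 percent below target,
    red anything worse.
    """
    if actual >= target:
        return "green"
    if actual >= target * (1 - yellow_band):
        return "yellow"
    return "red"

# Example: units shipped versus plan.
print(rag_status(105, 100))  # green
print(rag_status(95, 100))   # yellow
print(rag_status(80, 100))   # red
```

The value of collapsing many data streams into three colors is that a planner can scan dozens of such indicators at a glance and drill down only where the color demands it.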
As the users saw the tools during our ideation sessions, they began to develop their own uses and purposes for the intelligence. This enabled us all to run faster toward monetizing the benefits.
The team and I were dedicated to the ideation sessions. While I was on vacation on the beach in Barcelona, we held sessions on one tool. When I was on a cruise ship in the Mediterranean Sea, we held a critical session with the head of supply planning.
Helena May, of the MLH group, held ideation sessions with the demand planning head during the day, at night, and on weekends.
The pace was intense, but the results were worth it.
The users saw where they could use the intelligence on a regular basis. They began to report the results of using the tools to us and regularly asked for improvements. Because we were following the iterative SWAT framework, we delivered the changes quickly. The speed of enhancement was a significant departure from the standard budgeting and development methodology. It was a refreshing change.
Instead of looking at printouts of data from the past, business users received graphical views of past and future performance. Our users were excited, and adoption of the tools increased.
A supply planner in North America said, “This is the first time I can make changes to the plans three months ahead of time, instead of being blamed for shortfalls that happened three months ago.”
Here, I'd like to make a distinction between ideation sessions and requirements gathering. Ideally, ideation sessions are aimed at the users, driven by the users, and demand total involvement every step of the way. Table 4.1 shows the differences between an ideation session and a typical requirements-gathering phase.
Table 4.1 Ideation Session versus SDLC Requirements Gathering
| Ideation Session | Standard Software Development Lifecycle (SDLC) Requirement Gathering Phase |
User Involvement | High user involvement; high commitment level, involved in all sessions from the start of the project | Little user involvement; low commitment level, may be absent for certain sessions |
Process | Brainstorming and open discussions; fast, proactive, on-the-fly problem solving | Written project brief reports; slow, reactive, waits for approvals from related parties |
By having the right structure in place, we were able to accelerate from the “beginner” stage through “localized success” to the “enterprise actionable intelligence capabilities” stage of actionable intelligence.
One such structure is the Human-Centered Design Toolkit, by International Development Enterprise (IDE). The kit was written to depict the three-phase process (Hear, Create, Deliver) that has been commonly used in multinational corporations (see Figure 4.2).
Hear Phase
Create Phase
All these steps should be executed in days, especially prototyping. The resources and permission should be available to execute good ideas as they are created. Enable fast prototyping!
Deliver Phase
In the deliver phase, ideas and solutions created will be implemented to deliver value and results in a sustainable way:
Delivering is important, as is getting the right answers to many strategic business questions. Anyone can do this, because there are a great number of possibilities for deploying actionable intelligence.
The technology is there, you know the steps to creating actionable intelligence, and your team is ready to step it up. It's now your duty to build actionable intelligence momentum and make the effort reach its tipping point. Grow the project organically until it grows by itself because the organization sees that your project is indispensable to almost every business unit. After reaching that phase, the enterprise actionable intelligence stage mentioned in Chapter 1, the possibilities are endless.
Take the following example: You are driving a car and want to know whether you will arrive safely and on time. It's an old problem with an old solution. A speedometer visualized the data point of your current speed, and the gas gauge showed whether you still needed to make a stop. Everything else you had to do by yourself. That would be the first iteration of visualizing the available data.
In the 1960s, the U.S. Navy envisioned a new opportunity to move ships effectively and came out with satellite navigational systems. By the time they were made fit for consumers, these systems ran on the Global Positioning System (GPS) and allowed drivers to have a box in their car that calculated where they were, how long the drive was going to take, how far they still had to drive, and whether they were keeping to the speed limit.
Iteration two, the navigational system added on top of iteration one, was elegantly designed and visualized all the information it gathered in such a way that the user needed only one glance at the little box to see what needed to be seen.
Even though iteration two doesn't show whether the user will be there on time, the navigational system will come pretty close. Users can answer that question for themselves, and hooking up the schedule to the navigational system could lead to visualizing the answer to that question.
However, there is one word in the strategic question—“Will I make it safely and on time?”—that I have left out so far. Currently, no system takes safe driving into account.
So that brings us to iteration three—one we haven't realized yet. To show whether you can drive safely and be on time, we need to layer in traffic conditions on the road, weather conditions, police speed traps, and so on. We also need to know how likely the car is to break down—on major points like tires, brakes, and lights. Some systems already provide parts of this information, but most stop before reaching this point. By visualizing all of it, you could give a basic answer as to whether you will arrive safely and on time (and without tickets).
We can go deeper still to iteration four. It is possible to build systems that monitor whether you have been on a certain stretch of road before, whether your average speed on that stretch is up, and how many accidents have occurred on that same stretch. By monitoring biometrics you could also tell whether you are intoxicated, have a low pulse, feel sleepy, or have an illness that interferes with your ability to drive. Let's call this harder-to-get data really big data.
While you are driving you need a very fast answer. The answer needs to be simple, visual, and nondistracting. You can't afford to be distracted by, for example, your current heart rate, on top of everything else being displayed in your car. When you need the answer, you need it fast. You can't do the whole analysis while driving.
So this leaves us with one question: What would this visualization look like? I'd suggest a big view with clear colors indicating whether you are making it and whether you are safe. If you want to know by how much, you should be shown an estimate of the difference in travel time. If you want to drill down, you should have the option to say to your little black box “tell me the factors delaying me,” and the box would list them.
Whether you were at iteration one or iteration four, there were some commonalities in the example.
As soon as machines take over the process of interpreting the answer, making the decisions, and reflecting on those decisions, management can be fired and robots can take over the business. Until then, you need to be able to interpret the story behind the data, the story behind what the numbers tell you.
Why are we visualizing information in a particular way? Why do bar charts and pie charts work better than tables? The answer is simple: it's because of how our minds work. A larger area of the brain is activated when we look at data ordered in a way that comes naturally to us than when we look at raw data. Raw data, lists, and tables require us to think in order to understand the data and piece together the relationships in our minds. Visualizations let you see the relationships visually and instantly. This works because our minds use heuristics to determine what goes together, and data visualizations use those same heuristics to show people what they need to know.
Early in the twentieth century the Gestalt school of psychology, developed by Christian von Ehrenfels and inspired by Hume, von Goethe, and Kant, determined how the mind sees relationships between points and pieces of drawings. This led to mapping some important heuristics that to this day remain valid and the basis of many visualizations.
Table 4.2 provides a short list of the heuristics the mind uses.
Table 4.2 Human Heuristics
Heuristic | Definition | Example |
Proximity | Objects that are close to each other are assumed to belong together. | You probably feel a connection to the nearest city, as opposed to some place on the other side of the country. |
Similarity | Objects that are perceived as alike are assumed to belong together. | Tigers and lions go together much better than tigers and fish; they are similar. |
Enclosure | Objects enclosed by a line or plane are assumed to belong together. | All the things on a desk are assumed to belong to one person; everything not on it is assumed to belong to someone else. |
Closure | Objects that are not fully finished are assumed to be finished in the mind. | If you stare at a crescent moon, you will likely see the outline of a full moon. The mind fills it in. |
Continuation | Objects that disappear partially behind other objects are continued in the mind. | To see this, all you have to do is shove one item in front of another. |
Connection | Objects linked together by anything whatsoever are assumed to belong together. | Connection is very powerful; you will associate almost anything that is physically connected as a group. |
These heuristics are interesting and explain why certain visualizations work so well. If you simply keep in mind the list of commonalities we discussed earlier when creating visualizations, you will almost always adhere to one or more of these heuristics automatically, because your brain knows what makes information easiest for it to understand.
In the end you are still the user of the tools you create to visualize the business. The tools will not make your decisions nor give you the relationships. They are called tools because they give you the means to find relationships and make the decisions.
One of the biggest pitfalls here is that you still have to find the relationships for yourself. This sounds easy but can lead to what practitioners call the post hoc fallacy: seeing relationships that aren't there, just because the data suggests them. This nasty trap can be avoided by clear and careful thinking when evaluating relationships suggested by the data. Does the relationship make sense? Is it causal one way or the other? Or is there just a correlation? The famous example from statistics is this: on days when people buy more ice cream, more people drown. So people drown because of ice cream? No! People drown because there are a lot of people in the sea on hot days, and on hot days more ice cream is sold. There is a relationship, but no causation, between ice cream sales and drowning. So be aware; use your logic.
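The ice cream example is easy to demonstrate with a small simulation. The sketch below uses entirely synthetic data and made-up coefficients: a hidden confounder (temperature) drives both series, so they correlate strongly, yet the relationship disappears once the confounder is removed:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic daily data: temperature drives BOTH series.
temp = rng.uniform(10, 35, 365)                      # daily high, in Celsius
ice_cream = 50 + 10 * temp + rng.normal(0, 30, 365)  # sales (units)
drownings = 0.2 * temp + rng.normal(0, 1.0, 365)     # incidents

def corr(x, y):
    """Pearson correlation coefficient between two series."""
    return np.corrcoef(x, y)[0, 1]

def residuals(y, x):
    """What remains of y after removing its linear dependence on x."""
    slope, intercept = np.polyfit(x, y, 1)
    return y - (slope * x + intercept)

# The raw correlation looks impressive...
print(round(corr(ice_cream, drownings), 2))

# ...but it vanishes once the confounder (temperature) is removed:
# correlate the residuals after regressing each series on temperature.
print(round(corr(residuals(ice_cream, temp), residuals(drownings, temp)), 2))
```

A strong raw correlation paired with a near-zero partial correlation is exactly the signature of the ice-cream-and-drowning trap.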
Earlier I mentioned that statistical forecasts that do not include the right amount of data and influences are good for matching the past but not for predicting the future. Many organizations report “forecast accuracy” to decimal points, as if to appear very precise. Business leaders aren't satisfied because the forecast is often wrong, and errors carry a cost, whether in poor customer service or in increased expenses.
This was made very clear in the April 6, 2014, New York Times article “Eight (No, Nine!) Problems with Big Data,” by Gary Marcus and Ernest Davis.2
[Even] when the results of a big data analysis aren't intentionally gamed, they often turn out to be less robust than they initially seem. Consider Google Flu Trends, once the poster child for big data. In 2009, Google reported—to considerable fanfare—that by analyzing flu-related search queries, it had been able to detect the spread of the flu as accurately and more quickly than the Centers for Disease Control and Prevention. A few years later, though, Google Flu Trends began to falter; for the last two years it has made more bad predictions than good ones.
Visualization should first be used to identify a situation holistically, with a wider field of vision. Too often, scientists attempt to home in on a correlation right away. This is like looking at the stars through a telescope when there is an elephant in your way. You find very interesting constellations, until you step back and realize it's not the stars you are seeing but an elephant's hide!
The authors of the New York Times article also warn about making too many connections.
If you look 100 times for correlations between two variables, you risk finding, purely by chance, about five bogus correlations that appear statistically significant—even though there is no actual meaningful connection between the variables. Absent careful supervision, the magnitudes of big data can greatly amplify such errors.
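That arithmetic is easy to verify with a quick simulation, sketched below on purely random data; the critical t value of 2.011 (48 degrees of freedom, two-sided 5 percent test) is hardcoded to keep the example dependency-free:

```python
import math
import numpy as np

rng = np.random.default_rng(42)

n_tests, n_obs = 100, 50

# Two-sided critical value of Pearson's r at the 5% level for
# n_obs = 50: derived from t_crit ~ 2.011 via r = t / sqrt(df + t^2),
# where df = n_obs - 2.
t_crit = 2.011
r_crit = t_crit / math.sqrt((n_obs - 2) + t_crit**2)

false_positives = 0
for _ in range(n_tests):
    x = rng.normal(size=n_obs)  # pure noise
    y = rng.normal(size=n_obs)  # unrelated pure noise
    r = np.corrcoef(x, y)[0, 1]
    if abs(r) > r_crit:
        false_positives += 1

# Typically around 5 of the 100 unrelated pairs look "significant."
print(false_positives)
```

Run enough uncorrected tests and “significant” findings are guaranteed; multiple-comparison corrections such as Bonferroni exist precisely for this reason.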
Organizations can avoid these problems by maintaining focus on the big picture and the original strategic question.