CHAPTER 4

Using Metrics as Indicators

To keep things simple, thus far I’ve focused only on the following basic concepts:

  • Metrics are made up of basic components: data, measures, information, and other metrics.
  • Metrics should be built from a root question.
  • It’s more important to share how you won’t use a metric than how you will.

This chapter introduces another basic concept about creating and using metrics—metrics are nothing more than indicators. That may seem to be a way of saying they aren’t powerful, but we know that’s not the case. Metrics can be extremely powerful. Rather, the concept of metrics as indicators warns us not to elevate metrics to the status of truth.

Metrics’ considerable power is proven by how much damage they can do. Metrics’ worth is rooted in their inherent ability to ignite conversations. Metrics should lead to discussions between customers and service providers, between management and staff. Conversations should blossom around improvement opportunities and anomalies in the data. The basis for these conversations should be the investigation, analysis, and resolution of indicators provided through metrics.

Metrics should be a catalyst for investigation, discussion, and only then action. The only proper response to metrics is to investigate: a directed and focused investigation into the truth behind the indicator.

Facts Aren’t Always True

If you search the internet for things we know to be true (supported, of course, by data), you’ll eventually find more than one site that offers evidence “debunking” past and present-day myths. What was thought to be a fact is proven to be an incorrect application of theory or the misinterpretation of data.

Health information is a ripe area, full of things people once believed to be true but now believe the opposite. Think about foods that were considered good for you ten years ago but today are not. Or foods that were considered not to be good for you, which now are considered healthy fare. Are eggs good for you or not? The answer not only depends on who you ask, but when.

  • The US Government’s “food pyramid” changes periodically.
  • Who doesn’t remember the scenes of Rocky downing raw eggs?
  • It seems like each year we get a new “diet” to follow—high protein, high cholesterol, low fat, no red meat, or fish…the arguments change regularly.

One good argument on the topic of old facts not lining up with new truths is that the facts themselves don’t change; only our interpretation of them does.

This misrepresentation of metrics as fact often shows up when only a portion of the metric is relayed to the viewer.

A business example is one a friend of mine loves to tell about a service desk analyst who was, by all accounts, taking three to five times as long to close cases as the other analysts. The “fact” was clear: he was less efficient. He was closing less than half as many cases as his peers and taking much longer to close each one. His “numbers” were abysmal.

The manager of the service desk took this “fact” and made a decision. It may not have helped his thought process that this “slow” worker was also the oldest and had been on the service desk longer than any of the other analysts. The manager made the mistake of believing the data he was looking at was a “fact” rather than an indicator. And rather than investigate the matter, he took immediate action.

He called the weak performer into his office and began chewing him out. When he finally finished his critique, he gave the worker a chance to speak, if only to answer this question (a veiled threat): “So, what are you going to do about this? How are you going to improve your time to resolve cases? I want to see you closing more cases, faster.”

Showing a great deal more patience than he felt at the moment, the worker replied, “My first question is, how is the quality of my work?”

“Lousy! I just told you. You’re the slowest analyst on the floor!”

“That’s only how fast I work, not how good the quality is. Are you getting any complaints?”

“Well, no.”

“Any complaints from customers?”

“No.”

“How about my coworkers? Any complaints from them?”

“No,” said the manager. “But the data doesn’t lie.”

“You’re right, it doesn’t lie. It’s just not telling the whole story and therefore it isn’t the truth.”

“What? Are you trying to tell me you aren’t the slowest? You are the one who closes the cases. Are you just incompetent?” The manager was implying that the analyst wasn’t closing his cases once the work was done.

“No, I am the slowest,” admitted the worker. “And no, I’m not incompetent, just the opposite. Have you asked anyone on the floor why I’m slow?”

“No—I’m asking you.”

“Actually, you never asked me why. You started out by showing me data that says I’m ‘slow’ and ‘inefficient,’ and now ‘incompetent.’”

The manager wasn’t happy with the turn this had taken. The employee continued, “Did you check the types of cases I’m closing? I’m actually faster than most of my coworkers. If you looked at how fast I close simple cases, you’d see that I’m one of the fastest.”

“The data doesn’t break out that way,” said the manager. “How am I supposed to know the types of cases each of you closes?”

The employee replied, “Ask?” He was silent a moment. “If you had asked me or anyone else on the floor why I take longer to close cases and why I close fewer cases you’d find out a few things. I close fewer cases because I take longer to close my cases. The other analysts give me any cases that they can’t resolve. I get the hardest cases to close because I have the most experience. I am not slow, inefficient, or incompetent. Just the opposite. I’m the best analyst you have on the floor.”

The manager looked uncomfortable.

The employee continued, “So, tell me, what do you want me to change? If you want, I won’t take any cases from the other analysts and I’ll let the customers’ toughest problems go unresolved. Your call. You’re the boss.”

Needless (but fun) to say, the boss never bothered him about his time to resolve again. And luckily for all involved, the boss did not remain in the position much longer.
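
The analyst’s defense is easy to check, provided the data carries enough context. Below is a minimal sketch, using invented case records and a hypothetical difficulty field (the manager’s tool didn’t break cases out this way), showing how the same data supports both the manager’s “fact” and the analyst’s truth:

  # A minimal sketch with invented data: each case records the analyst who
  # closed it, a hypothetical difficulty tag, and the hours it took to close.
  from collections import defaultdict

  cases = [
      {"analyst": "veteran", "difficulty": "hard",   "hours": 5.0},
      {"analyst": "veteran", "difficulty": "hard",   "hours": 4.0},
      {"analyst": "veteran", "difficulty": "simple", "hours": 0.5},
      {"analyst": "peer",    "difficulty": "simple", "hours": 1.0},
      {"analyst": "peer",    "difficulty": "simple", "hours": 1.2},
      {"analyst": "peer",    "difficulty": "simple", "hours": 0.9},
  ]

  # The manager's view: one average per analyst, all case types lumped together.
  overall = defaultdict(list)
  for case in cases:
      overall[case["analyst"]].append(case["hours"])
  for analyst, hours in overall.items():
      print(f"{analyst}: {sum(hours) / len(hours):.1f} hours per case overall")

  # The segmented view: an average per analyst per difficulty level.
  segmented = defaultdict(list)
  for case in cases:
      segmented[(case["analyst"], case["difficulty"])].append(case["hours"])
  for (analyst, difficulty), hours in sorted(segmented.items()):
      print(f"{analyst} ({difficulty}): {sum(hours) / len(hours):.1f} hours per case")

Lumped together, the veteran averages roughly three times as long per case; segmented, he is the fastest on simple cases and the only one closing the hard ones. Same data, very different story.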

The only proper initial response to metrics is to investigate.

Metrics are not facts; treating them as such overvalues them. This is dangerous when leadership decides to “drive” decisions with metrics. When we elevate metrics to truth, we stop looking deeper. We also risk making decisions and taking actions based on information that may easily be less than 100 percent accurate.

Metrics are not facts. They are indicators.

When we give metrics an undeserved lofty status (as truth instead of indicators) we encourage our organization to “chase data” rather than work toward the underlying root question the metrics were designed to answer. We send a perfectly clear, and equally wrong, message to our staff that the metrics themselves are what matter. We end up trying to influence behavior with numbers, percentages, charts, and graphs.

One of the major benefits of building a metric the way I suggest is that it tells a complete story in answer to a root question. If you’ve built it well, chances are, it’s accurate and comprehensive. It is the closest thing you’ll get to the truth. But, I know from experience, no matter how hard I try there is always room for error and misinterpretation. A little pause for the cause of investigation won’t hurt—and it may help immensely.

Metrics Can Be Wrong

Since there is the possibility of variance and error in any collection method, there is always room for doubting the total validity of any measure. If you don’t have a healthy skepticism of what the information says, you will be led down the wrong path as often as not. Let’s say the check-engine light in your car comes on. Let’s also say that the car is new. Even if we know that the light is a malfunction indicator, we should refrain from jumping to conclusions. My favorite visits to the mechanic are when they run their diagnostics on my check-engine light and determine that the only problem is with the check-engine light itself.

Perhaps you are thinking that the fuel-level indicator would be a better example. If the fuel gauge reads near empty, especially if the warning light accompanies it, you can have a high level of confidence that you need gas. But the gas gauge is still only an indicator. Perhaps it’s a more reliable one than the check-engine light, but it’s still only an indicator. Besides the variance involved (I noticed that when on a hill the gauge goes from nearly empty to nearly an eighth of a tank!), there is still the possibility of a stuck or broken gauge.

I understand if you choose to believe the gas gauge, the thermometer, or the digital clock—which are single measures. But, when you’re looking at metrics, which are made up of multiple data, measures, and information, I hope you do so with a healthy dose of humility toward your ability to interpret the meaning of the metric.

This healthy humility keeps us from rushing to conclusions or decisions based solely on indicators (metrics).

I’ve heard (too often for my taste) that metrics should “drive” decisions. I much prefer the attitude and belief that metrics should “inform” decisions.

Accurate Metrics Are Still Simply Indicators

Putting aside the possibility of erroneous data, there are important reasons to refrain from putting too much trust in metrics.

Let’s look at an example from the world of Major League Baseball. I like to use baseball because, of all the major sports, baseball is easily the most statistically focused. Fans, writers, announcers, and players alike use statistics to discuss America’s pastime. Statistics are arguably an intrinsic part of the game.

To be in the National Baseball Hall of Fame is, in many ways, the pinnacle of a player’s career. Let’s look at the statistics of one of the greatest players of all time. In 2011, I was able to witness Derek Jeter’s 3,000th hit (a home run), one of the accomplishments that can essentially assure a player’s position in the Hall of Fame (Jeter was only the 28th player of all time to achieve this). The question was immediately raised: could Jeter become the all-time leader in hits? The present all-time leader had 4,256 hits! Personally, I don’t think Jeter will make it.

The all-time hits leader was also voted an All-Star 17 times in a 23-year career—at an unheard-of five different positions. He won three World Series championships, two Gold Glove Awards, one National League Most Valuable Player (MVP) award, and a World Series MVP award. He also won Rookie of the Year and the Lou Gehrig Memorial Award and was selected to Major League Baseball’s All-Century Team. According to one online source, his MLB records are as follows:

  • Most hits
  • Most outs
  • Most games played
  • Most at bats
  • Most singles
  • Most runs by a switch hitter
  • Most doubles by a switch hitter
  • Most walks by a switch hitter
  • Most total bases by a switch hitter
  • Most seasons with 200 or more hits
  • Most consecutive seasons with 100 or more hits
  • Most consecutive seasons with 600 at bats
  • Only player to play more than 500 games at each of five different positions

This player holds a few other all-time records as well, along with numerous National League records, including most runs and most doubles.

In every list I could find, he was ranked among the top 50 baseball players of all time. In 1998, The Sporting News ranked him 25th, and the Society for American Baseball Research placed him 48th.

So, based on all of this objective, critically checked data, it should be easy to understand why this professional baseball player was unanimously elected to the National Baseball Hall of Fame on the first ballot for which he was eligible.

But he wasn’t elected.

His name is Pete Rose. He is not in the Baseball Hall of Fame and may never get there. If you look at all of the statistical data that the voters for the Hall use, his selection is a no-brainer. But the statistics, while telling a complete story, leave out the one input the voters did take into account: he broke one of baseball’s not-to-be-breached rules by gambling, legally and illegally, on professional baseball games.

In the face of the overwhelming “facts” that Pete Rose should be in the Baseball Hall of Fame, the truth is in direct contrast to the data.

Even if we look at well-defined metrics that tell a full story, they are only indicators in the truest sense. If you fully and clearly explain the results of your investigation, you complete the metric by explaining the meaning of the indicator. You explain what the metrics indicate so that better decisions can be made, improvement opportunities identified, or progress determined. You are providing an interpretation—hopefully one backed by the results of your investigation.

No matter how you decorate them, metrics are only indicators, and as such they should elicit only one initial response: to investigate.

Indicators: Qualitative vs. Quantitative Data

The simple difference between qualitative and quantitative data is that qualitative data is made up of opinions, while quantitative data is made up of objective numbers. Qualitative data is more readily accepted as an indicator, while quantitative data is more likely to be mistakenly viewed as fact, with no further investigation deemed necessary. Let’s look at these two main categories of indicators.

Qualitative Data

Customer satisfaction ratings are opinions—a qualitative measure of how satisfied your customer is. Most qualitative collection tools consist of surveys and interviews. They can take the form of open-ended questions, multiple-choice questions, or ratings. Even observations can be qualitative, as long as they don’t involve capturing “numbers” (such as counting the number of strikes in baseball or the number of questions about a specific product line). When observations capture the opinions of the observer, we still have qualitative data.

Many times, qualitative data is what is called for to provide answers to our root question. Besides asking how satisfied your customers are, some other examples are:

  • How satisfied are your workers?
  • Which product do your customers prefer, regular or diet?
  • How fast do they want it?
  • How much money are your customers willing to pay?
  • When do your customers expect your service to be available?
  • Do your workers feel appreciated?

No matter how you collect this data, it is made up of opinions. It is not objective data. It is not, for the most part, even numeric.
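
Even when we force opinions into numbers, they remain opinions. Here is a minimal sketch, assuming a hypothetical 1-to-5 satisfaction scale and invented responses, of how a rating question yields a tidy-looking statistic that is still nothing more than summarized opinion:

  # Invented responses on a hypothetical 1-to-5 satisfaction scale, paired
  # with the open-ended comment each respondent gave.
  responses = [
      (5, "Great, exactly what I needed"),
      (4, "Pretty good overall"),
      (2, "The interviewer rushed me"),  # colored by the interview, not the product
      (5, "Fine, I guess"),              # the rating and the comment barely agree
  ]

  ratings = [rating for rating, _ in responses]
  print(f"Average satisfaction: {sum(ratings) / len(ratings):.1f} out of 5")

  # The average looks objective, but every input to it is an opinion; the
  # comments are where the qualitative payload actually lives.
  for rating, comment in responses:
      print(f"{rating}: {comment}")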

Some analysts, especially those who believe the customer is always right, believe that qualitative data is the best data. These analysts believe that through open-ended questions you receive valuable feedback on your processes, products, and services. Since the customer is king, what better analytical tool is there than capturing the customers’ opinions of your products and services?

But is a survey response truly the respondent’s opinion?

Someone could rate your product high or low on a satisfaction scale for many reasons other than the product’s quality. Some factors that can color a qualitative evaluation of your service or product include:

  • The time of day the question was asked
  • The mood the respondent was in before you asked the question
  • Past experiences of the respondent with similar products or services
  • The temperature of the room
  • The lighting
  • The attractiveness of the person asking the question
  • Whether the interviewer has a foreign accent

The list can go on forever. The problem is that these results are not facts. They are still only opinions, and in most cases there is low confidence that respondents even provide their actual opinion.

Quantitative Data

Quantitative data usually means numbers: objective measures without emotion. This includes all of the gauges in your car. It also includes information from automated systems such as automated-call tools, which tell you how many calls were answered, how long it took to answer them, and how long each call lasted.
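
As a minimal sketch, with an invented call log (any real automated-call tool has its own export format), those three measures fall straight out of the data:

  # Invented call log: (seconds until answered, call length in seconds);
  # None for the answer time means the caller hung up before being answered.
  calls = [(12, 300), (45, 180), (None, 0), (8, 620), (30, 240)]

  answered = [call for call in calls if call[0] is not None]
  waits = [wait for wait, _ in answered]
  durations = [length for _, length in answered]

  print(f"Calls offered:           {len(calls)}")
  print(f"Calls answered:          {len(answered)}")
  print(f"Average speed to answer: {sum(waits) / len(waits):.0f} seconds")
  print(f"Average call duration:   {sum(durations) / len(durations):.0f} seconds")

The arithmetic is objective; what the numbers mean (is a 24-second average speed to answer good? why did one caller hang up?) still has to be investigated.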

The debate used to be over which form of data was better. It was argued that quantitative data was superior because it avoided the natural inconsistencies of data based on emotional opinions.

Quantitative data avoids the variances we saw with qualitative data and gets directly to the things that can be counted. Some examples in the customer satisfaction scenario could include:

  • The number of customers who bought your product
  • The number of times a customer buys the product
  • The amount of money the customer paid for your product
  • What other products the customer bought
  • The number of product returns

The proponents of quantitative information would argue that this is much more reliable and, therefore, more meaningful data.

I’m sure you’ve guessed that neither camp is entirely correct. I’m going to suggest using a mix of both types of data.

Quantitative and Qualitative Data

For the most part, the flaws of qualitative data are best alleviated by including some quantitative data, and vice versa. Qualitative data, taken in isolation, is hard to trust because of the many factors that can skew the information you collect. If a customer says that they love your product or service but never buy it, the warm fuzzy you receive from the positive feedback will not help when the company goes out of business. Quantitative data on the number of sales and repeat customers can help provide faith in the qualitative feedback.

If we look at quantitative data by itself, we risk making some unwise decisions. If our entire inventory of a test product sells out in one day, we may decide that it is a hot item and we should expect to sell many more. Without qualitative data to support this assumption, we may go into mass production and invest large sums. Qualitative questions could have informed us of why the item sold out so fast. We may learn that the causes for the immediate success were unlikely to recur and therefore we may need to do more research and development before going full speed ahead. Perhaps the product sold out because a confused customer was sent to the store to buy a lot of product X and instead bought a lot of your product by mistake. Perhaps it sold quickly because it was a new product with a novel look, but when asked, the customers assured you they’d not buy it again—that they didn’t like it.

Not only should you use both types of data (and the accompanying data collection methods), but you should also look to collect more than one of each. And of course, once you do, you have to investigate the results.
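
One minimal sketch of that mix, assuming invented customers with a hypothetical satisfaction rating (qualitative) recorded beside a repeat-purchase count (quantitative), is to put the two side by side and flag the disagreements for investigation:

  # Invented data: each customer's satisfaction rating (qualitative, 1-5)
  # alongside their repeat-purchase count (quantitative).
  customers = {
      "A": {"satisfaction": 5, "repeat_purchases": 6},
      "B": {"satisfaction": 5, "repeat_purchases": 0},  # loves it, never buys it
      "C": {"satisfaction": 2, "repeat_purchases": 7},  # complains, keeps buying
  }

  for name, data in customers.items():
      says_happy = data["satisfaction"] >= 4
      acts_happy = data["repeat_purchases"] >= 2
      if says_happy != acts_happy:
          # Neither number is "the truth"; the mismatch is the indicator.
          print(f"Customer {name}: opinion and behavior disagree -- investigate")

Customer A’s numbers agree with each other; customers B and C are the ones worth a conversation.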

Even in the case of automated-call software, the results are only indicators.

Quantitative data, while objective, is still only an indicator. If you don’t know why the numbers are what they are, you will end up guessing at the reasons behind them. And if you guess at the causes, you are guessing at the answer.

Metrics (indicators) require interpretation to be used properly.

The great debate over which type is better is unnecessary. You should use some of each in your recipe.

Recap

The following are principles to remember:

  • Metrics are only indicators.
  • Metrics are not facts. Even when you have a high level of confidence in their accuracy, don’t elevate them to the status of truth.
  • The only proper response to a metric is to investigate.
  • When you tell the story by adding prose, you are explaining what the metrics are indicating so that better decisions can be made, or improvement opportunities identified, or progress determined.
  • There are two main categories of indicators: qualitative and quantitative. Qualitative data is subjective in nature and usually an expression of opinion. Quantitative data is objective in nature and compiled using automated, impartial tools.
  • Metrics by themselves don’t provide the answers; they help us ask the right questions and take the right actions.
  • Metrics require interpretation to be useful.
  • Even the interpretation is open to interpretation—metrics aren’t about providing truth, they’re about providing insight.