5

THE PRINCIPLES OF
GOOD MEASUREMENT

THE HUMAN-RESOURCE performance-measurement system you use plays a key role in determining HR’s place in your firm—including securing HR’s credibility. It also influences the organization’s ability to capitalize on HR as a strategic asset. For these reasons, you must ground that measurement system in some essential principles. Let’s take another look at HiTech, the company we first met in chapter 3, to see what can happen when a firm ignores these principles.

HiTech is a large manufacturer headquartered in the western United States. Its innovative products have earned it recognition as an industry leader. Like many companies, HiTech for several years had emphasized the importance of people as a source of competitive advantage. However, although the company had prominently emphasized some “people policies,” it had not articulated the cause-and-effect relationships that might link its HR architecture to customer and shareholder value. As a first step toward measuring these relationships, HiTech’s HR leadership conducted a feasibility study to explore what it would take to develop a strategic measurement system. The following are the problems they encountered in this project. As we’ll see, this list provides insight into the necessary ingredients for an effective measurement effort.

Available data—not relevant data—drove HiTech’s key decisions. Because HiTech had not articulated the processes through which people create value throughout the business, it did not manage (and therefore did not measure) the relevant drivers within that value chain. Not surprisingly, the measures it did have available were designed for other purposes. The company used people measures from the traditional annual employee survey, for example, and financial measures that included budget variance. The feasibility project focused on one of the few business units that currently collected data on all three components in the value chain. To that end, HR chose a service/call-center operation because of the close relationship between front-line employees (who handle HiTech’s service contracts) and customers. Though the call center was a convenient source of data, it was clearly not HiTech’s “core business.” Nevertheless, HR had no other data to work with.

By incompletely articulating its causal story, HiTech undermined its measurement results. By relying on measures that were available rather than appropriate, HiTech found it very difficult to draw even tentative inferences about important relationships. For example, the call center generates revenue through a fee-for-service arrangement with the regional sales office. It may also generate revenue through new sales of additional products to existing customers. Call-center “profitability” is thus the net of the combined revenue from these two sources minus budgeted costs. The company assessed customers’ perception of value by using two available efficiency measures (call volume and speed of answer) and by gauging service accuracy and customer satisfaction.

This is a typical set of metrics that you would find in a company that uses measurement to monitor activities against some standard, particularly cost control. These metrics are not so helpful to an organization that wants to understand the value-creation process. For example, the HiTech call center really has three ways to drive “profitability.” First, it can increase revenues by providing outstanding service, which over time will add to the overall product value perceived by HiTech’s customers. But there is likely to be a significant delay between call-center performance and changes in product purchases. Indeed, HiTech might more accurately link its call-center performance to product sales rather than service billing, since service billing is ultimately an internal cost that HiTech would like to reduce. In other words, the call center’s gain becomes a charge against the product sales business unit.

Second, the call center generates revenue through service sales that lead to top-line growth, that is, new service sales that are not tied to a preexisting customer purchase. Here, HiTech could expect the time lag between the customer’s service experience and buying decisions to be relatively short.

Third, HiTech included budget variance as a measure of cost control (and thus a measure of profits, indirectly). While cost control is one traditional measure of financial performance, it may well conflict with “new sales” revenue. For example, measures of efficiency and speed may drive down costs, but they do not necessarily represent the most effective way to generate new revenues. HiTech’s simple causal model combines these conflicting effects into one relationship.

In its measurement system, HiTech included what we might term the operational or internal process measures of a Balanced Scorecard (speed and volume) in the “customer” segment, along with customer satisfaction. A more appropriate model would have had those process measures driving customer satisfaction, which in turn would drive financial performance. This more realistic causal connection would have given managers actionable results that they could then use to adjust the implementation of the call center’s strategy at the appropriate point in the system.

Finally, the people metrics HiTech used highlighted a common problem many HR managers face when trying to make business sense of their HR measurement system. HiTech relied on employee attitude surveys that contained a substantial number of questions. However, these questions represented just one large and ill-defined measure of employees’ attitudes toward the company and their supervisors. Because the survey responses could not be divided meaningfully into separate determinants of employee strategic behavior (competencies, motivation, strategic focus), HiTech had no idea how to align the HR system to drive change. Even if the senior HR management team believed that people could generate value at HiTech, the measures they used provided little insight into how the HR system should be aligned to influence people results.

HiTech didn’t communicate HR’s strategic value up front. HiTech’s HR VP required the cooperation of other business units to collect the necessary data for the feasibility study. However, because the organization had not gone through Steps 2 through 5 in our model (described in chapter 2), the VP had no way to convince these other units that the results of his project would make a difference in the performance of line operations. Therefore, these units saw the project as a diversion and not a priority. As a result, the feasibility-project team collected useful data on just a few units out of a much larger population. This limitation made it nearly impossible for the senior HR management team to draw meaningful inferences about the potential relationships in the model.

Multiple problems compounded to limit the value of HiTech’s feasibility study. The feasibility study’s small sample size was compounded by the CHRO’s over-reliance on available data. Moreover, those data were not necessarily available in the appropriate time periods. The unit of observation was the supervisory group within the call center. But the measures that the CHRO chose to focus on required data that became available on different cycles. For example, the operational process measures of speed, efficiency, and volume were available monthly. Revenue and customer satisfaction measures were gathered quarterly. And the employee survey data were collected only annually. These disparities in timing forced the feasibility-project team to annualize all the measures so as to conform to the employee data cycle. Thus, instead of having thousands of unit observations over several years, the project team had access to fewer than thirty.

Taken together, these flaws in HiTech’s measurement-system feasibility study hamstrung the project. Not surprisingly, the results of the pilot project provided little to reinforce HR’s status as a strategic asset or to guide HR’s strategic effectiveness. The pilot project was not extended to a more comprehensive analysis, and HiTech missed an opportunity to sharpen the strategic focus of HR.

WHY BETTER MEASUREMENT?

A sound performance-measurement system does two things. First, it improves HR decision-making by helping you focus on those aspects of the organization that create value. In the process, it provides you with feedback that you can then use to evaluate current HR strategy and predict the impact of future decisions. A well-thought-out measurement system thus acts as both a guide and a benchmark for evaluating HR’s contribution to strategy implementation.

Second, it provides a valid and systematic justification for resource-allocation decisions. HR can’t legitimately claim its share of the firm’s resources unless it can show how it contributes to the firm’s financial success. An appropriately designed performance-based measurement system lets you explicate those links and thus lay the groundwork for investment in HR as a strategic resource, rather than treating HR as a cost center to be retrenched.

For example, suppose you measure your firm’s standing on the High-Performance Work System index (described in chapter 2). The HPWS index is a summary indicator of the “performance” orientation of key HR practices. You find that your firm’s HR system falls in the forty-fifth percentile among all firms and the fifty-sixth percentile in your industry group. A good measurement system will let you predict how much improvement in firm performance you can expect if you boost your HR system to a higher target-percentile level. Or, let’s say you find that your firm is already in the ninetieth percentile on the HPWS index. You can then calculate how much of the company’s shareholder value is attributable to your outstanding HR system, compared to the value created by an HR system at the fiftieth percentile.

This approach is a sophisticated form of benchmarking, because it goes beyond measuring just the “level” of the HR system. It lets you attach dollar values to the gap between your firm’s current HR system and some target level. Still, it suffers from the same weakness as any benchmarking approach for measuring HR’s strategic influence. It doesn’t tell you much about how narrowing that gap actually creates the predicted gains in shareholder value. In effect, there’s a “black box” between HR and firm performance, one that prevents HR from gaining the credibility it needs to become a true strategic partner.

Ultimately, you must have a persuasive story about what’s in the black box. You must be able to throw back the cover of that box and reveal a plausible process of value creation from HR to firm performance. The strategic HR architecture we have described, aligned with the strategy implementation process, forms such a story. Telling this story—through the measurement system you design—will help you identify actionable goals and performance drivers.

THE MEASUREMENT CHALLENGE:
ATTRIBUTES AND RELATIONSHIPS

When we speak of measurement as a strategic resource for HR managers, what do we really mean? For example, many firms identify one or two “people-related” measures, such as employee satisfaction, in a balanced measure of corporate performance. Line managers, even HR managers, might be held accountable for these measures, which could also be incorporated into the managerial bonus plan. Such measures capture the quantity, or level, of a particular attribute—in this case, employee satisfaction. How much is there? Does it change over time? How does it compare with that of other firms, or across SBUs? Most of us would assume that more of this attribute is a good thing. We say “assume,” because in many firms there is probably little evidence supporting the link between employee satisfaction and firm performance. Such organizations emphasize the level of the attribute, rather than the relationship between the attribute and some strategic outcome (performance drivers or firm performance).

Good measurement requires an understanding of and expertise in measuring both levels and relationships. Too many HR managers under pressure to demonstrate the HR-firm performance relationship rely on levels of HR outcomes as proxies for measures of that relationship. In other words, they can’t show the direct causal links between any HR outcome and firm performance, so they select several plausible HR measures as candidates for strategic drivers—and then simply assert their connection to firm performance.

This inability to demonstrate these relationships is sometimes obscured by diagrams that vaguely suggest cause and effect. Figure 5-1 shows a common example of what might be called a superficial strategy map. A firm might include one or two measures under each of these three categories and do a good job of measuring the levels of those attributes. But what does doing well on those measures really mean? The arrows imply that better performance on the “People” dimension improves performance on the “Customer” dimension, which in turn will improve “Profits.” But the real story of value creation in any firm is much more complicated, so this “story” is incomplete. It provides only the most superficial guide to decision making or performance evaluation. It’s only marginally better than traditional measures that make no effort to incorporate a larger strategic role for HR. Boxes and arrows give the illusion of measurement and understanding, but because the relationship measures are so limited, such diagrams—and the thinking behind them—can actually help to undermine HR’s confidence and credibility.

Figure 5-1 A Superficial Strategy Map


Even though relationship measurement is the most compelling assessment challenge facing HR managers today, attribute measures should form the foundation of your measurement system. Why? Because evidence of a strong relationship between A and B is worthless if the measures of A and B themselves are worthless. But words such as “worthless” or “useful” or “appropriate” aren’t precise enough for our discussion about the elements of good measurement. In fact, there are well-defined principles delineating effective measurement practice. Understanding those principles lets you take that essential first step in developing a strategically focused HR measurement system.

NUMBERS WITH MEANING

Let’s begin with a simple definition of what we mean by measurement. Typically, measurement is defined as the assignment of numbers to properties (or characteristics) of objects based on a set of rules. Numerical representation is important, because often we are interested in quantities. But we are interested in quantities that have meaning. For example, knowing that average employee satisfaction is 3.5 on a 5-point scale is numerical, but it doesn’t have much inherent meaning. Is 3.5 good or bad? Or consider an employee turnover rate of 15 percent. Percentage points have more inherent meaning than 5-point scales, but simply observing the number doesn’t reveal much about whether 15 percent is a problem.

To add meaning to these levels, we need context. This is the appeal of a benchmark. If we find that our 3.5 on a 5-point scale is considerably better than our industry peers’ ratings, we can begin to attach some significance to that measure. However, we might also observe that our 3.5 is considerably below our own historical level on this measure. We’re doing better than our peers but not maintaining our historical performance. Of course, in both cases we have made interpretations about relative value only. That is, we are better or worse than some standard. In neither case do we have any measure of managerial value. In other words, what difference does it make whether we have a 3.0 or 4.0 value on a 5-point employee satisfaction scale? To have managerial value, the measure must be expressed in numerical units that have inherent performance significance (such as dollars). Barring that, we have to be able to translate the measure into performance-relevant units.

Consider this simple example: Suppose you want to demonstrate the dollar cost (new hiring and training costs, lower productivity) associated with each additional percentage point in your firm’s turnover. To get managerial value out of this exercise, you would have to link HR measures to performance drivers elsewhere in the firm, and ultimately to firm performance. Recall the Sears story. The key “people” measures in Sears’ measurement model reflected employees’ attitudes toward their jobs and the company overall. Sears could have benchmarked those attitudes against similar levels at other companies, or perhaps against Sears’ own historical norms. From this, the company might have identified a gap. However, then it would have had to ask, so what? Unlike most companies, Sears had an answer to this question, because it could translate changes in those attitude measures into changes in firm performance. The “people” numbers thus had business meaning.
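To make the turnover example above concrete, here is a minimal sketch in Python. The headcount and per-departure cost figures are invented placeholders; you would substitute estimates from your own hiring, training, and productivity data.

```python
# A minimal sketch of translating turnover into dollars.
# All figures (headcount, per-departure cost) are hypothetical.
headcount = 2_000             # employees in the unit (assumed)
cost_per_departure = 15_000   # hiring + training + lost productivity, in dollars (assumed)

def cost_of_turnover_point(headcount, cost_per_departure):
    """Dollar cost of one additional percentage point of annual turnover."""
    departures_per_point = headcount * 0.01   # 1 point = 1% of headcount departing
    return departures_per_point * cost_per_departure

print(cost_of_turnover_point(headcount, cost_per_departure))  # 300000.0
```

Even a rough figure like this turns a benchmark gap into a number that a line manager can weigh against the cost of closing it.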

Measuring relationships gives meaning to the levels, and to potential changes in those levels. However, those relationships are very likely to be firm-specific. Therefore, the more the magnitude of those relationships (the impact of one measure on another) is unique to your firm, the less useful it is for you to benchmark on levels. Benchmarking on measurement levels assumes that the relationships among these levels are the same in all firms, and hence that they have the same meaning in all firms. That’s the same as saying that the strategy implementation process is a commodity, or at least that HR’s contribution to that process is a commodity. For this reason, we find benchmarking on HR strategic measures to be misguided at best and counterproductive at worst.

MEASURES VERSUS CONCEPTS OR VISIONS

For our purposes, the “objects” in our definition of measurement are a firm’s HR architecture and strategy implementation systems. The “properties” of those objects that most interest us are the value-creating elements in those two systems—in other words, the HR deliverables and the firm’s performance drivers that the deliverables influence. We can think of these properties as abstract concepts, but also as observable measures. First, an organization or top management team can identify key links in the value-creation chain by taking what we call a “conceptual” or “vision” perspective. For example, the simple relationship between employee attitudes and firm performance serves as the foundation of the Sears measurement model described earlier. Sears refined its model further with brief vision statements about the important attributes of each element in its model. If you recall, the company’s top management decided that Sears must be a compelling place to work, a compelling place to shop, and a compelling place to invest (the “three C’s”). As another, more specific example, a retail bank that we’ve worked with identified “superior cross-selling performance” as a key performance driver.

Such concepts and visions—let’s refer to them collectively as “constructs”—are properties of the strategy implementation process. However, they are so abstract that they provide little guidance for decision making or performance evaluation. To illustrate, identifying “superior cross-selling performance” as a key performance driver may take things one step beyond the vision stage, but it’s still too conceptual to be operational. What does it mean? How will we know it when we see it? Will two different managers both know it when they see it? In short, how do we measure it?

Compelling and easy-to-grasp constructs are important because they help you capture and communicate the essence of powerful ideas. They’re like simple but evocative melodies that everyone can hum. Nevertheless, they are not measures. Rather, they constitute the foundation on which you build your measures. Clarifying a construct is the first step in understanding your firm’s value-creation story. But you must then know how to move beyond the construct to the level of the measure.

One way to detect a good measure is to see how accurately it reflects its underlying construct. Earlier, we said that a measure of the relationship between A and B is worthless if the underlying measures of A and B themselves are worthless. A or B would be worthless if they did not reflect the constructs behind them. For example, if Sears measured the construct “compelling place to work” simply by assessing the level of employee satisfaction with pay, the measure would not have very much relevance. Why? Because it omits key dimensions of the underlying idea it is designed to tap, such as employees’ understanding of the business strategy or their relationships with supervisors.

One way to avoid this kind of mistake is to use multiple measures that reflect different dimensions of the same construct. In Sears’ case, managers used a seventy-item survey, which they then distilled down to ten items as their measure of “compelling place to work.” Next they consolidated those ten items along two dimensions—employee attitudes about the job and employee attitudes about the company. Figure 5-2 illustrates this technique. This approach gave the organization an explicit way to assess how well it was realizing its vision of being a “compelling place to work.”

Figure 5-3 illustrates another problem that can arise in choosing metrics. In the figure, notice that the measure does not correspond to its underlying construct for two reasons. First, the measure doesn’t fully capture all of the properties of the construct of interest. This “deficiency” is the dark area on the left. Second, the measure is capturing something beyond the construct of interest. In other words, the measure is contaminated (lighter area on the right). This kind of measurement error is all too common. For example, recall the retail bank that identified “cross-selling performance” as a key performance driver. How should the firm measure this construct? It might use total sales, under the assumption that employees or branches with more cross-selling skill would have higher total sales. But total sales would also include sales other than those derived from cross-selling performance by tellers; those other data would thus contaminate the metric. What about assessing “total number of different products sold per customer,” or “new sales to existing customers”? In either of these cases, the bank would still have to develop a measure that tapped the important attributes of the performance driver in question without blurring the picture with unrelated influences.

Figure 5-2 An Example of Multiple Measures Reflecting Different Dimensions of the Same Construct: A Compelling Place to Work

Responses to these 10 questions on a 70-question employee survey had a higher impact on employee behavior (and, therefore, on customer satisfaction) than the measures that were devised initially: personal growth and development and empowered teams.


Source: Adapted from Anthony J. Rucci, Steven P. Kirn, and Richard T. Quinn, “The Employee-Customer-Profit Chain at Sears,” Harvard Business Review 76, no. 1 (January–February 1998): 90.

These sorts of measurement errors severely reduce the value you can derive from your measurement system. If you use a deficient measure, it’s very likely that employees will ignore or misinterpret a particular performance driver. For example, if a key driver is “positive customer buying experience,” you might use “time with customer” as a measure. Indeed, market research shows that customers appreciate it when sales staff do not pressure them to make a quick purchase. On the other hand, if this is your only measure of the customer’s buying experience, salespeople might be tempted to needlessly drag out their encounters with customers. It’s still true that what gets measured, gets managed. Simply put, we can’t measure A and hope for B.1

METRICS THAT MATTER

Suppose you’ve developed a clear strategy map describing your firm’s strategy implementation process, identified the key performance drivers involved, and even have a good idea of what measures you might use to capture the HR enablers of those drivers. You still have several important decisions to make regarding the structure of those measures—decisions that will dramatically affect their eventual usefulness. The measurement process is not an end in itself. It has value only if its results provide meaningful input into subsequent decisions and/or contribute to more effective performance evaluation. Therefore, as you think about the choice and form of a particular measure, stop for a moment and think carefully about what you would do with the results. Imagine receiving your first report summarizing this measure. What key decisions will these results inform? Will another manager, particularly outside HR, consider recommendations based on this measure to be persuasive? Would these results provide a compelling foundation for a resource-allocation decision within your firm?

Figure 5-3 Misalignment of Construct and Measure, Causing Contamination/Deficiency


We have defined “measurement” as the process of assigning numbers to properties of objects by following certain rules. Numerical measures are appealing because they describe quantities, which play a central role in most decisions. But not all measures provide information about quantities. Here are some pointers to keep in mind as you choose metrics for your own system:

Nominal Measures. Nominal measures are the lowest level of measurement and tell us nothing about quantity of a particular attribute. They simply indicate differences or categorizations across certain properties. For instance, classifying employees by gender indicates a difference between males and females. It doesn’t say anything about whether one category is “more” or “less” than the other on the property of gender. Nominal measures are useful only for counting. Any numbers attached to these categories are used as labels, as in “category 1” or “category 2.” In HR, gender counts would most likely be used to assess compliance activities, such as adherence to EEO policies, but they would have little value in measuring HR as a strategic asset.

Ordinal Measures. Ordinal measures represent the next level up of measurement. They provide the first, but least-sensitive, measure of quantity. Think of ordinal measures most easily as rank-order assessments. If we know that A exceeds B on the underlying property in question, we can “rank” A above B. We just don’t know by how much. In addition, we know that if B is greater than C, then A is also greater than C. We can say nothing, however, about how the difference between A and B compares to the difference between B and C. Rank-order measures are probably most useful in performance evaluations, such as “good,” “better,” and “best.” Promotion recommendations provide another apt example: The top candidate is better than the rest, but this top ranking says nothing about how much better. (Note that for the purposes of succession planning, this may not matter.)

Interval Measures. Interval measures are an improvement beyond ordinal measures because they let us assume that the interval between “scores” of 1 and 2 is equal to the interval between 2 and 3. Many common business performance measures expressed in time, dollars, units, market share, or any combination of their ratios are interval measures. For instance, a 1-point percentage change in market share means the same number of customers going from 34 to 35 percent as it does going from 67 to 68 percent. (Note that these examples are also ratio measures, which we describe next.) The more common—and purest—form of interval measure is one of those scales on which “1” means “strongly agree” and “5” means “strongly disagree.”

Ratio Measures. In the cases of dollars, time, and market share just described, you can see that ratio scales have an important advantage over interval scales, because they have a true zero point. This point of reference lets you make meaningful comparisons between two values. For example, you could describe one result as two-thirds the quantity of another result. Ratio measures are also appealing because the units of measure tend to have inherent meaning (dollars, number of employees, percentages, time, etc.). Finally, these measures are relatively easy to collect. (Note, though, that just because they’re readily available doesn’t necessarily mean that they’ll accurately reflect the underlying concept or vision that you’re trying to assess—as we saw earlier.)
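A small illustration may help fix these distinctions. The sketch below, in Python with invented data, shows which operations each level of measurement licenses: counting for nominal, ordering for ordinal, differences for interval, and ratios for ratio measures.

```python
import statistics

gender = ["F", "M", "F", "F"]        # nominal: categories only
print(gender.count("F"))             # 3: frequencies are meaningful; "more/less" is not

ranks = {"A": 1, "B": 2, "C": 3}     # ordinal: order, but not distance
print(ranks["A"] < ranks["B"])       # True, yet (2 - 1) vs. (3 - 2) tells us nothing

satisfaction = [3, 4, 2, 5]          # interval (1-5 scale): equal intervals
print(statistics.mean(satisfaction)) # 3.5, but "a 4 is twice a 2" is not a valid claim

revenue = [120.0, 60.0]              # ratio: true zero point
print(revenue[0] / revenue[1])       # 2.0: one result really is twice the other
```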

Ideally, you will develop a measurement system that lets you answer questions such as, how much will we have to change x in order to achieve our target change in y? To illustrate, if you increase training time by 20 percent, how much will that change employee performance and, ultimately, unit performance? Or, if you reduce turnover among key technical staff in R&D by 10 percent, how long before that action begins to improve the new-product-development cycle time? A measurement system that can provide this kind of specificity is not easy to develop and, indeed, may be beyond the reach of some firms. But measurement quality is a continuum, not an absolute. As with most decisions, developing a strategic HR measurement system involves trade-offs. To make the correct trade-off, you need to choose the point along the measurement-quality continuum that you think your firm can reasonably achieve.
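As a simple illustration of the “how much x for a target change in y” question, the sketch below fits a line to hypothetical unit-level data and inverts the slope. All numbers are invented, and a real application would rest on a defensible causal model, not just a fitted slope.

```python
import numpy as np

# Hypothetical unit-level data: training hours (x) and a unit performance index (y).
x = np.array([10, 12, 15, 18, 20, 25, 30], dtype=float)
y = np.array([71, 74, 76, 80, 81, 86, 90], dtype=float)

slope, intercept = np.polyfit(x, y, 1)  # simple linear fit: y ~ slope*x + intercept

target_gain = 5.0                  # desired improvement in y
required_dx = target_gain / slope  # change in x implied by the fitted slope
print(f"slope = {slope:.2f}; need roughly {required_dx:.1f} more training hours")
```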

MEASURING CAUSATION

Why are accounting numbers and financial measures so compelling? It’s not so much that they guide decision making, but that they are expressed in units that directly reflect the bottom line—dollars. We can object to the supposed shortsightedness of the “bean counters,” but there is still something to the adage that “a dollar saved is a dollar earned.” As we’ve seen, this characteristic makes it particularly challenging to manage intangible assets (such as human capital), for which you can quantify costs much more easily than benefits. For example, how would you measure the value of developing and implementing a new competency model? As with many large investments, you may not realize the benefits for several years, and even then they might manifest themselves only indirectly, through improved levels of performance drivers elsewhere in the firm. Since HR will always tend to be further upstream in the value-creation process, measuring the value of human resource decisions means assessing their impact on strategic drivers that are linked more closely—if not directly—to the bottom line.

Quantifying these relationships is by no means an easy task. However, even if you can’t empirically verify a five-link chain of causation from HR to firm performance in your organization, establishing HR’s influence on key interim performance drivers (such as customer retention or R&D cycle time) has clear financial implications. As HR managers validate an increasing number of such links, they begin to establish the central connections between HR and firm performance. Having systematic and quantifiable evidence of HR’s contribution to seven out of twenty strategic performance drivers, for example, is not the complete story of HR’s strategic influence, but it is a significant improvement over zero out of twenty!

But what does “measuring a relationship” actually mean? Terms such as association, correlation, or causation might come to mind—though they are sometimes used too loosely and aren’t always helpful. Two variables are related when they vary together, but you may not know for sure that one actually causes the other. You don’t have the luxury of arguing over such nuances, though. You have to make decisions, and your colleagues expect those decisions to produce results. At some point, your job requires you to draw a causal inference about the relationship between a decision and its result. After all, you’re not interested in whether a mere “association” exists between a particular incentive system and employee performance. You need to know whether the system in question will produce a change in employee performance and, if so, by how much.

In short, you need relationship measures that are “actionable.” One of the most common measures of a statistical relationship is the correlation coefficient. Ranging from –1.00 to +1.00, correlation coefficients describe the extent to which two variables change together. Unfortunately, correlation coefficients have little actionable value. First, they are not expressed in units that have any inherent meaning. To illustrate, how would you interpret a correlation coefficient of .35? Has your CEO ever asked you to describe HR’s contribution in terms of correlation, or its equivalent statistical term “explained variance”? Second, correlation coefficients typically describe the relationships between just two variables. Since most business outcomes have more than one cause, these measures simply can’t capture the complexity of real-world questions. For example, suppose you’re the head of HR at a major retailer, and you’re interested in the relationship between hours of sales training and the customer buying experience. Suppose further that the store outlets offering more training have also implemented a new computer system that reduces customer transaction time by 30 percent. The variables “training time” and “customer satisfaction” might well show a strong positive correlation. However, much of that correlation may be due to the influence of the new technology!

So what are the alternatives? There are many to choose from—but all of them have a couple of important features. For one thing, unlike the simple correlations just described, they all measure relationships from a multivariate, rather than bivariate, perspective. This means that if you were interested in understanding the individual effect of a particular HR deliverable on a performance driver, the measure of that relationship would accurately reflect the independent effect of that individual HR deliverable. Moreover, these causal models measure relationships in actionable terms. For example, you need to know that a 20-percent change in competency A will increase employee cross-selling performance by $300 per employee per week, not that the two are “positively and significantly correlated.”
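The simulated sketch below illustrates the difference between the two perspectives, using the retailer example from above. The data are fabricated so that stores with the new computer system also offer more training hours; the bivariate correlation absorbs the system’s effect, while a regression on both drivers recovers training’s independent effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Hypothetical store-level data. Stores with the new computer system
# (the confounder) also happen to offer more training.
new_system = rng.integers(0, 2, n)                     # 0/1 indicator
training = 5 + 10 * new_system + rng.normal(0, 2, n)   # hours, correlated with system
satisfaction = 60 + 0.5 * training + 8 * new_system + rng.normal(0, 3, n)

# Bivariate view: the correlation of training with satisfaction is
# inflated, because it absorbs the system's effect.
print(np.corrcoef(training, satisfaction)[0, 1])

# Multivariate view: regress satisfaction on BOTH drivers. The training
# coefficient now estimates its independent, actionable effect
# (about 0.5 satisfaction points per training hour in this simulation).
X = np.column_stack([np.ones(n), training, new_system])
coef, *_ = np.linalg.lstsq(X, satisfaction, rcond=None)
print(coef)  # [intercept, training effect, system effect]
```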

So, you can operationalize your causal inferences—you just need to carefully consider the plausible alternatives to the HR effect you are interested in. For x to be a cause of y, for example, you have to be confident that the effect on y is not due to some influence other than x. If you can keep those other influences from varying, your confidence in your causal inference will increase. You will also be able to express your inference in actionable terms.

Measuring Causal Linkages in Practice

Let’s take a look at how some firms devised ways to measure causal linkages.

THE EXPERIENCE AT GTE

GTE provides a very interesting illustration of how an organization can estimate linkages across several performance drivers in a strategy map. GTE’s Network Services unit (approximately 60,000 employees) “hypothesized” that market share was driven by customer valuation of its service, which in turn was driven by customer service quality, brand advertising, and inflation. The driver (the leading indicator) for customer service was a set of strategic employee behaviors focusing broadly on employee engagement. GTE HR created what it called the “employee engagement index” (EEI) based on a subset of seven questions from the GTE employee survey as a measure of these strategic behaviors.

The analysis supported the hypothesis and demonstrated the wisdom of HR’s “balanced” approach to performance measurement and management. For example, GTE found that a 1 percent increase in the EEI resulted in nearly a ½ percent increase in customer satisfaction with service. In other words, GTE examined a key section of its “strategy map” and explicitly tested its hypothesis that employee behaviors are indirect leading indicators of key strategic measures (market share).

GTE was able to do this for three reasons. First, unlike at HiTech, the HR department had a clear story in mind of how employee behaviors actually drive strategy in its organization. Second, HR recognized the need to collect and merge information from multiple sources and multiple time periods. Third, it had access to the technical expertise necessary to make these statistical estimates.2
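We don’t know the details of GTE’s estimation, but a relationship stated as “a 1 percent increase in x yields a ½ percent increase in y” is an elasticity, and one common way to estimate an elasticity from unit-period data is a log-log regression. The sketch below uses simulated data with a built-in elasticity of 0.5; everything in it is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300  # hypothetical unit-quarter observations

# Simulate an engagement index (EEI) and a satisfaction measure whose
# true elasticity with respect to EEI is 0.5.
eei = rng.uniform(50, 90, n)
satisfaction = 2.0 * eei ** 0.5 * np.exp(rng.normal(0, 0.02, n))

# Regressing log(y) on log(x) recovers the elasticity as the slope:
# a 1% change in EEI maps to roughly a 0.5% change in satisfaction.
slope, intercept = np.polyfit(np.log(eei), np.log(satisfaction), 1)
print(f"estimated elasticity = {slope:.2f}")
```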

THE EXPERIENCE AT SEARS

Sears was one of the first companies to actually quantify the hypotheses in a strategy map. It has further refined its firmwide work-shop-invest model to include a focus on specific relationships within stores (see figure 2-6 in chapter 2 for an example of its full-line store strategy map). For example, at Sears the Brand Central department specializes in consumer durables (TVs, refrigerators, etc.). These items tend to be expensive and complex, are purchased infrequently, and require high levels of prepurchase advice from salespeople, who are paid on commission. In contrast, in the Women’s Ready-to-Wear (RTW)/Intimate Apparel department, products tend to be inexpensive and uncomplicated, customers generally make their own selections with limited input from salespeople, and customers tend to purchase items more frequently. Here the sales associates are paid on an hourly basis.

Steve Kirn, VP for innovation and organizational development, and his staff wanted to know: Do the relationships among the Work, Shop, and Invest categories differ between these two departments? Because Sears collects data on each of these elements by department, they were able to generate some surprising answers. The willingness of customers to recommend Sears to others as a place to shop (which Sears calls customer advocacy) is a key driver of profitability. For example, in Women’s RTW/Intimate Apparel, a 1 percent increase in customer advocacy was linked to a 7.4 percent increase in revenue, and in Brand Central to a 4 percent increase. However, the drivers of customer advocacy differed across departments. In the RTW/Intimate category, working conditions and a belief that the company’s pricing is a competitive strength significantly affected overall attitude toward Sears and had a favorable impact on customer advocacy. In Brand Central (a commission-based category), pay and a willingness to recommend Brand Central emerged as significant drivers. The presence of attentive and responsive managers was a core driver of sales associate attitudes and, ultimately, economic value across all of the departments studied. Such analyses are critical for helping Sears to gain an increasingly sophisticated understanding of its strategy map and to help implement that strategy faster.

DRILLING DEEPER AT SEARS

A fuller understanding of the relationships between people, strategy, and performance may also require some innovative thinking in the analysis of data. At Sears, customer satisfaction is a key driver of store performance, not only because satisfied customers are more likely to become repeat customers, but also because they are more likely to recommend Sears to others as a good place to shop. Thus, as we described earlier in this chapter, customer advocacy is a key driver of profitability at Sears. But as Sears found, the relationship between customer satisfaction and advocacy is nonlinear. For example, when customers rated their overall satisfaction with the shopping experience as a “10” on a scale of 1 through 10, 82 percent of them were likely to recommend Sears to friends or family—a key driver of business success in retailing. However, when customers rated Sears a “9,” only 33 percent were likely to recommend Sears as a place to shop. While Sears managers initially believed that a “9” on a 10-point scale was a high rating on customer satisfaction, the analyses showed otherwise. Thus, satisfied customers were not enough—what Sears needed were enthusiastic customers who would drive referrals. Understanding these relationships helped Sears managers understand how much customer satisfaction was “enough.”
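Detecting that kind of nonlinearity requires examining the advocacy rate within each satisfaction rating rather than fitting a single straight line through all the data. A minimal sketch, with invented customer records:

```python
import numpy as np

# Hypothetical customer records: overall satisfaction rating (1-10) and
# whether the customer would recommend the store (1 = yes, 0 = no).
ratings = np.array([10, 10, 9, 9, 9, 10, 8, 9, 10, 7, 9, 10])
recommend = np.array([1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1])

# Advocacy rate per rating level. A single linear fit would smooth over
# the jump between a "9" and a "10" that Sears discovered.
for r in sorted(set(ratings.tolist())):
    mask = ratings == r
    print(r, recommend[mask].mean())
```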

Increasing Your Confidence in Causal Relationships

Despite the wide range of influences on any management phenomenon, the question remains: Is it really possible to isolate the effect of a particular HR management policy or practice on firm performance? When you’re dealing with complex, living systems in the real world, it’s not possible to completely isolate variables. In even the most rigorous social-science laboratory experiments, certain factors still lie outside the researchers’ control. The best you can do is to improve your confidence in such judgments. Here are some points of encouragement to keep in mind:

Just because it can, doesn’t mean it does. Just because an organizational outcome can be influenced by a wide range of other influences doesn’t necessarily mean that it is. In most cases, there are only a few key influences on your outcome of interest. If you understand your business, you can easily identify these vital few. In our example of the relationship between cross-selling skills (an HR deliverable) and cross-selling sales performance (a driver for the firm’s strategic goal of increased revenue growth), cross-selling sales performance may also be strongly influenced by the availability of timely product information to employees. Does this mean that the relationship between skills and performance might be contaminated by the influence of product information availability? No, if all employees have the same product information available to them. Yes, if product information availability varies with both employee skills and cross-selling performance. In this case, it doesn’t vary with either. On the other hand, if it turned out that, for some reason, employees with better skills also had better product information, then it would be more difficult to isolate the independent influence of skills on performance.

If it can be measured, it’s much less of a problem. So what can we do if there is another influence, such as product information availability, that we think might be confounding our estimate of the relationship between skills and employee performance? Fortunately, if you can measure this other influence—for example, if you can assess the level of product information availability—you can then use a variety of techniques to estimate the separate (or independent) effects of both skills and product information availability on employee performance.3 As a manager, you don’t need to be an expert in those techniques, but you should understand the circumstances under which they may have value.

All other causes are not created equal. A potential other cause becomes more of a concern when it affects both variables of interest. Think of this other cause as a joint influence. For example, product information availability might confound the relationship between skills and performance only when it can be shown to influence both. If it affects just cross-selling performance but does not vary with skills, then it won’t affect the estimated relationship. Likewise, if it varies with employee skills only but has no apparent effect on sales performance, it will not affect the relationship between the two variables.

You can account for joint influences. Clearly, the real challenge in measuring causal relationships lies in handling joint influences that you can’t measure. If you could measure them, you could control their confounding effects using statistical techniques. However, simply by understanding the logic behind your causal model and the basic principles of measurement, you will have a much better grasp of the magnitude of the problem and, in fact, whether there really is a problem.

IMPLEMENTING YOUR MEASUREMENT SYSTEM:
COMMON CHALLENGES

Now that you have an overview of the foundations of good measurement, let’s highlight some common problems managers encounter when they attempt to implement these ideas. These problems focus on the more technical challenges surrounding the implementation of these systems, rather than the organizational hurdles associated with change efforts in general. We leave a discussion of these latter challenges for chapter 8.

Out with the Old, In with the New

Much of the challenge surrounding the introduction of a more strategically focused measurement system involves the complexity of introducing any new IT system. Your current system and measures are comfortable, and changing them can prove expensive. This is particularly true for measurement systems that let you assess relationships as well as levels. In addition, managers tend to become very attached to the metrics they create, and we have frequently seen firms continue to use these legacy metrics long after they have become inappropriate. Unfortunately, as we’ve seen, there is probably an inverse relationship between the accessibility of your current measures and their value in a strategic measurement system. Recall our earlier argument: If your measures don’t fully capture the underlying organizational process or outcome that really drives strategy, they will have little value. This means you really have to understand the story of value creation in your organization and accurately measure the HR drivers in that process. This process takes time and resources, but if the organization isn’t willing to make that investment, it will have nothing more than “garbage in, garbage out.”

HR managers may often find themselves at the head of this change effort, and we discuss these challenges in some detail in chapter 8. But one of the first hurdles such managers might face is simply building consensus that such change is necessary. Although he was fortunate to have the strong support of CEO Arthur Martinez at Sears, Tony Rucci, the former vice president for administration, has observed that CHROs must learn to build their support wherever possible. In his experience, about two-thirds of the employees in any organization are going to be indifferent or actively opposed to such initiatives.4 He argues, however, that effective change comes from devoting time and energy to the one-third who support change rather than from attempting to convert the two-thirds who do not.

The Temptation to Measure It All

Don’t let the fact that it may be impossible to measure every relationship prevent you from making wise use of available data. For example, Sears was able to precisely express the relationships among employees, customers, and profitability in part because it is in the retail service industry. The causal link between front-line employees and profitability was not only relatively direct for this firm, it was also relatively short. In other words, there was not an overwhelming number of links between employee behaviors and financial performance.

In manufacturing or other industries where the links in the value chain are more complex and probably more numerous, HR managers may want to begin with easily measured relationships. For example, even if the larger company information system is unable to link new-product cycle time to customer satisfaction and ultimately profitability, establishing the first several links between the HR system and R&D cycle time would say a lot about HR’s strategic influence (see figure 5-4). By establishing even just the few links shown in the figure, HR managers could begin to talk about deliverables that make a difference in the business.

Figure 5-4 An Example of Establishing Links between the HR System and Performance Drivers in a Strategy Map


Matching Data to the Appropriate Level of Analysis

To measure relationships, you have to assess cause and effect at the same level of analysis. Examples of levels of analysis include the employee, team or group, project, unit, branch, division, and SBU. The problem is that HR measures might be available at just one level of analysis (the employee), while higher-order performance drivers, such as customer satisfaction, might be available only at the level of the unit or larger organizational division. Or, certain process or development measures might be available at the team or project level, but profitability is measured at a higher level.

This is where understanding the value-creation story comes in. If you can grasp how strategy is really implemented in your firm, you should be able to create parallel measures at the appropriate level of analysis. For example, an international package delivery service uses a complex “time in transit” index to measure operational performance at the level of the firm. This calculation means nothing at the level of the truck driver. However, at that individual level, “number of off-route miles” is one of many measures that cumulate to the “time in transit” index. HR decisions that influence “number of off-route miles,” such as training or reward strategies, have strategic value because this variable drives the ultimate “time in transit” index.

Alternatively, you may need to aggregate lower-level measures to higher levels of analysis. So, for example, if financial or customer satisfaction data are available only at the level of the unit or division, individual-level HR measures can be aggregated to that level. That is, you could cumulate the individual-level measures into a summary measure at the unit or division level—in this example, the mean of all individual employee measures could represent the “team” measure. Ultimately, you have to think about what you are going to do with the results and ask yourself whether the outcomes of a particular level of analysis will really give you the answers you need. Otherwise, you may be diverted by measurement convenience, at the expense of measurement effectiveness.
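With tabular data, this kind of roll-up is a one-line aggregation. A sketch using pandas, with hypothetical survey scores and unit labels:

```python
import pandas as pd

# Hypothetical individual-level survey scores tagged with a unit identifier.
df = pd.DataFrame({
    "unit": ["East", "East", "East", "West", "West"],
    "employee_satisfaction": [3.8, 4.1, 3.5, 2.9, 3.2],
})

# Aggregate individual scores to the unit level (here, the mean) so they
# can be matched against unit-level financial or customer measures.
unit_scores = df.groupby("unit")["employee_satisfaction"].mean().reset_index()
print(unit_scores)
```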

Separating Leading from Lagging Indicators

You can logically distinguish leading from lagging indicators as you develop a causal model of your firm’s strategy implementation process. However, to identify and quantify relationships within the model, you need to know more than just that “HR” is a leading variable and “customer satisfaction” is a lagging variable. Accurately gauging the relationship between the two requires some sense of the magnitude of the time lag between changes in the leading indicator and subsequent changes in the lagging indicator. Don’t worry about calculating an exact figure for the delay, but do understand the implications of leads and lags when developing your measurement system. The key is to collect measures over multiple time periods, so that you can evaluate the relationship between HR at time T-2 and performance driver x at time T+1. You will probably have to collect some data more often than you have in the past. Employee surveys, for example, have little value when collected only annually.
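One simple way to explore this lead-lag structure is to correlate the HR measure with the performance driver shifted forward by varying numbers of periods and see where the association peaks. A sketch with invented quarterly data for a single unit:

```python
import pandas as pd

# Hypothetical quarterly series: an HR measure (engagement) and a
# downstream performance driver (customer retention, in percent).
df = pd.DataFrame({
    "engagement": [3.1, 3.3, 3.2, 3.6, 3.8, 3.7, 4.0, 4.1],
    "retention":  [80,  81,  80,  82,  84,  85,  87,  88],
})

# Correlate engagement at time t with retention k quarters later.
# The lag with the strongest association hints at how long the
# effect takes to show up in the lagging indicator.
for k in range(4):
    corr = df["engagement"].corr(df["retention"].shift(-k))
    print(f"lag {k} quarters: r = {corr:.2f}")
```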

SUMMARY: THINKING STRATEGICALLY
ABOUT MEASUREMENT

Thinking strategically about measurement means understanding whether the measurement system you are considering will provide you with the kinds of information that will help you manage the HR function strategically. This lesson is the same theme we’ve been reinforcing throughout the entire book. In addition, “think top down, not bottom up” should guide the technical decisions underlying your measurement system. Understanding the value-creation process and developing construct-valid measures of that process form a “top-down” approach. Starting with available measures and making the best of a bad situation is a “bottom-up” approach that in most cases will be a waste of time and, in the long run, will only undermine HR’s strategic credibility. This chapter should provide you with the essential principles of measurement that will enable you to move beyond the limits of the “best available” approach. In the next chapter, we’ll apply these principles to the problem of measuring HR alignment with the firm’s strategy implementation system.
