Chapter 9. The Economic Value of Job Performance

Consider this single question: Where would a change in the availability or quality of talent have the greatest impact on the success of your organization? Talent pools that meet this standard are known as “pivotal” talent pools. For example, at Xilinx, the world’s largest manufacturer of programmable logic chips, the ability to innovate, to develop new products, is key to the firm’s success in the marketplace. Engineers are therefore a pivotal talent pool, for as CEO Wim Roelandts is fond of saying, “De-motivated engineers don’t create breakthrough products.”[1]

In defining pivotal talent, an important distinction is often overlooked.[2] That distinction is between average value and variability in value, something that utility analysis explicitly recognizes. When strategy writers describe critical jobs or roles, they typically emphasize the average level of value (for example, the general importance, customer contact, uniqueness or power of certain jobs). Yet, variation interacts with average value to identify the talent where HR practices can have the greatest effect. Hence, a key question for managers is not which talent has the greatest average value, but rather, in which talent pools does performance variation create the biggest strategic impact?

“Impact” identifies the relationship between improvements in organization and talent performance, and sustainable strategic success. The pivot-point is where differences in performance most affect success. Identifying pivot-points often requires digging deeply into organization- or unit-level strategies to unearth specific details about where and how the organization plans to compete, and about the supporting elements that will be most vital to achieving that competitive position. These insights identify the areas of organization and talent that make the biggest difference in the strategy’s success.

Pivotal Talent at Disney Theme Parks

Consider a Disney theme park. Suppose we ask the question the usual way: What is the important talent for theme-park success? What would you say? There is always a variety of answers, and they always include the characters. Indeed, characters such as the talented people inside the Mickey Mouse costumes are very important. A decision science, however, focuses on pivot-points. Consider what happens when we frame the question differently, in terms of impact: Where would an improvement in the quality of talent and organization make the biggest difference in our strategic success? Answering that question requires looking further to find the strategy pivot-points that illuminate the talent and organization pivot-points.

One way to find the pivot-points in processes is to look for constraints. These are like bottlenecks in a pipeline: If you relieve a constraint, the entire process works better. For a Disney theme park, a key constraint is the number of minutes a guest spends in the park. Disney must maximize the number of “delightful” minutes. Disneyland has 85 acres of public areas, many different “lands,” and hundreds of small and large attractions. Helping guests navigate, even delighting them as they navigate, defines how Disney deals with this constraint. Notice how this takes the customer-delight strategy and makes it much more specific by identifying a pivotal process that supports it.

Figure 9-1 applies this concept to two talent pools in the Disney theme park: Mickey Mouse and the park sweeper.

Figure 9-1. Performance-yield curves for sweepers vs. Mickey Mouse.

Source: John W. Boudreau and Peter M. Ramstad (2007). Beyond HR: The New Science of Human Capital. Boston: Harvard Business School Publishing.

Mickey Mouse is important, but not necessarily pivotal. The top line represents the performance of the talent in the Mickey Mouse role. The curve is very high in the diagram because performance by Mickey Mouse is very valuable. However, the variation in value between the best performing Mickey Mouse and the worst performing Mickey Mouse is not that large. In the extreme, if the person in the Mickey Mouse costume engaged in harmful customer interactions, the consequences would be strategically devastating. That is shown by the very steep downward slope at the left. That’s why the Mickey Mouse role has been engineered to make such errors virtually impossible. The person in the Mickey Mouse costume is never seen, never talks, and is always accompanied by a supervisor who manages the guest encounters and ensures that Mickey doesn’t fall down, get lost, or take an unauthorized break. Because the position is so well engineered, there is also little payoff to investing in improving the performance of Mickey Mouse, although there is significant payoff in making sure Mickey Mouse meets the high standards of performance, so that even the “worst” Mickey Mouse is excellent.

If the talent pivot-point for the guest-experience process isn’t characters, what is it? When a guest has a problem, employees in accessible roles, such as park sweepers and store clerks, are most likely to be nearby, so guests approach them. People seldom ask Cinderella where to buy a disposable camera, but hundreds a day will ask the street sweeper! The lower curve in Figure 9-1 represents sweepers. The sweeper curve has a much steeper slope than Mickey Mouse because variation in sweeper performance creates a greater change in value. Disney sweepers have the opportunity to make adjustments to the customer-service process on the fly, reacting to variations in customer demands, unforeseen circumstances, and changes in the customer experience. These are things that make pivotal differences in Disney’s strategy to be the “Happiest Place on Earth.” To be sure, these pivot-points are embedded in architecture, creative settings, and the brand of Disney magic. Alignment is key. In fact, it is precisely because of this holistic alignment that interacting with guests in the park is a pivotal role, and the sweeper plays a big part in that role. At Disney, sweepers are actually frontline customer representatives with brooms in their hands.[3]

Logic: Why Does Performance Vary Across Jobs?

Performance is more (or less) variable across jobs for two main reasons.[4] One of these is the nature of the job, or the extent to which it permits individual autonomy and discretion. For example, when job requirements are specified rigidly, as in some fast-food restaurants, important differences in ability or motivation have less noticeable effects on performance. If one’s job is to cook French fries in a restaurant, and virtually all the variables that can affect the finished product are preprogrammed—the temperature of the oil in which the potatoes are fried, the length and width of the fries themselves, the size of a batch of fries, and the length of time that the potatoes are fried—there is little room for discretion. That is the objective, of course: to produce uniform end products. As a result, the variability in performance across human operators (what the utility analysis formulas in Chapter 8, “Staffing Utility: The Concept and Its Measurement,” symbolized as SDy) will be close to zero.

On the other hand, the project leader of an advertising campaign or a salesperson who manages all the accounts in a given territory has considerable discretion in deciding how to accomplish the work. Variation in individual abilities and motivation can lead to large values of SDy in those jobs, both in absolute terms and relative to jobs that permit less autonomy and discretion. Empirical evidence shows that SDy increases as a function of job complexity.[5] Hence, as jobs become more complex, it becomes more and more difficult to specify precisely the procedures that should be used to perform them. As a result, differences in ability and motivation become more important determinants of the variability of job performance.

A second factor that influences the size of SDy is the relative value to the organization of variations in performance. There are some jobs in which performance differences are vital to the successful achievement of the strategic goals of an organization (for example, software engineers who design new products for leading-edge software companies) and others that are less so (for example, employees who send out bills in an advertising agency). Even though there is variability in the performance of employees in the billing department, that variability is not as crucial to the success of the firm as is the variability in the performance of project leaders at the agency. In short, SDy is affected by the relative position of a job in the value chain of an organization.[6] Identifying the relative pivotalness of jobs requires considering the dimensions of Impact (sustainability, strategic success, pivotal resources and processes, and organization and talent pools).[7]

In the Disney example shown in Figure 9-1, the sweeper role has a higher SDy than Mickey Mouse because variations in sweeper performance (particularly when they respond to guests) cause a larger change in strategic value than variations in Mickey Mouse performance. Mickey Mouse is vital to the Disney value chain, however. So, it is important to understand the difference between average value of performance and pivotalness, the latter being reflected in SDy. High pivotalness and high average value often occur together, but not always.

Graphic depictions of performance-yield curves, such as the one in Figure 9-1, can help to identify where decisions should focus on achieving a minimum standard level of performance (like billers in an advertising agency or cooking French fries in a fast-food restaurant) versus improving performance (like sweepers in a theme park or software designers). It also provides a way to think about the risks and returns to performance at different levels. It helps people avoid making decisions based on well-meaning, but rudimentary rules, such as “get the best person in every job.”[8]

In terms of measurement, the value of variation in employee performance is an important variable that determines the likely payoff of investments in HR programs. Most HR programs are designed to improve performance. All things equal, such programs will have higher payoff when they are directed at organizational areas where performance variation has a large impact on processes, resources, and ultimately, strategic success. A 15 percent improvement in performance is not equally valuable everywhere. Ideally, we would like to have a measure that captured the monetary value of different performance levels, so that we could translate performance improvements into monetary values. If we could do that, we could measure whether the performance improvement we expect from a program, such as more accurate selection, improved training, or more effective recruitment, will justify the cost of the program.

There is as yet no perfect measure of the value of performance variation, but a great deal of research has addressed the issue. We provide a guide to the most important findings in the later sections. First, we show how the value of performance variation (in the form of SDy) fits into the formulas for the utility, or monetary value, of staffing programs that we discussed in Chapter 8.

Analytics: The Role of SDy in Selection-Utility Analysis

As Equations 8-14 through 8-18 showed in Chapter 8, SDy (the monetary value of a one-standard-deviation difference in job or criterion performance) translates the improvement in work force quality from the use of a more valid selection procedure into economic terms. Without SDy, the effect of a change in criterion performance could be expressed only in terms of standard Z-score units. However, when the product rxy × Zs (the validity coefficient times the average standardized predictor score of those selected) is multiplied by the monetary-valued SDy, the gain is expressed in monetary units, which are more familiar to decision makers. Note again that we will often refer to dollar-valued performance, or use the dollar sign as a subscript, but the conclusions are valid for any other currency value, such as euros, yen, or Swiss francs.
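To make the role of SDy concrete, here is a minimal Python sketch of that logic. The function name and every figure in it are ours, chosen purely for illustration; it simply mirrors the one-cohort form of the Chapter 8 utility equations.

```python
# Minimal sketch of how SDy converts a validity-based gain into money.
# All numbers are hypothetical illustrations, not data from the chapter.

def utility_gain(n_selected, validity, mean_std_predictor_score,
                 sd_y, cost_per_applicant, n_applicants):
    """Approximate one-year utility gain from a selection procedure.

    Follows the general logic of Equations 8-14 through 8-18:
    gain = N_selected * r_xy * Z_s (average standardized predictor
    score of selectees) * SD_y, minus total testing cost.
    """
    benefit = n_selected * validity * mean_std_predictor_score * sd_y
    cost = cost_per_applicant * n_applicants
    return benefit - cost

# Hypothetical program: hire 50 of 500 applicants with a test of
# validity .40, average standardized score of selectees 0.80,
# SDy of $35,000, and a $50 per-applicant testing cost.
print(utility_gain(50, 0.40, 0.80, 35_000, 50, 500))  # 535000.0
```

Without the SDy term, the quantity in parentheses would remain in Z-score units; multiplying by SDy is what turns it into currency.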

Most parameters of the general utility equation can be obtained from records, such as the number of people tested, the cost of testing, and the selection ratio, but SDy usually cannot. Traditionally, SDy has been the part of the utility equation most difficult to obtain.[9] Originally, SDy was estimated using complicated cost-accounting methods that are both costly and time-consuming. Those procedures involve first costing out the dollar value of the job behaviors of each employee,[10] and then computing the standard deviation of these values. The complexity of those approaches led to newer approaches that rely on estimates from knowledgeable persons. Our next section identifies some alternative approaches for estimating SDy.

Measures: Estimating the Monetary Value of Variations in Job Performance (SDy)

In general, there are two types of methods for estimating the standard deviation of job performance in monetary terms. The first is cost accounting, which uses accounting procedures to estimate the economic value of the products or services produced by each employee in a job or class of jobs, and then calculates the variation in that value across individuals. The second method is sometimes called “behavioral,” and combines the judgments from knowledgeable people about differences in the value of different performance levels. There are several alternative judgment-based approaches. Table 9-1 lists the cost-accounting and judgment-based alternatives.

Table 9-1. Alternative Approaches for Estimating SDy

Cost accounting: Calculate the accounting value of each person’s accounting outcomes, such as production or sales, and calculate the standard deviation of those values across individuals.

Judgment-Based Approaches

40 percent rule: Multiply the average total remuneration of the group by 40 percent.

Global estimation: Ask experts to estimate the value of performance at the average, 85th percentile, and 15th percentile of the performance distribution, and calculate the differences between their estimates.

CREPID: Identify the individual elements of performance, weight them by contribution to economic value, multiply average remuneration by the importance weight of each element, rate individual performance on each element, multiply performance by monetary value for each dimension for each individual, and sum to get a monetary value for each individual. Calculate the standard deviation of those values across individuals.

System effectiveness technique: Estimate the percentage difference in performance effectiveness between a superior and an average performer, and multiply that percentage by the cost of the system and capital used on the job.

Superior equivalents technique: Estimate how many fewer employees would be required to achieve a certain level of performance if the employees were one standard deviation better. Calculate the average cost of employees, and multiply that by the difference in the number of superior employees required, compared to the number of average employees, to determine the employment-cost savings of having superior versus average employees.

The remainder of the chapter presents each of these approaches in more detail.

Cost-Accounting Approach

If you could determine the economic value of each employee’s performance, you could calculate directly the standard deviation of performance value simply by taking the standard deviation of those values. That’s the idea behind the cost-accounting approach. In a job that is purely sales, it may be reasonable to say that each person’s sales level, minus the cost of the infrastructure and remuneration he or she uses and receives, would be a reasonable estimate of the economic value of his or her performance. Unfortunately, aside from sales positions, most cost-accounting systems are not designed to calculate the economic value of each employee’s performance, so adapting cost accounting to that purpose proved complex for most jobs.

Cost-accounting estimates of performance value require considering elements such as the following:[11]

  • Average value of production or service units.

  • Quality of objects produced or services accomplished.

  • Overhead, including rent, light, heat, cost depreciation, or rental of machines and equipment.

  • Errors, accidents, spoilage, wastage, damage to machines or equipment due to unusual wear and tear, and so on.

  • Such factors as appearance, friendliness, poise, and general social effectiveness in public relations. (Here, some approximate value would have to be assigned by an individual or individuals having the required responsibility and background.)

  • The cost of time spent by other employees and managers. This would include not only the time of supervisors, but also that of other workers.

Researchers in one study attempted to apply these ideas to the job of route salesperson in a Midwestern soft-drink bottling company that manufactures, merchandises, and distributes nationally known products.[12] This job was selected for two reasons: there was a large number of individuals in the job, and variability in performance levels had a direct impact on output. Route salespersons were paid a small weekly base wage, plus a commission on each case of soft drink sold. The actual cost-accounting method to compute SDy involved eight steps.

  1. Output data on each of the route salespersons were collected from the records of the organization on the number of cases sold and the size and type of package, for a one-year period (to eliminate seasonality).

  2. The weighted average sales price per case unit (SPu) was calculated using data provided by the accounting department.

  3. The variable cost per case unit (VCu) was calculated and subtracted from the average price. Variable costs are costs that vary with the volume of sales, such as direct labor, direct materials (syrup cost, CO2 gas, crowns, closures, and bottles), variable factory overhead (state inspection fees, variable indirect materials, variable indirect labor), and selling expenses (the route salesperson’s commission).

  4. Contribution margins per case unit (CMu) were calculated as the sales price per unit minus the variable cost per unit.

  5. The contribution margins calculated in step 4 were multiplied by the output figures (step 1), producing a total one-year dollar-valued contribution margin for each route salesperson. This figure represents the total amount (in dollars and cents) contributed toward fixed costs and profit, by each route salesperson.

  6. Not all differences in sales were assumed to be due to differences in route salespersons’ performance. Other factors, such as the type of route, partially determined sales, so it was important to remove these influences. To accomplish this, the sales of each route were partitioned into two categories: home market and cold bottle. Home market represents sales in large supermarkets and chain stores, in which the product is purchased and taken home to consume. Cold bottle represents sales, such as those from small convenience stores and vending operations, in which the product is consumed on location. Top management agreed that home-market sales are influenced less by the efforts of the route salesperson, but the route salesperson exercises greater influence over the relative sales level in the cold-bottle market, because the route salesperson has a greater degree of flexibility in offering price incentives, seeking additional display space, and so forth.

    The critical question was, “How much influence does the route salesperson have in each of the respective sectors?” The percentage of sales or contributions attributable to the efforts of the route salesperson was determined by a consensus of six top managers, who estimated the portions of home-market sales and cold-bottle sales attributable to the efforts of the route salesperson at 20 percent and 30 percent, respectively.

  7. The percentages calculated in step 6 were multiplied by the total contribution margins calculated in step 5, yielding a total contribution margin for each route salesperson. An example calculation is shown in Figure 9-2. This served as the cost-accounting-based estimate of each route salesperson’s worth to the organization.

    Figure 9-2. Sample of the total contribution attributable to route Salesperson A (RSA) using cost-accounting procedures.

  8. The standard deviation of these values was the cost-accounting-based estimate of SDy.

This approach, called contribution costing, is generally not used for external-reporting purposes, but it is commonly recommended for internal, managerial-reporting purposes.[13]
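To see how the eight steps fit together, here is a rough Python sketch of the contribution-costing logic. The attribution percentages (20 and 30 percent) follow the text; the prices, case volumes, and salesperson labels are hypothetical, and only the structure, contribution margin times cases, weighted by attribution, then a standard deviation across people, follows the steps above.

```python
import statistics

# Sketch of the eight-step contribution-costing procedure for route
# salespersons. Prices, costs, and case volumes are hypothetical.

SALES_PRICE_PER_CASE = 6.00                      # SPu (step 2)
VARIABLE_COST_PER_CASE = 4.50                    # VCu (step 3)
CM_PER_CASE = SALES_PRICE_PER_CASE - VARIABLE_COST_PER_CASE  # CMu (step 4)

# One year of case sales per salesperson, split by market (steps 1 and 6).
sales = {
    "RS_A": {"home_market": 40_000, "cold_bottle": 15_000},
    "RS_B": {"home_market": 55_000, "cold_bottle": 10_000},
    "RS_C": {"home_market": 30_000, "cold_bottle": 22_000},
}

# Management consensus on the share of each market's contribution that is
# attributable to the salesperson's own efforts (step 6).
ATTRIBUTION = {"home_market": 0.20, "cold_bottle": 0.30}

def salesperson_value(cases_by_market):
    """Steps 5-7: contribution margin times cases, weighted by attribution."""
    return sum(CM_PER_CASE * cases * ATTRIBUTION[market]
               for market, cases in cases_by_market.items())

values = [salesperson_value(v) for v in sales.values()]
sd_y = statistics.stdev(values)                  # step 8: the SDy estimate
print(round(sd_y, 2))
```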

The Estimate of SDy

The cost-accounting-based procedure produced an estimate of SDy of $30,500 (all figures in 2006 dollars), with an average value of job performance of $86,483. Estimates of average worth ranged from $25,067 to $222,087. These values were skewed positively (Q3 − Q2 = $27,159, greater than Q2 − Q1 = $11,783), meaning that the difference between high and average was greater than the difference between average and low. This makes sense, because the values were calculated for experienced job incumbents, among whom very low performers would have been eliminated.

Cost-accounting systems focus on determining the costs and benefits of units of product, not units of performance, and thus require a good deal of translation to estimate performance value. So, although the accounting data on which the estimates are based is often trusted by decision makers, the array of additional estimates required often creates doubt about the objectivity and reliability of the cost-accounting estimates. Over the past few decades, several alternative approaches for estimating the economic value of job performance have been developed that require considerably less effort than the cost-accounting method. Comparative research has made possible some general conclusions about the relative merits of these methods.

The 40 Percent Rule

Some researchers have recommended estimating SDy as 40 percent of average salary.[14] They noted that wages and salaries average 57 percent of the value of goods and services in the U.S. economy, implying that 40 percent of average salary is the same as 22.8 percent (0.40 × 0.57), or roughly 20 percent, of the average value of production. Thus, they suggested using 40 percent of salary to estimate SDy.[15] Expressed instead as roughly 20 percent of the average value of output (a ratio symbolized as SDp), the same rule states a one-standard-deviation performance difference as a percentage increase in output.

A summary of the results of 68 studies that measured work output or work samples found that low-complexity jobs, such as routine clerical or blue-collar work, had SDp values that averaged 15 percent of output. Medium-complexity jobs, such as first-line supervisors, skilled crafts, and technicians, had average SDp values of 25 percent, and high-complexity jobs, such as managerial/professional and complex technical jobs, had average SDp values of 46 percent. For life-insurance sales jobs, SDp was very large (97 percent of average sales), and it was 39 percent for other sales jobs.[16] There are sizable differences in the amounts of performance variation across jobs (recall the sweeper and Mickey Mouse comparison), and the 40 percent rule may underestimate or overestimate them.
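These rules of thumb are simple enough to sketch directly. In the Python fragment below, the complexity percentages follow the averages just cited, while the salary and output figures are hypothetical.

```python
# Rule-of-thumb SDy estimates. The complexity percentages follow the
# study averages cited in the text; the dollar inputs are hypothetical.

SDP_BY_COMPLEXITY = {"low": 0.15, "medium": 0.25, "high": 0.46}

def sdy_40_percent_rule(average_salary):
    """SDy approximated as 40 percent of average salary."""
    return 0.40 * average_salary

def sdy_from_sdp(average_output_value, complexity):
    """SDy as SDp (a percentage of output) times the average value of output."""
    return SDP_BY_COMPLEXITY[complexity] * average_output_value

print(sdy_40_percent_rule(60_000))        # 24000.0
print(sdy_from_sdp(150_000, "medium"))    # 37500.0
```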

It has been suggested that SDp might be directly estimated from the complexity of the job, making estimation easier. However, SDp is expressed as the percentage of average output, not in monetary values.[17] Determining whether such increases offset monetary costs or whether to invest resources in different jobs requires monetary values. One could estimate the average value of output in a job, but that would incorporate many of the same difficulties as the cost-accounting approach described earlier in this chapter. Hence other methods have evolved to estimate the monetary value of variability in job performance.

No estimate is perfect, but fortunately utility estimates need not be perfectly accurate, just as with any estimate of business effects. For decisions about selection procedures, only errors large enough to lead to incorrect decisions are of any consequence. Moreover, it is precisely the jobs with the largest SDy values, often those involving leadership, management, or intellectual capital and offering ample opportunity for individual autonomy and discretion, that are handled least well by cost-accounting methods. So, subjective estimates, to one degree or another, are virtually unavoidable. Next, we examine some of the most prominent methods for gathering judgments that can provide SDy estimates.

Global Estimation

The global-estimation procedure for obtaining rational estimates of SDy is based on the following reasoning: If the monetary value of job performance is distributed as a normal curve, the difference between the monetary value of an employee performing at the 85th percentile (one standard deviation above average) versus an employee performing at the 50th percentile (average), equals SDy.[18]

In one study, budget-analyst supervisors were asked to estimate both 85th and 50th percentile values, basing their estimates on the costs of having an outside firm provide the services.[19] SDy was calculated as the average difference across the supervisors. Averaging across multiple raters may control for the idiosyncratic tendencies, biases, and random errors of any individual. In the budget-analyst example, the standard error of the SDy estimates across judges was $3,837, implying that the interval $32,482 to $45,143 should contain 90 percent of such estimates (all results expressed in 2006 dollars). Thus, to be extremely conservative, one could use $32,482, which is statistically 90 percent likely to be less than the actual value.
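Here is a minimal Python sketch of that logic with hypothetical judge estimates: SDy is the mean (85th minus 50th) difference across judges, and a conservative figure is taken roughly 1.645 standard errors below it, in the spirit of the 90 percent interval just described.

```python
import statistics

# Global-estimation sketch: each judge estimates the value of performance
# at the 85th and 50th percentiles; SDy is the average difference.
# The judge estimates below are hypothetical.

estimates = [
    # (value at 85th percentile, value at 50th percentile) per judge
    (120_000, 85_000),
    (110_000, 80_000),
    (140_000, 95_000),
    (100_000, 75_000),
]

differences = [p85 - p50 for p85, p50 in estimates]
sd_y = statistics.mean(differences)
se = statistics.stdev(differences) / len(differences) ** 0.5

# A conservative figure: roughly 1.645 standard errors below the mean,
# in the spirit of the interval described in the text.
conservative_sd_y = sd_y - 1.645 * se
print(round(sd_y), round(conservative_sd_y))
```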

An Example of Global SDy Estimates for Computer Programmers

What follows is a detailed explanation of how the global-estimation procedure has been used to estimate SDy. The application to be described used supervisors of computer programmers in ten federal agencies.[20] To test the hypothesis that dollar outcomes are normally distributed, the supervisors were asked to estimate values for the 15th percentile (low-performing programmers), the 50th percentile (average programmers), and the 85th percentile (superior programmers). The resulting data thus provide two estimates of SDy. If the distribution is approximately normal, these two estimates will not differ substantially in value. Here is an excerpt of the instructions presented to the supervisors:[21]

The dollar utility estimates we are asking you to make are critical in estimating the relative dollar value to the government of different selection methods. In answering these questions, you will have to make some very difficult judgments. We realize they are difficult and that they are judgments or estimates. You will have to ponder for some time before giving each estimate, and there is probably no way you can be absolutely certain your estimate is accurate when you do reach a decision. But keep in mind...[that]...your estimates will be averaged in with those of other supervisors of computer programmers. Thus errors produced by too high and too low estimates will tend to be averaged out, providing more accurate final estimates.

Based on your experience with agency programmers, we would like for you to estimate the yearly value to your agency of the products and services produced by the average GS 9-11 computer programmer. Consider the quality and quantity of output typical of the average programmer and the value of this output. In placing an overall dollar value on this output, it may help to consider what the cost would be of having an outside firm provide these products and services.

Based on my experience, I estimate the value to my agency of the average GS 9-11 computer programmer at _________ dollars per year.

We would now like for you to consider the “superior” programmer. Let us define a superior performer as a programmer who is at the 85th percentile. That is, his or her performance is better than that of 85% of his or her fellow GS 9-11 programmers, and only 15% turn in better performances. Consider the quality and quantity of the output typical of the superior programmer. Then estimate the value of these products and services. In placing an overall dollar value on this output, it may again help to consider what the cost would be of having an outside firm provide these products and services.

Based on my experience, I estimate the value to my agency of a superior GS 9-11 computer programmer to be _________ dollars per year.

Finally, we would like you to consider the “low-performing” computer programmer. Let us define a low-performing programmer as one who is at the 15th percentile. That is, 85% of all GS 9-11 computer programmers turn in performances better than the low-performing programmer, and only 15% turn in worse performances. Consider the quality and quantity of the output typical of the low-performing programmer. Then estimate the value of these products and services. In placing an overall dollar value on this output, it may again help to consider what the cost would be of having an outside firm provide these products and services.

Based on my experience, I estimate the value to my agency of the low-performing GS 9-11 computer programmer at _________ dollars per year.

The wording of these questions was developed carefully and pretested on a small sample of programmer supervisors and personnel psychologists. None of the programmer supervisors who returned questionnaires in the study reported any difficulty in understanding the questionnaire or in making the estimates. Participation in the study was completely voluntary. Of 147 questionnaires distributed, 105 were returned (all in usable form), for a return rate of 71.4 percent.

The two estimates of SDy were similar. The mean estimated difference in value (in 2006 dollars) of yearly job performance between programmers at the 85th and 50th percentiles in job performance was $37,249 (SE = $5,732). The difference between the 50th and 15th percentiles was $34,110 (SE = $3,546). The difference of $3,139 was roughly 8 percent of each of the estimates and was not statistically significant. Thus, the hypothesis that computer programmer productivity in dollars is normally distributed could not be rejected. The distribution was at least approximately normal. The average of these two estimates, $35,679, was the final SDy estimate.
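The two-sided check can be sketched the same way. The supervisor estimates below are hypothetical, but the steps mirror the programmer example: compute the (85th − 50th) and (50th − 15th) differences, and if they are similar, treat the distribution as approximately normal and average them.

```python
import statistics

# Two-sided global-estimation check, as used with the programmer data.
# The per-supervisor estimates below are hypothetical.

supervisor_estimates = [
    # (15th percentile, 50th percentile, 85th percentile) per supervisor
    (55_000, 90_000, 130_000),
    (60_000, 95_000, 125_000),
    (50_000, 85_000, 120_000),
]

upper_diffs = [p85 - p50 for _, p50, p85 in supervisor_estimates]
lower_diffs = [p50 - p15 for p15, p50, _ in supervisor_estimates]

upper = statistics.mean(upper_diffs)   # analogue of the $37,249 estimate
lower = statistics.mean(lower_diffs)   # analogue of the $34,110 estimate
sd_y = (upper + lower) / 2             # final estimate, as in the text

print(upper, lower, round(sd_y))
```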

Modifications to the Global-Estimation Procedure

Later research showed that the global-estimation procedure produces downwardly biased estimates of utility.[22] This appears to be so because most judges equate average value with average wages, despite the fact that the value of the average employee’s output, as sold, is larger than average wages. However, estimates of the coefficient of variation of job performance (SDy/Y-bar, or SDp) calculated from supervisory estimates of the three percentiles (50th, 85th, and 15th) were quite accurate. This led the same authors to propose a modification of the original global-estimation procedure.[23] The modified approach estimates SDy as the product of an estimate of the coefficient of variation (SDp) and an objective estimate of the average value of employee output (Y-bar). In using this procedure, one first estimates Y-bar and SDp separately, and then multiplies these values to estimate SDy.

SDp can be estimated in two ways: by using the average value found for jobs of similar complexity,[24] or by dividing supervisory estimates of SDy by supervisory estimates of the value of performance of the 50th percentile worker. Researchers tested the accuracy of this method by calculating supervisory estimates of SDp from 11 previous studies of SDy estimation, and then comparing these estimates with objective SDp values.[25] Across the 11 studies, the mean of the supervisory estimates was 44.2 percent, which was very close to the actual output-based mean of 43.9 percent. The correlation between the two sets of values was .70. These results indicate that supervisors can estimate quite accurately the magnitude of relative (percent) differences in employee performance.

With respect to calculating the average revenue value of employee output (Y-bar), the researchers began with the assumption that Y-bar is equal to total sales revenue divided by the total number of employees.[26] However, total sales revenue is based on contributions from many jobs within an organization. Based on the assumption that the contribution of each job to the total revenue of the firm is proportional to its share of the firm’s total annual payroll, they calculated an approximate average revenue value for a particular job (A) as follows.[27]

Equation 9-1. Payroll share of job A = (Number of employees in job A × Average pay in job A) / Total annual payroll

Equation 9-2. Y-bar for job A = (Total sales revenue × Payroll share of job A) / Number of employees in job A

SDy then can be estimated as Y-bar for job A multiplied by SDp, where SDp is computed using one of the two methods described earlier. An additional advantage of estimating SDy from estimates of SDp is that it is not necessary to obtain estimates of SDy from dollar-value judgments.
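A brief Python sketch of the modified procedure follows, using the payroll-share logic of Equations 9-1 and 9-2; the revenue, payroll, headcount, and pay figures are hypothetical.

```python
# Modified global-estimation sketch: estimate the average revenue value of
# output for a job (Y-bar for job A) from revenue and the job's payroll
# share, then multiply by SDp. All figures below are hypothetical.

def average_output_value(total_revenue, total_payroll, n_in_job, avg_pay_in_job):
    payroll_share = (n_in_job * avg_pay_in_job) / total_payroll   # Equation 9-1
    return (total_revenue * payroll_share) / n_in_job             # Equation 9-2

def sdy_modified(total_revenue, total_payroll, n_in_job, avg_pay_in_job, sd_p):
    return average_output_value(total_revenue, total_payroll,
                                n_in_job, avg_pay_in_job) * sd_p

# Hypothetical firm: $80M revenue, $30M payroll, 100 incumbents paid
# $60,000 each, and an SDp of 0.25 (a medium-complexity job).
print(sdy_modified(80_000_000, 30_000_000, 100, 60_000, 0.25))   # 40000.0
```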

Although the global-estimation procedure is easy to use and provides fairly reliable estimates across supervisors, we offer several cautions regarding the logic and analytics on which it rests.

Empirical findings support the assumption of linearity between supervisory performance and annual worth (r = .67),[28] but dollar-valued job performance outcomes are often not normally distributed.[29] Hence, comparisons of estimates of SDy at the (85th–50th) and (50th–15th) percentiles may not be meaningful.

The original procedure lacks face validity (that is, it does not appear to measure what it purports to measure) because we do not know the basis for each supervisor’s estimates. Using general rules of thumb, such as job complexity, has merit, but can be enhanced by using a more well-developed framework, such as the “actions and interactions” component of the HC BRidge model to identify and clarify underlying relationships.[30]

Supervisors often find estimating the dollar value of various percentiles in the job performance distribution rather difficult. Moreover, the variation among each rater’s SDy estimates is usually as large as or larger than the average SDy estimate. In fact, one study found both the level of agreement among raters and the stability over time of their SDy estimates to be low.[31]

To improve consensus among raters, two strategies have been used:

  • Provide an anchor for the 50th percentile.[32]

  • Have groups of raters provide consensus judgments of different percentiles.

Despite these problems, several studies have reported close correspondence between estimated and actual standard deviations when sales dollars[33] or cost-accounting estimates were used.[34] However, when medical claims cost data was used, the original global estimation procedure overestimated the actual value of SDy by 26 percent.[35]

The methods discussed so far require that we assume that the monetary value of job performance is distributed normally, and they require experts to make an overall estimate of value across often widely varying job performance elements. An alternative procedure that makes no assumption regarding the underlying normality of the performance distribution, and that identifies the components of each supervisor’s estimate, is described next.

The Cascio-Ramos Estimate of Performance in Dollars (CREPID)

The Cascio-Ramos estimate of performance in dollars (CREPID) was developed under the auspices of the American Telephone and Telegraph Company, and was tested on 602 first-level managers in a Bell operating company.[36] The rationale underlying CREPID is as follows. Assuming an organization’s compensation program reflects current market rates for jobs, the economic value of each employee’s labor is reflected best in his or her annual wage or salary. As we discussed earlier in this chapter, this is probably a low estimate, as the average value produced by an employee must be more than average wages to offset the costs of wages, overhead and necessary profit. Later, we will see that this assumption indeed leads to conservatively low estimates of SDy. CREPID breaks down each employee’s job into its principal activities, assigns a proportional amount of the annual salary to each principal activity, and then requires supervisors to rate each employee’s job performance on each principal activity. The resulting ratings then are translated into estimates of dollar value for each principal activity. The sum of the dollar values assigned to each principal activity equals the economic value of each employee’s job performance to the company. Let us explain each of these steps in greater detail.

  1. Identify principal activities. To assign a dollar value to each employee’s job performance, first we must identify what tasks each employee performs. In many job analysis systems, principal activities (or critical work behaviors) are identified expressly. In others, they can be derived, under the assumption that to be considered “principal” an activity should comprise at least 10 percent of total work time. To illustrate, let us assume that the job description for an accounting supervisor involves eight principal activities.

  2. Rate each principal activity in terms of time/frequency and importance. It has long been recognized that rating job activities simply in terms of the time or frequency with which each is performed is an incomplete indication of the overall weight to be assigned to each activity. For example, a nurse may spend most of the workweek performing the routine tasks of patient care. However, suppose the nurse must respond to one medical emergency per week that requires, on an average, one hour of his or her time. To be sure, the time/frequency of this activity is short, but its importance is critical. Ratings of time/frequency and importance should be expressed on a scale that has a zero point so that, in theory, complete absence of a property can be indicated. Research shows that simple 0–7 point Likert-type rating scales provide results that are almost identical to those derived from more complicated scales.[37]

  3. Multiply the numerical ratings for time/frequency and importance for each principal activity. The purpose of this step is to develop an overall relative weight to assign each principal activity. The ratings are multiplied. Thus, if an activity never is done, or if it is totally unimportant, the relative weight for that activity should be zero. The following illustration presents hypothetical ratings of the eight principal activities identified for the accounting supervisor’s job.

    Principal Activity   Time/Frequency   × Importance   = Total   Relative Weight
    1                          4.0              4           16.0         16.8
    2                          5.0              7           35.0         36.8
    3                          1.0              5            5.0          5.3
    4                          0.5              3            1.5          1.6
    5                          2.0              7           14.0         14.7
    6                          1.0              4            4.0          4.2
    7                          0.5              3            1.5          1.6
    8                          3.0              6           18.0         19.0
    Total                                                   95.0         100%

    After doing all the multiplication, sum the total ratings assigned to each principal activity (95 in the preceding example). Then, divide the total rating for each principal activity by the grand total to derive the relative weight for the activity (for example, 16 / 95 = 0.168, or 16.8 percent). Knowing each principal activity’s relative weight allows us to allocate proportional shares of the employee’s overall salary to each principal activity as is done in step 4.

  4. Assign dollar values to each principal activity. Take an average (or weighted average) annual rate of pay for all participants in the study (employees in a particular job class) and allocate it across principal activities according to the relative weights obtained in step 3.

    To illustrate, suppose the annual salary of each accounting supervisor is $50,000.

    Principal Activity   Relative Weight (%)   Dollar Value ($)
    1                           16.8                 8,400
    2                           36.8                18,400
    3                            5.3                 2,650
    4                            1.6                   800
    5                           14.7                 7,350
    6                            4.2                 2,100
    7                            1.6                   800
    8                           19.0                 9,500
    Total                                           50,000

  5. Rate performance on each principal activity on a 0–200 scale. Note that steps 1 through 4 apply to the job, regardless of who does that job. The next task is to determine how well each person in that job performs each principal activity. This is the performance appraisal phase. The higher the rating on each principal activity, the greater the economic value of that activity to the organization. CREPID uses a modified magnitude-estimation scale to obtain information on performance.[38] To use this procedure, a value (say 100) is assigned to a referent concept (for example, the average employee, one at the 50th percentile on job performance), and then all comparisons are made relative to this value. Discussions with operating managers indicated that, given current selection procedures, it is highly unlikely that even the very best employee is more than twice as effective as the average employee. Thus, a continuous 0–200 scale was used to rate each employee on each principal activity. The actual form used is shown in Figure 9-3. Managers reported that they found this format helpful and easy to use.

    Figure 9-3. Performance appraisal form used with CREPID.

  6. Multiply the point rating (expressed as a decimal number) assigned to each principal activity by the activity’s dollar value. To illustrate, suppose the following point totals are assigned to accounting supervisor C. P. Ayh.

    Principal Activity   Points Assigned   × Dollar Value of Activity ($)   = Net Dollar Value ($)
    1                          1.35                    8,400                      11,340.00
    2                          1.00                   18,400                      18,400.00
    3                          1.25                    2,650                       3,312.50
    4                          2.00                      800                       1,600.00
    5                          1.00                    7,350                       7,350.00
    6                          0.50                    2,100                       1,050.00
    7                          0.75                      800                         600.00
    8                          1.50                    9,500                      14,250.00

  7. Compute the overall economic value of each employee’s job performance by adding the results of step 6. In our example, the overall economic value of Mr. Ayh’s job performance is $57,902.50, or $7,902.50 more than he is being paid.

  8. Over all employees in the study, compute the mean and standard deviation of dollar-valued job performance. When CREPID was tested on 602 first-level managers at a Bell operating company, the mean of dollar-valued job performance was only $2,164 (3.4 percent) more than the average actual salary of all employees in the study. However, the standard deviation (SDy) was almost $22,000 (all figures in 2006 dollars), which was more than three and a half times larger than the standard deviation of the actual distribution of salaries. Such high variability suggests that supervisors recognized significant differences in performance throughout the rating process. As we noted earlier, this is precisely the type of situation in which investments in HR programs have the greatest payoff—when individual differences in job performance are high, and therefore the cost of error is substantial.

It is important to point out that CREPID requires only two sets of ratings from a supervisor:

  • A rating of each principal activity in terms of time/frequency and importance. (This is the job analysis phase.)

  • A rating of a specific subordinate’s performance on each principal activity. (This is the performance appraisal phase.)
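The CREPID arithmetic itself is compact enough to sketch in a few lines of Python. The fragment below uses the accounting-supervisor example above ($50,000 salary and C. P. Ayh’s ratings); the small difference from the text’s $57,902.50 arises because the text rounds the relative weights to one decimal before allocating salary.

```python
# CREPID sketch using the accounting-supervisor example from the text.

SALARY = 50_000

# (time/frequency rating, importance rating) for each principal activity (steps 2-3)
activity_ratings = [(4.0, 4), (5.0, 7), (1.0, 5), (0.5, 3),
                    (2.0, 7), (1.0, 4), (0.5, 3), (3.0, 6)]

# C. P. Ayh's 0-200 performance ratings, expressed as decimals (step 5)
performance = [1.35, 1.00, 1.25, 2.00, 1.00, 0.50, 0.75, 1.50]

products = [t * i for t, i in activity_ratings]            # step 3
total = sum(products)                                      # 95.0
weights = [p / total for p in products]                    # relative weights
dollar_values = [SALARY * w for w in weights]              # step 4
net_values = [dv * perf                                    # step 6
              for dv, perf in zip(dollar_values, performance)]

# Step 7: about $57,882 for Mr. Ayh; the text's $57,902.50 rounds the weights first.
print(round(sum(net_values), 2))

# Step 8 (not shown): repeat for every employee in the study, then take the
# mean and standard deviation of the resulting values to obtain SDy.
```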

Average Salary as a Proxy for the Economic Value of Output

The valuation base (average annual salary) used in the CREPID model has generated considerable discussion among researchers in this field.[39] Indeed, one study suggested that “in many jobs there is very little relationship between wages and output for individual employees.”[40] Others noted that as organizations have broadened pay ranges and job classes, job titles become less homogeneous, and the average pay for a job becomes a less specific proxy for value.[41] Yet, the logical link between employee variability and value is the essential element of SDy. Results also depend on how the judgment task is framed,[42] the purpose for which information is collected (staffing versus retention), and the information judges use to make estimates of SDy.[43] There probably is not a single value for SDy, and this makes it even more important to understand the assumptions and logic that raters use when generating estimates of value.

CREPID has the advantage of assigning each employee a specific value that can be analyzed explicitly for appropriateness and that may also provide a more understandable or credible estimate for decision makers. However, as noted earlier, CREPID assumes that average wage equals the economic value of a worker’s performance. This assumption is used in national income accounting to generate the GNP and labor-cost figures for jobs where output is not readily measurable (for example, government services). That is, the same value is assigned to both output and wages. Because this assumption does not hold in pay systems that are based on rank, tenure, or hourly pay rates, CREPID should not be used in these situations.[44]

System-Effectiveness Technique

This method was developed specifically for situations where individual salary is only a small percentage of the value of performance to the organization or of the equipment operated (for example, an army tank commander or a fighter pilot).[45] In essence, it calculates the proportional difference in system effectiveness between the average performer and someone who is one standard deviation better than average. It multiplies that value by the cost of the system, assuming that the superior performer achieves higher performance using the same cost, or that the superior performer achieves the same performance level at less cost. For example, suppose we estimate that a superior performer (one standard deviation better than average) is 20 percent better than an average performer, and that it costs $100,000 to run the system for a month. We multiply the 20 percent by $100,000 to get $20,000. The assumption is that the superior performer saves us $20,000 per month to achieve the same results, or that he or she achieves $20,000 more per month using the same cost of capital.

This approach distinguishes the standard deviation of performance in dollars, from the standard deviation of output units of performance (for example, number of hits per firing from an army tank commander). It is based on the following equation.

Equation 9-3. SD of performance in dollars = Cu × (SD of performance in output units / Y1)

Where Cu is the cost of the unit in the system. (It includes equipment, support, and personnel, rather than salary alone.) Y1 is the mean performance in output units.

Equation 9-3 indicates that the SD of performance in monetary units equals the cost per unit times the ratio of the SD of performance in output units to the initial mean level of performance, Y1. However, estimates from Equation 9-3 are appropriate only when the performance of the unit in the system is largely a function of the performance of the individual in the job.

Measures

To assess the standard deviation of performance in monetary units using the system-effectiveness technique, researchers collected data on U.S. Army tank commanders.[46] They obtained these data from technical reports of previous research and from an approximation of tank costs. Previous research indicated that meaningful values for the ratio of the SD of performance in output units to Y1 range from 0.2 to 0.5. Tank costs, consisting of purchase costs, maintenance, and personnel, were estimated to fall between $684,000 and $1.14 million per year (in 2006 dollars). For purposes of Equation 9-3, Cu was estimated at $684,000 per year, and the ratio of the SD of performance in output units to Y1 was estimated at 0.2. This yielded the following:

SD of performance in dollars = $684,000 × 0.2 = $136,800
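Equation 9-3 reduces to a single multiplication, sketched below in Python with the tank figures used above; the function name is ours, and the loop simply shows the rest of the 0.2 to 0.5 range.

```python
# Sketch of Equation 9-3: unit cost (Cu) times the ratio of the SD of
# performance in output units to mean output (Y1).

def sd_performance_dollars(unit_cost, sd_output_to_mean_ratio):
    return unit_cost * sd_output_to_mean_ratio

# Cu of $684,000 per year and ratios spanning the 0.2-0.5 range cited above.
for ratio in (0.2, 0.35, 0.5):
    print(ratio, sd_performance_dollars(684_000, ratio))
# The 0.2 case reproduces the $136,800 estimate shown above.
```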

Superior-Equivalents Technique

An alternative method, also developed by the same team of researchers for similar kinds of situations, is the superior-equivalents technique. It is somewhat like the global-estimation procedure, but with one important difference. Instead of using estimates of the dollar value of performance at the 85th percentile, the technique uses estimates of how many superior (85th percentile) performers would be needed to produce the output of a fixed number of average (50th percentile) performers. This estimate, combined with an estimate of the dollar value of average performance, provides an estimate of SDy.

The first step is to estimate the number (N85) of 85th percentile employees required to equal the performance of some fixed number (N50) of average performers. Where the value of average performance (V50) is known, or can be estimated, SDy may be estimated by using the ratio N50 / N85 times V50 to obtain V85, and then subtracting V50. That reduces as follows:

Equation 9-4. SDy = V85 − V50 = V50 × (N50 / N85) − V50

But, by definition, the total value of performance at a certain percentile is the product of the number of performers at that level times the average value of performance at that level, as follows:

Equation 9-5. Total value of performance at a given percentile = Number of performers at that level × Average value of performance at that level (so N85 × V85 = N50 × V50 when the two groups produce equivalent output)

Combining Equations 9-4 and 9-5 yields this:

Equation 9-6. SDy = V50 × [(N50 / N85) − 1]

Measures

The researchers developed a questionnaire to obtain an estimate of the number of tanks with superior tank commanders needed to equal the performance of a standard company of 17 tanks with average commanders.[47] A fill-in-the-blanks format was used, as shown in the following excerpt.

For the purpose of this questionnaire an “average” tank commander is an NCO or commissioned officer whose performance is better than about half his fellow TCs. A “superior” tank commander is one whose performance is better than 85% of his fellow tank commanders.

The first question deals with relative value. For example, if a “superior” clerk types 10 letters a day and an “average” clerk types 5 letters a day then, all else being equal, 5 “superior” clerks have the same value in an office as 10 “average” clerks.

In the same way, we want to know your estimate or opinion of the relative value of “average” vs. “superior” tank commanders in combat.

I estimate that, all else being equal, _______________ tanks with “superior” tank commanders would be about equal in combat to 17 tanks with “average” tank commanders.

Questionnaire data was gathered from 100 tank commanders enrolled in advanced training at a U.S. Army post. N50 was set at 17 as a fixed number of tanks with average commanders, because a tank company has 17 tanks. Assuming that organizations pay average employees their approximate worth, the equivalent civilian salary for a tank commander was set at $68,000 (in 2006 dollars).

The median response given for the number of superior TCs judged equivalent to 17 average TCs was 9, and the mode was 10. The response “9” was used as most representative of central tendency. Making use of Equation 9-5 (setting the total value of 9 superior commanders equal to that of 17 average commanders), V85 was calculated as follows:

($68,000 × 17) / 9 = $128,444

In terms of Equation 9-4:

SDy = $68,000 × [(17 ÷ 9) − 1] = $60,444

This is considerably less than the SD$ value ($136,800) that resulted from the system-effectiveness technique. SDy also was estimated using the global-estimation procedure. However, there was minimal agreement either within or between groups for estimates of superior performance, and the distributions were skewed positively. Distributions of average performance also were skewed positively. Such extreme response variability illustrates the difficulty of making these kinds of judgments when the cost of contracting work is unknown, equipment is expensive, or other financially intangible factors exist. Such is frequently the case for public employees, particularly when private-industry counterparts do not exist. Under these circumstances, the system-effectiveness technique or the superior-equivalents technique may apply.
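The superior-equivalents arithmetic can be run through the same kind of short sketch, using the tank-commander figures above; the function name is ours.

```python
# Sketch of the superior-equivalents arithmetic (Equations 9-4 through 9-6),
# using the tank-commander figures from the text.

def superior_equivalents(v50, n50, n85):
    """Return (V85, SDy) given the value of average performance (V50) and
    the number of superior performers judged equivalent to n50 average ones."""
    v85 = v50 * n50 / n85          # from N85 * V85 = N50 * V50
    sd_y = v85 - v50               # Equation 9-6 in unfactored form
    return v85, sd_y

v85, sd_y = superior_equivalents(68_000, 17, 9)
print(round(v85), round(sd_y))     # about 128444 and 60444, as in the text
```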

One possible problem with both of these techniques is that the quality of performance in some situations may not translate easily into a unidimensional, quantitative scale.[48] For example, a police department may decide that the conviction of one murderer is equivalent to the conviction of five burglars. Whether managers do in fact develop informal algorithms to compare the performance of different individuals, perhaps on different factors, is an empirical question. Certainly the terms or dimensions that are most meaningful and useful will vary across jobs.

This completes our examination of five different methods for estimating the economic value of job performance. Researchers have proposed variations of these methods,[49] but at this point, the reader might naturally ask whether any one method is superior to the others. Our final section addresses that question.

Process: How Accurate Are SDy Estimates, and How Much Does It Matter?

In terms of applying these ideas in actual organizations, the logical idea that there are systematic differences in the value of improving performance across different roles or jobs is much more important than the particular estimate of SDy. When business leaders ask HR professionals how much a particular HR program costs, often they are actually wondering whether the improvement in worker quality it will produce is worth it. The distinction between the average value of performance versus the value of improving performance is often extremely helpful in reframing such discussions to uncover very useful decisions.

As discussed in Chapter 2, “Analytical Foundations of HR Measurement,” if the question is reframed from “How much is this program worth?” to “How likely is it that this investment will reach at least a minimum acceptable level of return?,” the decision process becomes much more logical, and better decisions are more likely. In terms of SDy, this means that even a wide range of SDy values will often yield the same conclusion, namely, that what appeared to be very costly HR program investments are actually quite likely to pay off. In fact, the break-even level of SDy (the level needed to meet the minimal acceptable level of return) is often lower than even the most conservative SDy estimates produced by the techniques described here.
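As a rough sketch of that break-even idea, the fragment below assumes the simple one-cohort selection-utility model sketched earlier in the chapter; the cost, validity, and hiring figures are hypothetical, and the function name is ours.

```python
# Break-even sketch: the SDy at which the expected utility gain just covers
# the program cost, under the simple one-cohort model sketched earlier.
# All figures are hypothetical.

def breakeven_sdy(total_cost, n_selected, validity, mean_std_predictor_score):
    """SDy at which N * r_xy * Z_s * SDy equals the program cost."""
    return total_cost / (n_selected * validity * mean_std_predictor_score)

# Hypothetical program: $200,000 cost, 50 hires, validity .35, Z_s = 0.80.
print(round(breakeven_sdy(200_000, 50, 0.35, 0.80)))   # about 14286
```

If every estimation method described in this chapter yields an SDy well above a figure like this, the choice among methods does not change the decision.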

A review of 34 studies that included more than 100 estimates of SDy concluded that differences among alternative methods for estimating SDy are often less than 50 percent (and may be less than $5,000 in many cases).[50] Even though differences among methods for estimating SDy may be small, those differences can become magnified when multiplied by the number of persons selected, the validity, and the selection ratio. Without any meaningful external criterion against which to compare SDy estimates, one is left with little basis for choosing one method over another. This is what led the authors of one review to state, “Rather than focusing so much attention on the estimation of SDy, we suggest that utility researchers should focus on understanding exactly what Y represents.”[51]

In terms of the perceived usefulness of the utility information, research has found that different SDy techniques influence managers’ reactions differently (the 40 percent rule was perceived as more credible than CREPID), but these differences accounted for less than 5 percent of the variance in the dependent measure.[52] At a broader level, another study found that managers preferred to receive information about the financial results of HR interventions, rather than anecdotal information, regardless of the overall impact of such programs (low, medium, or high).[53]

Let us not lose sight of our overall objective: to improve HRM decision making. To be useful, utility analyses should reflect the context in which decisions are made.[54] For example, is the task to choose among alternative selection procedures? Or is it to decide between funding a selection program and buying new equipment? All utility analyses involve uncertainty and risk, just like any other organizational measurement. By taking uncertainty into account through sensitivity or break-even analysis (see Chapter 2), any of the SDy estimation methods may be acceptable because none yields a result so discrepant as to change the decision in question.

Of course, the broader issue, from a talentship perspective, requires answers to questions such as the following: Where would improvements in talent, or how it is organized, most enhance sustainable strategic success? We began this chapter by focusing on performance-yield curves and the notion of pivotal talent. We emphasized that it is important to distinguish average value from variability in value, and that a key question for managers is not which talent has the greatest average value, but rather, in which talent pools does performance variation create the biggest strategic impact. The estimation of SDy provides an answer to one important piece of that puzzle. The next chapter examines the actual outcomes of utility analyses applied to employee selection, and the role of economic factors, employee flows, and break-even analysis in interpreting such results.

Exercises

Software that calculates answers to one or more of the following exercises can be found at www.shrm.org/publications/books.

1.

Divide into four- to six-person teams and do either a or b depending on feasibility.

  a. Choose a production job at a fast-food restaurant and, after making appropriate modifications of the standard-costing approach described in this chapter, estimate the mean and standard deviation of dollar-valued job performance.

  b. The Tiny Company manufactures components for word processors. Most of the work is done at the 2,000-employee Tiny plant in the Midwest. Your task is to estimate the mean and standard deviation of dollar-valued job performance for Assembler-1s (about 200 employees). You are free to make any assumptions you like about the Tiny Assembler-1s, but be prepared to defend your assumptions. List and describe all the factors (along with how you would measure each one) your team would consider in using standard costing to estimate SDy.

2.

Using the instructions provided for the global-estimation procedure, each class member should attempt to estimate the mean, standard deviation, standard error of the mean, and 90 percent confidence interval for the mean value of a stockbroker working for a major brokerage firm in New York.[55] Each class member should make three estimates of the dollar value to the firm:

The value of a stockbroker at the 50th percentile in merit

At the 85th percentile in merit

At the 15th percentile in merit

For purposes of this exercise, the accuracy of your estimates is less important than your understanding of the process and mechanics of the estimation procedure.
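For readers who want to check their arithmetic, here is a minimal sketch of the summary statistics the exercise calls for, computed from a set of 50th-percentile estimates. The sample values are invented solely for illustration; SDy itself would be derived separately from the gaps between the percentile estimates, as described earlier in the chapter.

import math

# Hypothetical 50th-percentile estimates (in dollars) from ten class members
estimates = [60_000, 75_000, 80_000, 90_000, 100_000,
             110_000, 120_000, 125_000, 140_000, 150_000]
n = len(estimates)
mean = sum(estimates) / n
sd = math.sqrt(sum((x - mean) ** 2 for x in estimates) / (n - 1))  # sample standard deviation
se = sd / math.sqrt(n)                                             # standard error of the mean
ci_low, ci_high = mean - 1.645 * se, mean + 1.645 * se             # 90 percent confidence interval
print(f"Mean = ${mean:,.0f}, SD = ${sd:,.0f}, SE = ${se:,.0f}")
print(f"90% CI for the mean: ${ci_low:,.0f} to ${ci_high:,.0f}")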

3.

Jim Hill is manager of subscriber accounts for the Prosper Company. The results of a job analysis indicate that Jim’s job includes four principal activities. A summary of Jim’s superior’s ratings of the activities and Jim’s performance of each of them follows.

Principal Activity    Time Frequency    Importance    Performance Rating (Points)
1                     4.5               3             1.00
2                     3.0               5             2.00
3                     6.0               2             0.50
4                     1.0               7             1.00

Assuming Jim is paid $62,000 per year, use CREPID to estimate the overall economic value of his job performance.
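As a check on the mechanics, here is a minimal sketch of the CREPID arithmetic, assuming that the two rating scales shown in the table (time/frequency and importance) are the only activity weights: multiply the two ratings for each activity, convert the products to proportions of annual salary, scale each activity's dollar share by the performance rating shown, and sum. Treat the printed result as one reasonable reading of the procedure rather than an answer key.

# CREPID-style sketch for Exercise 3 (hypothetical worked example).
# Each tuple: (activity, time_frequency, importance, performance_rating)
activities = [
    (1, 4.5, 3, 1.00),
    (2, 3.0, 5, 2.00),
    (3, 6.0, 2, 0.50),
    (4, 1.0, 7, 1.00),
]
salary = 62_000

# Weight each activity by time/frequency x importance, then express each
# weight as a proportion of the total weight.
weights = [t * imp for _, t, imp, _ in activities]
total_weight = sum(weights)

# Allocate salary to activities in proportion to their weights, scale each
# share by the performance rating, and sum to get overall economic value.
overall_value = sum(
    (w / total_weight) * salary * rating
    for w, (_, _, _, rating) in zip(weights, activities)
)
print(f"Estimated economic value of Jim's performance: ${overall_value:,.0f}")
# Roughly $73,700 with these figures, compared to the $62,000 salary.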

4.

Assume that an average SWAT team member is paid $55,000 per year. Complete the following questionnaire. Then use the results to estimate SDy by means of the superior equivalents technique.

For purposes of this questionnaire, a “superior” SWAT team member is one whose performance is better than about 85 percent of his fellow SWAT team members. Please complete the following item:

I estimate that, all else being equal, _________ “superior” SWAT team members would be about equal to 20 “average” SWAT team members.
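As one way to picture the arithmetic, the sketch below applies the superior-equivalents logic: if some number of “superior” (85th-percentile) members can do the work of 20 “average” members, the ratio of the two headcounts converts the value of an average member (approximated here by the $55,000 salary) into the value of a superior member, and the difference between the two is an estimate of SDy. The response of 15 superior members is an invented answer to the questionnaire.

def sdy_superior_equivalents(avg_value, n_average, n_superior_equiv):
    """SDy from the judgment that n_superior_equiv 'superior' members
    do the work of n_average 'average' members."""
    superior_value = avg_value * n_average / n_superior_equiv  # value of one superior member
    return superior_value - avg_value  # 85th minus 50th percentile, roughly 1 SD

# Hypothetical questionnaire response: 15 superior members = 20 average members.
sdy = sdy_superior_equivalents(avg_value=55_000, n_average=20, n_superior_equiv=15)
print(f"Estimated SDy: ${sdy:,.0f}")  # about $18,333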

References

1.

W. F. Cascio and P. Wynn (2004). “Managing a downsizing process,” Human Resource Management, 43:4, 425–436.

2.

J. W. Boudreau and P. M. Ramstad (2003). “Strategic industrial and organizational psychology and the role of utility analysis models,” in W. C. Borman, D. R. Ilgen, and R. J. Klimoski (volume eds.), Handbook of Psychology, Volume 12, Industrial and Organizational Psychology (Hoboken, NJ: Wiley). 193–221.

3.

A more complete treatment of the Disney example, as well as the concept of pivotalness and performance-yield curves, can be found in J. W. Boudreau and P. M. Ramstad, Beyond HR: The New Science of Human Capital (Boston: Harvard Business School Publishing, 2007).

4.

E. F. Cabrera and N. S. Raju (2001). “Utility analysis: Current trends and future directions,” International Journal of Selection and Assessment, 9, 92–102.

5.

J. E. Hunter, F. L. Schmidt and M. K. Judiesch (1990). “Individual differences in output variability as a function of job complexity,” Journal of Applied Psychology, 75, 28–42.

6.

Cabrera & Raju, op. cit.

7.

J. W. Boudreau and P. Ramstad (2007). Beyond HR: The new science of human capital (Boston: Harvard Business School Press).

8.

Ibid.

9.

L. J. Cronbach and G. C. Gleser (1965). Psychological Tests and Personnel Decisions (2nd ed.) (Urbana, IL: University of Illinois Press). See also N. S. Raju, M. J. Burke, and J. Normand, “A new approach for utility analysis,” Journal of Applied Psychology, 75, 1990, 3–12.

10.

H. E. Brogden and E. K. Taylor (1950). “The dollar criterion—Applying the cost accounting concept to criterion construction,” Personnel Psychology, 3, 133–154.

11.

Ibid., 146.

12.

O. L. Greer and W. F. Cascio (1987). “Is cost accounting the answer? Comparison of two behaviorally based methods for estimating the standard deviation of job performance in dollars with a cost-accounting-based approach,” Journal of Applied Psychology, 72, 588–595.

13.

C. T. Horngren, S. M. Datar, and G. M. Foster (2005). Cost Accounting: A managerial emphasis (12th ed.) (Upper Saddle River, NJ: Prentice Hall). See also J. O. Cherrington, E. D. Hubbard, and D. Luthy (1985). Cost and Managerial Accounting (Dubuque, IA: Wm. C. Brown).

14.

F. L. Schmidt and J. E. Hunter (1983). “Individual differences in productivity: An empirical test of estimates derived from studies of selection procedure utility,” Journal of Applied Psychology, 68, 407–414.

15.

Subsequent research indicates that this guideline is quite conservative. M. K. Judiesch, F. L. Schmidt, and M. K. Mount (1996). “An improved method for estimating utility,” Journal of Human Resource Costing and Accounting, 1:2, 31–42.

16.

Hunter, Schmidt, & Judiesch, 1990, op. cit.

17.

J. W. Boudreau (1988). “Utility analysis,” in L. Dyer (ed.), Human Resource Management: Evolving Roles and Responsibilities (Washington, DC: Bureau of National Affairs), 1-125 to 1-186.

18.

F. L. Schmidt, J. E. Hunter, R. C. McKenzie, and T. W. Muldrow (1979). “Impact of valid selection procedures on work-force productivity,” Journal of Applied Psychology, 64, 609–626.

19.

In a normal distribution of scores, + 1SD corresponds, approximately, to the difference between the 50th and 85th percentiles. Minus 1SD corresponds, approximately, to the difference between the 50th and 15th percentiles.

20.

Ibid.

21.

Ibid., p. 621.

22.

M. K. Judiesch, F. L. Schmidt, and M. K. Mount (1992). “Estimates of the dollar value of employee output in utility analyses: An empirical test of two theories,” Journal of Applied Psychology, 77, 234–250.

23.

Ibid. M. K. Judiesch, F. L. Schmidt, and M. K. Mount (1996). “An improved method for estimating utility,” Journal of Human Resource Costing and Accounting, 1:2, 31–42.

24.

Hunter, Schmidt, & Judiesch, 1990, op. cit.

25.

Judiesch, Schmidt, and Mount, 1996, op. cit.

26.

Judiesch, Schmidt, and Mount, 1992; 1996, op. cit.

27.

Although these methods have been proposed, readers should be aware that neither “sales divided by total employees” nor that figure weighted by proportional salary may be an accurate way to estimate the relative variability of performance within different jobs.

28.

S. J. Cesare, M. H. Blankenship, and P. W. Giannetto (1994). “A dual focus of SDy estimations: A test of the linearity assumption and multivariate application,” Human Performance, 7:4, 235–255.

29.

M. J. Burke and J. T. Frederick (1984). “Two modified procedures for estimating standard deviations in utility analyses,” Journal of Applied Psychology, 69, 482–489. D. V. Lezotte, N. S. Raju, M. J. Burke, and J. Normand (1996). “An empirical comparison of two utility analysis models,” Journal of Human Resource Costing and Accounting, 1:2, 110–130. J. R. Rich and J. W. Boudreau (1987). “The effects of variability and risk on selection utility analysis: An empirical simulation and comparison,” Personnel Psychology, 40, 55–84.

30.

J.W. Boudreau and P. M. Ramstad (2007). Beyond HR: The New Science of Human Capital (Boston, MA: Harvard Business School Publishing).

31.

R. L. Desimone, R. A. Alexander, and S. F. Cronshaw (1986). “Accuracy and reliability of SDy estimates in utility analysis,” Journal of Occupational Psychology, 59, 93–102.

32.

P. Bobko, R. Karren, and J. J. Parkington (1983). “The estimation of standard deviations in utility analyses: An empirical test,” Journal of Applied Psychology, 68, 170–176. Burke and Frederick, 1984, op. cit. M. J. Burke and J. T. Frederick (1986). “A comparison of economic utility estimates for alternative rational SDy estimation procedures,” Journal of Applied Psychology, 71, 334–339.

33.

Bobko, Karren, and Parkington, 1983, op. cit.

34.

Greer and Cascio, 1987, op. cit.

35.

Lezotte et al., 1996, op. cit.

36.

W. F. Cascio and R. A. Ramos (1986). “Development and application of a new method for assessing job performance in behavioral/economic terms,” Journal of Applied Psychology, 71, 20–28.

37.

J. A. Weekley, B. Frank, E. J. O’Connor, and L. H. Peters (1985). “A comparison of three methods of estimating the standard deviation of performance in dollars,” Journal of Applied Psychology, 70, 122–126.

38.

S. S. Stevens (1971). “Issues in psychophysical measurement,” Psychological Review, 78, 426–450.

39.

Judiesch, Schmidt, and Mount, 1992; 1996, op. cit. Lezotte et al., op. cit.

40.

Judiesch et al., 1992, 236.

41.

Boudreau and Ramstad, 2003, op. cit.

42.

P. Bobko, L. Shetzer, and C. Russell (1991). “Estimating the standard deviation of professors’ worth: The effects of frame and presentation order in utility analysis,” Journal of Occupational Psychology, 64, 179–188.

43.

P.L. Roth, R.D. Pritchard, J.D. Stout, & S.H. Brown (1994). Estimating the impact of variable costs on SDy in complex situations. Journal of Business and Psychology, 8 (4), 437-454.

44.

J.W. Boudreau (1991). Utility analysis for decisions in human resource management. In M. D. Dunnette & L. M. Hough (Eds.), Handbook of industrial and organizational psychology (Vol. 2, 2nd ed., p. 621-745). Palo Alto, CA: Consulting Psychologists Press.

45.

N. K. Eaton, H. Wing, and K. J. Mitchell (1985). “Alternate methods of estimating the dollar value of performance,” Personnel Psychology, 38, 27–40.

46.

Ibid.

47.

Ibid.

48.

Ibid.

49.

Raju, Burke, and Normand, 1990, op. cit. M. K. Judiesch, F. L. Schmidt, and J. E. Hunter (1993). “Has the problem of judgment in utility analysis been solved?” Journal of Applied Psychology, 78, 903–911. K. S. Law and B. Myors (1999). “A modification of Raju, Burke, and Normand’s (1990) new model for utility analysis,” Asia Pacific Journal of Human Resources, 37:1, 39–51.

50.

Boudreau, 1991, op. cit.

51.

R.D. Arvey & K.R. Murphy (1998). Performance evaluation in work settings. Annual Review of Psychology, 49, 141-168.

52.

J. T. Hazer and S. Highhouse (1997). “Factors influencing managers’ reactions to utility analysis: Effects of SDy method, information frame, and focal intervention,” Journal of Applied Psychology, 82, 104–112.

53.

B.W. Mattson (2003). The effects of alternative reports of human resource development results on managerial support. Human Resource Development Quarterly, 14(2), 127-151.

54.

W. F. Cascio (1996). “The role of utility analysis in the strategic management of organizations,” Journal of Human Resource Costing and Accounting, 1:2, 85–95. W. F. Cascio (1993). “Assessing the utility of selection decisions: Theoretical and practical considerations,” in N. Schmitt and W. C. Borman (eds.), Personnel Selection in Organizations (San Francisco: Jossey-Bass), 310–340. C. J. Russell, A. Colella, and P. Bobko (1993). “Expanding the context of utility: The strategic impact of personnel selection,” Personnel Psychology, 46, 781–801.

55.

A 90 percent confidence interval for the mean may be calculated as the mean of the estimates plus or minus 1.645 times the standard error of the mean (the standard deviation of the estimates divided by the square root of the number of estimates). The interpretation of the result is this: If we were to repeat the above procedure often, each time selecting a different sample from the same population, then, on the average, 90 out of every 100 intervals similarly obtained would contain the true value of the population mean.

 
