Chapter 10. The Payoff from Enhanced Selection

Chapter 8, “Staffing Utility: The Concept and Its Measurement,” provided you with the logical and mathematical models for calculating the utility of staffing. Chapter 9, “The Economic Value of Job Performance,” showed how to estimate an important element of staffing utility models: the monetary value of the standard deviation of performance. When you put the models of Chapter 8 together with the estimates of Chapter 9, you end up with powerful analytical frameworks that help predict when investments in enhanced staffing will pay off. In this chapter, we present evidence that suggests that such programs often pay off handsomely. This is important because lacking the frameworks provided here, organization leaders often can see only the costs of such programs, or are presented with statistics such as correlations or validity coefficients out of context. The result is that decision makers ignore the difficult-to-understand value of improved staffing, and instead focus only on the costs. Focusing only on costs can cause organizations to fail to invest in improved staffing programs that would be extremely valuable.

By the same token, it is not unusual for staffing professionals to become so enamored with improvements in staffing validity or elegance that they lose sight of the need to balance costs and benefits. Improved staffing validity is not always worth the cost, and it is certainly not equally valuable in every situation. The logic of Chapter 8 and the estimation methods of Chapter 9 combine to provide clues about where staffing investments have the greatest payoff.

In Chapters 8 and 9, we introduced the concept of utility analysis, described the assumptions and data requirements of alternative utility models, and presented methods for estimating the economic value of each employee’s job performance. We emphasized that utility analysis is a framework to guide decisions about investments in human capital. Building on the framework shown in Figure 8-1, which illustrated the logic of the staffing process and talent flows, we emphasized that at each stage, the candidate pool can be thought of in terms of the quantity of candidates, the average and dispersion of the quality of the candidates, and the cost of processing and employing the candidates. Quantity, quality, and cost considerations determine the monetary value of staffing programs, as shown in the utility models of Chapter 8.

Our objective in this chapter is to tie these ideas together to demonstrate how they can and have been applied in work settings. We show how valid selection procedures (for external and internal candidates) can pay off handsomely for organizations. Moreover, we show how the basic utility formulas can incorporate important financial considerations, to make utility estimates more comparable with estimates of investment returns for other resources such as technology, advertising, and so on. To date, utility analysis has not been used widely. However, as the pressure increases on HR executives to justify new or continuing HR programs in the face of budgetary constraints and escalating labor costs, it is increasingly likely that strategic success will be enhanced by this kind of logical analysis applied to pivotal talent (see Chapter 1, “Making HR Measurement Strategic”) with strong economic justification. In terms of process, we believe that the more comparable utility-analysis estimates are to other financial estimates, the more likely it is that HR and business leaders will develop shared mental models and make better decisions about staffing and other HR programs.

We begin with an example of tangible results from improved staffing, estimated with the Brogden-Cronbach-Gleser model described in Chapter 8.[1] Then, we examine five considerations that help make staffing payoffs more realistic and better connected to traditional financial logic:

  • Economic factors (variable costs, taxes, and discounting)

  • Employee flows

  • Probationary periods

  • The use of multiple selection devices

  • Departures from top-down hiring

Then, we address the issue of risk and uncertainty in utility analysis and offer several tools to aid in decision making. We conclude the chapter by focusing on processes used to communicate the results of utility analyses to decision makers.

The Logic of Investment Value from Utility Analysis

Figure 10-1 presents the logic of utility analysis, along with some situational factors that may affect quantity, quality, and cost.

Figure 10-1. The logic of utility analysis and factors that can affect payoffs.

We discussed several of these factors in Chapters 8 and 9. In Chapter 8, Equation 8-17 showed how the Brogden-Cronbach-Gleser model combines several of these factors, namely the selection ratio (SR), the validity of the selection procedure (r), the variability or standard deviation of job performance expressed in monetary terms (SDy), the average standardized score of those hired on the predictor, and the average cost per selectee of applying the selection process to all applicants [(Na × C)/Ns], to determine an unadjusted estimate of the utility of a selection process. The remaining factors shown in Figure 10-1 may serve to increase or to decrease the unadjusted utility estimate. We discuss each of them after we illustrate the computation of the unadjusted estimate in the following section.

Measuring the Utility Components

In terms of quantity, the study of the Programmer Aptitude Test (PAT) reported that an average of 618 GS-5 through GS-9 programmers were selected per year in the federal government, and their average tenure was 9.69 years. Estimating on the basis of U.S. census data in 1979, 10,210 computer programmers could be hired each year in the U.S. economy using the PAT. As already mentioned, average tenure for government programmers was found to be 9.69 years; in the absence of other information, this tenure figure also was assumed for the private sector. The average gain in utility per selectee per year was multiplied by 9.69 to yield a total employment-period gain in utility per selectee.

It was not possible to determine the prevailing selection ratio (SR) for computer programmers either in the general economy or in the federal government, so the utility analysis formula was used to do sensitivity analysis for SRs of 0.05 and in intervals of 0.10 from 0.10 to 0.80.

In terms of validity, it’s possible that the PAT would replace a prior procedure with zero validity in some cases, but in other situations the PAT would replace a procedure with lower but nonzero validity. Thus utilities were calculated assuming previous-procedure true validities of 0.20, 0.30, 0.40, and 0.50, as well as zero.

SDy was calculated as the average of the two estimates obtained from experts, using the global estimation procedure described in Chapter 9. The estimate was $35,679 per person per year (in 2006 dollars).

When the previous procedure was assumed to have zero validity, its associated testing cost also was assumed to be zero. When the previous procedure was assumed to have a nonzero validity, its associated cost was assumed to be the same as that of the PAT (that is, $33 per applicant). Cost of testing was charged only to the first year.

The Brogden-Cronbach-Gleser general utility equation was modified to obtain the equation actually used in computing the utilities.

Equation 10-1.

ΔU = t × Ns × (r1 − r2) × SDy × (λ / φ) − Ns × (c1 − c2) / φ

Where ΔU is the gain in productivity in dollars from using the new selection procedure for one year; t is the tenure in years of the average selectee (here 9.69); Ns is the number selected in a given year (this figure was 618 for the federal government and 10,210 for the U.S. economy); r1 is the validity of the new procedure, here the PAT (r1 = 0.76); r2 is the validity of the previous procedure (r2 ranges from 0 to 0.50); c1 is the cost per applicant of the new procedure, here $33; and c2 is the cost per applicant of the previous procedure, here zero or $33. The terms SDy, λ (the ordinate of the standard normal density at the predictor cut score), and φ (the selection ratio) are as defined previously. This equation gives the productivity gain that results from one year’s use of the new (more valid) selection procedure, but not all these gains are realized the first year. They are spread out over the 9.69-year tenure of the new employees.

Analytics: Results of the Utility Calculation

The estimated gains in productivity, in 2006 dollars, varied from $18 million to $309 million.[3] Those gains would result from one year’s use of the PAT to select computer programmers in the federal government for different combinations of selection ratios and previous-procedure validity. When the SR is 0.05 (the government is assumed to be very selective) and the previous procedure has no validity (the maximum relative value for the PAT), use of the PAT for one year produces an aggregate productivity gain of $309 million. At the other extreme, if the SR is 0.80 (relatively unselective) and the validity of the procedure the PAT replaces is 0.50, the estimated gain is only $18 million.

To illustrate how those figures were derived, assume that the SR = 0.20 and the previous procedure has a validity of 0.30. All other terms are as defined previously.

ΔU = 9.69(618)(0.76 − 0.30)($35,679)(0.2789 ÷ 0.20) − 618($33 − $33)/0.20

ΔU = 9.69(618)(0.46)($35,679)(1.3945) − 0

ΔU = $137,060,000

The gain per selectee can be obtained by dividing the value of total utility by 618, the assumed yearly number of selectees. When this is done for our example ($137,060,000 / 618), the gain per selectee is $221,775. That figure is still quite high, but remember that not all of those gains are realized during the first year. They are spread out over the entire tenure of the new employees. Gains per year per selectee can be obtained by dividing the total utility first by 618 and then by 9.69, the average tenure of computer programmers. In our example, this produces a per-year gain of $22,887 per selectee, or to carry it even further, an $11 gain per hour per year per selectee (assuming 2,080 hours per work year).
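The worked calculation above can be sketched in a few lines of Python. This is our illustrative check, not part of the original study; small differences from the chapter’s figures reflect rounding of the normal ordinate (the chapter uses 0.2789):

```python
from statistics import NormalDist

def utility_gain(t, n_s, r_new, r_old, sd_y, sr, cost_new, cost_old):
    """Brogden-Cronbach-Gleser gain in the form of Equation 10-1.

    lam is the ordinate of the standard normal density at the cut score
    implied by the selection ratio (sr); lam / sr is the mean
    standardized predictor score of those hired.
    """
    z_cut = NormalDist().inv_cdf(1 - sr)   # cut score that admits top sr
    lam = NormalDist().pdf(z_cut)          # normal ordinate at the cut
    benefit = t * n_s * (r_new - r_old) * sd_y * (lam / sr)
    costs = n_s * (cost_new - cost_old) / sr
    return benefit - costs

# Parameters from the PAT example: SR = 0.20, previous validity = 0.30
du = utility_gain(t=9.69, n_s=618, r_new=0.76, r_old=0.30,
                  sd_y=35_679, sr=0.20, cost_new=33, cost_old=33)
print(round(du / 1e6, 1))       # total gain, $ millions
print(round(du / 618))          # gain per selectee
print(round(du / 618 / 9.69))   # gain per selectee per year
```

Running this reproduces the chapter’s roughly $137 million total and the per-selectee and per-year breakdowns to within rounding.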

Making Utility Analysis Estimates More Comparable to Financial Estimates

Evidence presented in the studies we have described leads to the inescapable conclusion that how people are selected makes an important, practical difference. The implications of valid selection procedures for work force productivity are clearly much greater than most of us might have suspected, but are they as high as these studies suggest? Standard investment analysis would suggest that considerations such as the costs of improved performance, inflation and risk, and the tax implications of higher profits from better selection should all be accounted for, to make these estimates comparable to investment calculations for more traditional resources. Next, we incorporate more of the factors shown earlier in Figure 10-1 to illustrate the effects of adjustments to the basic utility-analysis formula.

Figure 10-1 showed that the cost of a selection program depends not only on the number of individuals selected and the cost of selection, but also on several additional economic factors. These include variable costs, taxes, and discounting. Why are these important? Because by taking them into account, decision makers can evaluate the soundness of HR investments more comparably with other investments. Other financial investments routinely account for these factors, so failure to consider them in estimating the value of staffing produces utility estimates that are overstated compared to other investments. Decision makers will want to compare HR investments on compatible terms with other investments, so these adjustments help to make HR utility estimates more comparable.

Failure to adjust utility estimates will create overestimates under any or all of three conditions.[4] First, where variable costs (for example, incentive- or commission-based pay, benefits, variable raw materials costs, variable production overhead) rise with productivity, a portion (V) of the gain in value calculated using Equation 10-1 will go to pay such costs. Second, organizations must pay a portion of the profit as tax liabilities (TAX). Third, where costs and benefits accrue over time, the values of future costs and benefits are worth less than present costs and benefits, so future values must be discounted to reflect the opportunity costs of returns foregone. Benefits received in the present or costs delayed into the future would be invested to earn returns. A dollar received today and invested at a 10 percent annual return would be worth $1.21 in two years. Conversely, a future benefit worth $1.21 in two years has a “present value” of $1.00 ($1.21 / 1.10²). The following utility formula takes these three economic factors into account.[5]

Equation 10-2.

ΔU = Σ (for t = 1 to T) [N × rx,sv × SDsv × Z̄x × (1 + V) × (1 − TAX)] / (1 + i)^t − C × (1 − TAX)

Where ΔU is the change in overall worth or utility after variable costs, taxes, and discounting; N is the number of employees selected; t is the time period in which an increase in productivity occurs; T is the total number of periods (for example, years) that benefits continue to accrue to an organization; i is the discount rate; SDsv is the standard deviation of the sales value of productivity among the applicant or employee population (similar to SDy in previous utility models); V is the proportion of sales value represented by variable costs; TAX is the organization’s applicable tax rate; rx,sv is the validity coefficient between predictor (x) and sales value (similar to rx,y in previous utility models); Z̄x is the average standardized predictor score of those selected; and C is the total selection cost for all applicants.

Those economic considerations suggest large potential reductions in unadjusted utility estimates. For example, researchers computed an SDy value of $35,679 (in 2006 dollars) in their utility analysis of the Programmer Aptitude Test (PAT).[6] This value does not account for variable costs or taxes. Although this may have been appropriate for federal government jobs because the federal government is not taxed, it would not be appropriate for private-sector organizations that face variable costs and taxes.

Assuming the net effect of variable costs is to reduce gains by 5 percent, V = −0.05. Assuming a marginal tax rate (the tax rate applicable to changes in reported profits generated by a decision) of 45 percent, the after-cost, after-tax, one-year SDy value is as follows:

(SDy) × (1 + V) × (1 − TAX)

($35,679) × (1 − 0.05) × (1 − 0.45) = $18,643

This is 52 percent of the reported value.

Now, assuming a financial discount rate of 10 percent, if the average tenure of computer programmers in the federal government was just two years, the appropriate discount factor (DF) adjustment would be as shown in Equation 10-3.

Equation 10-3.

DF = 1 / (1.10)¹ + 1 / (1.10)² = 0.909 + 0.826 = 1.74

Over 10 years, DF = 6.14, but the average tenure of computer programmers in the federal government (at the time of the study) was computed to be 9.69 years.[7] Because the unadjusted estimates simply multiplied the one-year gain by the 9.69-year tenure, the appropriate adjustment needed to discount the computed utility values is 6.14 / 9.69, or 0.634.

When all three of those factors—variable costs, taxes, and discounting—are considered, the per-selectee utility values reported in the study of computer programmers range from $9,602 ($18 million / 618 = $29,126 per selectee × 0.634 × 0.52) to $164,840 ($309 million / 618 = $500,000 per selectee × 0.634 × 0.52).
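The full chain of adjustments can be sketched as follows. This is our illustrative code; because the chapter rounds its factors to 0.634 and 0.52, the computed values differ from the reported ones by well under 1 percent:

```python
# Adjust the unadjusted per-selectee utilities for variable costs,
# taxes, and discounting, following the chapter's example.
V, TAX, i, tenure = -0.05, 0.45, 0.10, 9.69

cost_tax_factor = (1 + V) * (1 - TAX)              # 0.95 * 0.55 ~ 0.52
df = sum(1 / (1 + i) ** t for t in range(1, 11))   # 10-year DF ~ 6.14
discount_factor = df / tenure                      # ~ 0.634

low = 18_000_000 / 618 * discount_factor * cost_tax_factor
high = 309_000_000 / 618 * discount_factor * cost_tax_factor
print(round(low), round(high))   # per-selectee range after adjustments
```

The adjusted range of roughly $9,600 to $165,000 per selectee matches the figures reported in the text.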

These values still are substantial, but they are 67 percent lower than the unadjusted values. Such significant effects argue strongly that HR leaders should be careful to make their monetary payoff estimates as compatible as possible with standard investment calculations.

Analyzing “Compound Interest” Through Talent: Effects of Employee Flows on Utility Estimates

Employee flows into, through, and out of an organization influence the value of a staffing program or any other HR intervention.[8] We showed earlier that failure to consider the effects of variable costs, taxes, and discounting tends to overstate utility estimates. Conversely, failure to consider the effect of employee flows may understate utility estimates. The utility-analysis formulas originally introduced reflected a selection program used to hire a single group, and often only the first-year effect of those better-selected employees. They expressed the utility of adding one, new, better-selected cohort to the existing work force. Yet, in any investment, it is the cumulative benefit over time that is relevant. One would not evaluate an investment in a new supply-chain system merely on the first set of materials or orders received.

Logic of Employee Flows

Earlier we multiplied the one-year selection benefit by the average tenure of selected employees, using the PAT to select computer programmers.[9] Yet, this still only reflects the effects of hiring one group, whose members stay for more than one year.

In practice, valid selection programs tend to be reapplied year after year, as employees flow into and out of the work force. Often, a program’s effects on subsequent cohorts will occur in addition to its lasting effects on previously treated cohorts. These are additive cohort effects.[10] By altering the terms N and T in Equation 10-2, we can account for the effect of employee flows.

Employee flows generally affect utility through the period-to-period changes in the number of treated employees in the work force. Note that we will use the term treated employees to mean those employees that are affected by an improved HR program, such as the group hired with an improved test. Such employees are added to a work force containing existing or untreated employees. The number of treated employees in the work force k periods in the future (Nk) may be expressed as shown in Equation 10-4.

Equation 10-4.

Nk = Σ (for t = 1 to k) (Nat − Nst)

Where Nat is the number of treated employees added to the work force in period t, and Nst is the number of treated employees subtracted from the work force in period t. For example, suppose it is the end of the fourth year that a new selection procedure is applied (k = 4), that 100 persons were hired in each of the four years, and that 10 of them left in Year 2, 15 in Year 3, and 20 in Year 4. Then, the following results are observed from the inception of the program (t = 1) to year 4 (t = 4):

N4 = (100 − 0) + (100 − 10) + (100 − 15) + (100 − 20)

N4 = 355
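The arithmetic of Equation 10-4 amounts to a cumulative sum of additions minus separations; a minimal sketch:

```python
# Treated employees remaining after four years (Equation 10-4):
# 100 hired each year; 10 left in Year 2, 15 in Year 3, 20 in Year 4.
added     = [100, 100, 100, 100]
separated = [0, 10, 15, 20]

n_k = sum(a - s for a, s in zip(added, separated))
print(n_k)  # 355
```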

Thus, the term Nk reflects both the number of employees treated in previous periods and their expected tenure. The formula for the utility (ΔUk) occurring in the kth future period that includes the economic considerations of Equation 10-2 may be written as shown in Equation 10-5.

Equation 10-5.

ΔUk = Nk × [Σ (t = 1 to k) 1/(1 + i)^t] × rx,sv × Z̄x × SDsv × (1 + V) × (1 − TAX) − Ck × [Σ (t = 1 to k − 1) 1/(1 + i)^t] × (1 − TAX)

This formula modifies the quantity element, by keeping track of how many treated employees are in the work force in each year. Then, after multiplying that number by the increased productive value of the treated employees, the relevant discount rate, cost, tax, and other factors are applied for that particular year.

For simplicity, the utility parameters rx,sv, V, SDsv, and TAX are assumed to be constant over time. This assumption is not necessary, and sometimes the factors may vary. Note also that the cost of treating (for example, selecting) the Nak employees added in period k (Ck) is now allowed to vary over time. However, Ck is not simply a constant multiplied by Nak. Some programs (for example, assessment centers) have high initial startup costs of development, but these costs do not vary with the number treated in future periods. Also, the discount factor for costs [1/(1 + i)^(k − 1)] reflects the exponent k − 1, assuming that such costs are incurred one period prior to receiving benefits. Where costs are incurred in the same period as benefits are received, k is the proper exponent.[11]

Measures: Calculating How Employee Flows Affect Specific Situations

To continue with our example using data from the study of programmers, let’s look at what happens in Year 4 of a new selection program: Recall that we calculated N4 = 355. Further, assume that the discount rate is 10 percent; the validity of the procedure is 0.40; the selection ratio is 0.50 (and therefore the average standardized test score of those selected is 0.3989 / 0.50 = 0.80, from earlier chapters); SDy is $18,643; variable costs = −0.05; taxes = 0.45; and Ck, the cost of treating the 100 employees added in Year 4, is 100 × $33 = $3,300.

ΔU4 = 355 × [(1/1.10¹ + 1/1.10² + 1/1.10³ + 1/1.10⁴) × 0.40 × 0.80 × $18,643 × 0.95 × 0.55] − $3,300 × (1/1.10¹ + 1/1.10² + 1/1.10³) × 0.55

ΔU4 = 355 × [3.17 × $3,117] − $1,815 × 2.49 = $3,507,716 − $4,519

ΔU4 = $3,503,197

This figure equals the total one-year value of the improved performance of all the better-selected employees who are still with the organization in the fourth year.
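The Year 4 computation can be sketched directly from the terms in the text (our illustrative code; the result agrees with the chapter’s $3,503,197 up to rounding of the intermediate factors):

```python
# One-period utility in Year 4, following the worked example:
# 355 treated employees in the work force, $3,300 of selection cost.
i, r, z_bar, sd_y, V, TAX = 0.10, 0.40, 0.80, 18_643, -0.05, 0.45
n_4, c_4 = 355, 3_300

df_benefits = sum(1 / (1 + i) ** t for t in range(1, 5))  # ~ 3.17
df_costs = sum(1 / (1 + i) ** t for t in range(1, 4))     # ~ 2.49

per_person = r * z_bar * sd_y * (1 + V) * (1 - TAX)       # ~ $3,117
du_4 = n_4 * df_benefits * per_person - c_4 * df_costs * (1 - TAX)
print(round(du_4))
```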

To express the utility of a program’s effects over F periods, the one-period utility estimates (ΔUk) are summed. Thus, the complete utility model reflecting employee flows through the work force for a program affecting productivity in F future periods may be written as shown in Equation 10-6.

Equation 10-6.

ΔU = Σ (for k = 1 to F) ΔUk

The duration parameter F in Equation 10-6 is not employee tenure, but rather how long a program affects the workforce. For example, assume that the PAT in the computer-programmer study is applied for 15 years, and for simplicity assume that all programmers stay for 10 years. (Recall that average tenure was 9.69 years.) If 618 programmers are added each year, for the first 10 future periods Nk will increase by 618 in each period. That is:

Equation 10-7.

Nk = Nk−1 + 618 = 618 × k, for k = 1, 2, …, 10 (with N0 = 0)

By Year 10, therefore, 6,180 programmers selected by the PAT have been added to the work force, and none have left. Beginning in future period 11, however, one PAT-selected cohort leaves in each period (Nst = 618). However, by continuing to apply the PAT to select 618 new replacements (that is, Nat = 618), the number of treated programmers in the work force is maintained. Thus, in future periods 11 through 15, Nat and Nst offset each other and Nk remains unchanged at 6,180. Assuming the government stops using the test in Year 15, starting in future period 16 the cost and number added (Ck and Nat) become zero, assuming the organization returns to random selection. However, the treated portion of the work force does not disappear immediately. Earlier-selected cohorts continue to separate (that is, Nst = 618), and Nk falls by 618 each period until the last-treated cohort (selected in future period 15) separates in future period 25. Nk for each of the 25 periods is shown in Figure 10-2. In Figure 10-2, F = 25 periods.

Figure 10-2. Example of employee flows over a 25-year period.

<source>Source: Adapted from Boudreau, J. W. (1983b). Effects of employee flows on utility analysis of human resource productivity improvement programs. Journal of Applied Psychology, 68, 400. Copyright © 1983 by the American Psychological Association. Reprinted with permission.</source>

Period (k)      Nk
 1               618
 2             1,236
 3             1,854
 4             2,472
 5             3,090
 6             3,708
 7             4,326
 8             4,944
 9             5,562
10             6,180
11             6,180
12             6,180
13             6,180
14             6,180
15             6,180
16             5,562
17             4,944
18             4,326
19             3,708
20             3,090
21             2,472
22             1,854
23             1,236
24               618
25                 0

Note: Nk = number of employees receiving a given treatment who remain in the workforce.
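The headcount schedule above follows mechanically from the flow assumptions in the text (618 hires per year for 15 years, each cohort staying a uniform 10 years, a simplification of the 9.69-year average tenure). A short sketch that generates it:

```python
# Generate N_k, the number of treated employees in the work force,
# over 25 periods of the PAT example.
F, hires_per_year, tenure, program_years = 25, 618, 10, 15

n = []
for k in range(1, F + 1):
    added = hires_per_year if k <= program_years else 0
    # a cohort hired in period k - tenure separates in period k
    separated = hires_per_year if 1 <= k - tenure <= program_years else 0
    prev = n[-1] if n else 0
    n.append(prev + added - separated)

print(n[9], n[14], n[24])  # periods 10, 15, 25
```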

Now, we can add the economic factors to the utility model that reflects employee flows. Assuming, as we did in our earlier example, that V = −0.05, TAX = 0.45, and the discount rate is 10 percent, the total expected utility of the 15-year selection program (the sum of the 25 one-period utility estimates, ΔUk in Equation 10-5) is $264.5 million (in 2006 dollars). This is considerably higher than the estimate in the original study of $119.6 million (in 2006 dollars), even after reflecting variable costs, taxes, and discounting.

The most important lesson to learn from the principle of employee flows is that one-cohort utility models often will understate actual utility because they reflect only the first part of a larger series of outcomes.[12]

At this point, you might be tempted to conclude that the actual dollar payoff from valid selection programs is two or three times higher than one-cohort models predict. Be careful! All existing utility models contain parameters that must be estimated, and in most cases we simply have no way to measure the accuracy of our estimates. Beyond that, as noted earlier, utility values often are based on the assumption that variable costs, SDy, and the selection ratio are constant over time. This may be unrealistic. Equation 10-6 does permit these parameters to vary, but until our measurement systems begin to assess multiperiod variations in these parameters, we will not know just how unrealistic these assumptions are. Research in utility analysis has produced some important refinements to the original Brogden-Cronbach-Gleser formula, but we still have a long way to go in improving our measurements of the parameters of the model.

Logic: The Effects of a Probationary Period

At Whole Foods Market, new employees are selected by a process that looks a lot like the Survivor television show. A new employee is hired provisionally, works side by side with his or her future team members, and at the end of four weeks is offered a permanent job only if at least two thirds of the team votes to hire him or her. A powerful way to augment the accuracy of staffing systems is to allow new employees actually to do the job for a while, and choose to keep those who work out and dismiss those who don’t.[13] This can be expensive, because Whole Foods has to pay probationary employees their salaries and benefits, and it involves the time and effort of the other employees who observe and rate the probationary workers. At the same time, the added accuracy and value of the better-screened work force may offset the increased costs. The utility formulas we have developed can be used to diagnose the conditions that determine when such a probationary period will pay off.

The utility effect of a probationary period is reflected by modifying the utility equation to reflect the difference in performance between the pool of employees hired initially and those who survive the probationary period.[14] Whether a new hire is considered successful or not depends only on his or her performance rating at the end of the probation period. The performance rating is therefore the retention-decision variable. You may recall that this binary, in-versus-out approach was a defining characteristic of the Taylor-Russell utility model (the success ratio).

Because lower performers are dismissed, the average performance of a given selected cohort increases after the probationary period. The actual amount of the improvement depends on two things: the validity of the selection process used to weed out low performers, and the performance cutoff that determines success during probation. The costs of paying and training employees who are later dismissed, together with any separation costs, must also be taken into account among the overall costs of a probationary-hiring program.

Interestingly, a probationary period reduces the harmful effect of selection errors, because they are corrected very quickly. Poor-performing employees are weeded out consistently and early, instead of being retained longer in “permanent” positions that require a longer process of formal dismissal. Paradoxically, this means that the value of selection procedures used prior to a probationary period is often less than would be the case if the same selection process were used without the probationary period. One benefit of improved selection is fewer errors (that is, erroneous acceptances and rejections), and when a probationary period mitigates those errors, the value of avoiding them through better selection is less.[15] Overall, the combined value of improved selection and a probationary period can be higher or lower than using either one alone. It depends on their relative validity, the severity of selection errors, and the variability in the applicant population. All of this is elegantly reflected in the utility model, which can be used to examine these factors jointly to identify the optimal combination.
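The core mechanism here is truncation: if everyone below a performance cutoff is dismissed, the survivors’ average performance is the mean of a truncated distribution. A minimal sketch, assuming performance is standard-normal (our assumption for illustration; the chapter specifies neither a distribution nor particular cutoffs):

```python
from statistics import NormalDist

nd = NormalDist()

def survivor_mean(cutoff_z):
    """Mean standard score of employees retained after dismissing
    everyone below cutoff_z: pdf(c) / (1 - cdf(c))."""
    return nd.pdf(cutoff_z) / (1 - nd.cdf(cutoff_z))

# The higher the probation cutoff, the higher the survivors' mean.
for c in (-1.0, 0.0, 0.5):
    print(c, round(survivor_mean(c), 3))
```

Even a lenient cutoff one SD below the mean lifts the surviving cohort’s average performance above that of the full applicant pool.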

Another way to look at probationary periods is as a special case of the employee-movement model that we described in Chapter 4, “The High Cost of Employee Separations,” in Figure 4-1. In essence, the probationary period is a “controlled-turnover” process, in which the validity of the dismissal decision becomes a determinant of the value of turnover to the organization.

Logic: The Effect of Multiple Selection Devices

Thus far, we have been proceeding on the assumption that a new, more valid selection procedure simply is substituted for an older, less-valid one. Often, however, work force productivity will be optimized by combining an existing procedure and a new procedure to obtain validity higher than either procedure can provide individually. The basic utility-analysis formula can easily accommodate this possibility.

Another feature of the basic utility equation is that it compares improved selection to random selection. Of course, in practice it is unlikely that many organizations select randomly. Most use multiple selection devices, such as application forms, interviews, background checks, aptitude, ability, personality, or work-sample tests, medical exams, and assessment centers. Although the validity of some of these devices may be low, each has demonstrated validities greater than zero.[16] Essentially, when multiple selection devices are combined, the overall validity of the combination may be higher, assuming each of them provides unique and valid information. However, the incremental effect of any particular device, compared to the combination without it, will generally be lower than if that one device were compared to random selection. If the costs of using multiple devices are relatively low, and the value of performance variability is high, the higher costs are often offset by the increased predictive power of the combination of predictors.
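The combined validity of two predictors can be computed with the standard two-predictor multiple correlation. The sketch below uses illustrative validities and an illustrative predictor intercorrelation, not figures from the chapter:

```python
import math

def combined_validity(r1, r2, r12):
    """Multiple correlation R of two predictors with criterion
    validities r1, r2 and intercorrelation r12."""
    return math.sqrt((r1**2 + r2**2 - 2 * r1 * r2 * r12) / (1 - r12**2))

# Hypothetical values: an interview (r = 0.30) added to a test
# (r = 0.50), with the two predictors correlated 0.20.
r_interview, r_test, r_between = 0.30, 0.50, 0.20
r_both = combined_validity(r_interview, r_test, r_between)
print(round(r_both, 3))           # combined validity
print(round(r_both - r_test, 3))  # increment over the test alone
```

Note how the interview’s increment over the test (about 0.04 here) is far smaller than its 0.30 validity against random selection, exactly the point made above.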

Logic: Effects of Job-Offer Rejections

The method of calculating utility that we have been discussing assumes that all applicants who receive offers take them. How much does it matter if top-scoring applicants reject offers and you must move on to lower-scoring applicants? Obviously, the average quality of those selected will be lower. The effect is especially pronounced in a tight labor market, where the supply of available workers is low relative to demand. Firms may be forced to lower their minimum hiring requirements in order to fill vacancies.[17] Lower hiring requirements mean higher selection ratios, so the effects can be estimated by altering the selection ratio in the utility model.

Rejection of job offers produces the same effect as reducing hiring standards. It increases SRs and lowers the gains from more valid selection. For example, if an SR of 0.20 would yield the needed number of new employees, given no rejected offers, and if half of all job offers are rejected, the SR must be increased to 0.40 to fill all the vacancies. Using our PAT example, if the validity of the previous procedure were zero, assuming 50 percent rejected offers would reduce the estimated productivity gains from $209.9 to $145.02 million, or by 31 percent. If the validity of the previous procedure were not zero, offer rejection still reduces the utility of both procedures. However, because the utility function is multiplicative, the utility of the more valid procedure would be reduced by more.
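The 31 percent figure follows from how the mean standardized score of those hired (λ/φ) falls as the selection ratio doubles. A short sketch, using the same normal-curve terms as Equation 10-1:

```python
from statistics import NormalDist

nd = NormalDist()

def mean_hired_z(sr):
    """Mean standardized predictor score of those hired under
    top-down selection at selection ratio sr (lambda / phi)."""
    return nd.pdf(nd.inv_cdf(1 - sr)) / sr

# Half of all offers rejected: the effective SR rises from 0.20 to 0.40.
z_020, z_040 = mean_hired_z(0.20), mean_hired_z(0.40)
reduction = 1 - z_040 / z_020
print(round(reduction, 2))  # ≈ 0.31
```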

The actual size of the loss in utility depends on two parameters:

  • The (potentially negative) correlation between the quality of the applicants and their probability of accepting a job.

  • The proportion of job offers rejected.[18]

Any adjustment for job-offer rejections should be applied to the old selection battery as well as to the new one.

How large are the potential losses? One study found that under realistic circumstances, unadjusted utility formulas could overestimate gains by 30 to 80 percent. To some extent, these utility losses caused by job-offer rejection can be offset by additional recruiting efforts that increase the size of the applicant pool and, therefore, restore smaller SRs. However, if the probability of accepting a job offer is negatively correlated with applicant quality (the better applicants are more likely to reject an offer), increasing the number of applicants may not be as effective as increasing the attractiveness of the organization to the better ones.

Process: It Matters How Staffing Processes Are Used

Similar to the effect of rejected offers is the situation where an organization decides to deviate from the practice of making job offers to the top-scoring candidates. To test this, researchers examined the impact on the productivity of U.S. park rangers of three approaches:

  1. Top-down selection

  2. Selecting those who meet a minimum required test score equal to the average

  3. Selecting those who meet a minimum required score set at one SD below the average[19]

Top-down selection produced a productivity increase of about 13 percent compared to random selection (which translated into millions of dollars). Under option 2, the value of output gains was only 45 percent as large as the dollar value for top-down selection. Under option 3, the value of output gains was only 16 percent of the top-down figure. Employers who deviate from top-down selection when performance variation is significant do so at substantial economic cost.
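The ranking of the three approaches follows from the average standardized score of those selected, which for a cut score c is the normal ordinate at c divided by the proportion above c. This sketch reproduces the ordering; the study's exact percentages (45 and 16 percent) reflect its particular parameters, so the ratios here differ:

```python
from statistics import NormalDist

STD_NORMAL = NormalDist()

def mean_z_of_selectees(cut):
    """Average standardized score of everyone at or above the cut score."""
    return STD_NORMAL.pdf(cut) / (1 - STD_NORMAL.cdf(cut))

top_down = mean_z_of_selectees(STD_NORMAL.inv_cdf(1 - 0.20))  # hire top 20%
cut_at_mean = mean_z_of_selectees(0.0)     # minimum score equal to the average
cut_minus_1sd = mean_z_of_selectees(-1.0)  # minimum score 1 SD below average

for label, m in [("top-down (SR = 0.20)", top_down),
                 ("cutoff at the mean", cut_at_mean),
                 ("cutoff at -1 SD", cut_minus_1sd)]:
    print(f"{label}: mean z = {m:.2f} ({m / top_down:.0%} of top-down)")
```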

Cumulative Effects of Adjustments

At this point, you are probably asking yourself how adjustments for all five of these factors—economic variables, multiple selection devices, deviations from top-down hiring, probationary periods, and employee flows—affect estimates of utility. One study used computer simulation of 10,000 scenarios, each of which comprised various values of the five factors just noted. Utility estimates were then computed with the five adjustments applied both independently and cumulatively.[20] Table 10-3 presents the median and mean effects.

Table 10-3. Utility-analysis adjustments: net and relative effects.

<source>Source: M.C. Sturman, “Implications of utility analysis adjustments for estimates of human resource intervention value.” Journal of Management, Vol. 26, No. 2, p. 290 (2000).</source>
 

Number of modifications being applied: 1-5 (a = net effect; b = effect over the previous modification)

Modification Being Applied         |   1  |  2a  |  2b  |  3a  |  3b  |  4a  |  4b  |  5a  |  5b
-----------------------------------|------|------|------|------|------|------|------|------|------
Economic variables        (median) | −64% |      |      |      |      |      |      |      |
                          (mean)   | −64% |      |      |      |      |      |      |      |
Multiple devices          (median) | −53% | −84% | −53% |      |      |      |      |      |
                          (mean)   | −59% | −84% | −53% |      |      |      |      |      |
Deviation from top-down   (median) | −23% | −73% | −23% | −89% | −25% |      |      |      |
hiring                    (mean)   | −38% | −76% | −34% | −92% | −37% |      |      |      |
Effect of a probationary  (median) | −22% | −72% | −21% | −87% | −17% | −91% |  −8% |      |
period                    (mean)   | −25% | −72% | −24% | −88% | −21% | −94% | −10% |      |
Effect of employee flows  (median) |  −1% | −65% |  −1% | −84% |  −1% | −90% |  −2% | −91% |  −0%
                          (mean)   |  −9% | −65% |  −7% | −87% |  −8% | −96% |  −1% | −98% |  −1%

The first row in each group shows the median effect of adding the specified utility-analysis modification beyond those already added; the second row shows the mean. Modifications were added in order of the size of the median effect over the previous modification. Column 1 shows the effect of implementing each modification by itself. Columns 2a, 3a, 4a, and 5a show the net effect of implementing the combination of modifications over an unmodified utility analysis. Columns 2b, 3b, 4b, and 5b show the effect of the modification over the previously applied set of modifications.

Column 1 of Table 10-3 compares all adjustments to the estimate of the basic utility formula (Equation 8-17 in Chapter 8). Accounting for economic variables had the largest effect, followed, in rank order, by multiple selection devices, departures from top-down hiring, the probationary period, and employee flows. The median effect size of the total set of adjustments was –91 percent, with a minimum total effect of –71 percent, and negative estimates 16 percent of the time. Although the majority of the utility estimates for the simulated scenarios remained positive, the five modifications had sizable and noteworthy practical effects.[21] These results suggest that although valid selection procedures may often lead to positive payoffs for the organization, actual payoffs depend significantly on organizational and situational factors that affect the quantity, quality, and cost of the selection effort.
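Because utility adjustments act roughly multiplicatively, the "over previous modification" values in Table 10-3 can be chained to approximate the cumulative net effects. This sketch uses the median incremental values; the chained result (about −88 percent) differs somewhat from the reported −91 percent median because a median of products is not the product of medians:

```python
def combined_net_effect(incremental_effects):
    """Chain 'over previous modification' effects multiplicatively.

    Each effect is a negative fraction (e.g., -0.64 for a 64 percent
    reduction); the result is the net effect of the whole set relative
    to the unmodified utility estimate.
    """
    remaining = 1.0
    for effect in incremental_effects:
        remaining *= 1 + effect
    return remaining - 1

# Median incremental effects in order of application (Table 10-3):
# economic variables, multiple devices, deviation from top-down hiring,
# probationary period, employee flows.
medians = [-0.64, -0.53, -0.25, -0.08, -0.00]
print(f"{combined_net_effect(medians):.0%}")   # -88%
```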

It is tempting to conclude that there is a set of “best practices” that invariably contribute to improvements in performance. For example, meta-analyses of multiple studies often show that validity is high or at least consistently positive. However, validity is only one consideration in determining the overall value of a selection system to an organization. The hallmark of a decision science is its ability to apply consistent frameworks to diverse situations, obtaining different results depending on vital factors. The results of this chapter show that the payoff from improved selection is potentially, but not necessarily, very large. Wise organizations will use the frameworks to examine their particular situation and make sound decisions based on their unique opportunities and constraints.

Dealing with Risk and Uncertainty in Utility Analysis

As you have seen through this chapter and the two previous ones, many factors might increase or decrease expected payoffs from utility analysis.[22] Taking such factors into account will make utility estimates more realistic, but in actually conducting utility analyses, researchers have tended to assume away or not even to consider many of these factors. Even when we take all of these factors into account, uncertainty still exists in our estimates of expected monetary payoffs. Researchers have developed three techniques to deal with such uncertainty: break-even analysis, Monte Carlo analysis, and confidence intervals.

Break-Even Analysis

We reviewed break-even analysis in Chapter 2, “Analytical Foundations for HR Measurement.” We noted two of its major advantages:

  • It shifts emphasis away from estimating a precise utility value toward making a good decision even with imperfect information.

  • It pinpoints areas where controversy is important to decision making (that is, where there is doubt about whether the break-even value is exceeded), versus where controversy has little impact (because there is little risk of observing utility values below break-even).

One comprehensive review of the utility-analysis literature reported break-even values for 42 studies that estimated the parameter SDy.[23] Without exception, the break-even values fell at or below 60 percent of the estimated value of SDy. In many cases, the break-even value was less than 1 percent of the estimated value of SDy. Before you conclude that the HR programs in all of these 42 studies were justified, however, we hasten to add two important qualifications. One, even though the break-even value might be low when deciding whether to implement a particular HR program, comparing that program to other organizational investments might produce decision situations where differences in SDy estimates do affect the ultimate decision.[24] Two, decision makers may also consider parameters other than or in addition to SDy in making capital-budgeting decisions.[25] Nonetheless, break-even analysis can be used under all these circumstances, and is often a powerful tool for clarifying the analysis and producing not only better decisions, but better logical analysis, too.

In summary, break-even analysis of the SDy parameter (or any other single parameter in the utility model) seems to provide two additional advantages:

  • It allows practicing managers to appreciate how little variability in job performance is necessary before valid selection procedures begin to pay positive dividends.

  • Even if decision makers cannot agree on an exact point estimate of SDy, they can probably agree that it is higher than the break-even value.
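The break-even value itself comes from setting the basic utility equation to zero and solving for SDy. A minimal sketch, with hypothetical program figures rather than values from any study above:

```python
def break_even_sdy(total_cost, n_hired, tenure, delta_r, mean_z_selected):
    """Smallest SDy at which the program's payoff just covers its cost,
    from setting  n * t * delta_r * SDy * z_bar - total_cost = 0."""
    return total_cost / (n_hired * tenure * delta_r * mean_z_selected)

# Hypothetical program: $50,000 in selection costs, 50 hires with 5-year
# average tenure, a validity gain of 0.30, and a mean selectee z of 1.40.
sdy_star = break_even_sdy(50_000, 50, 5, 0.30, 1.40)
print(f"break-even SDy: ${sdy_star:,.0f} per person-year")   # $476
```

Any SDy estimate above roughly $476 would make this hypothetical program pay off, which illustrates why observed break-even values are so often far below estimated SDy.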

Monte Carlo Analysis

A second approach to dealing with risk and uncertainty is computer-based (Monte Carlo) simulation to assess the extent of variability of utility values, and thus to provide a sound basis for decision making.[26] This technique is often used in operations management for decisions about processes such as manufacturing and supply chain, or in consumer research on issues such as the likely response to new marketing initiatives. In essence, Monte Carlo analysis estimates a distribution of values for one or more elements of a calculation. For example, we might want to explore SDy values ranging from $1,000 per person to $10,000 per person, and we might assume that they follow a normal distribution. We might assume that the number of applicants and the number hired in a given year will vary within some range of values, and that the probability of seeing a certain value might be normally distributed.

To implement a Monte Carlo analysis, we would draw a value for each variable from its assumed distribution, input those values into the utility equation, and calculate the resulting utility value. Doing this repeatedly for many combinations of parameter values produces an array of utility outcomes. Computer technology permits researchers to run tens of thousands of such scenarios. By examining the pattern of resulting utility values, it is possible to estimate the average, the range, and the likelihood that various utility values will occur.
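The procedure just described can be sketched with the Python standard library alone. Every distribution and parameter range here is an illustrative assumption, not a value from any study cited in this chapter:

```python
import random
from statistics import NormalDist, mean

STD_NORMAL = NormalDist()

def one_scenario(rng):
    """Draw one set of parameter values and return the utility estimate."""
    sdy = rng.gauss(5_500, 1_500)        # SDy assumed ~ N($5,500, $1,500)
    n_hired = rng.randint(40, 60)        # hires in a given year
    sr = rng.uniform(0.10, 0.50)         # selection ratio
    delta_r = rng.uniform(0.10, 0.40)    # validity gain over the old procedure
    tenure = 5                           # years of service per hire
    z_bar = STD_NORMAL.pdf(STD_NORMAL.inv_cdf(1 - sr)) / sr  # mean z of selectees
    cost = 35 * (n_hired / sr)           # $35 per applicant tested
    return n_hired * tenure * delta_r * sdy * z_bar - cost

rng = random.Random(42)                  # fixed seed for reproducibility
utilities = sorted(one_scenario(rng) for _ in range(10_000))

print(f"mean payoff:     {mean(utilities):>12,.0f}")
print(f"5th percentile:  {utilities[500]:>12,.0f}")
print(f"95th percentile: {utilities[9500]:>12,.0f}")
```

Sorting the 10,000 outcomes makes it easy to read off percentiles and so to judge the risk of very low (or very high) utility values.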

By modeling and analyzing uncertainty within the Monte Carlo analysis, we can better predict the likely outcomes and the risks of observing very low or very high utility values. To illustrate, Table 10-3 summarized a Monte Carlo study that varied all of the elements of the utility model, including employee flows and economic factors, by analyzing 10,000 scenarios that combined different values of those elements.

Confidence Intervals

A third approach is to compute a standard error of the utility estimate, and then to derive a 95 percent confidence interval around that estimate.[27] Because 2.5 percent of normally distributed observations fall below a value that is 1.96 standard deviations below the average, and 2.5 percent of observations fall above a value that is 1.96 standard deviations above the average, we can calculate a 95 percent confidence interval as shown here.

Equation 10-8. 

95 percent confidence interval = ΔU ± 1.96 × SEu

Although there are problems with the method used to compute the standard error of the utility estimate, especially the assumption that all components in the equation are independent and normally distributed, research suggests that it provides a serviceable approximation.[28] To illustrate this method, researchers applied it to the estimated utility of the PAT in predicting the performance of computer programmers in the federal government.[29] They found that the values of SEu were very large, about half the size of the utility estimate itself. This is larger than most researchers might have expected. It means that the experts who estimated SDy had less agreement than might have been predicted. As one observer commented, “Ironically, the impressively large size of utility estimates per se have (sic) been almost overemphasized...while the standard error of utility has been largely ignored. If we are to be impressed by the size of utility, we must similarly be impressed by the size of the uncertainty in these estimates.”[30] To date, we have tended to view utility values as point estimates, rather than as predictions under uncertainty. Given the uncertainty of many of the parameters of the utility model, confidence intervals are probably more appropriate and they should be reported routinely.
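The interval of Equation 10-8 is simple to compute once SEu is in hand. The figures below are hypothetical, chosen so that the standard error is about half the utility estimate, as in the PAT illustration:

```python
def utility_confidence_interval(delta_u, se_u, z=1.96):
    """95 percent confidence interval around a utility estimate."""
    return delta_u - z * se_u, delta_u + z * se_u

# Hypothetical: a $2.1 million utility estimate with a standard error
# of $1.05 million (half the estimate's size).
low, high = utility_confidence_interval(2_100_000, 1_050_000)
print(f"${low:,.0f} to ${high:,.0f}")   # $42,000 to $4,158,000
```

Even with a handsome point estimate, the lower bound barely clears zero, which is exactly why the text argues for reporting intervals rather than point estimates.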

Process: Communicating the Impact of Utility Analyses to Decision Makers

Two provocative studies showed that it makes a big difference how utility results are presented. Presenting utility analysis results in certain ways can actually reduce the support of managers for a valid selection procedure, even though the net benefits of the procedure are very large.[31] In one experiment, managers were presented with an unadjusted estimated payoff from a selection program of more than $97 million (in 2006 dollars), representing a return on investment of 14,000 percent. Results this large strain credulity, and thus it is no surprise that the managers did not accept them. Moreover, a fundamental principle of financial economics is that high returns carry high risks. Thus, presenting business leaders with such extraordinary estimated returns understandably would cause them to assume the investment must be highly speculative.[32] We should point out that there has been some controversy about the finding that leaders will reject high-utility results. Two subsequent studies failed to replicate these findings,[33] and their conclusions and implications have been challenged.[34]

Still, based on the hypothesis that such extraordinary utility results were not believable, another study took a different tack. It adjusted the utility value used in the original studies for the same five factors we discussed earlier: economic variables, multiple selection devices, deviations from top-down hiring, a probationary period, and employee flows.[35] Using a computer-based, Monte Carlo simulation that generated 10,000 independent scenarios, the adjusted utility estimates yielded an average payoff of $2,738,989 (in 2006 dollars), more than a 96 percent reduction from the unadjusted values. The median return was $2,137,504. The smallest outcome was an estimated loss of $3,168,083, and the largest predicted gain was $21,097,036 (all figures in 2006 dollars). This was still more than 71 percent smaller than the initial estimate presented to the non-HR managers.

It would be a mistake to assume that every study should adjust utility estimates for these same five factors, or, equally faulty, to assume that such factors would have only minor effects on estimated payoffs. Instead, consider the decision context, and whether each factor is relevant. Remember, the ultimate goal of utility analysis is to influence decisions. In our view, the studies noted previously have stimulated the field to ponder its communication strategies.

We actually know very little about how decision contexts or organizational characteristics affect the reactions of managers to the results of utility analyses. If we begin to develop and share information on how we can improve the way we communicate the results of our analyses, and if we learn where our audiences perceive weaknesses in our presentations, and if we begin the painstaking process of soliciting feedback from our audiences, taking constructive steps to respond to it, soliciting more feedback, and repeating the process over and over again, we will make progress toward having the kind of impact on decision makers and organizations that is the hallmark of more mature professions such as finance.[36]

Two hurdles to more widespread use of utility analysis models are their complexity and the lack of knowledge about utility analysis among key decision makers.[37] Computer-based algorithms can accomplish the complex calculations and adjustments needed to estimate utility models, but a more fundamental concern is that key decision makers (usually outside the HR profession) may not even know that utility models exist, or appreciate the value of the logical frameworks those models express. If decision makers lack such knowledge, it is difficult for them to understand the logic behind utility analysis calculations, the analytics (the elements of the utility formula and adjustments to it), or the measures used to populate the formula (such as interviews to generate estimates of SDy). When leaders in the HR profession lack such knowledge, they cannot be expected to populate the utility formula with the required measures, or to use the results of utility analyses to affect actual decisions about vital investments in talent programs.

Beyond those concerns is a genuine need for utility analysts to shift their focus. The fundamental question is not, “How do we construct the best HR measure?” Instead, it is, “How do we induce changes through HR measurement systems?”[38] HR measurement is not an end in and of itself, but rather a decision-support system that can have powerful effects if users pay careful attention to the sender, the receivers, the strategy they use to transmit their message, and the organization of their message. In short, the field should change its focus from an emphasis on measures to an emphasis on effects. Why? Because research is not complete until it is communicated effectively.

Evidence indicates that managers are quite receptive to utility analysis when analysts present conservative estimates, illustrate the tradeoffs involved, do not overload the presentation with technical details, and emphasize the same things that managers of operating departments pay attention to (reducing the overall cycle time of the staffing process, reducing costs while maintaining the validity of the overall staffing process).[39] Clearly, the “framing” of the message is critical and has a direct effect on its ultimate acceptability.[40] That is strategic HR management in action, and it illustrates how utility analysis can serve as a useful guide to decision making. In this sense, utility analysis is more than an equation or series of equations; it is a stimulating way of thinking.

Exercises

Software that calculates answers to one or more of the following exercises can be found at www.shrm.org/publications/books.

1.

You are given the following information regarding the CAP test for clerical employees (clerk-2s) at the Berol Corporation:

  • Average tenure as a clerk-2: 7.26 years

  • Number selected per year: 120

  • Validity of the CAP test: 0.61

  • Validity of previously used test: 0.18

  • Cost per applicant of CAP: $35

  • Cost per applicant of old test: $18

  • SR: 0.50

  • Ordinate at SR: 0.399

  • SDy in first year: $34,000

Use Equation 10-1 to determine (a) the total utility of the CAP test, (b) the utility per selectee, and (c) the per-year gain in utility per selectee.

2.

Referring to Exercise 1, suppose that after consulting with the chief financial officer at Berol, you are given the following additional information: variable costs are –0.08, taxes are 40 percent, and the discount rate is 8 percent. Use Equation 10-2 in this chapter to recompute the total utility of the CAP test, the utility per selectee, and the utility per selectee in the first year.

3.

The Top Dollar Co. is trying to decide whether to use an assessment center to select middle managers for its consumer products operations. The following information has been determined: variable costs are –0.10; corporate taxes are 44 percent; the discount rate is 9 percent; the ordinary selection procedure costs $700 per candidate; the assessment center costs $2,800 per candidate; the standard deviation of job performance is $55,000; the validity of the ordinary procedure is 0.30; the validity of the assessment center is 0.40; the selection ratio is 0.20; the ordinate at that selection ratio is 0.2789; and the average tenure as a middle manager is 3 years. The program is designed to last 6 years, with 20 managers added each year. Beginning in Year 4, however, one cohort separates each year until all hires from the program leave.

Use Equation 10-6 in this chapter to determine whether Top Dollar Co. should adopt the assessment center to select middle managers. What payoffs can be expected in total, per selectee, and per selectee in the first year?

References

1.

Schmidt, F. L., Hunter, J. E., McKenzie, R. C., & Muldrow, T. W. (1979). Impact of valid selection procedures on work-force productivity. Journal of Applied Psychology, 64, 609-626.

2.

Ibid.

3.

These figures assume inflation rates of 6 percent from 1979 to 1990, and 3.5 percent from 1991 to 2006.

4.

Boudreau, J. W. (1983a). Economic considerations in estimating the utility of human resource productivity improvement programs. Personnel Psychology, 36, 551-576.

5.

Boudreau, J. W. (1983b). Effects of employee flows on utility analysis of human resource productivity improvement programs. Journal of Applied Psychology, 68, 396-406.

6.

Schmidt et al., 1979, op. cit.

7.

Ibid.

8.

Boudreau, J. W., & Berger, C. J. (1985). Decision-theoretic utility analysis applied to employee separations and acquisitions. Journal of Applied Psychology Monograph, 70(3), 581-612.

9.

Schmidt et al., 1979, op. cit.

10.

Boudreau, 1983b, op. cit.

11.

Ibid.

12.

Ibid.

13.

Downloaded on October 3, 2007 from www.wholefoodsmarket.com/careers/hiringprocess2.html.

14.

De Corte, W. (1994). Utility analysis for the one-cohort selection-retention decision with a probationary period. Journal of Applied Psychology, 79, 402-411.

15.

Ibid.

16.

Cascio, W. F., & Aguinis, H. (2005). Applied Psychology in Human Resource Management (6th ed.). Upper Saddle River, NJ: Prentice-Hall.

17.

Becker, B. E. (1989). The influence of labor markets on human resources utility estimates. Personnel Psychology, 42, 531-546.

18.

Murphy, K. R. (1986). When your top choice turns you down: Effects of rejected offers on the utility of selection tests. Psychological Bulletin, 99, 133-138.

19.

Schmidt, F. L., Mack, M. J., & Hunter, J. E. (1984). Selection utility in the occupation of U.S. park ranger for three modes of test use. Journal of Applied Psychology, 69, 490-497.

20.

Sturman, M. C. (2000). Implications of utility analysis adjustments for estimates of human resource intervention value. Journal of Management, 26, 281-299.

21.

Ibid.

22.

Cascio, W. F. (1993). Assessing the utility of selection decisions: Theoretical and practical considerations. In N. Schmitt & W. C. Borman (Eds.), Personnel selection in organizations (pp. 310-340). San Francisco, CA: Jossey-Bass.

23.

Boudreau, J. W. (1991). Utility analysis for decisions in human resource management. In M. D. Dunnette & L. M. Hough (Eds.), Handbook of Industrial and Organizational Psychology (Vol. 2, 2nd ed., pp. 621-745).

24.

Weekley, J. A., O’Connor, E. J., Frank, B., & Peters, L. W. (1985). A comparison of three methods of estimating the standard deviation of performance in dollars. Journal of Applied Psychology, 70, 122-126.

25.

Hoffman, C. C., & Thornton, G. C. III. (1997). Examining selection utility where competing predictors differ in adverse impact. Personnel Psychology, 50, 455-470.

26.

Sturman, 2000, op. cit. See also Rich, J. R., & Boudreau, J. W. (1987). The effects of variability and risk on selection utility analysis: An empirical simulation and comparison. Personnel Psychology, 40, 55-84.

27.

Alexander, R. A., & Barrick M. R. (1987). Estimating the standard error of projected dollar gains in utility analysis. Journal of Applied Psychology, 72, 475-479.

28.

Myors, B. (1998). Utility analysis based on tenure. Journal of Human Resource Costing and Accounting, 3(2), 41-50.

29.

Alexander & Barrick, 1987, op. cit.

30.

Myors, 1998, op. cit., pp. 47-48.

31.

Latham, G. P., & Whyte, G. (1994). The futility of utility analysis. Personnel Psychology, 47, 31-46. Whyte, G., & Latham, G. P. (1997). The futility of utility analysis revisited: When even an expert fails. Personnel Psychology, 50, 601-611.

32.

Boudreau, J. W., & Ramstad, P. M. (2007). Beyond HR: The New Science of Human Capital. Boston, MA: Harvard Business School Publishing.

33.

Carson, K. P., Becker, J. S., & Henderson, J. A. (1998). Is utility really futile? A failure to replicate and an extension. Journal of Applied Psychology, 83, 84-96.

34.

Cronshaw, S. F. (1997). Lo! The stimulus speaks: The insider’s view of Whyte and Latham’s “The futility of utility analysis.” Personnel Psychology, 50, 611-615. Hoffman & Thornton, 1997, op. cit.

35.

Sturman, 2000, op. cit.

36.

Cascio, W. F. (1996). The role of utility analysis in the strategic management of organizations. Journal of Human Resource Costing and Accounting, 1(2), 85-95.

37.

Sturman, 2000, op. cit.

38.

Boudreau, J. W. (1996). The motivational impact of utility analysis and HR measurement. Journal of Human Resource Costing and Accounting, 1(2), 73-84. Boudreau, J. W. (1998). Strategic human resource management measures: Key linkages and the Peoplescape model. Journal of Human Resource Costing and Accounting, 3(2), 21-40.

39.

Hoffman, C. C. (1996). Applying utility analysis to guide decisions on selection system content. Journal of Human Resource Costing and Accounting, 1(2), 9-17.

40.

Carson, Becker, & Henderson, 1998, op. cit. Hazer, J. T., & Highhouse, S. (1997). Factors influencing managers’ reactions to utility analysis: Effects of SDy method, information frame, and focal intervention. Journal of Applied Psychology, 82, 104-112.
