Chapter 11. Costs and Benefits of HR Development Programs

Organizations in Europe, the United States, and Asia spend billions each year on employee training. These outlays reflect the aggregate cost of keeping abreast of technological and social changes, the extent of managerial commitment to achieving a competent, productive work force, and the broad array of opportunities available for individuals and teams to improve their technical skills and their social skills. Indeed, the large amount of money spent on training in both public and private organizations is likely to increase in the coming years as organizations strive to meet challenges such as the following:[1]

  • Hypercompetition. Such competition, both domestic and international, is driven largely by trade agreements and technology (most notably, the Internet). As a result, senior executives will be required to lead an almost constant reinvention of business strategies/models and organizational structures.

  • A power shift to the customer. Customers who use the Internet have easy access to databases that allow them to compare prices and examine product reviews; hence, there are ongoing needs to meet the product and service needs of customers.

  • Collaboration across organizational and geographic boundaries. In some cases, suppliers are co-located with manufacturers and share access to inventory levels. Strategic international alliances often lead to the use of multinational teams, which must address cultural and language issues.

  • The need to maintain high levels of talent. Because products and services can be copied, the ability of a work force to innovate, to refine processes, to solve problems, and to form relationships becomes a sustainable advantage. Attracting, retaining, and developing people with critical competencies is vital for success.

  • Changes in the work force. Unskilled and undereducated youth will be needed for entry-level jobs, and currently underutilized groups of racial and ethnic minorities, women, and older workers will need training.

  • Changes in technology. Increasingly sophisticated technological systems impose training and retraining requirements on the existing work force.

  • Teams. As more firms move to employee involvement and teams in the workplace, team members need to learn such behaviors as asking for ideas, offering help without being asked, listening and providing feedback, and recognizing and considering the ideas of others.

Indeed, as the demands of the information age spread, companies are coming to regard training expenditures as no less a part of their capital costs than plant and equipment.

The term human resource development (HRD) represents the wide range of behavioral science and management technologies that improve both the operating effectiveness of an organization and the quality of working life experienced by its employees.[2] The analytical tools that we present here apply to programs as diverse as providing learning through job experiences, mentoring, formal training, electronic instruction, and off-site classes or degrees. We focus our examples on training programs because that is where most of the research and discussion have occurred. In the area of training, topics range from basic skills to job enrichment and interpersonal skills, team building, and decision making for individuals or teams. Technologies used run the full gamut from lectures to CD-ROMs, to Internet-based training, intranet-based training, interactive video, and intelligent tutoring systems.[3]

Unfortunately, although billions may be spent providing training and development programs, little is spent assessing the social and financial outcomes of these activities. Consider leadership-development programs as an example. One thorough review estimated that only 10 percent of them evaluated the impact of the programs on the actual behaviors of managers. Most consider only the satisfaction of participants as an indicator of the programs’ effectiveness.[4] The overall result is that little comparative evidence exists by which to generalize or to evaluate the impact of the various technologies. Decision makers thus remain unguided by systematic evaluations of past experiments and uninformed about the costs and benefits of alternative HRD programs when considering training efforts in their own organizations. “Billions for training, but not one cent for evaluation” is an exaggerated, but not altogether untrue, characterization of present training practice in many organizations. Although the tools we describe here are certainly valuable for increasing the amount and effectiveness of development-program evaluation, the issue runs much deeper. Analytical decision tools are not just useful for evaluating programs after they are complete. The lack of evaluation in HR development is a symptom of a more fundamental issue, a lack of systematic logic to plan and refine such programs.

Our intent in this chapter is not to present true experimental or quasi-experimental designs for evaluating HRD programs.[5] Instead, it is to illustrate how the economic consequences of HRD programs can be expressed. Let us begin, as we have in other chapters, by presenting the logic of talent development, as shown in Figure 11-1.


Figure 11-1. Logic of talent-development effects.

As the diagram shows, effectiveness of development is much more than sound design and effective implementation of HRD programs or experiences. These are necessary, but not sufficient by themselves to ensure that what is learned in training is actually applied on the job.[6] For that to occur, other conditions must be satisfied. First, candidates for development must be prepared and motivated both to learn and to apply their learning at work. This requires that the organization invest both in preparing development candidates and in carefully selecting candidates for development experiences, such as jobs or training programs. Second, after the development experience, there must be an environment that provides the opportunity and motivation for the newly developed individuals to apply or transfer their learning to their work. This second condition requires that supervisors and higher-level managers support employees’ attempts to use on the job what they have learned in training or development. For example, if employees learn all about democratic leadership styles in training, but then report back to autocratic leaders on the job, the training is not likely to have long-term effects. In addition, it is important to offer rewards and incentives to employees when they apply what they learned in training to improve their day-to-day job performance. This means that improved performance will often carry with it increased costs of pay, incentives, or supervisory preparation.

The conditions shown in Figure 11-1 create “line of sight” for development candidates, between their development, their on-the-job behaviors, improved unit performance, and the overall strategic success of the organization. Here is an illustrative example. In response to a shortage of trained service technicians, Caterpillar, Inc. partnered with a network of vocational schools in six countries to develop a Caterpillar-approved curriculum. This ties the training directly to important business processes that Caterpillar must execute well to achieve its business strategy. Students enter the vocational schools with dealerships already committed to hiring them upon graduation. In fact, the trainees will spend up to half of their time in apprenticeships at Caterpillar dealers, learning on the job.[7] Dealer (that is, management) support, coupled with rewards for completing the training program (guaranteed jobs), provide the kind of “line of sight” that links strategy, execution, and motivation to do well in training.

At the bottom of Figure 11-1, we connect employee development to several other topics covered in this book. Although the vast majority of attention to valuing employee development has focused on its immediate effects or its effects on job performance, it should also be noted that when employees have more tools and opportunities to perform well, they are often more motivated and engaged with their work. This can lead to reduced turnover and absence. In addition, opportunities for development are increasingly an important part of the “total rewards” proposition that employers offer to the labor market.[8] For example, Procter & Gamble is globally known for its effective career and training programs to develop great marketers. GE is well known for the effectiveness of its career and management systems in developing future leaders. Not only do these programs improve the performance of those who directly participate, they also are powerful attractors to external candidates. Thus, enhanced development can also lead to more and better applicants for employment, which, as you saw in Chapters 8, 9, and 10, is one element of enhanced work force value through staffing.

The remainder of the chapter presents three very different illustrations that show how to assess the economic impact of development and training programs: (1) an illustration of a strong research design, used to compare structured versus unstructured training; (2) a framework that extends the utility-analysis logic we applied to staffing in Chapters 8, 9, and 10, to the evaluation of HRD programs; and (3) an illustration of cost analysis, comparing offsite versus web-based meeting costs.

The Value of Structured Versus Unstructured Training in Basic Skills

Training may be structured or unstructured. The term structure implies a systematically developed educational program using a logical progression from an assumed starting level of competency to a specified mastery level. Unstructured training, on the other hand, implies a less specific program, often provided by an experienced worker who trains while continuing to perform his or her regular duties. Avoiding disruptions in ongoing production is often emphasized in unstructured training, while structured training tends to focus more exclusively on the development experience of the trainee. In unstructured training, learning opportunities emerge out of the situation rather than through a structured plan, and mastery is often not defined as precisely.

Structured training is widely assumed to be more effective than unstructured training, although little controlled research has been done. To some extent this is understandable, because the variables in ongoing operations are so numerous and complex that attributing effects to differences in training structure is risky, and controlling them in a simulation is, in itself, very difficult. Nevertheless, the failure to conduct controlled studies is perhaps the most serious shortcoming of training programs, many of which are, in fact, extremely well done, because it leaves no empirical linkage between the results of training and improved productivity. In the absence of such a connection, training activities often are the first to go when profits tumble. This need not always be the case, as a 14-month study in a university industrial-manufacturing laboratory, described in the next section, indicates.[9]

A Field Experiment Comparing Structured and Unstructured Training

The purpose of the study was to conduct an experimental comparison of structured versus unstructured training of semiskilled production workers. The job studied was that of an “extruder operator,” and it involved operating a plastic-extrusion machine to transform raw materials into quality plastic pipe.

There were six major steps in the development of the extruder-operator structured-training program:

  1. Job and task analysis

  2. General training-design decisions

  3. Specific training-design decisions

  4. Production of the training program

  5. Pilot test

  6. Training-program revision

Subjects in the experiment responded to recruitment methods either by telephoning about the position or by applying in person. As part of the selection procedure, subjects were asked to complete an application form and the Bennett Mechanical Comprehension Test. Forty subjects were selected, matched on characteristics such as age, education, community background, and test scores (to help ensure pre-experimental equivalence), and then assigned randomly to the structured and unstructured groups (20 in each group). Measures of product quantity and quality, worker competence, cost-benefit, and worker attitudes were as follows.

Production Quantity and Quality

Quantity was measured by count and weight. The quality of pipe production was based on visual and dimensional criteria. To aid visual judgments, samples of defective pipe were used as standards. Pipe roundness and concentricity were measured with a specially developed mechanical test device.

Competency Outcomes

Worker competence was defined as the ability to start up production, to develop quality pipe, and to recover from two production problems (remotely manipulated machine variables) without a loss of production rate. A concealed, closed-circuit television system was set up to monitor the extruder operator’s work area and was broadcast to the project office some 100 feet away. In addition, all production rates and observation logs were time-referenced systematically.

Costs

Actual expenditures were used as inputs to a cost-benefit model for the two industrial training methods. The hourly rate of the research assistant performing as the industrial trainer was used, as were all the project costs, even including the cost of the paper on which the job analysis was written.

Worker Reactions and Attitudes

A worker-attitude inventory was developed to assess the attitudes of trainees toward their training and jobs. Questions included attitudes toward the job, the training, the trainer, and the equipment.

Collecting and Recording the Data

Methods for collecting and recording data depended on the type of data needed. The times a trainee reported for work and ended work, the hours a subject was a trainee, and the hours a subject was a worker-trainer were recorded in a log. Data were collected on production rates, production weight, and material waste (scrap). Production rate was recorded as the number of acceptable pieces of pipe extruded per hour of work. At the end of each hour, the researcher collected and counted the production and recorded the production count and weight in the log. At the same time, the researcher collected, weighed, and recorded the plastic determined as scrap. Scrap was defined as plastic extruded not as pipe and pipe not meeting the dimensional and visual standards. The researchers also compared the production weight and scrap weight between each training group. At the end of the employment period, the trainees recorded their attitudes on a questionnaire.

Study Results

  • Time to achieve competence. Mean training time for individuals in the unstructured group was 16.3 hours, compared to an average 4.6 hours for those in the structured group. This difference is statistically significant (p < 0.005) and indicates a 72 percent savings in training time using the structured method.

  • Level of job competence. Subjects in the structured training group achieved significantly higher (p < 0.01) job competence after four hours of training. Though statistically significant differences were not found at 8- and 11-hour intervals, there were still substantial differences in training times.

  • Costs of training. The $228 (in 2006 dollars) average cost to train a group of 20 extruder operators by the structured method was not significantly different (p > 0.05) from the $232 average to train an identical-size group by the unstructured method. Such a conclusion is probably an artifact of group size in this study. A firm normally would use an industrial-training program to train many more than 20 workers. As the number of trainees increases under a structured-training program, the average cost per trainee decreases because the cost to develop the structured program is fixed and is spread over more trainees. An unstructured approach has less development cost, but more of the training effort must be constructed specifically for each trainee, so its total costs grow faster as the number of trainees increases. With large numbers of trainees, therefore, the average cost per trainee of structured training can be less, even if the up-front fixed costs are much higher. One can calculate the point at which the average cost of training under each method is equal. Figure 11-2 illustrates this break-even concept. For the extruder-operator experiment, break-even was about 18 people.


    Source: Cullen, J.G., Sawzin, S.A., Sisson, G.R., and Swanson, R.A. (1976). “Training: What’s It Worth?” Training and Development Journal, 30(8), August 1976, p. 17.

    Figure 11-2. Cost comparisons of structured and unstructured training for 1 to 20 semi-skilled workers.

  • Production losses. As might be expected, training under both the structured and unstructured methods resulted in reductions of waste and production loss compared to standard minimum-production rates. The average 2.91 pounds of production loss under structured training was significantly less (p < 0.01) than the average 9.35 pounds of production loss under the unstructured training. This represents approximately a 70 percent difference in production losses between unstructured and structured training. The value of the lost production can be used to project one element of the monetary returns for such training programs.

  • Resolution of production problems. The success rate of solving production problems was 80 percent among the structured program trainees—significantly higher (p < 0.025) than the 33 percent rate among unstructured program trainees. This represents a 130 percent increase in solved production problems when structured training is used. Situations characterized by expensive production downtime or difficult startup procedures would make these reported differences particularly pivotal. To convert them to monetary values, you would calculate the time that production is halted while problems remain unresolved, and the projected production levels that would have been attained if production had been operational, multiplied by the value of that production. In this case, 67 percent of problems not solved by trainees in the unstructured program would be solved by trainees in a structured program, so the value of production achieved when those problems are solved is the monetary value of the difference.

  • Job attitudes. Trainees in the structured and unstructured groups displayed no significant difference (p > 0.8) in their attitudes toward the pipe-extrusion job. Thus, there appears to be little danger of alienating or creating employee resentment using one program over the other.
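The break-even logic described above can be sketched in a few lines of Python. The cost figures here are hypothetical (the study reports only per-trainee averages for groups of 20), chosen so that the break-even point falls near the 18 trainees reported for the extruder-operator experiment:

```python
def avg_cost_per_trainee(fixed, variable, n):
    """Average cost per trainee: one-time development (fixed) cost
    spread over n trainees, plus a per-trainee variable cost."""
    return fixed / n + variable

def break_even(fixed_a, var_a, fixed_b, var_b):
    """Number of trainees at which two methods cost the same per trainee.
    Solves fixed_a/n + var_a = fixed_b/n + var_b for n."""
    return (fixed_a - fixed_b) / (var_b - var_a)

# Hypothetical figures: structured training has a higher fixed (development)
# cost but a lower per-trainee variable cost than unstructured training.
s_fixed, s_var = 2700.0, 93.0   # structured
u_fixed, u_var = 270.0, 228.0   # unstructured

n_star = break_even(s_fixed, s_var, u_fixed, u_var)
print(f"Break-even at about {n_star:.0f} trainees")  # about 18 trainees

for n in (5, 18, 40):
    s = avg_cost_per_trainee(s_fixed, s_var, n)
    u = avg_cost_per_trainee(u_fixed, u_var, n)
    print(f"n = {n:2d}: structured ${s:7.2f}, unstructured ${u:7.2f} per trainee")
```

Below the break-even point, the unstructured method is cheaper per person; above it, structured training wins, and the gap widens as more trainees are added.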

The Logic of the Training Cost-Benefit Model

Although training costs may appear substantial, and are often much more vivid than less tangible benefits, we have seen that it is possible to derive tangible estimates of both costs and benefits. In this section, we provide a brief summary of the logic and components of typical cost-benefit analyses of training.

Training Costs

For the cost-benefit model, the costs for training are classified either as fixed or variable. As noted in Chapter 2, “Analytical Foundations of HR Measurement,” fixed costs do not vary with numbers of trainees, training time, or training-program development. Variable costs change as the number of trainees, training time, and training-program development vary. For example, if a firm uses regular production equipment (which is a fixed cost for production) for training, the losses in production are considered a variable cost because they rise with the extent of the training.

Training costs include the following elements:

  1. Development of the training program:

    1. Analysis time: Total staff hours to analyze the job

    2. Design time: Total staff hours to design the training program

    3. Material costs: All material costs incurred from onset through completion of one training program, including supplies to facilitate training-program development (for example, graphics, travel, duplication, display boards, training aids)

  2. Training materials (expendable): raw materials, cost of hard copies of training program

  3. Training materials (not expendable):

    1. Instructional hardware: Durable items purchased for the training program (such as production machine to be used only for training, video camera, LCD player)

    2. Instructional software: Durable items of instructional content purchased for the training program (such as manufacturer’s operating manual, videotapes, CD-ROMS, DVDs)

  4. Training time:

    1. Trainee time: Total hours, and resulting salary, incurred for trainee to reach competency

    2. Trainer time: Total hours, and resulting salary, incurred by the trainer in bringing the trainee to competency

  5. Production losses resulting from training:

    1. Production rate losses

    2. Material losses
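As a sketch of how the cost elements above might be tallied, the following Python fragment separates the one-time (fixed) development costs from the per-trainee variable costs. Every figure is invented purely for illustration:

```python
# Hypothetical tally following the cost-element list above.
fixed_costs = {                      # incurred once, regardless of headcount
    "analysis_time": 1200.0,         # 1. staff hours to analyze the job
    "design_time": 2400.0,           #    staff hours to design the program
    "development_materials": 350.0,  #    graphics, travel, duplication, aids
    "instructional_hardware": 3000.0,# 3. dedicated training machine, camera
    "instructional_software": 450.0, #    manuals, videos, discs
}
variable_costs_per_trainee = {       # grow with each additional trainee
    "expendable_materials": 25.0,    # 2. raw materials, hard copies
    "trainee_time": 120.0,           # 4. trainee salary to competency
    "trainer_time": 90.0,            #    trainer salary per trainee
    "production_losses": 60.0,       # 5. rate and material losses
}

def total_training_cost(n_trainees):
    """Total cost of training n_trainees under this cost structure."""
    return (sum(fixed_costs.values())
            + n_trainees * sum(variable_costs_per_trainee.values()))

print(f"Cost to train 20 workers: ${total_training_cost(20):,.2f}")
```

Keeping the two categories separate is what makes later analyses, such as the break-even comparison in Figure 11-2, possible.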

Trainee Quality and Performance

The return from a training program is a competent and high-performing worker. To evaluate this, one must specify the competencies required in each situation. For the plastic extruder-operator training program, production-task performance included the following components:

  • Trainee can perform job startup successfully.

  • Trainee can maintain set standard of plastic tubing.

  • Trainee can perform in production-malfunction performance tests successfully.

  • Trainee can perform job shutdown successfully.

Analyses of Training Quality and Performance

Measurements of actual task performance, for purposes of assessing training returns, included the following:

  1. Measurement of task performance:

    • Time: To reach competency, to deal successfully with deliberately induced machine malfunctions, to follow startup procedures

    • Production rate: Number of 3-foot lengths of acceptable pipe per hour of production

    • Performance test: Reaction to induction of machine malfunctions via performance test (downtime, loss of tubing, time to respond to malfunction, time to correct malfunction)

    • Product quality: Measured by visual and dimensional criteria (comparison to samples of defective and nondefective pipe)

    • Raw material usage: Weight of raw material supplied to the machine versus weight of scrap and amount of acceptable product produced per hour

  2. Monetary value of returns

    1. Trainee performance data converted to a monetary value

    2. Total returns of training programs

Evaluation

Evaluation is the final phase of assessing costs and benefits. Quantify each variable under training costs and training returns. Calculate monetary equivalents whenever possible for those variables that are expressed in nonmonetary indexes (for example, time). Chapter 6, “Employee Attitudes and Engagement,” showed that the financial impact of employee attitudes also can be estimated. Use those figures for the analysis and evaluation stages.

The general approach to cost-benefit comparison of alternative training strategies is to analyze the training variables, to convert them to monetary equivalents, and then to compare costs. If desired, individual variables such as “time taken to reach competency” can also be compared and reported as separate indexes of effectiveness.

The cost-benefit model we have just described has been used successfully in several training situations. Evaluation isn’t always easy, but if it can help establish empirically some of the links between HRD and improvements in productivity, then it will help managers avoid the mistake of viewing training as a cost rather than as an investment, and reduce the risk that worthy development investments will go unfunded.

Utility-Analysis Approach to Decisions about HRD Programs

Faced with a bewildering array of alternatives, decision makers must select the programs that will have the greatest impact on pivotal talent pools—those where investments in HRD will have the largest marginal impact on activities, decisions, and ultimately, on the value created for the firm. Recall that utility analysis specifically incorporates the idea of pivotalness by including the quantity of workers affected by an HR program, as well as SDy, the pivotal value of enhanced worker quality. We saw in Chapters 8 through 10 that utility analysis is a powerful tool for staffing programs,[10] and now we show how it can be used to evaluate proposed or ongoing HRD programs.

The basic difference is that staffing programs create value through the quality of the choices they support regarding who joins. In contrast, programs such as HRD do not change the membership of the work force. Instead, they change the quality of the intact pool of workers. So, instead of deriving changes in quality based on who joins or leaves a work force, we must derive changes in quality based on the direct effect of a program on the individuals who participate in it.

Modifying the Brogden-Cronbach-Gleser Model to Apply to Training

In the Brogden-Cronbach-Gleser model, the only difference between the basic equation for calculating staffing utility (Equation 8-17 in Chapter 8, “Staffing Utility: The Concept and Its Measurement”) and that for calculating utility from HRD programs is that the term dt is substituted for the product rxy times Z̄x (that is, the validity coefficient times the average standard score on the predictor achieved by selectees).[11] The resulting utility formula is as follows.

Equation 11-1. ΔU = (N)(T)(dt)(SDy) − C

Where ΔU is the gain to the firm in dollars resulting from the program; N is the number of employees trained; T is the expected duration of benefits in the trained group; dt is the true difference in job performance between the trained and untrained groups in SD units; SDy is the standard deviation of dollar-valued job performance among untrained employees; and C is the total cost of training N employees. The parameter dt is the effect size. It reflects how different those who participate in a development opportunity are in terms of job-relevant outcomes, compared to those who do not participate. It is expressed in standardized units, just as Z-scores were in the selection utility equation.
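Equation 11-1 is straightforward to compute once its parameters have been estimated. Here is a minimal sketch in Python; all input values are hypothetical:

```python
def training_utility(n, t, d_t, sd_y, cost):
    """Equation 11-1: dollar gain from a training program.
    n     -- number of employees trained (N)
    t     -- expected duration of benefits, in years (T)
    d_t   -- true effect size, in SD units
    sd_y  -- SD of dollar-valued job performance among untrained employees
    cost  -- total cost of training the n employees (C)
    """
    return n * t * d_t * sd_y - cost

# Hypothetical inputs: 100 trainees, benefits lasting 2 years, a true
# effect size of 0.44, SDy of $10,000, and $500 cost per trainee.
delta_u = training_utility(n=100, t=2, d_t=0.44, sd_y=10_000, cost=100 * 500)
print(f"Net gain: ${delta_u:,.0f}")  # Net gain: $830,000
```

Note how sensitive the result is to each multiplicative term: halving dt, or the duration T, halves the gross benefit, while the cost term is subtracted only once.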

To illustrate that idea graphically, we’ll plot the (hypothetical) distribution of job performance outcomes of the trained and untrained groups on the same baseline (expressed in Z-units, with a mean of 0 and a standard deviation of 1.0), as shown in Figure 11-3. In that exhibit, d represents the size of the effect of the training program. How is d computed? It is simply the difference between the mean of the trained and untrained groups in standard Z-score units. This might be the difference in average job performance, time to competency, learning, and so on. Therefore:

Equation 11-2. d = (X̄t − X̄u) / SDx

Note: X̄u is the average job performance score of the untrained group; X̄t is the average job performance score of the trained group; and d is the effect size.

Figure 11-3. Standard-score distributions of job performance outcomes among trained and untrained groups.

Where d is the effect size. If the effect is expressed in terms of job performance, X̄t is the average job performance score of the trained group; X̄u is the average job performance score of the untrained group; and SDx is the standard deviation of the job performance scores of the total group, trained and untrained. If the SDs of the two groups are unequal, the SD of the untrained group should be used because it is more representative of the incumbent employee population.

Hypothetically, suppose that we are evaluating the impact of a training program for quality-control inspectors. Let’s say that job performance is evaluated in terms of a work sample—that is, the number of defects identified in a standard sample of products with a known number (for example, 10) of defects. Suppose the average job performance score of employees in the trained group is 7, for those in the untrained group it is 6.5, and the standard deviation of the job performance scores is 1.0. The effect size is as shown in Equation 11-3.

Equation 11-3. d = (7 − 6.5) / 1.0 = 0.50

In other words, the performance of the trained group is half a standard deviation better than that of the untrained group. Because a perfectly reliable, objective measure of job performance was used in this case, the estimate of d need not be corrected for unreliability. In many if not most cases, managers will be using criteria that are less than perfectly reliable, such as supervisory ratings of the job performance of subordinates. In such cases, d must be corrected statistically for unreliability or measurement error in the criterion; otherwise, the estimate will be biased (too conservative).

If supervisory ratings are used as job-performance criteria, reliability probably will be estimated in terms of the extent of inter-rater agreement. A large-sample study that investigated the reliability of ratings of first-level supervisors found that average inter-rater reliabilities were .69 and .64, respectively, for ratings of supervisory abilities and ratings of the performance of specific job duties.[12] Regardless of how the reliability of job performance measures is estimated, the formula for computing the true difference in job performance between the trained and untrained groups is as shown in Equation 11-4.

Equation 11-4. dt = d / √ryy

Alternatively, Equation 11-5:

Equation 11-5. dt = (X̄t − X̄u) / (SDx√ryy)

Where all terms are as defined above, and √ryy is the square root of the reliability of the job-performance measure.
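Equations 11-2 and 11-4 can be sketched together in Python. The figures for the first call come from the quality-control inspector example above; the 0.64 reliability in the second call is the inter-rater value for ratings of specific job duties cited earlier:

```python
from math import sqrt

def effect_size(mean_trained, mean_untrained, sd):
    """Equation 11-2: standardized difference between group means."""
    return (mean_trained - mean_untrained) / sd

def true_effect_size(d, r_yy):
    """Equation 11-4: correct an observed d for criterion unreliability."""
    return d / sqrt(r_yy)

# Quality-control inspector example: trained mean 7, untrained mean 6.5,
# SD 1.0, perfectly reliable work-sample criterion (no correction needed).
d = effect_size(7.0, 6.5, 1.0)
print(f"d = {d:.2f}")                            # d = 0.50

# Had the criterion been supervisory ratings with r_yy = 0.64, the same
# observed d would imply a larger true effect.
print(f"d_t = {true_effect_size(d, 0.64):.3f}")  # d_t = 0.625
```

Because √ryy ≤ 1, the correction always raises the estimate; skipping it biases dt toward zero, which is why an uncorrected estimate is described as too conservative.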

To express that difference as a percentage change in output, assuming that performance is measured on a ratio scale, it is necessary to multiply dt by the ratio of the pretest standard deviation to the pretest performance mean (SD / M) times 100.[13] Thus, the percentage change in output equals this:

Equation 11-6. Percentage change in output = dt × (SD / M) × 100
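Equation 11-6 translates directly into code. The pretest figures below are hypothetical:

```python
def pct_output_change(d_t, pretest_sd, pretest_mean):
    """Equation 11-6: express d_t as a percentage change in output,
    assuming performance is measured on a ratio scale."""
    return d_t * (pretest_sd / pretest_mean) * 100

# Hypothetical: d_t = 0.5, pretest mean of 40 units/hour, SD of 8.
print(f"{pct_output_change(0.5, 8, 40):.1f}% increase in output")  # 10.0%
```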

Issues in Estimating dt

If an organization already has conducted a training program and possesses the necessary data, it can compute dt on the basis of an empirical study. Pre- and post-measures of job performance in the trained and untrained groups should be collected systematically, with special care taken to prevent the ratings or other measures of job performance from being influenced by knowledge of who has or has not been trained. These are the same kinds of problems that bedevil all HRD evaluation research, not just research on dt. Several thorough treatments of these issues are available.[14]

When several studies on the same topic have been done, or when dt must be estimated for a new HRD program where there are no existing data, dt is best estimated by the cumulated results of all available studies, using the methods of meta-analysis. Such studies are available in the literature.[15] As studies accumulate, managers will be able to rely on cumulative knowledge of the expected effect sizes associated with proposed HRD programs. Such a “menu” of effect sizes for HRD programs will allow HR professionals to compute the expected utilities of proposed HRD programs before the decision is made to allocate resources to them.

Sometimes, the results of evaluation research are presented in terms of statistics such as r, t, or F. Each of these can be transformed into d by means of the following formulas.[16] When two groups are compared (and therefore df = 1), the F statistic is converted to a t statistic using Equation 11-7.

Equation 11-7. 

t = √F

The t-statistic then can be converted into the point-biserial correlation (rpb) between the dichotomous variable (training versus no training) and rated performance using Equation 11-8.

Equation 11-8. 

rpb = t / √(t² + Nt − 2)

Where Nt is the total number of persons in the study, the sum of the trained and untrained groups.

To transform rpb into d, use Equation 11-9.

Equation 11-9. 

d = (1 / √(pq)) × √((Nt − 2) / Nt) × (rpb / √(1 − rpb²))

Where p and q are the proportions of the total group in the trained and untrained groups, respectively.

For example, suppose 100 employees are trained and 100 serve in a control group. Results of training are expressed as F = 6.0, using supervisors’ ratings as criteria (assume the reliability of the supervisor ratings ryy = 0.60). Using Equation 11-7,

t = 2.45

Using Equation 11-8,

rpb = 0.17

So,

d = (1/0.5)(0.9950)(0.17/0.985)

d = 0.34

Therefore, using Equation 11-5, dt = d / √ryy = 0.34 / √0.60 = 0.44.
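The conversion chain above (F to t to rpb to d to dt) is compact enough to script. In this sketch, the √((Nt − 2)/Nt) term in the d formula is inferred from the 0.9950 factor in the chapter’s arithmetic; treat it as our reading of Equation 11-9 rather than a quotation of it:

```python
import math

# Chapter figures: F = 6.0, Nt = 200, p = q = 0.5, ryy = 0.60

def f_to_t(F):
    # Equation 11-7: with two groups (df = 1), t = sqrt(F)
    return math.sqrt(F)

def t_to_rpb(t, n_total):
    # Equation 11-8: point-biserial r between training status and performance
    return t / math.sqrt(t ** 2 + n_total - 2)

def rpb_to_d(rpb, p, q, n_total):
    # Equation 11-9 as we read it: the sqrt((Nt - 2)/Nt) term reproduces
    # the 0.9950 factor in the chapter's worked arithmetic
    return ((1 / math.sqrt(p * q)) * math.sqrt((n_total - 2) / n_total)
            * rpb / math.sqrt(1 - rpb ** 2))

t = f_to_t(6.0)                    # ≈ 2.45
rpb = t_to_rpb(t, 200)             # ≈ 0.17
d = rpb_to_d(rpb, 0.5, 0.5, 200)   # ≈ 0.34 with the chapter's rounding
dt = d / math.sqrt(0.60)           # ≈ 0.44, the value used later in the chapter
```

Because the chapter rounds the intermediate value rpb to 0.17, its final figures are 0.34 and 0.44; carrying full precision through the chain gives values about 0.01 higher.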

Issues in Assessing Effect Sizes

Different effect sizes can occur not because training is differentially effective, but because the evaluations differ in breadth of coverage of the outcomes. To be methodologically precise, evaluation should measure only training-related performance.[17] Training programs in first-level supervisory skills may encompass a large portion of the supervisor’s job, whereas training programs designed to affect sales of a specific product may influence only a few tasks of a sales representative’s job. In terms of impact, not all elements of the job are equally pivotal.[18]

Effect sizes measured using specific criteria will usually be larger than those based on a criterion of overall job performance because of the increased precision: when comparisons focus only on the elements that training affects, the observed effects are larger. However, there is a trade-off. If the outcomes of training are defined very narrowly, a large effect size must be adjusted to reflect the fact that only part of the work outcomes are considered, so the proportion of total work value affected is smaller. At the limit, if training evaluations focus narrowly enough on esoteric training outcomes, even large training effects may be unimportant. Thus, it is vital to match the outcomes used to assess the effects of training to the decision context, and to define training outcomes comparably so that effect sizes can be compared meaningfully.[19] The value of a change in performance will vary according to the percentage of pivotal tasks measured by the criteria.

A large-scale study of the relative effects of HRD interventions in a major U.S.-based multinational firm adjusted overall utility estimates by recalculating the valuation base as the product of SDy and the percentage of job skills affected by training. Thus, the utility estimates represented only the value of performance on specific job elements.[20]

Break-Even Analysis Applied to Proposed HRD Programs

Having determined an expected value of dt, we can use the Brogden-Cronbach-Gleser model (Equation 11-1 in this chapter) to compute a break-even value of SDy (the value at which benefits equal costs and ΔU= $0.00; see Chapters 2 and 10). For example, suppose 300 employees are trained, the duration of the training effect is expected to be 2 years, dt = 0.55, and the per-person cost of training is $1,500. Setting ΔU = $0.00 yields the following:

$0.00 = 2(300)(0.55)(SDy) – 300 ($1,500)

SDy = $1,364

Even if dt is as low as 0.10, the break-even value of SDy is still only $7,500, well below the values of SDy (for example, $25,000 to $35,000 in 2006 dollars) typically reported in the literature. To the extent that precise estimates of dt and SDy are unavailable, break-even analysis still allows a decision maker to use the general utility model to assess the impact of a proposed HRD program. If estimates of dt and SDy are available, utility can be computed, and the expected payoff from the program can be compared with the break-even values for dt or SDy. The comparison of “expected case” and “worst case” scenarios thus provides a more complete set of information for purposes of decision making.
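The break-even computation reduces to one line, because the number of trainees appears in both the benefit and cost terms of Equation 11-1 and cancels. A minimal sketch using the chapter’s figures:

```python
def break_even_sdy(years, d_t, cost_per_person):
    # Setting ΔU = 0 in Equation 11-1 (T * N * d_t * SDy = N * C),
    # the number trained N cancels, leaving SDy = C / (T * d_t)
    return cost_per_person / (years * d_t)

sdy_be = break_even_sdy(years=2, d_t=0.55, cost_per_person=1_500)      # ≈ $1,364
sdy_be_low = break_even_sdy(years=2, d_t=0.10, cost_per_person=1_500)  # $7,500
```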

Duration of the Effects of an HRD Program

A key parameter in Equation 11-1 is T, the duration of the effect of a training or HRD program. The idea is that the effects of development may not last forever because the relevance of the learning has a half-life due to changing work situations. In most cases, this parameter is difficult to estimate. One approach that has proven useful is the Delphi method, often used in long-range forecasting. With this method, a group of subject matter experts is asked to provide judgments about the duration of the training effect. Each expert responds individually and anonymously to an intermediary. The intermediary’s task is to collect and summarize the experts’ opinions and redistribute that information back to the experts for another round of judgment. The cycle continues until the experts reach a consensus, often after three or four rounds of judgments.

In practice, we have little knowledge about the duration of training effects. To deal with this issue in the large-scale study described in the previous section, researchers computed break-even values in terms of time. Such values represent the amount of time that the training effect must be maintained for the value of training outcomes to just offset the training investment. Across 18 training programs (managerial, sales, and technical), they found great variability in results, with break-even periods ranging from a few weeks to several years. In the extreme, two management training courses were never expected to break even or to yield a financial gain, because they produced slight decreases in performance; effect sizes were negative. The lesson to be learned from those results is that if we do not understand how long training effects last, we do not really understand the effects of training on organizational performance.

Economic Considerations and Employee Flows Applied to HRD Programs

Because training activities lead to “diminishing returns” over time (that is, training effects dissipate over time), a utility model that incorporates employee flows should be used to assess the net payoff of the program over time.[21] Beyond that, variable costs, taxes, and discounting must be considered to assess correctly the true impact of a proposed or ongoing HRD program. Because we considered these issues in Chapter 10, “The Payoff from Enhanced Selection,” here we need consider only the summary model that incorporates all of these factors. Then, we will present a worked example to demonstrate how the utility analysis proceeds. The model is shown in Equation 11-10. It is the same model used in Chapter 10, but here we have substituted the true effect size dt for the product of the validity coefficient and standardized average predictor score of selectees that we used in Chapter 10.

Equation 11-10. 

ΔU = Σ (k = 1 to F) [Nk (1 + i)^-k × dt × SDy × (1 + V) × (1 − TAX)] − Σ (k = 1 to F) [Ck (1 − TAX)(1 + i)^-k]

Where F is the final period in which training affects performance; Nk is the number of trained employees remaining in the work force in period k; i is the discount rate; dt is the weighted average true effect size across all cohorts in the work force in period k; SDy is the standard deviation of job performance in dollars; V is the proportion of variable costs; TAX is the organization’s tax rate; and Ck is the cost of training incurred in period k.

For purposes of illustration, let us adopt the dt value we computed earlier, 0.44. Assume that 100 employees are trained each year for 5 years and that, for each cohort, the training effect dissipates at the rate of 25 percent of its initial value annually. No employees separate during this period (and therefore Nst = 0). That information allows us to compute a weighted average dt value for the trained group each year, as a new cohort of trainees is added. Table 11-1 shows the weighted average dt values.

Table 11-1. Diminishing Returns of an HRD Program over Five Years

Year   Nk     Weighted Average dt
1      100    (100(0.44)) / 100 = 0.44
2      200    (100(0.44) + 100(0.33)) / 200 = 0.385
3      300    (100(0.44) + 100(0.33) + 100(0.22)) / 300 = 0.33
4      400    (100(0.44) + 100(0.33) + 100(0.22) + 100(0.11)) / 400 = 0.275
5      500    (100(0.44) + 100(0.33) + 100(0.22) + 100(0.11) + 100(0.00)) / 500 = 0.22

Notes: Each cohort’s effect declines by 25 percent of its initial value (0.44 × 0.25 = 0.11) in each year after training, so a cohort contributes 0.44, 0.33, 0.22, 0.11, and 0.00 in successive years. dt = the true difference in job performance between the trained and untrained groups in standard deviation units; HRD = human resources development; Nk = number of employees receiving training who remain in the work force.
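The weighted averages in Table 11-1 can be generated programmatically. This sketch assumes, per the text, a 0.44 initial effect that decays by 25 percent of its initial value in each year after training:

```python
def weighted_avg_dt(year, cohort=100, d0=0.44, decay=0.25):
    # The cohort trained j years before the current one retains
    # d0 * (1 - decay * j), floored at zero (fully dissipated after 4 years)
    effects = [max(d0 * (1 - decay * j), 0.0) for j in range(year)]
    return sum(cohort * e for e in effects) / (cohort * year)

averages = [round(weighted_avg_dt(y), 3) for y in range(1, 6)]
# [0.44, 0.385, 0.33, 0.275, 0.22], as in Table 11-1
```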

To use Equation 11-10, assume that SDy = $30,000, variable costs (V) = –0.10, the tax rate is 45 percent, and the discount rate is 8 percent. Because costs ($1,000 per person) are incurred in the same period that benefits are received, we will use k as the exponent in the cost term in Equation 11-10. The total payoff of the HRD program is the sum of the utilities of each of the five periods:

ΔU1 = 100(0.926)(0.44)($30,000)(0.90)(0.55) – $100,000(0.55)(0.926)

ΔU1 = $554,118

ΔU2 = 200(0.857)(0.385)($30,000)(0.90)(0.55) – $100,000(0.55)(0.857)

ΔU2 = $932,802

ΔU3 = 300(0.794)(0.33)($30,000)(0.90)(0.55) – $100,000(0.55)(0.794)

ΔU3 = $1,123,629

ΔU4 = 400(0.735)(0.275)($30,000)(0.90)(0.55) – $100,000(0.55)(0.735)

ΔU4 = $1,160,198

ΔU5 = 500(0.681)(0.22)($30,000)(0.90)(0.55) – $100,000(0.55)(0.681)

ΔU5 = $1,074,959

The sum of those one-period utility estimates is $4,845,706. This is the total expected payoff of the HRD program over the five-year period.
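The five one-period utilities can be reproduced with a short script. This sketch uses the chapter’s discount factors, rounded to three decimals, so the result matches the $4,845,706 total to within rounding:

```python
# Discount factors (1.08)**-k as rounded in the chapter
PV = {1: 0.926, 2: 0.857, 3: 0.794, 4: 0.735, 5: 0.681}

def period_utility(k, n_k, avg_dt, cost, sdy=30_000, v=-0.10, tax=0.45):
    """One-period utility per Equation 11-10: after-tax, discounted
    benefits minus after-tax, discounted costs."""
    benefits = n_k * PV[k] * avg_dt * sdy * (1 + v) * (1 - tax)
    costs = cost * (1 - tax) * PV[k]
    return benefits - costs

# (year, cumulative trainees, weighted average dt) from Table 11-1
rows = [(1, 100, 0.44), (2, 200, 0.385), (3, 300, 0.33),
        (4, 400, 0.275), (5, 500, 0.22)]
total = sum(period_utility(k, n, d, cost=100_000) for k, n, d in rows)
# ≈ $4,845,705, matching the chapter's $4,845,706 within rounding
```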

Example: Skills Training for Bankers

The utility-analysis concepts discussed thus far were illustrated nicely in a study of the utility of a supervisory-skills training program applied in a large commercial bank.[22] The study incorporated the following features:

  • Training costs were tabulated using cost-accounting techniques.

  • The global estimation procedure was used to estimate SDy.

  • Pre- and post-training ratings of the job performance of (non-randomly assigned) experimental- and control-group subjects were compared in order to determine dt.

  • Utility analysis results that included adjustments for economic factors (discounting, variable costs, and taxes) were compared to unadjusted utility results.

  • Break-even analysis was used to assess the minimum change in SDy required to recoup the costs invested in the program.

  • The effects on estimated payoffs of employee flows, decay in training effects, and employee turnover were considered explicitly.

Results showed that the training program paid off handsomely over time, even under highly conservative assumptions. Training 65 bank managers in supervisory skills produced an estimated net payoff (after adjustment for the economic factors noted earlier) of $70,194 in the first year, rising to $300,963 by Year 5 (all figures in 2006 dollars). Not surprisingly, the reductions in value associated with adjusting for economic factors tended to become greater the further in time they were projected. In general, utility figures adjusted for economic factors were 60 percent to 80 percent smaller than unadjusted figures.

When break-even analysis was used, even assuming a 25 percent yearly reduction in the strength of the training effect, break-even values of SDy were still less than 50 percent of the values used in the utility analysis. Finally, in terms of employee flows, the economic impact of training additional groups was also considerable. For example, the estimate for the tenth year of the utility of training 225 employees in the first five years was more than $738,467 even after adjustment for economic factors. Data such as these are useful to decision makers, whether their focus is on the broad allocation of organizational resources across functional lines or on the choice of specific HR programs from a larger menu of possible programs.

Costs: Off-Site Versus Web-Based Meetings

Having illustrated methods and technology for assessing the value of employee-development efforts, this final section of the chapter focuses on identifying costs—specifically, the costs of off-site versus web-based meetings. Given the wide proliferation and continued growth of Internet-based technologies, many organizations have opted for a web-based rather than an off-site approach in order to cut costs. What follows is a general costing framework that can be applied to many types of training and that can be used to compare relative costs.

Off-site meetings, those conducted away from organizational property, are useful for a variety of purposes: for conducting HRD programs, for communicating information without the interruptions commonly found at the office, for strategic planning, and for decision making. In many cases, however, the true costs of an off-site meeting remain unknown because indirect attendee costs are not included along with the more obvious direct expenses. The method described here enables planners to compute the actual costs of each type of activity in an off-site meeting.[23] Then we consider web-based meeting costs.

We make the following assumptions about a hypothetical firm, Valco Ltd. The firm has 500 employees, including 100 first-line supervisors and managers. Under the general planning and direction of Valco’s training department (one manager and one secretary), Valco holds a total of ten days of off-site meetings per year (either training sessions or various types of meetings for managers). The firm retains outside speakers and consultants to develop and conduct the meetings. On the average, 20 managers attend each meeting, and the typical meeting lasts two full days.

Costs shown in Table 11-2 are based on those figures. The estimates we are using here are broad averages intended only to create a model for purposes of comparison. Note that in this example, we make no attempt to place a monetary value on the loss of productive time from the job, although if such costs can be estimated reliably, they should be included in the calculations. As with the illustrations in other chapters, we have attempted to make the numbers as realistic as possible, but our primary concern is with the methodology rather than with the numbers.

Table 11-2. Costs of an Off-Site Management Meeting

Cost Element                                          Cost per Participant per Day    Total Cost

1. Development of programs (annual): training
   dept. overhead, training staff salaries,
   outside consultants, equipment and meeting
   materials                                          $350[a]                         $350,000

2. Participant cost (annual): salaries and
   benefits (average)                                 $550[b]                         $130,000

3. Delivery of one meeting for 20 people
   1. Facility costs
      a. Sleeping rooms                               $220                            $4,400
      b. Three meals daily                            109[c]                          2,180
      c. Coffee breaks                                30[d]                           600
      d. Reception                                    20[e]                           400
   2. Meeting charges
      a. Room rental                                  50                              1,000
      b. Audiovisual equipment rental                 40                              800
      c. Business services                            25[f]                           500
   3. Transportation to the meeting                   175[g]                          7,000

Summary: Total Cost per Participant per Day

1. Development of programs                            $350
2. Participant cost                                   550
3. Delivery of one meeting (hotel + transportation)   669

Total: $1,569

Notes: Meeting duration: 2 full days. Number of attendees: 20 people. Costs do not reflect an estimate of the value of the lost productive time by the people in the program. Adding it would increase the above costs dramatically.

[a] To determine the per-day cost, divide $350,000 by the number of meeting days held per year (10). Then divide the answer ($35,000) by the number of managers (100) attending all programs = $350 per day of a meeting.

[b] To determine the per-day cost, divide the total of $130,000 by 236 (average number of work days per year) = $550 per day of the work year.

[c] Assume the following daily costs per person: $20 for breakfast, $30 for lunch, $40 for dinner + 21 percent service fee/gratuity = $108.90.

[d] Assumes a total cost of $300 per coffee break, one morning + one afternoon = $600/day, divided by 20 attendees = $30 per person per day.

[e] Assumes a charge of $100 to set up a bar + a $300 minimum total charge = $400 divided by 20 = $20 per person per day.

[f] Assumes a daily charge of $500 for Internet access, photocopying, and facsimile services.

[g] To determine the per-day cost, divide group total ($7,000) by the number of participants (20); then divide the resulting figure ($350) by the number of meeting days (2) = $175 per day.
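The summary arithmetic in Table 11-2 can be verified in a few lines, following footnotes [a] and [b] and the delivery items; note that the chapter rounds the participant cost of $550.85 per day down to $550:

```python
# Per-participant, per-day cost elements from Table 11-2 (Valco example)
development = 350_000 / 10 / 100  # footnote [a]: $350,000 / 10 meeting days / 100 managers
participant = 130_000 / 236       # footnote [b]: ≈ $550.85; the chapter uses $550
delivery = 220 + 109 + 30 + 20 + 50 + 40 + 25 + 175  # facility + meeting + transport = $669

total_per_day = development + 550 + delivery  # $1,569, as in the table's summary
```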

As you can see in Table 11-2, the per-day, per-person cost of Valco’s meeting comes to $1,569. Actually, that figure probably does not represent the true cost of the meeting because no distinction is made between recurring and nonrecurring costs.[24]

During the development of a program, organizations absorb nonrecurring costs such as equipment purchases and training designers’ salaries. Recurring costs absorbed each time a program is presented include session expenses, such as facilities and trainers’ salaries, and costs that correspond to the number of participants in a program, such as training materials and trainees’ salaries.

Separating costs into categories allows each set of costs to be incorporated into utility calculations for the time period in which each expense is incurred. Thus, the high initial expenses associated with a program may indicate that costs exceed benefits for some period of time or over a certain number of groups of trainees. However, at some point, an organization may begin to derive program benefits that signal the beginning of a payback period. Separating costs from benefits helps decision makers to clarify information about the utility of HR programs and return on investment.[25] This is as important for off-site meetings as it is for web-based ones.

Web-based meetings incur all the costs shown in Table 11-2, with the exception of sleeping rooms (item 1a), the reception (item 1d), meeting charges (items 2a, b, and c), and transportation to the meeting (item 3). However, a premises-based license for web-based conferencing typically costs at least $3,000 per year for unlimited usage.[26] Moreover, the emerging generation of unified communications platforms featuring integrated instant messaging, email, video, and audio tools is making it easier for geographically dispersed attendees to exploit the full range of media.[27]
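Combining Table 11-2 with those exclusions gives a rough per-person, per-day comparison. The license allocation below is an illustrative assumption (a $3,000 annual license spread across Valco’s 10 meeting days × 20 attendees), not a figure from the chapter:

```python
# Off-site per-person, per-day cost from Table 11-2
offsite = 1_569

# Items the text says web-based meetings avoid (per person per day):
# sleeping rooms (1a) $220, reception (1d) $20,
# meeting charges (2a-c) $50 + $40 + $25, transportation (3) $175
avoided = 220 + 20 + (50 + 40 + 25) + 175  # $530

# Assumed allocation of a $3,000/year license over 10 meeting days x 20 attendees
license_per_attendee_day = 3_000 / (10 * 20)  # $15

web_based = offsite - avoided + license_per_attendee_day  # ≈ $1,054
```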

The very highest-level videoconferencing systems, such as Hewlett-Packard’s Halo Collaboration Studio, Polycom’s RPX product, or Cisco’s Telepresence Meeting solution, include a set of technologies that allow people to feel as if they are present at a remote location (“being there”), a phenomenon called “telepresence.”[28] To achieve the illusion that all attendees are in the same room, each vendor makes its videoconferencing rooms look alike, using the same semicircular conference tables illuminated by the same type of light bulbs and surrounded by identical wall colors. Participants appear as life-size images, and sit at the table facing video displays, which have cameras set just above or around the screen.[29]

Telepresence systems are not cheap. H-P’s system can cost as much as $425,000, plus $18,000 a month per conference room for operating costs. Cisco’s product costs $299,000 for the hardware itself (rich audio, high-definition video, and interactive elements), plus $40,000 for planning and design, plus $3,500 a month for maintenance. Those costs will likely limit the use of telepresence systems to large, deep-pocketed organizations, and comprise only a fraction of the more than $1 billion spent on videoconferencing equipment each year.[30]

Why do so many meetings still occur in person all over the globe every year? Perhaps because 64 percent of communication is nonverbal,[31] and lower-end web-based conferencing systems lose most of those cues. Hence, many organizations feel that there is no substitute for face-to-face contact and the opportunity for interpersonal interaction. The influence of the environment on training cannot be minimized. The task for decision makers is to consider whether facility costs, or web-based conferencing costs, as a percentage of total true meeting costs, will be offset by a corresponding increase in learning effectiveness. Only by considering all the factors that have an impact on learning effectiveness—program planning and administration, the quality of the trainer, program delivery, and the learning environment—can we derive the greatest return, in time and dollars spent, on this substantial investment in people.

Process: Enhancing Acceptance of Training Cost and Benefit Analyses

The total cost of evaluating 18 training programs in the multinational firm we described earlier in the chapter was approximately $680,000 (in 2006 dollars).[32] That number may seem large, until you consider that during the time of the study, the organization spent more than $327 million on training. Thus, the cost of training evaluation was roughly 0.2 percent of the training budget during this time period. Given expenditures of such magnitude, some sort of accountability is prudent.

To enhance managerial acceptance, the researchers presented the utility model and the procedures they proposed to use to the CEO, as well as to senior strategic-planning and HR managers, before conducting their research. They presented the model and procedures as fallible but reasonable estimates. The researchers noted that securing management’s approval before applying the model, and before presenting utility results in a decision-making context, is particularly important, because nearly any field application of utility analysis will rely on an effect size calculated with an imperfect quasi-experimental design. (See Chapter 2 for more on quasi-experimental designs.)

Conclusion

One of the important lessons to be learned from the material presented in this chapter is that methods are available now for estimating the costs and benefits of HRD programs (proposed, ongoing, or completed). Instead of depending on the power of persuasion to convince decision makers of the value of HRD programs, HR professionals can, by the use of cost-benefit models, join with the other functional areas of business in justifying the allocation of scarce organizational resources on the basis of evidence rather than on beliefs.

Exercises

Software to calculate answers to one or more exercises below is available at www.shrm.org/publications/books.

1.

Soclear, Inc., a janitorial-service firm, wants to conduct a controlled study of structured versus unstructured training for window washers of office buildings. How would you design the structured and unstructured training programs? To use a cost-benefit model, what criteria of job performance might you use? How will you collect and record data? How might you make cost comparisons?

2.

Jane Burns, an HR analyst for Standard City, USA, knows that SDy for firefighters in her city is $28,000. The fire department has asked the city to provide training in team building for 500 of its employees, at a cost of $2,500 per employee. The effects of this organization-development effort are expected to last for two years. Using Equation 11-1, compute the break-even value for dt necessary for the city to recoup the costs of the program.

3.

Suppose, in Exercise 2, that you have just read a meta-analysis of team-building studies and know that the cumulated estimate of dt is 0.45. Compute an expected utility for the program, and compare it to the break-even value you identified earlier. How might this affect the chances that the project will be funded?

4.

With regard to Exercise 3, suppose that the discount rate is 10 percent and variable costs are –0.10. The city is not taxed. How do these factors affect the estimate of expected utility that you developed in Exercise 3?

5.

Pilgrim Industries, a 2,000-employee firm with 400 managers, holds 40 days of off-site meetings per year. Outside consultants develop and conduct the meetings, and on the average 20 managers attend each meeting. The typical meeting lasts two full days. Last year, total program-development costs consumed $350,000. The average attendee’s salary (plus benefits) was $70,000. To deliver each two-day meeting for 20 people, sleeping accommodations, food, telephone, and a cocktail reception cost $10,000. In addition, transportation, business services, meeting room, and audiovisual equipment rental totaled another $11,000. Determine the total per day, per-person cost of one off-site meeting.

6.

Pilgrim’s CEO has heard about the remarkable quality of “telepresence” web-based conferencing systems, and she has asked you to prepare a per-person, per-day cost comparison of an off-site meeting versus a web-based conference for a two-day meeting. You calculated the per-person, per-day cost of an off-site meeting in Exercise 5. What costs must you consider with respect to a web-based system? Is there any other information you would want to have before recommending one alternative over the other?

References

1.

W. F. Cascio and H. Aguinis, Applied Psychology in Human Resource Management (6th ed.) (Upper Saddle River, NJ: Prentice Hall, 2005).

2.

T. Mills, “Human resources—Why the new concern?” Harvard Business Review, March-April 1975, 120–134.

3.

R. A. Noe, Employee Training and Development (4th ed.) (Burr Ridge, IL: McGraw-Hill, 2008). R. Jana, “On-the-job video gaming,” Business Week, March 27, 2006, 43. K. G. Brown and J. K. Ford, “Using computer technology in training: Building an infrastructure for active learning,” in K. Kraiger (ed.), Creating, Implementing, and Managing Effective Training and Development (San Francisco: Jossey-Bass, 2002). M. A. Quinones and A. Ehrenstein, (eds.), Training for a Rapidly Changing Workplace (Washington, D.C.: American Psychological Association, 1997).

4.

B. J. Avolio, J. J. Sosik, D. I. Jung, and Y. Berson, “Leadership models, methods, and applications” in W. C. Borman, D. R. Ilgen, and R. J. Klimoski (eds.), Handbook of Psychology (Vol. 12) (Hoboken, NJ: Wiley, 2003) 277–307.

5.

Cascio and Aguinis, op. cit. See also W. R. Shadish, T. D. Cook, and D. T. Campbell, Experimental and Quasi-Experimental Designs for Generalized Causal Inference (Boston: Houghton Mifflin, 2002).

6.

J. W. Boudreau and P. M. Ramstad, Beyond HR: The New Science of Human Capital (Boston: Harvard Business School Publishing, 2007).

7.

P. Coy and J. Ewing, “Where are all the workers?” Business Week, April 9, 2007, 28–31.

8.

WorldatWork Total Rewards Model. Downloaded on October 4, 2007 from http://www.worldatwork.org/waw/aboutus/html/aboutus-whatis.html#model.

9.

J. G. Cullen, S. A. Sawzin, G. R. Sisson, and R. A. Swanson, “Training: What’s it worth?” Training and Development Journal, 30:8, 1976, 12–20.

10.

See, for example, M. C. Sturman, C. O. Trevor, J. W. Boudreau, and B. Gerhart, “Is it worth it to win the talent war? Evaluating the utility of performance-based pay,” Personnel Psychology, 56, 2003, 997–1035. See also H. Mabon, “The cost of downsizing in an enterprise with job security,” Journal of Human Resource Costing and Accounting, 1:1, 1996, 35–62. H. Mabon and G. Westling, “Using utility analysis in downsizing decisions,” Journal of Human Resource Costing and Accounting, 1:2, 1996, 43–72.

11.

F. L. Schmidt, J. E. Hunter, and K. Pearlman, “Assessing the economic impact of personnel programs on work force productivity,” Personnel Psychology, 35, 1982, 333–347.

12.

H. R. Rothstein, F. W. Erwin, F. L. Schmidt, W. A. Owens, and C. P. Sparks, “Biographical data in employment selection: Can validities be made generalizable?” Journal of Applied Psychology, 75, 1990, 175–184.

13.

P. R. Sackett, “On interpreting measures of change due to training or other interventions: A comment on Cascio (1989, 1991),” Journal of Applied Psychology, 76, 1991, 590, 591.

14.

Cascio and Aguinis, op. cit. Shadish et al., op. cit. I. L. Goldstein and J. K. Ford, Training in Organizations: Needs Assessment, Development, and Evaluation (4th ed.) (Belmont, CA: Wadsworth, 2002).

15.

W. Arthur, Jr., W. Bennett, Jr., P. S. Edens, and S. T. Bell, “Effectiveness of training in organizations: A meta-analysis of design and evaluation features,” Journal of Applied Psychology, 88, 2003, 234–245. M. J. Burke and R. R. Day, “A cumulative study of the effectiveness of managerial training,” Journal of Applied Psychology, 71, 1986, 232–246. R. A. Guzzo, R. D. Jette, and R. A. Katzell, “The effects of psychologically-based intervention programs on worker productivity: A meta-analysis,” Personnel Psychology, 38, 1985, 275–291. C. C. Morrow, M. Q. Jarrett, and M. T. Rupinski, “An investigation of the effect and economic utility of corporate-wide training,” Personnel Psychology, 50, 1997, 91–129.

16.

Schmidt et al., op. cit.

17.

J. P. Campbell, “Training design for performance improvement,” in J. P. Campbell and R. J. Campbell (eds.), Productivity in Organizations (San Francisco: Jossey-Bass, 1988) 177–216.

18.

Boudreau and Ramstad, op. cit.

19.

Morrow et al., op. cit.

20.

Ibid.

21.

J. W. Boudreau, “Effects of employee flows on utility analysis of human resource productivity improvement programs,” Journal of Applied Psychology, 68, 1983, 396–406.

22.

J. E. Mathieu and R. L. Leonard, Jr. “Applying utility concepts to a training program in supervisory skills: A time-based approach,” Academy of Management Journal, 30, 1987, 316–335.

23.

The method is based on W. J. McKeon, “How to determine off-site meeting costs,” Training and Development Journal, 35, May 1981, 126–122.

24.

Mathieu and Leonard, op. cit.

25.

Ibid.

26.

“Web conferencing: A better way to meet,” at http://archive.webpronews.com/2005/0619.html.

27.

J. Murray, “Poor comms management harms virtual teams,” IT Week, September 20, 2006, at www.itweek.co.uk.

28.

Wikipedia, “Telepresence,” 2007, at http://en.wikipedia.org/wiki/Telepresence.

29.

L. Lee, “Cisco joins high-end videoconference fray,” Business Week, October 25, 2006, at www.businessweek.com/print/technology.

30.

Ibid.

31.

Pearn-Kandola, The Psychology of Effective Business Communications in Geographically Dispersed Teams (San Jose, CA: Cisco Systems, Inc., 2006).

32.

Morrow et al., op. cit.

 
