Chapter 2. Analytical Foundations of HR Measurement

The preceding chapter noted the importance of analytics within a complete framework for a decision-based approach to human capital measurement. As you will see in the chapters that follow, each type of HR measurement has its own particular elements of analytics, those data analysis and design factors that ensure that the findings are legitimate and generalizable. However, it’s also true that nearly every element of human resource management (HRM) relies on one or more supporting analytical concepts. These concepts are often the elements that scientists have identified as essential to drawing strong conclusions, or they reflect the tenets of economic analysis that ensure that the inferences that we draw from measures properly account for important economic factors, such as inflation and risk.

As you read through the various chapters of this book, each of which focuses on a different aspect of HR measurement, you will encounter many of these analytical concepts over and over again. In the interests of efficiency, we present some of the most common ones here so that you can refer back to this chapter as often as necessary to find a single location for their description and definition. What they have in common is that they are general guidelines for interpreting data-based information. We present them in two broad groups: concepts in statistics and research design, and concepts in economics and finance. Within each category, we address issues in rough order from general to specific. Let’s begin by considering why measures expressed in economic terms tend to get the attention of business leaders.

Traditional Versus Contemporary HR Measures

HRM activities—those associated with the attraction, selection, retention, development, and utilization of people in organizations—commonly are evaluated by using measures of individual behaviors, traits, or reactions, or by using statistical summaries of those measures. The former include measures of the reactions of various groups (top management, customers, applicants, or trainees), what individuals have learned, or how their behavior has changed on the job. Statistical summaries of individual measures include various ratios (for example, accident frequency or severity), percentages (for example, labor turnover), measures of central tendency and variability (for example, mean and standard deviation of performance measures, such as bank-teller shortages and surpluses), and measures of correlation (for example, validity coefficients for staffing programs, or measures of employee satisfaction and turnover).

Measuring individual behaviors, traits, or reactions and summarizing them statistically is the hallmark of most HR measurements, which are often largely drawn from psychology. More and more, however, the need to evaluate HRM activities in economic terms is becoming apparent. In the current climate of intense competition to attract and retain talent domestically and globally, operating executives justifiably demand estimates of the expected costs and benefits of HR programs, expressed in economic terms. They demand measures that are strategically relevant to their organizations, and that rely on a defined logic to enhance decisions that affect important organizational outcomes. Reporting employee turnover levels for every position in an organization may seem to business leaders to be an administrative exercise for the Human Resources department. However, they can often readily see the importance of analyzing and understanding the results of turnover among high performers (“A” players) who are difficult to replace, in a business unit that is pivotal to strategic success (for example, R&D in a pharmaceutical organization). Developing such measures certainly requires attention to calculating turnover appropriately, and to the statistical formulas that summarize it. However, it also requires an interdisciplinary approach that includes information from accounting, finance, economics, and behavioral science. In the chapters that follow, we strive to apply this approach to several important areas of HRM.

Measures developed in this way can help senior executives assess the extent to which HR programs are consistent with and contribute to the strategic direction of an organization. HR professionals have been accused of being out of touch with the strategic direction of their organizations, of being isolated and parochial. Consider how Ken Carrig, Chief Administrative Officer at SYSCO Corporation, described the future role of HR:

When I look at the future of HR it’s looking where we can add value to the organization and to the organization’s customers. We have to have the human resources function looking beyond the organization that it operates in to the customer that it is serving. And by doing that, you don’t just focus on employment costs and productivity; rather, you begin to ask questions like, what does it take to add value to that customer? How do you help that customer make more money? How do you help that customer be more effective? With good HR processes and practices, there’s a lot to add.[1]

SYSCO is the largest food marketing and distribution company in North America. How did HR at SYSCO get noticed? In conducting HR research, Carrig and his HR group found a strong relationship (R² = .46) between three key human capital metrics (the work-climate scores reported by employees, employee productivity, and retention) and operating pretax earnings. Work-climate scores were an additive combination of scores on the following dimensions: leadership support, front-line supervisor, rewards, quality of life, engagement, diversity, and customer focus. Productivity was measured in terms of the number of employees per 100,000 cases shipped. Retention was expressed as 1 minus the rate of employee turnover over a one-year period in three key segments: marketing associates, drivers, and night warehouse employees.

The overall relationship is lagged about six months, such that employee satisfaction (work-climate scores) drives customer satisfaction, which drives long-term profitability and growth. In short, SYSCO not only has been able to determine what practices and processes are helping to drive the human capital indices, but also how those, in fact, influence the financial metrics over time.[2]

Another way to express the important relationship that SYSCO found is that the human capital metrics, particularly the work-climate scores, were a leading indicator of pretax earnings to operating executives in individual SYSCO companies. In short, because the human capital metrics were directly relevant to the strategic success of their organizations, senior executives began to pay careful attention to changes in them and to monitor them on a regular basis. That is management by measurement, and it neatly illustrates how HR measurement can help drive business success. Measures, and the statistics used to analyze their implications, are important tools. Like any tool, they work best when used correctly.

Next, we describe some fundamental concepts from statistics and research design that help ensure the kind of data gathered, and the calculations used to summarize those data, are best suited to the questions the data should answer. They are general interpretive concepts.

Fundamental Analytical Concepts from Statistics and Research Design

We make no attempt here to present basic statistical or research methods. Many excellent textbooks do that much more effectively than we can in the space available. Rather, we assume that the reader is generally familiar with these issues, and our purpose here is to offer guidelines to interpretation and to point out some important cautions in those interpretations. In the following sections, we address three key concepts: generalizing from sample data, correlation and causality, and experimental controls for extraneous factors.

Generalizing from Sample Data

As a general rule, organizational research is based on samples rather than populations of observations. A population consists of all the people (or more broadly, units) about whom or which a study is meant to generalize, such as employees with fewer than two years’ experience, customers who patronize a particular store, or trucks in a company’s fleet. A sample represents a subset of people (or units) who actually participate in a study. In almost all cases, it is not practical or feasible to study an entire population. Instead, researchers draw samples.

If we are to draw reliable (that is, stable, consistent) and valid (that is, accurate) conclusions concerning the population, it is imperative that the sample be “like” the population—a representative sample. When the sample is like the population, we can be fairly confident that the results we find based on the sample also hold for the population. In other words, we can generalize from the sample to the population.[3]

One way to generate a representative sample is to use random sampling. A random sample is achieved when, through random selection, each member of a population is equally likely to be chosen as part of the sample. A table of random numbers, found in many statistics textbooks, can be used to generate a random sample. Here is how to use such a table. Choose any starting place arbitrarily. Look at the number, say, 004. Assuming you have a list of names, such as applicants, count down the list to the fourth name. Choose it. Then look at the next number in the table, count down through the population, and choose that person, until you have obtained the total number of observations you need.
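Although random-number tables remain a handy manual tool, the same procedure is easy to automate. Here is a minimal sketch in Python; the applicant list and sample size are hypothetical:

```python
import random

random.seed(42)  # fix the seed so the draw is reproducible

# Hypothetical population: a list of 500 applicant names.
applicants = [f"Applicant {i}" for i in range(1, 501)]

# Draw a simple random sample of 50 without replacement; random.sample
# gives every member an equal chance of selection, just as the
# random-number-table procedure does by hand.
sample = random.sample(applicants, k=50)
print(sample[:5])
```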

Sometimes a population is made up of members of different groups or categories, such as males and females, or purchasers of a product and nonpurchasers. Assume that among 500 new hires in a given year, 60 percent are female. If we want to draw conclusions about the population of all new hires in a given year, based on our sample, the sample itself must be representative of these important subgroups (or strata) within the population. If the population is composed of 60 percent females and 40 percent males, we need to ensure that the sample is similar on this dimension.

One way to obtain such a sample is to use stratified random sampling. Doing so allows us to take into account the different subgroups of people in the population, and helps guarantee that the sample represents the population on specific characteristics. Begin by dividing the population into subsamples or strata. In our example, the strata are based on gender. Then randomly select 60 percent of the sample observations from this stratum (for example, using the procedure described earlier), and the remaining 40 percent from the other stratum (males). Doing so ensures that the characteristic of gender in the sample represents the population.[4]
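Continuing the 60/40 example, here is a brief sketch of stratified random sampling in Python; the population and strata shares are hypothetical illustrations:

```python
import random

random.seed(42)

# Hypothetical population of 500 new hires: 300 female (60%), 200 male (40%).
population = [("F", i) for i in range(300)] + [("M", i) for i in range(200)]

def stratified_sample(pop, strata_shares, n):
    """Randomly sample within each stratum in proportion to its population share."""
    sample = []
    for stratum, share in strata_shares.items():
        members = [person for person in pop if person[0] == stratum]
        sample.extend(random.sample(members, k=round(n * share)))
    return sample

# A sample of 100 that mirrors the 60/40 gender split of the population.
sample = stratified_sample(population, {"F": 0.60, "M": 0.40}, n=100)
print(sum(1 for s in sample if s[0] == "F"))  # 60
```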

Many other types of sampling procedures might be used,[5] but the important point is that it is not possible to generalize reliably and validly from a sample to a population unless the sample itself is representative. Unfortunately, much research that is done in HR and management is based on case studies, samples of convenience, and even anecdotal evidence. Under those circumstances, it is not possible to generalize to a broader population of interest, and it is important to be skeptical of studies that try to do so.

Drawing Conclusions about Correlation and Causality

Perhaps one of the most pervasive human tendencies is to assume incorrectly that just because two things increase and decrease together, one must cause the other. The degree of relationship between any two variables (in the employment context, predictor and criterion) is simply the extent to which they vary together (covary) in a systematic fashion. The magnitude or degree to which they are related linearly is indicated by some measure of correlation, the most popular of which is the Pearson product-moment correlation coefficient, r. As a measure of relationship, r varies between −1.00 and +1.00. When r is 1.00, the two sets of scores (x and y) are related perfectly and systematically to each other. Knowing a person’s status on variable x allows us to predict without error his or her standing on variable y.

In the case of an r of +1.00, high (low) predictor scores are matched perfectly by high (low) criterion scores. For example, performance-review scores may relate perfectly to recommendations for salary increases. When r is −1.00, however, the relationship is inverse, and high (low) predictor scores are accompanied by low (high) criterion scores. For example, consider that as driving speed increases, fuel efficiency decreases. In both cases, positive and negative relationships, r indicates the extent to which the two sets of scores are ordered similarly. Given the complexity of variables operating in business settings, correlations of 1.00 exist only in theory. If no relationship exists between the two variables, r is 0.0, and knowing a person’s standing on x tells us nothing about his or her standing on y. If r is moderate (positive or negative), we can predict y from x with a certain degree of accuracy.

Although correlation is a useful procedure for assessing the degree of relationship between two variables, by itself it does not allow us to predict one set of scores (criterion scores) from another set of scores (predictor scores). The statistical technique by which this is accomplished is known as regression analysis, and correlation is fundamental to its implementation.[6]

Sometimes people interpret a correlation coefficient as the percentage of variability in y that can be explained by x. This is not correct. Actually, the square of r indicates the percentage of variance in y (the criterion) that can be explained, or accounted for, given knowledge of x (the predictor). Assuming a correlation of r = .40, then r² = .16. This indicates that 16 percent of the variance in the criterion may be determined (or explained), given a knowledge of the predictor. The statistic r² is known as the coefficient of determination.[7]
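To illustrate the calculation, here is a minimal sketch in Python using hypothetical predictor and criterion scores (the statistics.correlation function assumes Python 3.10 or later):

```python
from statistics import correlation  # requires Python 3.10+

# Hypothetical paired scores: predictor x and criterion y.
x = [2, 4, 5, 7, 8, 10, 11, 13]
y = [65, 70, 68, 74, 78, 81, 80, 88]

r = correlation(x, y)  # Pearson product-moment correlation
r_squared = r ** 2     # coefficient of determination
print(f"r = {r:.2f}, r^2 = {r_squared:.2f}")
```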

A special problem with correlational research is that it is often misinterpreted. People often assume that because two variables are correlated, some sort of causal relationship must exist between the two variables. This is false. Correlation does not imply causation! A correlation simply means that the two variables are related in some way. For example, consider the following scenario. An HR researcher observes a correlation between voluntary employee turnover and the financial performance of a firm (for example, as measured by return on assets) of –.20. Does this mean that high voluntary turnover causes poor financial performance of a firm? Perhaps. However, it is equally likely that the poor financial performance of a firm causes voluntary turnover, as some employees scramble to desert a sinking ship. In fact, such a reciprocal relationship between employee turnover and firm performance has now been demonstrated empirically.[8]

At a broader level, it is equally plausible that some other variable (for example, low unemployment) is causing employees to quit, or that a combination of variables (low unemployment and a global economic recession) is causing high voluntary turnover and low financial performance by the firm. The point is that observing a correlation between two variables just means they are related to each other; it does not mean that one causes the other.

In fact, there are three necessary conditions to support a conclusion that x causes y.[9] The first is that y did not occur until after x. The second requirement is that x and y are actually shown to be related. The third (and most difficult) requirement is that other explanations of the relationship between x and y can be eliminated as plausible rival hypotheses.

Path analysis, a special application of regression analysis, can help to clarify theoretical and empirical relationships.[10] The most convincing claims of causal relationships, however, usually are based on experimental research.

Eliminating Alternative Explanations Through Experiments and Quasi-Experiments

The experimental method is a research method that allows a researcher to establish a cause-and-effect relationship through manipulation of one or more variables and to control the situation. An experimental design is a plan, an outline for conceptualizing the relations among the variables of a research study. It also implies how to control the research situation and how to analyze the data.[11]

For example, researchers can collect “before” measures on a job—before employees attend training—and collect “after” measures at the conclusion of training (and when employees are back on the job at some time after training). Researchers use experimental designs so that they can make causal inferences. That is, by ruling out alternative plausible explanations for observed changes in the outcome of interest, we want to be able to say that training caused the changes. Many preconditions must be met for a study to be experimental in nature. Here we merely outline the minimum requirements needed for an experiment.

The basic assumption is that a researcher controls as many factors as possible to establish a cause-and-effect relationship among the variables being studied. Suppose, for example, that a firm wants to know whether online training is superior to classroom training. To conduct an experiment, researchers manipulate one variable (known as the independent variable; in this case, type of training) and observe its effect on an outcome of interest (a dependent variable; for example, test scores at the conclusion of training). One group will receive classroom training, one group online training, and a third group no training. The last group is known as a “control” group because its purpose is to serve as a baseline from which to compare the performance of the other two groups. The groups that receive training are known as “experimental” or “treatment” groups because they each receive some treatment or level of the independent variable. That is, they each receive the same number of hours of training, either online or classroom. At the conclusion of the training, we will give a standardized test to the members of the control and experimental groups and compare the results to each other. Scores on the test are the dependent variable in this study.

Earlier we said that experimentation involves control. This means that we have to control who is in the study. We want to have a sample that is representative of the broader population of actual and potential trainees. We want to control who is in each group (for example, by assigning participants randomly to one of the three conditions: online, classroom, or no training). We also want to have some control over what participants do while in the study (design of the training to ensure that the online and classroom versions cover identical concepts and materials). If we observe changes in post-training test scores across conditions, and all other factors are held constant (to the extent it is possible to do this), we can conclude that the independent variable (type of training) caused changes in the dependent variable (test scores derived after training is concluded). If, after completing this study with the proper controls, we find that those in one group (online, classroom, or no training) clearly outperform the others, we have evidence to support a cause-and-effect relationship among the variables.
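To make the comparison concrete, here is a minimal sketch in Python of a one-way analysis of variance across the three conditions; the test scores are hypothetical, and a significant F would still require follow-up contrasts to locate where the differences lie:

```python
from scipy import stats

# Hypothetical post-training test scores for the three conditions.
online = [82, 85, 88, 79, 91, 84]
classroom = [78, 74, 80, 83, 77, 81]
control = [70, 72, 68, 75, 71, 69]

# One-way ANOVA: do mean test scores differ across the three groups?
f_stat, p_value = stats.f_oneway(online, classroom, control)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```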

Many factors can serve as threats to valid inferences, such as outside events, experience on the job, or social desirability effects in the research situation.[12]

Is it appropriate to accept wholeheartedly a conclusion from only one study? In most cases, the answer is no. This is because researchers may think they have controlled everything that might affect observed outcomes, but perhaps they missed something that does affect the results. That something else may have been the actual cause of the observed changes! A more basic reason for not trusting completely the results of a single study is that a single study cannot tell us everything about a theory.[13] Science is not static, and theories generated through science change. For that reason, a family of methods called meta-analysis mathematically combines the findings from many studies to determine whether the patterns across studies support certain conclusions. Combining multiple studies in this way yields more reliable conclusions, and it is now common in many areas of behavioral science.[14]

Researchers approaching organizational issues often believe that conducting a carefully controlled experiment is the surest route to important answers in the data. In fact, there is an important limitation of experiments and the data they provide. Often, they fail to focus on the real goals of an organization. For example, experimental results may indicate that job performance after treatment A is superior to performance after treatments B or C. The really important question, however, may not be whether treatment A is more effective, but rather what levels of performance we can expect from almost all trainees at an acceptable cost, and the extent to which improved performance through training “fits” the broader strategic thrust of an organization.[15] Therefore, even well-designed experiments must carefully consider the context and logic of the situation, to ask the right questions in the first place.

Quasi-Experimental Designs

In field settings, major obstacles often interfere with conducting true experiments. True experiments require the manipulation of at least one independent variable, the random assignment of participants to groups, and the random assignment of treatments to groups.[16] However, some less-complete (that is, quasi-experimental) designs still can provide useful data even though a true experiment is not possible. Shadish, Cook, and Campbell offer a number of quasi-experimental designs with the following rationale:[17]

The central purpose of an experiment is to eliminate alternative hypotheses that also might explain results. If a quasi-experimental design can help eliminate some of these rival hypotheses, it may be worth the effort.

Because full experimental control is lacking in quasi-experiments, it is important to know which specific variables are uncontrolled in a particular design. Investigators should, of course, design the very best experiment possible, given their circumstances; but where “full” control is not possible, they should use the most rigorous design that is possible. As a detailed example, consider one type of quasi-experimental design.[18]

This design, which is particularly appropriate for cyclical training programs, is known as the recurrent institutional cycle design. For example, a large sales organization presented a management development program, known as the State Manager Program, every two months to small groups (12–15) of middle managers (state managers). The one-week program focused on all aspects of retail sales (new product development, production, distribution, marketing, merchandising, and so on). The program was scheduled so that all state managers (approximately 110) could be trained over an 18-month period.

This is precisely the type of situation for which the recurrent institutional cycle design is appropriate—that is, a large number of persons will be trained, but not all at the same time. Different cohorts are involved. This design is actually a combination of two (or more) before-after studies that occur at different points in time. Group I receives a pretest at Time 1, then training, and then a post-test at Time 2. At the same chronological time (Time 2), Group II receives a pretest, training, and then a post-test at Time 3. At Time 2, therefore, an experimental and a control group have, in effect, been created. One can obtain even more information (and with quasi-experimental designs, it is always wise to collect as much data as possible or to demonstrate the effect of training in several different ways) if it is possible to measure Group I again at Time 3 and to give Group II a pretest at Time 1. This controls the effects of history. Moreover, Time 3 data for Groups I and II and the post-tests for all groups trained subsequently provide information as to how the training program is interacting with other organizational events to produce changes in the criterion measure.

Several cross-sectional comparisons are possible with the “cycle” design:

  • Group I post-test scores at Time 2 can be compared with Group II post-test scores at Time 2.

  • Gains made in training for Group I (Time 2 post-test scores) can be compared with gains in training for Group II (Time 3 post-test scores).

  • Group II post-test scores at Time 3 can be compared with Group I post-test scores at Time 3 (that is, gains in training versus gains [or no gains] during the no-training period).

To interpret this pattern of outcomes, all three contrasts should have adequate statistical power (that is, at least an 80 percent chance of detecting the effect as statistically significant, if the effect in fact exists).[19] A chance elevation of Group II, for example, might lead to gross misinterpretations. Hence, use the design only with reliable measures and large samples.[20]
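As a rough illustration of what adequate power requires, the following Python sketch (assuming the statsmodels library is available) estimates the sample size needed per group; the medium effect size of d = .50 is a hypothetical assumption, not a value from the study described above:

```python
from statsmodels.stats.power import TTestIndPower

# Sample size per group needed to detect a medium effect (d = .50)
# with 80 percent power at alpha = .05 in a two-sided t-test.
n_per_group = TTestIndPower().solve_power(effect_size=0.5, power=0.80, alpha=0.05)
print(round(n_per_group))  # roughly 64 per group
```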

This design controls history and test-retest effects, but not differences in selection. One way to control for possible differences in selection, however, is to split one of the groups (assuming it is large enough) into two equivalent samples, one measured both before and after training and the other measured only after training, as shown in Table 2-1.

Table 2-1. Example of an Institutional Cycle Design

            Time 2    Time 3    Time 4
Group IIa   Measure   Train     Measure
Group IIb             Train     Measure

Comparison of post-test scores in two carefully equated groups (Groups IIa and IIb) is more precise than a similar comparison of post-test scores from two unequated groups (Groups I and II).

A final deficiency in the “cycle” design is the lack of adequate control for the effects of maturation. This is not a serious limitation if the training program is teaching specialized skills or competencies, but it is a plausible rival hypothesis when the objective of the training program is to change attitudes. Changes in attitudes conceivably could be explained by maturational processes such as changes in job and life experiences or growing older. To control for this effect, give a comparable group of managers (whose age and job experience coincide with those of one of the trained groups at the time of testing) a “post-test-only” measure. To infer that training had a positive effect, post-test scores of the trained groups should be significantly greater than those of the untrained group receiving the “post-test-only” measure.

Campbell and Stanley aptly expressed the logic of all this patching and adding:[21]

One starts out with an inadequate design and then adds specific features to control for one or another of the recurrent sources of invalidity. The result is often an inelegant accumulation of precautionary checks, which lacks the intrinsic symmetry of the “true” experimental designs, but nonetheless approaches experimentation.

Remember, a causal inference from any quasi-experiment must meet the basic requirements for all causal relationships: that cause must precede effect, that cause must covary with effect, and that alternative explanations for the causal relationship are implausible.[22] Patching and adding may help satisfy these requirements.

Fundamental Analytical Concepts from Economics and Finance

The analytical concepts previously discussed come largely from psychology and related individual-focused social sciences. However, the fields of economics and finance also provide useful general analytical concepts for measuring HRM programs and consequences. Here, the focus is often on properly accounting for the implicit sacrifices involved in choices, the behavior of markets, and the nature of risk.

We consider concepts in the following six areas: fixed, variable, and opportunity costs/savings; the time value of money; estimating the value of employee time using total pay; cost-benefit and cost-effectiveness analyses; utility as a weighted sum of utility attributes; and sensitivity and break-even analysis.

Fixed, Variable, and Opportunity Costs/Savings

We can distinguish fixed, variable, and opportunity costs, as well as reductions in those costs, which we call “savings.” Fixed costs or savings refer to those that remain constant, whose total does not change in proportion to the activity of interest. For example, if an organization is paying rent or mortgage interest on a training facility, the cost does not change with the volume of training activity. If all training is moved to online delivery and the training center is sold, the fixed savings equal the rent or interest that is now avoided.

Variable costs or savings are those that change in direct proportion to changes in some particular activity level.[23] The food and beverage cost of a training program is variable with regard to the number of training participants. If a less-expensive food vendor replaces a more-expensive one because it charges a lower price per food delivery, the variable savings represent the difference between the costs of the more-expensive and the less-expensive vendors.

Finally, opportunity costs reflect the “opportunities foregone” that might have been realized had the resources allocated to the program been directed toward other organizational ends.[24] This is often conceived of as the sacrifice of the value of the next best alternative use of the resources. For example, if we choose to have employees travel to a training program, the opportunity cost might be the value they would produce if they were back at their regular locations working on their regular jobs. Opportunity savings are the next-best uses of resources that we obtain if we alter the opportunity relationships. For example, if we provide employees with laptop computers or handheld devices that allow them to use e-mail to resolve issues at work while they are in the training program, the opportunity savings represent the difference between the value that would have been sacrificed without the devices and the reduced sacrifice with the devices.

The Time Value of Money—Compounding, Discounting, and Present Value[25]

In general, the time value of money refers to the fact that a dollar in hand today is worth more than a dollar promised sometime in the future. That is because a dollar in hand today can be invested to earn interest. If you were to invest that dollar today at a given interest rate, it would grow over time from its present value (PV) to some future value (FV). The amount you would have depends, therefore, on how long you hold your investment, and on the interest rate you earn. Let us consider a simple example.

If you invest $100 and earn 10 percent on your money per year, you will have $110 at the end of the first year. It is composed of your original principal, $100, plus $10 in interest that you earn. Hence, $110 is the FV of $100 invested for one year at 10 percent. In general, if you invest for one period at an interest rate of r, your investment will grow to (1 + r) per dollar invested.

Suppose you decide to leave your $100 investment alone for another year after the first. Assuming the interest rate (10 percent) does not change, you will earn $110 × .10 = $11 in interest during the second year, so you will have a total of $110 + $11 = $121. This $121 has four parts. The first is the $100 original principal. The second is the $10 in interest you earned after the first year, and the third is another $10 you earn in the second year, for a total of $120. The last dollar you earn (the fourth part) is interest you earn in the second year on the interest paid in the first year ($10 × .10 = $1).

This process of leaving your money and any accumulated interest in an investment for more than one period, thereby reinvesting the interest, is called compounding, or earning interest on interest. We call the result compound interest. At a general level, the FV of $1 invested for t periods at a rate of r per period is as follows:

Equation 2-1.

FV = $1 × (1 + r)^t

FVs depend critically on the assumed interest rate, especially for long-lived investments. Equation 2-1 is actually quite general and allows us to answer some other questions related to growth. For example, suppose your company currently has 10,000 employees. Senior management estimates that the number of employees will grow by 3 percent per year. How many employees will work for your company in five years? In this example, we begin with 10,000 people rather than dollars, and we don’t think of the growth rate as an interest rate, but the calculation is exactly the same:

10,000 × (1.03)^5 = 10,000 × 1.1593 = 11,593 employees

There will be about 1,593 net new hires over the coming five years.
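These compounding calculations are easy to verify in code. Here is a minimal sketch in Python, covering both the $121 example and the headcount projection:

```python
def future_value(pv, r, t):
    """Equation 2-1: value after t periods of compounding at rate r per period."""
    return pv * (1 + r) ** t

print(round(future_value(100, 0.10, 2), 2))   # 121.0, the two-year example above
print(round(future_value(10_000, 0.03, 5)))   # 11593 employees
```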

Present Value and Discounting

We just saw that the FV of $1 invested for one year at 10 percent is $1.10. Suppose we ask a slightly different question; namely, how much do we have to invest today at 10 percent to get $1 in one year? We know the FV is $1, but what is its PV? Whatever we invest today will be 1.1 times bigger at the end of the year. Because we need $1 at the end of the year:

PV × 1.1 = $1

Solving for the PV yields $1/1.1 = $0.909. This PV is the answer to the question, “What amount invested today will grow to $1 in one year if the interest rate is 10 percent?” PV is therefore just the reverse of FV. Instead of compounding the money forward into the future, we discount it back to the present.

Now suppose that you set a goal to have $1,000 in two years. If you can earn 7 percent each year, how much do you have to invest to have $1,000 in two years? In other words, what is the PV of $1,000 in two years if the relevant rate is 7 percent? To answer this question, let us express the problem as this:

$1,000 = PV × 1.07 × 1.07 = PV × (1.07)^2 = PV × 1.1449

Solving for PV:

PV = $1,000/1.1449 = $873.44
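A minimal sketch of the discounting calculation in Python, verifying both PV examples above:

```python
def present_value(fv, r, t):
    """Equation 2-2: discount a future cash flow back t periods at rate r."""
    return fv / (1 + r) ** t

print(round(present_value(1, 0.10, 1), 3))      # 0.909
print(round(present_value(1_000, 0.07, 2), 2))  # 873.44
```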

At a more general level, the PV of $1 to be received t periods into the future at a discount rate of r is as follows:

Equation 2-2.

PV = $1 × [1/(1 + r)^t] = $1/(1 + r)^t

The quantity in brackets, 1/(1 + r)^t, is used to discount a future cash flow. Hence, it is often called a discount factor. Likewise, the rate used in the calculation is often called the discount rate. Finally, calculating the PV of a future cash flow to determine its worth today is commonly called discounted cash flow (DCF) valuation. If we apply the DCF valuation to estimate the PV of future cash flows from an investment, it is possible to estimate the net present value (NPV) of that investment as the difference between the PV of the future cash flows and the cost of the investment. Indeed, the capital-budgeting process can be viewed as a search for investments with NPVs that are positive.[26]

When calculating the NPV of an investment project, we tend to assume not only that a company’s cost of capital is known, but also that it remains constant over the life of a project. In practice, a company’s cost of capital may be difficult to estimate, and the selection of an appropriate discount rate for use in investment appraisal is also far from straightforward. The cost of capital is also likely to change over the life of a project because it is influenced by the dynamic economic environment within which all business is conducted.

If these changes can be forecast, however, the NPV method can accommodate them without difficulty.[27]
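To make the NPV logic concrete, here is a minimal sketch in Python; the program cost, cash flows, and discount rate are hypothetical:

```python
def npv(rate, cash_flows, investment):
    """Net present value: the PV of future cash flows minus the initial investment.
    cash_flows[t] is the amount received at the end of period t + 1."""
    pv = sum(cf / (1 + rate) ** (t + 1) for t, cf in enumerate(cash_flows))
    return pv - investment

# Hypothetical: a $100,000 program expected to return $40,000 per year
# for three years, discounted at 8 percent.
print(round(npv(0.08, [40_000, 40_000, 40_000], 100_000), 2))  # 3083.88
```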

Now back to PV calculations. PVs decline as the length of time until payment grows. Look out far enough, and PVs will be close to zero. Also, for a given length of time, the higher the discount rate is, the lower the PV. In other words, PVs and discount rates are inversely related. Increasing the discount rate decreases the PV, and vice versa.

If we let FVt stand for the FV after t periods, the relationship between FV and PV can be written simply as one of the following:

Equation 2-3.

PV × (1 + r)^t = FVt

PV = FVt/(1 + r)^t

The last result is called the basic PV equation. There are a number of variations of it, but this simple equation underlies many of the most important ideas in corporate finance.[28]

Sometimes one needs to determine what discount rate is implicit in an investment. We can do this by looking at the basic PV equation:

PV = FVt/(1 + r)^t

There are only four parts to this equation: the present value (PV), the future value (FVt), the discount rate (r), and the life of the investment (t). Given any three of these, we can always find the fourth, as the sketch below illustrates for the discount rate.
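Solving Equation 2-3 for r gives r = (FVt/PV)^(1/t) − 1. A minimal sketch in Python, with hypothetical values:

```python
def implicit_rate(pv, fv, t):
    """Solve the basic PV equation PV = FV / (1 + r)^t for r."""
    return (fv / pv) ** (1 / t) - 1

# Hypothetical: an investment that turns $100 into $121 over two periods.
print(round(implicit_rate(100, 121, 2), 4))  # 0.1, that is, 10 percent
```

Now let’s shift gears and consider the value of employees’ time.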

Estimating the Value of Employee Time Using Total Pay

Many calculations in HR measurement involve an assessment of the value of employees’ time (for example, those involving exit interviews, attendance at training classes, managing problems caused by absenteeism, or the time taken to screen job applications). One way to account for that time, in financial terms, is to use total pay to the employee. The idea is to use the value of what employees earn as a proxy for the value of their time. This approach is very common, so we provide some guidelines here. However, at the end, we also caution that the assumption that total pay equals the value of employee time is not generally valid.

Should “total pay” include only the average annual salary of employees in a job class? In other words, what should be the valuation base? If it includes only salary, the resulting cost estimates will underestimate the full cost of employees’ time, because it fails to include the cost of employee benefits and overhead. Overhead costs include such items as rent, energy costs, and equipment. More generally, overhead costs are those general expenses incurred during the normal course of operating a business. At times, these costs may be called general and administrative or payroll burden. They may be calculated as a percentage of actual payroll costs (salaries plus benefits).[29]

To provide a more realistic estimate of the cost of employee time, therefore, many recommend calculating it as the mean salary of the employees in question (for example, technical, sales, managerial) times a full labor-cost factor.[30] The full labor-cost multiplier incorporates benefits and overhead costs.

To illustrate, suppose that in estimating the costs of staff time to conduct exit interviews, we assume that an HR specialist is paid $23 per hour, and that it takes 15 minutes to prepare and 45 minutes to conduct each interview, for a total of 1 hour of his or her time. If the HR specialist conducts 100 exit interviews in a year, the total cost of his or her time is therefore $2,300. However, after checking with the accounting and payroll departments, suppose we learn that the firm pays an additional 40 percent of salary in the form of employee benefits, and that overhead costs add an additional 35 percent. The full labor-cost multiplier is therefore 1.75, and the cost per exit interview is $23 × 1.75 = $40.25. Over a 1-year period and 100 exit interviews, the total cost of the HR specialist’s time is therefore $4,025—a difference of $1,725 from the $2,300 that included only salary costs.
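As a minimal sketch, the calculation can be wrapped in a small Python function (the figures are those of the example; the function name is ours):

```python
def time_cost(hourly_pay, benefits_rate, overhead_rate, hours):
    """Cost of employee time as pay times a full labor-cost multiplier."""
    multiplier = 1 + benefits_rate + overhead_rate
    return hourly_pay * multiplier * hours

# The exit-interview example: $23/hour, 40% benefits, 35% overhead,
# and 100 one-hour interviews per year.
print(round(time_cost(23, 0.40, 0.35, hours=100), 2))  # 4025.0
```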

Note that total pay, using whatever calculation, is generally not synonymous with the fixed, variable, or opportunity costs of employee time. It is a convenient proxy, but must be used with great caution. In most situations, the costs of employee time (wages, benefits, overhead costs to maintain the employees’ employment or productivity) simply don’t change as a result of their allocation of time. They are paid no matter what they do, as long as it is a legitimate part of their jobs. If we require employees to spend an hour interviewing candidates, their total pay for the hour is no different than if we had not required that time. Moreover, they would still be paid even if they weren’t conducting interviews. The more correct concept is the opportunity cost of the lost value that employees would have been creating if they had not been using their time for interviewing. That is obviously not necessarily equal to the cost of their wages, benefits, and overhead. That said, it is so difficult to estimate the opportunity cost of employees’ time that it is very common for accounting processes just to recommend multiplying the time by the value of total pay. The important thing to realize is the limits of such calculations, even if they provide a useful proxy.

Cost-Benefit and Cost-Effectiveness Analyses

Cost-benefit analysis expresses both the benefits and the costs of a decision in monetary terms. One of the most popular forms of cost-benefit analysis is return-on-investment (ROI) analysis.

Traditionally associated with hard assets, ROI relates program profits to invested capital. It does so in terms of a ratio in which the numerator expresses some measure of profit related to a project, and the denominator represents the initial investment in a program. More specifically, ROI includes the following:[31]

  1. The inflow of returns produced by an investment.

  2. The offsetting outflows of resources required to make the investment.

  3. How the inflows and outflows occur in each future time period.

  4. How much what occurs in future time periods should be “discounted” to reflect greater risk and price inflation.

ROI has both advantages and disadvantages. Its major advantage is that it is simple and widely accepted. It blends in one number all the major ingredients of profitability, and it can be compared with other investment opportunities. On the other hand, it suffers from two major disadvantages. One, although the logic of ROI analysis appears straightforward, there is much subjectivity in items 1, 3, and 4 above. Two, typical ROI calculations focus on one HR investment at a time and fail to consider how those investments work together as a portfolio. Training may produce value beyond its cost, but would that value be even higher if it were combined with proper investments in individual incentives related to the training outcomes?[32]

Here is a simple example of the ROI calculation over a single time period. Suppose your company develops a battery of pre-employment assessments for customer-service representatives that includes measures of aptitude, relevant personality characteristics, and emotional intelligence. Payments to outside consultants total $500,000 during the first year of operation. The measured savings, relative to baseline measures in prior years, total $30,000 in reduced absenteeism, $55,000 in reduced payments for stress-related medical conditions, and $70,000 in reduced turnover among customer-service representatives. The total expected benefits are therefore $155,000.

ROI = Total expected benefit/program investment

ROI = $155,000 / $500,000 = 31 percent
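A minimal sketch of this single-period ROI calculation in Python:

```python
def roi(total_benefits, program_investment):
    """Single-period return on investment as a ratio of benefits to costs."""
    return total_benefits / program_investment

benefits = 30_000 + 55_000 + 70_000  # absenteeism, medical, and turnover savings
print(f"ROI = {roi(benefits, 500_000):.0%}")  # ROI = 31%
```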

Cost-effectiveness analysis is similar to cost-benefit analysis, but whereas the costs are still measured in monetary terms, outcomes are measured in “natural” units other than money. Cost-effectiveness analysis identifies the cost of producing a unit of effect (for example, in a corporate-safety program, the cost per accident avoided). As an example, consider the results of a three-year study of the cost-effectiveness of three types of worksite health-promotion programs for reducing risk factors associated with cardiovascular disease (hypertension, obesity, cigarette smoking, and lack of regular physical exercise) at three manufacturing plants, compared to a fourth site that provided health-education classes only.[33]

The plants were similar in size and in the demographic characteristics of their employees. Plants were allocated randomly to one of four worksite health-promotion models. Site A provided health education only. Site B provided a fitness facility; site C provided health education plus follow-up that included a menu of different intervention strategies; and site D provided health education, follow-up, and social organization of health promotion within the plant.

Over the three-year period of the study, the annual, direct cost per employee was $17.68 for site A, $39.28 for site B, $30.96 for site C, and $38.57 for site D (in 1992 dollars). The reduction in risks ranged from 32% at site B to 45% at site D for high-level reduction or relapse prevention, and from 36% (site B) to 51% (site D) for moderate reduction. These differences were statistically significant.

At site B, the greater amount of money spent on the fitness facility produced less risk reduction (–3%) than the comparison program (site A). The additional cost per employee per year (beyond those incurred at site A) for each percent of risks reduced or relapses prevented was –$7.20 at site B (fitness facility), $1.48 for site C (health education plus follow-up), and $2.09 at site D (health education, follow-up, and social organization of health promotion at the plant). At sites C and D, the percent of effectiveness at reducing risks/preventing relapse was about 1.3% to 1.5% per dollar spent per employee per year, and the total cost for each percent of risk reduced or relapse prevented was less than $1 per employee per year ($0.66 and $0.76, at sites C and D, respectively).

In summary, both cost-benefit and cost-effectiveness analyses can be useful tools for evaluating benefits, relative to the costs of programs or investments. Whereas cost-benefit analysis expresses benefits in monetary terms, and can accommodate multiple time periods and discount rates, cost-effectiveness analysis expresses benefits in terms of the cost incurred to produce a given level of an effect. Cost-benefit analysis enables us to compare the absolute value of the returns from very different programs or decisions, because they are all calculated in the same units of money. Cost-effectiveness, on the other hand, makes such comparisons somewhat more difficult because the outcomes of the different decisions may be calculated in very different units. How do you decide between a program that promises a cost of $1,000 per avoided accident versus a program that promises $300 per unit increase of employee satisfaction? Cost-effectiveness can prove quite useful for comparing programs or decisions that all have the same outcome (for example, which accident-reduction program to choose).

It’s a dilemma when one must decide among programs that produce very different outcomes (such as accident reduction versus employee satisfaction) and when all outcomes of programs cannot necessarily be expressed in monetary terms. However, many decisions require such comparisons. One answer is to calculate “utilities” (from the word use) that attempt to capture systematically the subjective value that decision makers place on different outcomes, when the outcomes are compared directly to each other.

Utility as a Weighted Sum of Utility Attributes

Utility analysis is a tool for making decisions. It is the determination of institutional gain or loss anticipated from various courses of action, after taking into account both costs and benefits. For example, in the context of HRM, the decision might be which type of training to offer or which selection procedure to implement. When faced with a choice among alternative options, management should choose the option that maximizes the expected utility for the organization across all possible outcomes.[34]

In general, there are two types of decisions: those where the outcomes of available options are known for sure (decisions under certainty), and those where the outcomes are uncertain and occur with known or uncertain probabilities (decisions under uncertainty). Most theories about judgment and decision-making processes have focused on decisions under uncertainty, because they are more common.[35]

One such theory is subjective expected utility theory, and it holds that choices are derived from only two parameters:

  • The subjective value, or utility, of an option’s outcomes

  • The estimated probability of the outcomes

By multiplying the utilities with the associated probabilities and summing over all consequences, it is possible to calculate an expected utility. The option with the highest expected utility is then chosen.

A rational model of decision making that has been used as a guide to study actual decision behavior and as a prescription to help individuals make better decisions is known as multi-attribute utility theory (MAUT). MAUT is a type of subjective expected utility theory that has been particularly influential in attempts to improve individual and organizational decision making. Here is a brief conceptual overview of how it works.

Using MAUT, decision makers carefully analyze each decision option (alternative program or course of action under consideration) for its important attributes (things that matter to decision makers). For example, one might characterize a job in terms of attributes such as salary, chances for promotion, and location. Decision weights are assigned to attributes according to their importance to decision makers. Each available option is then assessed according to a utility scale for its expected value on all attributes. After multiplying the utility-scale values by the decision weights and summing the products, the option with the highest value is selected.[36] Total utility values for each option are therefore computed by means of a payoff function, which specifies how the attribute levels are to be combined into an overall utility value.

To illustrate, suppose that a new MBA receives two job offers. She decides that the three most important characteristics of these jobs that will influence her decision are salary, chances for promotion, and location. She assigns the following weights to each: salary (.35), chances for promotion (.40), and location (.25). Using a 1–5 utility scale of the expected value of each job offer on each attribute, where 1 = low expected value, and 5 = high expected value, suppose she assigns ratings to the two job offers as shown in Table 2-2.

Table 2-2. Multi-Attribute Utility Table Showing Job Attributes and Their Weights, the Values Assigned to Each Attribute, and the Payoff Associated with Each Alternative Decision

Attribute      Salary   Promotion   Location   Payoff (Weight × Value)
Weight         .35      .40         .25
Job A values   3        4           2          3.15
Job B values   4        3           4          3.60

Based on this calculation of multi-attribute utility, the new MBA should accept job B because it maximizes her expected utility across all possible outcomes. MAUT models can encompass a variety of decision options, numerous and diverse sets of attributes reflecting many different constituents, and very complex payoff functions, but they generally share the characteristics shown in the simple example in the preceding table.[37]
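The payoff computation itself is just a weighted sum. A minimal sketch in Python, reproducing Table 2-2:

```python
def maut_payoff(weights, values):
    """Multi-attribute utility: the weighted sum of attribute values."""
    return sum(w * v for w, v in zip(weights, values))

weights = [0.35, 0.40, 0.25]  # salary, promotion, location

print(round(maut_payoff(weights, [3, 4, 2]), 2))  # Job A: 3.15
print(round(maut_payoff(weights, [4, 3, 4]), 2))  # Job B: 3.6
```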

At a broader level, Huber identified five advantages of MAUT models over less systematic and structured decision systems:[38]

  • Because they make explicit a view of the decision situation, they help to identify the inadequacies of the corresponding, implicit mental model.

  • The attributes contained in such models serve as reminders of the information needed for consideration of each alternative.

  • The informational displays and models used in the mathematical model serve to organize external memories.

  • They allow the aggregation of large amounts of information in a prescribed and systematic manner.

  • They facilitate communication with, and support from, constituencies.

Sensitivity and Break-Even Analysis

Both of these techniques are attempts to deal with the fact that utility values are estimates made under uncertainty. Hence, actual utility values may vary from estimated values, and it is helpful to decision makers to be able to estimate the effects of such variability. One way to do that is through sensitivity analysis.

In sensitivity analysis, each of the utility parameters is varied from its low value to its high value while holding other parameter values constant. You then examine the utility estimates that result from each combination of parameter values to determine which parameter’s variability has the greatest effect on the estimate of overall utility.

In the context of evaluating HR programs, sensitivity analyses almost always indicate that utility parameters that reflect changes in the quality of employees caused by improved selection, as well as increases in the number (quantity) of employees affected, have substantial effects on resulting utility values.[39] Utility parameters that reflect changes in the quality of employees include improvements in the validity of the selection procedure, the average score on the predictor, and dollar-based increases in the variability of performance.
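The following is a minimal one-at-a-time sensitivity analysis sketched in Python. The simple utility function and all parameter values are hypothetical stand-ins for the fuller selection-utility models discussed in later chapters:

```python
def utility(quality_gain, n_hired, value_per_unit, cost):
    """Toy utility model: quality gain x headcount x dollar value, less program cost."""
    return quality_gain * n_hired * value_per_unit - cost

baseline = {"quality_gain": 0.35, "n_hired": 100,
            "value_per_unit": 10_000, "cost": 200_000}
low_high = {"quality_gain": (0.20, 0.50),
            "n_hired": (50, 150),
            "value_per_unit": (8_000, 15_000)}

# Vary one parameter at a time from low to high, holding the others at baseline.
for name, (low, high) in low_high.items():
    u_low = utility(**{**baseline, name: low})
    u_high = utility(**{**baseline, name: high})
    print(f"{name}: utility ranges from {u_low:,.0f} to {u_high:,.0f}")
```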

Although sensitivity analyses are valuable in assessing the effects of changes in individual parameters, they provide no information about the effects of simultaneous changes in more than one utility parameter. Break-even analysis overcomes that difficulty.

Instead of estimating the level of expected utility, suppose decision makers focus on the break-even value that is critical to making a decision. In other words, what is the smallest value of any given parameter that will generate a positive utility (payoff)? For example, suppose we know that a training program conducted for 500 participants raises technical knowledge by 10 percent or more for 90 percent of them. Everyone agrees that the value of the 10 percent increase is greater than $1,000 per trainee. The total gain is therefore at least 500 × .90 × $1,000 = $450,000. Assuming that the cost of the training program is $600 per trainee, the total cost is therefore 500 × $600 = $300,000. Researchers and managers could spend lots of time debating the actual economic value of the increase in knowledge, but, in fact, it does not matter because even the minimum agreed-upon value ($1,000) is enough to recoup the costs of the program. More precisely, when the costs of a program are matched exactly by equivalent benefits—no more, no less—the program “breaks even.” This is the origin of the term break-even analysis.[40]
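A minimal sketch of the break-even computation in Python, using the figures from this example:

```python
def break_even_benefit(n_trainees, success_rate, cost_per_trainee):
    """Smallest per-trainee benefit at which total benefits just cover total costs."""
    total_cost = n_trainees * cost_per_trainee
    return total_cost / (n_trainees * success_rate)

# The example above: 500 trainees, 90 percent of whom improve, at $600 each.
print(round(break_even_benefit(500, 0.90, 600), 2))
# 666.67, comfortably below the agreed minimum value of $1,000 per trainee
```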

Break-even analysis has several major advantages. It provides a mechanism for concisely summarizing the potential impact of uncertainty in one or more utility parameters.[41] It shifts emphasis away from estimating a utility value toward making a decision using imperfect information. It pinpoints areas where controversy is important to decision making (that is, where there is doubt about whether the break-even value is exceeded), versus where controversy has little impact (because there is little risk of observing utility values below break-even). In summary, break-even analysis provides a simple expedient that allows utility models to assist in decision making even when some utility parameters are unknown or are uncertain.

Conclusion

As noted at the outset, the purpose of this chapter is to present some general analytical concepts that we will revisit throughout this book. The issues that we discussed comprised two broad areas:

  • Some fundamental analytical concepts from statistics and research design

  • Some fundamental analytical concepts from economics and finance

In the first category, we considered the following concepts: cautions in generalizing from sample data, correlation and causality, and experiments and quasi-experiments. In the second category, we considered some economic and financial concepts in six broad areas: fixed, variable, and opportunity costs/savings; the time value of money; estimating the value of employee time using total pay; cost-benefit and cost-effectiveness analyses; utility as a weighted sum of utility attributes; and sensitivity and break-even analysis. All of these concepts are important to HR measurement, and understanding them will help you to develop reliable, valid metrics. It will be up to you, of course, to determine whether those metrics fit the strategic direction of your organization.

References

1.

Society for Human Resource Management Foundation. (2004). HR in alignment: The link to business results (DVD). Alexandria, VA: Society for Human Resource Management Foundation.

2.

Ibid. See also Carrig, K., & Wright, P. M. (2006). Building Profit Through Building People. Alexandria, VA: Society for Human Resource Management.

3.

Jackson, S. L. (2003). Research Methods and Statistics: A Critical Thinking Approach. Belmont, CA: Wadsworth/Thomson Learning.

4.

Ibid.

5.

See, for example, Cochran, W. G. (1966). Sampling Techniques (2nd ed.). New York: Wiley. Kerlinger, F.N., & Lee, H. B. (2000). Foundations of Behavioral Research (4th ed.). Stamford, CT: Thomson Learning.

6.

For more on this, see Cohen, J., & Cohen, P. (1983). Applied Multiple Regression/Correlation Analysis for the Behavioral Sciences (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum. Dancey, C. P., & Reidy, J. (2004). Statistics without Maths for Psychology: Using SPSS for Windows (3rd ed.). Harlow, England: Prentice Hall.

Draper, N. R., & Smith, H. (1981). Applied Regression Analysis. New York: Wiley.

7.

Guilford, J. P., & Fruchter, B. (1978). Fundamental Statistics in Psychology and Education (6th ed.). New York: McGraw-Hill.

8.

Kelly, K., Ang, S., Yeo, G. H. H., & Cascio, W. F. (Aug. 2007). Employee turnover and firm performance: Modeling reciprocal effects. Paper presented at the annual conference of the Academy of Management, Philadelphia, PA.

9.

Rosnow, R. L., & Rosenthal, R. (1984). Understanding Behavioral Science: Research Methods for Consumers. New York: McGraw-Hill.

10.

Pedhazur, E. (1982). Multiple Regression in Behavioral Research: Explanation and Prediction (2nd ed.). New York: Holt, Rinehart, & Winston.

11.

Kerlinger & Lee, 2000, op. cit.

12.

For more on this, see Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and Quasi-Experimental Designs for Generalized Causal Inference. Boston: Houghton Mifflin.

13.

Jackson, 2003, op. cit.

14.

Schmidt, F. L., & Hunter, J. (2003a). History, development, evolution, and impact of validity generalization and meta-analysis methods, 1975-2001. In K. R. Murphy (Ed.), Validity Generalization: A Critical Review (p. 31-65). Mahwah, NJ: Lawrence Erlbaum. Schmidt, F. L., & Hunter, J. E. (2003b). Meta-analysis. In J. A. Schinka & W. F. Velicer (Eds.), Handbook of Psychology: Research Methods in Psychology (vol. 2, p. 533-554). New York: John Wiley & Sons.

15.

Cascio, W. F., & Aguinis, H. (2005). Applied Psychology in Human Resource Management (6th ed.). Upper Saddle River, NJ: Prentice-Hall.

16.

Kerlinger & Lee, 2000, op. cit.

17.

Shadish et al., 2002, op. cit.

18.

As presented by Cascio & Aguinis, 2005, op. cit.

19.

Cohen, J. (1988). Statistical Power Analysis for the Behavioral Sciences (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum.

20.

Shadish et al., 2002, op. cit.

21.

Campbell, D. T., & Stanley, J. C. (1963). Experimental and Quasi-Experimental Designs for Research. Chicago: Rand McNally.

22.

Shadish et al., 2002, op. cit.

23.

Swain, M. R., Albrecht, W. S., Stice, J. D., & Stice, E. K. (2005). Management Accounting (3rd ed.). Mason, OH: Thomson/South-Western.

24.

Ibid.; Rothenberg, J. (1975). Cost-benefit analysis: A methodological exposition. In M. Guttentag and E. Struening (Eds.), Handbook of Evaluation Research (vol. 2, p. 75-106). Beverly Hills, CA: Sage.

25.

Material in this section is drawn largely from Brealey, R. A., Myers, S. C., and Allen, F. (2006). Principles of Corporate Finance. New York: McGraw-Hill/Irwin. Ross, S. A., Westerfield, R. W., & Jordan, B. D. (2003). Fundamentals of Corporate Finance (6th ed.). Burr Ridge, IL: McGraw-Hill/Irwin. Watson, D., & Head, A. (2001). Corporate Finance: Principles and Practice (2nd ed.). London: Pearson Education. This is just a brief introduction to these concepts at a conceptual level, and includes only rudimentary calculations.

26.

Ross et al., 2003, op. cit.

27.

Watson & Head, 2001, op. cit.

28.

Ross et al., 2003, op. cit.

29.

Audit guide for consultants. (August 1999). Downloaded from http://www.wsdot.wa.gov/NR/rdonlyres/53951C28-068F-4505-835B-58B2C9C05C7C/0/Chapter6.pdf on January 19, 2007. Overhead and indirect costs. Downloaded from http://www.ucalgary.ca/UofC/research/html/cont_indus/overhead.html on January 19, 2007.

30.

Raju, N. S., Burke, M. J., Normand, J., & Lezotte, D. V. (1993). What would be if what is wasn’t? Rejoinder to Judiesch, Schmidt, and Hunter (1993). Journal of Applied Psychology, 78, 912-916.

31.

Boudreau, J. W., & Ramstad, P. M. (2006). Talentship and HR measurement and analysis: From ROI to strategic organizational change. Human Resource Planning, 29(1), 25-33.

32.

Boudreau, J. W., & Ramstad, P. M. (2007). Beyond HR: The New Science of Human Capital. Boston, MA: Harvard Business School Publishing.

33.

Erfurt, J. C., Foote, A., & Heirich, M. A. (1992). The cost-effectiveness of worksite wellness programs for hypertension control, weight loss, smoking cessation, and exercise. Personnel Psychology, 45, 5-27.

34.

Cascio, W. F. (2007). Utility analysis. In S. Rogelberg (Ed.), Encyclopedia of Industrial and Organizational Psychology (Vol. 2, p. 854-858). Thousand Oaks, CA: Sage.

35.

Slaughter, J. E., & Reb, J. (2007). Judgment and decision-making process. In S. Rogelberg (Ed.), Encyclopedia of Industrial and Organizational Psychology (Vol. 1, p. 424-427). Thousand Oaks, CA: Sage.

36.

Ibid.

37.

Boudreau, J.W. (1991). Utility analysis for decisions in human resource management. In M.D. Dunnette & L.M. Hough (Eds.), Handbook of Industrial and Organizational Psychology (Vol. 2, p. 621-745). Palo Alto, CA: Consulting Psychologists Press.

38.

Huber, G. P. (1980). Managerial Decision Making. Glenview, IL: Scott Foresman.

39.

Boudreau, 1991, op. cit.

40.

Cascio, 2007, op. cit.

41.

Boudreau, 1991, op. cit.
