7

Lies About the Return on Learning

David Vance

Return on learning is a topic about which many people hold strong beliefs, and they are not afraid to share them. I was once asked to join a group of my colleagues for a two-hour discussion on the return on investment (ROI) for learning. I was happy to oblige and looked forward to a good discussion on its application in our field. When the moderator asked how many of us used ROI, I was the only one—of the 15 people in the room—who said yes. The group proceeded to catalog the reasons why ROI should never be used. So it turns out I was invited to be the token ROI supporter, which was fine because I believe ROI can play an important role in learning. But I learned that day how truly controversial this topic is in the learning field.

Why all the fuss? And why the passionate debate? Well, I think there are many misunderstandings about ROI for learning. In this chapter, I will try to sort them out by examining eight lies on the topic. I will then provide a framework for confronting—and dealing with—these lies, which I hope will allow practitioners to benefit from key insights, while avoiding the common pitfalls.

But first, here’s a little background on ROI for learning. Jack Phillips introduced the concept in his 1983 book, Handbook of Training Evaluation and Measurement Methods. It is often referred to as Level 5 of learning evaluation, supplementing the four levels introduced by Don Kirkpatrick in 1959. Recall that Level 4 in the Kirkpatrick framework is results, which Phillips refers to as impact. Phillips starts with impact and then assigns it a dollar value, which becomes the gross benefit or bottom-line impact of the learning before learning costs are considered. The next step is to subtract the total cost of the learning (accounting costs like development and delivery, plus the opportunity cost or value of the participant’s time) to determine the net benefit to the organization from investing in the learning program. Finally, ROI can be calculated as the net benefit divided by total cost and expressed as a percentage.

Return on Learning Example

Suppose the impact (Level 4 of the Kirkpatrick framework) of a training initiative is a 2 percent increase in sales, of which the sales or accounting department determines the bottom-line impact is $500,000. The learning and development department calculates the total cost of the training to be $300,000. We can now calculate the net benefit and the ROI of the training:

Net benefit: Gross benefit ($500,000) – Total cost ($300,000) = $200,000

ROI: Net benefit ($200,000) / Total cost ($300,000) = 67 percent

Both the net benefit and the ROI compare benefits with costs; the net benefit is expressed in dollars, and the ROI as a percentage.

These equations allow us to create a traditional business case for learning, in which the benefits of an investment are explicitly compared with the costs, providing the information a decision maker needs to decide whether to fund a project and to determine whether it was a good investment. The case can be made by looking at just the net benefit ($200,000) or the ROI (67 percent) of the learning project.
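
For readers who like to see the arithmetic spelled out, here is a minimal sketch of the calculation in Python, using the figures from the example above; the variable names are mine, not part of any formal methodology.

gross_benefit = 500_000   # dollar value assigned to the 2 percent sales increase (the impact)
total_cost = 300_000      # development and delivery costs plus the value of participants' time

net_benefit = gross_benefit - total_cost        # $200,000
roi_percent = net_benefit / total_cost * 100    # roughly 67 percent

print(f"Net benefit: ${net_benefit:,}")
print(f"ROI: {roi_percent:.0f} percent")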

In this chapter, I use the term return on learning to include any use of net benefit, ROI, or a similar measure to compare the benefit of a training program with its cost. We will start by examining five lies that support not calculating a return on learning, and then continue by examining three lies about using ROI in particular.

Lie #1: We Don’t Need to Measure Return on Learning

The most common lie I hear for not calculating a return on learning is simply “We don’t need to measure it.” In short, the argument is that leaders know we are doing a good job and thus no one is asking to see a return. Let’s explore this in three parts.

We are actually doing good work, and our leaders believe they are getting a good return on their investment.

While I appreciate the trust senior leaders place in my learning colleagues and the outstanding relationships many have with their CEOs, we have an obligation to make a business case for large investments and to determine how those investments have performed. Note that I’m not suggesting we compute a return on learning for every program. Doing so consumes valuable time and resources. But we should for all large programs, whether large in budget or in number of participants. Not doing so is simply an abrogation of our fiduciary responsibility as managers.

There are two good reasons to calculate the return on learning. First, you will learn more about your program’s costs and benefits by analyzing them, which will allow you to improve the program, if necessary. When I was the chief learning officer of Caterpillar University, we projected a return on learning for our major programs and conducted three post-deployment return-on-learning studies every year. These studies helped us continually improve. Second, your supportive CEO may not always be there, or your organization’s financial picture may change. The best leaders are always prepared and do not take funding for granted.

We are actually not doing good work.

Learning leaders may think that their return is high on all large programs, but they won’t know for sure if they don’t calculate a return on learning. Many learning leaders mistakenly assume that their programs offer significant value when, in fact, they do not. And even if their program quality is high and the impact significant, they still do not know if the benefits outweigh the costs without doing the analysis. Would your organization’s leaders consider it good work if the costs exceeded the expected benefits? How would you identify opportunities to increase the impact or achieve the same impact but at lower cost without calculating a return on learning?

Your leaders are not convinced.

For his 2010 book Measuring for Success, Jack Phillips surveyed 96 CEOs on what information about learning was most important to them. First was alignment of learning to their top goals, and second was ROI. So no matter what many learning leaders think, the fact is that their CEOs want to see the business case for the learning organization and for high-budget programs. CEOs expect the same from all other departments and for all major investments; this is simply how business is conducted. But in many organizations, learning professionals have not been asked to follow this disciplined planning process, damaging their credibility and explaining why many learning leaders are lacking in influence. Quite simply, they have not earned it.

So in the end, even if you think your leaders believe you are adding value, you should measure the return on learning (either the net benefit or ROI) for your largest and most discretionary programs. This will ensure that you are delivering the promised value and are in a position to continually improve.

Lie #2: Return on Learning Cannot Be Calculated

The second most common lie I hear for not calculating a return on learning is that it cannot be calculated. Many learning leaders would like to calculate the return on learning, but they say that it isn’t possible and thus isn’t worth pursuing further.

Return on learning most certainly can be calculated. Jack and Patti Phillips and others have written numerous books explaining in great detail how to do so. Calculating a program’s net benefit or ROI requires just three measures: the program’s impact, the monetary value of that impact, and the program’s cost. These three measures can be forecast or estimated. At this point, some leaders will say that forecasts and estimates are subjective and unreliable—and thus inaccurate—and that the impact of the training program cannot be isolated from other factors contributing to reaching the goal (like a 10 percent increase in sales). Let’s look closer at these two objections.

Accuracy

It is true that calculating a return on learning requires forecasting (before the program is developed and deployed) or estimating (after the program is deployed or completed). Organizations frequently forecast outcomes before actual data are in hand to learn more about what is possible and achievable in the future, not to predict with 100 percent accuracy. Learning leaders should adopt the same practice. Claiming that the forecasts of a return on learning will not be accurate is not a valid reason for not calculating them. Learning leaders must make the best forecasts possible, based on their experience with similar programs, their knowledge of the value learning investments will create in the coming year, and inputs from their customers or anyone else who can provide useful information.

After a program has been delivered and the costs are known, organizations still must estimate the impact of the program to calculate the return. Even here, they don’t need 100 percent accuracy to gain value from the exercise. They want to know if the actual return was close to the forecast return. Perhaps the forecast was too optimistic or too pessimistic. Or perhaps the forecast was fine but the plan wasn’t executed properly, resulting in a weaker impact or higher costs. Organizations look to use this information to better deliver the planned results the next time around. They want to learn from the difference between what they thought would happen and what actually happened. Learning leaders need to take a similar approach—make forecasts, monitor variances, replan, and then repeat, becoming more accurate over time.

Isolation of Learning

Not being able to isolate the impact of learning from other factors is one of my favorite reasons learning leaders use for not attempting to calculate a return on learning. Proponents of this reason may admit that costs could be calculated and may even admit that a dollar value could be assigned to impact if we could only isolate the impact of learning. Let’s take a common learning example to explore this objection. Say the company has a goal to increase sales by 10 percent next year and the senior vice president of sales agrees that training, if properly designed, delivered, and reinforced, could contribute to reaching that goal. (The training may, for example, consist of a course on consultative selling skills and a course on product features.) Now the question is how much this training could achieve, by itself in isolation from all the other factors.

In this example, we may expect that an improving economy would play a large role in higher sales. The company may be introducing new products with better features that should lead to higher sales. And, perhaps, the company may also have implemented a sales incentive system and hired five new salespeople. All these factors will play a role in generating higher sales. So how much sales growth can we attribute solely to training? This example illustrates the isolation issue, which, for many, is an insurmountable obstacle to calculating the return on learning. It needn’t be.

First, let’s be clear about our standards. We are not looking for perfection or absolute certainty, which does not exist in the real world of business. Your CEO, board of directors, and senior leaders know this. But some HR colleagues do not. They worry about sharing a forecast or estimate with senior leaders and then being asked to “prove it.” Unfortunately, there is no “prove it” when it comes to a business plan or forecast about the future, because the future has not yet happened. So in most circumstances, your leaders will not ask you to prove it. Now, if we are talking about calculating a return on learning for a completed project, they may ask us to prove it. So, as we will discuss at the end of the chapter, we want to be smart about how we present our results. In either case, our standard is to be roughly right or close enough to make the right business decision (proceed with the program or not) if done proactively, or to draw the right conclusions from the return on learning study if done after the fact. We don’t have to be exactly right, and we are not publishing these results in a scholarly journal. We are in the business world trying to make the best decisions we can with limited information in an environment of unrelenting uncertainty. As a result, “good enough for business” is our standard.

Now that we are on the same page with regard to our standards, let’s see how the isolation issue can be addressed. For planning purposes, the best way is to engage the sponsor—the senior vice president of sales in our example. If you have any pertinent historical information available (like from previous after-the-fact studies for similar programs) share it. If not, discuss all the possible factors that might contribute to an increase in sales. Ask questions. For example, how important is the learning initiative relative to the other factors? Is it the single most important factor (usually not) or is it one of several important factors that might influence results? Is it important but not as important as one or two others? You might ask the sponsor to prioritize all the factors on the list. This exercise alone will give all involved a pretty good idea of the relative importance of the training program. Because all of the factors together will contribute 100 percent toward reaching the goal, asking about the relative importance of each factor can provide additional important information.

So if the goal is to increase sales by 10 percent and there are five important factors, with learning in the middle, you, with the help of your key stakeholders, might assign a factor of 20 percent to the training program. In other words, the isolated expected impact of learning is a 2 percent (10 percent x 20 percent) increase in sales. Then, you must find someone in sales or accounting to help calculate the bottom-line impact of a 2 percent increase in sales, which is the amount that can be directly attributed to training.
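
If it helps to see the exercise as a calculation, here is a minimal sketch in Python; the factor names and contribution shares are hypothetical, standing in for whatever you and the sponsor actually agree on.

# Hypothetical contribution shares agreed with the sponsor; they must sum to 100 percent.
factor_shares = {
    "improving economy": 0.30,
    "new product features": 0.25,
    "training program": 0.20,
    "sales incentive system": 0.15,
    "new salespeople": 0.10,
}
assert abs(sum(factor_shares.values()) - 1.0) < 1e-9

goal_increase = 0.10  # the planned 10 percent increase in sales
isolated_impact = goal_increase * factor_shares["training program"]
print(f"Expected isolated impact of training: {isolated_impact:.0%} increase in sales")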

This sounds very subjective. Well, of course it is. All forecasts and business plans are. They represent an organization’s educated guess (forecast) about events that have not yet occurred. They are based on historical data and the economic environment, but they are, in the end, guesses. In the end, the point is that we followed a process and engaged the right people to create the best forecast we could, based on the information available. That is all a management team can expect. I did this at Caterpillar for five years and for two different CEOs, and I was never asked to reduce my net benefit forecasts. In fact, by the end, I was told that my forecasts were too conservative.

Finally, let’s look at the isolation issue when calculating the return on learning after the completion of a program. In this case, the industry standard methodology is to ask a random sample of participants to estimate the percentage impact on their results that can be attributed solely to the training program. They are then asked to rate their level of confidence in the estimate they just provided. The two percentages, multiplied together, result in the “confidence-adjusted isolation factor” for that individual. An average is taken over the sample to determine the confidence-adjusted isolation factor for the program. This is, of course, self-reported data, but the confidence adjustment compensates for some of its subjectivity. (Note that there are other methods to isolate impact, like using a control group, but this is the easiest and can always be done. See Phillips [1983] for all the potential methods.)
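
A minimal sketch of that calculation, assuming the self-reported estimates have already been collected from the sample, might look like this (the responses below are invented purely for illustration):

# Each pair: (share of the improvement the participant attributes to the training,
# the participant's confidence in that estimate). Figures are invented for illustration.
responses = [(0.30, 0.80), (0.25, 0.60), (0.40, 0.50), (0.20, 0.90)]

adjusted = [impact * confidence for impact, confidence in responses]  # per-person adjusted factors
program_factor = sum(adjusted) / len(adjusted)                        # average over the sample

print(f"Confidence-adjusted isolation factor: {program_factor:.0%}")

The resulting factor is then applied to the observed business result to estimate the portion attributable to the program.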

It is important to remember why we are doing this and what our standard is. We want to see if the program’s results are close to what we expected (our forecast), see what we can learn and how we can improve the program, and build a library of actual results to help us forecast future results accurately.

Lie #3: We Won’t Learn Anything

There is a backward logic that learning leaders use when claiming that they won’t learn anything from an ROI. They should calculate an ROI, and then determine if they learned anything. Not the other way around. So unless learning leaders believe they already know everything worth knowing, there is an opportunity to learn and improve by evaluating the return on learning. Whether the exercise turns out to be worthwhile will depend on what is learned and at what cost. But that can only be determined after the fact.

In my experience at Caterpillar, asking staff to think about the return on learning—the expected application rate for the learning, the performance support and sponsor engagement required to produce the desired application rate, the expected impact, the monetary value of that impact, and the accounting and opportunity costs for the program—always produced valuable insights and led to changes in how we planned to design, deploy, and reinforce the learning. Put simply, our programs would not have had the same effect on our company’s results had we not subjected ourselves to the business discipline of exploring the expected return on our learning investments. But keep in mind that the level of effort in calculating the return on learning must be proportionate to the budget for the program. We reserved this effort for our larger programs.

Lie #4: Calculating a Return on Learning Costs Too Much and Takes Too Long

The method described under Lie #2 of reaching an agreement with stakeholders on the isolated impact of learning can be completed in one to two hours. Assigning a monetary value to that impact may take one to two hours, and calculating the costs of the proposed program may take another several hours. Altogether, calculating a return on learning could be completed in fewer than 10 hours and with no outside assistance. And given that the process is reserved for large programs, the investment is small, especially when compared with the amount being invested in designing, developing, and delivering the learning solution.

Calculating a return on learning after a program is complete could take more or less time, depending on the sample size and the method used to solicit the self-reported estimate of impact. Some organizations automate this process and send surveys to participants three to six months after a program in order to obtain the confidence-adjusted isolation factor, which can then be used to estimate impact. Total time involved in these calculations should be less than 10 hours. Alternatively, conducting a 15- to 30-minute telephone interview with each participant would probably take 20 to 30 hours for a sample of 30 participants, given the logistics involved. This should be manageable for a limited number of post-program return-on-learning studies.

Another option is to hire an expert in the discipline, which may give the results increased credibility, if funding is available. The consultant could cost $10,000 to $25,000 per study, which may be prohibitive in some situations.

Regardless of the preferred method, the cost of estimating and measuring the impact of a learning solution for a major initiative will most likely be dwarfed by the cost of the solution itself.

Lie #5: No One Will Believe the Return on Learning

Unfortunately, the lie that no one will believe the return on learning becomes a truth more often than it should. If we manage our training function like a business and convey our results with humility (meaning that we acknowledge the role of forecasts and estimates), we are likely to have credibility. But if we overstate or misuse the results, our stakeholders may well not believe us—and they shouldn’t.

Here is the issue. After Jack Phillips introduced the concept of ROI for learning in 1983, many learning leaders latched onto ROI as a way to demonstrate their worth, validate their existence, and protect their budgets when senior leaders started to question the return on their learning investments. But a good ROI is not always enough to assuage the concerns of senior leaders.

Senior leaders start to show concern when learning leaders appear to not be managing the company’s assets and investments the same way that other business leaders do. When senior leaders don’t see basic management principles being applied, they naturally start to wonder whether they are getting value for their investment in learning. So they start to ask questions about how much is being spent, what it is being spent on, and what tangible business results they are getting for their investment. In this environment, ROI would seem like a perfect way to show the value of learning. If learning leaders could just show high ROI, the questions would stop, right?

Well, most senior leaders are smarter than that. Why would they be impressed with a high ROI for a program that is not aligned to the organization’s goals? Why would they believe a high ROI forecast when there was no input from stakeholders on how the ROI should be assessed? And wouldn’t they be naturally suspicious of a high ROI number that surfaced just a month after they started questioning the spending on a particular program or suggesting that the learning budget be cut? So when the function is not being run like a business, when the learning leader has little credibility with senior leaders, and when a high ROI appears out of nowhere to defend a program under question, it is easy to understand why senior leaders might be suspicious.

But it does not have to be this way. Learning leaders can run their departments like a business, implement a governance model, maintain close relationships with senior leaders, and integrate return on learning into their regular planning and audit processes with transparency and accountability. This means telling senior leaders and governing boards what the expected return on learning should be for key programs, keeping them up to date on interim results, and sharing the actual results—good and bad—when projects are completed.

Most of all, learning leaders need to reinforce why they calculate and share the return on learning—to encourage better planning, enable better execution, and generate the information needed to continually improve. It is to ensure that investments in learning produce the best possible return for the organization, not to justify programs or defend department budgets under fire.

Lie #6: We Should Prioritize Programs With High Returns on Learning

Some learning leaders who regularly forecast returns on learning for their key programs fall into the trap of trolling for the highest return. They prioritize programs by expected ROI and use this to decide which programs to fund. This is both a mistake and an improper use of ROI.

Return on learning should not take the place of strategically aligning learning to the organization’s highest priority goals. Alignment should start with a discussion with critical stakeholders about next year’s goals, along with their relative priority. The learning leaders should work with stakeholders to determine how learning can contribute to reaching these goals and what the expected isolated impact will be. At this point, the learning leaders will know the organization’s key priorities to which they can align training investment.

Imagine a table showing the organization’s goals in order of priority; under each goal is the name of one or more training programs that will help reach it, along with the expected impact, the cost of the program, the net benefit, and the ROI. The estimated training investment would be allocated according to organizational priorities until the funds ran out.

Now imagine a learning-centric table that lists training programs by ROI, ignoring the goal the programs support. It may be that the programs with the highest ROI support the lowest priority goal. Which programs would stakeholders fund first: the ones with an acceptable ROI that support high-priority goals, or ones with a high ROI that support lower priority goals? This is a no-brainer.

As long as the programs have an acceptable ROI, stakeholders will choose to fund them in the order of the goals they support. This approach will maximize the impact the learning investment will have on the organization’s results, although it may not maximize the ROI of the learning investment. As long as the mission of the learning function is to support reaching the organization’s goals, this is the approach to take.

Note that ROI can be used to rank programs for funding when multiple programs support the same goal. For example, if three programs supported a low-priority goal, but there was not enough budget to fund all three, the program with the highest ROI should be funded first, followed by the program with the next highest ROI.
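
To make that funding logic concrete, here is a minimal sketch; the programs, priorities, and ROI figures are hypothetical, and the sorting rule simply encodes the two principles above: fund by goal priority first, and use ROI only as a tiebreaker among programs supporting the same goal.

# Hypothetical candidate programs: (goal priority, forecast ROI in percent, program name).
programs = [
    (1, 67, "Consultative selling skills"),
    (1, 45, "Product features training"),
    (3, 150, "Time management course"),
    (2, 60, "Safety refresher"),
]

# Sort by goal priority first; within a goal, the higher ROI comes first.
funding_order = sorted(programs, key=lambda p: (p[0], -p[1]))
for priority, roi, name in funding_order:
    print(f"Goal priority {priority}: {name} (forecast ROI {roi} percent)")

Note that the course with the highest ROI in this hypothetical list is funded last, because it supports the lowest-priority goal.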

Lie #7: Return on Learning Is the Only Way to Demonstrate Value

While return on learning is the most powerful method to demonstrate value, it is a lie to suggest that it is the only way to demonstrate value. Application (Level 3 of the Kirkpatrick framework) and impact (Level 4) also demonstrate value. After all, if you did a thorough needs analysis and identified that applying a new skill was required to improve performance, it is logical to conclude that the successful application of that skill demonstrates value. Likewise, a training program that results in a 2 percent increase in sales unquestionably has value.

But if we stop there, how do we know if the project is worth doing? Senior leaders always want to improve performance, but only if it makes sense to do so. No one would suggest spending $100,000 for a one-time performance improvement valued at $25,000. Return on learning simply provides decision makers with what they need to make an informed investment decision: a comparison of benefits (value) and costs.

Lie #8: ROIs for Learning and Finance Are Calculated the Same Way

Some people assume that ROI for learning is the same as ROI for other financial investments. While both are designed to determine whether a project is worth pursuing, the numerator and the denominator in the equation are different. In finance and accounting, the numerator would most often be the present value of a multiyear stream of net benefits, and the denominator would be the value of the investment required to produce that stream of benefits. The investment would be a balance sheet asset, such as the capital cost of a new piece of equipment.

In contrast, the equation for an ROI for learning typically consists of a numerator that represents the net benefit for just one year (although it could be the present value of multiple years), with the net benefit calculated as the gross financial benefit less the total cost of the program, including the opportunity cost of the participants’ time. The denominator for a learning ROI is that total cost, comprising expense items from the income statement (not the balance sheet), such as salaries and consultant fees, plus the opportunity cost, which doesn’t appear in any financial statement.
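
To make the contrast concrete, here is a minimal sketch of both calculations; the discount rate, benefit stream, and cost breakdown are assumptions for illustration only, and the financial ROI follows the description above rather than any single accounting convention. The point is simply that the two numbers are built from different ingredients and are not directly comparable.

# Financial ROI as described above: present value of a multiyear stream of net benefits,
# divided by the capital investment (a balance sheet item). Figures are illustrative only.
capital_investment = 300_000
annual_net_benefits = [150_000, 150_000, 150_000]   # three years of net benefits
discount_rate = 0.10
present_value = sum(b / (1 + discount_rate) ** (t + 1)
                    for t, b in enumerate(annual_net_benefits))
financial_roi = present_value / capital_investment * 100

# Learning ROI: one year of net benefit divided by cost, where cost combines income
# statement expenses with the opportunity cost of participants' time.
gross_benefit = 500_000
expenses = 220_000            # development, delivery, consultant fees, and so on
opportunity_cost = 80_000     # value of the participants' time away from their jobs
learning_cost = expenses + opportunity_cost
learning_roi = (gross_benefit - learning_cost) / learning_cost * 100

print(f"Financial ROI: {financial_roi:.0f} percent")
print(f"Learning ROI: {learning_roi:.0f} percent")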

So if your accountants tell you that your ROI for a training program is not a true ROI (in a financial sense), they are right. A learning ROI is not strictly a financial ROI, but rather a convenient tool to help us compare costs and benefits in order to decide whether a program is worth developing (or how much value the program actually delivered, if calculated after its completion). If calling this calculation an ROI causes pushback from the financial branch of your organization, just change the name. You might call it return on learning (ROL), which is what we did at Caterpillar when our accountants objected to our use of the term ROI. Once we labeled it something other than ROI, they were fine.

Conclusion

I hope you now have a better feel for the return on learning concept and the lies or misconceptions surrounding its use. Without a doubt, some learning leaders misuse return on learning. They dismiss it outright because they believe (mistakenly) that it can’t be calculated or that they won’t learn anything from it. They claim that it costs too much or takes too long to calculate. They assume that their senior leaders will not believe the return on learning. They prioritize programs with high returns on learning, rather than ensuring that the programs align to their organization’s goals. They think that calculating it is the only way to demonstrate value to senior leaders. And they assume that their learning ROI is identical to a financial ROI. But perhaps the most pervasive excuse is that learning leaders don’t need to do it because they know how great they are and that their senior leaders are not asking for it.

Return on learning is a very powerful concept. It is the only way to make a business case for a large, expensive discretionary program. And it is an excellent tool to explore a program’s actual impact in order to identify future opportunities for improvement. ROI and net benefit are simply numbers, inherently neither good nor bad. If someone misuses them, the fault lies with the practitioner, not the concept or the calculation. So let’s not dismiss the best way we have to compare costs and benefits. Instead, let’s use it wisely to make better decisions, to learn, and to continually improve.

To avoid the pitfalls in using return on learning to make a business case, follow these steps:

1.   Always start with your organization’s goals. Know your stakeholders’ goals and how they are prioritized.

2.   Meet with the owner of each goal to understand if and how a learning program can help reach it. Or if someone else asks you to create a program, make sure that it aligns to a high-priority goal.

3.   If it does, agree on the specifics of the program with stakeholders, including the expected cost and benefit. These will be forecasts, and subject to error, but that is okay. This is how organizations plan. It is an imperfect, messy process, but you will be more successful having gone through it.

4.   Share your forecast and assumptions with other leaders and solicit their comments and feedback. Make appropriate changes. Be humble and transparent. Remember why you are doing this: You want to make the right decisions about key programs and you want to improve. And remember that your forecast should simply be close enough to inform the decision-making process.

Likewise, to avoid misusing return on learning after a pilot or project is complete, follow these steps:

1.   Discuss the expected impact of the program with the stakeholders before starting the program. Agree on the expected isolated impact or a proxy for it to measure success.

2.   Meet with stakeholders as the program is deployed to share interim results or leading indicators. Ask how they believe it is going and whether they are still comfortable with the expected impact.

3.   Be conservative in estimating the isolated impact. Value it the same way you valued it in the business case before the program was started.

4.   Present your results with humility and in the spirit of continual learning. If the results differ widely from what you had forecast, try to understand why, and view this as a learning opportunity. Present both the good and the bad. This will increase your credibility.

If you are new to the return on learning concept, there are many books and workshops on ROI to get you started. You should then look for a program to try it with and see what you learn. Then do it again. And again. Pretty soon you will be amazed by how much you have learned and how your skills to deliver value to your organization have improved.

References

Phillips, J.J. 1983. Handbook of Training Evaluation and Measurement Methods. Houston, TX: Gulf Publishing Division.

Phillips, J.J., and P.P. Phillips. 2010. Measuring for Success. Alexandria, VA: ASTD Press.

Recommended Reading

Phillips, P.P., and J.J. Phillips. 2006. Return on Investment Basics. Alexandria, VA: ASTD Press.
