2

Measuring ROI: The Basics

Measuring and evaluating learning through technology has earned a place among the critical issues in the learning and development and performance improvement fields. For decades, this topic has been on conference agendas and discussed at professional meetings. Journals and newsletters regularly embrace the concept, dedicating increased print space to it. Even executives have an increased appetite for evaluation data.

Although interest in the topic has heightened and much progress has been made, it is still an issue that challenges even the most sophisticated and progressive learning departments. While some professionals argue that having a successful evaluation process is difficult, others are quietly and deliberately implementing effective evaluation systems. The latter group has gained tremendous support from the senior management team and has made much progress. Regardless of the position taken on the issue, the reasons for measurement and evaluation are intensifying. Almost all technology-based learning professionals share a concern that they must show the results of the investments. Otherwise, funds may be reduced or the function may not be able to maintain or enhance its present status and influence within the organization.

The dilemma surrounding the evaluation of learning through technology is a source of frustration for many senior executives, even within the field itself. Most executives realize that technology-based learning is a basic necessity when organizations experience significant growth or increased competition. They intuitively feel that providing e-learning and mobile learning opportunities is valuable, logically anticipating a payoff in important bottom-line measures, such as productivity improvements, quality enhancements, cost reductions, time savings, and improved customer service. Yet the frustration comes from the lack of evidence to show that programs really work. While results are assumed to exist and programs appear to be necessary, more evidence is needed, or executives may feel forced to adjust funding in the future. A comprehensive measurement and evaluation process represents the most promising, logical, and rational approach to show this accountability. This book shows how to measure the contributions of learning through technology with several case studies. This chapter defines the basic ROI issues and introduces the ROI Methodology.

ROI DEFINED

Return on investment (ROI) is the ultimate measure of accountability. Within the context of measuring learning through technology, it answers the question: For every dollar invested in technology-based learning, how many dollars were returned after the investment is recovered? ROI is an economic indicator that compares earnings (or net benefits) to investment, and is expressed as a percentage. The concept of ROI to measure the success of investment opportunities has been used in business for centuries to measure the return on capital expenditures such as buildings, equipment, or tools. As the need for greater accountability in learning, demonstrated effectiveness, and value increases, ROI is becoming an accepted way to measure the impact and return on investment of all types of programs, including technology-based learning.

The counterpart to ROI, benefit-cost ratio (BCR), has also been used for centuries. Benefit-cost analysis became prominent in the United States in the early 1900s, when it was used to justify projects initiated under the River and Harbor Act of 1902 and the Flood Control Act of 1936. ROI and the BCR provide similar indicators of investment success, although one (ROI) presents the earnings (net benefits) as compared to the cost, while the other (BCR) compares benefits to costs. Here are the basic equations used to calculate the BCR and the ROI:

BCR = Program Benefits / Program Costs

ROI (%) = (Net Program Benefits / Program Costs) × 100

where Net Program Benefits = Program Benefits − Program Costs.

What is the difference between these two equations? A BCR of 2:1 means that for every $1 invested, $2 in benefits are generated. This translates into an ROI of 100 percent, which says that for every $1 invested, $1 is returned after the costs are covered (the investment is recovered, plus $1 extra).
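The relationship between the two metrics can be sketched in a short script. The dollar figures below are illustrative only, not taken from the text:

```python
def bcr(benefits, costs):
    """Benefit-cost ratio: total program benefits divided by total costs."""
    return benefits / costs

def roi_percent(benefits, costs):
    """ROI: net benefits (benefits minus costs) as a percentage of costs."""
    return (benefits - costs) / costs * 100

# Illustrative figures: $200,000 in benefits from a $100,000 program
benefits, costs = 200_000, 100_000
print(bcr(benefits, costs))          # 2.0   (a 2:1 benefit-cost ratio)
print(roi_percent(benefits, costs))  # 100.0 (an ROI of 100 percent)
```

Note that the same program yields a 2:1 BCR and a 100 percent ROI, which is the distinction the paragraph above describes: BCR counts the invested dollar among the benefits, while ROI counts only what is earned after the investment is recovered.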

Benefit-cost ratios were used in the past, primarily in public sector settings, while ROI was used mainly by accountants managing capital expenditures in business and industry. Either calculation can be used in both settings, but it is important to understand the difference. In many cases the benefit-cost ratio and the ROI are reported together.

While ROI is the ultimate measure of accountability, basic accounting practice suggests that reporting the ROI metric alone is insufficient. To be meaningful, ROI must be reported with other performance measures. This approach is taken with the ROI Methodology, the basis for the studies in this book.

THE ROI METHODOLOGY

The ROI Methodology comprises five key elements that work together to complete the evaluation puzzle. Figure 2-1 illustrates how these elements are interconnected to create a comprehensive evaluation system.

FIGURE 2-1. Key Elements of an Evaluation System

Evaluation Framework

The system begins with the five-level ROI framework, which was developed in the 1970s and became prominent in the 1980s. Today, this framework is used to categorize results for all types of programs and projects.

Level 1 Reaction and Planned Action data represent the reactions to the program and the planned actions from participants. Reactions may include views of the format, ease of use, convenience, and fit. This category must include data that reflect the value of the program content, including measures of relevance, importance, amount of new information, and participants’ willingness to recommend the program to others.

Level 2 Learning data represent the extent to which participants have acquired new knowledge about their strengths, development areas, and skills needed to be successful. This category also includes the level of confidence of participants as they plan to apply their newly acquired knowledge and skills on the job.

Level 3 Application and Implementation data determine the extent to which professionals apply their newly acquired knowledge and skills from the learning program. This category of data also includes data that describe the barriers preventing application, as well as any supporting elements (enablers) in the knowledge and skill transfer process.

Level 4 Business Impact data are collected and analyzed to determine the extent to which applications of acquired knowledge and skills positively influenced key measures that were intended to improve as a result of the learning experience. The measures include errors, rejects, new accounts, customer complaints, sales, customer returns, down time, cycle time, job engagement, compliance, absenteeism, and operating costs. When reporting data at this level, a step to isolate the program’s effect on these measures is always taken.

Level 5 Return on Investment compares the monetary benefits of the impact measures (as they are converted to monetary value) to the fully loaded program costs. Improvement can occur in sales, for example, but to calculate the ROI, the measure of improvement must be converted to monetary value (profit of the sale) and compared to the cost of the program. If the monetary value of sales improvement exceeds the costs, the calculation is a positive ROI.

Each level of evaluation answers basic questions regarding the success of the program. Table 2-1 presents these questions.

TABLE 2-1. Evaluation Framework and Key Questions

Level of Evaluation Key Questions
Level 1: Reaction and Planned Action

• Was the learning relevant to the job and role?

• Was the learning important to the job and success of the participant?

• Did the learning provide the participant with new information?

• Do participants intend to use what they learned?

• Would they recommend the program or process to others?

• Is there room for improvement in duration and format?

Level 2: Learning

• Did participants gain the knowledge and skills identified at the start of the program?

• Do participants know how to apply what they learned?

• Are participants confident to apply what they learned?

Level 3: Application and Implementation

• How effectively are participants applying what they learned?

• How frequently are participants applying what they learned?

• Are participants successful with applying what they have learned?

• If participants are applying what they learned, what is supporting them?

• If participants are not applying what they learned, why not?

Level 4: Business Impact

• So what if the application is successful—what impact will it have on the business?

• To what extent did application of knowledge and skills improve the business measures the program was intended to improve?

• How did the program affect sales, productivity, operating costs, cycle time, errors, rejects, job engagement, and other measures?

• How do you know it was the learning program that improved these measures?

Level 5: ROI

• Do the monetary benefits of the improvement in business impact measures outweigh the cost of the technology-based learning program?

Source: ROI Institute, Inc.

Categorizing evaluation data as levels provides a clear and understandable framework to manage the technology-based learning design and objectives and manage the data collection process. More importantly, however, these five levels present data in a way that makes it easy for the audience to understand the results reported for the program. While each level of evaluation provides important, stand-alone data, when reported together, the five-level ROI framework represents data that tell the complete story of program success or failure. Figure 2-2 presents the chain of impact that occurs as participants react positively to the program; acquire new knowledge, skills, and awareness; apply the new knowledge, skills, and awareness; and, as a consequence, positively affect key business measures. When these measures are converted to monetary value and compared to the fully loaded costs, an ROI is calculated. Along with the ROI and the four other categories of data, intangible benefits are reported. These represent Level 4 measures that are not converted to monetary value.

ROI Process Model

The second part of the evaluation puzzle is the process model. As presented in Figure 2-3, the process model is a step-by-step guide to ensure a consistent approach to evaluating a learning project. Each phase of the four-phase process contains critical steps that must be taken to ensure the output of a credible evaluation. The ROI process is described in more detail in the next section.

FIGURE 2-2. Chain of Impact

Source: Phillips, P.P., and J.J. Phillips. (2005). Return on Investment Basics. Alexandria, VA: ASTD Press.

Operating Standards and Philosophy

The third piece of the evaluation puzzle ensures consistent decision making around the application of the ROI Methodology. These standards, called the 12 Guiding Principles of the ROI Methodology, provide clear guidance about the specific ways to implement the methodology to ensure consistent, reliable practice in evaluating learning through technology. When these guiding principles (shown in Table 2-2) are followed, consistent results can be achieved. Additionally, these principles help maintain a conservative and credible approach to data collection and analysis. They serve as a decision-making tool and influence decisions on the best approach by which to collect data, the best source and timing for data collection, the most appropriate approach for isolation and data conversion, the costs to be included, and the stakeholders to whom results are reported. Adhering to the principles provides credibility when reporting results to executives.

FIGURE 2-3. The ROI Process Model

Source: © ROI Institute. All rights reserved.

TABLE 2-2. 12 Guiding Principles for Effective ROI Implementation

1. When a higher level of evaluation is conducted, data must be collected at lower levels.
2. When an evaluation is planned for a higher level, the previous level of evaluation does not have to be comprehensive.
3. When collecting and analyzing data, use only the most credible sources.
4. When analyzing data, choose the most conservative alternatives for calculations.
5. At least one method must be used to isolate the effects of the solution/program.
6. If no improvement data are available for a population or from a specific source, it is assumed that no improvement has occurred.
7. Estimates of improvements should be adjusted for the potential error of the estimate.
8. Extreme data items and unsupported claims should not be used in ROI calculations.
9. Only the first year of benefits (annual) should be used in the ROI analysis for short-term solutions/programs.
10. Costs of the solution/program should be fully loaded for ROI analysis.
11. Intangible measures are defined as measures that are purposely not converted to monetary values.
12. The results from the ROI Methodology must be communicated to all key stakeholders.

Source: ROI Institute, Inc.
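Guiding Principle 7 can be illustrated with a minimal sketch, assuming the common practice of discounting a participant's estimate by the confidence the participant assigns to it. The function name and figures are hypothetical:

```python
def adjusted_improvement(estimated_value, confidence_pct):
    """Guiding Principle 7 (illustrative): adjust an estimate for its
    potential error by multiplying it by the estimator's stated
    confidence, so an estimate held with 70% confidence is reduced
    to 70% of its face value."""
    return estimated_value * (confidence_pct / 100)

# A participant attributes $10,000 of improvement to the program,
# but is only 70 percent confident in that estimate.
print(adjusted_improvement(10_000, 70))  # 7000.0
```

This adjustment also reflects Principle 4: when in doubt, the more conservative (lower) figure enters the ROI calculation.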

Case Applications and Practice

The fourth piece of the ROI Methodology evaluation puzzle includes case applications and practice, which allow for a deeper understanding of the ROI Methodology’s comprehensive evaluation process. Case studies are a way to provide evidence of a program’s success. Thousands of case studies across many industries, including business and industry, healthcare, government, and even community and faith-based initiatives, have been developed, describing the application of the ROI Methodology. The case studies in this book, all based on measuring the ROI in learning through technology, provide excellent examples of application of the ROI Methodology.

Practitioners beginning their pursuit of the ROI Methodology can learn from these case studies, as well as those found in other publications. However, the best learning comes from actual application. Conducting an ROI study around learning through technology will allow participants to see how the framework, process model, and operating standards come together. The first study serves as a starting line for a track record of program success.

Implementation

The last piece of the ROI Methodology evaluation puzzle is implementation. While conducting an ROI study is a significant step, one study alone adds little value to your efforts to continuously improve and account for learning investments. The key to a successful learning function is to sustain the use of ROI. Building the philosophy of the ROI Methodology into everyday decision making is imperative for attaining credibility and consistency in learning effectiveness. Implementing the ROI Methodology requires assessing the organization’s culture for accountability and its readiness for evaluating technology-based learning programs at the ROI level. It also requires defining the purpose for pursuing this level of evaluation; building expertise and capability; and creating tools, templates, and standard processes.

ROI PROCESS

To evaluate a technology-based learning program using the ROI Methodology, it is important to follow the step-by-step process to ensure consistent, reliable results. These 10 steps taken during the four phases of an evaluation project make up the evaluation process model.

Evaluation Planning

The first phase of a successful application of the ROI Methodology is planning. The plan addresses the key questions about the evaluation and defines how to know when success has been achieved. The plan begins with developing and reviewing the program objectives to ensure that the application and impact objectives have been defined. Next, the data collection plan is developed, which includes defining the measures for each level of evaluation, selecting the data collection instrument, identifying the source of the data, and determining the timing of data collection. The baseline data for the measures being tracked should be collected during this time. The next step is to develop the ROI analysis plan. Working from the impact data, the most appropriate technique to isolate the effects of the learning initiative is selected. The most credible method for converting data to money is identified along with the cost categories for the program. Intangible benefits are listed and the communication targets for the final report are identified.

Data Collection

Once the planning phase is complete, the data collection phase begins. Level 1 and 2 data are collected as learning takes place, using common instruments such as questionnaires, completion of exercises, demonstrations, and a variety of other techniques. Follow-up data at Levels 3 and 4 are collected after the program is complete, when application of the newly acquired knowledge, skills, attitudes, and awareness becomes routine. After the application the consequences are captured as impact measures.

Data Analysis

Once the data are collected, data analysis begins. As described earlier, the method for data analysis is defined in the planning stage, so data analysis is just a matter of execution. The first step in data analysis is to isolate the effects of the learning program on impact data. Isolation is often overlooked when evaluating the success of technology-based learning programs, yet this step answers the critical question, “How much of the improvement in business measures is due to this particular learning program?”

Moving from Level 4 to Level 5 begins with converting Level 4 impact measures to monetary value. This step is usually easy because most of the important impact data are already converted to money. If not, there are some easy techniques to use. The fully loaded costs are developed during the data analysis phase. These costs include needs assessment (when conducted), design, participants’ time, overhead, and evaluation costs.

Intangible benefits are also identified during the data analysis phase. Intangible benefits are the Level 4 measures that are not converted to monetary value. These measures can also represent any unplanned program benefits that were not identified during the planning phase.

The final step of the data analysis phase is the ROI calculation. Using simple addition, subtraction, multiplication, and division, the ROI is calculated.
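As a sketch of this final step, with all figures hypothetical, assume the impact measures have already been isolated and converted to money, and the costs are fully loaded per Guiding Principle 10:

```python
# Hypothetical annual monetary benefits, after isolating the program's
# effect and converting the impact measures to money
annual_benefits = 150_000

# Fully loaded costs (Guiding Principle 10): needs assessment, design,
# participants' time, overhead, and evaluation (illustrative categories)
costs = {
    "needs_assessment": 5_000,
    "design_and_development": 40_000,
    "participant_time": 30_000,
    "overhead": 10_000,
    "evaluation": 15_000,
}
total_costs = sum(costs.values())  # 100,000

bcr = annual_benefits / total_costs                        # 1.5
roi = (annual_benefits - total_costs) / total_costs * 100  # 50.0
print(f"BCR = {bcr:.2f}:1, ROI = {roi:.0f}%")
```

The arithmetic really is this simple once the harder analytical steps (isolation, conversion to money, loading all costs) are done; the credibility of the final number rests entirely on those earlier steps.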

Reporting

The most important phase in the evaluation process is the final report. Evaluation without communication of results is a worthless endeavor. If the key stakeholders are not aware of the program’s progress, it is difficult to improve the process, secure additional funding, and market programs to other participants.

While there are a variety of ways to report data, a micro-level report of the complete ROI impact study is important. This is a record of the success of the learning program. (A macro-level reporting process includes results for all programs, projects, and initiatives, and serves as a scorecard of results for all initiatives.) An important point to remember, however, is regardless of how detailed or brief the report may be, the information in it must be actionable. Otherwise, there is no value in conducting the ROI analysis.

BENEFITS OF ROI

The ultimate use of results generated through the ROI Methodology is to show value of programs, specifically the economic value. However, there are a variety of other uses for ROI data, including justification of spending, improvement of the programs, and gain of support for learning through technology.

Justify Spending

Justifying spending on technology-based learning is becoming critical today. Learning managers are often required to justify investments in existing and new programs, as well as investment in changes or enhancements to existing programs.

For those who are serious about justifying investments in learning through technology, the ROI Methodology described in this book is a valuable tool. For new programs where a preprogram justification is required, there are two approaches: preprogram forecasts and ROI calculated on pilot implementation.

Calculating ROI in existing programs is more common in practice than forecasting success for new programs. Typically, ROI is used to justify continued investments in existing programs. While technology-based learning programs have been routinely conducted, there is sometimes a concern that the value does not justify continuation of the program.

Improve the Learning Program

The most important use of the ROI Methodology is to improve programs. Data are collected along a chain of impact as the results are generated. When there is a lack of success, the causes are pinpointed. These are the barriers. In addition, certain factors that have caused the success are identified. These are the enablers. Together, this information helps to improve the learning program.

Set Priorities

In almost all organizations, the need for learning exceeds available resources. A comprehensive evaluation process, such as the ROI Methodology, can help determine which programs rank as the highest priority. Learning programs with greatest impact (or the potential for greatest impact) are often top priority. Of course, this approach has to be moderated by taking a long view, ensuring that developmental efforts are in place for a long-term payoff.

Eliminate Unsuccessful Programs

While eliminating a learning initiative is rare, this action may be taken if the learning program cannot add the value that was envisioned (it was the wrong solution). Sometimes a program is no longer needed, but a new need for a different process emerges. The ROI Methodology can help determine which approach is eliminated and which alternative is selected as a replacement.

Gain Support

Another use for the ROI Methodology is to gain support for learning through technology. A successful program needs support from key executives and administrators. Showing the ROI for programs can alter managers’ and executives’ perceptions and enhance the respect and credibility of all technology-based learning.

Key executives and administrators are likely the most important group to influence learning programs. They commit resources and show support for learning with the need to achieve results that positively affect business impact. To ensure that effective programs are continued, it is necessary for learning managers to think like business leaders—focusing programs on results in the organization. ROI is one way this focus can occur. ROI evaluation provides the economic justification and value of investing in the technology-based learning program selected to solve a problem or take advantage of an opportunity.

Managers and supervisors of participants can sometimes be antagonistic about learning programs, questioning their value. When this occurs, it is generally because the manager or supervisor has not seen success with a change in behavior from participants. Managers and supervisors aren’t interested in what their participants learn; they are interested in what they do with what they learn. Participants also must take learning gained through the programs a step further by showing the effect on the job with outcomes in errors, rejects, sales, new accounts, incidents, down time, operating costs, and engagement. If the programs can show results linked to the business, managers and supervisors will provide increased support for these programs.

Participants and prospective participants should also support the program. Showing the value of programs, including ROI, can enhance their credibility. When a technology-based learning program is achieving serious results, participants will view programs as adding value and may be willing to spend time away from their pressing duties. Also, by making adjustments in the learning based on the evaluation findings, participants will see that the evaluation process is not just a superficial attempt to show value.

FINAL THOUGHTS

This chapter introduced the concept of the ROI Methodology, a systematic and logical process with conservative standards that is used by more than 5,000 organizations. The process collects and generates six types of data: reaction, learning, application, impact, ROI, and intangible benefits. It also includes techniques to isolate the effects of the learning on impact data, such as sales, productivity, new accounts, quality, costs, and time. The next three chapters explain this process in more detail and form the basis for the case studies presented in later chapters. Chapter 3 introduces the important steps of evaluation planning and the first major challenge, data collection.

For more detail on this methodology, see The Value of Learning: How Organizations Capture Value and ROI and Translate Them Into Support, Improvement, Funds (Phillips and Phillips, 2007, Pfeiffer).
