3

Evaluation Planning and Data Collection

As the learning and development profession has matured, accountability for learning has increased. In the past, we could provide learning solutions, including technology-enabled learning, to clients and measure the success of those solutions based on self-reports. But in a complex world of increased scrutiny, less time, and fewer resources, this is no longer enough. Instead, the very real demands for learning require a shift from an activity-based approach to a results-based approach to learning through technology, as outlined in Table 3-1. In some cases, executives want to see the financial ROI.

TABLE 3-1. Activity-Based vs. Results-Based Approach

Activity-Based | Results-Based
Business need is not linked to the learning. | Program is linked to specific business impact measures, such as revenue, productivity, quality, cost, time, and new customers.
No assessment of performance issues that need to change to meet business needs. | There is an assessment of performance that needs to improve to meet business needs.
SMART objectives are not developed for application. | SMART objectives for change and the related business impact are identified.
Participants are not fully prepared to achieve results from the program. | Results expectations are communicated to, and in partnership with, participants.
Work environment is not prepared to support the transfer of learning to application and business impact. | Work environment is prepared to support transfer of learning to application and business impact.
Results or ROI analysis in real, tangible, and objective measures—including monetary impact—is not captured. | Results and ROI analysis are measured.
Partnerships with key managers to support the participants have not been identified and developed. | Partnerships are established with key managers prior to learning to ensure participation and support.
Planning and reporting is input focused. | Planning and reporting is outcome focused.

For more detail on this shift, see The Value of Learning: How Organizations Capture Value and ROI and Translate Them Into Support, Improvement, Funds (Phillips and Phillips, 2007, Pfeiffer).

For learning programs to lead to results, they must first be positioned for success. Positioning occurs through the establishment of alignment, the starting point for implementation. Alignment drives objectives, the first step in the ROI process, as shown in Figure 2-3 (found in chapter 2). Objectives drive the design of learning programs by describing how participants should react to the program, what they will learn, what they will do with what they learn, and the impact their behavior change will have on key business measures. A variety of data collection methods are available to collect reaction, learning, application, and impact data. This chapter explores alignment, objectives, evaluation planning, and data collection.

ACHIEVING BUSINESS ALIGNMENT

Objectives are core to business alignment. As shown in Figure 3-1, they evolve from the needs assessment process and drive the evaluation process. Objectives serve as the catalyst between what stakeholders want and need and what they get from a program or initiative. The first step toward developing objectives is clarifying stakeholder needs. First, business alignment starts as the learning program is connected to the business need. Next, alignment continues with impact objectives. Participants focus on the business impact during implementation. Finally, business alignment is validated as the learning program’s contribution to the business impact is calculated (isolating the impact of learning).

Clarifying Stakeholder Needs

Technology-based learning programs originate from a need. The ultimate need lies in the potential for a positive payoff from the investment. Several questions should be asked when deciding whether, and how much, to invest in a new initiative:

Is the program required?

Is the program worth implementing?

Is this problem worth pursuing?

Is this an opportunity?

Is there likelihood for a positive ROI?

The problem or opportunity can be obvious, such as:

Sales have decreased 30 percent from last year.

Compliance discrepancies have doubled in the last year.

Sales are flat—no growth.

Error rate is 0.09 percent; should be less than 0.03 percent.

Product returns have increased 20 percent in six months.

Excessive turnover of critical talent has occurred (35 percent above benchmark data).

Customer service has been inadequate—3.89 on a 10-point customer satisfaction scale.

Safety record is the worst in the industry.

This year’s out-of-compliance fines totaled $1.2 million, up 82 percent from last year.

Product returns are excessive—30 percent higher than previous year.

Absenteeism is excessive in call centers—12.3 percent compared to industry average of 5.4 percent.

Sexual harassment complaints per 1,000 employees are the highest in the industry.

Grievances are up 38 percent from last year.

Upgrade sales are 50 percent of last year.

Operating costs are 47 percent higher than the industry average.

Or they may be less obvious, such as:

We want our customers to be more engaged.

Create a project management office.

The sales force should be more consultative.

Develop an “open-book” company.

Become a technology leader.

Become fully in compliance.

Implement a career advancement program.

Create a wellness and fitness center.

Build capability for future growth.

Create an empowered workforce.

Become a green company.

Integrate all technology systems.

Improve branding for all products.

Implement lean Six Sigma for all professional employees.

Every sales professional must have negotiation skills.

We need more just-in-time training.

Let’s organize a virtual business development conference.

Create a great place to work for the sales team.

In either case, payoff needs are those problems or opportunities that, if addressed, will ultimately help the organization make money, save money, or avoid costs, and deliver a positive ROI. When the payoff need is discussed, the specific business measures that need to improve to address the payoff need are identified. These “business needs” represent hard data, categorized as output, quality, cost, and time; they may also represent soft data such as measures of satisfaction, image, and reputation. Examples of business measures in need of improvement may be sales, errors, waste, rework, accidents, incidents, new accounts, cycle time, downtime, product returns, and customer complaints.

FIGURE 3-1. Business Alignment Process

Source: ROI Institute, Inc.

After the business needs are defined, the next step is to clarify performance needs. These are behaviors, actions, or activities that need to change or improve on the job in order to influence the business measures. The needs at this level can vary considerably and might include ineffective behavior, not following a procedure, and incomplete process flows. There are a variety of ways to identify gaps in performance, including questionnaires, interviews, observations, brainstorming, nominal group technique, statistical process control, and other approaches.

When performance needs are identified, the assessment addresses learning needs. When identifying learning needs, the basic question being answered is: What do participants need to know in order to change their behavior or take the desired actions on the job (performance need) to improve business measures (business need)? A variety of techniques are available to uncover learning needs, including task analysis, questionnaires, surveys, interviews, and observations.

Next, preference needs represent the preferred way in which the knowledge, skills, and information are delivered. This addresses the preferences of the participant, her manager, and other stakeholders for the learning program, focusing on issues such as relevance, importance, and intent to use.

Finally, the project team determines the input needs, which simply represent the target audience, required resource investment, timing, duration, and all other aspects of program implementation. With needs analysis complete, the next step is to develop objectives.

Determining Program Objectives

Program objectives reflect the needs of stakeholders. When implementing technology-based learning, it is important to develop objectives at all five levels of evaluation. These objectives tie the learning to meaningful outcomes (reaction, learning, application, impact, and ROI). Program objectives represent the chain of impact, ensuring that designers, developers, participants, supervisors and managers, senior leaders, and evaluators are aware of the potential for success. Objectives should detail specifics about quality, accuracy, and time.

Level 1 reaction objectives describe expected immediate satisfaction with the program. They describe issues that are important to success, including the relevance of the program and importance of the information or content. In addition, these objectives describe expected satisfaction with the logistics of the learning, from delivery to expected use. Table 3-2 shows some typical reaction objectives.

Level 2 learning objectives describe the expected immediate outcomes in terms of knowledge acquisition, skills attainment, and awareness and insights obtained through the learning experience. These objectives set the stage for preparing participants for job performance transformation. It is important to note that even performance support tools (a nonlearning solution) will still have a learning component and thus learning objectives. Table 3-3 shows some typical learning objectives.

TABLE 3-2. Typical Reaction Objectives

At the end of the program, participants should rate each of the following statements at least a 4 or 5 on a 5-point scale:

The program was organized.

The delivery of the content was appropriate.

The program was valuable for my work.

The program was important to my success.

I will recommend this program to others.

The program was motivational for me personally.

The program had practical content.

The program contained new information.

The program represented an excellent use of my time.

I will use the content from this program.

TABLE 3-3. Typical Learning Objectives

After completing the program, participants will be able to:

Identify the six features of the new policy in three minutes.

Demonstrate the use of each software routine in the standard time.

Use problem-solving skills, given a specific problem statement.

Determine whether they are eligible for the early retirement program.

Score 75 or better in 10 minutes on the new-product quiz.

List all five customer-interaction skills.

Explain the five categories for the value of diversity in a work group.

Document suggestions for award consideration.

Score at least 9 out of 10 on a sexual harassment policy quiz.

Identify five new technology trends explained at the virtual conference.

Name the six pillars of the division’s new strategy.

Successfully complete the leadership simulation in 15 minutes.

Level 3 application objectives describe the expected intermediate outcomes in terms of what the participant should do differently as a result of the technology-based learning. Objectives at this level also describe expectations as to the time at which participants should apply knowledge, skills, and insights routinely. Table 3-4 presents some typical application objectives.

Level 4 impact objectives define the specific business measures that should improve as a result of the actions occurring through the learning process. Improvement in these intermediate (and sometimes, long-term) outcomes represent changes in output, quality, costs, and time measures, as well as “softer” measures, such as engagement, satisfaction, and brand. Objectives at this level answer the question, “So what?” as it relates to the investment in learning. They describe to stakeholders the importance of learning through technology. Table 3-5 offers some examples of typical impact objectives.

TABLE 3-4. Typical Application Objectives

When the project is implemented:

At least 99.1 percent of software users will be following the correct sequences after three weeks of use.

Within one year, 10 percent of employees will submit documented suggestions for saving costs.

The average 360-degree leadership assessment score will improve from 3.4 to 4.1 on a 5-point scale in 90 days.

95 percent of high-potential employees will complete individual development plans within two years.

Employees will routinely use problem-solving skills when faced with a quality problem.

Sexual harassment activity will cease within three months after the zero-tolerance policy is implemented.

80 percent of employees will use one or more of the three cost-containment features of the healthcare plan in the next six months.

By November, pharmaceutical sales reps will communicate adverse effects of a specific prescription drug to all physicians in their territories.

Managers will initiate three workout projects within 15 days.

Sales and customer service representatives will use all five interaction skills with at least half of their customers within the next month.

Last, the Level 5 ROI objective defines for stakeholders the intended financial outcome. This single indicator sets the expectation for how the benefits of learning will relate to the cost. (Will the improvement in impact generated from the program recoup the costs of its implementation?)

An ROI objective is typically expressed as an acceptable return on investment percentage, calculated as the annual monetary benefits minus the fully loaded costs, divided by those costs, and multiplied by 100. A 0 percent ROI indicates a break-even program. A 50 percent ROI indicates that the cost of the program is recaptured and an additional 50 percent in “earnings” (50 cents for every dollar invested) is achieved.
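The calculation just described can be sketched as a short function. The benefit and cost figures below are hypothetical, chosen only to reproduce the break-even and 50 percent cases.

```python
def roi_percent(annual_benefits: float, fully_loaded_costs: float) -> float:
    """ROI (%) = (annual monetary benefits - costs) / costs x 100."""
    net_benefits = annual_benefits - fully_loaded_costs
    return (net_benefits / fully_loaded_costs) * 100

# Hypothetical program: $150,000 in annual monetary benefits, $100,000 in costs.
print(roi_percent(150_000, 100_000))  # 50.0 -> 50 cents earned for every dollar invested
print(roi_percent(100_000, 100_000))  # 0.0 -> break-even program
```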

For some programs, the ROI objective is larger than what might be expected from the ROI of other expenditures—such as the purchase of a new company, a new building, or major equipment. However, the two are related, and the calculation is the same for both. For many organizations, the ROI objective for a learning program is set slightly higher than the ROI expected from other “routine investments” because of the relative newness of applying the ROI concept to these types of programs. For example, if the expected ROI from the purchase of a new company is 20 percent, the ROI from a team leader development program might be in the 25 percent range. The important point is that the ROI objective should be established up front and in coordination with the sponsor.

TABLE 3-5. Typical Impact Objectives

After project completion, the following conditions should be met:

Sales for upgrades should reach $10,000 per associate in 60 days.

After nine months, grievances should be reduced from three per month to no more than two per month.

The average number of new accounts should increase from 300 to 350 per month in six months.

Tardiness should decrease by 20 percent within the next calendar year.

An across-the-board reduction in overtime of 40 percent should be realized for front-of-house managers in 60 days.

Employee complaints should be reduced from an average of three per month to an average of one per month.

By the end of the year, the average number of product defects should decrease from 214 per month to 153 per month.

The employee engagement index should rise by one point during the next calendar year.

Sales expenses should decrease by 10 percent in the fourth quarter.

A 10 percent increase in brand awareness should occur among physicians during the next two years.

Customer returns per month should decline by 15 percent in six months.

Evaluating the Program

This final phase of the alignment process is the basis for this book. Evaluation is based on meeting the objectives of the learning program. The more specific the objective, the easier it is to plan the evaluation. From clear objectives, the evaluator can determine what measures to collect during the evaluation process, the sources of the data, the timing of data collection, and the criterion for success. A critical step in the evaluation phase that validates the alignment of the program to the business need is isolating the effects of the program. As you will explore in the next chapter, this step is an imperative to report credible, reliable, and valid results. A variety of techniques are available to isolate the effects of learning.

DEVELOPING THE EVALUATION PLANS

When planning an evaluation, two documents are completed in as much detail as possible. The data collection plan lays the initial groundwork and answers the five key questions outlined in Table 3-6.

Providing detailed answers to these questions up front sets the scope of the data collection process. An example of a completed data collection plan is shown in Figure 3-2. In this program, sales associates are selling an upgrade to an existing product using a mobile device. Level 1 and Level 2 data collection is built into the learning program. Level 3 data are collected with a web-based questionnaire sent by the sales coordinator. Level 4 data are monitored and tracked by the evaluator from system records.

TABLE 3-6. Data Collection Plan Key Questions and Descriptions

Key Question | Description
What do you ask? | The answers to this question lie in the program objectives and their respective measures.
How do you ask? | How you ask depends on a variety of issues, including the resources available to collect data. For example, Level 2 data may require tests, self-assessments, or exercises.
Whom do you ask? | Use the most credible source; sometimes this includes multiple sources.
When do you ask? | Timing of data collection is critical, particularly for application and impact measures. Select a point in time at which you believe application and impact will occur.
Who does the asking? | Typically, the system collects data at Levels 1 and 2. For the higher levels of evaluation, representatives of the evaluation team may be assigned specific roles.

The second planning document is the ROI analysis plan, which includes the seven key information categories listed in Table 3-7. These seven key areas are addressed in detail in the ROI analysis plan, as shown in Figure 3-3. Two impact measures are monitored: monthly sales and time to first sale. Although the upgrade was released to all sales associates at the same time, not all of them were using the mobile learning program. This provided an opportunity to compare a user group with a nonuser group (experimental versus control group). The profit margin on the upgrade was used to convert the sales measures to money. The profit from the first sale was not counted separately, because it was already included in the total sales of the upgrade.
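The comparison-group arithmetic behind this kind of plan can be sketched as follows. The sales figures and profit margin are hypothetical illustrations, not the actual data from this case study.

```python
# Hypothetical average monthly upgrade sales per associate.
user_group_sales = 12_000     # associates who used the mobile learning program
control_group_sales = 9_500   # associates who did not

# Isolating the effects: the difference between the two groups is
# the sales improvement attributable to the program.
sales_lift = user_group_sales - control_group_sales

# Converting data to money: apply the profit margin on the upgrade,
# since only the profit (not the revenue) counts as a monetary benefit.
profit_margin = 0.20   # hypothetical margin
monetary_benefit = sales_lift * profit_margin
print(monetary_benefit)  # 500.0 per associate per month
```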

Planning in detail what you are going to ask, how you are going to ask, whom you are going to ask, when you are going to ask, and who will do the asking, along with the key steps in the ROI analysis, will help ensure successful execution. Additionally, having clients sign off on the plans will ensure support for the evaluation approach when results are presented. These planning documents (Figures 3-2 and 3-3) are explained in more detail in chapter 11.

FIGURE 3-2. Completed Data Collection Plan

TABLE 3-7. ROI Analysis Plan Key Areas and Description

Key Area | Description
Methods for isolating the effects of the program | Decide the technique you plan to use to isolate the effects of the program on the impact measures.
Methods for converting data to monetary value | Identify the methods to convert impact measures to monetary value.
Cost categories | Include the costs of needs assessment, program design and development, program delivery, and evaluation, along with some amount representing overhead and administrative costs for the people and processes that support programs.
Intangible benefits | List those measures you choose not to convert to monetary value; these are considered intangible benefits.
Communication targets for the results | Identify the audiences to whom results will be communicated.
Other issues that may influence the impact or the evaluation itself | Anticipate any issues that may occur during the learning process that might have a negative effect or no effect on impact measures.
Comments or reminders to the staff managing the program | Place reminder notes of key issues, comments regarding the potential success or failure of the program, reminders for specific tasks to be conducted by the evaluation team, and so on.

CONSIDERATIONS FOR COLLECTING DATA

A variety of data collection techniques can be used to collect the right data from the right source at the right time. How data are collected depends upon a variety of factors, including accuracy, time, cost, and utility.

Accuracy

When selecting a data collection method, the technique that will provide the most accurate results is preferred. However, accuracy must be balanced with the cost of data collection. Usually, the higher the accuracy, the higher the cost. Never spend more on data collection than the cost of the program itself. A guideline to keep in mind is that the full cost of an ROI study should not exceed 5 to 10 percent of the fully loaded cost of the learning program. All evaluation costs are included in the denominator of the ROI equation, which means expensive data collection reduces the ROI percentage. It’s usually a trade-off.
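The 5 to 10 percent guideline is simple arithmetic; the program cost below is a hypothetical figure used for illustration.

```python
def evaluation_budget_range(program_cost: float) -> tuple[float, float]:
    """Return the 5 to 10 percent band an ROI study's cost should fall within."""
    return program_cost * 5 / 100, program_cost * 10 / 100

# Hypothetical fully loaded program cost of $200,000.
low, high = evaluation_budget_range(200_000)
print(low, high)  # 10000.0 20000.0 -> the study should cost roughly $10k to $20k
```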

FIGURE 3-3. Completed ROI Analysis Plan

Validity and Reliability

A basic way to look at validity is to ask, “Are you measuring what you intend to measure?” Content validity can be determined using sophisticated modeling approaches; however, the most basic approach to determining the validity of the questions asked is to refer to the objectives. Well-written objectives point directly to the measures to collect. Consider the use of subject matter experts, along with additional resources such as literature reviews and previous case studies, to judge validity.

While validity is concerned with ensuring you are measuring the right measures, reliability is concerned with whether the responses are consistent. The most basic test of reliability is repeatability: the ability to obtain the same data from several measurements of the same group collected in the same way. A basic example of repeatability is to administer a questionnaire to the same individual repeatedly over a period of time. If the individual responds the same way to the questions every time, there is minimal error, meaning high reliability. If the individual gives different responses, there is high error, meaning low reliability.

Time and Cost

When selecting data collection methods, several issues should be considered with regard to time and cost. The time required to complete the instrument is one consideration. Also, consider the time required for managers to complete the instrument if they are involved, or the time in assisting participants through the data collection process. All expenditures for data collection—including time to develop and test the questionnaire, time for the completion of data collection instruments, and the printing costs—are costs to the program. Also, consider the amount of disruption that the data collection will cause employees; interviews and focus groups typically require the greatest disruption, yet provide some of the best data. Balance the accuracy of the data needed to make a decision about the program with what it will cost to obtain that data.

Utility

A final consideration when selecting a data collection method is utility. How useful will the data be, given the type of data collected through the process? Data collected through a questionnaire can be easily coded and put into a database and analyzed. Data collected through focus groups and interviews, however, call for a more challenging approach to analysis. While information can be collected through dialogue and summarized in the report, a more comprehensive analysis should be conducted. This requires developing themes for the data collected and coding those themes. This type of analysis can be quite time-consuming and in some cases frustrating if the data are not collected, compiled, and recorded in a structured way.

Another issue with regard to utility has to do with the use of the data. Avoid asking a lot of questions simply because you can; instead, consider whether you really need to ask a question in order to obtain the data to make decisions about the learning program. Remember, data collected and reported lead to business decisions, regardless of whether the programs are offered through a corporate, government, nonprofit, community, or faith-based organization. How can you best allocate the resources for programs to develop people or improve processes? With these issues in mind, if you can’t act on the data, don’t ask the question.

METHODS FOR COLLECTING DATA

Given the considerations covered in the previous section, a variety of methods and instruments are available to collect data at the different levels of evaluation. Some techniques are more suited toward some levels of evaluation than others; but in many cases, the approaches to data collection can cut across all levels of evaluation. Table 3-8 lists different data collection methods used to collect data at different levels. The most often used are questionnaires, interviews, focus groups, action plans, and performance records.

TABLE 3-8. Data Collection Methods

Questionnaires and Surveys

Surveys and questionnaires are the most frequently used data collection technique when conducting an ROI evaluation. Surveys can collect perception data (such as reaction) and precise data (such as the amount of sales). Questionnaires and surveys are inexpensive, easy to administer, and, depending on their length, take very little of respondents’ time. Questionnaires can be sent via mail, memo, or email, or distributed online (posted on an intranet or via one of the many survey tools available on the Internet).

Questionnaires also provide versatility in the types of data that can be collected. They are used to collect data at all levels of evaluation: the demographics of participants (Level 0), reaction to the learning program (Level 1), knowledge gained during the program (Level 2), how participants applied that knowledge (Level 3), and the impact of the application (Level 4). You can also ask participants to indicate how much a particular measure is worth, how much that measure has improved, other variables that may have influenced improvements in that measure, and the extent of the influence of those variables.

Questions can be open-ended, closed, or forced-choice. Likert-scale questions are common in questionnaires, as are frequency scales, ordinal scales, and other types of scales, including paired comparison and comparative scales. Periodically, an adjective checklist on a questionnaire gives participants the opportunity to reinforce their perception of the program.

While questionnaires can be quite lengthy and include any number of questions, the best are concise and reflect only those questions that allow for the collection of needed data. Results from brief questionnaires are powerful when describing the impact of a learning program, as well as its monetary benefits.

Interviews

Interviews are an ideal method of data collection when details must be probed from a select number of participants. Interviews allow for gaining more in-depth data than questionnaires, action plans, and focus groups. However, it is important to consider costs and utility, particularly when considering evaluation at Levels 1 and 2. Guiding Principle 2 states, “When evaluating at a higher level, the previous level does not have to be comprehensive.” For example, if you plan to evaluate the program to Level 3, it would not be cost effective to use interviews to collect Level 2 learning data.

Interviews can be structured or unstructured. Unstructured interviews allow for greater depth of dialogue between the evaluator and the participant. Structured interviews work much like a questionnaire, except that there is rapport between the evaluator and the participant: the respondent has the opportunity to elaborate on responses, and the evaluator can ask follow-up questions for clarification.

Interviews can be conducted in person, over the telephone, or online. Interviews conducted in person have the greatest advantage, because the person conducting the interview can show the participant items that can help clarify questions and response options. In-person interviews also allow for observation of body language that may indicate that the participant is uncomfortable with the question, anxious because of time commitments, or not interested in the interview process. Unlike the situation with an email or web-based questionnaire where the disinterested participant can simply throw away the questionnaire or press the delete key, in an interview setting, the evaluator can change strategies in hopes of motivating participants. Interviews are used when the evaluator needs to ask complex questions or the list of response choices is so long that it becomes confusing if administered through a questionnaire. In-person interviews are often conducted when the information collected is viewed as confidential or when the participant would feel uncomfortable providing this information on paper or over the telephone.

While interviews provide the most in-depth data, they are also the most expensive. Scheduling interviews can be a challenge with busy managers, professionals, and sales staff. If possible, consider using a professional interviewer, who is skilled at interviewing as well as at using the ROI Methodology. The interviewing process can be daunting, especially when asking questions related to Level 4 business impact measures, isolation, and data conversion. A third-party interviewer skilled in these techniques can ensure that the data obtained are accurate and credible when presented to stakeholders during the reporting phase.

Focus Groups

Focus groups are a good approach for collecting information from a group of people when dialogue among the group is important. Focus groups work best when the topic is important to the participants; high-quality focus groups produce discussions that address the topics you want to know about. The key to a successful focus group is to keep it focused. Serious planning is necessary to design the protocol, because the conversations that transpire are constructed conversations focusing on a key issue of interest.

Action Plans and Performance Contracts

In many cases, action plans are incorporated into the program. They are used to collect Level 3 and Level 4 data. Prior to the learning program, participants identify specific business measures they need to improve as a result of the program. Through the process they, along with their program leader, identify specific actions to take or behaviors that they will change to target improvement in those measures.

Performance Records

Performance records are organizational records. Data found in performance records represent standard data used throughout the organization to report success for a variety of functions; using performance records as a data collection method can save time and money. Sales records and quality data are generally easy to obtain. However, not all measures of interest are readily available in the records. It is a wise investment of your time to learn what data are currently housed within the organization and can be utilized or referenced for the program.

GENERATING HIGH RESPONSE RATES

An often asked question when considering the data collection process is, “How many responses do you need to receive to make the data valid and usable?” The answer is: all of them! Guiding Principle 6 states that if no improvement data are available for a population or from a specific source, it is assumed that no improvement has occurred. While it is unlikely that 100 percent of potential respondents will provide data, it is important to collect as many responses as possible. Inferences cannot be made about nonrespondents, so if 20 participants are involved and only 10 provide data, results are reported for only those 10, and all the analysis and the ROI are based on the 10 responses. This conservative standard ensures that credible results are reported.

Reporting for nonrespondents would inflate the results based on an assumption for which we have no basis. At the same time, Guiding Principle 10 states that the costs of the solution or program should be fully loaded for ROI analysis, so we account for the cost of learning for all 20 participants. The key, then, is to develop a strategy to obtain responses from as many potential respondents as possible.
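The arithmetic behind this conservative standard can be sketched in a few lines. The figures below (a $3,000 benefit per respondent, a $1,000 cost per participant) are hypothetical, chosen only to illustrate how Guiding Principles 6 and 10 interact: benefits are counted only for the 10 respondents, while costs are fully loaded for all 20 participants.

```python
def conservative_roi(respondent_benefits, cost_per_participant, total_participants):
    """Return ROI (%) using only reported benefits but fully loaded costs.

    respondent_benefits: annualized monetary benefits, one entry per
    participant who actually responded (nonrespondents contribute nothing,
    per Guiding Principle 6).
    """
    total_benefits = sum(respondent_benefits)                 # respondents only
    total_costs = cost_per_participant * total_participants   # all participants
    return (total_benefits - total_costs) / total_costs * 100

# 20 participants enrolled, but only 10 reported improvement data.
reported = [3000.0] * 10   # assumed $3,000 annual benefit per respondent
roi = conservative_roi(reported, cost_per_participant=1000.0, total_participants=20)
print(f"ROI = {roi:.0f}%")   # (30,000 - 20,000) / 20,000 x 100 = 50%
```

Note how the denominator uses all 20 participants: if benefits were also assumed for the 10 nonrespondents, the reported ROI would double on the strength of data that were never collected.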

Table 3-9 lists a variety of actions you can take to ensure an appropriate response rate. Start with advance communication about the evaluation. Clearly communicating the reason for the evaluation helps participants understand that it is not about them; it is about improving the program. Tell participants who will see the results, and assure them that they will receive a summary. If you are using a questionnaire as the data collection instrument, keep it as brief as possible by asking only the questions that are important to the evaluation. If possible, have a third party collect and analyze the data so participants can be confident that their responses will be held in confidence and their anonymity maintained.

IDENTIFYING THE SOURCE

Selecting the source of the data is critical in ensuring accurate data are collected. Sometimes it is necessary to obtain data from multiple sources. A fundamental question should be answered when deciding on the source of the data: Who (or what system) knows best about the measures being taken?

The primary source of data for Levels 1, 2, and 3 is the participants. Who knows best about their perception of the program, what they learned, and how they are applying what they learned? At Level 3, however, it may also be important to collect data from other sources, such as the manager, to validate or complement the findings.

TABLE 3-9. Increasing Response Rates

Source: Phillips, P.P., J.J. Phillips, and B. Aaron. (2013). Survey Basics. Alexandria, VA: ASTD Press.

Performance Records

Of the variety of possible sources, the most credible is the organization’s internal performance records. These records reflect performance in a work unit, department, division, region, or the organization as a whole, and they can include all types of measures that are usually readily available throughout the organization. They are the preferred source of data for Level 4 evaluation, since they usually reflect business impact data.

Participants

Participants are the most widely used source of data for ROI analysis. They are always asked about their reaction to the program and the extent to which learning occurred. They are also often the primary source of data for Levels 3 and 4 evaluation: they know what they do with what they learned, what may have prevented them from applying it, and they have insight into the impact their actions have on the business.

While some perceive participants as a biased source, if they understand the purpose of the evaluation, and that it is not about them but about the program, participants can set aside their personal feelings and provide objective data.

Participants’ Managers

Managers of the participants are another important source. In many cases, they have observed the participants attempting to use the new knowledge and skills. Managers who are actively engaged in the learning process often support participants to ensure that application occurs. Data from managers can balance the participants’ perspectives; when collecting data from managers, keep in mind any potential bias from this source.

Participants’ Peers and Direct Reports

When evaluating at Level 3, participants’ peers and direct reports are a good source of data. The 360-feedback evaluation provides one of the most balanced views of performance because it considers the perspective of the participants, their managers, their peers, and their direct reports. While gathering input from peers and direct reports can increase the cost of the evaluation, their perspective may add a level of objectivity to the process.

Senior Managers and Executives

Senior managers and executives may also provide valuable data, especially at Level 4. Their input may be somewhat limited if they are removed from the actual application of the knowledge and skills. However, they may play a key role in the data collection process when a high-profile program represents a significant investment for them.

Other Sources

Internal and external experts and databases are a good source of data when converting business impact measures to monetary value. The ideal approach is to obtain monetary values for the business impact measures from internal experts, or from external databases when the values cannot be found in the organization’s records.

DETERMINING THE TIMING OF DATA COLLECTION

The last consideration in the data collection process is timing. Typically, Level 1 data are collected at the completion of the program, and Level 2 data are collected during or at the completion of the program.

Level 3 and Level 4 data collection occurs after application has become routine, the point at which new behaviors are internalized or the actions are completed. The goal is to collect data as soon as possible so participants can connect the application to the program. Typically, Level 3 data collection occurs three weeks to two months after the program is complete; programs whose skills or actions are applied immediately upon conclusion should be measured within days. With Level 4 data, the timing may differ from that of the Level 3 evaluation, depending on data availability, stakeholder requirements, and the opportunity for the measure to improve. The issue is this: What is the delay, or lag time, between application and the corresponding impact? Sometimes there is no delay; at other times it may be several months. Usually, Level 4 data are collected from three weeks to four months after the program.

While the ROI calculation is based on an annual benefit, it is unlikely that you will wait a full year to capture Level 4 data. Senior executives usually want to see results sooner rather than later. If the program was introduced to solve a problem (such as unsatisfactory sales revenue), executives and senior managers want the data soon; otherwise, the decision will be made without the data. It is ideal to collect the Level 4 measures either at the time of Level 3 data collection or soon after, once impact has occurred. Those measures are then converted to monetary benefits and included in the ROI calculation.
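The conversion step described above can be illustrated with a minimal sketch. All figures here are assumptions for illustration (a quality measure improving by five units per month, a $200 value per unit, and 70 percent of the improvement attributed to the program via the isolation step the chapter closes with); the point is only to show how a monthly Level 4 improvement becomes the annualized monetary benefit that feeds the ROI calculation.

```python
def annualized_benefit(monthly_improvement, value_per_unit, isolation_factor):
    """Annualize one business measure's improvement.

    monthly_improvement: change in the measure (units per month)
    value_per_unit: monetary value of one unit of the measure
    isolation_factor: fraction of the improvement attributed to the
                      program (0-1), determined by isolating the
                      program's effects from other influences
    """
    return monthly_improvement * value_per_unit * isolation_factor * 12

# e.g., 5 fewer defects/month, $200 per defect, 70% attributed to the program
benefit = annualized_benefit(5, 200.0, 0.70)
print(f"${benefit:,.0f} per year")   # 5 x 200 x 0.70 x 12 = $8,400
```

A benefit computed this way for each impact measure, summed across measures, would form the numerator of the ROI calculation described earlier in the chapter.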

Sound data collection strategy is imperative for achieving credible results. Ensuring that the most appropriate methods, sources, and timing are employed in the data collection process will yield results that are reliable and useful to stakeholders. However, it is through the analysis that the real story of learning success is told. Analysis begins with isolating the effects of the program on improvement in business measures.

FINAL THOUGHTS

This chapter introduced the concept of achieving business alignment, which is important for any program, particularly those with significant business impact. It also discussed the importance of evaluation planning (to maintain alignment throughout the evaluation) and data collection, and outlined the various methods of data collection. Using these methods, you will be able to collect the most credible and timely data and begin the data analysis, which is discussed in the next chapter.
