Evaluating and Innovating in the Virtual Space

It feels like a world ago. When I was a young teen, my high school driving instructor left our class speechless when he asked a single question: “Where should your focus be when you’re driving?” Our class eventually cobbled together an answer, which sounded something like “Where you are on the road?”

“No,” was his response.

We were stunned, and I’ll never forget his explanation. “Your focus shouldn’t be where you are … but where you’re going to be.” Ah, a trick question! We certainly wanted to take issue with this, but as 16-year-olds, we intuitively knew better than to argue with the only adult authority on the subject in the room.

If we apply the point of this story and the opening quotation to a broader context, we see that technology—and specifically, virtual training—is also a moving target. As the title of this chapter suggests, evaluation and innovation are dual points of focus for this movement. First, with evaluation, there is a need to focus on what virtual participants are going to do with what they’ve learned. You don’t want to focus exclusively on where you are (participants’ satisfaction with the training), but rather on where you’re going to be (learners’ workplace application and the impact on the organization). Whether or not participants liked the virtual training—although this plays into motivation—is not the be-all and end-all.

Second, with innovation, you want to be aware of new developments in the virtual training industry and stay abreast of current trends. Innovations can also inspire your own experimentation for the betterment of live online learning. Be careful not to limit your vision to where you currently are; keep an eye on future developments in the field, especially as the pace of technological change accelerates. From the late 1990s through 2019, many web conferencing platforms entered and exited the market. With a few exceptions, the platforms that remained stayed relatively similar. But the explosive global expansion of virtual training platform use during the COVID-19 pandemic prompted accelerated improvements, sometimes even weekly.

Evaluation and innovation go hand in hand. Each informs the other. For example, we evaluate something to identify what is and isn’t working. Innovation then often emerges when something isn’t working and there is a problem to solve. Evaluation is one of the more challenging tasks we do as learning and development practitioners. And, if we are honest, evaluation may be one of those things that we tend not to focus on or do as often as we should. After all, you may consider yourself a professional designer or trainer, not a professional evaluator. But we should always evaluate the effectiveness of what we do. We have a vested responsibility to identify—or at least enlist partners to identify—whether our virtual training programs are having an impact, large or small. Otherwise, how do we know whether we are hitting or missing the mark?

Evaluation also helps reduce uncertainty. Any data you collect helps you make decisions about what and where to improve, so we must measure the right things. In some organizations, attendance is the only metric passed on to executives regarding training. Although attendance is certainly one metric, it does not tell us whether participants learned anything, whether they applied it, or whether the training had an impact on the organization. We need to have data at the ready to show whether training is making a difference.

As previously established in this book, online learning is here to stay. Organizations around the globe are investing more learning and development resources in it. Executives also have higher expectations for L&D. LinkedIn Learning’s 2021 Workplace Learning Report surveyed more than 5,000 people (including learning professionals, managers, and learners) from 27 countries. L&D professionals said they expected their budgets to increase and predicted a continued shift away from instructor-led training (ILT) to online learning. “In early 2020, 38 percent of L&D pros expected to spend less on ILT and 57 percent expected to spend more on online learning. Today, those numbers are significantly higher: 73 percent of L&D pros expect to spend less on ILT and 79 percent expect to spend more on online learning” (Van Nuys 2021). This increasing allocation of resources to online learning is telling. We need to evaluate the virtual training programs we create with those resources to demonstrate the resulting value.

In this chapter, we explore the final capability, evaluating impact. We will identify a virtual training approach for your consideration, highlight helpful evaluation frameworks, and offer examples for evaluating your virtual training. The second half of this chapter addresses innovation—because if something isn’t working, we need to try something different. We should always be evaluating, because it shows us whether we’re meeting our goals. And we should always be innovating, so we can continuously improve. Together, they can take us to the next level.

THE BIG IDEA

To continually improve, evaluate the effectiveness of your virtual training programs and push past the limits of what you’ve previously done to discover what could still be.

Evaluating Virtual Training

Clayton Christensen famously developed the Jobs to Be Done (JTBD) theory. This theory stems from one central question: “What is the job a person is hiring a product to do?” (Christensen et al. 2016). This succinct, direct question can be a useful one for learning and development to ask. When virtual training—or training in general—is determined to be the best solution, you can use an adapted version of JTBD. For example, at the outset of a project, consider asking stakeholders, “What is the job you are hiring virtual training to do?” This question single-handedly drives to the heart of the workplace performance change needed. What is the job? How they answer can translate into your performance objectives for the course. From there, you can determine any knowledge or learning objectives to support the doing. It starts with a front-end analysis question like, “What should employees be able to do after completing this virtual training program?” In addition to asking stakeholders this question up front, you can also ask a few learners who will be in the training. As we explored in chapter 2, not only can you gather perspectives from a few sample learners early on for feedforward advice on designs, but you can also gather their perspective even earlier regarding the question, “What is the job you are hiring this virtual training program to do for you?” Learner responses are always insightful—because they offer a different perspective—and they can guide your design trajectory.

Early on, when you first meet with stakeholders, internal customers, or external clients to conduct a front-end analysis, you ask questions to identify the workplace problem that needs to be solved. Yet often the “problem” may not be the real issue. If you dig a bit deeper, you’ll discover that beneath the surface, there is sometimes a causal, underlying core problem.

Training is often identified too quickly as a solution when it may not be the most appropriate one, depending on what the core problem is. Let’s say, for example, a customer service team receives excellent ratings from customers, except for one staff person. Rather than deliver customer service training to the entire team “because it would be good for everyone anyway,” a more targeted strategy would be to offer customer-service coaching to the one employee, who may not even realize their service skills are subpar. In other workplace situations, all that may be needed is a type of workflow learning or performance support, like a job aid to support a procedural task or to illustrate how to use a system in the flow of work.

This book includes many recommendations from evidence-based practice and learning-science research. Pause and consider for a moment what science is at its essence. With science, we observe, take notes, and collect data to test whether our ideas hold weight. We are empirical. Looking through this filter can also inform our definition of evaluation. We, too, want to observe the effects of the virtual training programs we create, so we can improve them and know whether they were effective. To be more rigorous about it, we can take measurements. These methods help us determine the overall impact of the virtual training programs we design and deliver.

A talented evaluator once taught me to always begin by asking an internal customer interested in evaluation, “What do you want to measure?” We can usually measure anything, but it’s important to measure the right things. Our evaluations should be twofold: assess the effectiveness of the program from the learners’ perspective based on their experience, as discussed in chapter 2, and evaluate it from the stakeholders’ perspective in terms of learners’ ability to apply what they learned to their jobs and have a larger impact on the organization.

So, in virtual training, where do we begin? We begin with the knowledge, learning, or performance objectives, which come from the front-end analysis you conduct with the internal customer, business area, or external client. Once the objectives are identified, it is at this early juncture that you want to create an evaluation plan for the training program. Consider the following elements as you develop your evaluation plan:

• What will you evaluate (up to what level)?

• Who will evaluate it?

• How often and when will you evaluate?

• What will you do with the evaluation data once it’s available?

• With whom will the evaluation data be shared and why?

When you are considering what to do with the data, one of the best places to start is with the end in mind. Clarify how the data will be used to make decisions. Know what you’re going to do with it before you collect it. According to Douglas Hubbard (2010) in How to Measure Anything, the goal is to reduce uncertainty. Any data you collect can help reduce the uncertainty about something and therefore inform future decisions about it.

For example, let’s say you delivered a virtual training program to customer service staff on how to efficiently resolve customer issues via phone and reduce wait times. If your objectives are to give staff statement prompts they can use to quickly resolve an issue and close a call with courteous, polite service, you should convert these objectives to metrics you want to evaluate later (such as wait times, call length, and customer satisfaction). Then determine how you might collect this data. For example, to measure impact, you’ll want to look at customer service surveys as well as wait time data to determine if callers are spending less time in the queue waiting for a representative. Lastly, determine with whom this data will be shared and how it can inform decision makers. This plan is determined on the front-end right after you identify learning objectives.

PRO TIP 75

Create an evaluation plan in the design phase of your project, and evaluate learning based on the knowledge and performance objectives identified at the outset.

How Do You Evaluate?

Another way to think about evaluation is to look at it from the perspectives of the different audiences with a vested interest and determine who needs to know what. For example, when you train, you have multiple audience groupings who may be interested in the feedback: the learners, their managers, a design team, stakeholders, and your training delivery team (such as a producer or facilitator). The delivery team will want to know if learners found the virtual training valuable. Managers should see the virtual training’s worth, because they are investing employees’ time away from functional work. Executive leadership should see the overall impact on the organization and, in some cases, the ROI, especially for enterprise-wide or highly visible virtual training programs. Ideally, customers should experience the ripple impact as well. Be sure to capture your own perspective as a virtual trainer, and your producer’s as well. How did it go for you? Did it feel like you were connecting? Were people engaged?

Then, once you know who your target audience is for your evaluation data, you need to collect it. There are several ways to measure the effectiveness of virtual training programs. Some include direct observation, retention or turnover metrics, quality metrics, culture or employee satisfaction surveys, knowledge checks, skill assessments, interviews, key performance indicators, focus groups, learning management system test reporting, and competency assessments. Interviews, for example, are an opportunity to examine more deeply why learners may or may not have applied what was taught in virtual training to their jobs. Knowledge checks and assignments can serve as pre-tests ahead of virtual training to gauge where people are, where they’re struggling, and whether the course is the right fit. After the training, you can conduct performance evaluations using rubrics to evaluate demonstrated competencies or abilities covered in virtual training. For example, once a learner demonstrates proficiency in competency-based virtual training, they then receive credit for course completion or competencies attained.

PRO TIP 76

Determine which levels of evaluation are most important to your customers, employees, their managers, and stakeholders to measure the effectiveness of your virtual training programs.

Collect Early Feedforward Advice: Evaluate As You Go

A type of evaluation that you can conduct during the design and prototyping phase of creating virtual training is called feedforward or formative evaluation. This is what Michael Allen (2012) advocates in his Successive Approximations Model, which leverages continuous rapid prototyping in an iterative process throughout the creation and design of the training program. This early feedforward can be invaluable because input and feedback are collected on the initial program designs.

You can invite a few end users or staff who are in the target audience to be part of the design team. This way, you can run design ideas by them as you go. Who better to get feedback from than the ultimate end user? The reason this information is collected at the front end is that it saves time in the initial stages of design and allows for rapid prototyping and iteration. If feedback is collected at the back end, it’s too late to adjust, and too much time and too many resources have already been invested.

In contrast with formative evaluation (which is part of the design process), summative evaluation comes at the end, when you collect feedback from those who experienced and attended the virtual training. This is where learners might click a link to an online evaluation in the chat or receive an email that takes them to an online survey where they can evaluate the class.

Evaluation Frameworks to Guide You

There are a variety of influential frameworks for evaluation and measurement in the field of learning and development. Some include Katzell’s Hierarchy of Steps, the Kirkpatrick Model’s four levels of evaluation, and the ROI Institute’s ROI Methodology, which also includes a process model. Let’s review each and then connect back to virtual training.

Katzell’s 4 Steps to Evaluating Training

In the early 1950s, prominent industrial-organizational psychologist Raymond Katzell originated the concept of a hierarchy of steps to evaluate training programs. This organizing structure laid a foundation for those who would later be inspired by his work. Step one identifies how “trainees feel” about the training. Step two identifies whether they learned through “knowledge and understanding.” Step three identifies the extent of “on-the-job behavior changes” when they returned to their work. And step four looks at “any ripple effects” from these behavior changes, such as effects on absenteeism or production (Kirkpatrick 1956; Smith 2008).

The Kirkpatrick Model

A model of measurement widely adopted across the talent development industry is the Kirkpatrick Model. In 1959 and 1960, Donald L. Kirkpatrick first published articles based on his PhD dissertation about training evaluation in the ASTD Journal. The four words he identified in those articles later became known worldwide as the four levels of evaluation: reaction, learning, behavior, and results (Kirkpatrick 1996).

According to the New World Kirkpatrick Model from the Kirkpatrick Partners (2021), the following are updated definitions of the original four levels:

Level 1: Reaction evaluates “the degree to which participants find the training favorable, engaging, and relevant to their jobs.”

Level 2: Learning evaluates “the degree to which participants acquire the intended knowledge, skills, attitude, confidence, and commitment based on their participation in the training.”

Level 3: Behavior evaluates “the degree to which participants apply what they learned during training when they are back on the job.”

Level 4: Results evaluates “the degree to which targeted outcomes occur as a result of the training and the support and accountability package.”

The ROI Process Model

Jack J. Phillips developed the ROI Methodology, which is a systematic approach to help organizations evaluate and improve programs and projects for greater impact. Just as chapter 2 discussed the influential role of design thinking, the ROI Methodology “uses design thinking principles to design for results needed” (Phillips, Phillips, and Ray 2020). This methodology helps organizations collect both qualitative and quantitative data to measure success for training programs like virtual training along a chain of impact from initial planning to requesting more funding. It also includes techniques to help isolate the effects of training programs for more credible data. In 1992, Jack Phillips founded the ROI Institute, which works with network partners in more than 70 countries around the world. Jack and Patti Phillips at the ROI Institute have also identified an ROI Process Model. This five-level model acknowledges an initial level for input and adds a fifth level for ROI, or return on investment:

• Level 0: Input

• Level 1: Reaction

• Level 2: Learning

• Level 3: Application

• Level 4: Impact

• Level 5: ROI

Level 0: Input

The input level acknowledges measures such as the number of people involved, their input of time into the process, types and number of programs, the scope, and costs. “Input is important but doesn’t speak to the outcomes or results” (Phillips, Phillips, and Ray 2020).

Level 1: Reaction

The first level, then, is learner reaction. Do participants see the value in what you do? Is it relevant to them? Is it important to them? Would they recommend it to others? This reaction data can be collected through surveys or evaluations that learners complete in short spurts throughout the training or near the end of a training program. It’s a way to hear feedback about the training directly from the learners. For example, survey questions or online evaluations might ask if learners found the training valuable, if they thought it was engaging and relevant, how they would rate the facilitator’s expertise, how they would rate the quality of the handouts or participant guide, if they would recommend the program to others and why or why not, how usable the technology was, and so on. It is at this level that you identify what participants think and feel (cognitive and emotional learning dimensions) about their overall learning experience.

Level 2: Learning

Level 2 evaluates participant learning and, ideally, new knowledge construction. This level looks at whether participants have created new schemas or mental models, retained new knowledge, and acquired new skills. It’s a measure of not only knowledge acquisition but also skills attainment. Additionally, through reflection and learning from others in discussions, participants may gain new insights or become aware of things they had not previously considered. The metrics at this level all measure the learning component. And learning is the foundation for using, which comes next.

Level 3: Application

The third level is application. Learning must be applied; otherwise, our virtual training programs can be viewed as a waste of time from the perspective of executives. Was there a behavioral change? Was improvement noticeable and measurable? Did participants apply what they learned to their jobs based on the objectives? “This measure typically takes place at least 30 days after the training program ends” (Huggett 2017).

Level 4: Impact

Application is causal, and all causes have an effect. Application’s effect in this context is the fourth level—the impact on the organization. Usually, these kinds of metrics are already recorded in organizational systems as productivity, waste or rework, the time it takes to do something, sales, customer satisfaction, or customer complaints. For example, if learners consistently make performance changes at Level 3, this can affect results at an organizational level, such as raising customer satisfaction levels or reducing employee attrition. Notably, the results in Level 4 are often the perceived value of the training for executive leadership.

Level 5: ROI

For this level, executives may be wondering if expensive training programs are really working, especially if they’re critical to the organization or connected to strategy that the executive team cares about. This can influence decisions about whether to devote more resources and continue the program. For this reason, executives may request ROI. In short, return on investment answers the question: For every dollar invested in a training program, how many dollars were returned after the investment was recovered? The Phillips formula for measuring the ROI percentage is:
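ROI (%) = (Net Program Benefits ÷ Program Costs) × 100

Net program benefits are the program’s monetary benefits minus its costs. As a quick illustration (with made-up numbers), a program that costs $100,000 and produces $150,000 in monetary benefits has net benefits of $50,000, for an ROI of ($50,000 ÷ $100,000) × 100, or 50 percent. In other words, every dollar invested was recovered, plus another 50 cents was returned.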

Alignment Model

The ROI Institute’s Alignment Model brings it all together by visually depicting the alignment across stakeholders’ needs (the why), the corresponding objectives for each (the how), and the results using the five levels (the what; Figure 10-1). “The objectives derived directly from these needs are defined, which makes a strong case for having multiple levels of objectives that correspond to different needs” (Phillips, Phillips, and Ray 2020). For virtual training, reference this model to best understand the needs driving target objectives and what the results will look like at each level. Implement your program keeping targeted results in mind.

Figure 10-1. The ROI Institute’s Alignment Model

Evaluate Virtual Training to the Level Needed

Perhaps not surprisingly, the most common method of evaluation among organizations that conduct virtual training boils down to tracking attendance. As ATD’s 2021 report, Virtual Classrooms: Leveraging Technology for Impact, attests, the clear majority (88 percent of the hundreds of organizations surveyed) evaluate training based on attendance and completion.

As you are aware, just because an employee logs in to a virtual training class and remains online for the duration does not guarantee they learned anything, not to mention demonstrated competency. Nor does it prove that they will apply what they have learned to their functional work. This is why virtual training should be evaluated above and beyond attendance. We also need to take a closer look at our reward system. When we award a certificate for attendance or a digital badge for logging in at the right time and logging out at the right time, aren’t we rewarding the wrong things? Wouldn’t it be better to test for competency and then, once they have demonstrated proficiency, award the digital badge or certificate?

In the rest of this section, we’ll look at some ways to ensure your evaluation efforts for virtual training are a success.

The Manager’s Critical Role to Aid Learning

We have long known that managers play a crucial role in the effectiveness of their employees’ learning. But did you realize how crucial they are? “Research has consistently shown that the managers of a group of participants are the most influential group in helping participants achieve application and impact objectives, apart from their own motivation, desire, and determination. No other group can influence participants as much as their immediate managers” (Elkeles, Phillips, and Phillips 2014). To take this a step further, organizations that shared concept card reminders, reinforcement aids, follow-up activities, or other resources with learners’ managers after the training were significantly more likely to be high performing (ATD 2021).

The single most crucial factor in whether participants apply what they learn after attending a training session is whether the manager sets expectations with the employee before they attend (Elkeles, Phillips, and Phillips 2014). This expectation setting is also important for virtual training. With live online learning, this pre-training touchpoint could be a brief on-camera online meeting or a quick phone call in which the manager reiterates why the training topic is important for their direct report, ties it to organizational goals, explains what they expect from the employee after completing the training, and expresses how interested they are to hear how it goes.

The second most important factor in whether employees apply what they learn after training is whether managers follow up afterward as well (Elkeles, Phillips, and Phillips 2014). Managers need to be strategic—they aren’t likely to have time to meet with every direct report before and after every training topic. However, for the more robust virtual training programs, and where it makes sense, they should. Either remotely or on-site and in person, managers could ask their direct reports what they learned from their virtual training class, incorporate key action items from the training into performance appraisal goals, or observe their demonstration of the competency on the job and provide data back to the virtual trainer.

PRO TIP 77

Communicate to managers the important role they play in evaluating and reinforcing what participants learn in training.

When my virtual training programs span several weeks, I email managers before the virtual training begins about what their employees will be learning and what the expectations are for the program. I also let them know that after the virtual training, employees will have a completed action plan to share with their manager. I give managers a heads-up and ask that they have a conversation with their employees within one week after the program is complete to discuss the action plan. Likewise, I make certain that all participants are aware that their managers are there to support them and that their action plans will hold them accountable for applying the learning to their work.

To aid the post-training discussion between managers and virtual participants, I send managers a sample email with suggestions for conversation starters and prompts. Figure 10-2 shows one example of an email prompt I have sent. Sometimes managers will even respond in kind with a thank you email or reach out to let me know they will soon be meeting with staff or that they did meet with them. Either way, we know managers are most influential in learners’ application in the workplace and we need their support.

Figure 10-2. Example of Email Request for Manager Follow-Up

Next, let’s look specifically at how you might use the ROI Process Model, for example, to evaluate virtual training. Hopefully, these examples will inspire and spark ideas for you to use in your virtual training programs.

Level 1: Evaluating Learners’ Reaction to Virtual Training

While organizations most often track virtual training by attendance and completion, the second most common evaluation method is Level 1 evaluations. According to ATD’s 2021 virtual training report, 68 percent of respondents consistently evaluate with Level 1 evaluations, or “smile sheets” as they are commonly called. This method is simple and much quicker to implement than some of the other levels. The goal is to gather data to inform decisions about whether to continue offering the class or what parts to tweak and improve. “The challenge is to keep it simple, limit the number of questions, and use forced choice questions with space for comments” (Phillips, Phillips, and Ray 2020). Additionally, there may be questions to collect demographic or marketing data, such as how participants heard about the offering and what other courses they would like to see, as well as a place for comments on what they would recommend for course improvements. Level 1 evaluation sheets often ask about criteria like:

• Whether the class met course objectives

• Perceived value and whether it was worth their time

• Whether they liked it, how much or how little, and why

• Whether the class or program length was appropriate

• How well the materials were organized

• Instructor credibility, knowledge, and preparedness

• Whether the virtual training platform and tools were easy to navigate

• Whether the links to course materials were simple to find

• Whether the digital participant guide was helpful

• Whether the assignments had clear instructions

The vast majority of my virtual training programs are measured at this level. It’s easy to do, and you can get a quick pulse on what learners thought of the training. The secret to getting completed surveys back is to keep your evaluations short. Just ask the critical questions.

In training classes that span several weeks, I’ll give participants a two- to three-question survey at the end of each live online session to take a quick pulse check on where people are and what they think of the class so far. You can also use the whiteboard to have them type two to four words describing their experience that day, or use a quick online poll from the platform to gauge how their virtual experience went. This way you’re collecting feedback as you go. If I learn about critical improvements that need to be addressed quickly, I can pivot as needed in the middle of the program and genuinely thank participants for their feedback. I always use some type of short Level 1 evaluation at the end too, and express verbally that we love to hear feedback from learners so we can make the program better.

Learning expert Bill Horton recommends asking online learners to contribute as many suggestions as possible for improving the course to generate an abundant supply of ideas (Kirkpatrick and Kirkpatrick 2006). This improves the likelihood that you will receive more qualitative recommendations out of a larger sample. As we learned in chapter 2, the best ideas often come from the learners themselves. One of my favorite things to do with Level 1 feedback throughout is to encourage learners to use emojis, animated gestures like thumbs up, or applause to provide feedback on how things are going for them. As with everything, use this measure judiciously; you don’t want to overuse it and weaken its effectiveness.

Sometimes, you can even incent learners to complete their virtual training program evaluations by offering a drawing with a prize at the end. Make it clear that participants will be eligible for the drawing if they complete the evaluation. Once they have all submitted their evaluations online—still during class—ask them to raise their virtual hands or click the green check so you can know when they’re done. Then use the randomizer tool (if available on your platform) or use an online app to scramble their names and select one for the prize.

To provide your virtual learners access to more official Level 1 evaluations, you can include a link in the chat or send a prompt follow-up email containing the evaluation. It’s a best practice to ask learners to open the link, complete the evaluation while they are still in class, and submit it digitally. To ensure a higher response rate, I try to carve out time for learners to complete their evaluations during the virtual training. Make it the second-to-last activity you do—not the last activity, because sometimes learners have an online meeting right after your virtual class, have to jump out early, and are then not able to complete it. This is important because the likelihood someone will complete the evaluation decreases once they leave the class. We also want to end on a high note, leveraging the peak-end rule discussed in chapter 2. For this reason, it’s best to complete evaluations in class right before your final, closing activity.

Keep in mind it’s all well and good to track participant responses for Level 1. But what you do with the data is even more important. Do you save it in a file never to be looked at again? Do you share it with managers? Do you share it with L&D directors, the chief learning officer, or the executive team? Clearly, the virtual trainer needs to review response summaries to collect feedback and input for improving the class. The virtual trainer’s manager also needs to see a summary of responses to inform their decisions about whether to offer the class again.

In the past, I’ve isolated some of the best comments and used them as marketing testimonials for promoting future class offerings. On your evaluation, you can include a statement saying that signing the document gives approval for the respondent’s name and comments to be used in marketing promotions. Or, if they wish to maintain anonymity (and for more honest responses), include an option that consents to using their comments without their name. Sometimes digitally checking a box with a typed name, or leaving the name line blank to remain anonymous, is sufficient. From an efficiency standpoint, it’s also better to avoid having participants email evaluations back. Instead, all results are automatically tabulated by your LMS, a survey provider like SurveyMonkey, or other online means.

Level 2: Evaluating Learning From Virtual Training

For Level 2, virtual trainers or novice online instructors may forget the importance of including knowledge checks along the way. These should be part of the built-in design of virtual training. Instructors can include creative, fun feedback opportunities where learners may not even realize they are being checked for new knowledge acquired in the course. Adapted online games can work, or you can use polls for quizzes or other apps for surveys, quizzes, and tests. Regardless, it is an opportunity to assess the readiness of your learners and determine if anything needs to be reviewed. In the 2021 ATD study Virtual Classrooms: Leveraging Technology for Impact, 57 percent of organizations said they measured the effectiveness of their virtual training programs at this learning level with quiz scores and knowledge checks.

In my virtual training classes, I like to incorporate quizzes as polls to check knowledge at this level. Most platforms have polling features you can use, and some even allow you to create and store tests that you can reuse. For example, in GoToTraining you can create tests ahead of time and then use them to assess competency knowledge before participants can earn a certificate or digital badge within their LMS. MS Teams also has the ability to link to surveys. This way you can ask learners to take pre-tests and post-tests and measure any knowledge gains by comparing the two sets of scores. It’s important to note that in writing your quiz questions, you should emphasize what should be done versus what should not be done. This is a clearer takeaway for the learner and more readily aids their adoption of the knowledge and behavior you want to see.
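As a simple illustration (with made-up numbers), if the class average on a 10-question pre-test is 5.8 correct answers and the average on a matched post-test is 8.6, the knowledge gain is 2.8 questions, or 28 percentage points, a Level 2 result you can report alongside your Level 1 data.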

As we illustrated in chapter 7, another way to check learning and understanding is by asking learners to annotate a slide depicting imagery in which they need to correctly diagnose something. In our example, it was medical imaging. Annotation is also a way for learners to show you they understood something or know where to look. In my virtual training, I also like to leverage chat questions to check participants’ knowledge to see if they’re tracking with me or if we need to go back and review anything (I’ll look for gaps in their chat answers or extended slowness in their responses). As online learning expert Bill Horton explains in Evaluating Training Programs, “you can embed evaluation events among the learning experiences,” but as an ongoing practice for check-ins, keep them short (Kirkpatrick and Kirkpatrick 2006).

However, you cannot stop there. According to Aaron Horwath (2021), head of learning at Creative Force, “Only being concerned with measuring Levels 1 and 2 dooms any hopes of measuring meaningful impact from the start.” As the authors of Proving the Value of Soft Skills remind us, “It’s not effective unless you have an impact” (Phillips, Phillips, and Ray 2020). People might attend, but training success should never be based on attendance. People may or may not enjoy it, but this is no guarantee the organization will improve or grow. It may contribute to retention because staff relish the opportunity to professionally develop and enjoy time spent away from their desks. And learners might be learning, but there’s no guarantee they’re applying what they’ve learned. Let’s advance to the next level and see how you might evaluate Level 3.

Level 3: Evaluating Application From Virtual Training

In the same spirit as the popular book title Telling Ain’t Training, I contend that knowing ain’t doing. Just because participants have learned how to do something in virtual training does not guarantee they will use it. As Michael Allen (2007), chairman and CEO of Allen Interactions, articulates, “Education is focused on acquisition of knowledge while training is focused on application of knowledge; in other words, education is about knowing and training is about doing.”

In short, Level 3 is all about application of the learning to one’s work. Often, the impetus for training in the workplace is some type of performance gap. So, when we measure at this level, we are following up to see if learners were able to close the gap. In this way, Level 3 also measures participants’ willingness and ability to improve workplace performance through behavior change. This often requires investing more resources (time and money) at Level 3 to observe and measure any improvements. According to ATD’s 2021 report Virtual Classrooms, only “slightly less than half of the over 300 organizations surveyed tracked how virtual training influenced their on-the-job behavior.” A great example of evaluating application is a virtual training course on cybersecurity. Imagine that the course taught learners how to identify phishing attempts that come into their email inbox and how to recognize suspicious emails that may be malicious. Learners are taught to be on the lookout for multiple spelling errors and urgency in an email message, for example. Then, two to three weeks later, the organization might send employees a few fictitious phishing emails to test those who were trained. This is a way to evaluate whether learners were able to correctly apply what they learned by identifying each phishing attempt and reporting it to their IT department.

I recommend partnering with employees’ managers for this level of evaluation. In the end, it is the manager who evaluates them for performance quarterly, semi-annually, or annually. When I have partnered with managers for Level 3 evaluations of virtual training programs, I have provided them with aids like conversation prompts for their employees’ action plans and checklists for follow-up when observing skills, and talked with them about incorporating key principles or action items into their employees’ performance appraisal goals to keep them accountable. The manager is the obvious choice as the key contact to observe and ensure that virtual learners are applying what they’ve learned, and partnering with managers means we can provide resources to help them. These aids, rubrics, summaries of key items learned, and other performance support tools then help them measure performance.

Partnering with an internal or external professional evaluator for Levels 3, 4, and 5 may also be useful. We are learning and development professionals, but not necessarily professional evaluators. By working in partnership with an evaluator, we can capture more data on these higher levels more efficiently and effectively.

There are also other steps we can take to ensure behavior change in the workplace. We discussed previously how managers’ expectation setting with participants has the biggest impact on learners’ application. In addition to managers setting expectations before and after virtual training, you can also invite virtual learners to share what they learned with their peers afterward. This, too, can make a difference. According to Dorna Eriksson Shafiei, VP of Talent Management at Atlas Copco in Stockholm, Sweden, “In China, we developed a peer-to-peer learning approach, where learners present what they’ve learned to their wider team. Having discussions about how you can apply learning to your working environment is a critical part of changing behavior” (Van Nuys 2021). Remember, the more virtual learners talk about it, write about it, reflect on it, or use it, the more likely they are to apply their new knowledge on the job.

PRO TIP 78

Partner with managers, transferring ownership to them to observe and measure application in the workplace, while you support them with aids and evaluation rubrics.

Level 4: Evaluating Impact in Virtual Training

Once you evaluate at Level 3, you will be positioned to assess Level 4. I mentioned earlier the option of partnering with an internal or external professional evaluator for the higher levels (3, 4, and 5) if needed. You can also consider partnering with stakeholders to collect Level 4 and 5 data. You support the work and guide the process, but transfer the collection, analysis, and summary work to stakeholders or other partners.

The advantage of Level 4 data is that much of it already exists somewhere in the organization because these metrics are often tracked on an ongoing basis through other means. These could include customer satisfaction ratings, performance data, employee engagement surveys, sales numbers, efficiency, retention, costs, quality improvement, or employee satisfaction. For example, if you conducted a virtual training program about how to apply privacy laws in the context of your organization’s work, you could use the number of privacy incidents at your organization before and after the training as a comparison metric. Remember to allow some time after the training before measuring the impact; 30 to 90 days is recommended. Be aware that the seeds of change can take a while to grow before you can reap results. As learning expert Bill Horton advises, “the kinds of business and institutional changes you want to measure for Level 4 seldom have only one cause. And they may take years to manifest” (Kirkpatrick and Kirkpatrick 2006).

When I consulted with an internal department on customer service training skills to be delivered virtually, we discussed the regular customer service surveys they already had in place with their external customers. As a way of measuring how effectively the service staff were able to apply what they learned from the customer service virtual training program, the director and supervisors could review the data from customer service surveys before the training as a baseline, and then, after 30 to 90 days, review the customer service surveys again to compare any changes to the data. In this case, the metrics consisted of both quantitative and qualitative data.

PRO TIP 79

Partner with stakeholders or an internal or external professional evaluator to collect organization-wide impact data when appropriate.

Level 5: Evaluating ROI in Virtual Training

For most virtual training programs, you will likely not need to measure to Level 5. Because of the time-intensive nature of collecting and calculating ROI, it may be prudent to reserve this level of evaluation for higher visibility, top-priority projects. Sometimes we need to ask ourselves, what’s the ROI on ROI?

However, if the C-suite requests ROI data to review, you’ll need to provide it. They may want to see ROI on projects that are part of the organization’s strategic goals or enterprise-wide initiatives, for example. Additionally, Level 5 may be appropriate for virtual training when it’s a robust program like leadership development that is required for all leaders in the organization and is offered multiple times per year. Executives may be interested in seeing the ROI to ensure their investment in this ongoing virtual training program is paying off. According to ATD’s 2021 report Virtual Classrooms, only 25 percent of the more than 300 organizations surveyed evaluated ROI or business results.

If you do evaluate ROI, it’s important for everyone who is part of the project (such as designers, developers, producers, and facilitators) to track their time and expenses from the outset, including material costs. Later, you can make the conversion to money where possible. When I’ve been part of measuring ROI for virtual training programs, everyone on the project has tracked their time through online project management software. This made it easier for us and was also a more reliable way to track time. It also meant that all my time on the project was tracked accordingly, including project management meetings, gathering feedforward advice from a few learners early on, design and development, coordination meetings with virtual co-facilitators, and virtual delivery.

Another example of when ROI evaluation may be appropriate for virtual training programs is when executives are interested in a cost comparison between traditional classroom training and the same program converted to online delivery. Expenditures saved on flights, facilities, food, and hotel accommodations can be counted when calculating the ROI of the converted virtual training program.
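To make this concrete with purely illustrative numbers: if a two-day classroom program for 100 participants incurs $120,000 in travel, facility, food, and lodging costs, and the virtual version eliminates most of those expenses while adding, say, $20,000 in design, platform, and production costs, the roughly $100,000 in avoided costs counts toward the program’s monetary benefits in the ROI formula shown earlier.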

Overall, evaluation is important to all training. Not only is it essential to evaluate the success of virtual training programs, but also to provide the data we need to know where change is required. As discussed in chapter 2, we look to the learning sciences as an interdisciplinary field to inform our profession about how to improve effectiveness. At its core, science is about observation and collecting and reviewing data to examine the effects of something. Isn’t this what we’re also doing when we evaluate virtual training programs? We are collecting data to examine the effects of our training programs. We are ensuring each virtual training program is “doing the job our customers hired it to do” (Christensen et al. 2016).

Future Innovative Trends in Virtual Training

The remainder of this chapter is devoted to exploring innovation for virtual training and online adult education. This is an exciting time for live online learning. The explosive, widespread adoption of web conferencing, video conferencing, and virtual training platforms in the wake of the COVID-19 pandemic has left an indelible mark. Consistent, global usage will continue to drive technology improvements, enhancements, and market competition. According to Pew Research and Larry Irving, the former head of the National Telecommunications & Information Administration, technology will be used more extensively for remote learning and education, with potentially great benefits for reskilling and upskilling staff (Anderson, Rainie, and Vogels 2021).

According to the 2021 EDUCAUSE Horizon Report, Teaching and Learning Edition, “Learning technology stands to become even more widely adopted on the road ahead, and the discovery of new needs and uses for these and other course-related tools will lead to ongoing innovations and entirely new learning technologies” (Pelletier et al. 2021). Let’s look next at our role as innovators in the live online learning field. Specifically, we’ll focus on immersive technologies, increasing global connections, and artificial intelligence.

Immersive Technologies

One trend to watch for in synchronous virtual training is XR (extended reality) immersive technologies. This is the umbrella term for VR (virtual reality), AR (augmented reality), and MR (mixed reality). These innovations, like the so-called metaverse, can help improve trainers’ abilities to deliver learning virtually. As part of these experiences, facilitators can interact with learners in meaningful ways.

Virtual reality is full immersion, where a learner might wear a VR headset. Then, you as facilitator may lead them through a fully immersive virtual environment with a simulated lesson. You can see how VR might lend itself most to learning where the spatial dimension is critical. AR is when learners view the real world through a smartphone camera with digital elements overlaid on top of the live view. MR also begins with a real-world environment, but then blends digital objects with the real world so the two can interact.

These immersive technologies can simulate role-play interaction so learners can practice a variety of topics such as conversation skills, leadership, consulting, sales skills, or customer service. They can then be tested on these skills to help support behavioral change. For example, there’s a VR simulation that allows HR professionals to practice firing an employee with unacceptable performance. Letting an employee go is obviously a very difficult, emotional, and delicate conversation. “The program uses a 3D scan of a real actor and recordings of a variety of gestures, facial expressions, and lines of dialogue. Artificial intelligence, speech recognition, and language processing features ensure that the simulated employee understands what is said and responds appropriately” (Phillips, Phillips, and Ray 2020). Mursion.com offers VR empathy simulations where digital actors react in the moment based on what a learner does and says in the simulation.

Naturally, these experiences also benefit from debrief sessions. This is where the role of facilitator may expand in the future to debriefing more immersive experiences. When I designed an escape room for 300 sales staff, which they went through in small groups, the debrief led by the employees we trained was notably the most important part from a learning perspective. This was where they could connect their experience to their own work and learn from the insights of others as they discussed how to move forward in a new way.

Immersive technologies may also provide greater opportunity for learners to experience autonomy in their learning. Participants may be able to choose from several simulated scenarios and select the one that most interests them, perform scenes with characters of their choosing, or even customize the layout or context where the scene happens. As we discussed in chapters 2 and 4, one element of motivation we can leverage is the learner’s sense of autonomy and self-directedness.

PRO TIP 80

Take risks in virtual training, experiment, and continue to innovate in new ways.

Opportunities for Increasing Global Connection

In his 1964 book Understanding Media, and in other publications, Marshall McLuhan brilliantly foresaw how electronic media would bring all human beings together, writing “This is the new world of the global village.” Indeed, our world has become a global village, and the internet was just the beginning.

With constantly evolving 21st-century technology tools, we can connect with anyone, anywhere, at any time. For virtual training, this means the road is wide open for greater collaboration across physical distances. For example, our virtual classes can include international participants in real time. Obviously, this requires careful coordination across time zones as well as thoughtful research regarding cultural differences, appropriate customs, and language translation as live captions (which some platforms already support). The technical services needed for a global village will only continue to grow and improve.

Virtual classes have few limits regarding where participants and speakers might join from. For example, if there is an expert from across the globe who can speak on your training topic or answer live questions with your class, this is now possible. Virtual designers and facilitators can think more broadly, going beyond the former limits of what was possible and imagining what is now possible.

PRO TIP 81

Invite global experts to engage learners and address their questions in real time during your live online programs.

Artificial Intelligence in Virtual Training

One technology that is exploding is artificial intelligence (AI). Those steeped in machine learning and AI are careful to qualify that it is not intended to replace jobs, but rather to assist humans in performing their jobs better. As David Gering, principal data scientist at Danaher, told me, “AI is able to analyze people’s interactions after quickly reviewing large amounts of data. It can learn from responses in the past, determine which are most important, and which are least important.”

Conversational AI is also available. This is where an AI assistant’s voice is trained on context and conversational nuances to respond to humans in a way that sounds and feels natural. For example, during Google’s 2018 developer conference, CEO Sundar Pichai unveiled and demonstrated how Google Duplex, an AI assistant, could call and book appointments on your behalf, such as making a reservation for you at a restaurant or scheduling a haircut (Google 2018). These are examples of how AI technology can assist humans by handling simple transactions and saving us time.

AI can also help virtual trainers deliver better learning experiences. According to LinkedIn’s 2020 Workplace Learning Report, “Artificial intelligence (AI) and machine learning are expected to be the next big technologies to impact learning” (Van Nuys 2020). Examples of applied AI in education are rapidly expanding, including AI products that listen to class discussions and highlight areas of improvement for instructors (Fusco, Ruiz, and Roschelle 2021). There are also AI apps that give people feedback on their presentation skills, a service akin to an instructional coach. Other uses include the ability to identify participation metrics. “In this Zoom era, we also have seen promising speech recognition technologies that can detect inequities in which students have a voice in classroom discussions over large samples of online verbal discourse” (Fusco, Ruiz, and Roschelle 2021).

There are many other ways AI could serve the virtual training field by searching for patterns in certain metrics. Following are some potential areas where AI may be able to specifically assist facilitators and producers in live online learning: producer tasks and chatbots.

Producer Tasks

For organizations or training departments of one, where a producer is not available, an AI assistant could monitor chat while the facilitator is training the class. For example, while the facilitator is explaining concepts and annotating slides, AI could be programmed to monitor texts from learners and respond with brief, prescriptive answers to common questions. If deeper questions are typed in chat, an AI assistant could interject verbally when there’s a pause in the facilitator’s speech and say, “Excuse me, Diana, Mary is asking what the most important thing is to keep in mind when conducting an effective performance review.” This way the facilitator does not need to constantly check chat, and a producer, when one is present, can focus on more significant technical challenges.

AI would not replace the producer role, but it would assist with rote tasks so the producer could focus on resolving the more challenging issues. For example, at the outset of the training, a facilitator might say, “Our producer Tom, our support bot Eva, and I are all here to help you in this virtual training program.” It’s conceivable that some day a virtual instructor may be able to select a customized voice for their support bot from a palette of options and perhaps even a choice of accents. Producers would be able to offload some of the easier technical challenges to these supportive AI agents.

Chatbots

An AI-powered chatbot or agent could serve multiple purposes in virtual training. For one, chatbots could be programmed to respond to the most common technical requests from attendees in a dedicated queue and walk them through troubleshooting steps. For example, if a learner joins a live online session and is not able to hear anything, they could type to the chatbot, “I can’t hear the session, but I can see the slides.” The chatbot would then respond with the protocol they should follow to resolve the issue. It’s like performance support. For example, it might type a response like “Thank you for letting us know. Our apologies. Please exit the platform and log in again to reset everything.”
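For readers curious what such rule-based triage might look like behind the scenes, here is a minimal, hypothetical sketch in Python. The trigger phrases, canned responses, and function names are invented for illustration only; a real chatbot integrated with a virtual training platform would be far more sophisticated.

# Hypothetical sketch: keyword-based triage for common virtual classroom issues.
# All trigger phrases and responses below are invented for illustration.

TROUBLESHOOTING_RESPONSES = [
    # (keywords suggesting the issue, canned first-step response)
    (("can't hear", "no audio", "no sound"),
     "Thank you for letting us know. Please exit the platform and log in again to reset your audio."),
    (("can't see", "no slides", "screen is blank"),
     "Thanks for the note. Try refreshing your browser; if the slides still do not appear, rejoin the session."),
    (("kicked out", "disconnected", "dropped"),
     "Sorry about that. Please rejoin using the original session link."),
]

DEFAULT_RESPONSE = "Thanks for your message. A producer will follow up with you shortly."


def triage(message: str) -> str:
    """Return a canned troubleshooting response for a learner's chat message."""
    lowered = message.lower()
    for keywords, response in TROUBLESHOOTING_RESPONSES:
        if any(keyword in lowered for keyword in keywords):
            return response
    # Anything unrecognized is escalated to a human producer.
    return DEFAULT_RESPONSE


if __name__ == "__main__":
    print(triage("I can't hear the session, but I can see the slides."))

The design choice here mirrors the chapter’s point: the bot handles the routine, repeatable issues, and anything it cannot match defaults to a human handoff.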

In the normal chat pod, AI could answer common questions on its own, particularly questions that come up repeatedly each time a virtual training class on the same topic is delivered. This would allow the trainer to continue facilitating and skip over chat questions the chatbot has already answered. Future virtual platform chats may also be able to accept more than just text, such as snippets of audio recordings or brief video messages from learners. Sometime in the future, AI could even review all responses in chat, summarize the ones that have commonality, and report aloud in a conversational voice, “Ms. Howles, 30 percent of participants today are asking about. … Would you care to respond?” This is a step toward adaptive learning, in which AI intelligently adjusts its behavior as it gathers more data. For example, David Gering explained to me, “AI observes the patterns of what you do and then re-calibrates. The more data you collect, the smarter AI becomes.”
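
Here is a minimal sketch of how that kind of chat summary might be produced. The topic buckets and keywords are hypothetical, and a production system would use semantic clustering or a language model rather than keyword counting, but the idea of reporting the most common theme back to the facilitator is the same.

# A hypothetical summarizer: group incoming chat questions into rough
# topics and report which topic is most common and what share of the
# questions it represents.

from collections import Counter

TOPIC_KEYWORDS = {
    "breakout rooms": ["breakout", "group"],
    "performance reviews": ["review", "feedback"],
    "audio issues": ["hear", "audio", "sound"],
}

def summarize_chat(questions):
    counts = Counter()
    for q in questions:
        for topic, keywords in TOPIC_KEYWORDS.items():
            if any(k in q.lower() for k in keywords):
                counts[topic] += 1
                break
    if not counts:
        return "No common themes detected in chat yet."
    topic, n = counts.most_common(1)[0]
    share = round(100 * n / len(questions))
    return f"About {share} percent of participants are asking about {topic}. Would you care to respond?"

chat = [
    "How long are the breakout rooms?",
    "Which breakout group am I in?",
    "What should I cover in a performance review?",
]
print(summarize_chat(chat))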

Another benefit of using an AI assistant and chatbots with virtual training participants from different countries is language support. If, for example, you were teaching a class with global attendees and a learner typed a question into chat in their native tongue, AI could immediately translate it into the facilitator’s language. Because chat is an immediate medium, learners would not have to worry about translations; they could simply keep chatting in their native language.
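
A sketch of that translation relay might look like the following. The translate() function here is only a mocked stand-in for whatever machine-translation service a platform actually integrates, so the example runs on its own; everything about it is an assumption for illustration.

# A hypothetical in-chat translation relay. The translate() function is a
# placeholder for a real machine-translation service and is mocked so the
# example is self-contained.

def translate(text, source, target):
    """Placeholder for a real machine-translation call."""
    mock_translations = {
        ("es", "en", "¿Dónde encuentro las diapositivas?"): "Where do I find the slides?",
    }
    return mock_translations.get((source, target, text), text)

def relay_to_facilitator(learner, message, learner_lang, facilitator_lang="en"):
    """Show the facilitator a translated copy alongside the original chat message."""
    translated = translate(message, learner_lang, facilitator_lang)
    return f"{learner} ({learner_lang}): {message}\n  -> {translated}"

print(relay_to_facilitator("Sofía", "¿Dónde encuentro las diapositivas?", "es"))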

PRO TIP 82

Be open to opportunities for conversational AI to assist with facilitation and producer roles in virtual training.

Challenges With AI

As with anything, there are both benefits and challenges with AI. The major challenge is expense, because AI requires large amounts of data to become smarter. That means building sets of many questions and triaging them with decision trees, which takes time. According to Gering, this is what equips AI to identify patterns and offer solutions. Rather than individual companies developing AI to aid virtual training, it is more likely that leading virtual platforms will find ways to integrate AI into their products and services. This might lead to commonality in AI features, functions, and support across platforms.

Another drawback is that AI is not free of bias. It has been demonstrated that if AI analyzes data that itself embodies bias, AI will perpetuate that bias. And, of course, sometimes we just want to talk to another person. Right? But as AI improves and becomes more human-like in responsiveness, conversational quality, and voice, this apprehension may diminish. In the meantime, AI could serve as frontline help, with more specific or thorny problems escalated to the virtual trainer or producer.

Be Forward Minded

Around the turn of the 20th century, my grandfather-in-law was a professional blacksmith. Soon there was talk that times were changing and something new was coming down the road … literally. The new automobile was rumored to potentially change the landscape for the blacksmithing trade. But my husband’s grandfather didn’t believe it. He didn’t think anything would significantly change, so he didn’t adapt. He did not reskill. Instead, he continued to professionally shoe horses the way he always had. When automobiles eventually became popular enough that horse-drawn carriages disappeared, he found himself out of an occupation. He lost his livelihood and learned a valuable lesson the hard way: Don’t limit your focus and vision to where you are; look where things are headed, too. By not looking ahead and adapting, we risk being left behind. We need to push past the limits of what we have known to discover what could still be.

As tools and technology evolve, they will continue to bring improvements and new affordances. The uptick in online learning budgets will also encourage new virtual training vendors to enter the marketplace. Let’s not get stuck doing what we’ve always done. We need to keep pushing ourselves to be better, learn better, and train better using evolving technology. By allowing where we’ve been to guide us, observing where we are to inform us, and imagining what can still be to inspire us, together we can push the boundaries to take virtual training to the next level.

PRO TIP 83

Keep abreast of where the virtual training industry is headed next.

Summary

You now have a blueprint for upskilling through evaluation and innovation. In this chapter, we explored evaluating impact, which is just one of eight essential core capabilities. Strive to consistently evaluate the impact of your virtual training programs. Dedicate time to assess their value, and review and analyze data so you can continue improving program offerings.

We also explored the potential future landscape for virtual training. Live online training is here to stay, and it’s brimming with opportunity. More evidence-based research is still needed to inform our practice, though, especially as the technologies that support what we do rapidly evolve and change. Continued innovation will open the door to new ways of training and learning in the virtual space.

The challenge will be to approach virtual training in such a way that we give ourselves permission to move past older paradigms, thinking patterns, tools, and the status quo. Virtual training is a different medium with different opportunities. As such, it necessitates a fresh perspective and mindset. It may be a shared virtual space, but it is no longer a self-contained, four-walled room. I prefer not to use the term virtual classroom because room is outdated and can limit forward thinking about what is possible. In past presentations, Bill Horton has referred to thinking that gets stuck in old paradigms as “horseless carriage thinking,” because people called the first automobiles “horseless carriages.” In other words, they used old terminology to describe something entirely new. It’s interesting that we still say “horsepower” to measure a car engine’s power! Old terminology, however, is no longer applicable and can limit our thinking about new tools. It is my hope that you, too, are inspired to think in new ways beyond the traditional classroom and see the potential for greater opportunity in the virtual space.

After attending and congratulating my niece on her vocal recital at a liberal arts college, I was fortunate to also meet her voice professor. Clearly, the professor was a very accomplished vocalist. What I was surprised to discover, however, was that she was still taking private voice lessons herself. Here she was at a prestigious college of music teaching vocal majors, and yet she was still honing her craft and developing professionally.

I believe that following a path of ongoing improvement should be part of our journey as well. As learning professionals who develop others, we should also be developing ourselves. In this way, we walk our talk. Regardless of our current proficiency level as learning analysts, learning experience designers, developers, facilitators, trainers, online adult educators, producers, evaluators, managers, directors, or chief learning officers, we must continue to professionally develop. To do this, we need to evaluate and innovate.

As we evaluate and look for ways to improve virtual training, it’s also important to capitalize on the benefits of incorporating virtual training into blended learning solutions. The next chapter examines how to use asynchronous (on-demand) and synchronous (live online) learning as a combined training solution.

Pro Tips for Evaluating Impact Skills

TIP 75

Create an evaluation plan in the design phase of your project, and evaluate learning based on the knowledge and performance objectives identified at the outset.

TIP 76

Determine which levels of evaluation are most important to your customers, employees, their managers, and stakeholders to measure the effectiveness of your virtual training programs.

TIP 77

Communicate to managers the important role they play in evaluating and reinforcing what participants learn in training.

TIP 78

Partner with managers, transferring ownership to them for observing and measuring application in the workplace while you support them with aids or evaluation rubrics.

TIP 79

Partner with stakeholders or an internal or external professional evaluator to collect organization-wide impact data when appropriate.

TIP 80

Take risks in virtual training, experiment, and continue to innovate in new ways.

TIP 81

Invite global experts to engage learners and address their questions in real time during your live online programs.

TIP 82

Be open to opportunities for conversational AI to assist with facilitation and producer roles in virtual training.

TIP 83

Keep abreast of where the virtual training industry is headed next.
