Chapter 12

The Data Side of the Story

Power Your Learning System With Continuous Measurement

It’s time to talk about measurement, the biggest challenge facing L&D today. We’re going to cover some basic tips for improving data practices, including:

• The four reasons L&D needs to invest in measurement

• What L&D can learn about data from marketing

• How to find and access the right data

• How to shift from programmatic to continuous measurement

L&D has a measurement problem. Of course, I’m not talking about your team (wink wink). I’m talking about all those other L&D teams trying to figure out how to move beyond butts in seats as an indicator of organizational impact.

Measurement is the biggest capability gap in the L&D profession. According to Brandon Hall’s 2019 Learning Strategy Survey, 69 percent of companies say the inability to measure learning’s impact represents a challenge to achieving critical learning outcomes. This data fluency gap extends beyond L&D and throughout the HR function. Just 23 percent of HR professionals are comfortable using analytics without guidance, according to research from Insight222.

The good news is that most L&D teams know they have a data problem. How many times have you heard a peer bemoan the inability to get past Level 2 of The Kirkpatrick Model (Figure 12-1)?

Figure 12-1. The Kirkpatrick Model vs. Reality

And how many alternative frameworks have been introduced to address this issue over the past 50 years? Kaufman. Anderson. Brinkerhoff. Van Pelt. Phillips. Thalheimer. L&D has so many measurement and evaluation frameworks from which to choose that you probably aren’t even sure which of those names I made up.

The bad news is that, despite all the models, frameworks, sessions, articles, and webinars, L&D still struggles with data. Can we solve this decades-old problem by the end of this chapter? Absolutely not. Instead, I hope the MLE Framework helps you shift your organization’s perspective on the relationship between learning and data so we can finally begin to fix L&D measurement.

4 Reasons to Solve the Data Problem

Measurement isn’t going to fix itself. It’s going to require strategic effort from L&D and its partners. Unfortunately, your team already has a lot going on, and measurement strategy may not seem like an item to address ASAP compared with timely operational demands and regulatory requirements. This is why, according to the Corporate Research Forum (2021), 69 percent of organizations with more than 10,000 employees have entire teams devoted to people analytics.

Becoming data-enabled requires investment. This may sound daunting if your L&D function is already doing its best to get by with limited resources. However, improving your data practices is well worth the time, capacity, and effort required, especially if you hope to grow the L&D function over the long term.

If you firmly believe L&D needs to get better with data but constantly get pushback from your peers, managers, or partners, here are four justifications you can use to influence their perspective.

1. Good Measurement Makes L&D Proactive

How does L&D know when a solution is needed?

L&D is a reactive business function. We wait for stakeholders to request our help as a shared service. Our limited capacity is prioritized based on the next big initiative, such as a new product release or regulatory change. This may or may not have anything to do with the challenges employees face on the job every day. If L&D does act based on a timely performance issue, we usually don’t get involved until after the problem has negatively impacted the organization.

L&D must become proactive to deliver the greatest possible value to the business. You can’t wait for people to get hurt (literally and metaphorically) before you offer up right-fit solutions. Instead, L&D must leverage data to identify prevailing performance trends and proactively step in to help people who are struggling. But to take this step forward with our measurement practices, you need more than learning data. L&D must recognize that learning data is part of the greater business data puzzle. You still need to know how learning solutions are being leveraged, but you must also connect these initiatives to on-the-job results.

2. Good Measurement Validates Impact

What’s the point of doing something if you don’t know if it works?

My favorite smile sheet question of all time is “Do you intend to use what you’ve learned on the job?” It embodies everything that’s wrong with learning measurement. Intent does not matter, especially during a training event. Results matter. Good or bad, they often occur well after training is completed.

Every company function is expected to deliver measurable results. Why should L&D be held to a different standard than sales, marketing, or product management? L&D must connect the dots between learning solutions and performance outcomes. Otherwise, you’ll always have a hard time justifying added investment in skill development. Executives don’t care how many hours of training were delivered per employee last year. They need validation that their L&D investments are fostering a capable workforce and value-added job performance.

3. Good Measurement Unlocks the Future

How can you apply advanced practices without advanced data?

Imagine a future in which you can provide personalized support for every employee, regardless of the size of your audience or L&D team. Employees receive automated coaching and content recommendations via chat based on their current knowledge and skill gaps. When they log into your learning platform, the experience adapts to their personal needs and preferences. As they grow their skills, employees are automatically matched with open projects and positions. L&D can track the impact of its learning solutions in real time and make proactive adjustments to maximize their business value.

This isn’t the future. All these capabilities exist right now thanks in large part to artificial intelligence (AI) and robotic process automation (RPA). However, you can’t offer this version of a workplace learning experience without the data needed to power the technology. As long as your L&D function struggles with measurement, you’ll set an artificial limit on how far you can advance your practices.

4. Good Measurement Is Critical to the Skills Economy

How do you know which skills your organization does or doesn’t possess?

Skill is now considered the fundamental unit of the employee experience. Every role can be broken down into its foundational skill requirements. Skills are the reason people get hired, trained, and assigned. They are the key enabler of performance and therefore have become the true currency of the workplace. Unfortunately, the skills economy crashes when you can’t figure out which skills people do or don’t have across your organization.

Traditional L&D measurement models focus on timely knowledge acquisition, not sustained performance capability. Skills take time to develop. They degrade when not maintained. They can only be validated through real-world application. Therefore, to adopt a skills-based approach, L&D must first improve its measurement strategy. Otherwise, you won’t be able to keep up with the evolving skills needs of your business.

Why L&D Measurement Really Fails

Traditional learning measurement tries to shove a square peg into a round hole. It doesn’t matter which model you subscribe to. The issue runs much deeper. This is why L&D professionals cited so many different reasons when Brandon Hall Group asked about their lack of measurement capability:

•  We don’t have time or staff (47 percent)

•  We don’t have proper metrics (41 percent)

•  We don’t have the technology to support it (39 percent)

•  It’s too difficult to link learning to outcomes (33 percent)

•  It’s too difficult to assess (29 percent)

•  We don’t see a need (4 percent)

I’ve run into every one of these measurement challenges during my career. My L&D teams were often lucky just to get management buy-in to deliver the training program. Asking for support to conduct additional measurement felt like a fairy tale. In some cases, stakeholders had the data we needed but were unwilling to share it with us. They didn’t see the need or trust us to properly handle their sensitive data when the only metrics we had shared in the past were benign items like course completions and test scores.

L&D has struggled with data because our overall approach to learning does not match the realities of modern workplace performance. There’s only so much data you can gather from a structured training program that takes place over a limited period of time. You can track who completed the course, which parts of the content they touched, and how they felt about the experience. You may be able to track knowledge change using pre- and post-assessments, but that doesn’t mean knowledge is retained or applied during the days, weeks, and months after training. Learning isn’t constant. It ebbs and flows over time based on a variety of factors (Figure 12-2).

Figure 12-2. How People Think Learning Works vs. How It Really Works

L&D’s programmatic infrastructure inherently limits our measurement capability. Our decades-old models made sense when training primarily took place in classrooms and skill requirements were more stable. Today, they simply don’t generate the data needed to keep pace with changing workplace needs. Going beyond basic metrics to reach Levels 3 and 4 of the Kirkpatrick Model requires time and resources L&D simply doesn’t have.

A Lesson From Marketing

L&D can learn a lot from our friends in marketing, including how to build engaging content and how to design a campaign that facilitates behavior change. However, the best lesson we can take from our friends down the metaphorical hall is how to become a data-enabled function. After all, marketers track more than your ad consumption. They use everything they can learn about you to serve up targeted advertising and influence your buying decisions. Marketing knows how to measure impact, but this wasn’t always the case.

In the 1990s and early 2000s, marketing professionals found themselves in a spot similar to where L&D is today. Before the internet became ubiquitous, marketers leaned on traditional tactics, such as direct mail, print ads, and radio and television spots. Oh . . . and billboards. Marketers applied a limited understanding of their audience to attract as many eyes to their content as possible. Then, they did their best to correlate changes in business results with marketing activities. Could they confirm that driving past a billboard caused you to purchase a new breakfast cereal? No. They just knew sales increased in the region after the billboards went up. Correlation is not causation, but it’s the best they could do with the tools they had.

Fast forward 20 years and marketing is the most data-enabled function in most businesses. Rather than be bound by the limits of antiquated tactics, marketing innovated its practices alongside technology. The internet led to the rise of digital marketing. Mobile and social technology provided even more access to quality data. Sure, marketers still use billboards, but they’re now a small piece of a strategic toolset.

It’s time for L&D to catch up with marketing by expanding our ecosystem to include more data-rich tactics. The MLE Framework will not solve all your measurement problems, but it will challenge you to rethink your data practices. As you shift away from implementing courses as your default learning solution, you’ll no longer be able to rely on antiquated metrics like the average number of training hours per employee to validate L&D impact. Instead, you must leverage the MLE Framework to tell a more holistic story through the collection, analysis, and application of a wider range of learning and performance metrics.

The Principles of Good Data

L&D needs more data, but you also need the right data. You need data that helps you understand the needs of your employees and how their performance does (or does not) change as a result of learning solutions.

As you begin to identify the metrics that really matter within your ecosystem and how you will capture them, consider the five principles of good data:

•  Volume: The appropriate amount of data must be collected to make meaningful observations and identify persistent trends.

•  Velocity: Data must be gathered and analyzed at the speed required to inform decision making.

•  Variety: Different types of data are needed to paint a holistic picture.

•  Veracity: Data must be trustworthy and free of bias and disruptive outliers.

•  Value: Data must be selected for inclusion based on its proven importance.

L&D must build a robust data infrastructure based on these five principles to overcome historic measurement gaps and power a robust learning and performance ecosystem.
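These principles aren’t just abstractions; you can check them in code. Here’s a minimal sketch of a veracity screen that flags disruptive outliers, assuming your metrics land in a pandas DataFrame (the data, column names, and threshold are hypothetical):

import pandas as pd

def veracity_screen(df, metric, z_threshold=3.0):
    # Flag rows whose metric value sits more than z_threshold standard
    # deviations from the mean -- a common outlier heuristic.
    z_scores = (df[metric] - df[metric].mean()) / df[metric].std()
    return df[z_scores.abs() > z_threshold]

# Hypothetical weekly revenue figures; one store's number looks suspicious.
data = pd.DataFrame({
    "store": list("ABCDEFGHIJKL"),
    "weekly_revenue": [10200, 9800, 10500, 10100, 9900, 10300,
                       10400, 9700, 10050, 10250, 9950, 99999],
})

print(veracity_screen(data, "weekly_revenue"))  # flags the 99999 entry

Flagged records deserve investigation, not automatic deletion; an extreme value may be a data-entry error or a genuine signal.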

Finding the Right Data

Figuring out where to start is often the biggest obstacle when it comes to fixing L&D measurement. Like everything else we’ve discussed in this book, it will take time to evolve your data practices. You may have a grand vision for how data should inform your modern learning strategy, but you’ll likely have to implement it piece by piece while continuing to deal with timely stakeholder requests.

Improving measurement begins with identifying the types of data you’ll need to power your learning ecosystem. Specific metrics will vary by organization and use case. That said, most high-value data fits within four categories: operational, people, performance, and learning.

Operational Data

How do you know there’s a performance problem in the first place?

Start by exploring the same operational data stakeholders use to measure performance. This may include sales revenue, basket size, and net promoter scores in a retail environment, or first-call resolution, average hold time, and quality scores in a contact center. This data is essential to validating the impact of L&D solutions. If stakeholders are unable (or unwilling) to share, your measurement capabilities will be limited as a result.

Operational data is sometimes referred to as business data by L&D professionals. This is a misnomer that unnecessarily separates the data collected and applied by L&D from that used by the rest of the organization. Learning data should be viewed as a subset of business data instead of an entirely different category.
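To make this concrete, here’s a minimal sketch of how learning data and operational data might be joined for a simple before-and-after comparison. The systems, column names, and figures are hypothetical, and, as the marketing story earlier in this chapter reminds us, a comparison like this shows correlation, not causation:

import pandas as pd

# Hypothetical extracts: training completions from the LMS and
# weekly revenue from an operational reporting system.
completions = pd.DataFrame({
    "store": ["A", "B", "C", "D"],
    "completed_week": [3, 5, 4, 6],  # week each store finished training
})
sales = pd.DataFrame({
    "store": ["A", "A", "B", "B", "C", "C", "D", "D"],
    "week": [2, 6, 4, 8, 3, 7, 5, 9],
    "revenue": [9800, 10900, 10100, 11200, 9900, 10800, 10050, 11100],
})

# Label each sales record as before or after training, then compare
# average revenue across the two periods.
merged = sales.merge(completions, on="store")
merged["period"] = merged["week"].lt(merged["completed_week"]).map(
    {True: "before", False: "after"}
)
print(merged.groupby("period")["revenue"].mean())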

People Data

Who is L&D trying to help solve this problem?

This is the easiest data for L&D to access. Organizations have lots of employee data. This includes demographics, roles, team structures, locations, tenure, and more. You probably already partner with HR to use this data to provision LMS users and assign training content. Expanding your use of people data will help you better understand your audience and target solutions to the right individuals and groups.

Performance Data

What is happening on the job?

Behavioral data is typically included in L&D measurement models as an indicator of real-world knowledge transfer. However, it’s usually collected outside L&D and therefore requires support from operational partners. As a result, it’s hard to access and even harder to collect consistently. Some operations have built-in mechanisms for capturing behavioral data. For example, safety-critical environments such as warehouses and manufacturing facilities employ auditors to record performance data and mitigate business risk.

Learning Data

How are employee knowledge and skill changing?

The learning data available to you is limited by the tactics you deploy. If you rely primarily on SCORM-based courses, you’re likely tracking metrics such as seat times, completions, test scores, and survey feedback. More advanced data standards, such as the Experience API (xAPI), help L&D collect more detailed user data, including how people engage with learning activities. The MLE Framework enables you to expand the concept of learning data beyond structured training activities. Each framework layer includes a range of potential data points. Here are a few examples by layer:

•  Shared knowledge. Intranet sites and wikis provide metrics similar to those of modern websites, including visits, unique visitors, pageviews, shares, time spent, locations, devices, and search terms.

•  Performance support. Chat tools and social platforms offer metrics such as engagement volume, popular days and times, popular topics, and user sentiment.

•  Reinforcement. Simulations and scenario-based questions allow you to track changes in knowledge and skill over time through application in risk-free environments.

•  Coaching. Coaching support tools allow you to track engagement, on-the-job behavior, and employee feedback.

•  Pull training. On-demand learning platforms help you understand areas of interest beyond required training, including search results, content consumption, content ratings, and content shares.

•  Push training. Whether it’s online or on the job, required training allows you to track employee progress, completion, immediate knowledge and skill changes, and participant feedback.

This is far from a comprehensive list of L&D data requirements, but hopefully you see how applying data from different categories will help you proactively determine the need for and impact of L&D solutions.
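To ground the xAPI mention above, here’s a minimal sketch of what recording a single learning event could look like. The statement structure (actor, verb, object, result) follows the xAPI specification, but the learner, activity, LRS endpoint, and credentials are all placeholders:

import requests

# A minimal xAPI statement: who (actor) did what (verb) to what (object),
# plus an optional result. All names and IDs below are illustrative.
statement = {
    "actor": {
        "objectType": "Agent",
        "name": "Pat Example",
        "mbox": "mailto:pat@example.com",
    },
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "objectType": "Activity",
        "id": "https://example.com/activities/coaching-scenario-1",
        "definition": {"name": {"en-US": "Coaching Scenario 1"}},
    },
    "result": {"success": True, "score": {"scaled": 0.85}},
}

# Send the statement to a Learning Record Store (LRS); the URL and
# credentials are placeholders for your own LRS configuration.
response = requests.post(
    "https://lrs.example.com/xapi/statements",
    json=statement,
    headers={"X-Experience-API-Version": "1.0.3"},
    auth=("lrs_key", "lrs_secret"),
)
response.raise_for_status()

Because every statement shares this shape, data from chat tools, simulations, coaching apps, and courses can land in the same record store and be analyzed together.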

Shift to Continuous Measurement

The MLE Framework establishes consistent channels through which L&D can rapidly deliver right-fit learning and support solutions. This concept also applies to measurement. The tactics within each layer provide continuous data collection and application opportunities (Figure 12-3). This transforms measurement into a repeatable, scalable process. Rather than building measurement tactics into every training program, L&D can leverage the same tactics over and over again, regardless of the topic or solution.

Figure 12-3. Continuous Application and Measurement Cycle

The MLE Framework’s approach to continuous learning measurement is grounded in L&D’s understanding of business priorities and employee personas. While specific metrics and tactics vary by organization, this approach helps L&D capture a critical set of insights:

•  Engagement. How are people engaging with learning and support resources?

•  Learning. How is people’s knowledge changing over time?

•  Application. How are people’s behaviors changing on the job?

•  Outcomes. How are business results and performance outcomes changing?

When captured continuously, these insights will help you connect the dots and measure the impact of learning solutions on business results. You can also apply advanced analytical tools, such as machine learning models, to this data to identify trends and proactively adjust your solutions:

•  Prediction. Is the organization projected to achieve key business goals based on current performance trends?

•  Adaptation. How can L&D modify its strategy to ensure optimal results?

A continuous measurement approach helps L&D leverage data to ask better questions and proactively improve solutions rather than waiting until a training program concludes to find out that it missed the mark.
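As a minimal illustration of the prediction idea (synthetic data, plain numpy, and a deliberately simple linear trend rather than a production-grade model), you might project a performance metric against a business goal like this:

import numpy as np

# Synthetic example: average weekly quality scores after a solution launches.
weeks = np.arange(1, 9)                              # weeks 1 through 8
scores = np.array([72, 74, 73, 76, 78, 77, 80, 81])  # observed metric
goal, target_week = 90, 16                           # goal: 90 by week 16

# Fit a simple linear trend and project it forward to the target week.
slope, intercept = np.polyfit(weeks, scores, deg=1)
projected = slope * target_week + intercept

print(f"Trend: +{slope:.2f} points per week")
print(f"Projected score at week {target_week}: {projected:.1f} (goal: {goal})")
print("On track" if projected >= goal else "Time to adapt the solution")

The adaptation question is then a human one: if the projection falls short, L&D can adjust reinforcement, coaching, or content before the deadline arrives rather than after.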

As you shift from programmatic to continuous measurement and apply data in new ways, you must also keep a few guiding principles in mind:

•  Governance. Data must be collected, analyzed, and applied ethically, responsibly, and transparently. Employees should always have the opportunity to question how data is used to inform their workplace experience. Measurement practices must align with company, industry, and regional standards, such as the General Data Protection Regulation (GDPR) within the European Union.

•  Sentiment. L&D professionals spend a lot of time poking fun at Level 1 evaluations (smile sheets). Survey data may not be as important as behavior observations or business outcomes, but employee sentiment is an important part of a learning ecosystem. L&D must stay connected with its audience and ensure they feel properly supported, even when other metrics point toward positive results.

•  Interoperability. L&D must be able to pull and push data to and from operational systems to power its solutions. For example, L&D must capture skills data so that it can be shared with operational systems such as workforce management and project assignment tools. Therefore, L&D must work with data and technology partners to ensure data interoperability (see the sketch after this list).

•  Reporting. Any L&D measurement approach must check certain boxes. If a regulator requires seat time to be tracked within compliance programs, L&D must ensure this data is collected and reported—even if it’s not perceived to be particularly valuable. All reports should be actionable, not just descriptive, so managers and administrators can leverage L&D insights to make timely, reliable decisions.
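On the interoperability point, here’s a minimal sketch of pushing a validated skill record from the learning ecosystem to a workforce management tool. The endpoint, payload shape, and token are hypothetical; real integrations depend on your vendor’s API and your governance rules:

import requests

# Hypothetical skill record produced by the learning ecosystem.
skill_record = {
    "employee_id": "E12345",
    "skill": "forklift-operation",
    "proficiency": "certified",
    "validated_on": "2024-03-01",
    "source": "coaching-observation",
}

# Push the record to a workforce management system so scheduling and
# project-assignment tools can act on it. URL and token are placeholders.
response = requests.post(
    "https://wfm.example.com/api/skills",
    json=skill_record,
    headers={"Authorization": "Bearer <YOUR_TOKEN>"},
    timeout=10,
)
response.raise_for_status()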

The First Step Toward Improved Measurement

That’s how you fit a square peg into a round hole. You totally change the shape of the peg (learning measurement) so that it seamlessly fits within the hole (workplace reality). We’ve only scratched the surface of data strategy in this chapter, and it probably sounds like a lot of work already. But if L&D wants to keep pace with organizational needs, deliver clear business value, and advance our practices along the way, we must fix our data problem.

Thankfully, you don’t have to figure this out all on your own. Chances are the smartest data people within your organization don’t work in L&D. If you work in a big company with lots of resources, you may be able to hire data specialists to improve your measurement practices as their full-time job. However, many L&D teams don’t have the luxury of bringing in expert talent. This doesn’t mean you have to become a data scientist to get started. Instead, find the really smart data people who work within your company and buy them lunch. Building relationships with your business operations and human resources partners is a great place to start your journey toward improved data practices. Pick their brains to improve your understanding of how the organization uses data and how L&D may be able to take advantage of existing resources. Then, do your homework and improve your foundational data knowledge. This will help you ask better questions and make informed decisions as you apply the MLE Framework to finally fix learning measurement.

By the way, you probably already figured out that there is no such thing as the Van Pelt Learning Measurement Model. Lucy Van Pelt is the friend of Charlie Brown who always pulls the football away at the last moment and offers cheap (and unqualified) psychiatric advice to the neighborhood kids. Last time I checked, she did not grow up to become an L&D professional.

Roger Kaufman, Valerie Anderson, Robert Brinkerhoff, Jack Phillips, and Will Thalheimer, on the other hand, all developed evaluation methods, many of which take inspiration from The Kirkpatrick Model:

•  Kaufman’s Five Levels of Evaluation

•  Anderson’s Value of Learning Model

•  Brinkerhoff’s Success Case Method

•  Phillips’s ROI Methodology

•  Thalheimer’s Learning Transfer Evaluation Model

Check out each of these frameworks and borrow elements to create your own right-fit measurement strategy.
