15

Impact: Making It Happen

Graham Johnston

Learning functions have different roles in their respective organizations, based on how they are viewed and used to support the business strategy and how they drive the overall organizational culture of learning. But they all share a common objective to be effective, value-added, and impactful in building capabilities that drive improved performance for the organization. Measurement is how we know if we’ve achieved that, but it is just a means to that end. We measure learning to understand what worked and what didn’t, but more important, to direct our efforts to improve so we can maximize our impact. In this respect, everyone in the learning function is responsible for driving impact and should—as performance consultants to the business—have the mindset and capabilities to do so.

The goal is to implement and maintain a learning impact strategy, not just a learning measurement strategy, and there are two levels at which to approach this:

• determining the value and impact that the learning function provides to the business

• showing the effectiveness of individual learning solutions.

For both, impact is achieved by defining outcomes up front and then using those outcomes to shape planning and design, the measurement approach, and continuous improvement efforts. Let’s explore each in turn.

The Value and Impact of the Learning Function

The learning function at a large professional services firm uses a four-part construct to define its macro-level value and impact to the business, provide a basis for strategic planning, and shape how the function assesses and reports its performance (Figure 15-1).

Figure 15-1. The Learning Function’s Value and Impact Construct

While designed for the professional services environment, this construct is applicable for learning functions of any size and scope and across all industries. It includes the following components that define the goals for a high-performing learning function:

Business alignment and impact. Develop and deploy solutions that address business priorities, build required capabilities, and enable business performance.

Effectiveness. Enable learning, provide content that is aligned with role requirements, and improve individual, team, and organizational performance.

Efficiency. Optimize resources and manage budget, schedule, and vendor spending.

Innovation. Incorporate leading practices and creative problem solving to address development issues, needs, and challenges.

Defining Value and Impact

This construct should be used first to guide our planning efforts by defining the outcomes we want to achieve, and then as a basis for measurement. As we develop the learning function’s annual strategy and plan, we should identify goals against each of the four components by asking:

• What business priorities should we support or enable through learning? What does the business need its people to know and be able to do?

• How can we best provide learning and development solutions that address role-focused learning needs and help people perform better?

• How can we operate efficiently and do more with less?

• How can we creatively address problems and opportunities for how we design, develop, or deploy learning solutions?

With these questions answered, we have defined the goals and outcomes that serve as our north star for implementing our strategy and plan, and determining how we measure and articulate success.

Measuring Value and Impact

So how do we measure success for this macro-level impact? Let’s start with a few guiding principles. For one, don’t let the initial perception that an outcome or goal cannot be measured convince you not to pursue it in the first place. For example, if a business priority dictates that part of the workforce needs a certain set of capabilities, we wouldn’t decide not to build those capabilities just because we weren’t sure if or how we would know we were successful, right?

And that ties to the other principle, which is to cast a wide net when determining the ways success can be measured. We all tend to seek out the data—or more specifically, the numbers—but quantitative measures may not be available, and they’re not always representative of a given component’s success. Qualitative proof points, anecdotes, and testimonials can be just as telling as the numbers, if not more so. Think back to the example about the business-driven capability need. Sure, we can quantify the number of people who completed associated learning programs, and maybe we can even capture assessment data where it’s collected. But does that tell us whether the capability was really built, or whether people performed better as a result? What if we asked stakeholders how they were seeing people better apply this capability? In the professional services firm context, we might solicit feedback from project team leaders and clients on how project team members have gotten better at diagnosing client needs and opportunities. That is certainly reflective of our success in supporting that business objective.

What other measures can we draw on? For example, what can we look at to know whether we are:

Aligned with and affecting the business? Business development or sales measures, operational performance, and talent-focused measures, such as retention and individual and team engagement.

An effective learning function? Aggregate learning program evaluation data, team leader feedback, and individual and team performance.

An efficient learning function? Cost savings and cost avoidance, our responsiveness to the business, and our design, development, and deployment timeframes.

An innovative learning function? Creative problem solving; methods, tools, and solutions that enable or accelerate learning; and simply trying things out, whether they succeed or not.

So when and how often should we measure the impact of the learning function? It depends on the organization, but a best practice is to capture impact against this four-part construct at least bimonthly. These are, after all, the outcomes the learning function seeks to achieve, so logically we should take a regular pulse of how we are performing against them. With updated, telling, and actionable data on hand, we can proactively and responsively articulate impact to the business. This cadence also allows us to regularly validate progress or identify and respond to any necessary course corrections along the way.

Articulating Value and Impact

We’ve covered how to define the impact of the learning function and how to measure it. But how should we demonstrate our value and impact to the business? We already have a leg up because we’re focusing on what’s most important to them—their business priorities and the capabilities the workforce needs to perform at a high level. Effectiveness, efficiency, and innovation may not carry the same weight as business alignment, but they still reflect things that matter to the business.

You might be thinking that while this depiction of the learning function’s impact makes sense, your stakeholders in the business aren’t asking for it. A professional services firm faced a similar situation—historically it had reported on learning activity and output, but not necessarily impact. The business was used to seeing, and expected, information on budget and spend, learning solutions developed and deployed, learning hours offered, participants and completions, and aggregate learning program evaluation data. But as the organization’s learning impact strategy evolved, that information was complemented by qualitative and quantitative data showing accomplishments against the impact construct: business alignment and impact, effectiveness, efficiency, and innovation. Business stakeholders hadn’t previously sought this out, but as it became part of the regular conversation around the learning function’s performance, they developed an immediate appreciation for it and began to view learning in a different light—as an enabler of organizational performance. This level of preparation, proactiveness, and responsiveness also kept the learning function out of a defensive posture in which it would otherwise have been asked to show ROI or have its investments challenged.

Dashboards, scorecards, and other tools can be used to capture and report on the learning function’s impact, but what’s most important is that value- and impact-focused conversations are occurring with stakeholders—no matter what the communication vehicle may be. The professional services firm developed a quarterly dashboard that captured quantitative and qualitative data against the four components of the impact construct and shared it with key business and talent leaders. The learning leaders also tracked their own accomplishments against the construct in a shared document that team members updated with qualitative and quantitative proof points at the end of every month. This meant current information was always available to draw from whenever the need or opportunity arose to speak to how the learning function served as a performance enabler and a key engine for building the critical capabilities needed to achieve business outcomes.
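To make that tracking concrete, here is a minimal sketch of what such a running proof-point log might look like if kept in a lightweight script rather than a shared document. The component keys follow the four-part construct, but every entry shown is a hypothetical placeholder, not data from the firm described above.

```python
# Hypothetical sketch of a monthly proof-point log kept against the four-part
# construct. All entries are illustrative placeholders, not real accomplishments.
impact_log = {
    "business_alignment_and_impact": [
        "Deployed a client-diagnosis curriculum supporting this year's growth priority",
    ],
    "effectiveness": [
        "Project team leaders report stronger needs-diagnosis skills on recent engagements",
    ],
    "efficiency": [
        "Avoided new vendor spend by repurposing an existing program",
    ],
    "innovation": [
        "Piloted a cohort-based virtual delivery format for the sales curriculum",
    ],
}

def add_proof_point(component: str, note: str) -> None:
    """Append a qualitative or quantitative proof point under one component."""
    impact_log.setdefault(component, []).append(note)

# Example of a monthly update by a team member.
add_proof_point("effectiveness", "Aggregate evaluations show high applicability to role")
```

Whether this lives in a script, a spreadsheet, or a shared document matters far less than the discipline of updating it regularly so the proof points are on hand when the conversation happens.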

Learning functions are best positioned to drive impact if they plan for it up front by defining outcomes with the business, execute and maintain their strategy with those outcomes in mind, and measure and demonstrate their impact against them. With these insights into how to optimize the aggregate performance of the learning function, let’s focus now on the effectiveness of individual learning solutions.

Learning Solution Effectiveness

The effectiveness of learning solutions of any type—formal learning, on-the-job development, curated content, mentors, networks, and so forth—is defined by the learning gained, the applicability of the content or experience to one’s role, and, most important, its influence on performance. Similar to the value and impact of the learning function, the effectiveness of individual learning solutions rests on the up-front definition of outcomes and how those shape the design, measurement, and continuous improvement.

Defining Effectiveness

As learning professionals, we’re all accustomed to being asked to “develop a training course” before any discussion has occurred around drivers and needs, and what type of learning solution—if any—is most appropriate. An up-front definition of outcomes—how the information will help the business or what learners should be able to do, for example—by both the learning team and business stakeholders is important for steering the learning content in the right direction and for achieving effectiveness. There are three types of outcomes to define at the outset and get alignment on: business objectives, performance objectives, and the learner experience (Figure 15-2).

Figure 15-2. Three Types of Learning Effectiveness Outcomes

Business objectives reflect the business needs or priorities that the learning solutions are intended to support and could include:

Service and solution delivery. An organization needs to improve or increase a service it provides to its customers, requiring that its people develop and apply a specific capability.

Sales and business development. An organization seeks to grow business in a market segment and needs to strengthen knowledge of services and solutions, as well as customer relationship and sales skills.

Operational performance. An organization must demonstrate regulatory or compliance requirements, achieve process enhancements, or improve cost and revenue management.

Talent. An organization wants to improve retention of its new hires beyond their first year and looks to strengthen their engagement to do so.

Typically, at least one of these business objectives is driving the learning needs, and it’s important to confirm that objective so it can be referenced throughout the solution’s life cycle.

Most business objectives have a corresponding capability or set of capabilities that define the second type of outcome—performance objectives. More and more, performance objectives (what the learner needs to be able to do) are taking the place of learning objectives (what the learner needs to know)—or at least complementing them—because just learning something is not enough and performance is what matters most. When it comes to marketing a solution to a learner, sharing how that solution will help their performance will be more resonant and compelling than simply telling them what they will learn. There are typically multiple performance objectives for a given learning need, which helps define content components and the delivery structure.

The last type of outcome—which isn’t often applicable to learning needs—is around the learner’s experience and emotions, or how the learner should feel. Take, for example, the retention objective and the need to improve employee engagement discussed earlier in this chapter. In that instance, campus hires may be more engaged because they feel prepared to perform in their new role, connected to the organization, energized and excited to perform, or inspired to make an impact. These could be the desired outcomes for an onboarding experience, where the primary outcome is new hire engagement and the secondary outcome is improved retention.

Defining these outcomes provides a foundation that shapes the learning solution design, measurement, and refinement—all toward achieving effectiveness.

Designing for Effectiveness

The design of learning solutions is a separate discussion, but the relevant point here is how the defined outcomes should inform decisions in the design process. When selecting learning solutions, determining what content to include, or deciding how content should be delivered and what practice, application, and feedback should be incorporated, we should regularly ask ourselves whether and how the design decisions align with the business objectives, performance objectives, and learner experience and emotions. What we want to avoid is omitting design elements that are needed to meet the outcomes or, more commonly, including design elements that don’t serve them. Keeping the outcomes we defined up front in view ensures we don’t go down the path of building a learning solution that doesn’t do what we need it to.

Measuring Effectiveness

A risk in learning effectiveness measurement is that we work from data that are available but aren’t actually relevant to our outcomes, resulting in unnecessary work and a mischaracterization of solution effectiveness. Most of us have had experiences where we’ve captured and reported on data that aren’t particularly telling about what we are trying to achieve, such as tracking down and highlighting retention data when reduced attrition wasn’t actually a goal of the development experience, or proficiency data for a capability the learning program wasn’t intended to build. This is where defining success in the learning solution outcomes is so important, because it focuses our measurement efforts on data that validate whether the outcomes were achieved, as well as where and how they weren’t, so we know where to direct further analysis or refinement.

When defining the outcomes for a given solution or experience, a best practice is to identify the measures, methods, and sources for each business objective, performance objective, and learner experience or emotion. This directs the measurement approach at the outset and helps capture the right qualitative and quantitative data. Let’s examine how best to measure these three categories of outcomes.

By definition, measuring business objectives means capturing business data rather than learning data. This can be challenging because we don’t own that data and it may be difficult to access. The reality, however, is that we need to point to that data to demonstrate how we have enabled the business objectives, even if our contribution is indirect. Take, for example, the business objective of increased sales in a customer segment. In defining that outcome, the business may target a 20 percent increase in sales. We then set out to develop and deploy a series of programs to build service or solution knowledge and customer relationship and sales skills. After that curriculum has been developed and delivered, and after the audience has had a chance to apply it and perform, we would look to the business to see whether that 20 percent increase was realized. While the learning function is not solely responsible for meeting that target, that business result is still the best indicator of our influence. Therefore, it’s important to set these expectations with the organization when agreeing on business objectives, and to establish access to the business data we need to demonstrate our effectiveness.
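As a minimal illustration of that check, using invented figures rather than anything from the source, the arithmetic is simple once we have access to the segment’s sales data:

```python
# Hypothetical figures: segment sales before the curriculum and after the
# audience has had time to apply what they learned.
baseline_segment_sales = 5_000_000
post_program_sales = 6_150_000
target_lift = 0.20  # the 20 percent increase agreed on with the business

actual_lift = (post_program_sales - baseline_segment_sales) / baseline_segment_sales
status = "met" if actual_lift >= target_lift else "not met"
print(f"Actual lift: {actual_lift:.0%} against a {target_lift:.0%} target ({status})")
```

The hard part, of course, is not the calculation but agreeing on the target up front and securing access to the sales data so the comparison can actually be made.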

The shift from learning objectives to performance objectives presents more of a challenge because it’s easier to measure whether someone has learned something than whether they performed better. But the latter is still the right outcome to focus on, and there are different ways we can determine how performance has improved. Some performance objectives are easier to measure than others; for example, if we’re looking for an increased ability to produce widgets, we can point to the widget production rate. But if we’re talking about a capability like critical thinking, we need to consider the different ways a learner has successfully applied that capability. The first and most common method is to get the learner’s perspective, often through evaluations conducted upon program completion that ask what the participant learned, whether the content is applicable to their role, and whether it will help improve performance. We can then ask those content and performance questions again 30 to 90 days after program completion to gauge what a learner thought was going to happen versus what actually did. However, just because learners say their performance improved doesn’t necessarily mean that it did, and that’s where we can link to other means of assessing performance. Feedback from team leaders, supervisors, or even customers on a given capability can be very telling; for example, hearing a manager say, “Since he attended that program, I’ve really observed an increase in Steve’s critical thinking skills, and here’s how,” is powerful. And where it may be difficult to isolate and diagnose a specific capability or performance objective, you can bundle them, asking, “Since attending that program, has the team become much better at identifying, assessing, and addressing the client’s issues, needs, and opportunities?”

When it comes to the learner’s experiences and emotions, the best way to assess them is to ask the learners. Let’s go back to the campus hire engagement example. Here, the solution might be a six- to nine-month holistic onboarding experience designed to make new hires feel prepared to perform, connected to the organization, energized and excited, and inspired to make an impact. Campus hires could be asked how they feel against those four elements at multiple points throughout the experience, with the goal being that those feelings grow stronger each time.

Refining for Effectiveness

We’re not doing our job as performance consultants if we simply measure and report on solution effectiveness and stop there. Very few learning solutions are perfect, and the real value to the business comes when we identify and address the parts of our learning solutions that weren’t as effective as we had hoped. For example, what if evaluation data showed that participants learned a lot from the program and the content was applicable to their role, but there wasn’t a noticeable improvement in their performance? Sharing the good news and the not-so-good news with stakeholders and making a commitment to improve go a long way toward being seen as a trusted business advisor.

The first thing we need to do is make measurement and continuous improvement core components of the learning solution life cycle, as well as an expectation for all learning professionals as part of the impact mindset. We also need to collect telling and actionable data, and there’s an art and a science to this. The science lies in defining outcomes for the learning solution and the measures, methods, and data sources for each, and in determining what data we need or, more important, what data we do not need. The art comes in how we solicit that data so it tells us what we need to know and is targeted enough to act on. Evaluation, survey, interview, or focus group questions are the primary means for this. These guiding principles can help drive response rates and overall respondent engagement:

• Focus questions on the three primary components of effectiveness as they pertain to the performance objectives—learning gained, applicability of the content to the role, and impact on performance.

• Don’t ask questions seeking data you don’t need or that you have already gathered from other sources.

• Don’t include multiple questions that are or even appear to be similar.

• On evaluations or surveys, don’t include “double-barreled” questions, which have more than one statement for learners to respond to in a single question.

• Pose questions as absolute statements so respondents can more easily indicate their level of agreement. (For example, “What I learned in this program is essential to my work.”)

• Limit the number of questions asked and use plain language and simple sentence structure so the intent is clear.

These guidelines are intended to keep respondents from disengaging, answering questions hastily, or not answering them at all, which maximizes the quantity and quality of data we can analyze and act upon to improve learning solutions. A brief sketch of what such an evaluation might look like follows.
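This sketch is hypothetical (the wording of the first and third items is invented for illustration, and the second repeats the example given above), but it shows how the guidelines reduce an evaluation to a handful of single-idea, absolute statements rated on one agreement scale.

```python
# Hypothetical post-program evaluation following the guidelines above:
# few items, plain language, one idea per statement, a single agreement scale.
AGREEMENT_SCALE = ["Strongly disagree", "Disagree", "Neutral", "Agree", "Strongly agree"]

evaluation_items = [
    "I gained new knowledge or skills from this program.",       # learning gained
    "What I learned in this program is essential to my work.",   # applicability to role
    "Applying what I learned will improve my performance.",      # impact on performance
]

for item in evaluation_items:
    print(f"{item}  [{' / '.join(AGREEMENT_SCALE)}]")
```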

What could a continuous improvement process for refining a solution’s structure, content, and delivery look like? For example, if we find that business results haven’t changed in the way the business had hoped, that could lead to a conversation about the role of the learning solution and whether something could be or needs to be changed. For performance objectives and learner experiences and emotions, it is easier to isolate data showing whether those were achieved and, if not, why. For example, if program evaluation data show that learning occurred but respondents didn’t think the content was applicable to their role, that could trigger an examination of whether the program’s target audience was accurate or whether performance expectations were properly understood. If the data showed that learners weren’t effectively applying a given capability, that would direct us to assess the associated content and how we’re delivering it, including methods for practice, application, and feedback.

Summary

As a learning function, our goals should be the same as the business’s, and we should speak the business’s language when we engage with them. Their success should be our success, because that’s how we ensure we’re focusing on the right things. We exist not just to enable learning, but to drive individual, team, and organizational performance. That mindset is what makes us valuable and impactful as performance consultants.

The conversation is no longer just about measurement—it’s about impact and how we come to a shared definition of it up front, use it to guide our planning and design, and then let it inform measurement and our continuous improvement efforts. In creating and maintaining a learning impact strategy, we bring value and impact to the business at the macro level as a high-performing learning function, while also delivering effective learning solutions that address specific outcomes. All learning professionals should adopt this impact mindset, and everyone should be held accountable for impact and responsible for analysis and continuous improvement in addition to design and development.

Key Takeaways

• Change the lens for how we define, drive, and demonstrate the value and impact of the learning function, to be better positioned as trusted advisors and performance enablers for the business.

• Speak the language of the business with the business and make their definition of success our definition of success.

• Define outcomes—including business and performance objectives—up front to inform and target design, measurement, and continuous improvement.

• Focus on telling, actionable data that align with those outcomes, and give equal attention to qualitative data and the power of anecdotes and observations.

• To whatever extent there are gaps and needs, there is a tremendous opportunity to enhance how the learning function is viewed and used and to strengthen its role and brand.

Questions for Reflection and Further Action

1. What does your learning function’s impact strategy look like?

2. To what extent do your business stakeholders view you as performance enablers?

3. What would they say if asked about your accomplishments and how you helped them achieve their objectives?

4. What is the level of ownership and accountability for driving impact across your entire learning function?
