2

Lies About Learning Research

Doug Lynch

The learning profession has a propensity for promoting inaccuracy and lies. In the simplest terms, a lie is a falsehood, and in that sense, anything that ends up being ultimately untrue could be interpreted as a lie. But in the messy world of social science and corporate learning, such a standard isn’t practical. A more precise definition of a lie implies an intent to deceive. Sadly, because of its messiness and our own lack of conviction in the efficacy of what we do, our space is fraught with well-intentioned lies.

My focus in this chapter is on the lies designed to gloss over the messiness of our world with respect to research on the effectiveness of what we do. At its core, research is simply a systematic investigation. This concept is important to practitioners because there is a false dichotomy common in our world, which, if accepted, becomes a lie—that research and practice are somehow at odds. A related lie is that tension also exists between theory and practice. These two lies—that theory and research are at odds with practice—fall apart if you tease at them. Everything you do is grounded in a theory, and when you do what you do, you are conducting research. You simply can’t get around it.

Here is an example. Physicists work with three major theories of how the physical world behaves: quantum mechanics, Einstein’s general relativity, and Newtonian mechanics. None of these theories is a truth. Each is backed by research—that is, a systematic investigation that produced empirical evidence supporting the theory; each also breaks down outside its domain. Quantum mechanics does a great job of describing very small things like particles, but a poor job with gravity and very large objects like galaxies. Einstein’s general relativity does the opposite. Newton did a great job explaining how gravity works for the likes of you and me (the apple falling from the tree), but Newtonian physics falls apart if you are trying to map the universe or work with particles. All three theories are very practical, and we use them—or products based on them—every day. The key for practitioners, in this case engineers, is to know which theory to apply to frame the questions they wish to explore or the problems they wish to solve. We are, in essence, learning engineers and, as engineers, we need to know the guiding theories.

Not being fluent in theory and research sets us up to either accept or perpetuate lies, and that affects how we interpret the systematic investigation of our work as learning professionals. Each of us conducts and consumes research daily; done carelessly, that consumption and production of research can impede our success. We need to be informed consumers and producers of research on corporate learning. We need to know when vendors are deceiving us. And we need to be careful when we make proclamations about our own success. Why? Because in a knowledge economy, how we develop people is important to our companies, employees, and society.

There is ample research suggesting that learning is a beneficial endeavor, and when done right, it helps not only the company, but also employees and the broader economy. This chapter will encourage you to be thoughtful when you make claims about what you do and to be careful when you evaluate what others say they can do. I will present some examples that illustrate why my claim is a reasonable hypothesis and offer some evidence for you to evaluate. I will end the chapter with some practical tips you can follow as a learning professional and with a call to arms for us as a community. But let me start by urging you not to take anything I state at face value. Investigate it more thoroughly yourself.

Research, Theory, and You

My claim is that you are already a researcher and, perhaps even more uncomfortably, you are a learning theorist. How so? Well, as a learning professional you engineer change and you need theory and research to execute that change. The questions before us in the field are how aware are we that we do these two things and how good are we at theorizing and researching corporate learning. Let’s look at the theories that underpin why we do what we do.

But first, let’s deal with the content—the “what” of the work you do. Let’s stipulate, for the moment, that the content you deliver has been empirically vetted somewhere: whether it is emotional intelligence or strategy, you got the content from someone who posited a theory to explain a behavior and researched it. I worry about the veracity of many of the popular trends that business executives consume, but for our purposes let’s put that aside and focus on the how and why of what we do when we provide learning and development solutions to help our employees grow and companies succeed.

As you identify a business problem and develop the corresponding intervention, you operate at the intersection of two theoretical concepts grounded in the economic literature. The first is human capital theory, which suggests that you can make people more productive if you invest in them through development. Everything you do is predicated on this one theory that has ample evidence supporting it. At the same time, however, some evidence suggests that a second, competing theory (signaling) may also be obfuscating the impact of your development in increasing human capital.

Given that this competing theory also has ample evidence and can get in the way of the efficacy of your development efforts, it is worth taking a moment to discuss it. Signaling basically says that when you decide to develop someone you are simply sending a signal to the market—managers, co-workers, and so on—that this person is different, rather than actually making her more productive. Economists have found evidence supporting this competing theory. For example, two separate studies, one by Thomas J. Kane at Harvard and the other by Alan B. Krueger at Princeton, investigated the value of an Ivy League degree. They both found ample evidence that the market pays a premium for such a pedigree, but when they investigated further and controlled for “endowments” (think of these as individual traits such as intelligence and work ethic), they found that the evidence suggested it was more of a signal than the result of any additional learning taking place at these schools.

These ways of thinking about people development draw on Nobel Prize-winning economics: Gary Becker was recognized for his work on human capital and Michael Spence for his work on signaling, and other laureates, such as James Heckman, have spent careers estimating the returns to investing in people. While it may seem contradictory that these two theories compete, keep in mind F. Scott Fitzgerald’s observation that the test of a first-rate intelligence is the ability to hold two opposed ideas in mind at the same time and still function.

Let me give you a concrete example with which you may be familiar—high-potential programs. Being selected by a company to be part of a high-potential program may do two things, which compete with each other from a design perspective. It may be that these programs actually increase the capabilities of those who attend. But simply being selected to participate also sends a signal—to the company, other employees, and the participants themselves—about their talent separate from what they gain in the program. If all we are doing is sending a signal, making it a rite of passage into upper management, we might design the program very differently than if the only reason we are doing it is to develop some future leader who has some latent potential.

Even though these two theories compete, they both suggest why development may be a viable, and perhaps differentiating, business strategy.

Now we turn to how you implement development, and recognize that it, too, is governed by theories backed by research. With learning, there are also competing theories, and it is important to own your assumptions because they influence your design and evaluation. There are three competing theories of learning, each (like the physics example) with significant empirical evidence to support it: behaviorist, cognitivist, and sociocultural. You should be fluent in these theories in the same way engineers are fluent in physics and the competing theories of gravity.

Every time you design or deliver a program, you are making theoretical assumptions; they are embedded in how you build your program. And if you are evaluating learning, even if it is simply with a smile sheet, you are conducting research. But if you are not clear and explicit, chances are you’re not maximizing the impact of what you do. And that lack of clarity will lead to perpetuating lies about learning.

Setting the Bar for Truth in Our World

Now that we are aware that we are all theorists and researchers, let’s talk about the paradigms we use to empirically evaluate our evidence. The reason to focus our energies here is that this is the crux of how we decide whether to continue doing what we do or to try something different. If something works, we are likely to keep doing it; if it doesn’t, we are likely to try something else. There are two popular approaches to evaluating what works. Neither has significant traction among top researchers as a particularly compelling empirical approach, but both do among corporate learning professionals.

The most common approach to evaluating the efficacy of what we do is the Kirkpatrick model. Conceptually, this model is quite elegant; it gives us a framework that we all grasp. It might be more useful if we didn’t think of the levels as ordinal (four is better than one) but rather as categorical (four is different from one). The problem I have seen, at least anecdotally, is how the model is applied empirically. Because we tend to see the conceptual model as ordinal, we aspire to reach the top level. Consequently, we pay less attention to how we measure at the desired level and how we analyze the data because, as Robert Browning wrote, “a man’s reach should exceed his grasp” (Browning 1933). As a result, we often don’t measure and analyze accurately, and yet those two activities are at the heart of effective research, of any systematic investigation. For example, a Level 4 case study is not the way a financial analyst would gather empirical evidence to measure efficacy.

I’ll spend more time on this later in the chapter. But here’s another example to illustrate my point. Scientists are curious about the age of our world and have different ways of gathering evidence to explore it. One way would be to use annual Gallup data from surveys asking what Americans believe. If one analyzed that evidence, one might conclude that the earth is approximately 6,000 years old. Such an approach can be perfectly reliable (repeat the survey and you will get similar answers), but it is not a valid way of answering the question. Surveying Americans is an effective way of determining what they believe; even when the responses are measured accurately and analyzed correctly, the underlying data tell us little about the earth’s actual age. A better approach would be radiometric dating. Its estimates come with wide margins of error (dating rocks involves variance of several million years), but they ultimately provide a far more accurate answer to the question.

The other common approach to evaluating learning is Brinkerhoff’s Success Case Method, which is about telling success stories. Conceptually, one is not looking for typical outcomes, but, rather, extraordinary outcomes. So if you can find one instance where something amazing happened, that is the story to tell in this methodology. The logic of this approach is that performance is described based on the extremes rather than a mean or median score.

Indeed, the paleontologist Stephen Jay Gould wrote of his own mortality in this vein in “The Median Isn’t the Message,” after learning that the median survival for someone with his form of cancer was less than a year; he went on to live many years after developing the cancer (Gould 1985). The median or mean is simply part of the description of a distribution. Outliers are also part of that distribution, but outcomes near the mean or median are far more likely to occur.

So the Brinkerhoff approach is akin to telling the story of someone who survived falling out of a plane at 36,000 feet and then using that information to justify jumping out of a plane without a parachute. I certainly find this approach compelling in terms of advocacy; reading about someone who survives a plane crash or beats the odds of cancer is motivating. But such an approach is less compelling in terms of research because it suggests that an atypical outcome is typical. Following this logic, from an analytical perspective, it also suffers from the shortcomings of some implementations of the Kirkpatrick model: case studies, which might be wonderful for contextualizing what happened, are generally not accepted by scientists for explaining causal relationships.

Some Current Whoppers in the World of Corporate Research

Let’s first recognize that vendors, when advertising or providing testimonials on the efficacy of some software or training, are explicitly trying to mislead you. This doesn’t mean that they are not providing value, but they are not motivated to conduct systematic investigations on the limitations of their products. More troubling to me are the examples of pervasive misinformation that I’ve seen in our profession. I understand the motivation, because there is a perception and some plausible evidence that the public and companies undervalue learning. But when we behave that way, we may undermine our own credibility because we generate and perpetuate what can be broadly construed as lies.

Three of the most popular business books of all time are fraught with deceptions of various sorts, yet we continue to buy them and believe them. Perhaps the most upsetting is In Search of Excellence (Peters and Waterman 1982), where one of the authors admitted that some data were faked (Byrne 2001). Another great book, The Seven Habits of Highly Effective People, is largely aspirational and based on Stephen R. Covey’s strong religious grounding and ethics (Covey 1989). He argues that if people behaved more in this way, companies would be better off. But there was never any thorough empirical testing of the hypothesis. So while I think his book is great, and inspirational in its aspirations, it has been interpreted as something that it is not. In the literature, I could find no evidence on whether adopting the habits advocated in the book is an effective leadership strategy.

The third book is Jim Collins’s (2001) Good to Great, which is simply guilty of poor research design. Collins got his evidence by looking for patterns in behavioral traits among leaders of successful companies. But then he argues that these patterns are generalizable; in other words, any leader who adopted these traits would make his company great. The best way to investigate the question, once the traits had been identified, would have been to identify two groups of leaders who were similar except for those traits, randomly assign them as CEOs to companies, and then evaluate company performance. Clearly, this is impractical, but he could at least have drawn a broader sample to see whether the same behaviors existed at companies that weren’t so great. One could also have easily investigated whether endogeneity was present (this is a methodological chicken-and-egg question: do great companies make great leaders or vice versa?).

This attention to detail in how a question is asked may seem trite, but two examples illustrate its importance. You could look for patterns in the traits of the best basketball teams and conclude that you only need tall African Americans to put together a great team. Or you could find that drug use is quite prevalent among musicians and conclude that all you need to do to become a rock star is consume illicit drugs. We need to be fluent consumers of business research and pay particular attention to how the hypotheses put forth are systematically investigated. If we don’t hold authors to some minimum standard of investigatory rigor, we are guilty of perpetuating the falsehoods.

Now let’s move on to examples grounded purely in our world, where we are the producers of the theory and the research, rather than consumers of it. Let’s start with an example that represents a commonly perpetuated lie—the notion of learning styles. Generally, a learning style refers to the way an individual acquires and processes content. The hypothesis is that different people learn differently and that if one does not align an individual’s learning style with the delivery of content, then learning will not occur. What is so interesting about this example is that it is so pervasive and popular, even though it has been thoroughly researched and there is overwhelming evidence against it. Indeed, more than 100 peer-reviewed studies have investigated this hypothesis and found no support for it. Even when people self-identified with a particular learning style and then had content delivered consistent with that style, there was no increase in learning. In other words, there is overwhelming evidence that people learn in many different ways but no evidence that designing to accommodate an individual’s style preference leads to better outcomes. Despite that evidence, if you search for “corporate learning” and “learning styles” you get close to 4.5 million hits.

So why do I still hear companies talking about building or buying programs to accommodate a particular learner’s style? The primary reason, I suspect, is that the myth (that learning styles matter) simply sounds better than the truth (that they don’t). Both buyers and consumers of content feel better when they believe that their unique needs have been considered when a product was designed. And arguing that learning styles are not nearly as relevant as we think earns us no credibility and, in fact, may even hurt us because, for better or worse, the myth has been accepted as truth in the marketplace of learning ideas. So we just choose to let it slide, recognizing that we are perpetuating what is, for all intents and purposes, a lie.

Another lie involves making an inaccurate inference from observational data. Think of this as a rendition of an old show tune from Bye Bye Birdie—“what’s the matter with kids today?” The prevailing argument uses the following logic: A manager notices that the younger people in the office work differently than the older people and then assumes that there is something fundamentally different about younger workers today (the popular pieces started appearing when Generation X was entering the workforce and now the argument is applied to Millennials). This presumption is a classic example of mistaking an age effect for a cohort effect. Peter Cappelli, the George W. Taylor Professor of Management and director of the Center for Human Resources at the Wharton School, has written about this eloquently. There is no evidence that humans have evolved into some new species since the birth of Millennials. And as the old musical number attests, folks have been observing that young people seem different from older people since time immemorial. This is what research circles call a specification error.

Next, let me give an example that is akin to embellishment: the claim that most learning happens informally. It is a plausible theory, but one that has never been researched effectively (and, by its nature, probably never will be). When I search for “informal learning” on the web, I get more than 14 million hits. Note that I am not arguing that informal learning does not exist. Many peer-reviewed articles have investigated it, most recently a Stanford study used by the White House for public policy purposes. But what has not been vetted is the oft-cited 70-20-10 ratio (the notion that 70 percent of learning happens on the job, 20 percent through other people, and only 10 percent through formal training), despite a host of articles written about it.

From what I was able to ascertain when attempting to follow the literature, this was akin to that old childhood game of telephone. In the 1960s a Canadian researcher, Allen Tough, posited that one could think of the relationship between informal learning and formal learning like an iceberg. It seems that over time, someone, or perhaps many people, took that metaphor, attached weights to it, and all of a sudden it became an accepted notion that 90 percent of learning is informal and people budgeted accordingly. But how does one even measure that? It is now accepted as a truth, and meaningful decisions are made based on it. We should infer from this that, while informal learning exists, by definition we cannot structure it or measure it, and we certainly should not assign ratios to it and budget accordingly.

My final example relates to return on investment (ROI)—the holy grail among corporate learners. I first heard this term applied to the world of corporate learning when a chief learning officer (CLO) from a Fortune 100 company publicly proclaimed an ROI of 1,328 percent. I was shocked at such a statement. There is a rich history in the labor economics literature of estimating the returns to education. Calculating ROI is one way to measure the return on education; net present value and optionality approaches are alternatives. To put the 1,328 percent in context, economists using various econometric models have estimated that the annual return to a high-quality college education hovers between 10 and 20 percent. So if this company had really cracked this nut, it was performing roughly 100 times as well as, say, Harvard Business School. One need only look at the balance sheet and income statement to realize that what was happening instead was very poor research.

Now to be clear, there is ample evidence on the returns on education and it has been quantified, so it can be calculated. And I understand the compelling arguments around the need for learning leaders to have more business acumen and talk the language of business. But if you went to the chief financial officer (CFO) with such a claim, it would not be laudable, but laughable. CFOs understand conceptually the notion of return on education, and they also probably know that those returns are not best measured as ROI.

Think about it. ROI implies that one can look at a company’s financial performance over time, take into account all the extraneous variables that would affect that company, quantify all the costs associated with delivering the program, isolate the endowment effects (such as motivation, prior education, and intelligence), and come up with a simple ratio. In theory, one could use stock price, because it is supposed to take into account both the market effects and future earnings, but even so it is a fool’s errand. This is a classic example of poor research design. It is a plausible question supported by theory, although not sufficiently practical to execute for the average CLO. Trying to measure returns on corporate learning using ROI is like trying to turn lead into gold.
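To make the arithmetic concrete, here is a minimal sketch in Python of what a 1,328 percent ROI claim implies when set against the 10 to 20 percent annual returns economists typically estimate for education. All of the numbers are hypothetical, chosen purely for illustration; the point is the size of the gap, not a recommended method.

```python
# Hypothetical illustration only: what a 1,328% ROI claim implies, compared
# with the 10-20% annual returns economists estimate for education.
program_cost = 100_000   # assumed fully loaded cost of the program
claimed_roi = 13.28      # 1,328% expressed as net benefit divided by cost

# The ratio behind such claims: ROI = (benefit - cost) / cost
implied_benefit = program_cost * (1 + claimed_roi)
print(f"Benefit the claim attributes to training: ${implied_benefit:,.0f}")

# The same investment judged at the returns typically found for education
for annual_return in (0.10, 0.20):
    benefit = program_cost * (1 + annual_return)
    print(f"Benefit at a {annual_return:.0%} annual return: ${benefit:,.0f}")

# The claim requires attributing to one program benefits roughly 100 times
# larger than anything in the economics literature, which is usually a sign
# that other factors were never isolated, not that the program is miraculous.
```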

What These Things Have in Common

What lessons can we draw from these examples? Well, I hope you see that we are all in good company when making these mistakes. Being diligent is difficult when you are under financial and time pressure. Perhaps trickier is that, as professionals, we have our own experiences to draw from, and those experiences shape our views of the world. Because we feel from experience that our programs work, and because we are under pressure to perform, we too often act like learning advocates rather than learning engineers. What matters to our employers and to us is that we help the business, not that we have a large budget or lots of programs. We should put our heart and soul into the design and delivery of programs. But at the end of the day, the programs need to stand or fall on their merits, and we should always put the best interests of the organization and our colleagues ahead of the creativity or coolness of our programs.

We need to stop being lemmings. We are so enamored with the next great trend that we jump on these bandwagons, partially because the success of our programs matters so much. But we need to be much more thoughtful in how we interpret the research we are consuming to inform our program designs. And we need to recognize that the way we ask a question matters most because we cannot fix through clever analysis that which we messed up in design.

How to Prevent Getting Hoodwinked and Lying to Yourself

The key to ensuring that you aren’t deceived or don’t inadvertently deceive is to be an effective theoretician and researcher. What does that mean? First, pose questions (and, by extension, answers) that can be tested—investigated empirically. Better to measure what you can measure well—say, a Level 2 Kirkpatrick evaluation—than to aspire to a Level 4 evaluation and use self-reported data to claim you’ve measured business impact.

Second, know your underlying theory and link your questions to it. Imagine using a behavioral approach for compliance training, a cognitive approach for strategy, and perhaps a social theory approach for innovation. You should know which pedagogical approach you are using and why.

Third, whenever possible, use methods that permit direct investigation of your questions. If you want to see if someone can do something, don’t ask if she can do it; ask her to demonstrate that she can. If you want to know whether he found the course useful, ask him whether he found it useful. Be clear and thoughtful about how you measure things, and measure as directly as possible.

Fourth, provide a coherent and explicit chain of reasoning; walk yourself through how you got to the question. For example, before you run a sales training program, make sure you understand why you’re running the program. What evidence do you have that it is a skills problem (as opposed to a slow economy)? What evidence do you have that it is a performance problem that cuts across individuals, as opposed to something else that may be influencing sales, such as the product itself or sales support? Then investigate why you are running a particular intervention. How do you know it is the right choice compared with a different program that might be available in the marketplace? Make sure you have a strategy in place for evaluating success before running the program. One approach would be to run the intervention once as an experiment to test the hypothesis and then repeat it, evaluating all the way. You could also see whether others have investigated the question before you and learn from their results. Finally, whenever possible, disclose your research to encourage professional scrutiny and debate. That makes all of us better.

Let’s review two research approaches: collecting numbers (quantitative) and collecting observations (qualitative). Keep in mind that all social sciences ask the same basic sorts of questions, so all the approaches are relevant. But each approach is like a lens, and it brings certain things into focus and moves other things out of focus.

We probably should be conducting research that is a mix of qualitative and quantitative, with a heavy emphasis on quantitative methods. When it comes to developing people, it is important to recognize that you are never going to capture the full impact of any training: learning is too complex, there are too many interaction effects, and learning doesn’t happen the way it was conceived in the popular Matrix movie series, where you only need to plug in and download knowledge. Performance and learning are loosely coupled; some learning takes time to stick, and some keeps paying off long after the fact. Thus, there are both consumption and investment benefits to training, and there are individual and social benefits to learning. So you might not be able to take a course on innovation and turn a company around overnight, but the course might spark an idea that generates a solution in the future. Or you might take a course that benefits your team as much as yourself.

Going beyond these conceptual ideas, how we systematically gather evidence determines what our findings mean and which questions we can answer. For empirical investigations, we could conduct interviews, run statistical analyses of data—either observational or self-reported—or conduct controlled experiments. Even with the same data and the same paradigm, we can analyze things differently with different measurement techniques: a simple correlation, a residual approach that controls for other factors, or a direct measure of returns. The example I gave earlier about investigating the age of the earth gets at this idea, even though both the survey and the direct investigation of the rocks were analyzed quantitatively. Remember, the most direct approach is generally the better approach.
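As a sketch of how much the analysis technique matters, here is a small Python simulation, with entirely made-up data, comparing a simple correlation-style estimate of a training effect with a residual approach that controls for an endowment such as ability. The variable names, effect sizes, and the scenario itself are assumptions for illustration only.

```python
import numpy as np

# Simulated (hypothetical) data: higher-ability people are more likely to be
# trained, and ability also drives sales. The true training effect is 0.5.
rng = np.random.default_rng(0)
n = 500
ability = rng.normal(size=n)                          # unobserved endowment
trained = (ability + rng.normal(size=n) > 0).astype(float)
sales = 2.0 * ability + 0.5 * trained + rng.normal(size=n)

# Naive approach: regress sales on training alone (akin to a simple correlation)
naive_slope = np.polyfit(trained, sales, 1)[0]
print(f"Naive estimate of the training effect: {naive_slope:.2f}")

# Residual approach: control for ability, which recovers something near 0.5
X = np.column_stack([np.ones(n), trained, ability])
coefs, *_ = np.linalg.lstsq(X, sales, rcond=None)
print(f"Estimate after controlling for ability: {coefs[1]:.2f}")
```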

Finally, if we are interested in things like the value of training, we need to be aware of measurement challenges, such as the interaction of training with prior education and ability; selectivity bias; the quality versus the quantity of training; and discounting for time. Two strategy courses, one taught by Bob the Builder and the other by Michael Porter, shouldn’t be treated the same. And a great course that provides benefits to the company 10 years down the road needs to be evaluated differently from the great course that provides benefits tomorrow.
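On that last point, discounting for time, a tiny worked example may help. The benefit amount and discount rate below are hypothetical.

```python
# Hypothetical numbers: the same benefit is worth far less today if it
# arrives in 10 years than if it arrives next year.
benefit = 50_000         # assumed dollar benefit attributed to the course
discount_rate = 0.08     # assumed corporate discount rate

for years in (1, 10):
    present_value = benefit / (1 + discount_rate) ** years
    print(f"Benefit arriving in {years:>2} year(s): worth ${present_value:,.0f} today")
```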

The Hierarchy of Evidence for Impact

Generally, the more times something has been tested, the better. So you would rather use a tool, product, or intervention that 100 studies say works, rather than simply rely on one study with the same finding (with the huge caveat that the quality of each study matters).

If you are attempting to demonstrate impact, the gold standard among researchers is the randomized controlled trial: the classic experiment with treatment and control groups. Almost as good are natural experiments and quasi-experiments. Less desirable are the survey and the case study, which happen to be the ones we see most often in our space. While surveys and case studies tell us things, they don’t answer the question of whether something worked. Surveys can tell us what people say and, if well designed, what people think, but they are less accurate at telling us what people do. Case studies are spectacular at illustrating a point, but they suffer from the same problems as surveys, plus the added constraint of generally having a sample size of one. As a result, it is difficult to make inferences with respect to generalizability.
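To show what the gold standard buys you, here is a miniature Python simulation of a randomized comparison; the data and effect size are invented for illustration. Because assignment is a coin flip, a simple difference in group means estimates the program’s effect even though people start out with different abilities.

```python
import numpy as np

# Simulated (hypothetical) data: outcomes depend on ability plus a true
# program effect of 0.3. Assignment to the program is random.
rng = np.random.default_rng(1)
n = 400
ability = rng.normal(size=n)
assigned = rng.random(n) < 0.5                       # coin-flip assignment
outcome = ability + 0.3 * assigned + rng.normal(size=n)

treated, control = outcome[assigned], outcome[~assigned]
effect = treated.mean() - control.mean()
std_err = np.sqrt(treated.var(ddof=1) / treated.size +
                  control.var(ddof=1) / control.size)
print(f"Estimated effect: {effect:.2f} +/- {std_err:.2f} (true effect: 0.30)")
```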

Why Research Matters

In some fundamental ways, all social scientists are interested in answering the same sorts of questions about the world. But each discipline has very specific rules about the fidelity of research and is concerned about poorly designed and executed mixed-method research. With respect to education evaluation, the U.S. government’s Office of Management and Budget, the Campbell Collaboration, and the National Academy of Sciences have attempted to describe a hierarchy of evidence where, roughly speaking, there is consensus:

•  More studies are better than fewer.

•  Anecdotal case studies or testimonials are the weakest form of evidence.

•  Randomized controlled experiments are the gold standard.

And there is recognition that some approaches are better at revealing hypotheses and exploring why, while others are better at evaluating whether something worked as planned. Table 2-1 lists some purposes or goals of research and the corresponding approaches.

Table 2-1. Research Methods and Their Goals

Research Methods | Goal
Qualitative and quantitative | To ensure implementation and replicability
Qualitative | To provide context and insight
Quantitative | To evaluate the effectiveness of evidence

For social science research to be funded and evaluated, the government and others have developed hierarchies of evidence with randomized controlled experiments being the gold standard, and the more times a study is replicated, the more reliable the finding.

The problem with that approach is that it may be untenable. It is easy to imagine running one cohort through a pilot versus a control group for a short period of time, but, ultimately, the reason the program is offered is to solve a problem, and saying that only 50 percent of employees will participate in a program to determine its efficacy may be a nonstarter with the business leader. It may win points among an editorial board or for tenure, but among CLOs, heads of talent, and business leaders who have P&L responsibility, this approach will, most often, be useless.

But as some of the lies have illustrated, in its absence, what has emerged is a system where evidence is largely word of mouth or personal testimonials, sometimes by vendors who have an agenda and sometimes by the learning professionals who have a point of view. These people act as advocates rather than researchers. As a result, much of what we design and deliver is rooted in little evidence that any of it works, but we have a large stake in saying it works, regardless of its actual efficacy.

However, we can’t assume that learning professionals will always rise to the challenge. What is needed instead is a pragmatic solution. You must choose the level of sophistication you need to understand the impact of your programs. What tools can you use to gather evidence of impact? What tools do you use to analyze performance and learning?

We can address this issue pragmatically. In our country’s courts, there are rules of evidence that dictate the way one gathers and analyzes evidence. For example, how DNA evidence is evaluated is different from how testimony is evaluated, but if both are gathered and evaluated accurately they can inform the jury. If someone provides testimony about events they witnessed while under the influence, or as someone who has failing eyesight, the testimony may be weighed differently. If DNA evidence has been tainted, it may be evaluated differently. In the same way, each of our disciplines has a different research paradigm that causes us to approach evidence differently. In that respect, the courts and the research world are aligned.

But the difference is in how the courts treat the evidence. There are different standards for evaluating the evidence, depending on whether the case is civil or criminal. In a criminal case, the standard is “beyond a reasonable doubt,” whereas in a civil case, it is a “preponderance of evidence.” You could present the same evidence, asking the same questions, and, depending on the standard, arrive at two different findings.

What we need is a way to evaluate evidence systematically, using a standard different from those used for tenure review or peer-reviewed journals. We need a preponderance of evidence standard that is quicker, more flexible, and easier to interpret than the research we do for our own education. And it needs to be driven by the needs of the market. The big data movement might make this fairly simple to execute in the near future.

We also need our profession to raise the bar. Some years ago, the Association for Talent Development was a major sponsor of a global initiative to create an ISO standard for corporate learning. While it was adopted by 42 countries, to my knowledge that standard was not adopted by a single U.S. employer. Our professional associations, particularly ATD, need to continue to lead the charge and call out the lies as they become clear.

What You Can Do in the Meantime

We continue to evolve as a profession. Hopefully, programs like the one I created at the University of Pennsylvania (PennCLO) and initiatives by organizations like ATD to develop professionals can help establish a set of rules for research on learning. In the meantime, there are some pretty basic things you can do to catch lies and prevent lying.

Question everything. Be critical when you purchase and when you deliver. Be less of an advocate and more of a critic. And don’t assume that a whitepaper, or even one peer-reviewed article, about someone’s approach or product proves anything.

Research what you can. Not being able to run large-scale, longitudinal, randomized controlled trials with matched pairs is not an excuse to throw up your hands and follow the latest trend. Systematically question what you are planning and gather what evidence you can to support or refute what you actually do. And don’t become so invested in a strategy that you ignore mounting evidence that what you are doing is having little effect.

Use empirically vetted content. In other words, if you have to choose between two marketing courses—one based on the research of a professor and the other one based on the whims of a prophet—go with the professor.

Use pedagogy aligned with your business problem. If you need to get your folks working better as virtual teams, a behaviorist approach may not be the most prudent design strategy.

If I’ve succeeded, I have raised more questions than I have answered and made you more skeptical than before. That is the crux of my message. Question everything to the extent possible and recognize that to be an effective professional you need to concurrently be a theoretician and researcher. While this is true in every profession, it is especially true in ours. Darwin argued eloquently for the survival of the fittest—that those species that could adapt would survive. For organizations, the only way to adapt is to learn. We are our companies’ fulcrums to the future.

References

Browning, R. 1933. Men and Women and Other Poems. Boston: Orion Publishing Group.

Byrne, J.A. 2001. “The Real Confessions of Tom Peters.” Businessweek, December 2. www.businessweek.com/stories/2001-12-02/the-real-confessions-of-tom-peters.

Collins, J. 2001. Good to Great: Why Some Companies Make the Leap … and Others Don’t. New York: HarperBusiness.

Covey, S.R. 1989. The Seven Habits of Highly Effective People. New York: Free Press.

Gould, S.J. 1985. “The Median Isn’t the Message.” Discover (June): 40–42.

Peters, T.J., and R.H. Waterman Jr. 1982. In Search of Excellence: Lessons From America’s Best-Run Companies. New York: Harper & Row.
