FOURTEEN

Sticky Findings
Research Evidence Practitioners Find Useful

DENISE M. ROUSSEAU
AND JOHN W. BOUDREAU

What we have here is a failure to communicate.
Cool Hand Luke

Technical skill is mastery of complexity, while creativity is mastery of simplicity.
CHRISTOPHER ZEEMAN

The hardest problem of all: how people think.
EDWIN KREBS

STICKY FINDINGS ARE RESEARCH RESULTS that grab attention, gain credibility, and are readily shared. The merits of taking aspirin after a heart attack (Smith et al., 2001) or limiting TV watching for kids (Hancox, Milne, & Poulton, 2005) are sticky findings that many people have acted on. Still, an eminently useful evidence-based idea is no guarantee of uptake. Indeed, a destructive idea often stands a better chance of acceptance and use if it captures the attention of people whose interests it serves. Consider the overreliance on stock options in management compensation as one widely popular but bad-for-business idea (Ghoshal, 2005). What is popular is not necessarily scientifically valid and vice versa. Nonetheless, for research findings to be used they need to be sticky, or perhaps more accurately put, they need to be presented in sticky ways. Our chapter’s thesis is this: We need to design our research and communications with users in ways that make our findings sticky. The users we have in mind are business professionals, organizational leaders, regulators, and the general public.

What Makes Findings Sticky

The notion of sticky findings owes much to the idea of stickiness in fads, fashions, and innovations as popularized by Malcolm Gladwell’s (2000) book The Tipping Point. This notion is developed further by the Heath brothers in their book Made to Stick (Heath & Heath, 2008). Both books base their ideas on scientific studies, embellished with stories that give their research base “legs.” Appropriately, findings with legs to travel on are what sticky findings are about.

Sticky ideas have certain traits (Gladwell, 2000; Heath & Heath, 2008). They are simple. They grab attention by being unexpected. Or they are sticky because they create an emotional reaction, ranging from fear (am I messing up my kids?) to relief (ok, I’ll keep aspirin in my office). Last, sticky ideas are easily communicated. Or, if the original notion is really complicated, it has been boiled down to get the key idea across. By virtue of their simplicity, emotionality, and unexpectedness, sticky ideas make for a vivid story, a catchy saying, or sometimes an easy fix to try out and share.

For example, in our courses we have found it easy to discuss prospect theory, a judgment process in which people set a reference point in evaluating alternatives (Kahneman & Tversky, 1979). Its core idea can be captured in a sticky phrase, one that we have students repeat, “losses hurt more than gains feel good.” It is the gist of prospect theory, and its easy recall allows students to recognize how it applies across diverse scenarios—from purchasing decisions to change management. We aren’t talking about creating the Sticky Research Diet. But taking a page from popular self-help books, we do need to find ways that make research findings memorable, communicable, and actionable.
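
To make the maxim concrete, consider the value function at the heart of prospect theory. The piecewise form below follows Kahneman and Tversky (1979); the parameter estimates are those reported in Tversky and Kahneman’s later work and are illustrative rather than definitive.

$$
v(x) =
\begin{cases}
x^{\alpha} & \text{if } x \ge 0 \\
-\lambda(-x)^{\beta} & \text{if } x < 0
\end{cases}
\qquad \alpha \approx \beta \approx 0.88, \quad \lambda \approx 2.25
$$

With α = β, a loss is felt λ times as strongly as an equal gain, so a $100 loss registers roughly 2.25 times as strongly as a $100 gain: the maxim in mathematical form.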

Why We Need Sticky Findings

All chapters in this book are motivated by concern that a long-standing objective in organizational and management research remains largely unrealized: conducting research that is both scientifically advanced and used by practitioners. The persistent research-practice gap is an over-identified problem, meaning that there are lots of reasons why practitioners do not readily apply our findings. This book suggests important causes of this problem, including that scholarly research topics may be too removed from the real problems managers and organizations face. How research is produced is a cause of the research-practice gap, because typically managers are not involved in problem formulation or in the research process (the antithesis of the “engaged scholarship” Van de Ven [2007] describes). Certainly, formulating research problems with greater fidelity to the issues faced by practitioners and involving them in the process may well enhance the usefulness and stickiness of research. Involving opinion-leading users in research, in particular, can lead to their endorsement and popularization of its findings.

In this chapter, we suggest that many, perhaps most, research findings can be made sticky, if communicated in an appropriate manner, channel, and form. In this matter, we concur with the respondents to Shapiro and colleagues’ (2007) survey of the Academy of Management’s academic and practitioner members. The consensus of that survey’s findings is that a good deal of the practical value of research findings gets lost in translation. Taking this problem to heart, we develop the notion of sticky evidence and the features we believe can make for successful research translations (including “Implications for Practice” sections of research articles [Bazerman, 2005], as well as other communiqués targeting practitioners).

Let’s talk now about the underuse of organizational and management research findings by practitioners (i.e., the research-practice gap) and then about how sticky findings can help bridge that gap. Fact: Research evidence is seldom used as a basis for management practice. Evidence: Professional human resources (HR) managers are unfamiliar with many basic findings in that field and often fail to act on those findings they do know (Rynes, Brown, & Colbert, 2002). The classic example of underuse of research evidence is utility analysis. Utility analysis formulas developed in industrial/organizational psychology show high payoffs from improved selection. Boudreau and Ramstad (2003) observed that these utility-analysis models are seldom used by leaders, and when their use is attempted, sometimes they affect decisions and sometimes they do not (Borman, Hanson, & Hedge, 1997; Florin-Thuma & Boudreau, 1987; Latham & Whyte, 1994; Macan & Highhouse, 1994; Roth, Segars, & Wright, 2000; Whyte & Latham, 1997).

Yet, utility-analysis results seem to be more acceptable to operating executives when they are integrated with capital-budgeting considerations (Carson, Becker, & Henderson, 1998; Cascio & Morris, 1990; Mattson, 2003). These results seem more credible to managers when presented as a special case of investment analysis, including considerations such as discounting for risk and time, after-tax returns, and the costs required to invest in improved selection. Simply put, investing in improved selection is not that dissimilar from investing in any other initiative with uncertain returns, and making that clear seems to improve the stickiness of what are otherwise seen by leaders as arcane psychological calculations.
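
To make this concrete, the classic utility calculation and its investment-analysis restatement fit in a few lines of code. The sketch below is ours, not a published implementation: it assumes the standard Brogden-Cronbach-Gleser formula, the tax and discounting adjustments are simplified versions of those discussed in the utility-analysis literature, and every parameter value is hypothetical.

```python
# Minimal sketch of utility analysis with financial adjustments.
# Brogden-Cronbach-Gleser: delta_U = N * T * r * SDy * zbar - N_applicants * C
# All parameter values are hypothetical.

def bcg_utility(n_hired, years, validity, sd_y, z_mean,
                n_applicants, cost_per_applicant):
    """Classic utility estimate: dollar gain from improved selection."""
    gain = n_hired * years * validity * sd_y * z_mean
    cost = n_applicants * cost_per_applicant
    return gain - cost

def financially_adjusted_utility(n_hired, years, validity, sd_y, z_mean,
                                 n_applicants, cost_per_applicant,
                                 discount_rate=0.10, tax_rate=0.30):
    """Same gain, restated as an after-tax, discounted cash flow --
    the investment-analysis framing executives already use."""
    annual_gain = n_hired * validity * sd_y * z_mean
    pv_gain = sum(annual_gain * (1 - tax_rate) / (1 + discount_rate) ** t
                  for t in range(1, years + 1))
    cost = n_applicants * cost_per_applicant * (1 - tax_rate)  # simplification: cost treated as tax-deductible
    return pv_gain - cost

if __name__ == "__main__":
    params = dict(n_hired=50, years=5, validity=0.35, sd_y=20_000,
                  z_mean=0.8, n_applicants=500, cost_per_applicant=300)
    print(f"Classic utility:  ${bcg_utility(**params):,.0f}")
    print(f"Adjusted utility: ${financially_adjusted_utility(**params):,.0f}")
```

Presenting the second number alongside the first frames improved selection as just another investment with uncertain returns.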

Practicing managers have little access to research evidence outside professional schools. Evidence: Managers generally rely on their peers and opinion-based management periodicals such as Harvard Business Review for new ideas (Rynes et al., 2002). Notably, Rynes and colleagues found that fewer than 1 percent of HR managers at the manager, director, or VP levels read that field’s three top-tier academic journals; most (75 percent) read none of the three.

Practitioners often do not act on the findings they do know about, and it is not clear whether they even believe them. Evidence: The most widely documented failure of uptake is practitioners’ strong and persistent preference for intuitive methods of selection. Practitioners continue to use unreliable, poorly validated ad hoc or unstandardized employment interviews, a practice that prevails over more reliable and well-validated standardized predictors or mechanical combinations of selection techniques (Rynes, 2010). Highhouse (2008) reviewed multiple studies showing that mechanical or statistical predictions of employee behavior are superior both to intuitive methods, such as the unstructured interview, and to combinations of mechanical plus intuitive methods. Nevertheless, the unstructured interview remains the most popular and widely used selection procedure, and has been for over 100 years (Buckley, Norris, & Wiese, 2000).

Educators and the textbooks they use often do not focus on research findings, for fear of making courses too dry. Management educators often view research as “fun squishing.” Evidence: Educators and textbook writers who make inconsistent use of research in their classes and texts report feeling pressed to avoid being too research-y (Rousseau, 2006). Case studies are frequently used as a teaching method without grounding their analysis in findings supported by management research.

Complicating things further, firms are characterized by weak and inconsistent decision management, making it difficult to systematically act on research findings. Evidence: Firms typically have unsystematic ways of making decisions regardless of whether the decision is novel or routine (Yates, 1990). They make limited use of protocols or decision trees for common or repeated decisions from selection to performance management, to implementing change, despite evidence of both their value and the repeatable nature of many managerial decisions (Boudreau, 2010; Drucker, 1993; Yates, 1990). Boudreau (2010) suggests that this unsystematic approach may be particularly true for decisions about human capital, compared to other resources, because managers typically have much less training and accountability for understanding human capital decision frameworks than they do for frameworks for resources such as money, customers, and technology. Indeed, Lawler and Boudreau (2009) report that both HR and non-HR leaders in large companies rate business leaders lower on the degree to which they use sound principles for decisions about human capital issues than on the degree to which they do so for more traditional disciplines such as finance.

Lest the last paragraph sound like a rant against managers, let us be clear: academics are at least half the problem. Scholars in organizational and management research are often unaware of, or unconcerned with, how potential users think. Even academic writing intended for practice is often dominated by what a layperson regards as irrelevant backstory (the five other theories considered before the current one) or in-group (“scientific”) debates (from the hierarchical structure of a construct such as intelligence, to the distinction between psychological contract and social exchange theory). Academics tend to assume that if something is a problem in scholarly research, it is also a problem in practice. Many academics also aren’t particularly concerned with communicating research findings to practitioners: scholarly careers are built on citation rates and recognition from other academics, not practitioners. Moreover, even full-time academics, not just clinical professors and adjuncts, are reluctant to make evidence central to their teaching. Only recently have more systematic ways of using evidence been promoted in management education (e.g., Rousseau & McCarthy, 2007).

The research-practice gap itself contributes to problems in making research sticky. Academics and practitioners tend to be mutually incompetent in relating to each other. Academics don’t have a good understanding of how practitioners think or even of what they do. Practitioners, an even more heterogeneous group than academics, often lack training and knowledge regarding basic organizational phenomena and have limited insight into their own decision processes. People tend to be overly optimistic when evaluating the quality of their performance on social and intellectual tasks (Ehrlinger et al., 2008). In particular, poor performers grossly overestimate their performance because their incompetence deprives them of the skills needed to recognize their deficits. Surrounded as we are by peers who make the same mistakes we do, this lack of insight into our own errors leads to overly optimistic estimates of how much academics understand about practitioners and vice versa. As a first step, academic researchers need to gain insight into the thinking and decision styles of practitioners. How might we do that? Let’s consider what we need to know to make findings sticky.

Core Features of Sticky Findings

The job of the teacher is to arrange victories for students.
—Quintilian

We propose a set of core features in communiqués with practitioners that help make evidence sticky. These features reflect research on cognition and decision making, persuasion, and diffusion of innovation (Gladwell, 2000; Goldstein, Martin, & Cialdini, 2008; Heath & Heath, 2008). In specifying these features, we assume that the target practitioner audience is largely made up of novices, not experts in management and organizational research.

1. Findings must be presented in ways that appear practice related.

All people have limited cognitive capacity to process information, particularly when that information is novel or unfamiliar (Simon, 1997). Sticky findings focus on practitioner-relevant or germane content and exclude what’s irrelevant. Thinking and learning occur in working memory, which is constrained in the number of bits of data that can be processed at one time. Details of interest only to researchers are distracting and should be excluded. This unnecessary backstory can include the history of the research, its methodological intricacies, or theoretical nuance. The temptation for academic writers to include background is common and can be distracting even for academic readers. It is striking how often we review scholarly manuscripts that include long and comprehensive summaries of prior work but fail to answer this question: What anomalies in prior theoretical frameworks or empirical findings would be explained by this research? Sticky findings require not only an answer to this question but that the answer be presented in practice-related ways. Similarly, use of traditional academic citation style (like “Lennon & McCartney, 1965”) distracts lay readers, who tend to wonder: What does this mean? Is it important?

Generally, experts process new information in the context of existing frameworks and mental models, whereas nonexperts tend to process new information experientially (Chi, Feltovich, & Glaser, 1981; Chi, Glaser, & Rees, 1982). Nonexperts ask questions such as these: Does this track with my experience? What would it mean if I acted on it? A presentation should therefore use examples to show which aspects of a novice’s experience can help in understanding the finding, and where that experience might mislead. For example, policy-capturing studies suggest that what people report as important to their work choices can differ from the factors driving their actual choices (Karren & Barringer, 2002). Evidence indicates that watching what people do may be more informative than asking them to tell you what they do. Practitioners and students often need to experience these types of discrepancies firsthand by actually doing policy-capturing exercises and seeing for themselves how their actual choices differ from their beliefs. In this fashion, we can convey the sticky finding that relying on what people say—instead of what we observe them do—may lead us astray.
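
For readers curious about the mechanics, a policy-capturing exercise amounts to regressing a person’s judgments on the cues describing each scenario and comparing the recovered weights with the weights the person claims to use. The sketch below simulates this with entirely hypothetical data: the manager says pay matters most, but flexibility actually drives the ratings.

```python
# Hedged sketch of a policy-capturing exercise: stated vs. revealed weights.
# Hypothetical data: a manager rates 40 job offers described by three cues.
import numpy as np

rng = np.random.default_rng(0)
n = 40
pay, flexibility, prestige = rng.normal(size=(3, n))

# Stated weights: the manager *says* pay matters most.
stated = {"pay": 0.6, "flexibility": 0.2, "prestige": 0.2}

# Simulated ratings: in fact flexibility drives the judgments.
ratings = 0.2 * pay + 0.6 * flexibility + 0.2 * prestige + rng.normal(0, 0.3, n)

# Regress ratings on the cues to recover the weights actually used.
X = np.column_stack([pay, flexibility, prestige])
revealed, *_ = np.linalg.lstsq(X, ratings, rcond=None)

for cue, b in zip(stated, revealed):
    print(f"{cue:12s} stated={stated[cue]:.2f}  revealed={b:.2f}")
```

Seeing one’s own stated and revealed weights diverge in an exercise like this is precisely the firsthand discrepancy the paragraph above describes.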

2. Express clear core principles in plain language and familiar analogies.

To be easily grasped and recalled, findings need to be expressed succinctly as facts. The principal findings of goal setting, for example, can be expressed as a fact: Specific goals tend to lead to higher performance than do general, “do-your-best” goals (Locke & Latham, 1984). Similarly with the core principle of intergroup research: Socially distant groups tend to have negative perceptions of each other (Insko et al., 1990). Again, scholarly tradition often acts against the use of plain language. A tenet of scholarly research is conservatism. Thus, scholarly writing is appropriately careful to qualify findings with many conditions and caveats. As illustrated above, the phrases “tend to” or “often” may allow the essence of the findings to be stated in direct and plain language, even while maintaining conservatism. Editors and reviewers might be more tolerant of such plain-language statements in research abstracts and practical implication sections. Creating manuscript sections specifically for such plain-language summaries might also help nonacademic readers know where to look to find the essence of the research.

To convey abstract or complicated findings in a manner novices understand, we need to get their attention. Vividness can offset complexity by drawing greater attention to a finding. A vivid, memorable presentation involves actively framing the finding in a way that a person can easily recall and share with others. In medicine, the importance of base rates (i.e., the odds of having a particular disease or condition) in accurately diagnosing a patient’s ailment is recalled with the phrase “if you hear hoof beats, think horses, not zebras.”

We need to be creative in expressing research findings via memorable communiqués—or collaborate with others who can. We should also consider the best media for memorable communication. Video, social media, or in-person dialogue may be more effective than print and text, or a strong complement to them. Research journals can incorporate editorial elements that explicitly invite online discussion, blogging, and other electronic interaction. A good start is for journals to make consistent efforts to translate and communicate findings to traditional press outlets. Of course, pithy findings can still be unearthed by intrepid reporters, bloggers, and online communities. Yet editors and journals could make this easier by seeding these outlets with plain-language and vividly stated findings. In whatever form, the message needs to be in plain language, accompanied by analogies and illustrations to ease user understanding, recall, and uptake. Edwin Locke’s (2009) recent book Handbook of Principles of Organizational Behavior (2nd ed.) illustrates how robust findings in organizational behavior (OB) can be readily communicated to students and practitioners in this fashion.

3. Describe causal processes and mechanisms through which a principle works in plain language with familiar analogies.

To be persuasive, findings have to make sense. Making sense means either fitting into an existing understanding people already have or inducing them to understand and think differently about something. In particular, research indicates that findings are persuasive when people understand how or why they work (Hornikx, 2005). Mechanisms show why the findings are to be believed and how to adapt and make use of the principle in practice. For example, evidence shows that the central mechanism underlying effective goal setting is the extent of the individual’s acceptance and commitment to the goal. Understanding the need for goal commitment and acceptance allows the previously described principle about goals (“difficult and specific goals typically enhance performance more than do-your-best goals”) to pass the inevitable sniff test that a novice might conduct: “We set goals around here all the time. Nothing happens. Hmm, maybe people aren’t buying in. Should we take a different approach to setting goals?”

Mechanisms make it possible for people to actively process a research-based principle and be mindful of how it can be used. They can then think about how the principle connects to their own experience. A finding’s relevance or practicality is less likely to be discounted when the potential user understands how and why it works.

Management educators frequently find it hard to teach certain well-established research findings (Rynes, personal communication, 2008). Tough ideas to get across include the notions that human resource management can be strategic and that leadership can be learned. Sometimes the problem lies in a finding’s contradiction with preexisting beliefs or experiences (“the HR Department I work with only does administration and regulatory compliance; I’ve never seen a leader change”). However, before we blame preexisting beliefs as the major barrier, we must first ask if we have effectively conveyed the mechanisms underlying the finding. Novices who have experienced only administrative or compliance-based HR activities can be forgiven for not understanding the mechanisms through which HR can enhance strategic success, despite all the correlations between sophisticated HR processes and financial outcomes they might have read about. Users who understand the reasons underlying such a finding are more likely to persuade themselves that the finding makes sense and that acting on it can work.

Understanding mechanisms can also help users convince others whose support they need to take action. For this purpose, the mechanisms underlying research findings might be presented in ways that have fidelity or analogy to mechanisms users already understand. For example, Boudreau and Ramstad (2007) suggest that although leaders discount strategic decisions about human resources because “people are too unpredictable,” the same leaders routinely make strategic decisions about products and marketing based on research describing the likely behavior of “unpredictable people”—product consumers. Reframing research on human resources as similar to research on consumer behavior reveals connections that make it easier to see how HR research might be used. For example, researchers can frame the question of predicting employee responses to changes in supervision, rewards, or career practices as very similar to predicting consumer reactions to changes in service or product features. It is useful to identify analogies in practice that have fidelity with the logical mechanisms that underlie the research findings (Boudreau, 2010).

4. Frame research according to the end-users’ interests.

Findings stick when a practitioner comes to believe that accepting or acting on them serves their interests—while not acting on them has adverse consequences. But it is not enough to argue for the benefits of acting on a finding. We need to recognize that findings sometimes fly in the face of preferred practices. They can challenge a comfortable status quo or make a person feel threatened. Recalling the maxim “losses hurt more than gains feel good,” it is important to call attention to harm or the costs of not acting on the evidence. Physicians evaluating medical evidence show such a pattern. When medical research indicates that a current treatment causes harm, physicians are more likely to express willingness to change their current practice than when a new practice is found to provide a benefit (Aberegg, Arkes, & Terry, 2006). The persistence of the status quo is a challenge to any presentation of evidence regarding a “better practice.”

Since sticky findings are communicated in ways designed to motivate reflection, recall, and use, framing them with respect to the end-users’ interests is part of stickiness. Why should a potential user consider acting in new ways or make a special effort to change how decisions are made? What is the end users’ interest in the finding? What will happen if the findings are not adopted? Science is not about marketing or advocacy. Still, we need more attention to the ways practitioners might interpret research findings—and design communication strategies for more effective uptake.

For researchers, this observation connects naturally to the assessment of practical significance, as opposed to statistical significance, and to the admonishments to present confidence intervals, not just point estimates (e.g., Bonett & Wright, 2007). For example, findings that have probabilities as the dependent variable can often be presented as odds ratios. Dunford, Oler, and Boudreau (2008), for example, analyzed the impact of stock options whose strike price is above the market value (underwater options) on executive retention. They summarized one finding as follows: “If executives’ portfolios were underwater by $30 per share, moving them up by $20 per share would reduce the odds of turnover by 32%” (p. 725). Findings presented as proportions of explained variance can be expressed conditionally: “Those with intelligence scores in the top 30% tend to perform this much better, on average, than those in the bottom 30%.”
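
The arithmetic behind such an odds statement is easy to show. In the sketch below the logit coefficient is hypothetical (we have not reproduced Dunford and colleagues’ model); only the conversion from coefficient to percentage change in odds is standard.

```python
# Hedged sketch: restating a logistic-regression finding as an odds change.
import math

beta = -0.019   # hypothetical: change in log-odds of turnover per $1/share gain
delta = 20      # the $20/share improvement discussed in the text

odds_ratio = math.exp(beta * delta)
print(f"Odds ratio: {odds_ratio:.2f}")
print(f"Odds of turnover fall by {(1 - odds_ratio):.0%}")  # about 32% here
```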

As a firsthand experience, we have tried teaching an MBA class the notion that socialization and training along with other organizational practices can be “substitutes for leadership.” This idea comes from Kerr and Jermier’s (1978) essay describing how leaders can shape their organizations via certain routine practices. That lecture turned out to be a real dud. Following the class, student feedback essentially said, “We want to be leaders” and “don’t need substitutes.”

It turns out that “substitutes for leadership” sounded to management students like a way to “replace us.” In subsequent classes, students responded much more positively to Kerr and Jermier’s idea of substitutes when framed more consistently with their vision of themselves as future leaders. This framing involved presenting those substitutes as “leader extenders” instead, as a way for leaders to leave their lasting imprint on the organization (“making your mark”) and a way to influence people when they couldn’t be physically present (“because you can’t be everywhere at once”). The mechanisms that substitute for (or extend) leadership are, of course, the very means to realize those critical values and behaviors central to a leader’s vision of an effective organization. Toward the end of our class discussion, we make the point that under conditions where an incompetent leader takes the helm, the practices that substitute for leadership can help the firm be resilient until better management comes along or the leader improves. In essence, we reframe leadership substitutes as insurance against the possibility of incompetent leadership, because the insurance analogy is quite familiar to MBAs well versed in financial risk hedging.

Quality connections, interactions, and idea sharing between academics and practitioners contribute to framing findings appropriately. So, too, does insight into the learning experience of students (cf. Pace & Middendorf, 2004). At the same time, as in the case of substitutes for leadership, multiple renditions of the same finding may be needed depending on the audience and its knowledge, experience, and interests.

5. Embed findings within practitioner decision frameworks.

The features of accessible language to present findings and to specify the underlying mechanisms (Features 2 and 3 above) are reinforced when research findings are connected to actual decisions users already face. Sometimes pertinent decisions are obvious, as in goal-setting findings that can be applied to formal performance management, or findings regarding the greater validity of structured interviews over unstructured ones that can be acted on in screening job applicants.

Generally, findings in organizational psychology have their most obvious connections in improving HR processes. This is often quite a compelling connection if the audience is HR professionals. However, many non-HR managers have little context for understanding why improved HR processes matter. A fundamental challenge may be to help non-HR leaders see the value of the improved HR processes in the first place, in order to motivate their acceptance and willingness to act on such research. It does little good for HR leaders to understand that the higher predictability and reliability of structured interviews makes them more valuable than unstructured ones, if the hiring managers they work with do not know why improved validity matters. The translation of validity into value is often where the communication breaks down.

Another challenge to evidence use arises when the ways practitioners approach their current decisions are poorly developed. For example, non-HR managers regard decisions about human capital differently from their decisions about other resources (such as money, technology, materials, and customers), for which they are more rigorously trained and held accountable for using logical, evidence-based decision approaches (Boudreau, 2010; Lawler & Boudreau, 2009). Managers may be less systematic or informed in their human capital decisions because the evidence is embedded in disciplines such as psychology, sociology, and organization dynamics. Such disciplines are often less familiar to non-HR managers than more traditional management disciplines such as economics, operations, and consumer behavior.

Yet, when managers do pay attention to the ways they make certain decisions, their capacity to incorporate research findings into their everyday thinking and decision making increases. Grounding human capital research findings in the frameworks practitioners already use can help them incorporate bigger chunks of knowledge.

To illustrate, Cascio and Boudreau (2011) demonstrate how evidence from research on the utility and monetary value of recruitment, selection, and retention nicely fits the metaphor (and decision framework) of a supply chain. When framed this way, it becomes obvious that a leader who adopts a suboptimal approach to selection is making precisely the same choice as one who rejects a validated method of quality control for raw materials or unfinished goods. Other useful analogies include how job design is similar to engineering component design, how turnover is similar to inventory optimization, and how workforce flows and career paths are similar to logistics (Boudreau, 2010). Recall our earlier example noting the similarity of policy-capturing research on applicant and employee preferences to consumer behavior research. One way to get leaders to attend to such research is to point out in research translations how the value of policy-capturing findings in the arena of human capital is comparable to what can be learned from research on consumers.

For HR managers and leaders, such frameworks, routines, and metaphors often reside in the decisions and processes they use when enacting HR programs and strategies. For example, research on motivation in areas such as goals, equity, justice, needs, learning, and social dynamics is highly relevant to decisions about reward and pay systems. Yet, HR professionals engaged in the daily challenges of designing and implementing reward systems do not naturally encounter such research. The second author once worked with a senior compensation executive in a global consumer products organization, who was also teaching a class on compensation at a local university. That executive found he could teach students from his experience developing reward systems, but he lacked a theoretical and research grounding. The second author suggested incorporating information from Pinder’s (2009) research-based textbook on work motivation and then systematically connecting the theory and underlying research findings to the elements of reward system design.

This executive was one of the top rewards practitioners in the world, educated at one of the top human resource programs, but even he had not incorporated research findings into his work or his teaching. The simple idea of tying his professional rewards framework to established research findings significantly enhanced his awareness of the useful research available. One can imagine similar connections for practitioners who design and implement learning, engagement, development, communications, and a host of other HR systems. The key is to consider their existing decision systems or routines and then create the right hooks to relevant research.

6. Explicate the conditions of use.

Research communiqués that discuss concrete issues surrounding implementation make acting on the findings seem possible and perhaps easier (Bazerman, 2005). Such discussion also provides an opportunity to suggest tools (e.g., checklists) and frameworks (e.g., a decision tree) that can enable use and guide action. One of the most powerful interventions in evidence-based clinical care has been the increasingly widespread use of checklists to guide nurses and physicians in adhering to the modes of patient care that the research evidence supports (Wolff, Taylor, & McCabe, 2004).

In discussing use, it is important to keep in mind the tremendous diversity of practitioners and practice situations, as well as of practitioner knowledge and experience. Usability may be aided by laying out findings hierarchically so that users can access information at the level of generality or detail that their needs and interests warrant. For example, a summary might begin with the basic principle that captures the research findings (specific goals typically motivate higher performance than do-your-best goals), move through more complete summaries of the overall findings (contingencies where general goals are more effective), and end with procedural knowledge guiding use (check to ensure goals are understood and accepted; set no more than five goals). Hierarchically organized summaries let users obtain information commensurate with their levels of interest and expertise.
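
As a loose illustration, such a hierarchy can be thought of as a nested structure that lets a reader stop at whatever depth suits them. The sketch below uses the goal-setting example from this chapter; the layering, not the particular wording, is the point.

```python
# Sketch of a hierarchically organized research summary as a nested structure,
# letting users drill from headline principle down to procedural detail.
summary = {
    "principle": "Specific goals typically beat 'do your best' goals.",
    "contingencies": [
        "General goals can work better for novel, complex tasks.",
        "Effects depend on goal acceptance and commitment.",
    ],
    "procedures": [
        "Check that each goal is understood and accepted.",
        "Set no more than five goals at a time.",
    ],
}

# A casual reader takes only the headline; a committed user drills down.
print(summary["principle"])
for step in summary["procedures"]:
    print(" -", step)
```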

Again, connecting such hierarchies to analogous approaches in more accepted management areas may help. For example, inventory optimization proceeds first by using the “80-20” rule to identify which 20 percent of inventory components create 80 percent of the impact on productivity or value. It then applies deeper analysis and attention to that 20 percent. In the same way, Boudreau and Ramstad (2007) and Boudreau (2010) have suggested first identifying the “pivotal roles” where improved employee performance makes the biggest difference to organizational success and then focusing performance management improvements there.
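
The parallel is concrete enough to compute. The sketch below applies the same 80-20 screen to a set of hypothetical roles and impact estimates, returning the “pivotal” subset that accounts for roughly 80 percent of the total estimated impact.

```python
# Hedged sketch of the 80-20 screen applied to pivotal roles:
# rank roles by estimated impact, keep those covering ~80% of the total.
# Role names and impact figures are hypothetical.
impact = {"account manager": 40, "plant engineer": 25, "store manager": 15,
          "analyst": 8, "scheduler": 5, "clerk": 4, "assistant": 3}

total = sum(impact.values())
cumulative, pivotal = 0, []
for role, value in sorted(impact.items(), key=lambda kv: kv[1], reverse=True):
    cumulative += value
    pivotal.append(role)
    if cumulative / total >= 0.8:
        break

print("Focus performance-management effort on:", pivotal)
```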

7. Make research findings easy for users to access.

Ease of access to research findings is critical for their diffusion, adoption, and use. Most managerial decisions are made with the information people already have in hand (Yates, 1990). Practitioners have to be able to find the evidence both in their personal reading and in other self-improvement efforts. Ready access to evidence perhaps matters most: think of a manager who undertakes an active search for relevant scientific findings when preparing to make an important decision. The busy user needs reporting formats that can be found and used with minimum effort. Putting research findings where decision makers can get at them may increasingly require using more “virtual” media such as the Internet; blogs; social media; TED.com (Technology, Entertainment, and Design); and YouTube. As we noted earlier, scholarly journals may offer an important opportunity by explicitly calling on authors to include plain-language implications and then creating standard locations for such summaries, both in print and online.

8. Include opinion leaders’ testimonies.

Human learning is largely social learning, a fact as true for management and organizational practitioners as it is for professionals such as physicians (Brown & Duguid, 2001). Social networks carry new information and reinforce use of existing knowledge. The stickiness of research findings is aided by the testimony of opinion leaders and their stories and examples.

The social environment where practitioners work can help or hinder their access to and ability to use research findings. Rynes and colleagues (2002) found that HR managers most commonly look to other HR practitioners for help in solving HR problems. Local opinion leaders can hinder research uptake if they are conservative about new ideas, disdainful of science, or scientifically illiterate. Research-literate leaders who embrace new ideas ease the uptake for others. When opinion leaders adopt and understand research findings, valuable opportunities emerge. Including testimony from opinion leaders in research summaries can help give the findings legs, particularly if a catchy story or notion makes that testimony and idea more attractive and shareable.

Implications for Useful Research

Building Practitioner Capacity to Use Sticky Knowledge

Useful research seems to us to be that which provides insight into the frameworks and mental models practitioners already use. In the original Doing Research That Is Useful for Theory and Practice (Lawler et al., 1985), similar issues were raised with regard to users’ mental or cognitive “maps.” Argyris (1985) noted that managers develop maps that structure their thinking and decision making. Such cognitive maps exist, for example, where leaders attempt to create a successful matrix structure; their maps might reflect what such structures are supposed to accomplish and the kinds of success they realize. After noting the features of such maps, Argyris described how qualitative methods can be used to elicit them. In his response to Argyris, Driver (1985) advised managers and academics to build these maps together so that both understand their respective jargon and mental models.

Such research might entail both qualitative description (Argyris, 1985) and more positivist investigations into the models that decision makers use. Research might investigate how professional education, in management or in other fields such as engineering, shapes the models and frameworks decision makers use. It would be useful to know whether fields more closely aligned with dominant managerial models (such as finance, operations, marketing, and risk optimization) have created more readily used decision frameworks, and whether those frameworks incorporate relevant evidence more quickly or easily. We suspect that in finance and operations research, practice is more closely aligned with research because of the prevalence of concepts, such as the time value of money or the spoilage rate of goods and materials, in which managers think and solve problems. Experiments should investigate how organizational and social science research might be presented in management education as frameworks guiding common human resource and organizational decisions.

Promoting Sticky Evidence in Future Research

Stickiness requires helping a user recognize how findings apply “to me.” Research findings in themselves tend to be generic, applicable to a host of managerial decisions and organizational circumstances. Sticky evidence that connects with practitioners is an essential part of the solution to the research-practice gap and to making our research useful. It calls for greater attention to the design of communications between researchers and end users (cf. Gruber, 2006). A base of knowledge for promoting sticky evidence now exists in the emerging practice domain of communication design. This new field entails the explicit and conscious effort to attract, inspire, and motivate people to respond to communiqués, with the goal of creating benefits for them, their organizations, and communities (Eppler & Mengis, 2004). Its processes involve strategic business thinking, user research, creativity, and problem solving. Tapping into this knowledge base can aid our efforts at promoting use of organizational research.

Note that sticky findings need not be sticky for all. There are practitioners who may derive little value from even the most captivating presentation of the evidence. These might include managers with limited education, those who are intuitive rather than analytic, or the inexperienced. Drucker (1993) pointed out years ago that typical managers treat repeat decisions (such as hiring, firing, and capital investment) as unique. Recognizing the commonalities among decisions requires managers to reflect on their personal practices as a manager, something that busy people may find difficult. As such, it is more pragmatic to target the likely early adopters of research evidence, that is, those self-improving practitioners and practice communities interested in learning and innovation.

There is no shortage of bottlenecks to the uptake and use of research findings, ranging from research undertaken without practice in mind to practitioners making decisions that ignore germane and robust research findings. Sticky evidence coupled with more systematic attention to practitioner decision heuristics, frameworks, and routines offers a way forward. As sticky findings become more accessible, we look forward to the expanding use of management and organizational research by practitioners. Then, indeed, we will be doing useful research.

REFERENCES

Aberegg, S. K., Arkes, H., & Terry, P. B. (2006). Failure to adopt beneficial therapies caused by bias in medical evidence evaluation. Medical Decision Making, 26, 575–582.

Argyris, C. (1985). Making knowledge more relevant to practice: Maps for action. In E. E. Lawler III, A. M. Mohrman, S. A. Mohrman, G. E. Ledford, Jr., & T. G. Cummings, (Eds.), Doing research that is useful for theory and practice (pp. 79–106). San Francisco: Jossey-Bass.

Bazerman, M. H. (2005). Conducting influential research: The need for prescriptive implications. Academy of Management Review, 30, 25–31.

Bonett, D. G., & Wright, T. A. (2007). Comments and recommendations regarding the hypothesis testing controversy. Journal of Organizational Behavior, 28(6), 647–659.

Borman, W., Hanson, M., & Hedge, J. (1997). Personnel selection. Annual Review of Psychology, 48, 299–337.

Boudreau, J. W. (2010). Retooling HR. Boston: Harvard Business Press.

Boudreau, J. W., & Ramstad, P. M. (2003). Strategic industrial and organizational psychology and the role of utility analysis models. In W. C. Borman, D. R. Ilgen, & R. J. Klimoski (Vol. Eds.), Handbook of psychology: Vol. 12. Industrial and organizational psychology (pp. 193–221). Hoboken, NJ: Wiley.

Boudreau, J. W., & Ramstad, P. R. (2007). Beyond HR. Boston: Harvard Business School Press.

Brown, J. S., & Duguid, P. (2001). Knowledge and organization: A social-practice perspective. Organization Science, 12, 198–213.

Buckley, M. R., Norris, A. C., & Wiese, D. S. (2000). A brief history of the selection interview: May the next 100 years be more fruitful. Journal of Management History, 6, 113–126.

Carson, K. P., Becker, J. S., & Henderson, J. A. (1998). Is utility really futile? A failure to replicate and an extension. Journal of Applied Psychology, 83, 84–96.

Cascio, W. F., & Boudreau, J. W. (2011). Supply-chain analysis applied to staffing decisions. In S. Zedeck (Ed.), Handbook of industrial and organizational psychology. Washington, DC: American Psychological Association.

Cascio, W. F., & Morris, J. R. (1990). A critical re-analysis of Hunter, Schmidt, and Coggin’s “Problems and pitfalls in using capital budgeting and financial accounting techniques in assessing the utility of personnel programs.” Journal of Applied Psychology, 75, 410–417.

Chi, M.T.H., Feltovich, P. J., & Glaser, R. (1981). Categorization and representation of physics problems by experts and novices. Cognitive Science, 5, 121–152.

Chi, M.T.H., Glaser, R., & Rees, E. (1982). Expertise in problem solving. In R. J. Sternberg (Ed.), Advances in the psychology of human intelligence (vol. 1, pp. 1–75). Hillsdale, NJ: Erlbaum.

Driver, M. (1985). Response and commentary on making knowledge more relevant to practice: Maps for action. In E. E. Lawler III, A. M. Mohrman, S. A. Mohrman, G. E. Ledford Jr., & T. G. Cummings (Eds.), Doing research that is useful for theory and practice (pp. 107–114). San Francisco: Jossey-Bass.

Drucker, P. F. (1993). The effective executive. New York: HarperCollins.

Dunford, B. B., Oler, D. K., & Boudreau, J. W. (2008). Underwater stock options and voluntary executive turnover: A multidisciplinary perspective integrating behavioral and economic theories. Personnel Psychology, 61(4), 687–726.

Ehrlinger, J., Johnson, K., Banner, M., Dunning, D., & Kruger, J. (2008). Why the unskilled are unaware: Further explorations of (absent) self-insight among the incompetent. Organizational Behavior and Human Decision Processes, 105, 98–121.

Eppler, M. J., & Mengis, J. (2004). The concept of information overload: A review of literature from organization science, accounting, marketing, MIS, and related disciplines. Information Society, 20(5), 325–344.

Florin-Thuma, B. C., & Boudreau, J. W. (1987). Performance feedback utility in a small organization: Effects on organizational outcomes and managerial decision processes. Personnel Psychology, 40, 693–713.

Ghoshal, S. (2005). Bad management theories are driving out good management practices. Academy of Management Learning & Education, 4, 75–91.

Gladwell, M. (2000). The tipping point: How little things can make a big difference. New York: Little, Brown.

Goldstein, N. J., Martin, S. J., & Cialdini, R. B. (2008). Yes! 50 scientifically proven ways to be persuasive. New York: Free Press.

Gruber, D. A. (2006). The craft of translation: An interview with Malcolm Gladwell. Journal of Management Inquiry, 15, 397–403.

Hancox, R. J., Milne, B. J., & Poulton, R. (2005). Association of television viewing during childhood with poor educational achievement. Archives of Pediatrics and Adolescent Medicine, 159, 614–618.

Heath, C., & Heath, D. (2008). Made to stick: Why some ideas survive and others die. New York: Random House.

Highhouse, S. A. (2008). Stubborn reliance on intuition and subjectivity in employee selection. Industrial and Organizational Psychology: Perspectives on Science and Practice, 1, 333–342.

Hornikx, J. (2005). A review of experimental research on the relative persuasiveness of anecdotal, statistical, causal, and expert evidence. Studies in Communication Sciences 5(1), 205–216.

Insko, C. A., Schopler, J., Hoyle, R. H., Dardis, G. J., & Graetz, K. A. (1990). Individual-group discontinuity as a function of fear and greed. Journal of Personality and Social Psychology, 58, 68–79.

Kahneman, D., & Tversky, A. (1979). Prospect theory: An analysis of decision under risk. Econometrica, 47, 263–291.

Karren, R. J., & Barringer, M. W. (2002). A review and analysis of the policy-capturing methodology in organizational research: Guidelines for research and practice. Organizational Research Methods, 5(4), 337–387.

Kerr, S., & Jermier, J. (1978). Substitutes for leadership. Organizational Behavior and Human Performance, 22(3), 375–403.

Latham, G. P., & Whyte, G. (1994). The futility of utility analysis. Personnel Psychology, 47, 31–46.

Lawler, E. E., III, & Boudreau, J. W. (2009). Achieving strategic excellence in human resources management. Stanford, CA: Stanford University Press.

Locke, E. A. (Ed.). (2009). The Blackwell handbook of principles of organizational behavior (2nd ed.). Oxford: Blackwell.

Locke, E. A., & Latham, G. P. (1984). Goal setting: A motivational technique that works. Englewood Cliffs, NJ: Prentice Hall.

Macan, T. H., & Highhouse, S. (1994). Communicating the utility of HR activities: A survey of I/O and HR professionals. Journal of Business and Psychology, 8(4), 425–436.

Mattson, B. W. (2003). The effects of alternative reports of human resource development results on managerial support. Human Resource Development Quarterly, 14(2), 127–151.

Pace, D., & Middendorf, J. (Eds.). (2004). Decoding the disciplines: A model for helping students learn disciplinary ways of thinking. San Francisco: Jossey-Bass.

Pinder, C. (2009). Work motivation in organizational behavior. Englewood Cliffs, NJ: Prentice Hall.

Roth, P., Segars, A., & Wright, P. (2000). The acceptance of utility analysis: Designing a model. Paper presented at the Academy of Management Annual Meeting, Toronto, Canada.

Rousseau, D. M. (2006). Is there such a thing as evidence-based management? Academy of Management Review, 31, 256–269.

Rousseau, D. M., & McCarthy, S. (2007). Evidence-based management: Educating managers from an evidence-based perspective. Academy of Management Learning and Education, 6, 94–101.

Rynes, S. (2010). The research-practice gap in I/O psychology and related fields: Challenges and potential solutions. In S. Kozlowski (Ed.), Handbook of industrial and organizational psychology. New York: Oxford University Press.

Rynes, S. L., Brown, K. G., & Colbert, A. E. (2002). Seven common misconceptions about human resource practices: Research findings versus practitioner beliefs. Academy of Management Executive, 16(3), 92–103.

Shapiro, D. L., Kirkman, B. L., & Courtney, H. G. (2007). Perceived causes and solutions of the translation gap in management. Academy of Management Journal, 50, 249–266.

Simon, H. A. (1997). Administrative behavior (4th ed.). New York: Free Press.

Smith, S. C., Blair, S. N., Bonow, R. O., & Brass, L. M. (2001). Guidelines for preventing heart attack and death in patients with atherosclerotic cardiovascular disease. Circulation, 104, 1577–1579.

Van de Ven, A. (2007). Engaged scholarship: A guide to organizational and social research. New York: Oxford University Press.

Whyte, G., & Latham, G. P. (1997). The futility of utility analysis revisited: When even an expert fails. Personnel Psychology, 50, 601–611.

Wolff, A. M., Taylor, S. A., & McCabe, J. F. (2004). Using checklists and reminders in clinical pathways to improve hospital inpatient care. Medical Journal of Australia, 181, 428–431.

Yates, J. F. (1990). Judgment and decision making. Englewood Cliffs, NJ: Prentice Hall.

ABOUT THE AUTHORS

Dr. Denise M. Rousseau is the H. J. Heinz II University Professor of Organizational Behavior and Public Policy at Carnegie Mellon University. Rousseau is founder of the Evidence-Based Management Collaborative, a network of scholars, consultants, and practicing managers that promotes evidence-informed organizational practices and managerial decision making. She is editor of the Handbook of Evidence-Based Management, to be published by Oxford University Press in 2011. A two-time winner of the Academy of Management’s award for best management book, Rousseau won the award in 2006 for her most recent book, I-Deals: Idiosyncratic Deals Workers Bargain for Themselves; Psychological Contracts in Organizations: Understanding Written and Unwritten Agreements won in 1996.

John W. Boudreau, PhD, is Professor of Management and Organization and Research Director of the Center for Effective Organizations (CEO) at the Marshall School of Business, University of Southern California. A Fellow of the National Academy of Human Resources, he has published more than 50 research articles and books, translated into Chinese, Czech, and Spanish. His scholarly work has won multiple awards from the Academy of Management’s Human Resources and Organizational Behavior divisions. His practitioner work has been featured in Harvard Business Review, the Wall Street Journal, Fortune, Fast Company, and Business Week. He advises companies including Global 100 multinationals, nongovernmental organizations, and early-stage companies. Prior to the University of Southern California, Boudreau was a Cornell University professor for over 20 years and Director of Cornell’s Center for Advanced Human Resource Studies (CAHRS).
