TEN

Academic-Consultant Collaboration
Doing Research across the Divide

RUTH WAGEMAN*

SOME TEN YEARS AGO, an unusual research collaboration began. A trio of senior consultants, deeply experienced in working with chief executives, reflected on countless observations of wheel-spinning, conflict-ridden leadership teams and wondered whether it really had to be that way. The popular press writings on top teams provided help . . . of a sort. These works described similar patterns: conflicts among members that never surfaced effectively, chief executives driving an agenda with no signs of team ownership of the strategy, repeated returns to the same sticky issues. But the underlying message in these works was less helpful: That’s just how it is with teams of top leaders (e.g., Katzenbach, 1997a, 1997b).

So far, the story is not unusual: Observant practitioners noted a pattern of dysfunction and an opportunity to provide help to clients. They began exploring a significant opportunity to build a new practice for their firm.

Then the story becomes unusual. These senior consultants turned to an academic colleague to find out whether scholarly research on top teams might guide interventions to improve their functioning. With a literature review of upper-echelons research (Hambrick, 2000; Hambrick & D’Aveni, 1996; Hambrick & Mason, 1984) and another academic colleague drawn into the mix, this group collectively reached a conclusion: Existing research is informative about leadership team dysfunctions, but the field is wide open for some inventive new understandings of how to help such teams become more effective.

What motivated the consultants (Debra A. Nunes, Mary Fontaine, and James A. Burruss of Hay Group) was a growing frustration that chief executive officers (CEOs) were taking popular writings as license to live with dysfunction in their leadership teams. What drove the academics (Richard Hackman at Harvard and me, then at Dartmouth) was a conviction that the conditions accepted as “normal life” in upper echelons—zero task interdependence, competition for a coveted top job, no clear team purposes—are circumstances under which no other team would be expected to function effectively. Moreover, these conditions, we suspected, were eminently malleable. Our shared conviction became the following: It doesn’t have to be that way. Would we be interested in undertaking some field research together, with the aim of developing a diagnostic model for leadership teams that would inform a top teams consulting practice? We would.

We undertook a study of what became a sample of 120 leadership teams of organizations around the world, which headed whole businesses or major business units (Wageman et al., 2008a). Teams were in the sample for a variety of reasons: Some were led by CEOs who sought help with ineffective leadership teams; others had undertaken strategic and structural changes to their organizations and wanted advice about the implications for how their leadership teams operated. Some were poor performers, whereas others were fundamentally sound.

We assessed their effectiveness in providing quality leadership to the enterprise. Sixteen expert observers, all consultants working with leadership teams, drew upon an array of archival, survey, and observational data to rate each team on Hackman’s (2002) three criteria of effectiveness: (1) how well the team served its main constituencies, (2) the degree to which the team showed signs of becoming more capable over time, and (3) the degree to which the net impact of the team was more positive than negative on the well-being and development of the individual leaders who made up the team.

We also assessed the design and leadership of each team to identify those features that most powerfully differentiated superb from struggling leadership teams. Members of all teams completed the Team Diagnostic Survey (TDS) (Wageman, Hackman, & Lehman, 2005), which captures a team’s main design features, the quality of its work processes, the behavior of the leader, and the quality of members’ relationships. Finally, for many of the leaders in the sample, we also had social motive profiles (McClelland, 1985) and competency assessments (Spencer & Spencer, 1993).

The collaboration began in 1999. Around 2003, our intention to write a book for senior leaders began to crystallize, and our writing efforts began in 2005. The book Senior Leadership Teams: What It Takes to Make Them Great was released in early 2008. All told, this collaboration was nine years in the making. What I aim to do in this chapter is use the lessons of our experiences—both positive and negative—to generate some hypotheses about the conditions under which academic-consultant collaboration will produce rigorous research that informs high-quality interventions in social systems.

Players

To draw general lessons from this particular academic-consultant collaboration, it helps to know who the players were. Mary Fontaine heads the Leadership practice of Hay Group and is one of the founding members of Hay Group’s McClelland Center for Research and Innovation. Mary has her PhD in business administration from the University of North Carolina at Chapel Hill and spent some time as a professor at Duke University. She has spent more than 20 years consulting in the field of leadership and organizational effectiveness.

Jim Burruss is also one of the founding members of the McClelland Center. He earned his PhD in clinical psychology at Harvard University, studying and working with David McClelland. For more than 30 years he has applied his understanding of human motivation to helping organizations in both the private and public sectors around the world.

Deb Nunes is vice president at the McClelland Center. She has a master’s degree in counseling and personnel psychology from Western Michigan University and an MBA from Boston University. Deb’s clients are primarily large global companies, and she has spent more than 20 years helping CEOs and the heads of major business units implement their strategies.

Each of these consultants has a long history of using rigorous research directly in his or her practice. Each is deeply invested in the continuing improvement of his or her own practice, and all three regularly draw upon basic research—such as work on the physiological bases of social motives inspired by McClelland’s research (e.g., Schultheiss, Campbell, & McClelland, 1999)—when speaking with clients about their leadership challenges.

Richard Hackman and I were both full-time academics when this collaboration began. Richard is a professor of social and organizational psychology at Harvard. He has conducted research on team dynamics and performance, leadership effectiveness, and the design of self-managing teams and organizations, among many topics. His book on leading teams (Hackman, 2002) was the starting place for our model of leadership team effectiveness. At the time our collaboration began, I was a member of the faculty of the Tuck School of Business at Dartmouth College. I remain an academic scholar in part of my professional life as visiting faculty at Harvard in the Psychology Department. I also have worked for the last several years as the director of research for Hay Group, continuing to work with Deb, Jim, and Mary.

We five were, as a group, a leadership team of sorts—one that shared responsibility for creating, launching, and implementing a research project intended to produce findings that guide effective action. Like all leadership team members, we also had our own individual responsibilities—to write and publish the work, for example, or to build a new top teams practice. So it seems appropriate to use our own research-derived model of leadership team effectiveness to analyze our collaboration.

Product

How did our own collaboration stand on the three criteria of team effectiveness? For Criterion 1, the key question for our collaboration is the following: Was the work useful both for advancing theoretical knowledge about senior leadership teams and for informing practice aimed at improving the effectiveness of such teams? We produced a book written for senior leaders about how to create effective leadership teams (Wageman et al., 2008a) that draws upon our research findings and our collective observations of the challenges of leading teams of leaders. Our aspiration was to draw upon the rich observational wisdom of the consultants and the conceptual models of the academics to provide a useable framework for leaders about how to improve the effectiveness of their leadership teams. The book offers working leaders myriad examples of CEOs struggling with the challenges of leading leadership teams as well as vivid descriptions of how they overcame them, organized around our key research findings.

If one takes online and in-person reviews seriously, the book has been well received among practitioners. As we have not produced a journal article for a peer-reviewed outlet, the evidence is not clear that the research achieves the standards of academic rigor and influence on theory that are among the main aspirations of scholarly research. A research article about the difficulties of leading teams of leaders was published in a volume on leadership (Wageman & Hackman, 2010). Several popular-press articles for executives have been published from the work as well (Bolster & Wageman, 2009; Nunes & Wageman, 2007; Wageman et al., 2008b; Wageman, Wilcox, & Gurin, 2008). But the principal evidence of the usefulness in consulting of our collaborative research is this: A significant top teams practice within Hay Group has grown around the work, and the interventions used within that practice are built explicitly on the implementation model that came out of our collaboration. Overall, I’d suggest this collaboration scores pretty well on Criterion 1, always allowing that the longitudinal evidence (Does the consulting practice actually improve the functioning of top teams?) is largely anecdotal.

For Criterion 2, our question is the following: Did the team operate in ways that at a minimum avoided significant downward spirals over time, and—better still—showed signs of developing increasing capability? This criterion requires a little expansion, because it forms the frame for my analysis of the contributors to and detractors from scholar-consultant collaboration more generally.

I draw on the model of team performance proposed by Hackman and Morris (1975; see also Hackman & Wageman, 2005). The model posits that team effectiveness is a joint function of three performance processes: (a) the level of effort group members collectively expend carrying out task work, (b) the appropriateness of the performance strategies the group uses, and (c) the amount of knowledge and skill members bring to bear. Associated with each of the three performance processes is both a characteristic “process loss” (Steiner, 1972) and an opportunity for positive synergy, or a “process gain.” That is, members may interact in ways that depress the team’s effort, the appropriateness of its strategy, and the utilization of member talent; alternatively, their interaction may enhance collective effort, generate uniquely appropriate strategies, and actively develop members’ knowledge and skills.

To assess our standing on Criterion 3, I interviewed the core members of our collaboration, so that I could legitimately speak on behalf of the team. Was the net impact of the collaboration, on balance, a positive contribution to our well-being and learning? The answer was an unequivocal “yes.” It may be a significant lesson for scholar-consultant partnerships more generally that each member of the team expressed (a) significant and cherished learning from the experience, and (b) a general feeling that they miss the team and wish there were more such intensive learning experiences in their lives.

Process: Enablers and Obstacles in Consultant-Academic Collaborations

Our collaboration had both strengths and weaknesses on the three processes. Based on our experience, I offer some practical lessons about what helped us and what got in the way, and pose some practical principles for creating conditions for effective collaboration between academic scholars and consultant practitioners.

Lessons from Effort Levels in Our Collaboration

Two key effort-related patterns characterized our work. First, whenever we had our periodic team meetings, we typically spent a whole day together, examining quantitative findings from analysis of the TDS data (primarily the responsibility of the academics) and exploring those patterns based on direct observations from working with those teams (primarily the work of the consultants). Almost without exception, the engagement of members at these meetings was very high. We worked hard at developing our insights without glossing over real differences in our perspectives. The behavioral signs of deep commitment to the project within the meetings were very strong.

However, we also showed some significant process losses around effort. Often each of us committed to some analysis or pre-reading in preparation for the next meeting. When we convened, we invariably found that several or all of us had failed to do our “homework.” For example, the quantitative analyses were prepared in advance and circulated to the team, but members had not actually read or thought about the material in advance, and we wasted considerable time at each meeting reading materials in silence as we sat around a conference table. Levels of pre-work even declined over time, as the team members who typically did come prepared early on learned that others might not, and thus a downward spiral began. Moreover, these meetings were well spaced out in time (once every two months or so). To underscore the significance of this process loss: It took us nine years from the beginning of the project before we wrote a complete draft of the book.

This pattern of timing is not unique to this collaboration. In my now considerable experience conducting research with consultants, the rhythm frequently evolves this way: short, intensive bursts of focus by the team, separated by long periods of little or no progress and poor preparation in advance of the collaborative work. Even my own dedicated team of researcher-consultants at Hay Group fell into this pattern. Our collaborative project with Tim Hall and other colleagues at Boston University (BU) studying the impact of career complexity on leadership development (Wolff et al., 2008) also unfolded this way, though it had periods of real acceleration late in the game. Our wholly internal projects comparing the challenges faced by chief executives in India, China, and the West (North America and Western Europe) (Gutierrez, Spencer, & Zhu, 2009) and our research on the coming leadership drought (Wolff, Fontaine, & Wageman, 2009; Wolff, Callahan, & Spencer, 2009) showed similar stop-start patterns. What contributed to these effort patterns—both the positive energy at meetings and the long lulls in concentrated effort?

Enabler: Convene for a purpose that is highly consequential for both consultants and scholars

We labored hard on this project across a span of years because we all deeply wanted to solve the problem of how to enhance the effectiveness of leadership teams. I believe it is significant that the phenomenon and the research question came first from the consultants and, after an exploration of the research literature, became an intriguing puzzle for the academics. I don’t mean to assert that the order of events has to happen that way. But that sequence meant that the subject of the research was without question one of some urgency and importance to astute practitioners. They needed to understand why CEOs had such ineffective teams. The leaders they worked with were feeling the pain, and the consultants’ deeply held professional values drove them to want answers. The scholars in the group needed to find out if their own understandings of teams in organizations could provide some leverage on leadership teams, given the very real differences from other teams we understood well. As a consequence, there was a superb fit of people’s values with the collaborative purpose and a powerful drive to work toward answers.

In my observation, few academic-consultant collaborations begin this way. More typically, the research question comes from the academics, shaped by the scholar’s own interests rather than a pressing practical problem for consultants and their clients. Because Hay Group owns extensive databases (millions of assessments of working leaders around the world), I am often the recipient of requests for data from doctoral students and faculty. The next study of emotional intelligence across industry sectors, or Leader Member Exchange-based theory about leadership styles, or a further refinement of learning styles in groups is unlikely to gain much traction among consultants, when the underlying problem addressed is so obscure or small. Although providing access to data is certainly a great way for consulting firms like Hay Group to contribute to the development of new knowledge about organizations, these requests are not, in my view, ideal opportunities for true collaboration between consultants and academics.

But just as often, a “pressing problem” identified by consultants is something already pretty well explored in the research literature, and thus is of little interest to a scholar seeking a great topic—though it can be of great interest to consultants who want some new and marketable intellectual property to sell. As research director, I also am the recipient of many requests from colleagues about “what we should study”: cross-cultural collaboration, matrix organizations, virtual teams, globalization, or family-owned businesses. These ideas about what would make for useful research are born from the expressed frustrations and challenges of working leaders. But they represent already well-trodden research ground. Consultants cannot know that because the problems are articulated in lay language or business jargon unrelated to academic search terms one would use to find relevant research (for example, at Google Scholar). And that is not what a typical consultant would do, in any case. I hope my consultant colleagues will forgive the generalization, but reaching out to academics or seeking scholarly knowledge about a client problem is not typical behavior among consultants. One of my chief complaints over the last several years of working closely with management consultants is that they do not read much. Typically, the first reaction when they face a novel problem in practice is to ask other practitioners if they have faced that problem and already have a methodology for tackling it. Their second instinct is to invent something based on their own ideas and experience. Asking these questions—Has anyone ever studied this problem systematically? What does research have to say about it?—is far down the list.

When consultants do look to the scholarly literature for insight, they often are disappointed and frustrated. Organizational scholars may study real problems, but those problems are framed using language that practitioners will not recognize. With a foot in both camps, one useful role I have found myself playing is finding and summarizing the scholarly research relevant to a problem facing clients. For example, I might be asked: Has anyone done a study of what it takes to transition between a line leadership role and a matrix role? No. But they have studied why organizations design matrix structures, and the challenges they create. They also have studied informal influence processes and peer-to-peer collaboration across functions, behaviors associated with effectiveness in a matrix leadership role. Taken together, different streams of research in distinct literatures can create a sturdy platform for a knowledgeable approach with clients. But the skill required to find these studies and synthesize them is an academic one. It is a critical path to making research useful, and one that scholarly writers could undertake more often.

The danger of these intergroup differences is that consultants and academics fall into stereotyping each other. The academics, fruitlessly seeking opportunities to pursue their research projects, receive a lukewarm response and come to view consultants as anti-intellectual or uninterested in evidence. Consultants, seeking readily applicable frameworks that speak directly to a client’s problems, view researchers as clueless about reality or interested in novelty at the expense of utility. And both groups miss opportunities to find the shared consequential purposes that underlie each of their concerns.

A key resource that consultants can bring to a research collaboration is knowledge of what their clients’ struggles are—that is, what would be useful—as well as the language clients use to describe those struggles, which ultimately will be needed to express findings in usable terms. A key resource that researchers bring is the skill of developing problems into researchable questions—and knowledge of what already has been learned by others (see Table 10.1 for a summary of the benefits and risks of academic-consultant collaboration). Bringing those two resources together requires a deep conversation about purposes—a collaborative definition of a project that both surfaces and satisfies the main needs of both sides of the collaboration.

TABLE 10.1 Benefits and Risks of Academic-Consultant Research Collaborations


Obstacle: The rhythms of research work and consulting work are conflicting . . . as are the incentives

For all our drive to get answers to a pressing problem, we lost a lot of momentum as we worked together. The clues to why we suffered effort-related process losses lie, I think, where such effort problems usually do: the features of the work and the features of the incentives in our organizations.

The consultants were deeply experienced individuals who had profound insight into leadership and group dynamics. Those characteristics made them ideal partners in research that reached into new theoretical territory. They also made them very expensive and rarely available. It is highly costly for a consulting firm to invest senior consultants’ time in research-based development of a new methodology. Only when the potential market is large, and the method is a substantial new approach applicable to many clients—not a refinement of existing practice—is the investment of time and money worth it to a consulting firm. Time spent on research is time not billing.

That fact, in my view, is why most consulting firm “research” consists largely of analysts surveying leaders and other practitioners about “best practices,” rather than engaging in predictive research about influences on individual, group, and organizational effectiveness. It simply is not viewed as cost effective to develop that kind of rigorous basis for client work. Many decision makers in organizations will buy consulting practices that have no research basis, so why spend that kind of money on it? That certainly was not the stance of Mary, Deb, and Jim—but more junior consultants with less political capital could not have made the choice to spend their time on a long-term research collaboration. That choice is far more easily made by the academics in the group, who are expected to concentrate their time on groundbreaking projects.

Moreover, the natural rhythm of research work is completely different from the rhythms of consulting. Consulting work is done largely on the time schedule of the client, requiring unexpected travel, rapid preparation and engagement, and moving on to another client. Research work, by contrast, is much more amenable to planning and also requires dedicated and uninterrupted days to stay with a conceptual problem or a series of analyses, to work through interpretation, and above all to capture in writing what is being learned. Watching consultants try to fit these activities in between client engagements, I’ve come to be very skeptical about the idea of consultants doing research part-time as a second requirement of their jobs. The result is either that no substantive research ever gets done, or as in our own collaboration, the uninterrupted days are so rarely possible that it takes years and years to get the work completed.

Where I have seen success in moving a collaboration at a good pace is in instances where the academics in the collaboration take the main responsibility for analysis, interpretation, and writing—the key pieces of the work that require uninterrupted attention. For example, research team members at Hay Group, including Steve Wolff, Guorong Zhu, and Betzaluz Gutierrez, worked with Tim Hall and Kathy Kram at BU using a unique longitudinal database of complete career information on 57 senior leaders, conducting a series of studies about how career complexity influences leadership development. Those data hung around in a half-coded state for years while the consultant-researchers were obligated to respond to shorter-term demands and were able to turn their attention to the data only in brief spurts of activity. It was when the academic collaborators could take on much of the burden of coding and crafting a paper that the work came to fruition (Wolff et al., 2008), with the consultant-researchers providing their contributions where and when they could.

An alternative job design would be to release consultants for an extended period from any responsibility to clients so they can focus on research. The virtues of that strategy are in the many opportunities to bring their rich field knowledge into the research throughout the process. But I have never yet seen a deliberate attempt to design a researcher-consultant job that allows the individual some months of research time in between periods concentrated on consulting work. The usual expectation is that one will do both, every week or even every day. And under those conditions, only the consulting gets done.

Lessons from Our Performance Strategies

We found that senior leadership teams often fall into mindless habitual routines in how they work together, such as marching at speed through a largely tactical agenda, or conducting their meetings by having each individual give a presentation about what is going on in his or her part of the organization. Frustrated CEOs cite slippage in coordination and an inability to execute agreed-upon plans as the most common process loss. Rarely do such teams ask these questions: What does this have to do with our strategic agenda? Are there better ways we could use this time together? We, too, lapsed into some ineffective routines and fell prey to an inability to carry out our plans.

For example, the core members of the team began to meet regularly, at one- or two-month intervals, for a half or full day starting around 2002. At each meeting, we discussed the observations of the consultants from the teams they had worked with most recently; and we explored patterns in the latest set of analyses from the growing top teams database. Five years in, it became increasingly obvious that, while we were deepening our personal understanding of leadership teams with each encounter, we had not yet put one single word on paper.

Writing a book together was at the center of our aspirations, precisely because we believed that a book stood the best chance of making our research usable to working leaders around the world. We hoped to bring together in the book the systematic conceptual understanding of leadership teams from our scholarly approach with the rich observations and myriad examples of real working leaders, to provide a set of road maps for leaders to design and lead their leadership teams well. Yet for all the energy we had for the subject and for all the richness of learning, in five years we had not written one word. I believe there are some lessons in these process losses—and how we ultimately changed them—for scholar-consultant collaborations more generally.

Enabler: Provide protected time and space for research

The long and intensive meetings this diverse group held, and the fact that we did the work together rather than disaggregating it, were critical positive contributions to making the research usable. The consultants in the team acquired a precious and rare resource in their busy work lives: time to reflect, compare notes with each other, and explore alternative understandings of leadership teams. They could apply the emerging lessons about leadership teams immediately in their own work. Rather than waiting for the book to be written—all the lessons wrapped up and synthesized—they could alter their practice and incorporate new insights in real time. In that sense, the development part of our R&D undertaking—building a top teams practice—was happening simultaneously with, rather than only after, the research.

The strategy we developed—working with quantitative findings as the basis for a rich discussion of direct observations—allowed the consultants to reality-test the patterns in the data for what they might mean in practice. At the same time, it inspired those of us responsible for the quantitative analysis to undertake new explorations of the data based on insights from ongoing practice. It was in dialogue, not sequential or disaggregated action, that we achieved both conceptual depth and practical relevance.

Enabler: Collect data as a core part of the consulting process

While consultants have superb access to organizations, they often face the same obstacles as academic researchers in gathering data: Completing questionnaires, participating in interviews, and permitting observation of meetings are all well down the priority list of people at work. However, when data collection is a part of the diagnostic process, data accumulate as a direct function of the organization’s own priorities: its desire for consulting help.

In our research, every team the consultants worked with completed the TDS, and many of the leaders in those teams also were assessed as individuals. As a consequence, no special effort was needed to get the systematic data to generate quantitative findings. At the same time, the rich qualitative observation that made sense of those patterns was also in hand, through direct work of the consultants with the teams. We could then seek systematic evidence of patterns we hypothesized from consultant observations by analyzing the accumulating data from new teams added to the database, all as the consulting work unfolded.

Many management consulting firms operate in this fashion: collecting systematic data, perhaps conducting structured interviews with senior leaders, or distributing unit-level questionnaire assessments. The accumulated data offer wonderful opportunities for useful research. However, when researchers seek access to databases—as opposed to working in dialogue with consultants to define and tackle problems of shared interest—the work usually results in findings of interest only to scholars.

Obstacle: Emergent norms mishandled the learning-production balance

Our collaboration, like any scholar-consultant collaboration, posed a major dilemma that our team took a long time to address: No one individual in the group had the knowledge and skill to capture the combination of conceptual and observational richness that our team brought together in its discussions. But writing is not a group task. Who was going to draft an account of our team’s collective understanding, and how?

For years, our team had emergent norms that favored learning and did not hold us to account about producing anything. I do not mean to suggest that learning is a relatively trivial goal for such collaborations. Rather, production and learning have to be managed as a balancing act. The learning was deeply motivating, but we were not capturing the learning to make it available to others.

In early 2005, at what ultimately turned out to be the calendar midpoint of our nine-year collaboration (Gersick, 1988), we finally addressed the unspoken problem of how to prepare a manuscript about what we had learned as a team. To put it excessively kindly, I introduced a team coaching intervention (Hackman & Wageman, 2005). I wrote a letter to the team describing my observations of what we were and were not doing well, pointed out that we had written nothing, and asked whether we should abandon the idea of a book given that none of us felt competent to write it. To the eternal credit of my colleagues, what resulted was a mature, competent, creative, and focused discussion of whether we wanted to write a book, whether we really could, and if so, how.

Two creative breakthroughs came out of that conversation. First, we invented a uniquely suited approach to writing the book. One chapter at a time, we held a day-long meeting, all tape recorded, to address a structured set of questions: (1) What are the key quantitative findings? (2) Based on direct observation, what do we see as the main challenges facing CEOs in getting a needed design condition in place for their teams, and what vivid examples do we have to illustrate these challenges? (3) What are some key actions that help a CEO get that condition in place for his or her team, and what vivid examples do we have of CEOs doing so in distinctly different ways?

The recordings of these discussions became the basis for me to write an extended chapter outline, which the group then discussed, refined, and expanded in the first part of its next meeting. Then we tackled a new chapter. We asked a professional writer, Scott Spreier, a journalist and senior consultant who participated in our meetings, to interview the appropriate consultants to flesh out the stories and expand the chapter outlines. All of us had a hand in commenting on the drafts, and Deb and I took the lead in revising the main chapters into final drafts for submission to the publisher. In this way, we managed to avoid group writing on the one hand, and we avoided disaggregating responsibilities for each chapter or parts of the work on the other. At the same time, we created repeated opportunities to combine the conceptual and observational richness our team had as a whole in creating each chapter.

The second creative breakthrough came as we crafted the structure of our book. We realized that the causal model that underpinned our analysis (see Fig. 10.1) might not be the ideal way to talk to leaders about how to create conditions for a great leadership team. Rather, we saw that there were natural interdependencies among conceptually distinct factors in team design that needed to be addressed together in implementation. For example, while clear team purposes are conceptually distinct from having a team that is a bounded and stable entity, in reality a CEO tackling one issue (What is this team for?) must simultaneously consider changing the team boundaries (Who is my leadership team? Do I have more than one?). As a consequence of our midpoint crisis, we developed a wholly new implementation model for leaders (see Fig. 10.2) that became the shape of the book—and the core of Hay Group’s leadership teams practice.


FIGURE 10.1 Conceptual Model of Influences on the Effectiveness of Leadership Teams

The lesson from our midpoint crisis is straightforward but crucial: Choose a natural breakpoint in the work, and ask members of the collaboration to look explicitly at their task performance strategies and what is and is not working (Hackman & Wageman, 2005). A well-conducted review of the collaborative process sets the stage for creative breakthrough and for using well precisely those unique capabilities that brought the group together in the first place.

Lessons from the Use of Talent in Our Collaboration

Use of diverse abilities is the main aspiration of most consultant-scholar collaborations: to combine the distinctive capabilities and resources of both groups to produce knowledge that is both rigorous and usable. It was also the aspect of our process that was, I believe, our chief strength. What conditions contributed to our ability to use our respective talents well?

Enabler: Compose the collaboration for the optimum mix of members

To have an effective collaboration of any kind, the team has to be composed well in the first place (Gruenfeld, 1998). When I teach about optimum levels of diversity, I typically borrow Richard's heuristic that members should be "neither so alike that all are peas from the same pod" nor "so different that they cannot speak the same language" (Bowers, Pharmer, & Salas, 2000). For scholar-consultant collaboration, the larger risk is that the team will err on the side of being too heterogeneous, with obvious fracture lines between the two groups (Chatman & Flynn, 2001; Lau & Murnighan, 2005).


FIGURE 10.2 Implementation Model for Creating Effective Leadership Teams

Our team composition helped us to avoid the fracture line problem of diversity. We had a mix of clinical wisdom and conceptual skills on both sides—resulting in an overlap in capabilities between the two groups, even while we each also had some specialized abilities. That is, we had two scholars with some consultant-like characteristics and three consultants with researcher-like characteristics. Richard and I both have done our research primarily in the field, habitually studying research questions with obvious practical implications and using direct observation as a key part of our typical methodology (and sometimes a fair bit of intervention as well). At the same time, the consultants in the team all either had conducted research themselves or had a long history of using rigorous research directly in their practice.

As a consequence, this collaboration started from a high platform for recognizing and valuing the special expertise members could bring to bear on the work. When we convened again long after the book was written, it was easy for each of us to identify surprises or major changes in our thinking that occurred as a consequence of working across academic-consultant lines. For example, Jim underscored how his practice had changed from intervening directly in conflicts in leadership teams to addressing first the inevitable lack of clarity in the purposes of such teams, an important conceptual breakthrough for him. From working with Deb and her vivid examples throughout our collaboration, I learned to interpret and understand the problems that chief executives articulate.

I do not believe that this enabler is unrealistic. I have reflected on the better and worse experiences I have had over the years in conducting field research. In all the most positive experiences, I had found (mostly unintentionally) a collaborator inside the organization who had a PhD in organizational behavior or a related field and who had sustained an interest in research. Indeed, many consulting firms have internal research groups that can support the building of relationships between the firm and academic collaborators. Such people are not, I find, all that rare. I hope what they find in me is a scholar who has a genuine interest in understanding how people experience the phenomena I study and who is open to co-defining the research questions we study together.

Purposes

My analysis so far has focused on consultant-academic collaborations with which I am personally familiar, so that I would have the observational data needed to harvest lessons from experience. Let me close with some observations on a more general question, drawing on collaborations I have observed at a greater distance. Are there particular purposes and methods for which academic-consultant collaborations are especially well suited and, just as important, not well suited?

Danger Zones

Some collaborative purposes can inadvertently create conflicts and tensions in the relationship between academics and consultants. I will not assert that these conflicts are insurmountable, but given the natural difficulties of such collaborations even for well-crafted purposes, I would at least urge caution in undertaking them. The two main categories of purposes that strike me as fraught with problems are (1) developing a new instrument (such as a measure of individual, group, or organizational phenomena), and (2) assessing the effectiveness of consulting practice.

Developing an instrument

Intellectual property issues in this kind of collaboration will likely become divisive, unless the academics in the collaboration are willing to help develop an instrument that will then become unavailable for their own use. Typically, the aims of academics include publishing the psychometric properties of the instrument in peer-reviewed journals and promoting its use in research. And researchers want to be free to use their own methods and ideas without constraint.

Consulting firms, by contrast, seek a sustainable competitive advantage from protected intellectual property. If a diagnostic instrument can be readily imitated and sold by others, it is not a sustainable advantage (Reed & Defillippi, 1990). Any instrument that is published in an academic journal, with information about how to use it, will be adopted and sold by other consultants, so long as it taps something marketable to clients. It is in the best interests of a consulting firm to keep an instrument from being published and to hold the copyright so that it can be sold exclusively by its makers.

The concerns of both parties are very real. Small and independent consultants who have no R&D function, in particular, are hungry for new things to sell and will adopt anything in the public domain if it looks useful. Academic scholars have lost the legal right to use their own methodologies when they were copyrighted and sold by consultants.

In our own collaboration, we did refine and test the final version of the Team Diagnostic Survey, and it is published in a peer-reviewed journal—but its use in the practice is not a main source of competitive advantage for Hay Group, about which I will say more in a following section.

Assessment of effectiveness of existing practice

I hear frequent requests from consultants and clients for more evaluation of the impact of consulting practices. Usually, that takes a form such as: We need to prove that our consulting work improves firm profitability; can you academics help? Leaving aside the whole array of methodological and conceptual reasons why that kind of research is largely impossible, I want to underscore the inherent conflict between academics and consultants about what would be interesting findings from an evaluation study. For an academic, finding that a popular intervention or consulting approach does not produce any significant change in effectiveness is potential fodder for an interesting paper and an opportunity to explode some myths. For the consultant, such an outcome risks the destruction of precious intangible assets, such as reputation with clients, as well as of a tangible asset, a core consulting methodology. Positive evidence is a problem for neither, but it may not make for a particularly compelling research article unless the underlying theory of the practice is genuinely novel.

I do not mean to imply that consulting interventions should be excused from rigorous assessment. But practice evaluation is not a good basis for academic-consultant collaboration, except, perhaps, where consultants themselves, seeking rigorous assessment designs, turn to academics for advice about their own attempts to test and refine their intervention practices.

Sweet Spots

There are two lessons I have gained from my observations about purposes for which academic-consultant collaboration may be especially fruitful.

Developing a research instrument that supports the practice—and produces a growing database for future research

Joint development of measurement methodologies can be a source of ongoing collaboration between academics and consultants, when the continuing use of the instrument is not a main source of income for consultants. In the course of our research, for example, Richard and I refined and finished the published version of the TDS, and we continue to hold the copyright because we want it to remain available to researchers and educators.

This arrangement was possible because the principal source of sustainable competitive advantage for the consultants is the set of intervention practices built around the research findings, not the diagnostic instrument itself. Those practices are not readily imitable by novices but are highly dependent on rich experience and high-level capability.

Our consultant colleagues continue to use the TDS with top teams as a diagnostic instrument for assessing the design and leadership of such teams. As a consequence of this arrangement, there are two continuously growing databases—one of top teams held by Hay Group and one of a broad array of different types of teams held by the academics—that can provide the basis for future research collaborations.

Exploring an important problem that takes one into new theoretical and practice territory

Richard Walton articulated this idea 25 years ago when he called for identifying questions with “dual relevance” (Walton, 1985). An appropriate research question for consultant-academic collaborations is one that addresses both a gap in theory and an undeveloped area of practice. These kinds of questions can find a welcoming scholarly audience, thereby meeting the external requirements for the academics in the collaboration, and can also find a grateful client population, satisfying the consultants’ constituencies.

Just as important is that those kinds of problems create the curiosity, puzzlement, frustration, passion, annoyance, and determination that keep both consultants and academics invested in making the project work. When the consultants are personally bothered by a dysfunction they observe and are looking for insight about how they can help, and when the academics are puzzled by an inexplicable pattern and are looking for insight about how to understand it, they will exercise the influence it takes to get the resources and the protected space needed for the collaboration. They will invest in improving their own team as a performing unit and manage the conflicts in work rhythms and incentives that might otherwise get in the way of successfully working across the consultant-academic divide.

REFERENCES

Bolster, C. J., & Wageman, R. (2009, February). Compensation committees and senior leadership teams. Trustee.

Bowers, C. A., Pharmer, J. A., & Salas, E. (2000). When member homogeneity is needed in work teams: A meta-analysis. Small Group Research, 31, 305–327.

Chatman, J. A., & Flynn, F. J. (2001). The influence of demographic heterogeneity on the emergence and consequences of cooperative norms in work teams. Academy of Management Journal, 44, 956–974.

Gersick, C. J. G. (1988). Time and transition in work teams: Toward a new model of group development. Academy of Management Journal, 31, 9–41.

Gruenfeld, D. H. (Ed.). (1998). Research on managing groups and teams: Composition. Stamford, CT: JAI Press.

Gutierrez, B., Spencer, S. M., & Zhu, G. (2009). Thinking globally, leading locally: Chinese, Indian, and Western leadership. Manuscript submitted for publication.

Hackman, J. R. (2002). Leading teams: Setting the stage for great performances. Boston: Harvard Business School Press.

Hackman, J. R., & Morris, C. G. (1975). Group tasks, group interaction process, and group performance effectiveness: A review and proposed integration. In L. Berkowitz (Ed.), Advances in experimental social psychology (vol. 8, pp. 45–99). New York: Academic Press.

Hackman, J. R., & Wageman, R. (2005). A theory of team coaching. Academy of Management Review, 30, 269–287.

Hackman, J. R., & Wageman, R. (2008). Working at the intersection: Insights from an academic-consultant collaboration about senior leadership teams. Workshop presented at the annual meeting of the American Psychological Association, Boston.

Hambrick, D. C. (2000). Fragmentation and other problems CEOs have with their top management teams. California Management Review, 37, 110–131.

Hambrick, D. C., & D’Aveni, R. A. (1996). Top team deterioration as part of the downward spiral of large corporate bankruptcies. Management Science, 38, 1445–1466.

Hambrick, D. C., & Mason, P. A. (1984). Upper echelons: The organization as a reflection of its top managers. Academy of Management Review, 9, 193–206.

Katzenbach, J. R. (1997a, November–December). The myth of the top management team. Harvard Business Review, 82–91.

Katzenbach, J. R. (1997b). Teams at the top: Unleashing the potential of both teams and individual leaders. Boston: Harvard Business School Press.

Lau, D. C., & Murnighan, J. K. (2005). Interactions within groups and subgroups: The effects of demographic faultlines. Academy of Management Journal, 48, 645–659.

McClelland, D. (1985). How motives, skills, and values determine what people do. American Psychologist, 40, 812–825.

Nunes, D. A., & Wageman, R. (2007, December). What every CEO wants to know: Six conditions to create an effective top team. White Paper Series. Washington, DC: Human Capital Institute.

Reed, R., & Defillippi, R. J. (1990). Causal ambiguity, barriers to imitation, and sustainable competitive advantage. Academy of Management Review, 15, 88–102.

Schultheiss, O. C., Campbell, K. L., & McClelland, D. C. (1999). Implicit power motivation moderates men’s testosterone responses to imagined and real dominance success. Hormones and Behavior, 36, 234–241.

Spencer, L. M., & Spencer, S. M. (1993). Competence at work: Models for superior performance. New York: Wiley.

Steiner, I. D. (1972). Group process and productivity. New York: Academic Press.

Wageman, R., & Hackman, J. R. (2010). What makes teams of leaders leadable? In N. Nohria & R. Khurana (Eds.), Advancing leadership. Boston: Harvard Business School Press.

Wageman, R., Hackman, J. R., & Lehman, E. V. (2005). The Team Diagnostic Survey: Development of an instrument. Journal of Applied Behavioral Science, 41, 373–398.

Wageman, R., Nunes, D. A., Burruss, J. A., & Hackman, J. R. (2008a). Senior leadership teams: What it takes to make them great. Boston: Harvard Business School Press.

Wageman, R., Nunes, D. A., Burruss, J. A., & Hackman, J. R. (2008b, January). Behind the seniors: How you can help a CEO get the top team on a path to excellence. People Management, 38–40.

Wageman, R., Wilcox, I., & Gurin, M. (2008, October). The demise of the heroic CEO and the rise of senior leadership teams. Pharmaceutical Commerce.

Walton, R. E. (1985). Strategies with dual relevance. In E. E. Lawler III, A. M. Mohrman, S. A. Mohrman, G. E. Ledford, & T. G. Cummings (Eds.), Doing research that is useful for theory and practice (pp. 176–203). San Francisco: Jossey-Bass.

Wolff, S. B., Callahan, A., & Spencer, S. M. (2009). The coming leadership gap: Leadership challenges affected by the predicted competency shortage. Paper presented at the annual meeting of the Academy of Management, Chicago.

Wolff, S. B., Fontaine, M., & Wageman, R. (2009). The coming leadership gap: An exploration of competencies that will be in short supply. International Journal of Human Resources Management, 9, 250–274.

Wolff, S. B., Zhu, G., Hall, D. T., & Meras, M. (2008). Impact of career complexity on adaptability: A longitudinal study of senior executives. Paper presented at the annual meeting of the Academy of Management, Anaheim. Manuscript submitted for publication.

ABOUT THE AUTHOR

Ruth Wageman is Director of Research for Hay Group and Visiting Scholar in the Department of Psychology at Harvard University. Professor Wageman received her PhD from Harvard in 1993; she received her bachelor’s degree in Psychology from Columbia University in 1987, and returned there to join the faculty of the Graduate School of Business, making her the first female alum of Columbia College to join Columbia’s faculty. She also has been a member of the faculty of the Tuck School of Business at Dartmouth. Her research, teaching, and consulting interests include the design of effective leadership teams, the theory and practice of leadership development, and the effectiveness of self-organizing teams with civic and political purposes.
