Evaluation from an Appreciative Inquiry Perspective


Evaluation from the Appreciative Inquiry perspective works from the assumption that the uncountable number of variables in any human system makes it impossible to determine the one or even several best ways to do any human process. Nor is it possible to replicate what works in one group and assume that it will work the same way in the next. Indeed, working in human systems requires a flexible, open, creative stance that embraces ambiguity and innovation. In any human interaction, each person has an experience unique to that individual and substantially different from the experience of every other person.

Appreciative Inquiry as a perspective for an evaluation process is grounded in several basic beliefs. The first is the belief that any intervention into a human system is fateful and that the system will move in the direction of the first questions that are asked. In other words, in an appreciative evaluation, the first questions asked would focus on stories of best practices, most successful moments, greatest learning, successful processes, generative partnerships, and so on. This enables the system to look for its successes and create images of a future built on those positive experiences from the past. Appreciative Inquiry enables organizations to carry out valuations that move them toward their highest aspirations and best practices.

Second, the theory that we are all connected, as the new sciences demonstrate, suggests that there is no such thing as an objective observer. This implies that every evaluation in a system needs to be understood and planned as a powerful intervention into the system with the power to alter and shape the future of that system.

Finally, an AI valuation process gives the additional benefit of continuity. There is no implication that the past is deficient or wrong; we simply look back for those life-giving forces, those moments of excellence in which we can take pride, and use them as guidance to move us into a positive and generative future.

Following is an example of an AI evaluation process.

An Appreciative Evaluation Example*

During 1998, the Research and Development division of SmithKline Beecham Pharmaceuticals undertook an evaluation of a major and innovative simulation-based training program, the SB Discovery Simulation.

This training program had been designed to help scientific leaders and key contributors work effectively within the new drug discovery research paradigm. Over the course of three intensive days, participants worked in research teams utilizing a dynamic computer model of the drug discovery process. The aim was to create a realistic learning environment in which a drug company attempts to maximize its portfolio of research efforts over a ten-year period.

At the time of this evaluation process, 480 people from SmithKline Beecham in the U.S. and the UK had attended the program—a critical mass of the original target population. End-of-course evaluations were conducted for each program. The data collected was largely favorable, with participants reporting an increase in knowledge and understanding in a number of areas. Suggestions for improvements were acted on wherever appropriate, so that the program was continuously refined during the rollout.

The OD group that had led the design and delivery of the Discovery Simulation, in conjunction with senior Discovery research scientists, was satisfied up to a point: the simulation now worked well and consistently elicited positive responses from those who attended. However, since they had made a major investment in this program, they decided it was important to conduct an in-depth evaluation study to ascertain whether it had made a lasting impact on the organization. If such an impact could be demonstrated, they also wished to determine how to further capitalize on this investment.

To find an outside evaluator, SmithKline Beecham put out a request for proposals to several consulting groups that they knew would offer different approaches, but still with the expectation that the consultants would conduct a reasonably traditional evaluation process: interview people in the company, compile the data, and give the client a report of their findings. Such a report would typically include the strengths and weaknesses of the simulation and its outcomes, along with the consultants’ recommendations for next steps.

One of the companies that received the invitation to tender was The Synapse Group, Inc., a consulting firm based in Portland, Maine. The Synapse consultants responded with a proposal that turned traditional evaluation thinking on its head. The proposal suggested the use of Appreciative Inquiry to conduct a “valuation process,” sometimes called “Embedded Evaluation.” They believed that this approach could give SB information about the strengths of the program in ways that would create positive forward momentum by taking the best of what had happened and using it to create a collective image of a desired future as a basis for moving the program in the direction of its best practices.

In alignment with the usual AI process, the consultants and the SB team designed an interview protocol, conducted 104 interviews, developed provocative propositions, and produced five different reports, each tailor-made for a part of the system. The outcomes were remarkably rich.

* From an article by Bernard J. Mohr, Elizabeth Smith, and Jane Magruder Watkins, 2000.

Learning, Innovations, and Reaffirmations from This Case

Some months after the final report had been distributed and its recommendations were well under way to being implemented, the external consultants sat down with the SB Project Manager to ask: “In retrospect, what did we learn from this work? What do we consider to be the innovations? What was reaffirmed for us?” Shown below, to illustrate the two worlds (client and consultant), are the lists of “Learnings, Innovations, and Reaffirmations” (Table 9.1) from this case, developed independently at first by each party.

Table 9.1. Learnings, Innovations, and Reaffirmations

Client’s List:
- Joint partnership between consultant and client generates a synergy not present when the consultants and clients relate to each other in a more traditional manner (the consultant as expert or vendor).
- AI does work for evaluation purposes, particularly at Level 3 of Kirkpatrick (identifying behavioral changes).
- AI is not the antithesis of problem solving.
- Things that from a traditional evaluation perspective might be considered impure are OK and helpful within an embedded evaluation/AI process, for example, the use of leading questions, the use of data from the pilot, and the use of people who have a vested interest in the outcome as interviewers/data collectors.
- The “core AI” questions were the ones that produced the richest data.
- The experience of AI interviewing is so positive that it affects the interviewer as well as the interviewee.
- Since the data generated are in qualitative/narrative form, the amount of time needed to digest the volume of data is significant (particularly in evaluating applicability/applications).
- The richness of the data allows many more questions to be answered than might be answerable with a more traditional quantitative model of evaluation.

Consultant’s List:
- Joint partnership with the client and ongoing adaptation to local conditions are key to using innovations successfully.
- The scale of a project does not necessarily correlate with the quality of organizational learning.
- External consultants may tend to underestimate the client’s interest in the theory and constructs underlying the intervention approach.
- By participating as interviewers, the external consultants were able to contribute more to the data compilation and to become more a part of the team. We all had similar stakes in the learnings/outcomes.
- Before the client can receive innovative assistance, innovation must be present in the worldview of the consultant. And innovations are more likely to come from an outside source who does not have a set picture of how things ought to be.
- HR/OD professionals who have been trained in traditional interviewing styles may find it harder to use the more flexible dialogue protocol approach of AI.
- SB management trusted their internal HR/OD group to conduct the evaluation without a steering committee, and the project was completed successfully without one.
- Making assumptions about what clients will or will not be comfortable with can lead to unnecessary constraints on the project.

As we chatted about our lists, we realized once again the power of partnership: what constituted an innovation for the consultants was a new learning (but not necessarily an innovation) for the client. A case in point was the inclusion of the consultants as part of the interview team. From the consultant perspective this was “innovative” because normal AI practice suggests the importance of limiting the active role of the consultant in interviewing/data collection in order to minimize consultant dependency. The client’s “learning” that a partnership between consultant and client generates a synergy not present when the consultants and clients relate to each other in a more traditional manner (the consultant as expert or vendor) was not a new learning for the consultants. It was a reaffirmation of their deeply held beliefs.

As we shared with each other the things we had learned, the things we saw as innovations, and the things we felt were reaffirmed for us, we recognized once more how critical our partnership was in enabling the consultant and client to build a shared world and a shared language with which to make decisions and conduct the work. Not only did the data reflect a generative and creative picture of the simulation itself, but it also told a powerful story of the pride, loyalty, and commitment that the people of SmithKline Beecham feel for their company. As with all Appreciative Inquiries, it is the innovative, generative, and creative processes co-created by client and consultant that become a life-giving force for the organization. We realized that we were sharing a reflection about our work together that was, itself, an AI valuation process.

Evaluation as an Integral Part of Any AI Process

In the early 1990s the Global Excellence in Management (GEM) Initiative ran a series of institutes for the leadership of international development agencies. The process included a workshop to help each client system develop a customized interview protocol; time for the clients to conduct the interviews; a six-day residential institute to work with the data and do strategic planning; consultant time over the following year; and two formal evaluation visits over the next two years to do an appreciative valuation of their work. The valuation questions covered subjects such as the most exciting stories of events following the institute, positive changes in the organization in alignment with its strategic planning process, and unexpected positive outcomes for the organization. The valuation interviews were part of a full-day workshop, with the clients and consultants all participating in the dialogue about the best and most exciting outcomes. These dialogues became guidance for the ongoing work of the agency. Most of the clients incorporated this practice of appreciative review as a management process for their organizations.

This was a major learning for us about the ongoing and iterative nature of AI. A colleague once asked me what to do when an AI process had been very successful over a long period of time but was running out of steam. My cryptic answer was, “Do more AI!” What I was actually referring to was continually valuating the AI process itself with AI questions like: “Share some stories of our most successful and exciting experiences using an AI approach in our company. What things have we liked most? What changes are apparent as a result?” and so on. Once again, the image of the stream comes to mind. It is through the constant shifting and changing of an organization’s dialogues that we find the flow toward our imagined future. Using AI as a valuation process on a regular basis assures that the imagined future will be positive.

And so we end this book repeating our mantra: Appreciative Inquiry is a way of seeing and being in the world. It is based on the belief that we can create what we imagine when we open our minds and our social processes to the widest possible dialogue among the largest number of people who are involved and invested in our enterprise. Appreciative Inquiry applied, whether as a planning process or an evaluative process, becomes empowering and life-affirming in any human system.
