Continuation and Evaluation

Two of the foregoing twelve points deserve special emphasis. One of the most common errors committed by program officers reviewing proposals is to recommend funding without adequately considering two important elements: how a project, once initiated, can continue to operate without further foundation support, and how the project can be evaluated so as to yield important lessons to the foundation. While it is possible to defer these considerations, experience teaches that it is not wise. Both continuation efforts and evaluation plans prove more effective if they are part of program planning from the beginning. Hence, the time to seriously plan for these elements is while the proposal is still under consideration, not when the project is half over.

Continuation

Continuation is a subject only rarely addressed in most proposals. The applicant is understandably focused on trying to get the project started and on attempting to secure foundation support for doing so. Moreover, few applicants understand how important it is for foundations to continually stop funding old grantees so that they will have the flexibility to support new ones. It is the applicants' firm belief, therefore, that once they secure funding from a foundation, that funding should be perpetually renewable, as long as they are doing a good job. Given this preoccupation with starting up and this expectation that securing a grant from a foundation is the functional equivalent of “until death do us part,” it is no surprise that continuation is hardly ever mentioned in proposals.

When first asked for a continuation plan, the applicant is likely to react initially with frustration: “We haven't gotten the grant, the foundation says we might not get the grant at all, yet it wants us to do a lot of work to plan what we will do after the grant ends.” Sometimes the realization that foundation funding is not forever elicits an answer that combines naïveté with brutal realism, as in this response to a request for a continuation plan actually received by a midwestern foundation: “We would hold on as long as we could pay our bills, while hoping for a miraculous rescue.”

The first continuation plan submitted by applicants usually relies entirely on securing grants from other foundations. This is about the same thing as a personal retirement plan that relies on repeatedly winning the lottery: it theoretically can happen, but the odds are badly against it. As Michael Seltzer argues in his important book Securing Your Organization's Future (1987), two concepts must be stressed to the applicant: diversifying sources of support and gaining control over as many of these sources as possible. To be sure, grants from foundations, corporations, and government sources should be a part of any continuation plan, but not the only component nor perhaps even the main one. Earned income should have a prominent place, for it is not subject to the vagaries of foundation funding. Varieties of earned income include sales of products, fees for services, and contracts with other entities. The applicant could establish a for-profit subsidiary. Many have been very successful, the best example being the for-profit catalogue business of Minnesota Public Radio, which the organization sold to Dayton Hudson in 1998 for a reported $120 million. A supporting organization could be formed to enhance fundraising, for it offers excellent tax benefits to donors (and because individuals make up 90 percent of the charitable dollar, anything that makes an organization more attractive to individual donors will enhance the chances for continuation). The applicant could establish a “friends of” group and organize special fundraisers. The applicant could use the foundation's support as leverage to secure other investments, such as seed capital to start an endowment. And, getting back to grants: foundations, corporations, and the government are not the only sources. Some public charities are also grantmakers of significant size; the Fidelity Investments Charitable Gift Fund, for example, topped $100 million in grants in fiscal 1996 (Billiteri, 1998). 
Implementing any of these ideas would greatly diversify the base of support for the project beyond foundation funding. Many of the ideas, such as creating a supporting organization and producing earned income, are controlled by the grantee, which means that the organization is no longer held hostage to the whims of funders.

There are many other sources of ideas for project continuation. The National Center for Social Entrepreneurship, based in Minneapolis, has established programs to help nonprofit organizations adapt business standards and practices and thus become less reliant on foundation grants. Among the books that have been published on the subject are Nonprofit Piggy Goes to Market (Simons, Lengsfelder, and Miller, 1984), Part of the Solution (Union of Experimenting Colleges and Universities, 1988), and Revolution of the Heart (Shore, 1995). Applicants should be encouraged to avail themselves of these publications and services as quickly as possible. The time to develop a continuation plan is before the project starts, not, as typically happens, about thirty-two months into a thirty-six-month grant.

Evaluation

Evaluation is the cod liver oil of foundation work, as in “Take it: it's good for you.” Of course it is, for evaluation teaches lessons both grantmakers and grantseekers can learn by no other means, and it provides much-needed evidence of project outcomes. But, like cod liver oil, evaluation can be tough to swallow. Grantmakers and grantseekers alike live in dread of a negative evaluation report that (they fear) could jeopardize the future of a project. Grantseekers complain that grantmakers often use the evaluator as a sort of academic gumshoe, keeping an eye on them from afar. And often, they say, the program officer uses a less than perfect evaluation report as a flimsy pretext to decline interest in further support.

Nonetheless, there is evidence that evaluation is becoming more accepted in the grantmaking world, which is undergoing a shift in the way evaluation itself is conceived. As explained in the report Program Evaluation Practice in the Nonprofit Sector (Fine, Thayer, and Coghlan, 1998): “In recent years there has been a growing debate between two broad approaches to program evaluation. In the more traditional model, an external evaluator is employed as an objective observer who collects and interprets quantitative and qualitative findings and presents the information to management…. In the participatory evaluation model, program staff, clients … and other ‘stakeholders’ of the program are engaged in the evaluation process to increase their evaluation knowledge and skills, and to increase the likelihood that the evaluation findings will be used.”

So where do foundations stand on this question of the “old” (observer) versus the “new” (participatory) evaluation? The answer is that foundations are all over the map. Some are rigorous about evaluation, some less so, and others ignore it altogether. Moreover, there is no correlation between commitment level and style; that is, foundations that are committed to evaluation practice both old and new styles, as do those that are less committed. Although there is no unanimity, the new style seems to be gaining ground. And this style comprises four basic principles of best practice that can help grantmaker and grantseeker alike make wise decisions about conducting evaluation—decisions that will help make evaluation taste less like medicine and more like a plum. These principles are discussed in the paragraphs that follow.

PRINCIPLE 1: Good project evaluation is good grant management, and good grant management is good program evaluation.

Part of the reason that applicants (and many program officers) have regarded evaluation as distasteful has been their perception that it is something alien to, or different from, programming. If the evaluator is viewed as an outsider who sits in judgment of the project and its people, he or she will be feared. The “new” view sees the evaluator as part of the management team for the project. There is no “us versus them” if everyone is “us” and, more particularly, if everyone is focused on the same goal: the highest standards of project management.

PRINCIPLE 2: Project evaluation should be owned primarily by the applicant and designed primarily for the applicant's use.

If the project evaluation is owned by the foundation and designed by the foundation for its use, there is no way to prevent the grantee from regarding the evaluator as a spy for the foundation, and the evaluation itself as a form of social control imposed on the project by the foundation. When the applicants own the project evaluation and design it for their use, their perceptions change dramatically. The evaluator becomes a member of the team, and the evaluation a vital part of the project management plan. This is not to say that foundations cannot use or learn from project evaluations; it is to say that project evaluations are most useful to all when the foundation relinquishes control of them to the applicant.

PRINCIPLE 3: The most important decisions about evaluation need to be made by the stakeholders in the project.

If the stakeholders—the applying organization, its partner organizations, the people they are trying to serve, and the foundation—simply hand over decisions about evaluation to an outside consultant, they are doing a disservice to all involved. As the owners of the project, the stakeholders are best qualified, intellectually and spiritually, to make evaluation decisions. Allowing others to make them all but guarantees that stakeholders will get an evaluation plan that does not fit their needs, and it also guarantees that they will not learn how to create a useful evaluation plan for future projects.

PRINCIPLE 4: The stakeholders need to identify the important questions they wish the evaluation to address.

These questions may be focused on the context in which the project is being launched (What are the external factors that will affect this project?), on the implementation of the project (What can be learned about how to successfully manage such a project?), and especially on outcomes (What changes would the stakeholders like to see as a result of this project?). There need not be a plethora of questions—two or three for each of the headings should be sufficient—but they must be significant. These questions will determine nearly everything else about the evaluation. What sort of data will be required to answer the questions? The answer will determine the methodology to be used. What sort of expertise will be needed to gather the data? The answer will determine who should lead the evaluation process—someone internal to the project or an outside individual or firm. How will this information be used? The answer will determine the need for dissemination or marketing services as part of the project. Taken collectively, all these answers will determine how much the evaluation is likely to cost.

Working with Applicants on Evaluation

Keeping these four principles in mind, you will typically have a great deal of work to do on the evaluation portion of the proposal. Very few applicants will have taken ownership of project evaluation in the fashion described here. In fact, in many proposals, evaluation is not mentioned at all. Your first task, then, is to promote the notion of project evaluation as good project management and to convince applicants that they would be the owners of the evaluation component of the project. Depending on applicants' past experiences with and prejudices about evaluation, accomplishing this can be a very time-consuming process. It is essential to do it, however, or applicants will always regard evaluation as an imposition.

Once you have completed the task of promoting evaluation as a management tool, you will need to encourage the grantseeker to begin the process of making the fundamental decisions about the evaluation component, especially to decide about the important questions that the evaluation needs to address. These decisions should be made before the applicant decides which person or which firm will manage the evaluation of the project. For some projects, it may be appropriate to have a stakeholder take the lead on evaluation. Although this approach is generally the least expensive choice, the obvious conflict of interest involved means that it is best employed for smaller projects or for those in which the likely outcomes are fairly straightforward. If, however, the stakeholder is not skilled in evaluation techniques, the initial low expense might multiply, for it may be necessary to bring in others to clean up the resulting mess. The most expensive option is to hire a professional evaluation firm, which offers a high level of evaluation expertise. In-between options include hiring professors or other experts in the field, or engaging graduate students. In any case, in order to be done right, evaluation costs money. Whatever the cost, it is important to add it to the request. Nothing will sour an applicant on evaluation faster than insisting that the cost of evaluation be covered by taking it out of the originally requested amount.

Finally, it will be necessary for the applicant to choose a methodology. As mentioned previously, the methodology required will flow directly from the type of questions that the evaluation must answer. At one time, most methods of evaluation were classified under three broad rubrics: impressionistic, which was long on observation and short on data; anthropological, which was focused on the reactions of the people touched by the project; and experimental, which mirrored the rigor of the sciences, with baseline studies, control groups, experimental groups, and a heavy reliance on statistical techniques. More recently, evaluators have taken to using “mixed methods” that combine approaches from all three rubrics. Although the complexity involved with using mixed methods can be somewhat daunting, the move toward this approach is encouraging. Because each project is distinctive, evaluations must be as well. Evaluation components should never come “off the rack”: they need to be custom tailored.

It makes little sense to attach a highly complex, million-dollar evaluation to a modest $10,000 project, nor is it any more sensible to select a basic $5,000 evaluation for a complex, million-dollar initiative. In fact, for some types of grants, there is no need for any formal evaluation at all. For instance, an annual “good corporate citizen” operating grant to a local arts council is highly likely to be repeated, unless the organization it supports becomes an outright failure. To rigorously evaluate the outcomes of such a project would be a waste of time and money. The key considerations in a case like this are flexibility and proportionality. You need to be flexible in applying rules about evaluation, and you need to make the evaluations proportional in size and scope with the projects they are assessing.

The cost of the evaluation, as mentioned before, will be largely determined by the questions that must be answered and by the methodology required to answer them. There is no counterpart to the health care concept of reasonable and customary fees when considering the cost of project evaluation. Evaluators have no established pay scales, and there is no generally accepted fee schedule among foundations themselves. The costs of an evaluation will vary with such factors as the number of sites, distance between them, number of people served, scope of the project, methodology chosen, and type of reporting required.

We can nonetheless derive a very rough guideline as to how much a foundation should pay to evaluate a project, by expressing the cost of evaluation as a percentage of the total cost of the project, using data collection and manipulation needs as key indicators. If very little data need be collected or manipulated in order to answer the important questions for the project, 1 to 2 percent of project costs is a reasonable price to pay for such an evaluation. If these data needs are significant but not enormous, and if the complexity level is not too high, 4 to 6 percent of project costs seems a reasonable range for such services. If it will be necessary to conduct a very large amount of research, data gathering, analysis, and publication of results, and if the complexity of these tasks is high, then 10 to 20 percent of total project costs would not be out of line for this work. It is worth noting that the price of an evaluator's service is always negotiable and that the first price he or she asks usually includes room for bargaining. It pays the foundation, therefore, to keep these guidelines in mind when contracting with an evaluator.
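The percentage guideline above can be expressed as a simple calculation. The sketch below is purely illustrative; the function name and the tier labels ("light," "moderate," "heavy") are assumptions introduced here, not standard terminology in evaluation practice.

```python
# Rough evaluation-cost estimator based on the percentage guideline above.
# Tier labels and the function name are illustrative, not standard terms.

def evaluation_cost_range(project_cost, data_needs):
    """Return a (low, high) dollar range for evaluating a project.

    data_needs: "light"    -> little data collection or manipulation (1-2%)
                "moderate" -> significant but manageable data needs (4-6%)
                "heavy"    -> extensive research, analysis, publication (10-20%)
    """
    tiers = {
        "light": (0.01, 0.02),
        "moderate": (0.04, 0.06),
        "heavy": (0.10, 0.20),
    }
    low_pct, high_pct = tiers[data_needs]
    return (project_cost * low_pct, project_cost * high_pct)

# Example: a $500,000 project with moderate data needs
low, high = evaluation_cost_range(500_000, "moderate")
print(f"Budget roughly ${low:,.0f} to ${high:,.0f} for evaluation")
# -> Budget roughly $20,000 to $30,000 for evaluation
```

Since an evaluator's first quoted price usually leaves room for bargaining, the low end of the range is a reasonable opening position in negotiation.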

Formative and Summative Evaluation

The terminology of evaluation is also undergoing evolution. The “old” way of practicing evaluation referred to evaluations conducted during the life of the project as formative; evaluations conducted at the end of the project were called summative. Formative evaluation allowed for the project managers to get important feedback on the project's development and to make necessary midcourse corrections. Summative evaluation told what happened during the life of a project and explained why it happened. More recently, the distinction between these two terms has blurred. “New” evaluators point out that every formative evaluation has some aspect of summative evaluation in it, and vice versa. These terms have come to be defined mainly in terms of how the evaluation will be used. If the purpose is program improvement, it is formative; if the purpose is to make a go–no go decision regarding further funding or bringing to scale, the evaluation is summative.

It is important to understand that formative evaluation—undertaken for refinement and improvement of the project—should be started early. There is always value in evaluating projects at any stage in their life span, but the sooner it starts, the sooner feedback will flow to the project director, and the sooner midcourse corrections can be effected. And the sooner that process of refinement begins, the better the project outcomes are likely to be. In short, the best way to get a positive summative evaluation is to have an early formative evaluation, and in truth, both formative and summative evaluations should be regarded as a seamless whole.

Choosing the Evaluator

Another question is, Who should be the evaluator: an expert in the field or an expert in evaluation? The dilemma is that the person who really knows the subject at hand often doesn't know anything about evaluation methods, and the expert on evaluation methods often knows nothing about the subject of the grant. In an ideal world, the expert in the field would also be an expert in evaluation methods. Failing that, it would be useful, if finances allow, to hire as a team both types of experts, thus ensuring that the methodology is impeccable and that nuances of the knowledge base are appreciated and included.

If the applicant must choose between one kind of expert and the other, however, there are a few guidelines to follow. One is to consider who will be the primary audience for the evaluation. If it is to be others in the field, it may be best to choose the expert in that field. If the primary audience will include policymakers, it is best to have a methodologically unassailable evaluation, so the evaluation expert would probably be the choice.

Another guideline has to do with the people conducting the project and those benefiting from it. Often they are comfortable with someone who knows their field and their language, and especially with someone who shares their values; they are uncomfortable with those whom they perceive do not fit these criteria. Such facts might argue for choosing the person familiar with the field over the evaluation specialist.

The last guideline is the hardest to judge; it springs from the principle involved in the Hawthorne effect, which holds that the very act of observing a system stimulates changes in the output of that system. Evaluators, whether subject experts or evaluation specialists, need to do their work as unobtrusively as possible. If all other things are equal between the subject expert and the evaluation expert, their subtlety and discretion could become a deciding factor.

Who Hires the Evaluator?

The question of who hires the evaluator should be easy to settle. If the grantee owns the evaluation, then the grantee should hire the evaluator, and the evaluator should report to the grantee's project director. If the evaluator is hired by the foundation and reports to the program officer, the grantee will never get past the feeling that the evaluation is an imposition and that the evaluator is a spy. It is always useful for the foundation to retain veto power on the hiring of the evaluator, in case the grantee should wish to hire someone with whom the foundation has had a bad experience previously. This veto power, however, should be used as sparingly as possible. If employed more than once per applicant, it will feel to the applicant as though the foundation is trying to control the evaluation by denying the applicant's choice of evaluators.

The great challenge you will face in helping applicants make these decisions about project evaluation is that all of them must be made during the proposal review stage, while everything remains conditional. Until a proposal is actually funded, no one can be hired to evaluate the proposed project. Yet it is important to launch a formative evaluation simultaneously with the start of the project, for the lessons begin to arise immediately. The challenge, therefore, is to work with the applicant to conditionally select the evaluation questions, design, methodology, and evaluator (and estimate the cost), so that the evaluation will be ready to launch should the project be funded.

A number of excellent primers on evaluation have been published. Among the best are Practical Evaluation (Patton, 1982); Program Evaluation Practice in the Nonprofit Sector (Fine, Thayer, and Coghlan, 1998); and the W. K. Kellogg Foundation Evaluation Handbook (Millett, 1998).
