Chapter 6
Assessment of Service-Learning

One of the most common concerns regarding service-learning is assessment. How do we know what students are learning? Do communities really benefit? Is service-learning worth the time, energy, and money devoted to it? This chapter provides guidance in developing and implementing a comprehensive assessment strategy. It is designed to be particularly useful for those who are responsible for the administration of service-learning or for assessment at the institutional level. It is important to note that there is no “one size fits all” assessment approach that will work for all institutions or even for all institutions of a particular type. The responses to the questions this chapter addresses lay out the wide range of issues to consider in choosing an approach to assessment, as well as the myriad possible methods. Assessment of student reflection is addressed in 2.3. Questions 4.6 and 5.7 provide further information about assessment in individual courses and cocurricular service-learning experiences, respectively.

6.1 What does service-learning assessment entail?

Why Is Assessment of Service-Learning Important?

What Are the Different Types of Service-Learning Assessment? What Purposes Do They Serve?

What Are the Elements of a Comprehensive Assessment Plan?

Assessment of service-learning enables its practitioners, participants, supporters, advocates, and funders to gain an understanding of its value to students, faculty, community leaders and members, the institution, and higher education and society. It can also reveal how effectively service-learning achieves its desired outcomes, what its cost-benefit ratio is for each of its constituencies, and what is required to enhance its value and outcomes.

The term assessment can be used as an umbrella for other terms—such as counting, evaluation, benchmarking, outcomes measures, and research—that refer to the systematic gathering and processing of information to reach conclusions (Hatcher & Bringle, 2010). In the context of student learning and development, assessment also describes the process of determining the extent to which a particular outcome or set of outcomes has been achieved by an individual or a group. In this book, I have used both Hatcher and Bringle's broad definition of assessment and the more specific definition of assessment related primarily to learning outcomes.

Because service-learning is a complex process and involves multiple stakeholders, several forms and levels of assessment are required. An overview of the potential components of a comprehensive plan for the assessment of service-learning follows.

Counting in service-learning helps to answer questions related to numbers of service-learning courses, community partner organizations, student participants, service hours, tons of trash hauled out of a river during a cleanup, children tutored, and the like. While numbers do not reflect the impact of service-learning on students, communities, and institutions, they are one measure of output. Numbers are required in grant applications and other requests for funding, annual reports, national surveys like Campus Compact's Member Survey, and applications for recognition, such as the President's Community Service Honor Roll and the Carnegie Elective Community Engagement Classification.

Evaluation, like counting, is also about outputs. It can measure the quality and effectiveness of a program or course, as well as participant satisfaction. Its focus is on a specific intervention, and it is primarily used internally to improve design and practice. Evaluation can be used to measure how consistent the implementation of a course or program was with its design, whether it kept to its timeline, whether its actual costs matched the budget, and how well it met its goals (Hatcher & Bringle, 2010). Along with counting, evaluation is the most common type of assessment required in reports on grant-funded programs. The findings of evaluations are usually not generalizable.

Benchmarking is often used to determine how one or more aspects of an institution's service-learning program compare with an established standard, programs at other institutions nationally, or programs at peer or aspirational-peer institutions. It is fairly easy to gather and compare data regarding numbers of courses, participants, or staff members; types of curricular or cocurricular offerings; and practices employed. It is more challenging to compare results, such as impacts on students and communities, although there are some multi-institutional groups that are doing this work.

Service-learning outcomes assessment, which reflects the more specific definition of assessment described above, measures the extent to which desired outcomes are achieved for students, communities, faculty, and institutions. Its purpose is to gather, analyze, and interpret various forms of evidence to increase outcome attainment by improving practice.

In the context of service-learning, research is systematic, scientific inquiry that is designed to collect, analyze, interpret, and use data to understand, describe, predict, or control an educational phenomenon (Mertens, 2005). It produces information about why specific outcomes do or do not occur or why a program produces or does not produce a particular result. Research both uses and guides theoretical and conceptual frameworks. High-quality research can generalize findings so that they can be used beyond the specific study population.

Assessment, generally speaking, can occur on multiple levels. When the individual is the unit of analysis, assessment seeks to determine changes in individuals or their circumstances. In the case of service-learning student participants, one could examine a student's achievement of learning outcomes in a particular course or cocurricular experience, in the major, as a result of several service-learning experiences, or across the entire college experience. For individual clients of a community partner organization, one could measure, for example, improvement in a child's reading comprehension, a recent immigrant's language skills, or an elderly person's ability to use small motor skills.

Individuals' data can be sampled and aggregated for analysis at course, major, program, experience, or institutional levels. For example, sampled, aggregated, and summarized data related to achievement of particular learning outcomes in a first-year service-learning seminar in a specific major could be compared with sampled, aggregated, and summarized data from a senior capstone course in the same major to determine the extent to which students have improved in critical thinking. Aggregated student data from assessment of achievement of learning outcomes is also required by regional accrediting associations. Disaggregated data can reveal how well subgroups of students are succeeding.

As far as the community is concerned, aggregated data can be used to examine whether engaging with service-learning has had an overall positive impact on the lives of individuals in the community. For example, in an ongoing partnership between a faculty member who teaches a service-learning course in public health and a community where there are high numbers of people with obesity and diabetes, the students have provided health education and screening to residents over six semesters. Aggregated data from a door-to-door survey of community residents could be used to determine whether residents' health has improved during the period of the students' interventions. Disaggregated data could show, for instance, that women's health improved more than men's, enabling the faculty member and students to consider alternative ways to deliver their message to male residents.
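
For readers who want to see what aggregation and disaggregation look like in practice, here is a minimal sketch in Python. It assumes a hypothetical file, screening_results.csv, with columns semester, gender, and a numeric health_score; the file and column names are illustrative, not drawn from an actual study.

```python
# A minimal sketch of aggregation and disaggregation, assuming a
# hypothetical file screening_results.csv with columns: semester,
# gender, and a numeric health_score from each screening.
import pandas as pd

df = pd.read_csv("screening_results.csv")

# Aggregate: mean health score per semester across all residents,
# to see whether health improved over the six semesters.
overall = df.groupby("semester")["health_score"].mean()
print(overall)

# Disaggregate: mean health score per semester by gender, which
# could reveal that women's scores improved more than men's.
by_gender = df.groupby(["semester", "gender"])["health_score"].mean().unstack()
print(by_gender)
```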

Assessment at programmatic and institutional levels can also examine such areas as infrastructure for service-learning, environmental factors that act as supports and barriers for service-learning, and cost-effectiveness. Assessment of campus-community service-learning partnerships can address these areas as well as whether the partnership has yielded changes in the broad structures that affect the life of the community.

A comprehensive assessment plan for service-learning includes a variety of types and levels of assessment. The development of an assessment plan can be approached in several ways, ranging from reactive to proactive. For example, a simple plan could begin with a list of requirements for data on an annual basis, such as institutional reports, national surveys, and recognition and award applications, together with a list of sources of existing data from which information can be drawn, such as course catalogs, faculty assessments of learning in service-learning courses, and routine program evaluations. By comparing the requirements with existing data, it is possible to determine what assessments need to be added. A more proactive and useful assessment plan might be based on a grid that identifies desired outcomes—for students, faculty, community partners, and the institution—and the service-learning experiences that are designed to address these outcomes (Hatcher & Bringle, 2010). An assessment team can then determine which types and methods of assessment would provide information about the extent to which the outcomes are achieved. It is unrealistic to assess every outcome annually, but the plan can be used to develop a schedule of which assessments will be conducted in a particular year.
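
To make the grid concrete, here is a minimal sketch of one way it might be represented; the outcomes, experiences, methods, and scheduling years are hypothetical placeholders rather than recommendations.

```python
# A minimal sketch of an outcomes-by-experiences assessment grid;
# all outcomes, experiences, methods, and years are hypothetical.
assessment_grid = {
    "critical thinking": {
        "experiences": ["first-year seminar", "senior capstone"],
        "methods": ["VALUE rubric scoring of reflection essays"],
        "year_scheduled": 2025,
    },
    "community health outcomes": {
        "experiences": ["public health course partnership"],
        "methods": ["door-to-door resident survey", "partner interviews"],
        "year_scheduled": 2026,
    },
}

# Because assessing every outcome annually is unrealistic, derive
# the subset of outcomes scheduled for a given year.
year = 2025
due = [o for o, plan in assessment_grid.items() if plan["year_scheduled"] == year]
print(f"Outcomes to assess in {year}: {due}")
```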

Source of additional information

  1. Hatcher, J.A., & Bringle, R.G. (2010). Developing your assessment plan: A key component of reflective practice. In B. Jacoby & P. Mutascio (Eds.), Looking In, Reaching Out: A Comprehensive Guide for Community Service-Learning Professionals. Boston, MA: Campus Compact.

6.2 What are the possible methods for assessing service-learning?

It is easy to become confused by the sheer number of assessment methods, the question of which method best serves a particular purpose, and the varying complexity of their implementation. In addition, service-learning is a case in point for the admonition often attributed to Albert Einstein that not everything that can be counted counts, and not everything that counts can be counted. Service-learning assessment can comprise qualitative and quantitative methods and indirect as well as direct assessment. Methods for collecting information that are useful for assessing service-learning are listed below, together with their general purposes, advantages, and disadvantages. The questions that follow address how these methods can be used to gather information from and about different participants and stakeholders in service-learning.

Surveys. 

Along with counting, surveys, including questionnaires and checklists, are the most commonly used quantitative methods of service-learning assessment. They can obtain information quickly and easily in a non-threatening manner. Surveys can be administered inexpensively online or on paper, be completed anonymously, and produce a lot of data from a large number of people. Many existing survey instruments can be used to measure such constructs as attitudes, satisfaction, and perceived learning or improvement. Unless complex statistical comparisons are required, data analysis can be relatively simple. Numerical data can easily be disseminated through numbers, tables, and descriptive text.

However, surveys provide self-reported results that may or may not reflect the results that direct assessment might produce. In addition, because surveys are frequently used, respondents may suffer from “survey fatigue” and not complete the surveys or complete them without sufficient thought. The wording of the questions may bias responses, and the data yielded may lack detail and richness.
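
When complex comparisons are not needed, survey analysis can indeed be simple. The following minimal sketch assumes a hypothetical file, partner_survey.csv, containing Likert items scored 1 (strongly disagree) to 5 (strongly agree); the file and item names are illustrative.

```python
# A minimal sketch of descriptive survey analysis; the file and
# column names are hypothetical.
import pandas as pd

responses = pd.read_csv("partner_survey.csv")
likert_items = ["communication_quality", "student_preparation"]

# Means, standard deviations, and quartiles per item are often all
# a stakeholder report needs.
print(responses[likert_items].describe())

# The response distribution for a single item, for a table or chart.
print(responses["communication_quality"].value_counts().sort_index())
```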

Achievement Testing. 

When a service-learning experience is integrated into a course to enhance learning of academic content, achievement testing through quantitative methods, such as multiple-choice and short-answer tests, can indicate how well students have learned the material. Unless a very similar course is also offered without service-learning and the same test is administered in both versions of the course, it is difficult to ascertain whether, or to what extent, service-learning enhanced content learning. An example of an introductory chemistry course in which this type of assessment was used is described in 2.5.

Content Analysis of Student Work. 

Content analysis is a research methodology employed by social scientists to study the content of communication by identifying themes and patterns (Steinberg, Bringle, & Williams, 2010). One of the most prevalent methods of assessing student learning is analyzing reflection products. As discussed in Chapter Two, reflection can take many forms, including journals, essays, presentations, creative expression, and portfolios. The most comprehensive means of assessment of student learning is through portfolios. A portfolio is a collection of work and reflective products that is multidimensional and shows progress over time. In addition to its usefulness for individual assessment, data from portfolios can be aggregated for the purposes of course-, program-, or institution-level assessment. In service-learning assessment, rubrics are frequently employed to streamline the assessment process and to add consistency. There are many examples of rubrics available online that can be used or adapted to assess particular desired learning outcomes. I particularly recommend the VALUE rubrics developed by the Association of American Colleges and Universities for this purpose. These rubrics are described in 6.4.

Interviews. 

Conducted in person or by telephone, interviews can be used to acquire information that is more in-depth than can be obtained through surveys. Interviews are often used in combination with surveys to learn more about the answers initially provided by respondents. Interviewers may use a protocol, or standard list of questions, or allow the interview to be flexible and free-flowing. Interviews are usually recorded for later analysis. They can be challenging to schedule, take considerable time to conduct, and may be difficult to analyze. The interviewer's personal characteristics or nonverbal cues may bias the responses. Interview results are not generalizable.

Focus Groups. 

Through focus groups, participants can interact and build on one another's responses. Basically interviews conducted in a group format, focus groups allow for interactive discussions that can explore topics thoroughly and from various perspectives. Focus groups can efficiently gather a range and depth of information in a short time, quickly and reliably surface common perceptions, and convey information about future opportunities to participants. They are particularly useful for evaluation and marketing. Limitations include the possibility of uneven participation, participants' withholding responses because of the group setting, and the difficulty of bringing several people, usually six to ten, together at the same time. A skilled facilitator is required. As with interviews, facilitator bias may skew results, and recordings or transcripts of focus groups are challenging and time-consuming to analyze. Focus group data is meant to tell a story rather than to provide numbers, and it is not generalizable.

Observation. 

Viewing and recording operations or behaviors as they occur can be an effective means of direct assessment. Data acquired through observation is especially useful in corroborating and supplementing information that students have produced through reflection or that community partners have provided. Observations can be recorded in a journal or log for later analysis. Rubrics, checklists, or rating scales can be used to acquire quantitative data related to the frequency, characteristics, or absence of particular behaviors. For example, in a course in which working collaboratively to achieve a goal is a desired outcome, the faculty member could allow students to engage in the work during class time and rate the students' performance using a rubric. If one criterion of collaboration is problem solving, a zero-to-fifteen-point rating scale might include the following levels: does not participate in problem solving (0 points); occasionally offers suggestions to solve problems and sometimes demonstrates effort to help the group work together (5 points); generally offers suggestions to solve problems and sometimes encourages group participation (10 points); and frequently involves the whole group in problem solving (15 points). In a tutoring program, observers could assess the effectiveness of tutors using a checklist of positive behaviors like making eye contact with the pupil and using language appropriate to the pupil's age and ability level. However, it can be challenging to interpret observed behaviors and to categorize observations. If the observer is not the classroom faculty member, it can be costly to hire and train individuals to do the observations.
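
Encoded as data, the rating scale above might look like the following minimal sketch; the recorded ratings are hypothetical.

```python
# A minimal sketch of the zero-to-fifteen-point problem-solving
# scale described above, with hypothetical ratings from one
# observed class session.
PROBLEM_SOLVING_SCALE = {
    0: "does not participate in problem solving",
    5: "occasionally offers suggestions; sometimes helps the group work together",
    10: "generally offers suggestions; sometimes encourages group participation",
    15: "frequently involves the whole group in problem solving",
}

ratings = [15, 10, 10, 5, 15, 0, 10]  # one rating per observed student

mean_score = sum(ratings) / len(ratings)
print(f"Class mean on problem solving: {mean_score:.1f} of 15 points")
```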

Document Review. 

Review of existing documents, including planning documents, syllabi, websites, brochures, budgets, correspondence, meeting minutes, grant applications, annual reports, assessment data, and the like, can yield information about a program or organization, how it operates, and its effectiveness. Other potentially useful sources of data include grades, faculty activity reports and tenure and promotion applications, and campus and community newspapers and magazines. Although it can be time-consuming, document review is a relatively unbiased and unobtrusive way to evaluate a program. It can be an effective means of self-assessment or of assessment by external evaluators. However, gaining access to existing data can be challenging. In addition, information can be incomplete and may not fully answer the questions evaluators seek to answer without additional assessment through interviews, focus groups, or other methods.

Case Studies. 

The purpose of a case study in service-learning is to develop a full description of a program, course, or partnership and its effects on students, faculty, community partners, and the institution. It is an intensive mode of assessment and can include several of the quantitative and qualitative methods described above. Case studies can be designed to assess what actually occurred, whether it had an impact (either expected or unexpected), and what aspects of the program, course, or partnership led to documented impacts. What is gained in richness through a case study evaluation is counterbalanced by the inability to generalize beyond the immediate case. Not to be undertaken lightly, case studies require considerable time and energy to do properly (Balbach, 1999).

Sources of additional information

  1. Seifer, S.D., Holmes, S., Plaut, J., & Elkins, J. (2002, 2009). Tools and Methods for Evaluating Service-Learning in Higher Education. https://www.nationalserviceresources.gov/tools-and-methods-evaluating-service-learning-higher-education.
  2. Steinberg, K.S., Bringle, R.G., & Williams, M.J. (2010). Service-Learning Research Primer. Scotts Valley, CA: National Service-Learning Clearinghouse. http://csl.iupui.edu/doc/service-learning-research-primer.pdf.

6.3 What issues should we consider in choosing assessment methods?

As is true for any pedagogy or program, a variety of methods and tools can be used to assess service-learning. Despite its importance, it is often challenging to find the time to design and conduct assessment. Therefore, you should think carefully about what you wish to accomplish and what is feasible. Issues to consider in choosing among the many methods include:

Who Is This Information For? 

Students, faculty members, community partners, service-learning center staff, campus administrators, internal and external funders, and trustees are among the stakeholders who may seek information about service-learning.

What Do They Want to Know? 

It is important to know what you need and want to measure. Numbers of service hours or clients served? Achievement of student learning outcomes? Student or faculty attitudes? Impact on community members? Impact on community partner organizations? Whether a program met its goals? How effectively resources were used? Participant satisfaction? Service-learning assessment can produce many kinds of information, so it is important to know exactly what a particular constituent group wants to know. Because time and resources for assessment are almost always limited, it is worthwhile thinking about whether the information the assessment is likely to yield will be useful or merely interesting to stakeholders.

Will Quantitative or Qualitative Data Best Meet the Needs? 

Several factors may affect your decision to use quantitative, qualitative, or mixed methods in your assessment. Quantitative data answers questions in terms of numbers, can be used to compare responses from different groups of people, and can provide statistics in various forms. Qualitative assessment can go deeper into an issue, explore nuances, and provide rich words, descriptions, and details. It focuses on differences in quality rather than quantity. The population studied through qualitative research is generally smaller than for quantitative research, because the depth of data collection does not easily allow for large numbers of participants.

Do You Need Direct or Indirect Assessment? 

Both of these assessment approaches can yield useful data about service-learning. In regard to student learning, direct assessment of service-learning consists of examining samples of students' work based on the desired learning outcomes of a course or cocurricular experience. Such samples could include reflection journals, answers to final exam questions, essays, presentations, or portfolios. Indirect measures of student learning might include self-assessment surveys about what service-learners believe they learned or achieved. From the community perspective, direct measures of the effects of service-learning might include pre- and post-tests of the reading skills of children tutored or how many adult participants in a literacy program passed the English-as-a-Second-Language examination. Indirect measures would include evaluations by the teachers of the children in the tutoring program and surveys of the literacy program participants about their comfort in using English in daily life.
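
A direct measure like the reading pre-/post-test can be reduced to simple gain scores, as in this minimal sketch; the pupils and scores are hypothetical.

```python
# A minimal sketch of gain scores on a reading pre-/post-test for
# tutored children; pupil names and scores are hypothetical.
pre_scores = {"pupil_a": 62, "pupil_b": 55, "pupil_c": 71}
post_scores = {"pupil_a": 70, "pupil_b": 68, "pupil_c": 74}

gains = {pupil: post_scores[pupil] - pre_scores[pupil] for pupil in pre_scores}
mean_gain = sum(gains.values()) / len(gains)
print(gains)
print(f"Mean gain: {mean_gain:.1f} points")
```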

What Sources of Information Already Exist? 

Existing data may already answer some of the questions to which stakeholders seek answers. For example, many institutions participate in regular data-collection efforts such as the annual Campus Compact Member Survey, the Your First College Year survey, the National Survey of Student Engagement, the Multi-Institutional Study of Leadership, accreditation self-studies, and many others conducted by the institutional research, registrar, and financial aid offices. There are also voluntary institutional applications, such as those for the President's Community Service Honor Roll and the Carnegie Elective Community Engagement Classification. It may be possible to add one or more items to existing surveys or other data-collection methods that will provide information more directly related to your needs. Other sources of data are listed in 6.2 in the discussion of document review as a form of assessment.

What Resources Can You Draw Upon? 

Before embarking on the assessment process, you should determine who will be involved and what resources are available in terms of time, expertise, and budget. Designing the study and collecting data are only part of the assessment process. Developing data-collection protocols and processes, purchasing standardized instruments, administering assessment, scoring, data analysis, and creating reports can be costly and time-consuming. Professional assistance with all areas of assessment may be available from the institutional research office or support units located within student or academic affairs. Students, faculty, community partners, and staff in the center for teaching and learning may be able to provide assistance. Collaborating with others who have similar needs for assessment may also help to minimize costs.

The next five questions address assessment of service-learning from the perspectives of its key constituencies: students, communities, partnerships, faculty, and institutions.

6.4 What should assessment of service-learning student participants comprise?

Why Assess the Impact of Service-Learning on Students?

In addition to understanding the effects of service-learning on student participants in individual courses or cocurricular experiences, there are many other reasons to assess student impacts. Faculty and staff members who engage students in service-learning naturally want to know whether their desired learning outcomes were achieved and what it was about the experiences that led to their achievement or lack of achievement. They can then use the data to improve their practice. Service-learning educators also find themselves in the position of explaining to students why they should participate and, thus, find it useful to be able to call upon documentation of how service-learning contributes to student learning and development. Faculty members can also use assessment data to demonstrate to colleagues the academic rigor of service-learning, for inclusion in their teaching portfolios, and for encouraging others to adopt the pedagogy. As 7.7 indicates, assessment data can also go a long way toward demonstrating the value of service-learning in the pursuit of institutional support and in fundraising.

There are various ways to categorize potential effects of service-learning on students. Janet Eyler and Dwight E. Giles, Jr., identify six broad categories of student impact: personal and interpersonal development; understanding and applying knowledge; engagement, curiosity, and reflective practice; critical thinking; perspective transformation; and citizenship (1999). Gelmon, Holland, Driscoll, Spring, and Kerrigan answer the question, “What do we want to know?” with this list of potential student outcomes: awareness of community issues, assets, and needs; quality and quantity of interactions with the community; present and future commitment to service; career decision making and professional skill development; awareness of personal strengths and limits and changes in preconceived beliefs; understanding and applying course content; self-confidence and comfort in diverse settings; sense of ownership of responsibility for outcome of community project and role as learner; demonstrated abilities in oral and written communication and recognition of their importance; and valuing the pedagogy of multiple teachers, including peers, community partners, and faculty (2001, p. 28). Question 6.2 describes various methods that can be utilized in assessing the impacts of service-learning on students. Questions 2.3 and 4.6 address assessment of student learning through reflection and in academic courses, respectively. Question 5.7 covers assessment of cocurricular service-learning experiences. An extensive compilation of scales that can be used to measure a wide range of student outcomes can be found in The Measure of Service Learning: Research Scales to Assess Student Experiences (Bringle, Phillips, & Hudson, 2004).

Although they were not designed specifically for service-learning, the Valid Assessment of Learning in Undergraduate Education (VALUE) rubrics developed by AAC&U are being used by more than 1,000 colleges and universities. The sixteen rubrics are designed for institution-level assessment of AAC&U's Liberal Education and America's Promise (LEAP) Essential Learning Outcomes: civic engagement, creative thinking, critical thinking, ethical reasoning, foundations and skills for lifelong learning, information literacy, inquiry and analysis, integrative and applied learning, intercultural knowledge and competence, oral communication, problem solving, quantitative literacy, reading, teamwork, written communication, and global learning (Rhodes & Finley, 2013). These outcomes, which represent a consensus among educators and employers about the preparation students need to be successful in “civic life and the global economy,” also represent the desired outcomes of curricular service-learning (Rhodes & Finley, 2013, p. 5).

All the regional accrediting bodies have embraced the VALUE rubrics as an acceptable approach for assessment of student learning. The rubrics are useful for both formative and summative assessment within individual disciplines and across general education programs. While they were not developed as grading rubrics, they “can be translated into grading rubrics for specific courses, using the same criteria or dimensions for learning, but the performance descriptors would need to be modified to reflect the course content and assignments being examined, while still preserving the dimensions of learning in the original rubric” (Rhodes & Finley, 2013, pp. 6–7; italics in original). Thus, the rubrics can be adapted to assess either individual service-learning courses or all service-learning courses at a particular institution. Terrel L. Rhodes and Ashley Finley (2013) offer guidance and examples of how the rubrics can be modified for specific institution-wide and course-based assessment of student learning.

Sources of additional information

  1. Association of American Colleges and Universities. (2013, October). VALUE: Valid Assessment of Learning in Undergraduate Education. www.aacu.org/value/rubrics/index_p.cfm.
  2. Bringle, R.G., Phillips, M.A., & Hudson, M. (2004). The Measure of Service Learning: Research Scales to Assess Student Experiences. Washington, DC: American Psychological Association.
  3. Eyler, J.S., & Giles, D.E., Jr. (1999). Where's the Learning in Service-Learning? San Francisco, CA: Jossey-Bass.
  4. Gelmon, S.B., Holland, B.A., Driscoll, A., Spring, A., & Kerrigan, S. (2001). Assessing Service-Learning and Civic Engagement. Providence, RI: Campus Compact.
  5. Rhodes, T.L., & Finley, A. (2013). Using the VALUE Rubrics for Improvement of Learning and Authentic Assessment. Washington, DC: Association of American Colleges and Universities.

6.5 How should service-learning be assessed from the community perspective?

How Can We Tell Whether Service-Learning's Benefits Outweigh Its Costs for the Community Partner?

How Can We Measure Changes in Individuals or Systems?

There is no doubt that there is far more evidence about the effects of service-learning on students than about its effects on the community. Service-learning educators are challenged by such issues as: What community impacts should we measure? Should we burden our community partner with unwieldy assessment responsibilities? Where do we find the time and expertise to do high-quality assessment beyond simple counts?

As discussed in Chapter Three, potential campus and community partners should hold early conversations about the outcomes each seeks from the relationship and what criteria and measures will be used to assess the extent to which the desired outcomes are achieved. Randy Stoecker and Elizabeth A. Tryon rightly point out that “turning the ship of service learning to point to community outcomes rather than primarily student outcomes actually requires agency staff to involve themselves in steering that ship” (2009, p. 180). However, while campus resources for assessment are often limited, they are likely to be even more limited on the part of the community. As a result, it is important to be realistic and to carefully prioritize what specific information is most needed at what point in time and for what purposes. If, for example, the partnership is funded by a grant, then the information required by the grantor for reports or applications for additional funding may be primary. It is also worthwhile for the partners to discuss what information is easiest to collect that would also be useful in assessing the extent to which community outcomes were achieved. For instance, counting and brief self-report surveys are simple, low-burden methods that can yield valuable data. How many meals were packaged and delivered? Were more clients of the community partner organization served? Were there shorter wait times for clients seeking services? Were there more hits on the organization's website? Brief one-time surveys administered orally or in writing to the organization's on-site clients can assess whether there is increased satisfaction with the quality of information or service.

Most service-learning centers conduct routine evaluations of the benefits community partners believe they received and the level of their satisfaction with their participation in service-learning. Subjective questions related to satisfaction of community partners might address whether the students had sufficient knowledge and skills to complete their responsibilities, whether the number of service hours was adequate, and whether there was sufficient communication with campus partners. Formative assessment, even if done informally, is helpful in identifying areas of concern and making adjustments in them before they become insurmountable problems.

As a partnership progresses, additional outcomes are generally sought and more joint activities are planned. Further assessment of the extent to which those outcomes are achieved requires the use of more complex assessment measures. This is especially true if the desired outcomes include positive effects on individuals, groups, and systems. These assessments can be undertaken by students in service-learning courses or independent studies that involve community-based research, graduate students seeking topics for theses and dissertations, and teams of students in courses in measurement and statistics, sociology, education, business, or other fields related to the work of the community partner. A simple but excellent worksheet that can be used to analyze the cost-benefit ratio of participation in service-learning for a community organization is included as Exhibit 6.1. It can enable individuals without extensive knowledge or experience to conduct this assessment.
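
Although Exhibit 6.1 provides the worksheet itself, the underlying arithmetic is a simple tally, sketched below with entirely hypothetical line items and dollar figures.

```python
# A minimal sketch of a partner cost-benefit tally, loosely in the
# spirit of a worksheet like Exhibit 6.1; all line items and dollar
# figures are hypothetical.
costs = {
    "staff hours supervising students (40 hrs x $25)": 40 * 25.0,
    "orientation and training materials": 150.0,
}
benefits = {
    "service hours delivered (300 hrs x $20 est. value)": 300 * 20.0,
    "grant funds secured with student assistance": 2000.0,
}

net = sum(benefits.values()) - sum(costs.values())
ratio = sum(benefits.values()) / sum(costs.values())
print(f"Net benefit: ${net:,.2f}; benefit-cost ratio: {ratio:.2f}")
```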

In summary, evaluation and assessment of service-learning from the community perspective should make results meaningful for the community audience, report at least some results as quickly as possible, determine the organization's cost-benefit ratio, and be sure that the evaluation reveals something that is not already known (Dewar, 1997). Ideally, a review of the assessment results will enable both partners to make adjustments that will increase the likelihood of positive results in future assessments. It is important to note that assessment results may indicate that the campus-community partnership, like any other partnership, simply is not working and that the partners should consider whether it would be best to terminate it.

Sources of additional information

  1. Dewar, T. (1997). A Guide to Evaluating Asset-Based Community Development: Lessons, Challenges, and Opportunities. Chicago, IL: ACTA Publications.
  2. Scheibel, J., Bowley, E.M., & Jones, S. (2005). The Promise of Partnerships: Tapping into the College as a Community Asset. Providence, RI: Campus Compact.

6.6 How should service-learning partnerships be assessed?

As detailed in Chapter Three, campus-community partnerships are complex and interdependent systems (Sigmon, 1996). By their very nature, they are subject to both evolution and sudden change as a result of fluctuations in resources, clients and their needs, and economic and environmental circumstances (Gelmon, 2003). As Sherril B. Gelmon reminds us, “it is useful to view partnerships from a systems perspective, recognizing that a change affecting any partner is likely to have an impact on multiple aspects of the partnership” (2003, p. 44). In addition, based on the distinction between transactional and transformative partnerships proffered by Enos and Morton in 3.9, campus-community partnerships have the potential to transform individuals, communities, organizations, and institutions for the better. In a transformative partnership, the partners open themselves to the continuing possibility of being transformed in large and small ways (Enos & Morton, 2003). As a result, as partnerships advance from the immediate, transactional stage to open themselves to the potential of transformation, it is important to include the partnership as a unit of analysis in the overall assessment plan for service-learning.

As a partnership moves toward the transformative, a new entity may be created, such as the charter school developed as a result of a partnership among a university's college of education, the county school system, and the state education commission, described in 3.9. Besides the creation of the new charter school, the transformative partnership sought to increase the number of successful applicants from county residents to the university, to increase the number of graduates of the university's college of education who became teachers in low-performing schools, and to provide opportunities for students in service-learning courses in a wide range of disciplines to deepen their understanding of the effects of educational inequity while enhancing their own learning of course content. Assessment in this case would attempt to determine the degree to which the desired outcomes were achieved for each partner as well as the success of the partnership itself. Assessment would include the extent to which the partnership contributed to each partner's attainment of mission and goals, increased the capacity of each partner to serve its constituents, increased the knowledge and skills of the faculty and staff involved, brought new energy to the partner organizations, and increased access to human, fiscal, information, and physical resources. In addition, assessment using the partnership as the unit of analysis would examine the attainment of the partnership's mission and goals—in this case, the charter school—the large-scale social and economic benefits of the partnership, satisfaction with the relationship, potential changes that could lead to greater success, and future possibilities for further growth and transformation.

Sources of additional information

  1. Gelmon, S.B. (2003). Assessment as a means of building service-learning partnerships. In B. Jacoby (Ed.), Building Partnerships for Service-Learning. San Francisco, CA: Jossey-Bass.
  2. Gelmon, S.B., Holland, B.A., Driscoll, A., Spring, A., & Kerrigan, S. (2001). Assessing Service-Learning and Civic Engagement. Providence, RI: Campus Compact.

6.7 What should faculty assessment consist of in regard to service-learning?

How Should We Assess the Quality of Service-Learning Teaching?

What Do We Need to Know About the Impact of Service-Learning on Faculty?

How Should We Assess the Benefits and Costs of Service-Learning for Faculty?

In the context of service-learning, the focus of faculty assessment is generally construed as examining the quality of the teaching of service-learning courses. However, it is also important to assess how service-learning affects teaching, learning, and scholarship. Assessment of faculty perceptions of service-learning can inform efforts to increase the satisfaction of service-learning faculty and the number of faculty members who integrate it into their teaching.

Several of the methods described in 6.2 apply to the assessment of the quality of service-learning teaching. One of the simplest is using document review to analyze service-learning syllabi or faculty curriculum vitae. Syllabus analysis is straightforward and is necessary at institutions where there are criteria for service-learning course designation, as described in 4.10. A checklist can be devised for syllabus analysis based on the course designation criteria or, if there is no official service-learning designation, on the Principles of Good Practice for Service-Learning Pedagogy (Howard, 2001) in Exhibit 4.1 or on the list of unique elements of a service-learning syllabus that can be found in 4.7. Analysis of curriculum vitae can be used to learn the extent of faculty engagement with service-learning and, when compared with tenure and promotion results, the effects of service-learning on professional advancement. Criteria for analysis of curriculum vitae might include evidence of integration of service-learning into courses and independent studies, grants based on community-based research or teaching, professional presentations and publications, and community projects, presentations, or reports.

Classroom observation can be an effective means of learning about teaching techniques that can be used to improve service-learning teaching, both for the individual observed and for others. The use of trained peer evaluators is common practice and is readily applicable to service-learning. Experienced service-learning faculty members, equipped with a checklist and questions to use in preparing a narrative report, can gather information related to such topics as use of class time devoted to lecture, group work, reflection, and discussion; relationships between the faculty members and the students; quality of student participation and engagement; and connections between academic content and community experiences.

Student evaluations of teaching can also be useful indicators both to individual faculty members for use in improving their courses and in aggregate form to assess the effectiveness of service-learning teaching in general. Because standard faculty evaluation surveys do not usually provide enough specific feedback about the unique aspects of service-learning, it is wise to supplement these evaluations with questions specifically related to teaching in the context of service-learning. Such questions can either be in the form of a survey or incorporated into reflection.

Because faculty members are such key players in the service-learning enterprise, it is essential to assess the impacts of service-learning on faculty. Understanding their perceived benefits and challenges can facilitate efforts to motivate faculty members to practice service-learning and to sustain them so that they remain involved in the work. Given the time and effort a comprehensive assessment would entail, it is appropriate to select one or two most salient areas to assess at a time. Areas to consider for assessment of the effects of service-learning on faculty, together with sample questions, include:

  • Student learning. Do students learn more academic content? Does service-learning take too much time away from content learning? Do students improve skills? Do they increase their understanding of community issues? Do they learn how the discipline can address social issues? Do you find it too challenging to assess student learning through service-learning?
  • Enhancement of teaching. Does service-learning improve your teaching? Does it encourage you to try other new pedagogies? Do you enjoy teaching more? Less? Does it improve your relationship with students? Does it help you understand your professional strengths and weaknesses? Does it take too much of your time?
  • Effects on scholarship. Does service-learning help clarify focus areas for your scholarship? Does it open up possibilities for developing research partnerships with the community? What have you learned by working with the community? Does service-learning distract you from your research?
  • Effects on career. How do colleagues perceive your work with service-learning? Do your colleagues advise you to concentrate on traditional teaching and scholarship? Do you expect service-learning to enhance or detract from your professional portfolio?
  • Professional development and support. What topics need to be addressed in faculty development? What kinds of support have you found helpful? What additional supports are necessary?

As for methods, most of those described in 6.2 would be useful in assessing the impacts of service-learning on faculty. It is common practice for the service-learning center to administer, at a minimum, a brief survey to service-learning faculty at the end of each semester. It is beneficial to supplement such surveys with interviews, focus groups, or faculty members' reflective journals to address particular areas of concern that may be identified by analyzing the survey results. It is also worthwhile to consider assessing faculty who are not involved in service-learning to learn what they know about service-learning, what might motivate them to consider adopting it for one of their courses, and what has deterred them from using it.

Source of additional information

  1. Gelmon, S.B., Holland, B.A., Driscoll, A., Spring, A., & Kerrigan, S. (2001). Assessing Service-Learning and Civic Engagement. Providence, RI: Campus Compact.

6.8 What assessment should be done at the institutional level?

How Can We Assess the Benefits to the Institution of Service-Learning?

How Should We Assess Our Institutional Infrastructure to Support Service-Learning? Our Institutional Commitment?

How Can We Identify the Internal Obstacles to Service-Learning?

There are several reasons why assessment of service-learning from the institutional perspective is a critical aspect of the overall assessment plan. Service-learning is always strongly influenced by the institutional environment (Gelmon, Holland, Driscoll, Spring, & Kerrigan, 2001). It cannot achieve its substantial potential benefits for students, faculty, the community, and the institution, as enumerated in 1.4, without institution-level commitment and support. Because broad institutional support is required, administrators, board members, and funders need to understand its benefits. They also need to know what barriers have to be addressed. In addition, service-learning initiatives cannot survive and thrive if they are isolated in one corner or pocket of the campus. As 3.6 illustrates, successful, sustainable service-learning should be institutionalized so that it becomes an ongoing, legitimate, and valued element of the institution's organizational culture.

Several frameworks exist that can be used for assessment of the degree of institutionalization of service-learning. The Self-Assessment Rubric for the Institutionalization of Service-Learning in Higher Education (Furco, 2002) is comprehensive and straightforward. It is based on five dimensions, each of which has several components: philosophy and mission of service-learning, faculty support and involvement, student support and involvement, community participation and partnerships, and institutional support. The rubric comprises three stages of institutionalization: critical mass building, quality building, and sustained institutionalization. The components of each of the five dimensions are listed in Exhibit 6.2. The entire rubric, together with instructions for its use, is available at http://talloiresnetwork.tufts.edu/wp-content/uploads/Self-AssessmentRubricfortheInstitutionalizationofService-LearninginHigherEducation.pdf.

Based on the work of Barbara A. Holland, another rubric for assessing institutional levels of commitment to service-learning appears as Exhibit 6.3. It includes seven aspects of commitment—mission; promotion, tenure, and hiring; organizational structure; student involvement and curriculum; faculty involvement; community involvement; and campus publications. The rubric is scored using four levels of relevance to institutional mission: low relevance, medium relevance, high relevance, and full integration (Gelmon, Holland, Driscoll, Spring, & Kerrigan, 2001).

The Council for the Advancement of Standards in Higher Education (CAS) promulgates national standards for use in institutional self-assessment of programs and services, including service-learning. The CAS framework consists of standards and guidelines for the assessment of the overall service-learning program's mission, program, leadership, human resources, ethics, legal responsibilities, equity and access, diversity, organization and management, campus and external relations, financial resources, technology, facilities and equipment, and assessment and evaluation. CAS also offers self-assessment guides that include a comprehensive self-study process for program evaluation (Council for the Advancement of Standards, 2013).

Institutions wishing to conduct assessment from the broad perspective of community engagement, which includes but goes well beyond service-learning, should consider using the framework promulgated by the Carnegie Foundation for the Advancement of Teaching for its Elective Community Engagement Classification. Even if an institution decides not to apply to be recognized under the classification, the framework provides an excellent tool for the purpose of self-assessment. Although it does not include a rubric, responding to the open-ended questions, which requires both data and narrative, would create a thorough case study of the institution's commitments, activities, and outcomes related to community engagement (Carnegie Foundation for the Advancement of Teaching, 2013b).

Another instrument useful for institutional self-assessment is the annual Campus Compact Member Survey. Conducted each year since 1986, the survey collects data on student and faculty engagement in service and service-learning, campus infrastructure, faculty roles and rewards, and alumni engagement. The executive summaries published by Campus Compact based on annual survey data are helpful for benchmarking an individual institution's results against national data.

In conducting institution-level assessment of the benefits of service-learning to various stakeholders, obstacles to its further development, and degree of institutionalization, most of the methods described in 6.2 are applicable. It is important to note again here that there are multiple sources of existing data to tap in the process of completing the instruments and rubrics described above. Among these are institutional publications and online presence, mission and goal statements, annual and accreditation reports, strategic plans, student application essays, media reporting, catalogs and course schedules, budget documents, and institutional policies.

Sources of additional information

  1. Campus Compact. (2013d, July). Statistics. www.compact.org/about/statistics.
  2. Carnegie Foundation for the Advancement of Teaching. (2013b). Elective Community Engagement Classification. First-Time Classification Documentation Framework. http://classifications.carnegiefoundation.org/downloads/community_eng/first-time_framework.pdf.
  3. Council for the Advancement of Standards in Higher Education. (2009). Service-learning programs: CAS standards and guidelines. In CAS Professional Standards for Higher Education (7th ed.). Washington, DC: Council for the Advancement of Standards in Higher Education.
  4. Council for the Advancement of Standards in Higher Education. (2013, July). The CAS self-study process. www.cas.edu/index.php/about/applying-cas/.
  5. Furco, A. (2002). Self-Assessment Rubric for the Institutionalization of Service-Learning in Higher Education. http://talloiresnetwork.tufts.edu/wp-content/uploads/Self-AssessmentRubricfortheInstitutionalizationofService-LearninginHigherEducation.pdf.
  6. Gelmon, S.B., Holland, B.A., Driscoll, A., Spring, A., & Kerrigan, S. (2001). Assessing Service-Learning and Civic Engagement. Providence, RI: Campus Compact.

6.9 What are the challenges of service-learning assessment? How can we address them?

What Are the Logistical Considerations?

There is no doubt that adding assessment to the many responsibilities that service-learning entails can seem overwhelming. Some of the common challenges in assessing service-learning are:

Time. 

Planning, designing, implementing, analyzing, and reporting assessment take considerable time and energy. Faculty, student affairs professionals, service-learning center staff, and community partners often tell me that they find it difficult to even think about adding ongoing assessment to their already heavy workloads, and, as a result, assessment is likely to be relegated to low priority (Gelmon, 2003). I remind them that some assessment is better than none. Keeping assessment simple and manageable is key. Developing an assessment plan that establishes priorities for assessment, starting small, and using existing data are ways to begin the process of developing an assessment database. Another strategy is to start with the ideal design for your assessment and then work backward to what is possible.

Resources. 

As mentioned in 6.3, developing or purchasing assessment instruments; administering qualitative assessment through interviews, focus groups, and observation; and scoring and data analysis can be expensive. It is essential to build the costs of annual assessment into the budget process and to demonstrate its value to those who make financial decisions. Collaborating with individuals in other campus departments might yield benefits, such as the ability to add items to institutional data-collection efforts and free consultation regarding the design and implementation of assessment. Expertise is also an issue. Individuals who manage service-learning centers and teach service-learning courses are not necessarily skilled in assessment design, methods, and analysis. Faculty who are familiar with assigning grades do not necessarily know how to produce data that an institutional assessment plan or accreditation self-study requires.

In addition to assistance from professionals in the institutional assessment office, student affairs, the provost's office, and the center for teaching and learning, graduate or undergraduate students may be able to provide training and support. Students in areas such as research methods, education, or nonprofit administration may be able to take on various aspects of service-learning assessment as individuals or as a class, perhaps even as a service-learning project. Federal Work-Study students earning community-service wages may also be trained to assist with data collection and analysis. Community partner organizations may have staff who can assist with assessment or access to other resources. Community members may be willing to help with assessment tasks in exchange for wages, training, or other benefits such as tuition waivers. Faculty members may find that doing assessment is valuable for their own scholarly interests as well as for inclusion in portfolios for tenure and promotion. Happily, more journals of disciplinary associations and journals on the scholarship of teaching and learning are publishing articles on the scholarship of service-learning pedagogy. Hiring external assessment consultants can provide expertise and an objective perspective, reduce the workload on service-learning educators, and lend the findings greater credibility and respect. However, external consultation is costly, may reduce your level of control over the content and process of the assessment, and may prevent you from developing your own capacity to conduct service-learning assessment (Steinberg, Bringle, & Williams, 2010).

Design Issues. 

In addition to the general challenges of assessment, there are issues related to assessment design particular to service-learning. These include inconsistency of how service-learning is defined and practiced, the wide range of settings and experiences, the non-specific nature of reflection, the difficulty of determining causality, and the need to assess a range of individuals and organizations, including students, faculty, community partners, community organizations, the campus-community partnerships, and the institution. The lack of comparison groups and reliance on self-report data are additional issues to contend with. Extensive longitudinal studies would be required to measure effects over time (Jacoby, 2013). Developing and implementing a simple, incremental assessment plan and noting its potential limitations go a long way toward preventing these issues from becoming roadblocks to assessment.

Some service-learning assessment methods are highly subjective, including interviews, focus groups, and observations. For example, two interviewers may ask the same questions but receive different answers, because one interviewer came across as friendlier or more open than the other. In addition, many factors can potentially lead to a particular result, making it difficult to isolate causes or impacts. Data reporting should at least acknowledge these issues (Hatcher & Bringle, 2010).

Another design challenge is the use of pre-post assessments. In a pre-post design, the evaluator measures variables of interest before and after an experience. In the case of service-learning, student participants in a course or semester-long cocurricular experience would respond to the same questions at the beginning and end of the semester. However, as noted above, it is not possible to demonstrate that any changes were actually the result of participation in a particular experience. Further, service-learning can create dissonance for students. Students can enter into a service-learning experience thinking that they know or understand an issue or situation and, once the experience is over, find that they have questions or doubts about what they thought they knew or understood. In these cases, self-report scores on the pre-experience assessment might be higher than those on the post-assessment.
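
For those who do use a pre-post design, the comparison itself is straightforward to compute, as in this minimal sketch with hypothetical self-report scores; note that a significance test cannot, by itself, attribute any change to the service-learning experience.

```python
# A minimal sketch of a pre-post comparison using a paired t-test;
# the self-report scores (1-5 scale) are hypothetical.
from scipy import stats

pre = [3.0, 3.5, 4.0, 2.5, 3.0, 4.5, 3.5]
post = [3.5, 3.0, 4.5, 3.0, 3.5, 4.0, 4.0]

t_stat, p_value = stats.ttest_rel(post, pre)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A negative mean change may reflect the dissonance effect described
# above rather than a loss of learning.
```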

Protecting Confidentiality. 

An important consideration is how you will protect the rights and confidentiality of participants in assessment. It is possible that respondents may be more open if your procedure is anonymous (i.e., when those who administer the assessment do not know the identity of the participants) or confidential (i.e., where the evaluators know the identity of participants but restrict access to that information). Forms that participants complete and sign to acknowledge their consent to participate in the assessment or to provide their contact information so they can receive the results of the study should be carefully separated from the data collected. You should determine early whether you will need to go through an ethics board or institutional review board. Routine course and program evaluations generally do not require approval from the institutional review board if the results are used internally and not generalized or published. However, it is always wise to inquire before proceeding (Steinberg, Bringle, & Williams, 2010).

Access to Information. 

The use of existing sources of information may require access to institutional documents and data, either in print or virtual, that you may need permission to view and use. Access to some data collected from individuals may be subject to confidentiality restrictions, as described immediately above. You may need to write justifications to gain access to institutional data or request assistance from the appropriate administrator. You should consult your community partner about potential issues of access and confidentiality related to the clients, the community site, or organizational records.

Sources of additional information

  1. Hatcher, J.A., & Bringle, R.G. (2010). Developing your assessment plan: A key component of reflective practice. In B. Jacoby & P. Mutascio (Eds.), Looking In, Reaching Out: A Comprehensive Guide for Community Service-Learning Professionals. Boston, MA: Campus Compact.
  2. Seifer, S.D., Holmes, S., Plaut, J., & Elkins, J. (2002, 2009). Tools and Methods for Evaluating Service-Learning in Higher Education. https://www.nationalserviceresources.gov/tools-and-methods-evaluating-service-learning-higher-education.
  3. Steinberg, K.S., Bringle, R.G., & Williams, M.J. (2010). Service-Learning Research Primer. Scotts Valley, CA: National Service-Learning Clearinghouse. http://csl.iupui.edu/doc/service-learning-research-primer.pdf.

Conclusion

This chapter has described why service-learning assessment is important, the range of forms and methods of assessment, considerations regarding prioritizing assessment topics and selecting appropriate strategies, and how to address the attendant challenges. It also provides specific information about assessment of curricular and cocurricular service-learning from the perspectives of students, faculty, community partners, the campus-community partnership, and the institution. Question 7.7 discusses how to use assessment results to build internal and external support for service-learning, and 9.1 offers a research agenda designed to secure the future of service-learning on the institutional and national levels.
