A quality college continually gauges its progress toward its goals so it can ensure that it will achieve those goals—arrive at its destinations—safely and on time, or so it can make adjustments if warranted. It also continually gauges its success in fulfilling the other responsibilities discussed in Chapter 5. This chapter reviews some of the many potential gauges of a college’s success. (See Chapter 6 for information on gauging success in ensuring your college’s health and well-being and in deploying resources effectively and efficiently.) This chapter provides examples, not recommendations; your measures must fit your college’s purpose, values, goals, and stakeholder needs, and the examples here may not.
Many stakeholders are particularly interested in evidence that your college uses student tuition and fees effectively, prudently, and efficiently to help students achieve their goals. The most important gauges of student success concern student learning, which is discussed later in this chapter. But many stakeholders also want to see information such as the following:
Student experiences and outcomes vary considerably within any college, so students may want answers to these questions broken down by their program of study or type of college experience (say, for working adults or online students).
Government policymakers want to know how effectively your college meets the needs of today’s students, while prospective students and their families want to know about “fit”: how effectively your college meets the needs of students like them. They look for information such as the following:
Government policymakers and employers want to know how effectively your college contributes to the regional or national economy, specifically information such as the following:
Potential gauges of contributions to the public good depend on your college’s goals, but two examples include:
College goals vary widely; only very rarely do I see any two colleges share even similar goals. Because of this, there is no neat list of measures for college goals or other aspects of institutional effectiveness. (Measures of student learning outcomes, discussed in the next section, do generally fall into a few common categories.) The preceding sections of this chapter offer some examples of potential dashboard indicators for college goals; Table 13.1 offers some additional examples.
TABLE 13.1. Examples of Dashboard Indicators for Some Hypothetical College Goals
| Goal | Potential Dashboard Indicators |
| --- | --- |
| Provide a student-centered environment. | Responses to relevant questions on the National Survey of Student Engagement (NSSE) or Community College Survey of Student Engagement (CCSSE) |
| Students engage in active learning. | Proportion of faculty participating in professional development opportunities to incorporate active learning in their classes |
| | Proportion of students participating in field experiences such as internships, practicums, and service learning |
| Increase the diversity of the college community. | Student, faculty, and staff profiles |
| Strengthen the faculty profile. | Proportion of faculty holding degrees in fields appropriate to what they are teaching |
| | Proportion of faculty using learning-centered teaching strategies |
| | Proportion of classes taught by full-time faculty |
| Promote research and scholarship. | Support for research and scholarship (funds, space, sabbaticals, etc.) |
| | Number of peer-reviewed research publications and presentations |
A quality college continually gauges not only its progress toward its goals but also its students’ progress toward its learning outcomes. Student learning assessment is really about answering just the following questions:
Thanks in part to the learning-centered movement discussed in Chapter 12 and to pressures from accreditors and other quality assurance agencies, higher education has seen considerable progress in efforts to assess student learning, including the following:
Because of this wealth of resources, this chapter provides just a 30,000-foot overview of options for student learning assessment, not detailed information.
Rubrics (Walvoord & Anderson, 2010) are simply rating scales or scoring guides used to evaluate student work. They can be used to evaluate virtually any evidence of student learning short of an objective multiple-choice test. Exhibit 15.1 in Chapter 15 is one example of a rubric. Exhibit 18.2 in Chapter 18 is another example, with a different format, used to appraise a college rather than student learning. For more examples, simply do an online search; plenty will pop up, no matter what the subject or competency. Increasingly popular are the Valid Assessment of Learning in Undergraduate Education (VALUE) rubrics developed by AAC&U (www.aacu.org/value/index.cfm), which assess the association’s LEAP learning outcomes that I discuss in Chapter 11.
Rubrics—and the term “rubric”—now pervade higher education (Kuh, Jankowski, Ikenberry, & Kinzie, 2014). Fifteen years ago, few faculty were using rubrics to evaluate student learning. Today, many if not most faculty do, and they almost always talk enthusiastically about them. It is one of the great success stories of the assessment movement.
Rubrics are the most useful tool we have to assess most learning. If a rubric is given to students with an assignment, students understand faculty expectations and standards . . . and therefore often do a better job learning—and demonstrating—what faculty want. Rubrics help faculty evaluate student work more consistently and fairly. Rubrics can make the grading/evaluation process go faster, because faculty do not need to write as many individual comments on student papers. Completed rubrics give students a clear sense of their strengths and areas for improvement.
Rubrics are not a panacea, however. They are not appropriate with small numbers of students; average scores will bounce up and down too much from one cohort to the next to be meaningful. While scoring student work is far more consistent with a rubric than without one, it is still subjective, and you will not get it right the first time you use one.
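The small-cohort caveat can be made concrete with a quick back-of-the-envelope calculation. The figures below (a 4-point rubric, a score standard deviation of 0.8) are illustrative assumptions, not numbers from the text; the point is simply that the cohort average stabilizes only as cohort size grows.

```python
# Hypothetical illustration: why average rubric scores for small cohorts
# "bounce" from one year to the next. Assuming individual scores on a
# 4-point rubric spread with a standard deviation of about 0.8, the
# standard error of the cohort mean shrinks as the cohort grows.
import math

def standard_error(sd: float, n: int) -> float:
    """Standard error of the mean for a cohort of n scores."""
    return sd / math.sqrt(n)

sd = 0.8  # assumed spread of individual rubric scores (illustrative)
for n in (5, 25, 100):
    se = standard_error(sd, n)
    # Roughly 95% of cohort means fall within about +/- 2 standard errors.
    print(f"n={n:3d}: cohort average typically varies by about +/-{2 * se:.2f} points")
```

With five students, the cohort average can easily swing by nearly three-quarters of a point on a 4-point scale purely by chance, which is why year-to-year changes in small programs rarely signal a real shift in learning.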
Assessment information management systems and published rubrics have sometimes led faculty to feel pressured to use a particular rubric or to format their rubrics in a particular way. My response is that rubrics have no rules: you can choose from a variety of formats, and there are several ways to arrive at an overall score (Suskie, 2009).
While rubrics are among the easiest and fastest student learning assessment strategies we have, they can nonetheless be a challenge to develop and use across courses or programs. Faculty need to agree on the rubric (not an easy task!) and often need to go through some practice sessions in order to interpret and apply the rubric consistently and fairly. Then there are the logistical details to be worked out. What examples of student work will be collected? How and when will they be scored? How will the results be shared? The larger and more complex your college—the more courses, adjunct faculty, and locations—the greater the logistical challenges. There is no one best answer to these challenges, because what works will depend on campus culture and organization. Assessment information management systems, discussed in Chapter 18, can help, if you choose the best system for your needs.
Published instruments have gained increasing attention over the last fifteen years or so. Most fall into one of four broad categories:
The main advantage of published instruments is that they get you out of the ivory tower; they let you see how your students compare against their peers (Chapter 15). A key concern about published instruments is their usefulness in identifying and making improvements in teaching and learning. In a survey by the National Institute for Learning Outcomes Assessment (NILOA), for example, roughly 80 percent of colleges reported that “[published] test results were not usable for campus improvement efforts” (Jankowski, Ikenberry, Kinzie, Kuh, Shenoy, & Baker, 2012, p. 13). Locally designed student learning assessments are often a better fit with a college’s goals.
Portfolios (Light, Chen, & Ittelson, 2011) are collections rather than single examples of student learning, often evaluated using a rubric. Effective portfolios are learning experiences as well as assessment opportunities; students develop skills in synthesis and integration by choosing items for the portfolio and reflecting on them (Suskie, 2009). A decade ago, portfolios were typically stored in cumbersome paper files, but today a variety of assessment information management systems can store them as electronic portfolios or “e-portfolios.”
Portfolios are rich sources of evidence of student learning. They are great for programs with small numbers of students and for individualized curricula in which students design their own programs of study and set their own learning outcomes. If they include early examples of student work, they can provide evidence of student growth and development. Their key drawback is the time needed to review even a relatively small portfolio. I suggest a score-as-you-go approach: as faculty grade each item that will go in the portfolio, they can concurrently complete a simple rubric or other analysis to be included in portfolio records.
Reflective writing, in which students reflect on what and how they have learned, is a great choice for assessing many attitudes, values, and dispositions that students could fake on tests, surveys, and graded papers. It promotes students’ skills in synthesizing what they have learned and can thus help prepare them for lifelong learning. I am a big fan of reflective writing, because it can reveal not only what students have learned but why. My favorite reflective writing tool is the “minute paper” (Angelo & Cross, 1993), so named because students should complete it in no more than one minute. I ask students to share the one most important or meaningful thing they have learned and the one question uppermost in their minds. Their replies have transformed my teaching in ways that rubrics or rating scales cannot. Reflective writing is qualitative evidence of student learning and can be analyzed using qualitative research methods (Creswell, 2012).
Local tests and examinations—multiple-choice or essay—have the advantage of being designed by your college’s faculty, so the results are more likely to be relevant and useful than the results from published tests. Their key shortcoming is that, because faculty are typically not assessment experts, local tests can be poorly designed and written, with confusing items or too many questions addressing basic content knowledge rather than thinking skills.
Surveys and self-ratings can add insight, especially regarding attitudes, values, and the impact of out-of-class experiences, all situations for which tests and rubrics may be inappropriate. Their main shortcoming is that, because survey evidence is self-reported rather than observed, it is generally insufficient evidence of student learning. Another concern is survey fatigue (Jankowski, Ikenberry, Kinzie, Kuh, Shenoy, & Baker, 2012; McCormick, Gonyea, & Kinzie, 2013): people today can feel deluged with surveys and therefore disinclined to respond to yet one more.
Grades provide some basic information on student learning; you know you have a problem if most of your students are failing. But grades alone are insufficient evidence of student learning. There are several reasons why (Walvoord & Anderson, 2010), but here is the big one: a course, assignment, or test grade alone is too global to tell you what students have and have not learned. If a student earns a B on a midterm exam, for example, you know that he learned some things well, or he would have earned a C or D, and that he did not learn some things, or he would have earned an A. But the grade alone tells you nothing about what the student has and has not learned well. The evidence that went into the grade, however—scores for each test item and rubric criterion—is more meaningful and useful evidence of student learning.
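The point that grades are too global can be sketched with a small invented example. The criterion names and scores below are hypothetical, not from the text: three students earn the identical total on a 4-point rubric, yet only the criterion-level averages reveal the shared weakness.

```python
# Invented example: the same overall score can hide exactly what the class
# has and has not learned. Criteria and scores are hypothetical.
rubric_scores = {
    "Student A": {"thesis": 4, "evidence": 3, "organization": 3, "mechanics": 2},
    "Student B": {"thesis": 3, "evidence": 4, "organization": 3, "mechanics": 2},
    "Student C": {"thesis": 3, "evidence": 3, "organization": 4, "mechanics": 2},
}

# Every student earns the same total (12 of 16) -- say, a "B".
totals = {name: sum(scores.values()) for name, scores in rubric_scores.items()}
print(totals)  # {'Student A': 12, 'Student B': 12, 'Student C': 12}

# Averaging by criterion, however, pinpoints the class-wide weakness.
criteria = next(iter(rubric_scores.values())).keys()
for criterion in criteria:
    avg = sum(s[criterion] for s in rubric_scores.values()) / len(rubric_scores)
    print(f"{criterion}: class average {avg:.2f}")
# mechanics averages 2.00 while every other criterion averages above 3 --
# a pattern the grade of "B" alone would never surface.
```

The same logic applies to test items: item-level scores, not the total, tell faculty which outcomes need attention.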
There are some who argue that focusing on concrete measures of student learning takes the soul out of higher education, focusing on what is easily quantified at the expense of more important aims that are harder to assess. I share their concern. I worry, for example, that the push for online education and career readiness will lead to a focus on specific competencies, such as analyzing data or citing evidence correctly, at the expense of other aims of a traditional college education, such as thoughtful reflection on works of art or compassion for others. A world where these traits are a rarity would be a dismal place.
There are ways to deal with the ineffables, however: