CHAPTER 13
GAUGING SUCCESS

A quality college continually gauges its progress toward its goals so it can ensure that it will achieve those goals, arriving at its destinations safely and on time, or so it can make adjustments if warranted. It also continually gauges its success in fulfilling its other responsibilities, which are discussed in Chapter 5. This chapter reviews some of the many potential gauges of a college’s success. (See Chapter 6 for information on gauging success in ensuring your college’s health and well-being and in deploying resources effectively and efficiently.) This chapter provides examples, not recommendations; your measures must fit your college’s purpose, values, goals, and stakeholder needs, and the examples here may not.

Gauging Student Success

Many stakeholders are particularly interested in evidence that your college uses student tuition and fees effectively, prudently, and efficiently to help students achieve their goals. The most important gauges of student success concern student learning, which is discussed later in this chapter. But many stakeholders also want to see information such as the following:

  • Proportion of students who achieve their educational goals, such as earning a degree or certificate, progressing to more advanced study elsewhere, or simply boosting skills (Examples of measures include student retention and graduation rates, such as the proportion of degree-seeking students still enrolled one year after entering and the proportion of students who, three years after entering a community college, have earned an associate’s degree or are still enrolled in college, either at your college or elsewhere.)
  • Proportion of students who achieve their educational goals in a timely fashion (Students who take an extra year to achieve their educational goals may pay more in tuition and fees, plus lose a year of the higher earnings they anticipate.)
  • Proportion of students who, after achieving their educational goal, are in the jobs for which they prepared
  • Ratio of the tuition and fees students pay to what they earn after achieving their educational goals
  • Debt-to-income ratio, comparing graduates’ student loan debt against their earnings (a simple illustration of these two ratios follows this list)
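
To make the last two ratios concrete, here is a minimal sketch with purely hypothetical figures; the function names and dollar amounts are illustrative assumptions, not drawn from any particular college’s data or from a standard reporting formula.

```python
# Minimal sketch of the two ratios above, using hypothetical figures.

def cost_to_earnings_ratio(total_tuition_and_fees: float, annual_earnings: float) -> float:
    """Total tuition and fees paid, divided by annual earnings after achieving the goal."""
    return total_tuition_and_fees / annual_earnings

def debt_to_income_ratio(student_loan_debt: float, annual_earnings: float) -> float:
    """Student loan debt at graduation, divided by annual earnings after graduation."""
    return student_loan_debt / annual_earnings

# Hypothetical graduate: $38,000 in total tuition and fees, $27,000 in loans,
# and $45,000 in earnings one year after graduation.
print(round(cost_to_earnings_ratio(38_000, 45_000), 2))  # 0.84
print(round(debt_to_income_ratio(27_000, 45_000), 2))    # 0.6
```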

Student experiences and outcomes vary considerably within any college, so students may want answers to these questions broken down for their program of study or college experience (say, for working adults or online students).

Gauging Responsiveness to the Changing College Student

Government policymakers want to know how effectively your college meets the needs of today’s students, while prospective students and their families want to know about “fit”: how effectively your college meets the needs of students like them, including information such as the following:

  • Profile of the students your college aims to serve: your “target clientele,” as discussed in Chapter 9
  • Profile of the students your college actually enrolls
  • How your college helps students succeed, including what your college does to provide students with optimal learning experiences and the extent to which your college engages in research-based teaching-learning practices
  • Average debt load of students who graduate from your college or leave before graduating

Gauging Economic Development Contributions

Government policymakers and employers want to know how effectively your college contributes to the regional or national economy, specifically information such as the following:

  • Proportions of your college’s students who graduate with the types and levels of knowledge, skills, competencies, and dispositions that employers want and need (Employers may want evidence broken down by occupation.)
  • Number of graduates prepared for high-demand fields
  • Proportion of your graduates who stay and work in your region after graduating
  • Economic impact of your college—including the economic contributions of its students, employees, and their families—on your region
  • Impact of research conducted at your college on the economies of your region and the country

Gauging Contributions to the Public Good

Potential gauges of contributions to the public good depend on your college’s goals, but two examples include:

  • Number of internships at regional non-profit community agencies
  • Attendance by community residents at cultural events

Gauging Achievement of College Purpose and Goals

College goals vary widely; only very rarely do I see any two colleges share even similar goals. Because of this, there is no neat list of measures for college goals or other aspects of institutional effectiveness. (Measures of student learning outcomes, discussed in the next section, do generally fall into a few common categories.) The preceding sections of this chapter offer some examples of potential dashboard indicators for college goals; Table 13.1 offers some additional examples.

TABLE 13.1. Examples of Dashboard Indicators for Some Hypothetical College Goals

Goal | Potential Dashboard Indicators
Provide a student-centered environment. | Responses to relevant questions on the National Survey of Student Engagement (NSSE) or Community College Survey of Student Engagement (CCSSE)
Students engage in active learning. | Proportion of faculty participating in professional development opportunities to incorporate active learning in their classes
 | Proportion of students participating in field experiences such as internships, practicums, and service learning
Increase the diversity of the college community. | Student, faculty, and staff profiles
Strengthen the faculty profile. | Proportion of faculty holding degrees in fields appropriate to what they are teaching
 | Proportion of faculty using learning-centered teaching strategies
 | Proportion of classes taught by full-time faculty
Promote research and scholarship. | Support for research and scholarship (funds, space, sabbaticals, etc.)
 | Number of peer-reviewed research publications and presentations

Gauging Student Learning

A quality college continually gauges not only its progress toward its goals but also its students’ progress toward its learning outcomes. Student learning assessment is really about answering just the following questions:

  • Do you have evidence that your students are achieving your key learning outcomes?
  • Does that evidence meet the characteristics of good evidence (Chapter 14)?
  • Are you using evidence not only to evaluate individual students but also to improve what you are doing (Chapter 17)?

Thanks in part to the learning-centered movement discussed in Chapter 12 and to pressure from accreditors and other quality assurance agencies, higher education has seen considerable progress in efforts to assess student learning, including the following:

  • A growing number of books, conferences, webinars, and other resources on student learning assessment
  • The work of AAC&U described in Chapter 12, which has advanced our capacity to understand and assess general education and liberal education curricula by light years
  • Significant research papers on assessment practices and issues sponsored by NILOA (learningoutcomesassessment.org)
  • A number of published instruments, although evidence of the quality and value of some remains a work in progress
  • Commercial assessment information management systems that can make it easier to collect and make sense of evidence, as discussed in Chapter 18

Because of this wealth of resources, this chapter provides just a 30,000-foot overview of options for student learning assessment, not detailed information.

Rubrics (Walvoord & Anderson, 2010) are simply rating scales or scoring guides used to evaluate student work. They can be used to evaluate virtually any evidence of student learning short of an objective multiple-choice test. Exhibit 15.1 in Chapter 15 is one example of a rubric. Exhibit 18.2 in Chapter 18 is another example, with a different format, used to appraise a college rather than student learning. For more examples, simply do an online search; plenty will pop up, no matter what the subject or competency. Increasingly popular are the Valid Assessment of Learning in Undergraduate Education (VALUE) rubrics developed by AAC&U (www.aacu.org/value/index.cfm), which assess the association’s LEAP learning outcomes that I discuss in Chapter 11.

Rubrics—and the term “rubric”—now pervade higher education (Kuh, Jankowski, Ikenberry, & Kinzie, 2014). Fifteen years ago, few faculty were using rubrics to evaluate student learning. Today, many if not most faculty do, and they almost always talk enthusiastically about them. This is one of the great success stories of the assessment movement.

Rubrics are the most useful tool we have to assess most learning. If a rubric is given to students with an assignment, students understand faculty expectations and standards . . . and therefore often do a better job learning—and demonstrating—what faculty want. Rubrics help faculty evaluate student work more consistently and fairly. Rubrics can make the grading/evaluation process go faster, because faculty do not need to write as many individual comments on student papers. Completed rubrics give students a clear sense of their strengths and areas for improvement.

Rubrics are not a panacea, however. They are not appropriate with small numbers of students; average scores will bounce up and down too much from one cohort to the next to be meaningful. While scoring student work is far more consistent with a rubric than without one, it is still subjective, and you will not get it right the first time you use one.
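
The point about small cohorts can be illustrated with a quick simulation. This is only a sketch; the 4-point scale and the score distribution are arbitrary assumptions, chosen simply to show how much cohort averages can swing by chance alone when only a handful of students is assessed.

```python
# Sketch: average rubric scores for small cohorts bounce around by chance,
# even when the underlying program has not changed at all.
import random

random.seed(1)
SCALE = [1, 2, 3, 4]                 # a typical 4-point rubric scale (assumed)
WEIGHTS = [0.1, 0.2, 0.4, 0.3]       # hypothetical long-run score distribution

def cohort_average(n_students: int) -> float:
    scores = random.choices(SCALE, weights=WEIGHTS, k=n_students)
    return round(sum(scores) / n_students, 2)

# Five successive cohorts of 6 students vs. five cohorts of 60 students.
print([cohort_average(6) for _ in range(5)])   # small cohorts: wide swings
print([cohort_average(60) for _ in range(5)])  # larger cohorts: more stable averages
```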

Assessment information management systems and published rubrics have sometimes led faculty to feel pressured to use a particular rubric or to format their rubric in a particular way. My response is that rubrics have no rules. You can choose from a variety of formats, and there are several ways to arrive at an overall score (Suskie, 2009).

While rubrics are among the easiest and fastest student learning assessment strategies we have, they can nonetheless be a challenge to develop and use across courses or programs. Faculty need to agree on the rubric (not an easy task!) and often need to go through some practice sessions in order to interpret and apply the rubric consistently and fairly. Then there are the logistical details to be worked out. What examples of student work will be collected? How and when will they be scored? How will the results be shared? The larger and more complex your college—the more courses, adjunct faculty, and locations—the greater the logistical challenges. There is no one best answer to these challenges, because what works will depend on campus culture and organization. Assessment information management systems, discussed in Chapter 18, can help, if you choose the best system for your needs.

Published instruments have gained increasing attention over the last fifteen years or so. Most fall into one of four broad categories:

  • Subject-specific tests and examinations, such as the Major Field Tests (MFTs) (www.ets.org/mft) and the National Council Licensure Examination (NCLEX) for nurses (www.ncsbn.org/nclex.htm)
  • Tests that aim to assess the intellectual skills and competencies typically developed in general education curricula or throughout undergraduate studies, such as the Proficiency Profile (PP) (www.ets.org/proficiencyprofile) and the Collegiate Learning Assessment (CLA) (www.collegiatelearningassessment.org)
  • Surveys of student experiences, perceptions, and attitudes, such as the National Survey of Student Engagement (NSSE) (http://nsse.iub.edu) and the Your First College Year (YFCY) survey (www.heri.ucla.edu/yfcyoverview.php)
  • Rubrics such as the VALUE rubrics mentioned earlier and the American Council on the Teaching of Foreign Languages (ACTFL) proficiency guidelines (http://actflproficiencyguidelines2012.org), which define various proficiency levels in reading, writing, speaking, and listening

The main advantage of published instruments is that they get you out of the ivory tower; they let you see how your students compare against their peers (Chapter 15). A key concern about published instruments is their usefulness in identifying and making improvements in teaching and learning. In a survey by the National Institute for Learning Outcomes Assessment (NILOA), for example, roughly 80 percent of colleges reported that “[published] test results were not usable for campus improvement efforts” (Jankowski, Ikenberry, Kinzie, Kuh, Shenoy, & Baker, 2012, p. 13). Locally designed student learning assessments are often a better fit with a college’s goals.

Portfolios (Light, Chen, & Ittelson, 2011) are collections rather than single examples of student learning, often evaluated using a rubric. Effective portfolios are learning experiences as well as assessment opportunities; students develop skills in synthesis and integration by choosing items for the portfolio and reflecting on them (Suskie, 2009). A decade ago, portfolios were typically stored in cumbersome paper files, but today a variety of assessment information management systems can store them as electronic portfolios or “e-portfolios.”

Portfolios are rich sources of evidence of student learning. They are great for programs with small numbers of students and for individualized curricula in which students design their own programs of study and set their own learning outcomes. If they include early examples of student work, they can provide evidence of student growth and development. Their key drawback is the time needed to review even a relatively small portfolio. I suggest a score-as-you-go approach: as faculty grade each item that will go in the portfolio, they can concurrently complete a simple rubric or other analysis to be included in portfolio records.

Reflective writing, in which students reflect on what and how they have learned, is a great choice for assessing many attitudes, values, and dispositions that could be faked by students on tests, surveys, and graded papers. It promotes students’ skills in synthesizing what they have learned and can thus help them prepare for lifelong learning. I am a big fan of reflective writing, because it can reveal not only what students have learned but why. My favorite reflective writing tool is the “minute paper” (Angelo & Cross, 1993), so named because students should complete it in no more than one minute. I ask students to share the one most important or meaningful thing they have learned and the one question uppermost in their minds. Their replies have transformed my teaching in ways that rubrics or rating scales cannot. Reflective writing is qualitative evidence of student learning and can be analyzed using qualitative research methods (Creswell, 2012).
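
One simple starting point for that kind of qualitative analysis is tallying recurring themes across responses. The sketch below uses made-up minute-paper responses and instructor-chosen keywords; real qualitative coding is far more nuanced than keyword matching, so treat this only as an illustration.

```python
# Sketch: tally recurring themes in hypothetical minute-paper responses.
from collections import Counter

responses = [
    "The most important thing I learned was how to interpret a regression table.",
    "I finally understood regression; my question is about sample size.",
    "Choosing the right chart for the data was most meaningful to me.",
]

# Hypothetical theme keywords an instructor might choose after a first read-through.
themes = {
    "regression": "interpreting regression",
    "chart": "choosing visualizations",
    "sample size": "sampling concerns",
}

counts = Counter()
for text in responses:
    for keyword, theme in themes.items():
        if keyword in text.lower():
            counts[theme] += 1

print(counts.most_common())  # e.g., [('interpreting regression', 2), ...]
```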

Local tests and examinations—multiple-choice or essay—have the advantage of being designed by your college’s faculty, so the results are more likely to be relevant and useful than the results from published tests. Their key shortcoming is that, because faculty are typically not assessment experts, local tests can be poorly designed and written, with confusing items or too many questions addressing basic content knowledge rather than thinking skills.

Surveys and self-ratings can add insight, especially regarding attitudes, values, and the impact of out-of-class experiences, all situations for which tests and rubrics may be inappropriate. Their main shortcoming is that, because survey evidence is self-reported rather than observed, it is generally insufficient evidence of student learning. Another concern is survey fatigue (Jankowski, Ikenberry, Kinzie, Kuh, Shenoy, & Baker, 2012; McCormick, Gonyea, & Kinzie, 2013): people today can feel deluged with surveys and therefore disinclined to respond to yet one more.

Grades provide some basic information on student learning; you know you have a problem if most of your students are failing. But grades alone are insufficient evidence of student learning. There are several reasons why (Walvoord & Anderson, 2010), but here is the big one: a course, assignment, or test grade alone is too global to tell you what students have and have not learned. If a student earns a B on a midterm exam, for example, you know that he learned some things well, or he would have earned a C or D, and that he did not learn some things, or he would have earned an A. But the grade alone tells you nothing about what the student has and has not learned well. The evidence that went into the grade, however, such as the scores on each test item and each rubric criterion, is more meaningful and useful evidence of student learning.

What About the “Ineffables”?

There are some who argue that focusing on concrete measures of student learning takes the soul out of higher education, focusing on what is easily quantified at the expense of more important aims that are harder to assess. I share their concern. I worry, for example, that the push for online education and career readiness will lead to a focus on specific competencies, such as analyzing data or citing evidence correctly, at the expense of other aims of a traditional college education, such as thoughtful reflection on works of art or compassion for others. A world where these traits are a rarity would be a dismal place.

There are ways to deal with the ineffables, however:

  • Try to articulate that ineffable goal clearly, following the suggestions in Chapters 10 and 11. “Ethics” may seem impossible to measure, for example, but it can be articulated as understanding the ethical principles of the discipline and reasoning ethically—both assessable goals.
  • Use qualitative research approaches (Creswell, 2012) for tools such as reflective writing and interviews, analyzing them by looking for common themes in responses.
  • Ask how an ineffable student learning outcome is taught. What class work, homework, and assignments do students complete to help them achieve the outcome? Some outcomes, such as instilling a love of the discipline or a passion for inquiry, are really hard or almost impossible to teach, although everyone hopes students will be inspired. Do not assess what you cannot teach.