CHAPTER 18
SUSTAINING A CULTURE OF BETTERMENT

As I note at several points throughout this book, while a lot of colleges today have accrued a good deal of evidence, many are not yet using it to inform decisions and advance quality. Why not? Chapter 4 discusses the obstacles I see most frequently to a pervasive, enduring culture of betterment. Beyond the prerequisite of setting clear, justifiable targets defining success, discussed in Chapter 15, I do not have a magic answer to getting evidence off the shelf and using it for betterment. Too much depends on your college’s culture and history. But this chapter will offer you some ideas to consider.

Foster a Culture of Community

All five cultures of quality require your college community to work together to take your college on its journey. Nowhere is this more important than with the culture of betterment. Chapter 7 offers many suggestions on ways to foster a culture of community, including building cultures of respect, communication, collaboration, growth and development, and shared collegial governance. Chapter 7 also suggests offering support such as guidance, professional development, and constructive feedback.

One particularly important strategy is to empower faculty oversight of student learning assessment. Juniata College found, for example, that turning assessment from “a very top-down process that has political overtones” to one controlled by faculty makes it “really rooted and faculty-centered” (Jankowski, 2011, p. 3). The greater the role that faculty have in developing and implementing student learning assessments, the more ownership they have of the process and results, and the more likely they are to take the results seriously and use them to identify and implement advancements in quality and effectiveness. “Perhaps the surest way to ensure that [assessment measures] will be used is to involve the individuals who ought to use them in their initial selection and development” (Banta & Borden, 1994, p. 103) and have them specify “the kinds of data they will consider credible and helpful” (p. 98).

Value Efforts to Change, Improve, and Innovate

Some colleges have a thriving culture of innovation. Carnegie Mellon University’s vision, for example, is to “meet the changing needs of society by building on its traditions of innovation, problem solving, and interdisciplinarity” (n.d., para. 1). One of Excelsior College’s values is “innovation as a source of improvement” (2013, “Values,” para. 3). But many colleges do not yet have cultures valuing innovation and improvement. Concrete, tangible incentives, recognition, and rewards can help nurture such a culture (Kuh, Jankowski, Ikenberry, & Kinzie, 2014).

Incorporate college priorities into criteria for performance review, including merit pay and faculty promotion and tenure (P&T). Review criteria should value and reward work to advance your college’s quality agenda. Some examples:

  • If your college is focusing on advancing its culture of evidence, establish performance evaluation criteria for vice presidents and deans that value their effectiveness in building a culture of evidence within their units, and ensure that faculty P&T criteria value substantive faculty work on assessing student learning.
  • If your college is focusing on advancing its culture of community, establish performance evaluation and P&T criteria that value collaborative work, especially to provide a cohesive education experience.
  • If your college is focusing on advancing its culture of betterment, allow faculty and staff to stumble occasionally as they try their best to improve what they do. Establish performance evaluation and P&T criteria that reward innovation, even if those efforts are not at first successful. Encourage teaching innovation by offering faculty a semester or year of grace from student evaluations of teaching when they try implementing research-informed teaching methods.

Offer stipends, fellowships, or merit pay for extraordinary work that helps to advance your college’s quality agenda. Assessing student learning in courses and programs is simply part of teaching, of course, but above-and-beyond work, such as coordinating the assessment of the general education curriculum, deserves special recognition. “Next to disciplinary accreditation, funding from an external source may be the second most powerful incentive for turning faculty angst and even anger about assessment to acceptance, and even appreciation” (Banta, 2010, p. 3).

A great strategy to promote cultures of evidence and betterment is to offer mini-grants only to faculty who have assessed student learning and are disappointed with the results. Faculty can use the mini-grants to research, plan, and implement improvements and changes that have been suggested by evidence. Business and communication faculty at LaGuardia Community College, for example, used mini-grants to address business students’ underachievement in oral communication by incorporating new activities into introductory business courses (Provezis, 2012). Special mini-grants available only to address disappointing evidence are a wonderful counterpoint to any rumors that such evidence will be treated punitively.

Offer other recognition, such as letters of commendation or awards from college leaders, provided that these are based on fair, consistent criteria. A luncheon, a wine and cheese party, a barbecue, an event akin to a conference poster session, or another celebratory event may also be appreciated, depending on your college’s culture.

Offer other incentives. Some provosts have told me, for example, that they say to their departments, “If you haven’t submitted your assessment report, don’t give me a budget request.”

The Perfect Is the Enemy of the Good

The most frequent “use” of evidence that I see is refining the tool or process used to collect the evidence. Rubrics are tweaked to make them clearer; survey administration procedures are revised to achieve a better response rate. On one hand, this makes sense. People at colleges are very good at research, and good research often includes refining the methodology after a pilot study.

On the other hand, it is a lot easier to change rubric criteria than to use rubric results to make substantive changes to curricula and teaching methods. Research protocols do not call for endless pilot studies, and neither should colleges in their pursuit of quality. Repeated refinements of evidence-collection tools and processes can become stalling tactics, putting off the day when the evidence is used to make meaningful improvements (Blaich & Wise, 2011).

There is no perfect measure of quality. Every kind of evidence—and every approach to collecting, sharing, and using evidence—has inherent imperfections and limitations, and any one measure alone provides an incomplete and possibly distorted picture of quality. Some examples:

  • Some important goals cannot be measured meaningfully with single performance indicators. There is no single quantitative metric, for example, that will tell you how well your college is achieving its goals of improving the cultural climate of the region, linking budget decisions to institutional plans, giving students an appreciation of the arts, or graduating students with a commitment to civic engagement (Suskie, 2006).
  • While some published tests of college-level skills and competencies are promising, at this point many remain works in progress regarding their validity, as discussed in Chapter 14.
  • Retention and graduation rates alone do not tell you why students are leaving before graduating, information that is critical in determining whether the rates are adequate.

Because there is no perfect assessment measure, try to look at more than one source of evidence. If one of your goals is to strengthen your college’s financial health, for example, keep an eye on a number of financial measures and ratios.
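
As a concrete illustration, two ratios widely used in higher education finance can be monitored side by side. The sketch below is a minimal Python example, not a prescription: the figures are hypothetical, and your business officers may define the underlying inputs somewhat differently.

    # A minimal sketch of monitoring several financial measures together.
    # All figures are hypothetical; real values come from audited financial
    # statements, and definitions should be confirmed with business officers.

    def primary_reserve_ratio(expendable_net_assets, total_expenses):
        """Roughly: how many years of expenses could reserves cover?"""
        return expendable_net_assets / total_expenses

    def viability_ratio(expendable_net_assets, long_term_debt):
        """Roughly: could reserves retire outstanding long-term debt?"""
        return expendable_net_assets / long_term_debt

    # Hypothetical figures, in millions of dollars
    reserves, expenses, debt = 48.0, 120.0, 60.0
    print(f"Primary reserve ratio: {primary_reserve_ratio(reserves, expenses):.2f}")
    print(f"Viability ratio: {viability_ratio(reserves, debt):.2f}")

No single ratio is meaningful on its own; it is the pattern across several measures, tracked over time, that tells the story of your college’s financial health.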

Keep things simple and cost-effective. Imagine an accreditation review team dropping in unannounced on your college three years after your formal accreditation review. What would it find? Would the processes and evidence of quality in place at the time of the last formal review persist?

The answer lies in the cost and complexity of your college’s quality processes. Simple, cost-effective structures and processes are far more likely to be sustained, while overly complex or ambitious structures or processes, such as an unwieldy governance structure or requests for 20-page annual assessment reports, quickly collapse under their own weight.

Efforts to implement a culture of quality should yield value that justifies the time and expense put into them. Assessments of student learning should not take so much time that they detract from teaching. Independent “blind” scoring of student work is a good research practice, for example, but it is costly in terms of time and perhaps money as well as morale. Is this where your college should be deploying scarce resources? Is the added confidence in the results worth the extra cost?

Start at the end and work backward. Begin by looking at your students shortly before they graduate. If you are satisfied with their achievement of a learning outcome—say, they are writing beautifully—there is no need to drill down into their achievement of that outcome in earlier courses.

Look for the biggest return on investment. Capstone experiences are great opportunities to assess several key program learning outcomes at once. And, no matter how many general education courses your college offers, there are probably no more than 15 or 20 courses that the vast majority of students take to complete general education requirements. Start your assessment of general education learning outcomes by assessing student learning in just those 15 or 20 courses. Your evidence from this initial assessment can have a broad impact on the great majority of your students, with far less work than assessing learning in every general education course.
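
If your registrar can export enrollment records, identifying those high-traffic courses is a quick counting exercise. Here is a minimal sketch in Python; the file name and column label are hypothetical placeholders for whatever your student information system actually produces.

    # Minimal sketch: find the general education courses that enroll the most
    # students. Assumes a hypothetical registrar extract, enrollments.csv,
    # with one row per student-course enrollment and a "course" column.
    import csv
    from collections import Counter

    counts = Counter()
    with open("enrollments.csv", newline="") as f:
        for row in csv.DictReader(f):
            counts[row["course"]] += 1

    # The 20 highest-enrollment courses are a practical starting point for
    # assessing general education learning outcomes.
    for course, n in counts.most_common(20):
        print(f"{course}: {n} students")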

Do not collect more evidence than you can handle. A ten-page alumni survey will yield ten pages of results that faculty and administrators must tabulate, analyze, and discuss. A two-page survey, while not yielding the same breadth of information, will take far less time to implement, summarize, share, and use.

Minimize the reporting burden. Do all you can to keep reports on evidence, betterment, and the other dimensions of quality to a bare-bones minimum, and streamline and simplify the process of preparing them. The next section offers some specific ideas to consider.

Document Evidence

If evidence is not recorded, it cannot be shared; if it is not shared, it cannot be discussed and used (Bresciani, 2006). If records of evidence and the decisions flowing from them are not maintained, progress cannot be tracked and memory is lost. So a certain amount of recording and preparation of summaries and analyses of evidence is an unavoidable part of developing and sustaining a culture of quality.

In the beginning, templates can help everyone understand good practices. If your college is launching an effort to build a culture of evidence and betterment, templates for documenting assessment processes and for recording and sharing evidence can be useful teaching tools. At this early stage, a detailed template like the one in Exhibit 18.1 may be helpful if it is shared when people begin to plan the collection of evidence, just as a rubric helps students learn when it is shared with them at the time they receive an assignment. Assessment committees and coordinators, charged with ensuring and advancing cultures of evidence and betterment, can then use the completed templates to offer collegial feedback and support to faculty and administrators on their efforts.

As your college develops a culture of evidence and betterment and people across campus increasingly engage in good practices to collect and use evidence, scale back reports on assessment processes and focus on what is far more important: reports that share evidence and document how evidence is used for betterment. The assessment committee and coordinator can move from reviewing processes to reviewing the shared evidence and records of decisions to confirm that cultures of evidence and betterment continue to be advanced.

Consider assessment information management systems to record and manage evidence. They can simplify the analysis of evidence and preserve evidence through inevitable personnel transitions, but they can also frustrate those whose evidence does not fit with the system’s structure or who find the system’s reports less than transparent. The key is to choose a technology that will meet your college’s needs, not one that will require your practices to conform to its design and structure (Suskie, 2009). Beware of a system designed primarily to get you through accreditation; aim for one that focuses on helping you ensure and improve quality and effectiveness, with accreditation evidence as a by-product. Unless your accreditor requires that evidence be recorded in a certain way, be flexible: offer the system as a tool, but also offer the option of recording and storing evidence in alternative ways.

EXHIBIT 18.1. Sample Template for Documenting Nascent Student Learning Assessment Processes

Student learning outcome
  Outcome 1: Create organized, effective visual presentations of information and ideas
  Outcome 2: Write effectively within the discipline

How do students learn this? In what course(s) and/or co-curricular experience(s)?
  Outcome 1: In required courses ABC 105, ABC 310, and the capstone course
  Outcome 2: In every course in the curriculum

How and in what course do they demonstrate that they’ve achieved this outcome?
  Outcome 1: In the capstone course, in which they develop a visual presentation of the results of their research project
  Outcome 2: In the capstone course, in which they prepare a written report on the results of their research project

How and when do you assess the achievement of all students in your program before they graduate and record the results of your assessment?(a)
  Outcome 1: In the capstone course, using a rubric to evaluate their visual presentation
  Outcome 2: In the capstone course, using a rubric to evaluate the written report

What do you consider satisfactory achievement of this outcome? WHY?
  Outcome 1: See attached rubric. We expect all students to score at least “satisfactory” on documenting sources of information, because respect for intellectual property is a key value of our program. We expect 85% to score at least “satisfactory” on all other criteria because employers have advised us that, while they are collectively important, none alone is absolutely vital for employee success.
  Outcome 2: See attached rubric. We expect all students to score at least “satisfactory” on all criteria. We expect at least 50% to score “exceptional,” because we consider effective writing a distinctive hallmark of our program, compared to peer programs at other universities.

What are the recent results of your assessment? How many students were assessed?
  Outcome 1: See attached rubric with results. At least 85% of students scored at least “satisfactory” on all criteria except documenting sources of information, where 28% of students scored “needs improvement.”
  Outcome 2: See attached rubric with results. All students scored at least “satisfactory” on all criteria for effective writing except integrating ideas, where 10% of students scored “needs improvement.” No more than 40% of students scored “exceptional” on any criterion.

How do the results compare with your expectations for satisfactory learning? Are you satisfied with the results?
  Outcome 1: We are dissatisfied with how well our students are documenting sources of information. Achievement in all other areas meets our standards.
  Outcome 2: We are dissatisfied with how well our students are integrating ideas and with the proportion of students with “exceptional” writing skill. Achievement in all other respects meets our standards.

If you are NOT satisfied with the results, what do you plan to do to improve student learning? When will you implement changes?
  Outcome 1: We have identified assignments in three other required courses that we will modify for Spring 20__ so that they include documenting sources of information.
  Outcome 2: We have identified assignments in three other required courses that we will modify for Spring 20__ so that they require integrating ideas. By Fall 20__ we will complete a proposal for an honors track in our program that emphasizes outstanding writing.

Do you plan to modify your assessment of student achievement of this outcome? If so, how?
  Outcome 1: No
  Outcome 2: The relatively low proportion of students with “exceptional” writing may be related to the complexity of the research project. In the coming academic year, we will also evaluate writing skills in a shorter, simpler assignment in ABC 465, another required senior-level course.

(a) If it is not possible or practical to assess all students in your program before they graduate, explain why and provide the number/proportion of students assessed.
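
Once a template like Exhibit 18.1 is completed, comparing results against targets is simple arithmetic. The sketch below, in Python, shows one way to automate the comparison; the criteria, ratings, and targets are hypothetical stand-ins, not the exhibit’s actual rubric.

    # Minimal sketch: check rubric results against targets of the kind set
    # in Exhibit 18.1. All scores and targets here are hypothetical.
    scores = {  # criterion -> one rating per student assessed
        "Documents sources": ["satisfactory", "needs improvement",
                              "exceptional", "satisfactory"],
        "Integrates ideas": ["exceptional", "satisfactory",
                             "satisfactory", "satisfactory"],
    }
    targets = {  # criterion -> minimum % scoring at least "satisfactory"
        "Documents sources": 100.0,
        "Integrates ideas": 85.0,
    }
    ACCEPTABLE = {"satisfactory", "exceptional"}

    for criterion, ratings in scores.items():
        pct = 100.0 * sum(r in ACCEPTABLE for r in ratings) / len(ratings)
        verdict = "meets target" if pct >= targets[criterion] else "falls short"
        print(f"{criterion}: {pct:.0f}% at least satisfactory ({verdict})")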

Should you keep student work on file? A few specialized accreditors do require colleges or programs to keep actual student work (exams, papers, portfolios, projects, and so on) on file, but otherwise keep only what will be useful to you. It often makes sense to keep a few examples on file—a mix of top-rated, mediocre-but-acceptable, and unacceptable work—to track any shifts in your standards over time. The mediocre-but-acceptable and unacceptable samples can be useful demonstrations of your college’s rigor, should anyone ask.

Periodically Regroup and Reflect

Implementing a culture of quality is a perpetual work in progress. Your college changes; its students change; society and its needs change. I sometimes advise colleges whose communities have been working hard for a few years on some of the cultures of quality, especially the culture of evidence, to take a breather for a semester, regroup, and reflect on what has been accomplished.

  • What is going well?
  • Where have you seen the most progress?
  • What has been a struggle?
  • What has been helpful but has taken too much time, effort, or money?
  • Where has progress been slower than you anticipated?

Rubrics evaluating the status of your college’s cultures of quality can be helpful here (Fulcher, Swain, & Orem, 2012; Penn, 2012). Exhibit 18.2 is an example of one for evaluating the cultures of evidence and betterment. Consider asking a sample of faculty, administrators, and board members to complete such a rubric, based on their perceptions of what is happening throughout your college. Then repeat the review annually to gauge and document your progress in advancing the cultures of quality.
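
If respondents complete the rubric electronically, their ratings can be tallied with a short script so that year-over-year comparisons are painless. Here is a minimal sketch in Python, assuming a hypothetical CSV file with one row per respondent, one column per rubric criterion, and ratings drawn from the scale defined in Exhibit 18.2:

    # Minimal sketch: tally rubric ratings across respondents so that each
    # criterion's distribution can be compared from year to year. Assumes a
    # hypothetical file, rubric_responses.csv, with one row per respondent
    # and one column per criterion, each cell holding a rating from the scale.
    import csv
    from collections import Counter, defaultdict

    SCALE = ["No plans", "No evidence", "Nascent", "Some", "Most", "Pervasive"]

    tallies = defaultdict(Counter)
    with open("rubric_responses.csv", newline="") as f:
        for row in csv.DictReader(f):
            for criterion, rating in row.items():
                tallies[criterion][rating] += 1

    for criterion, counts in tallies.items():
        dist = ", ".join(f"{level}: {counts[level]}" for level in SCALE)
        print(f"{criterion} -> {dist}")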

EXHIBIT 18.2. Rubric to Appraise a College’s Culture of Evidence and Betterment

No plans = No documented evidence that we have plans to do this.
No evidence = Our college appears to be aware that we should do this, but there is no documented evidence that this is happening.
Nascent = We have documented evidence that this is happening in just a few areas.
Some = We have documented evidence—not just assurances—that this is happening in some but not most areas.
Most = We have documented evidence—not just assurances—that this is happening in most but not all areas.
Pervasive = We have documented evidence—not just assurances—that this is happening everywhere: all units, programs, services, and initiatives, no matter where located or how delivered.
Rate each of the following on the scale above (No plans, No evidence, Nascent, Some, Most, Pervasive):

  • Expected college-wide (strategic), unit, program, and curricular goals are clearly articulated and relevant to students and other stakeholders.
  • Targets for determining whether goals are achieved are clear, appropriate, and justifiable.
  • Evidence of goal achievement is of sufficient quality that it can be used with confidence to make meaningful, appropriate decisions.
  • Evidence is clearly linked to goals.
  • Evidence is shared in useful, understandable, accessible forms with relevant stakeholders.
  • Evidence is used to inform meaningful decisions, including resource deployment decisions, teaching and learning improvement, and goals and plans.
  • Evidence is used to assure relevant public stakeholders of the effectiveness of the college, programs, services, and curricula in meeting stakeholder needs.
  • Processes to collect and use evidence have sufficient engagement, momentum, and simplicity to assure that the cultures of evidence and betterment will remain sustained and pervasive.
This rubric uses ideas in the “Rubric for Evaluating Institutional Student Learning Assessment Processes” published by the Middle States Commission on Higher Education (2008).