6

Summative quality assurance systems: not good enough for quality enhancement

Roy Andersson, Anders Ahlberg and Torgny Roxå

Abstract:

In this chapter we scrutinise an elaborate institutional quality assurance model, with aspirations to develop a quality culture which aims to improve student learning, in order to discuss general issues of teaching and learning evaluation strategies. Our analysis suggests that summative student evaluations are useful for institutional quality assurance and quality enhancement at study programme level. However, they appear less efficient for quality enhancement at course module or subject discipline level, that is the loci of teaching and learning. To support quality enhancement of teaching and learning, iterative formative evaluation has greater potential and in order to promote an institutional quality culture, summative as well as formative student evaluations need to be in place, discussed, accepted and understood by all legitimate stakeholders (i.e. students, university teachers and institutional representatives).

Key words

teaching and learning

higher education

quality assurance

quality enhancement

summative assessment

formative assessment

Introduction

In this chapter we use an existing, elaborate student evaluation system as the background for discussing general issues of evaluation strategies. The setting is the Faculty of Engineering (Swedish acronym LTH), a research-intensive, semi-autonomous faculty within Lund University. Lund University, one of Scandinavia's largest institutions for education and research, enrols 46,000 students and has 6,000 employees; within it, LTH has around 8,000 students and 1,400 employees (LTH, 2010).

Several research groups at LTH are considered to be of high international standard, and some are world leaders, such as those in nano-technology, combustion physics, mobile communications, automatic control, laser physics and biotechnology (LTH, 2010). In addition, LTH has earned national and international attention for its theoretically underpinned (Barr and Tagg, 1995; Bowden and Marton, 1999; Biggs, 2003; Ramsden, 2005) programme to improve teaching and learning that started in the year 2000. LTH also has a reputation for taking the enhancement of student learning seriously. The core strategy of the programme is to support scholarship in teaching and learning (Boyer, 1990; Trigwell et al., 2000; Kreber, 2002) among all employees by promoting pedagogical dialogues. This is anticipated to lead to innovation, improved teaching, and better student learning: in other words, an emerging quality culture (Andersson, 2010; Harvey and Stensaker, 2008; Mårtensson et al., 2011).

Despite these efforts, the elaborate, learning-centred student evaluation system at LTH (described in detail below) does not seem to entirely match what could be expected of a teaching and learning quality culture. This indicates that making a student evaluation system successful is no straightforward task. By scrutinising the evaluation strategies enforced at LTH, we hope this chapter will help others to improve their own strategies, at both the institutional level and the level of the individual university teacher.

Terminology

Some of the terms we use vary according to context, so we begin by clarifying what we mean by some key terms:

• Course or course module: a course module taught and assessed as a stand-alone unit.

• Programme: a degree study programme (Bachelor, Master) consisting of several courses.

• Summative evaluation: an evaluation carried out after an activity/course, with the focus on summarising what has happened. It may contain data collected both during and after the activity/course.

• Formative evaluation: an evaluation carried out during an activity/course, with the focus on further enhancing student learning.

• Reporting evaluation: an evaluation done primarily to inform other (higher) organisational levels or specific stakeholders, where documentation is a central part.

• Operational evaluation: a set of formative evaluations done primarily to improve the ongoing teaching and learning activity or course. It is performed by the university teacher in collaboration with the students, with the focus on making the teacher aware of what is happening in the ongoing activity/course so that he or she can perform an 'adaptive regulation' of the teaching.

So the terms summative and formative merely indicate when an evaluation is carried out, while reporting and operational relate to the purpose of the evaluation.

Summative student evaluations

As a part of the programme to improve teaching and learning, LTH developed a uniform faculty-wide student evaluation system, which has been in place since 2003 (Warfvinge, 2003). This includes a questionnaire (web or paper), compulsory dialogue between major stakeholders (lecturers, programme directors and students) and formal public documentation of both evaluation data and comments from the stakeholders. The questionnaire is based on the theoretically and empirically underpinned course experience questionnaire (CEQ) by Ramsden (2005). Answers to 26 multiple choice questions are scored in five clusters of importance for stimulating a deep approach to learning:

• good teaching;

• clear goals and standards;

• appropriate assessment;

• appropriate workload;

• development of generic skills.

In addition, overall student satisfaction is monitored, as are answers to teacher-designed open-ended questions on aspects of the particular course module. There is currently a database of 120,000 completed CEQ forms, used for course module analyses, study programme analyses and other thematic analyses.
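The cluster scoring described above can be illustrated with a minimal sketch. We assume a five-point agreement scale mapped linearly onto −100 to +100, a common convention in CEQ reporting; the item-to-cluster assignment below is illustrative only and is not the actual CEQ key.

```python
from statistics import mean

# Illustrative CEQ-style scoring sketch. The real CEQ distributes its 26 items
# across the five clusters listed above; the item numbers here are hypothetical.
CLUSTERS = {
    "good teaching": [1, 2, 3],
    "clear goals and standards": [4, 5],
    "appropriate assessment": [6, 7],
    "appropriate workload": [8, 9],
    "development of generic skills": [10, 11],
}

def to_score(likert):
    """Map a 1-5 agreement response linearly onto -100..+100."""
    return (likert - 3) * 50

def cluster_scores(responses):
    """responses: list of dicts {item_number: likert_value}, one per student."""
    scores = {}
    for cluster, items in CLUSTERS.items():
        values = [to_score(r[i]) for r in responses for i in items if i in r]
        scores[cluster] = round(mean(values), 1) if values else None
    return scores
```

A cluster mean near +100 then indicates strong agreement across respondents, while a negative mean flags a cluster that warrants discussion at the course evaluation meeting.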

The procedure for summative student evaluations involves six steps:

1. Students fill out the CEQ form at the end of each course taken. The form is normally filled out on the web.

2. The evaluation system transforms the data into a working report, including all individual comments from the students, the enrolment figures, and the overall results from the examination.

3. The data within the working report is discussed at a mandatory meeting of the course coordinator (main course teacher), student representatives, and the programme director.

4. The course coordinator, the student representatives and the programme director independently write short comments based on the discussions at the meeting. These comments are included in the final report. This is an important part of the process to ensure the major stakeholders are given the opportunity to include their personal reflection on the evaluation in formal public documentation.

5. Statistically processed data and the major stakeholders’ comments from the discussion make up the final report. The individual comments from the students are not part of the final report.

6. The final report is then published on the LTH intranet and sent via e-mail to all students who participated in the course and to the head of department responsible for the course.

In addition, once a year, all programme directors write a summary covering all their courses in an annual report. These reports also include automatically collected data from all the final course reports in the programme (Roxå and Mårtensson, 2010). Steps 1–6 are designed to stimulate improvement of the course module and raise awareness of poor teaching and learning, so that problems are not repeated the next time the course module is taught.
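The aggregation step behind the annual programme report can be sketched as follows. The data layout, course names and the decision to summarise only overall satisfaction are assumptions for illustration; the real reports draw on the full final course reports.

```python
from statistics import mean

# Hypothetical sketch of the annual programme summary: gather the overall
# satisfaction score from each final course report in a programme and
# produce one line per course plus a programme-level mean.
def programme_summary(final_reports):
    """final_reports: list of dicts {"course": str, "satisfaction": float},
    with satisfaction on the -100..+100 scale used in the course reports."""
    lines = [f'{r["course"]}: {r["satisfaction"]:+.1f}' for r in final_reports]
    overall = mean(r["satisfaction"] for r in final_reports)
    lines.append(f"programme mean: {overall:+.1f}")
    return lines
```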

In spite of the elaborateness of this summative evaluation system and its aspiration to foster deep approaches to learning, it has not fully created a satisfactory quality culture. There is a lack of engagement by some individual university teachers and programme directors (Borell et al., 2008; Roxå and Mårtensson, 2010). However, a recent study indicates that the CEQ student evaluations work well as a tool for quality assurance (Roxå et al., 2007). But do they contribute to quality enhancement? Here the answer is more complex. They do contribute to enhancement at the programme level, as indicated by the programme directors (Roxå et al., 2007). However, they do not seem to contribute much to the teachers' efforts to improve teaching within individual course modules (Bergström, 2010). In-depth interviews with individual teachers show that they:

• Pay little attention to statistical descriptions presented in summative evaluations.

• Pay selective attention to answers to open-ended questions.

• Prefer to refer back to their personal experience during interactions with their students.

• Do not view the end-of-course reports as adding in any significant way to their personal experience of what has taken place during the course.

Some argue that summative student evaluations, which are unavoidably drained of detailed information, appear useless to teachers who have personal experience of their course and teaching (Roxå et al., 2007; Roxå and Mårtensson, 2010). Therefore summative student evaluations of teaching seem unable to contribute significantly to an emerging quality culture of teaching and learning. The reason for this, we argue, is that what an organisational culture absorbs or rejects is always related to what its members attach meaning to (Alvesson, 2002; Schein, 2004). Meaningful routines and procedures tend to be stabilised and used, while activities associated with lesser meaning tend to wither away. Thus, summative evaluations will not contribute to a quality culture as long as the individual teachers find no or limited value in them.

The overall conclusion so far is that in order to promote an emerging quality culture, both summative and formative student evaluations have to be applied. If teachers are to use student evaluations for improvement purposes, these have to add to their personal experience of teaching. To become useful for individual teachers, statistical data from student evaluations therefore have to be reconstructed to match the personal experience of teaching, while formative evaluations are already richer in this respect. For this reconstruction to happen, a personal commitment from the teacher is needed, or an evaluation format which naturally adds to the teacher's personal experience. In the following section we therefore elaborate on the use of formative student evaluations, e.g. classroom assessment (Angelo and Cross, 1993). We consider that these have the potential to add value to teachers' personal experiences of teaching.

Based on our findings, we make the following claims about the LTH summative student evaluations system:

• On the institutional or faculty level, where we mainly seek quality assurance, the system works well. In the LTH case, it is possible to analyse the CEQ database and extract meaningful answers.

• On the programme level, we seek both quality assurance in relation to the faculty level and quality enhancement in relation to the study programme. In some sense the programme level is a reporting level, since it reports a programme summary to the faculty level, but it is also an operational level, since course evaluations are collected formatively throughout the year and used to improve the programme. On this level the system works very well.

• On the course level, we mainly seek quality enhancement to improve the course module. On this level the system is not good enough on its own. However, we have shown that the LTH summative student evaluation system in combination with operational evaluation works very well, provided the individual teacher engages in a reconstruction of his or her experience.

Therefore, the main purpose of a summative reporting student evaluation system is quality assurance – which, of course, is a very important issue for an institution or faculty. Nevertheless, for an individual university teacher, quality enhancement at course module level is generally a higher priority. This shows that, for a system with both assurance and developmental purposes, a two-fold evaluation process must be undertaken by the university teachers:

• To contribute to the quality assurance process of an institution or faculty, the individual teacher needs to understand the importance of completing the reporting evaluation, even if it is not sufficient for the teacher's own course enhancement purposes.

• The individual teacher also needs to understand that the reporting evaluation is not sufficient for his or her own enhancement purposes, and that an additional operational evaluation therefore needs to be performed.

Formative operational evaluations

In addition to the summative CEQ system, since 2003 LTH has required that formative evaluations (so-called operational evaluations) are conducted throughout all course modules. There are, however, no requirements on the format, and no top-down control of how this is enforced. These operational evaluations range from formalised, organised classroom assessment techniques to highly informal verbal classroom discussions. The common denominator in all forms is that the university teacher initiates a dialogue with the students through which he or she gathers information useful for monitoring the students' learning, that is, for immediate improvement of teaching and learning. When these evaluations work well, the students experience a university teacher who cares about their learning and is serious about teaching. In turn, this has the potential to further stimulate intrinsic student motivation and performance (Angelo and Cross, 1993).

Operational evaluation on an institutional level: a pilot study

In conjunction with the then new faculty-wide policy of mandatory operational evaluation of all LTH courses, a pilot study was launched involving operational evaluation of all courses taught at the Department of Electro Science during the autumn of 2002 (Larsson and Ahlberg, 2003, 2004). Classes were typically large (up to 150 students) to intermediate in size (around 30 students), and some 20 academics were involved in teaching. They were introduced to the principles of classroom assessment techniques (Angelo and Cross, 1993; Black and Wiliam, 1996) and were urged to adopt and modify these techniques to suit their classroom situations. An educational developer was available to answer questions or assist in introducing operational evaluation. After a six-month trial period, the operational evaluation activities were themselves evaluated. Questionnaires showed that in large classes (> 80 students), the positive effects of the operational evaluation procedures that had been introduced were obvious to most students. In smaller classes (< 30 students), the outcome was less clear, either due to subtleties in informal classroom communication (not obvious to students) or simply due to the absence of formative evaluation. The study further indicated that operational evaluation did indeed enhance learning, and that, when encouraged by the institution, the university teachers found operational evaluation personally and professionally rewarding, as well as valuable for the students. It was postulated that providing a framework at departmental level effectively supported ongoing operational evaluation regardless of the type of course and the personality of the lecturer. Obvious pitfalls included a teacher's fear of diverging from the original course plan, or not realising that sticking to the plan might impede learning.

Important aspects of a system supporting an emerging quality culture

In the following we indicate important aspects of a functioning evaluation system. The emphasis is on what should be considered at the institutional level and at the level of the individual teacher, respectively.

Aspects to consider at an institutional level

Aspects concerning summative reporting evaluation systems

A consistent and research-based summative reporting evaluation system is an important ingredient in an institutional quality assurance system. However, since its value for the individual teacher is not always obvious, a top-down discursive pressure is needed to maintain the system. Since the individual teacher’s participation is necessary, we also need to consider the following points:

• University teachers need to be personally invited and informed about their importance in making the quality assurance process work by conducting the summative evaluation, even if it is not sufficient for their own purposes of improving their teaching.

• Further, just being informed is not sufficient. Teachers also need to see outcomes from the system to fully understand the value of their input. At LTH, over 120,000 course evaluation forms have been collected since 2003, and at the institutional level they have been used to show, for example, that 'the faculty's teaching has improved over time' (Almqvist and Larsson, 2008), or that 'rewarded university teachers also receive higher scores in evaluations' (Olsson and Roxå, 2008).

• Student evaluations ought not to be made public unless the comments or reflections of the university teachers involved are included in the official documentation.

Aspects concerning operational evaluation

To advance from a quality assurance system, based on summative/reporting evaluation, to a quality culture, operational evaluation needs to be added as an ingredient. The following points need to be considered in order to increase the chances of making operational evaluation a natural part of a quality culture within an institution:

• Individual university teachers need to understand that a reporting evaluation is not sufficient for course enhancement and that additional operational evaluations are necessary.

• University teachers should be informed about the virtues of operational evaluation, but that is not sufficient. They also need to see that operational evaluations work; this can be demonstrated during teacher training courses, for example by using operational evaluations to gather the participants' thoughts about study programmes and other aspects.

• Students are intrinsically motivated when they see that they can influence teaching and learning situations. It is therefore important to report back to students on how the information collected from them is used.

• Effective communication channels and processes need to be built at the local level (within departments and course teams) to support operational evaluation regardless of the type of course and the personality of the teacher. Institution-wide processes are most likely less efficient, since operational evaluation is highly context dependent.

Aspects concerning quality culture

Our final general point concerns creating a quality culture:

Evaluation ought not to be perceived as an isolated phenomenon, instead it should become a part of an interactive environment together with other quality enhancing activities. (Andersson, 2010)

Aspects to consider at the teacher level

A successful operational evaluation consists of three equally important parts:

• the teacher should be clear about the objectives;

• the teacher should set the basic conditions for a dialogue;

• the teacher should carry out the actual operational activities.

In this section we address our own rather concrete recommendations directly to the individual teacher. Some of these suggestions may seem trivial, but teaching routine and lack of support from colleagues often cause even basic habits of good teaching to deteriorate.

Be clear about your objectives

Decide what you want to achieve with your teaching (lecture, lab, seminar, etc.). If you do not do this, it is hard to evaluate anything because all other activities will be more or less random.

Set the basic conditions for a dialogue

To evaluate the effects of your teaching on student learning, you need to set the conditions for a dialogue with the students. This is not something that just happens by itself; the university teacher must actively initiate the dialogue.

Some additional concrete things for university teachers to consider are:

• Make yourself available: be on time and do not hurry away from a teaching session. It is normally in conjunction with classroom activities that students dare to approach you.

• Show the students that you invite dialogue. For instance, if you ask a question during a teaching session, do not answer it yourself before you have given the students an opportunity to answer. Try to avoid dialogue-restraining questions like 'Do you understand?' or 'Does anyone have any questions?' Students easily misunderstand such questions and may interpret them as 'Do you understand – or are you so stupid that you don't understand everything after my crystal-clear instruction?' When indicating a student, gesture with your whole open hand rather than pointing in a military manner. Answer the students with respect ('trivial' is a forbidden word if you want a future dialogue).

• Spend part of a teaching session discussing a question posed by a student during a break (if you think the question is of general interest). Make sure that you mention that the question came from a student.

• If your operational evaluations do change something, be sure to tell the students, so that they realise that their input made the change possible.

Actual operational activities

The actual operational activities can be divided into three areas. The first category includes spontaneous activities:

• Let the students answer questions during teaching sessions by raising hands that you count (or let them raise hands with 0–5 fingers to grade something). This works fine if you want quick feedback on student activities, even in a lecture with 200+ students.

• Spend part of a teaching session discussing a question posed by a student during a break (if you think the question is of general interest). Make sure that you mention that the question came from a student. (This point has already been noted in the previous section of this chapter.)

• Drop in on your students' lab or exercise sessions even if you are not responsible for teaching the lab. This shows that you are genuinely interested in both the students and their learning.

The second category includes structured activities (normally for a specific need):

• Use forms (of the plus/minus/interesting type) (Angelo and Cross, 1993).

• Have regular meetings with teaching assistants.

• Visit all your lab sessions with the purpose of collecting something specific.

• Give the students a diagnostic mini-test.

• Remind the students, especially just before any scheduled meeting, that they should tell their student representatives what they want them to take to the course leader.

Finally, there are formalised activities:

• Use weekly reports from all teaching assistants. One example is to use weekly reports on how well the students manage to keep up with the intended pace of the course. You can then identify and talk to those students who are in danger of falling behind before it is too late (Andersson and Roxå, 2000).

• Use weekly lunch meetings with a few student representatives to discuss issues and reflections raised among the students.

• Use several diagnostic tests to measure the students' progress. These can take different forms: anonymous, peer reviewed, etc. (cf. Kihl et al., 2007; Weurlander et al., 2010).
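The weekly-report activity above can be sketched in a few lines. The report format, the assumed course pace and the flagging threshold are all hypothetical choices for illustration; the point is only that a simple, regular tally makes early intervention possible.

```python
# Hypothetical sketch of the weekly teaching-assistant reports: each TA
# records, per student, how many of the week's planned exercises were
# completed; students below a chosen fraction of the intended pace are
# flagged for an early, supportive conversation.
PLANNED_PER_WEEK = 5   # assumed number of exercises per week
THRESHOLD = 0.6        # flag students completing under 60% of the pace

def students_at_risk(weekly_reports):
    """weekly_reports: {student_id: [completed_week1, completed_week2, ...]}."""
    at_risk = []
    for student, completed in weekly_reports.items():
        pace = sum(completed) / (PLANNED_PER_WEEK * len(completed))
        if pace < THRESHOLD:
            at_risk.append(student)
    return sorted(at_risk)
```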

Conclusions

Summative student evaluations of teaching are in most cases not enough to improve university teaching. Summative evaluations, especially if they are part of a wider scheme beyond the control of the individual teacher, are always drained of information compared with the teacher's actual experience during a course. Therefore these evaluations can only complement that experience or, something that rarely happens, be used to reconstruct it in the light of the new data provided by the students. This requires a special effort by the teacher, something only dedicated teachers are likely to engage in. However, summative evaluations are necessary for successful quality assurance, a fact that individual teachers often need to be reminded of.

We have argued that, in order to build a quality culture, operational evaluations must be a regular part of teaching. We have also pointed out important aspects for institutional leaders and individual teachers to consider during the emergence of a quality culture for teaching and student learning.

References

Almqvist, M., Larsson, B. 'Kan man på programnivå vända negativa trender?' [Transl.: Is it possible to reverse negative trends at programme level?]. 5:e Pedagogiska inspirationskonferensen. Lund, Sweden: LTH, Lund University; 2008.

Alvesson, M. Understanding Organizational Culture. London: Sage; 2002.

Andersson, R. Improving teaching – done in a context. In: Giovannini, M.L., ed. Learning to Teach in Higher Education: Approaches and Case Studies in Europe. Bologna, Italy: Clueb; 2010: 57–82.

Andersson, R., Roxå, T. Encouraging students in large classes. Proceedings of the ACM/SIGCSE Symposium 2000, Austin, TX, 8–12 March 2000; 176–179.

Angelo, T., Cross, P. Classroom Assessment Techniques. San Francisco, CA: Jossey-Bass; 1993.

Barr, R.B., Tagg, J. From teaching to learning – a new paradigm for undergraduate education. Change. November/December 1995; 13–25.

Bergström, M., Personal communication, 2010.

Biggs, J. Teaching for Quality Learning at University. Buckingham, UK: Society for Research into Higher Education and Open University Press; 2003.

Black, P., Wiliam, D. Meaning and consequences: a basis for distinguishing formative and summative functions of assessment. British Educational Research Journal. 1996; 22(5): 537–548.

Borell, J., Andersson, K., Alveteg, M., Roxå, T. Vad kan vi lära oss efter fem år med CEQ? [Transl.: What can we learn after five years with CEQ?]. 5:e Pedagogiska inspirationskonferensen. Lund, Sweden: LTH, Lund University; 2008.

Bowden, J., Marton, F. The University of Learning. London: Kogan Page; 1999.

Boyer, E.L. Scholarship Reconsidered. Priorities of the Professoriate. Princeton, NJ: Carnegie Foundation; 1990.

Harvey, L., Stensaker, B. Quality culture: understandings, boundaries and linkages. European Journal of Education. 2008; 43(4):427–442.

Kihl, M., Andersson, R., Axelsson, A. Kamratgranskning i stora klasser [Transl.: Peer review in large classes]. Utvecklingskonferens LU. Lund, Sweden: LTH, Lund University; 2007.

Kreber, C. Teaching excellence, teaching expertise and the scholarship of teaching. Innovative Higher Education. 2002; 27(1):5–23.

Larsson, B., Ahlberg, A. Continuous assessment in engineering education: a pilot study. Pedagogisk inspirationskonferens. Lund, Sweden: LTH, Lund University; 2003.

Larsson, B., Ahlberg, A. Continuous assessment. In: Kolmos A., Vinther O., Andersson P., Malmi L., Fuglem M., eds. Faculty Development in Nordic Engineering Education. Aalborg, Denmark: Aalborg University Press, 2004.

LTH. Om LTH [Transl.: About LTH]. 2010. Available online at: http://www.lth.se/omlth/ (accessed September 2010).

Mårtensson, K., Roxå, T., Olsson, T. Developing a quality culture through the scholarship of teaching and learning. Higher Education Research and Development. 2011; 30(1):51–62.

Olsson, T., Roxå, T. Evaluating rewards for excellent teaching – a cultural approach. The HERDSA International Conference, Rotorua, New Zealand, 1–4 July 2008.

Ramsden, P. Learning to Teach in Higher Education. London: RoutledgeFalmer; 2005.

Roxå, T., Andersson, R., Warfvinge, P. Making use of student evaluations of teaching in a 'culture of quality'. 29th Annual European Higher Education Society (EAIR) Forum, Innsbruck, Austria, 26–29 August 2007.

Roxå, T., Mårtensson, K. Improving university teaching through student feedback: a critical investigation. In: Nair, S.C., Mertova, P., eds. Student Feedback: The Cornerstone to an Effective Quality Assurance System in Higher Education. Cambridge, UK: Woodhead Publishing; 2010.

Schein, E.H. Organizational Culture and Leadership, 3rd edn. San Francisco, CA: Jossey-Bass; 2004.

Trigwell, K., Martin, E., Benjamin, J., Prosser, M. Scholarship of teaching: a model. Higher Education Research and Development. 2000; 19(2):158–168.

Warfvinge, P. 'Policy för utvärdering av grundutbildning' [Transl.: Policy on evaluation of undergraduate courses]. Lund, Sweden: LTH, Lund University; 2003.

Weurlander, M., Andersson, R., Axelsson, A., Hult, H., Wernerson, A. How formative assessment acts as a tool for learning – theoretical aspects and practical implications. The 7th International Society for the Scholarship of Teaching and Learning Conference, incorporating the 18th Improving Student Learning Symposium, Liverpool, UK, 19–22 October 2010.
