5

Student feedback in the Australian national and university context

Denise Chalmers

Abstract:

This chapter provides an overview of the practice and use of student feedback in the Australian national and university context. Australia has been a leader in recognising the importance of student feedback in quality assurance at the national level. The national survey of students’ experience of their study at university has been a key component in the government’s approach to quality assurance in higher education for a number of years and remains a central component of the national quality model. At the university level, the gathering of student feedback on the quality of teaching and subjects is common practice, while gathering student feedback at the programme of study and university level is gaining widespread interest. Increasingly, universities are recognising the value of using subject- and programme-level student feedback in their quality performance measures, as well as integrating the data from their internal feedback surveys with the national surveys to provide a more detailed and multilayered profile of their students’ experiences.

Key words

Australia

national and university context and practices

student feedback on teaching

Introduction

Higher education institutions have been seeking student feedback on teaching or subjects for several decades. In the past, feedback was sought informally, but it has become more formalised over time. The information gathered has primarily been used as a teaching evaluation tool to inform teachers and subject co-ordinators for the purpose of ongoing development and improvement of teaching and subjects. In contrast, the development and use of surveys to collect student feedback on their experience of their whole degree or the institution is a more recent development. The systematic use of such surveys to gather data across several institutions or the sector is only now being considered in many countries, with increasing emphasis being placed on student feedback surveys by governments and their agencies as an indicator of teaching quality. In Australia, there have been a number of national initiatives to seek feedback from university students, which have positioned Australia as a leader in the sector-wide use of surveys of students’ experiences of university teaching.

This chapter provides an overview of the practice and use of student feedback in the Australian higher education context. It does not provide a commentary on the broader issues of using student feedback for quality assurance purposes. For a comprehensive overview of research on student evaluations of university teaching refer to Marsh (2007) and Abrami, d’Apollonia and Rosenfield (2007).

National or sector-wide use of student feedback

The Australian government has taken an active role in promoting quality assurance in universities since the 1980s, when there was a perceived need for universities to improve their efficiency, effectiveness and public accountability. In 1989, the government commissioned a team led by Professor Russell Linke to identify performance indicators for assessing the quality of higher education. The Linke report asserted that quality would be best assessed using multiple indicators that are ‘sensitively attuned to the specific needs and characteristics of particular disciplines, and which adequately reflect the underlying purpose for which the assessment is required’ (Linke, 1991: 129). The report also suggested that judgements of the quality of teaching must flow from the analysis of multiple characteristics, and involve a range of procedures including qualitative peer and student evaluation. Three categories of indicators on teaching and learning were identified: quality of teaching; student progress and achievement; and graduate employment.

The committee’s subsequent decision to assess the quality of teaching through students’ perceptions, using the Course Experience Questionnaire (CEQ), was not surprising, given that the developer of the instrument, Paul Ramsden, had presented a compelling submission to the committee arguing that ‘the CEQ offers a reliable, verifiable and useful means of determining the perceived teaching quality of academic units in systems of higher education that are based on British models’ (Ramsden, 1991: 129).

An outcome of Linke’s influential report, subsequently supported by the report on benchmarking in Australian universities (McKinnon et al., 2000), was that the CEQ was specifically recommended as an appropriate benchmarking indicator, leading to the first national administration of the CEQ to all Australian university graduates in 1993.

Australian Graduate Survey (AGS)

Graduate Careers Australia (GCA) is responsible for the administration of the Australian Graduate Survey (AGS) and works closely with the universities and the Australian government department responsible for higher education to improve the quality of the data, data collection and response rates. The Australian university representative council and GCA jointly released a code of practice and guidelines for the administration of the surveys.

The AGS comprises two parts: the Graduate Destination Survey (GDS) and either the Course Experience Questionnaire (CEQ) or the Postgraduate Research Experience Questionnaire (PREQ). The AGS is sent to all students who have completed the requirements for a degree at an Australian university, approximately four to five months after graduation. Unlike many surveys of this scale, the AGS is a census survey.

Graduate Destination Survey (GDS)

Australian universities have administered the Graduate Destination Survey (GDS) since 1972. It focuses on details of current employment or study, as well as questions related to job search strategies. Traditionally, this data has been used by universities to advise both prospective and current students and university staff about employment opportunities in different fields of education.

Course Experience Questionnaire (CEQ)

The Course Experience Questionnaire (CEQ) has been administered by Australian universities since 1993. The CEQ was developed as a teaching performance indicator, focusing on aspects of the classroom teaching environment which previous research had found were linked to deep and surface approaches to learning, and higher quality learning (Ramsden, 1991; Wilson, Lizzio and Ramsden, 1997). A subsequent extension of the CEQ was initiated because it was recognised that the original questionnaire emphasised the undergraduate experience taking place within the classroom setting and neglected important dimensions of the current student experience (McInnis et al., 2001).

The CEQ is an extensively validated student feedback survey (Wilson et al., 1997) that has been found to be strongly reliable in a range of different contexts. Few student feedback surveys are as explicitly based on a well-researched theoretical model of learning as the CEQ (Ramsden, 1991). The model recognises that learning is a complex process, and the CEQ focuses on student perceptions as a key indicator of this process. However, many of the uses made of the CEQ data ignore this complexity. The CEQ has provided almost two decades of national data, unrivalled anywhere else in the world, that can identify trends for intra- and inter-university comparative purposes.

Traditionally, components of the AGS, and the CEQ scales, have been intended for benchmarking teaching quality, primarily at the degree level, allowing the quality of a specific degree to be tracked over time, as well as the benchmarking of similar programmes of study at different universities. Benchmarking has at times been difficult, owing to relatively low response rates and the use of generic names for programmes of study. Additional criticisms of the CEQ relate to the lagging and aggregated nature of the data, which make it difficult for institutions to use for enhancement and improvement within the university (Barrie and Ginns, 2007). Despite these limitations, many institutions have used the data to inform quality audits, curriculum reviews, and internal planning and funding decisions.

Components of the GDS and CEQ have more controversially been used to inform decisions on performance-based funding of institutions, and, more recently, cognate disciplines within institutions through the Learning and Teaching Performance Fund (LTPF). The LTPF scheme was established by the Australian Government in 2003 as part of Our Universities: Backing Australia’s Future, a scheme to reward the ‘higher education providers that best demonstrate excellence in learning and teaching’.

This use of the AGS data has prompted intense discussion within the Australian higher education sector. Some of the concerns have related to the differential survey practices and response rates between institutions. To address these concerns, the government commissioned the Graduate Destination Survey Enhancement Project (GCA, 2009) to improve the quality of responses to the surveys and confidence in their findings and usage (particularly in relation to the LTPF). In a recent evaluation of the LTPF by the responsible government department, it was claimed that the LTPF had been an important catalyst for improvements in data collection for the Graduate Destination Survey (GDS) and the Course Experience Questionnaire (CEQ) (DEEWR, 2008). For further information on the LTPF and the analysis and use of the AGS data, refer to DEST (2004, 2005) and Marks and Coates (2007).

Similar concerns about the lack of consistency in the administration and collection of student data have been experienced in the United Kingdom with the National Student Survey (NSS), first administered in 2005 (Surridge, 2009; Marsh and Cheng, 2008). However, unlike the CEQ, the NSS is administered to students while they are still enrolled at university, in their final year of study. While funding in the UK is not tied to NSS outcomes, the results are widely reported in the press, and so the survey’s impact on university reputation is a concern.

While the LTPF has now been discontinued, the AGS data will continue to be used in university funding decisions. The Australian government has recently released a discussion paper on proposed indicators for higher education performance funding for teaching and learning. Universities will be required to negotiate targets against indicators of performance determined by the government: ‘In the simplest terms, if universities achieve their targets, they will receive performance funding’ (DEEWR, 2009b: 3). It is proposed that the CEQ continue to be used as the student satisfaction component of the performance target, and that the administration of the AGS be broadened to include all graduates of higher education, rather than only university graduates, as is currently the case. In addition, the government is considering developing a new ‘University Experience Survey’, which would collect information about first-year students’ engagement and satisfaction with their university studies (DEEWR, 2009b).

The Postgraduate Research Experience Questionnaire (PREQ)

In response to a growing recognition that the CEQ was not appropriate for the increasing number of postgraduate research students, the Australian government commissioned the development and evaluation of the Postgraduate Research Experience Questionnaire (PREQ). It was anticipated that this would operate in much the same way as the CEQ and provide a multi-dimensional measure of the experience of postgraduate research students as part of a large-scale national benchmarking exercise for Australian universities (Marsh and Cheng, 2008).

The PREQ was first trialled in 1999 to investigate the opinions of recent graduates regarding their higher degree by research experience. It is now administered up to twice a year in association with Graduate Careers Australia (GCA) to students who have completed a doctoral or research masters degree in the previous four or five months. Aggregated reports are periodically generated on the postgraduate research experience (e.g. Edwards et al., 2008).

Serious concerns have been raised about the use of the PREQ for benchmarking the overall postgraduate experience at the broad level of the university, and discipline-within-university groups. Arguments have been presented, based on reviews of research in the areas of students’ evaluations of university teaching, teacher/school effectiveness and teacher improvement, that the most important unit of analysis was the individual supervisor (Marsh, Rowe and Martin, 2002). Extensive analysis has subsequently confirmed that PREQ ratings were reliable at the level of individual students but that the results were not particularly relevant for discriminating between universities. The most salient finding was that PREQ ratings did not vary systematically between universities, or between discipline-within-university groups (Marsh, Rowe and Martin, 2002). Marsh and Cheng claim that the results ‘call into question research or practice that seeks to use students’ evaluations of teaching effectiveness as a basis for comparing universities as part of a quality assurance exercise’ (Marsh and Cheng, 2008: 16).

Australasian Survey of Student Engagement (AUSSE)

A survey of student engagement is currently in early development for use across Australian and New Zealand universities by the Australian Council for Educational Research (ACER, 2009). Based on the USA’s National Survey of Student Engagement (NSSE), it is designed to provide data on students’ engagement in university study. The principal focus of the AUSSE is claimed to be the provision of within-university information: it is intended to be sensitive to institutional diversity and to have benchmarking potential within Australia and with the relevant data from the United States and Canada.

Student engagement focuses on the interaction between students and the environment in which they learn. Student engagement data provides information on learning processes, and is considered to be one of the more reliable proxy measures of learning outcomes. The data has the potential to assist institutions in making decisions about how they might support student learning and development, manage resources, monitor standards and outcomes, and monitor curriculum and services. It can also indicate areas in need of enhancement.

The AUSSE data collection methodology differs significantly from that of the AGS: while the AGS uses a census methodology, surveying all eligible students, the AUSSE uses a sampling methodology, surveying a representative sample of first- and later-year students at the participating institutions (ACER, 2009). These students are asked to report on their current experiences, rather than in hindsight as with the CEQ and PREQ.

In 2007 and 2008, approximately 30 Australian and New Zealand universities participated in the AUSSE to validate the items for use in Australasia. The survey instrument focuses on students’ engagement in activities and conditions which empirical research has linked with high-quality learning. Information is also collected online on students’ self-reported learning and development outcomes; average overall grades; retention intentions; overall satisfaction; and a range of individual demographics and educational contexts (ACER, 2009). A particular concern to date has been the low response rates, which have limited the validation and broad applicability of the survey.

First Year Experience Questionnaire (FYE)

The First Year Experience Questionnaire (FYE) has been administered at five-year intervals since 1994 by the University of Melbourne’s Centre for the Study of Higher Education (Krause et al., 2005). Surveying a stratified sample of first-year students at seven universities in 1994 and 1999, and at nine universities in 2004, its goal is to ‘assemble a unique database on the changing character of first year students’ attitudes, expectations, study patterns and overall experience on campus’ (Krause et al., 2005: 1). It draws on the CEQ for many of its items. In addition, the 2004 FYE included items and scales focusing on student engagement, and on the role of information and communication technologies in student engagement. However, with the response rate for the 2004 survey at only 24 per cent, concerns have been expressed about the representativeness of the findings. The government again commissioned the FYE to be administered in 2009, to a sample of first-year students in a limited number of universities.

While the FYE questionnaire is a research instrument rather than a national or sector-wide survey, its impact has been significant as it provided the evidence base for many universities’ strategies to improve university transition and first-year retention and progression. As a consequence, a number of Australian universities routinely gather data from their first-year students using a variation of the FYE survey or an adaptation of the CEQ in a university-based survey.

Institutional-level use of student feedback

National initiatives that have influenced collection and reporting of student feedback within universities

The LTPF has been briefly referred to earlier in this chapter in relation to the use of the national student feedback data from the AGS and its impact on data collection processes. However, the LTPF has also had a significant impact on the systematic collection and reporting of student satisfaction within universities, as have the Australian Universities Quality Agency (AUQA) audits carried out in five-yearly cycles. The impact of each of these on institutional-level collection and use of student feedback is briefly described below.

Learning and Teaching Performance Fund (LTPF)

In the first three years of this scheme, over $220m was allocated to a limited number of universities using quantitative evidence drawn from the indicators identified in the Linke report (1991): quality of teaching (CEQ), student progression and achievement, and graduate employment (GDS). While the scheme has been controversial (DEEWR, 2008; Access Economics, 2005), it has had a significant impact on university practices in collecting and reporting student feedback within the university.

The LTPF scheme initially involved two stages, with entry into the second stage contingent upon satisfactorily meeting the requirements of the first. The first stage required institutions to submit evidence of having the following available on their website: a current and recent institutional learning and teaching plan or strategy; systematic support for professional development in learning and teaching for sessional and full-time academic staff; systematic student evaluation of teaching and subjects that informs probation and promotion decisions for academic positions; and evidence that student evaluations of subjects were publicly available. While the second stage attracted the majority of attention, as it determined the allocation of funding, the first stage resulted in many universities revising their policies and systems in relation to the systematic student evaluation of teaching and the reporting of these evaluations (DEEWR, 2008).

Australian Universities Quality Agency (AUQA)

A thematic analysis of the first cycle of AUQA audits of learning and teaching was almost silent on student feedback as a quality mechanism, though commendations on individual universities’ systems of student feedback were highlighted (Ewan, 2009). Despite the limited reference to student feedback in the thematic analysis and the few explicit commendations and recommendations, it would be incorrect to suggest that AUQA did not view student feedback as a critical part of a quality system. AUQA guidelines and training explicitly require that student feedback data be used to inform teaching, curriculum development and university quality processes (AUQA, 2004). The few direct references to student feedback in the university audit reports are likely because universities already have well-developed systems of collecting feedback from students. What has been less well developed is the effective and systematic use of this information to inform performance management, curriculum review and enhancement, and quality review and monitoring. The Cycle 2 quality audits explicitly link the use of these data to quality enhancement and assurance within the university and across universities and disciplines (AUQA, 2008).

Within-university practices and uses of student feedback

Student feedback on teaching

The collection of student feedback on aspects of teaching has gradually evolved from a largely informal, formative, private practice, carried out by teachers seeking feedback from their students, to a systematic, whole-of-university approach to gathering feedback from students on the quality of teaching and subjects. The purposes of gathering this feedback range from development and enhancement to summative judgement and quality assurance. A process for systematically gathering student feedback on teaching is now considered an essential requirement for both quality enhancement and quality assurance.

Historically, student feedback surveys in Australia have been used to provide information to individual teachers for developmental purposes, though appraisal and accountability were identified as legitimate purposes as well (Moses, 1988). The University of Queensland was one of the first Australian universities to establish a whole-of-university approach to seeking feedback from students, administered through a central organisation. Prior to this, individuals, schools and departments may have had their own surveys, but these were not particularly trusted by staff or students, nor were they systematically administered (Moses, 1988). The institutional teacher evaluation questionnaire had eleven compulsory questions on the lecturer, with the final two global questions asking the student to rate the subject and the overall effectiveness of the teacher. In addition, the teacher could choose a limited number of additional questions from an item bank. The back of the questionnaire was left largely blank for students’ qualitative comments on the staff member’s strengths and suggestions for improvement (Moses, 1988).

The University of Queensland’s student feedback survey questions, structure and process of administration had a significant influence on subsequent survey design and processes in many Australian universities, as did the University of Sydney’s Students’ Evaluation of Educational Quality (SEEQ), developed by Marsh (1984). In 1999, a review of institutional student feedback surveys concluded that the majority of Australian student feedback surveys were based on the same two or three original surveys (GCCA, 1999, quoted in Davies et al., 2009). Surprisingly little has changed in this twenty-year period, with current student feedback surveys on teaching retaining a similar structure and focus across most universities. The individual questions show considerable variation in wording and are rarely validated using accepted psychometric methods. In the main, they have been based on face validity, informed by research on effective teaching and influenced by institutional culture and values.

There have been two recent reviews of Australian university student feedback surveys (Barrie et al., 2008; Davies et al., 2009). The Davies et al. (2009) study investigates the types of questions used in teacher surveys, while the Barrie et al. (2008) study provides a comprehensive overview of current Australian university Student Evaluation of Teaching (SET) uses and practices at four levels: teacher, department, institution and sector.

All Australian universities have an established student feedback survey directed at the teacher level, with the data used primarily to inform the individual teacher’s improvement and development, and as a source of evidence for performance review and promotion. A number of universities have several types or variations of teaching surveys to capture different teaching situations, e.g. lecturing, tutorial, laboratory, problem-based, clinical, team teaching or online. The most common survey structure comprises a limited number of mandatory questions asked across all contexts, including a global question; a limited set of optional questions selected from a database at the teacher’s discretion; and space for students to respond to two or three general questions. However, there is considerable variation across the sector. For example, the University of Western Australia currently offers a fully flexible survey of teaching, with teachers able to select as many questions as they like from a database or to write their own questions. Others offer no flexibility; for example, the student evaluation of teaching surveys at the Universities of South Australia and Southern Queensland consist of a fixed set of core questions.

Traditionally, student feedback data on teaching has not been routinely incorporated into broader quality reviews of teaching or curriculum. Indeed, many universities place restrictions on the uses to which teacher-level data can be put, limiting access to the data solely to the teacher for personal use. In general, the teacher is responsible for initiating the request for a survey of students on the quality of their teaching, and then for using the information for personal evaluation and improvement.

Student feedback on subjects/units

Subject-level surveys have been developed in response to the need for student feedback data to inform curriculum development and institutional quality processes, and because of the trend for subjects to be taught by teams of teachers. These are generally administered more formally and systematically than teacher-level surveys. For example, many universities specify the frequency and timing of subject surveys: the Universities of Wollongong, Central Queensland and Western Australia survey their students every time a subject is offered. The University of Queensland, on the other hand, requires that all subjects be surveyed on a three-yearly cycle, though they may be surveyed more frequently if necessary.

The questions tend to focus on the subject rather than the teacher, but the structure of the surveys is similar to that of the teacher surveys. Subject surveys are generally standardised in their wording and in the number and type of questions, with most allowing no optional questions. Most also provide an opportunity for students to respond to a limited number of open questions. Subject surveys often sit alongside teacher surveys, and students may be asked to complete both a subject and a teacher survey for the one subject. Some universities, for example the Universities of Technology in Queensland and Sydney, have combined their teacher and subject questionnaires, but the majority have retained two distinct surveys. As with the surveys on the quality of teaching, subject-level surveys are generally administered towards the end of the study period and so rarely benefit the students who provide the feedback.

The data from the subject surveys is generally made available to a wider audience, including the subject and course co-ordinators, head of school and dean, and can be used in curriculum reviews and school reviews and reported to a range of internal and external stakeholders, often in aggregated form. The extent to which the data from subject feedback surveys is integrated into a timely cycle of analysis, reporting, action and feedback varies considerably across Australian universities. The University of Queensland has a comprehensive process whereby the data is routinely reported to the university, faculties and schools through an online reporting system that brings it together with data on student attrition and progression and the CEQ. While other universities are now beginning to use their data in a more integrated way, the extent to which subject data is used in concert with other student feedback data, such as the CEQ, to inform an annual programme of study report remains variable.

In the Australian context, all students are provided with the opportunity to give feedback on the teaching and/or the subjects they are studying. Gathering feedback on students’ broader experience of the university, at the programme of study level or after their first year of study, for example, is more varied.

Student feedback on courses/programmes of study and institutional experience

The majority of Australian universities do not routinely seek students’ feedback on their broader experiences of the university or their programme of study, though this has been an area of growing interest in the past decade. With the national administration of the CEQ and the First Year Experience survey, universities have had access to their own data for benchmarking and internal purposes. However, the significant delay in gaining access to this national data has led a number of universities to institute university-level surveys of students’ broader university experiences.

The University of Sydney was one of the first universities to establish an institutional survey of the broader student experience, the Student Course Experience Survey (Asmar, 2002), which draws heavily on the CEQ and is an integral part of the university’s quality assurance and enhancement model (Barrie, Ginns and Prosser, 2005). The institutional surveys developed by the University of Queensland and Monash University are more extensive than the University of Sydney survey, seeking feedback from students on their experiences of the programmes they are studying, their experiences of the university facilities and services, and the extent to which they feel they are developing the graduate attributes identified by their university. Like the University of Sydney survey, they include some of the major scales of the CEQ, such as the Good Teaching scale. The University of Queensland also includes some scales from the First Year Experience survey. This allows for benchmarking and comparisons within the institution, across programmes, with other institutions and with the national data sets.

A number of other universities have subsequently introduced a student experience survey. For example, Charles Sturt University administers a modified CEQ survey to students in their first year of study to allow enhancements to be made to the programmes of study. Curtin University surveys its students on their whole course of study prior to graduation, and has recently introduced a graduate survey to gather recent graduates’ perceptions of their course, as well as an employer survey on how well employers feel graduates are prepared for employment. The University of South Australia also piloted an employer survey in 2008.

Institutional surveys have been subject to greater scrutiny and validation of their items and scales than the subject and teacher surveys. Reporting of the data is co-ordinated and managed centrally, and the data is broken down by organisational unit and programme of study for comparative purposes. Because of the extensive nature of the student experience surveys and concerns about over-surveying students, they tend to be administered periodically, for example every two years.

Few universities allocate funding based on the outcomes of their student surveys, with the Universities of Sydney and Queensland being the first to introduce funding schemes that included student feedback data such as the CEQ and student experience surveys. A number of universities have followed suit, initially in the internal allocation of their LTPF funding, and subsequently to focus faculty and departmental attention on the quality of teaching.

Conclusions

This chapter has provided an overview of the practice and use of student feedback in the Australian national and university context. Australia has been a leader in recognising the importance of student feedback in quality assurance at the national level. The introduction of a national survey of students’ experience of their study at university has been a key component in the government’s approach to quality assurance in higher education for a number of years and it remains a central component of its quality model.

At the university level, the gathering of student feedback on the quality of teaching is ubiquitous; however, its use can predominantly be considered as serving a quality enhancement role, informing personal and career development. The gathering of student feedback on the quality of subjects is also widespread, with the data informing both quality enhancement and quality assurance purposes at multiple levels within the university. At the programme of study and university level, a growing number of universities are now surveying their students on their teaching and learning experiences as a way to monitor the quality of their programmes and to inform future directions. Increasingly, universities are recognising the value of using subject- and programme-level student feedback in their quality performance measures, as well as integrating the data from their internal feedback surveys with the national surveys to provide a more detailed and multilayered profile of their students’ experiences.

References

Abrami, P., d’Apollonia, S., Rosenfield, S. The dimensionality of student ratings of instruction: an update on what we know, do not know, and need to do. In: Perry, R.P., Smart, J.C., eds. The Scholarship of Teaching and Learning in Higher Education: An Evidence-Based Perspective. New York: Springer; 2007.

Access Economics Pty Ltd. Review of Higher Education outcome performance indicators. Report for the Department of Education, Science and Training (DEST). Canberra: Commonwealth Department of Education, Science and Training; 2005. Available online at: http://www.dest.gov.au/sectors/higher_education/publications_resources/profiles/review_highered_outcome_perf_indicators.htm (accessed 23 January 2007).

ACER. Australasian Survey of Student Engagement (AUSSE). 2009. Available online at: http://ausse.acer.edu.au (accessed 22 July 2010).

Asmar, C. Strategies to enhance learning and teaching in a research-extensive university. Journal for Academic Development. 2002; 7(1):18–30.

Australian Universities Quality Agency (AUQA). Audit Handbook, Cycle 1 (2004) and Cycle 2 (2008). Available online at: www.auqa.edu.au (accessed January 2010).

Barrie, S., Ginns, P. The linking of national teaching performance indicators to improvements in teaching and learning in classrooms. Quality in Higher Education. 2007; 13(3):275–286.

Barrie, S., Ginns, P., Prosser, M. Early impact and outcomes of an institutionally aligned, student focused learning perspective on teaching quality assurance. Assessment and Evaluation in Higher Education. 2005; 30(6):641–656.

Barrie, S., Ginns, P., Symons, R. Student surveys on teaching and learning. Final Report (June). Teaching Quality Indicators Project, ALTC; 2008. Available online at: http://www.altc.edu.au/teaching-quality-indicators (accessed July 2010).

Bligh, J., Lloyd-Jones, G., Smith, G. Early effects of a new problem-based clinically oriented curriculum on students’ perceptions of teaching. Medical Education. 2000; 34:487–489.

Davies, M., Hirschberg, J., Lye, J., Johnston, C. A systematic analysis of quality of teaching surveys. Assessment and Evaluation in Higher Education. 2009; 35(1):83–96.

DEEWR. An Evaluation of the Learning and Teaching Performance Fund. Department of Education, Employment and Workplace Relations (September). Canberra: Commonwealth of Australia; 2008.

DEEWR. Transforming Australia’s Higher Education System. Canberra: Commonwealth of Australia; 2009a. Available online at: http://www.deewr.gov.au/HigherEducation/Pages/TransformingAustraliasHESystem.aspx (accessed 22 July 2010).

DEEWR. An indicator framework for higher education performance funding: discussion paper. December 2009b. Available online at: http://www.deewr.gov.au/HigherEducation/Pages/IndicatorFramework.aspx (accessed 21 December 2009).

DEST. Learning and Teaching Performance Fund: Issues Paper. Canberra: Commonwealth of Australia; 2004. [April].

DEST. Learning and Teaching Performance Fund: Future Directions Discussion Paper. Canberra: Commonwealth of Australia; 2005. [December].

Edwards, D., Coates, H., Guthrie, B., Nesteroff, S. Postgraduate research experience 2007: the report of the Postgraduate Research Experience Questionnaire. 2008. Available online at: http://works.bepress.com/daniel_edwards/8 (accessed January 2010).

Ewan, C. Learning and Teaching in Australian Universities: A thematic analysis of Cycle 1 AUQA audits. AUQA Occasional Publications Number 18. Australian Universities Quality Agency and the Australian Learning and Teaching Council; June 2009.

GCA. History and development of the AGS. 2009. Available online at: http://start.graduatecareers.com.au/ags_overview/ags_history_and_development#census (accessed 8 September 2009).

Krause, K.-L., Hartley, R., James, R., McInnis, C. The First Year Experience in Australian Universities: Findings from a Decade of National Studies. Melbourne: CSHE, University of Melbourne; 2005. Available online at: http://www.cshe.unimelb.edu.au/pdfs/FYEReport05KLK.pdf (accessed July 2010).

Linke, R.D. Performance indicators in higher education: Report of a trial evaluation study, 1. Canberra: Department of Employment, Education and Training; 1991.

Marks, G., Coates, H. Refinement of the Learning and Teaching Performance Fund Adjustment Process. Report to Department of Education, Science and Training. Melbourne: Australian Council for Educational Research; 2007.

Marsh, H.W. Students’ evaluations of teaching: Dimensionality, reliability, validity, potential biases and utility. Journal of Educational Psychology. 1984; 76:707–754.

Marsh, H.W. Students’ evaluations of university teaching: a multidimensional perspective. In: Perry R.P., Smart J.C., eds. The Scholarship of Teaching and Learning in Higher Education: An Evidence-Based Perspective. New York: Springer, 2007.

Marsh, H., Cheng, J. National Student Survey of Teaching in UK Universities: Preliminary Results. York: Higher Education Academy; 2008. Available online at: http://www.heacademy.ac.uk/resources/detail/national_student_survey_of_teachng_in_uk_universities (accessed 22 July 2010).

Marsh, H.W., Rowe, K., Martin, A. PhD students’ evaluations of research supervision: issues, complexities and challenges in a nationwide Australian experiment in benchmarking universities. Journal of Higher Education. 2002; 73(3):313–348.

McInnis, C., Griffin, P., James, R., Coates, H. Development of the Course Experience Questionnaire (CEQ). Report funded by the Evaluations and Investigations Programme, Higher Education Division, DETYA. Canberra: Commonwealth of Australia; 2001.

McKinnon, K.R., Walker, S.H., Davis, D. Benchmarking: A manual for Australian universities. Canberra: Australian Department of Education, Training and Youth Affairs; 2000.

Moses, I. Academic staff evaluation and development: a university case study. Brisbane: University of Queensland Press; 1988.

Ramsden, P. A performance indicator of teaching quality in higher education: The Course Experience Questionnaire. Studies in Higher Education. 1991; 16(2):129–150.

Surridge, P. The National Student Survey three years on: What have we learned? York: Higher Education Academy; 2009. Available online at: http://www.heacademy.ac.uk/ourwork/research/surveys/nss (accessed January 2010).

Wilson, K., Lizzio, A., Ramsden, P. The development, validation and application of the Course Experience Questionnaire. Studies in Higher Education. 1997; 22(1):33–53.
