7

Engineering programmes in the UK: the student feedback experience

James Williams and David Kane

Abstract:

In the UK, collecting feedback from students about their experience of higher education has become one of the key elements in national and institutional quality processes. Engineering students, like those from other disciplines, are encouraged to complete feedback questionnaires at all levels, from individual course evaluation forms to institutional-level surveys and the National Student Survey. Engineering suffers more than most subjects from poor recruitment, difficulties in motivating students and higher-than-average drop-out rates. This chapter explores the ways in which this may be reflected in student experience surveys. A review of student feedback surveys at national and institutional level indicates three key issues. First, engineering students seem to be noticeably poor at responding to student experience surveys. Second, engineering students have been slightly less satisfied with items relating to assessment and feedback than students in some other disciplines. Third, it is arguable that engineering students are less satisfied with aspects of their experience because of conditions that are local to individual institutions rather than anything specifically relating to the engineering student experience. This indicates that surveys carried out at institutional level provide a degree of detail and depth that is difficult to reflect at the broader, national level.

Key words

UK higher education

National Student Survey

student feedback

engineering education

Introduction

Recruitment and retention of students in UK engineering faculties have been the focus of noticeable concern at policy and institutional level for some years. A recent report in the magazine New Civil Engineer described MPs’ shock at the high drop-out rate from engineering programmes in comparison with other faculties (Flynn, 2008). Academic research has indicated that, in many cases, young people are unable to follow or complete study programmes in the subject (Parsons, 2003). This is not a new phenomenon and in the UK, as across much of Europe and North America, engineering faculties suffered from poor recruitment particularly in the late 1990s, though this has reportedly changed in recent years (Confederation of British Industry, 2008). Science, technology, engineering and mathematics subjects remained a concern of the Labour government until its demise in 2010, and efforts were made to encourage young people to follow this route through universities (HM Treasury, 2002).

The motivation of young people taking up university places in engineering has attracted much attention in the academic press. As early as 1997, a small-scale research project by the Centre for Research into Quality1 explored the motivations of young people to take up engineering at its home university and identified a number of key factors, such as personal interests (21.9 per cent of respondents), school work (15.4 per cent) and work experience (14.8 per cent). Where the focus is specifically on engineering students, it is primarily on ways of motivating them to study the subject, the implication being that they are unusually unmotivated (Souitaris et al., 2007; Mustoe and Croft, 1999). There is also concern that engineering students lack mathematical expertise and that this is particularly problematic (Sazhin, 1998).

Many previous studies that involve the experiences of engineering students in higher education appear to do so incidentally, drawing on them as randomly selected samples (Felder et al., 2002). One of the few works that focuses on engineering students specifically explores the value of using a student feedback questionnaire in improving the student learning experience (Gregory et al., 1994). However, it appears to use engineering students merely as a case study rather than as a case of particular interest. Its conclusion is important, however, in highlighting that feedback questionnaires are valuable in supporting management in improving the student experience. An exploration of the experiences of engineering students is therefore valuable as part of a wider attempt to understand what deters young people from studying this subject.

Engineering student feedback is usually only published as a result of student feedback surveys, either at national level, through such instruments as the National Student Survey (NSS),2 or at institutional level, through the many student satisfaction surveys that have been carried out at universities as part of internal quality enhancement processes. Institutional surveys tend to be much more in-depth in their treatment of the experiences of their own students than the NSS, which, by its very nature, is limited in scope. Historically, institutional surveys have often allowed a fuller analysis of the student experience by a range of demographics, including discipline.

Student feedback and existing data sources

Much has been written on the importance and uses of student feedback (Richardson, 2005; Williams and Cappuccini-Ansfield, 2007; Harvey, 2003; Williams and Brennan, 2003), so it is unnecessary to rehearse the arguments here. However, it is important to note that student feedback has become accepted as one of the standard pillars of the UK’s quality assurance and enhancement systems and the importance of ‘listening to the student voice’ is widely acknowledged (Symons, 2006). Since the development of the NSS in 2005, as a result of the Cooke Report (Higher Education Funding Council for England, 2002), which explored information requirements in higher education, data on the broad experience of students across the higher education sector has been collected nationally. A short survey, based on a range of items relating to the core aspects of the student experience, the NSS has proved controversial. Despite this interest in the collection and use of student feedback, surprisingly little research, beyond institutional reports and national surveys such as the NSS, has been undertaken that explores the way in which students from different disciplines respond to student experience surveys.

The NSS, although influential, is limited in what it tells us about the student experience at institutional level. In contrast, student experience surveys carried out as part of institutional continuous quality improvement processes often provide a much fuller and more rounded view of student experiences of higher education at the local level. Over the years, institutional student feedback surveys have used a variety of models but, increasingly, they have tended to follow the NSS model in order to allow direct comparisons between the experiences of final-year students and those of students at lower levels (Harvey, 2003). Unfortunately, this has led to a decline in the variation and diversity of the data to be gleaned from such surveys.

Earlier surveys tended to be much more detailed and usually much more tailored to the individual institution. Many student feedback surveys carried out during the late 1990s and 2000s have used the student satisfaction approach (outlined by Harvey et al., 1997). This approach utilises detailed questionnaires whose dimensions are drawn from discussions with students using a ‘group feedback strategy’ (Green et al., 1994). The survey is therefore closer than the NSS model to the lived experience of the students in the institution because they are indirectly involved in producing it.

Many publicly available institutional survey reports have used this approach. Some have been routinely published, such as those of the University of Central England (UCE) until 2007 and those of Sheffield Hallam University (SHU) in the period 2000–2007. Others have been published on the Internet, such as those of the University of Greenwich. Most, however, have remained unpublished reports for internal consumption alone.

Response rates of engineering students

Some of the limited work available on student feedback suggests that engineering students are amongst the poorest at responding to student feedback surveys, both nationally and at institutional level. Nationally, engineering students have been amongst the poorest responders to the NSS. Surridge (2007) observed that there has generally been a lower response rate amongst engineering and technology students than in other subject areas. For example, in 2005 the response rate was 54.5 per cent and in 2006 it dropped to 50.5 per cent. This situation has been reflected at institutional level since the mid-1990s, by which time a number of UK institutions were carrying out regular student feedback surveys. At a north-western university in 1998, engineering students provided the lowest response rates: 15 per cent in design and technology and 9 per cent in the science faculty. At a Welsh university in 2001, the engineering faculty gave a 13.5 per cent response rate, the joint lowest together with the school of electronics. At the same institution in 2002, the engineers achieved the lowest response rate of any school, namely 12.4 per cent. At SHU (MacLeod-Brudenell et al., 2003), engineering was one of the lowest responders (23.6 per cent), although the lowest was computing and management sciences (21.6 per cent). At UCE, engineering students were consistently the poorest responders to the institutional student satisfaction survey for many years (MacDonald et al., 2005: 3). At another Midlands university, the engineering response rate was the second lowest in 2005 and the lowest of all the faculties in 2007.

There has been no published discussion as to why this may be, but there are several possibilities. There may be a simple technological answer: engineering students may be more willing to complete electronic surveys than paper-based ones. However, experience does not always bear this out: at UCE, an electronic survey replaced the paper version but response rates did not noticeably increase, despite claims that the students would be more willing to complete an electronic survey (MacDonald et al., 2005). The lower response rate may simply reflect a difference in discipline: better response rates are often achieved by social science students, who are generally more used to completing surveys as part of their studies.

Engineering students in the NSS

The NSS results, which are presented as percentage agreement ratings, are published each year on Unistats,3 a publicly available website. A fuller analysis of the survey data has been provided by Surridge (2006; 2007; 2008). The full analysis does not need to be repeated here but the few references to engineering students are worthy of mention. The survey, in operation since 2005, finds that engineering students are generally no less positive than other students about the different aspects of their experience. Indeed, only in the teaching and learning section of the survey have engineering students been markedly less positive than other respondents (Surridge, 2006: 77).

The only noticeable differences are related to specific demographic characteristics, and these relate only to specific areas covered by the survey: teaching and learning, assessment and feedback, and ‘overall satisfaction’. The two main demographic influences noted by Surridge were ethnicity and age. Respondents from Asian, mixed and ‘other’ backgrounds were less positive about assessment and feedback than others. Asian respondents were less positive about items relating to learning resources; engineering respondents of African descent were more positive than white students about their overall satisfaction (Surridge, 2006: 116, 123). Surridge found that engineering students were also unusual in displaying no age-related effects in their overall satisfaction and in teaching and learning items (Surridge, 2006: 110, 126).

Surridge has also observed an effect of location on ratings from engineering students. Engineering students who were studying off-site were consistently less positive about the organisation and management items of their courses than students in those subject areas whose teaching was provided wholly by their home institution (Surridge, 2006: 119).

In most of these cases, however, it is important to note that students in engineering and technology courses were not unique. Their responses were similar to those of students in biological science, computer science, business and administration, and creative arts and design courses. In some cases, they were more positive than their counterparts who were studying law.

Engineering student feedback in institutional surveys

At institutional level, student satisfaction survey responses suggest that engineering students differ little from students of other disciplines in their satisfaction with aspects of their experience. In some particular areas, engineering students are noticeably less positive than other students, but this appears to be more a result of institutional circumstances than of fundamental discipline-related issues.

Overall satisfaction

Overall satisfaction with the course varies between institutions, with engineering students slightly less satisfied than others at some of them. For example, at UCE in the period 1992–2007, engineering students were consistently less satisfied than other respondents with this aspect of their experience. Engineering students also consistently provided the lowest rating for university management, for their faculty and for their department. At another Midlands university, ratings from engineering students in 2007 were lower than others for the overall evaluation items ‘university overall’, ‘value for money’ and ‘your course’, but higher than most for ‘the university is enhancing your career prospects’4 (University B, 2007: 5). At SHU in the period 2000–2003, engineering students were moderately satisfied with their overall experience of the university in comparison with other students (Sheffield Hallam University, 2003: 2). Similarly, data from the 2009 student satisfaction survey at a university in the south east demonstrates no noticeable difference between engineering students and their counterparts in other disciplines. There is little in the existing data to indicate what is behind engineering students’ overall satisfaction, but the variation suggests that local circumstances determine satisfaction in this case.

Interpersonal skills

Engineering students appear to have been slightly less satisfied with the development of interpersonal skills than other students at some universities. At UCE, for example, satisfaction with this item noticeably declined in the period 1999–2003, when the item was one of the few at the university that gained a C rating5 (MacDonald et al., 2004: 32, 43). However, satisfaction rose steadily for the next few years and, by 2007, the item gained a B rating and was similar to that in some other faculties (MacDonald et al., 2007: 55, 65). At another Midlands university, most items concerning interpersonal skills were rated as less satisfactory by engineering students than by other students, although engineering students were noticeably more satisfied with the items ‘financial management skills’ and ‘project management skills’. At SHU, interpersonal skills were not regarded by engineering students as any less satisfactory than they were by other students (Sheffield Hallam University, 2004: 23, 41). At a Welsh university, interpersonal skills were not regarded noticeably differently by engineering students compared to students of other disciplines. Again, the variety suggests that local circumstances may determine the satisfaction or otherwise of engineering students.

Work experience and development of employability

Engineering has long had a more direct link to the workplace than many other subjects, so it might be expected that satisfaction would be linked with aspects of work readiness. Students have expressed concern that they have not been making the links they expected. At a Midlands university, opportunities to make contact with professionals were considered to be of lower importance by engineering students than by those of other faculties, and work experience items were all rated lower by engineering students than by other students. At a Welsh university, the organisation of work experience was regarded as unsatisfactory and very important (D rating) in 2001, and engineering was amongst the least satisfied of all faculties. However, the following year, this item was regarded as satisfactory and very important (B rating). At SHU in 2003, the item ‘opportunities for work-related placements’ was regarded as satisfactory and very important (B rating) by engineering students, as it was by most students of other disciplines. However, the separate item ‘opportunities for activities related to enterprise and/or self-employment’ was regarded as adequate and not so important (Sheffield Hallam University, 2003: 23). The following year, items relating to work experience were not regarded as any less satisfactory by engineering students. At UCE, the two items ‘suitability’ and ‘organisation of work experience’ suffered a collapse in satisfaction in 1999 and only a slow recovery until 2004 (MacDonald et al., 2004: 42). By 2007, ‘suitability’ and ‘organisation of work experience’ were regarded as very satisfactory and satisfactory respectively, and very important (ratings A and B) (MacDonald et al., 2007: 55).

This was clearly an issue for the Faculty of Engineering and it made efforts to address the concerns raised by students in the 2000 report, when the item ‘opportunities to make links with professionals’ was regarded as unsatisfactory and very important (D rating) by engineering students:

The Faculty of Engineering and Computer Technology will continue to advertise trips organised through the Engineering Society and will continue to support external initiatives for the benefit of students. Students will continue to be provided with opportunities to link with employers through evening events and other schemes. (Bowes et al., 2000: 12)

‘Work experience’ items continued to do less well in engineering than in other faculties at UCE but, in 2000, the item ‘opportunities to make links with professionals’ was regarded as adequate and very important (C rating) and the faculty continued to address the concern:

The Faculty of Engineering and Computer Technology will continue to provide publicised, evening events at which students may meet employers, and will encourage more students to attend. There are also opportunities to link with industry through the CISCO academy.6 (Harvey, 2001: 10)

The faculty continued to address this issue in 2004 and 2005, investigating potential opportunities for facilitating work experience, where possible or appropriate, and working with central offices to provide a more effective approach:

The Faculty of Engineering [Technology Innovation Centre] (tic) is working with the Careers Services to improve placement preparation and placing more opportunities on the tic intranet. Events are planned in the tic café to advertise the career services. (MacDonald et al., 2004: 10)

The support for students who wish to undertake a work placement has been improved by increasing the staffing of the placements office, providing more placement preparation seminars and activities and publishing more placement opportunities on the tic’s placement website. (MacDonald et al., 2005: 12)

Work experience is arguably an area that is less likely to be regarded as satisfactory and, as a result, harder to improve.

Assessment and feedback

The area where engineering and technology students appear to have been least satisfied over the years is that of assessment and feedback, and this reflects the national concern with this area (Williams and Kane, 2008). The usefulness of feedback usually scores more highly than the promptness of feedback, but science-related disciplines, and especially engineering and technology, generally score less well on both items than others. It should also be noted that few faculties score particularly well. This indicates that assessment and feedback may be particularly difficult issues to improve in engineering subjects.

The reasons for such intransigence are not, however, clear from existing evidence. Indeed, as Elton noted (1998: 36), this may be related to structural differences in the form of assessment in ‘hard’ subjects. Moreover, in the qualitative data collected from student experience surveys, engineering students reflect concerns that are common to students in many other disciplines. More specifically, students’ comments suggest that the tutors’ responses to their assignments give them an indication of their progress. Many such comments are not principally about feedback but about the marks themselves, although it is not always clear whether students’ use of the word ‘mark’ refers solely to the grade or to the comments and grade applied to the assignment by the tutor, as the following quotes indicate:

We need our marks to indicate our progress. By not having any, we have no way of knowing how well/badly we are performing!7

We need to know how well we did.8

Coursework should be published on the Internet. What’s the point of having a ‘mark’ column when the marks are not published on the Internet? All coursework marks should be put up there.9

This is despite Gibbs’ (2006a: 34) view that:

. . . if students receive feedback without marks or grades, they are more likely to read the feedback as the only indication they have of how they are progressing.

Some comments make it clear that it is the marks themselves that students are concerned about: a real concern with the most basic unit. Other comments demonstrate a deeper understanding of the issue, namely that a mark is not necessarily self-explanatory:

Very difficult to understand what they say . . . . [H]ow will I improve my assignments. I don’t know why I fared bad[ly] in my test!!10

Feedback on coursework: how can a lecturer justify a relatively low mark with simply a collection of ticks? A complete lack of respect has been shown by a number of lecturers generally.11

Promptness of feedback is an issue because many students do not feel that feedback is returned quickly enough. In many cases, there is evidence of irritation amongst students that they are expected to hand in their work on time but that staff do not return work on time:

Why can work be returned late, but you are not allowed to hand it in late?12

Several times this year I have had very late feedback for coursework that was handed in on time.13

Lecturers need to mark and return work on time! After all, we get penalised for late submission, so what happens if they don’t get it done?14

Assessment and feedback items are consistently regarded as very important by respondents, although closer analysis of UCE data demonstrates that they were in fact regarded as consistently less important by engineering students than by others. Art school students ranked the usefulness of feedback particularly highly in 199615 and acting (drama) students regarded it particularly highly in 2007.16 For business school students, the importance of this issue has increased considerably, although the number of items has decreased. Engineering students have regarded this item as less important.17 Education students regarded the promptness of feedback as most important in 199618 and highly in 2007.19 Health students regarded it as most important in 2007.20 Engineering students regarded it as least important in both 1996 and 200721 (Williams and Kane, 2008).

Institutional responses

A fundamental element of the student satisfaction approach is that action based upon the survey results is planned and implemented (Harvey et al., 1997, Part 10). Assessment and feedback is often mentioned in institutional feedback to staff and students as an area in which action is being taken. At UCE:

. . . the issue of promptness of feedback on assignments has always been very important to students and satisfaction varies from faculty to faculty and from course to course. (Harvey et al., 2000: 11)

Publicly available feedback information from UCE demonstrates that the issue of feedback, as raised by students, has been addressed by management in several different ways since the mid-1990s. Faculties that have scored badly on assessment issues have responded by setting realistic targets for assignment turnaround and, importantly, by making assessment feedback schedules clearer to their students. Communication is a consistent feature of action. This is combined with closer monitoring of actual turnaround times.

Institutional action plans seldom refer directly to innovative practice as it is described in the literature (e.g. Stefani, 2004–2005; Mutch, 2003). Approaches to assessment and feedback are used that are recognisable as part of a wider trend, such as computer-assisted learning.

Responses to institutional surveys mainly fall into two categories:

■ The institution clarifies its procedures to the students.

■ The institution recognises that it can improve its own processes.

The most important element in an institution’s response to student feedback is that transparent action is taken. Although a causal link is difficult to prove, it is clear that a rise in satisfaction with an item often coincides with action taken as a result of the annual survey. In other words, using student data to inform improvements appears to have a direct impact on subsequent student satisfaction. Satisfaction is therefore a dynamic process that depends on institutions asking for feedback from their students and acting upon the information. Furthermore, students need to be made aware of the action that has been taken so that they can see that the feedback process is worthwhile and not merely an empty gesture.

One response to ‘promptness of feedback on assignments’ is to set realistic targets. In 1995–1996, for example, UCE instituted more realistic turn-around targets:

In response [to low satisfaction with promptness of feedback] the Faculty [of Computing and Information Studies] set a target of a ‘four-working-week turnaround’ on assignments, which has proved very successful. (Centre for Research into Quality, 1996)

At SHU, there is a three-week rule for the return of feedback to students (Harvey et al., 2004), although it has been made clear, publicly, that this is difficult to achieve in some faculties.

In addition to setting realistic targets for the return of feedback on assignments, an important element in the institutional response is to make the schedule of feedback on assignments clearer to students. At the UCE in 2002–2003, there was an attempt to clarify hand-in dates:

In order to enable students to plan their workload, the Faculty of the Built Environment is to identify assessment dates clearly and no longer allow alterations of hand-in dates. (MacDonald et al., 2003: 11)

At UCE over the period from 1995 to the early 2000s, there was a strategy to provide students with clear guidelines on what to expect. This was aimed not only at informing students about deadlines but also at enabling them to plan their workload. Indeed, in 1995–1996, faculties such as Health, Social Science, and Engineering and Computer Technology wrote assignment turnaround guidelines into their student charters. All this relates to communication with students about what is expected of them.

Better communication on all aspects of assessment is an important issue for students, as argued by Price and O’Donovan (2006), and this is taken into account by institutions that manage a feedback and action cycle. At SHU in 2005, the Faculties of Arts, Computing, Engineering and Sciences offered guidance to staff about providing feedback to students on the virtual learning site and published it in the faculty newsletter (SHU, 2005).

In addition to communication and the clarification of deadlines, better timetabling of assessments is thought by some institutions to help relieve the problems of promptness of feedback.

The tic has introduced a new system to aid the tracking of coursework and ensure that work is returned according to the tic’s published timescales. In future, students will be notified by email when work is ready for collection from the Learning Centre. This will hopefully address the current situation where work has been available for collection but students have not been informed. (MacDonald et al., 2005: 11)

Ensuring an even spread of assessments is one approach that has been used by institutions. At UCE over several years, attempts were made to spread assignments more evenly by changing the teaching programme. At another Midlands university in 2005–2007, attempts have been made to avoid ‘bunching’ of assessments.

At UCE’s Faculty of the Built Environment in 2003–2004, a new postgraduate framework offered a less frequent assessment regime. At another Midlands university, reports from external examiners stated that the university was assessing too much. The response has been to change courses from a 15- to a 20-credit framework. This is reflected in the wider discussion about assessment: modular structure has led to shorter courses and, as a result, more frequent assessment (Gibbs, 2006b).

At this Midlands university, the internal audit indicated that there are two basic pedagogical positions on assessment:

■ assessment tests what students have learnt;

■ assessment forces students to learn: by reading a lot.

This reflects the observation by Knight (1995) that assessment is an effective method of making students work.

Overall, the issue of promptness of feedback has been a dominant one for students. Faculties generally explain that this might be because marks are no longer given out until moderated, as changes in assessment marks sometimes take place. Faculties have acted where possible and are investigating the issue. The tic is going to pilot a new system aimed at accelerating the return of marked work in order to enable students to plan their workload. (MacDonald et al., 2003)

A common response to assessment and feedback issues is to institute an effective monitoring system. Different faculties at UCE used different approaches. For example, in the Faculty of Education in 2005–2006, monitoring of the timing and placing of assessments was carried out in order to make improvements in this area. In 1998–1999, in the Faculty of Engineering, selected modules were audited and students were asked specifically to comment on this issue. The Board of Studies was charged with determining those modules where promptness of feedback was a problem.

In addition, some faculties institute systems to track student coursework to ensure that feedback is provided according to schedules.

The management information systems have caused some delays for students and these are currently under review. (Bowes et al., 2000: 12)

At the Technology Innovation Centre (tic), improvements have been made to the coursework hand-back system and on-line tracking of coursework is now available to students. (MacDonald et al., 2006: 12)

Institutions have tried to increase the immediacy of providing feedback to students. For example, at the UCE Faculty of Engineering and Computer Technology in 1999–2000, management emphasised the use of assessment in class so that students would receive immediate feedback. In the School of Property and Construction in the Faculty of the Built Environment that year, there was a commitment to provide general feedback to students within two weeks, with individual pieces of coursework being returned within a further two weeks.

More emphasis is now being given to assessment in class so that students receive immediate feedback. (Bowes et al., 2000: 12)

In some cases, the problem has been simply a lack of academic staff. For example, at UCE, several faculties increased the number of teaching staff and found that promptness of feedback was less of a concern to students. It is generally recognised that such a simple response is not easy in the context of a hugely expanded higher education sector where many of the traditional one-to-one methods of teaching have proved too resource intensive (Gibbs, 2006b: 12). Nonetheless, there is a case for assigning more of an academic’s time to feedback given its importance in the learning process.

In addition, the Faculty has employed more visiting tutors to cope with staff shortages and is undertaking a pilot to provide direct feedback using e-mail. (Bowes et al., 2000: 12)

Increasingly, institutions are exploring the use of alternative methods of feedback to students, such as those described by Bryan and Clegg (2006). Of particular interest currently is the potential for computer-assisted assessment (Swain, 2008). This is in part a response to the Higher Education Funding Council for England’s 2005 strategy for e-learning, but many institutions have been developing electronic methods of assessment and feedback for many years. At UCE, for example, the use of electronic modes of feedback has been developing for some years. In 1999–2000, the Faculty of Engineering and Computer Technology undertook a pilot to provide direct feedback using e-mail. Pilot tutorials using the Internet provided more immediate feedback. By 2003–2004, the tic intranet provided information on extenuating circumstances, appeals procedures and degree classification. Students could now obtain module marks online and download details on any work required for re-assessment. In 2004–2005, staff in the Faculty of the Built Environment were given further training on the university’s electronic information system. In the Faculty of Engineering:

Learning support will be provided through learning materials on the Internet. (Bowes et al., 2000: 12)

Pilot tutorials using the Internet will provide more immediate feedback. (Bowes et al., 2000: 12)

In some cases, institutions attempt to involve students more openly in the feedback return process. At UCE in 2000–2001, students were employed by one faculty to act as co-ordinators. This approach, it was believed, would help to alleviate difficulties in receiving work from visiting tutors, more of whom had been recruited. The faculty reported improvements to the system and students were subsequently able to retrieve their end-of-semester results through the intranet. This indicates an understanding that the students themselves can be a useful resource in the assessment process (Falchikov, 1995).

A frequent element in institutional approaches is to introduce standardised feedback systems. At UCE, for example, in 2001–2002 the Faculty of Law and Social Sciences proposed a new form to facilitate feedback relating to intended learning outcomes and to identify areas in which students could improve academic performance. In 2006–2007, the UCE Business School reviewed the timing of assessments and the overall assessment strategy (for first-year students) in order to support students through the initial challenges of entering higher education. It also reviewed its systems for handing in and returning assignments and set up a designated ‘hand-in/collection’ point specifically for this purpose.

Satisfaction with aspects of assessment and feedback is sometimes attributed to what are regarded as external factors. At UCE, for example, faculties generally explained poor satisfaction with ‘promptness of feedback on assignments’ as being due to marks no longer being given out until after moderation, since assessment marks are sometimes changed by this process.

Some institutions attempt to draw together examples of good practice internally. At University A, for example, an audit committee was set up in 2004 to establish which approaches used by high-scoring faculties within the university could be adopted institution-wide. The audit recommended that:

. . . a more formalised policy governing the provision of feedback to students on examinations should be developed and that action be taken to ensure that both staff and students are made fully aware of the faculties’ expectations in this area.

. . . mechanisms already established for the provision of feedback on examinations should continue to be developed and that they be clearly communicated to staff and students.

. . . mechanisms to improve the timescales for the return of feedback on assessed work should continue to be investigated and developed and that measures be taken to ensure that the communication channels for providing information to students in respect of these timescales work as effectively as possible.

University guidance should be provided for faculties governing the electronic provision of feedback on draft work.

A similar approach was taken at University B as a result of its 2005 survey:

A new policy on coursework submission, return and feedback has been put in place, reflecting the good practice exhibited in many areas and will be implemented fully for the 2007–2008 academic year.22

Students, not only those from engineering and technology faculties, do not always collect work when it is marked and available on time. At UCE, for example, one faculty dean observed that whilst it is relatively easy to implement and enforce collection points for assignments to be handed in, it is difficult to enforce the collection of marked work by students. His experience had been that when return points had been designated, many assignments remained uncollected. This experience was also reflected at SHU in 2002:

Pilot work-return sessions were organised by the administration team but many students did not use the opportunity to collect work. The pilot scheme will be repeated in 2002–2003. (Morey et al., 2002)

Similarly, at University B, up to 30 per cent of coursework remained uncollected, in part, the pro-vice-chancellor observed, because marks were available online.

This raises two possibilities. First, students may be indifferent to comments and only want grades, although this does not seem reasonable in the light of comments from many students wanting feedback so as to improve their next assignment. It is arguable that the increasing use of modules and semesters has created a situation where there is no further possibility to improve, as there is often only one summative assessment in a subject, so students do not want the feedback because (they perceive that) it is non-transferable (Gibbs, 2006b: 11).

Second, it may be the case that the coursework is not collected because it is not seen as valuable. Collection of feedback may not be viewed by the students as a beneficial process because it means that they read the comments without the dialogue necessary to explain, explore or contest the comments. Worse, students may not collect feedback because they believe that the lecturers’ comments are insignificant, being scanty and adding nothing to the grade.

Issues relating to the multi-campus university

One of the major issues affecting satisfaction amongst engineering students at UCE appeared to have been a significant change in location, and it is useful to explore this particular example in detail. In 2001, the Faculty of Engineering moved from the main campus at Perry Barr to Millennium Point, a new prime site in the city centre.23 Although this was heralded as a great new opportunity, satisfaction amongst the university’s engineering students fell markedly in almost all the dimensions covered by the student satisfaction survey for the next few years.

One of the issues that clearly emerges from the surveys is the distance created between the new campus and the main campus. Both the Students’ Union facilities and Student Services received particularly low ratings, presumably because they were seen as remote from the new campus. In particular, students complained about the lack of information about Student Services and the lack of visibility of these services on the new campus. This has been a perennial problem since the faculty moved to the new site.

There was a concerted effort by UCE in 2004 to improve the experience of engineering students at the city site by introducing several new measures. First, the Student Union:

. . . improved transport arrangements this year from campuses in order to improve access for tic and other, smaller campuses. (MacDonald et al., 2004)

Second, to make the union space at the faculty more visible:

. . . the layout and facilities in the Union Student room at tic has been revised, including ‘relaxation’ facilities such as widescreen TVs and easy chairs. The Union of Students holds lunchtime events in the tic café. (MacDonald et al., 2004)

Third, in 2004, efforts were made to raise awareness about the availability and diversity of Student Services, and Student Services also increased the number of hours of support at tic (MacDonald et al., 2004).

Did these efforts work? It is difficult to prove causal links but there appears to have been some improvement at UCE over the course of the following few years. In 2005 and 2007, respondents from the new city site were no less positive about aspects of the Student Union and social life than other students generally (MacDonald et al., 2005, 2007).

Conclusions

Engineering departments have suffered from poor recruitment and poor retention rates in the UK for many years but an exploration of student feedback surveys has not clearly identified a particular area of concern in the lived experience of engineering students. Rather, the overall experience of engineering students appears to be little different from that of their fellow students in other faculties. Indeed, they seem generally to be moderately satisfied with their experience, especially when compared with the experiences of, for example, art and design students (Yorke, 2008).

There are only two noticeable and consistent differences between engineers and their fellow students. First, they are less responsive to feedback surveys. This may be the result of the discipline itself: engineering students are not expected to use questionnaires as part of their training whereas, for example, social science students, who are generally much better at responding, often work with questionnaires. Second, engineering students appear to rate assessment and feedback less favourably than students of other disciplines, suggesting that there are structural issues with the type of assessment usually used in these subjects (Elton, 1998).

However, in some institutions, there are specific areas in which engineering students are less satisfied than others, although these issues do not coincide across institutions. Indeed, poor satisfaction levels appear to arise as a result of local circumstances. In many cases, these may be the result of policy decisions at institutional level and are unlikely to be connected with the discipline in particular. It is noticeable that when concerted action is taken by an institution as a result of listening to the student voice – and such action is prolonged and well communicated – then satisfaction tends to increase over time.

This has important implications for our understanding of the role of student feedback as part of a quality improvement process. Where student satisfaction surveys have been conducted consistently for several years, it is possible to see changes in satisfaction with different aspects of the student experience (Williams and Kane, 2009). Not only can downward trends be clearly identified, but upward trends coincide with action on the part of the institution taken as a result of listening to the students.

The key issue in collecting student feedback, therefore, is what is done with it. Student feedback surveys are not merely measurement tools but are dynamic instruments that need to be used in combination with an institutional quality improvement process. Understanding local circumstances is vital in improving local situations.

References

Bowes, L., Harvey, L., Marlow-Hayne, N., Moon, S., Plimmer, L. The 2000 Report on The Student Experience at UCE. Birmingham, UK: University of Central England; 2000.

Bryan C., Clegg K., eds. Innovative Assessment in Higher Education. London: Routledge, 2006.

Centre for Research into Quality. Student Satisfaction, February 1996. Birmingham, UK: University of Central England; 1996.

Confederation of British Industry. Taking Stock: CBI Education and Skills Survey 2008. London: Confederation of British Industry and Edexcel; 2008.

Elton, L. Are UK degree standards going up, down or sideways? Studies in Higher Education. 1998; 23(1):35–42.

Falchikov, N. Improving feedback to and from students. In: Knight P., ed. Assessment for Learning in Higher Education. London: Kogan Page; 1995:157–166.

Felder, R.M., Felder, G.N., Dietz, E.J. The effects of personality type on engineering student performance and attitudes. Journal of Engineering Education. 2002; 91(1):3–17.

Flynn, S. Engineering students suffer highest drop-out rates, say MPs. New Civil Engineer, 20 February 2008. Available online at: http://www.nce.co.uk/engineering-students-suffer-highest-drop-out-rates-say-mps/756664 (accessed October 2010).

Gibbs, P. How assessment frames student learning. In: Bryan C., Clegg K., eds. Innovative Assessment in Higher Education. London: Routledge; 2006a:23–36.

Gibbs, P. Why assessment is changing. In: Bryan C., Clegg K., eds. Innovative Assessment in Higher Education. London: Routledge; 2006b:11–22.

Green, D., Brannigan, C., Mazelan, P., Giles, L. Measuring student satisfaction: a method of improving the quality of the students’ experience? In: Haselgrove S., ed. The Student Experience. Buckingham, UK: Society for Research into Higher Education and Open University Press, 1994.

Gregory, R., Thorley, L., Harland, G. Using a standard student experience questionnaire with engineering students: initial results. In: Gibbs G., ed. Improving Student Learning: Theory and Practice. Oxford, UK: Oxford Centre for Staff Development, 1994.

Harvey, L. The 2001 Report on the Student Experience at UCE. Birmingham, UK: University of Central England; 2001.

Harvey, L. Student feedback. Quality in Higher Education. 2003; 9(1):3–20.

Harvey, L., Ibbotson, R., Leman, J., Marsden, D. Sheffield Hallam University Student Experience Survey 2004: Undergraduate and Taught Postgraduate Students. Sheffield, UK: Sheffield Hallam University; 2004.

Harvey, L., Moon, S., Plimmer, L. Student Satisfaction Manual. Buckingham, UK: Society for Research into Higher Education and Open University Press; 1997.

Higher Education Funding Council for England (HEFCE). Information on Quality and Standards in Higher Education. London: HEFCE; 2002. Available online at: http://www.hefce.ac.uk/pubs/hefce/2002/02_15/02_15.pdf (accessed November 2010).

Higher Education Funding Council for England (HEFCE). HEFCE Strategy for e-Learning, March 2005/12. 2005. Available online at: http://www.hefce.ac.uk/pubs/hefce/2005/05%5F12/05_12.pdf (accessed October 2011).

HM Treasury. Investing in Innovation: A Strategy for Science, Engineering and Technology. London: HM Treasury; 2002. Available online at: http://www.webarchive.nationalarchives.gov.uk/+/http://www.hm-treasury.gov.uk/media/F/D/science_strat02_ch1to4.pdf (accessed October 2010).

Knight P.T., ed. Assessment for Learning in Higher Education. London: Kogan Page, 1995.

MacDonald, M., Saldana, A., Williams, J. The 2003 Report on the Student Experience at UCE. Birmingham, UK: University of Central England; 2003.

MacDonald, M., Schwarz, J., Cappuccini, G., Kane, D., Gorman, P., Sagu, S., Williams, J. The 2005 Report on the Student Experience at UCE. Birmingham, UK: University of Central England; 2005.

MacDonald, M., Williams, J., Saldaña, A. The 2004 Report on the Student Experience at UCE. Birmingham, UK: University of Central England; 2004.

MacDonald, M., Williams, J., Gorman, P., Cappuccini-Ansfield, G., Kane, D., Schwarz, J., Sagu, S. The 2006 Report on the Student Experience at UCE. Birmingham, UK: University of Central England; 2006.

MacDonald, M., Williams, J., Kane, D., Gorman, P., Smith, E., Sagu, S., Cappuccini-Ansfield, G. The 2007 Report on the Student Experience at UCE. Birmingham, UK: University of Central England; 2007.

MacLeod-Brudenell, T., Ibbotson, R., Smith, M., Harrison, A., Harvey, L., Leman, J., Fowler, G. The 2003 Report on the Undergraduate and Taught Postgraduate Student Experience at Sheffield Hallam University. Sheffield, UK: Sheffield Hallam University; 2003.

Morey, A., Watson, S., Saldana, A., Williams, J. The 2002 Report on the Undergraduate and Taught Postgraduate Student Experience at Sheffield Hallam University. Sheffield, UK: Sheffield Hallam University; 2002.

Mustoe, L.R., Croft, A.C. Motivating engineering students by using modern case studies. International Journal of Engineering Education. 1999; 15(6):469–476.

Mutch, A. Exploring the practice of feedback to students. Active Learning in Higher Education. 2003; 4(1):24–38.

Parsons, S.J. Overcoming poor failure rates in mathematics for engineering students: a support perspective. 2003. Available online at: http://www.hull.ac.uk/engprogress/Prog3Papers/Progress3%20Sarah%20Parsons.pdf (accessed October 2010).

Price, M., O’Donovan, B. Improving performance through enhancing student understanding of criteria and feedback. In: Bryan C., Clegg K., eds. Innovative Assessment in Higher Education. London: Routledge, 2006.

Richardson, J.T.E. Instruments for obtaining student feedback: a review of the literature. Assessment and Evaluation in Higher Education. 2005; 30(4):387–415.

Sazhin, S.S. Teaching mathematics to engineering students. International Journal of Engineering Education. 1998; 14(2):145–152.

Sheffield Hallam University. The 2003 Report on the Student Experience at SHU. Sheffield, UK: Sheffield Hallam University; 2003.

Sheffield Hallam University. The 2004 Report on the Student Experience at SHU. Sheffield, UK: Sheffield Hallam University; 2004.

Sheffield Hallam University. The 2005 Report on the Student Experience at SHU. Sheffield, UK: Sheffield Hallam University; 2005.

Souitaris, V., Zerbinati, S., Al-Laham, A. Do entrepreneurship programmes raise entrepreneurial intention of science and engineering students? The effect of learning, inspiration and resources. Journal of Business Venturing. 2007; 22(4):566–591.

Stefani, L. Assessment of student learning: promoting a scholarly approach. Learning and Teaching in Higher Education. 2004–2005; 1(1):51–66.

Surridge, P. The National Student Survey 2005–2007: Findings and Trends. Bristol, UK: Higher Education Funding Council for England; 2008.

Surridge, P. The National Student Survey 2006 Report to HEFCE. Bristol, UK: Higher Education Funding Council for England; 2007.

Surridge, P. The National Student Survey 2005 Report to HEFCE. Bristol, UK: Higher Education Funding Council for England; 2006.

Swain, H. Evaluating students via online assessment both tests what students know and helps develop their understanding. Times Higher Education Supplement, 3 January 2008. Available online at: http://www.timeshighereducation.co.uk/story.asp?sectioncode=26&storycode=210051&c=1 (accessed October 2011).

Symons, R. Listening to the student voice at the University of Sydney: closing the loop in the quality enhancement and improvement cycle. Paper presented at the 2006 Australian Association for Institutional Research (AAIR) Forum, Coffs Harbour, New South Wales, November 2006.

Williams, J., Cappuccini-Ansfield, G. Fitness for purpose? National and institutional approaches to publicising the student voice. Quality in Higher Education. 2007; 13(2):159–172.

Williams, J., Kane, D. Exploring the NSS: Assessment and Feedback Issues. York, UK: Higher Education Academy; 2008.

Williams, J., Kane, D. Assessment and feedback: institutional experiences of student feedback, 1996 to 2007. Higher Education Quarterly. 2009; 63(3):264–286.

Williams, R., Brennan, J. Collecting and Using Student Feedback on Quality and Standards of Learning and Teaching in Higher Education. Bristol, UK: Higher Education Funding Council for England; 2003.

Yorke, M. What can art and design learn from surveys of ‘the student experience’? Paper presented at the Conference of the Group for Learning in Art and Design, Nottingham, UK, 8 September 2008.


1.A social research unit based at the then University of Central England, now Birmingham City University.

2.The NSS was introduced in the UK in 2005 as a measurement of the quality of higher education from the students’ perspective. The questionnaire, delivered electronically, contains a relatively small number of dimensions (usually 20–30 questions) and uses a Likert scale of agree/disagree. The survey has, since its inception, been highly controversial yet extremely influential on the policies and strategies of higher education institutions (see Williams and Cappuccini-Ansfield, 2007).

3.Available online at: http://unistats.direct.gov.uk/ (accessed October 2011).

4.To protect confidentiality, two universities have been anonymised as ‘University A’ and ‘University B’. The report cited here comes from University B (2007: 5) and has not been published.

5.The student satisfaction approach, developed by Harvey et al. (1997), is unusual in combining satisfaction and importance ratings as an alphabetical score. In this scheme, A = very satisfactory, B = satisfactory, C = adequate, D = unsatisfactory and E = very unsatisfactory; the letter case represents the level of importance to the students. Hence, capital letters indicate that respondents regard an item as very important, lower-case letters that an item is important, and lower-case letters in parentheses that an item is not so important.
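As a purely illustrative aid, the short Python sketch below shows how such a combined letter rating might be computed from mean satisfaction and mean importance scores on a seven-point scale (the scale implied by the mean importance values in notes 15–21). The numeric cut-off points and the function name are hypothetical assumptions for illustration only; the published survey reports do not state the thresholds actually used.

# Illustrative sketch of the satisfaction/importance letter-rating scheme
# described in this note (Harvey et al., 1997). The numeric thresholds are
# hypothetical assumptions; the published reports do not specify them.

SATISFACTION_BANDS = [
    (6.0, "A"),  # very satisfactory
    (5.0, "B"),  # satisfactory
    (4.0, "C"),  # adequate
    (3.0, "D"),  # unsatisfactory
    (0.0, "E"),  # very unsatisfactory
]

def letter_rating(mean_satisfaction: float, mean_importance: float) -> str:
    """Combine mean satisfaction and importance (seven-point scales) into a letter score."""
    letter = next(band for threshold, band in SATISFACTION_BANDS
                  if mean_satisfaction >= threshold)
    if mean_importance >= 6.0:    # very important: capital letter
        return letter
    if mean_importance >= 5.0:    # important: lower case
        return letter.lower()
    return "(" + letter.lower() + ")"  # not so important: lower case in parentheses

# Example: an item with mean satisfaction 3.4 and mean importance 6.3 -> 'D'
print(letter_rating(3.4, 6.3))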

6.An education initiative from the US global conglomerate, Cisco Systems, offering a range of IT networking programmes aimed at preparing students for the eponymous certification exams.

7.Student at University B, Faculty of Engineering, 2007.

8.Student at University A, Faculty of the Built Environment, 2005.

9.Student at University A, Faculty of Engineering, 2007.

10.Student at University A, Faculty of Engineering, 2007.

11.Student at University A, Faculty of Law and Social Science, 2007.

12.Student at University A, Faculty of the Built Environment, 2000.

13.Student at University B, Faculty of Business, 2005.

14.Student at University B, Faculty of Engineering, 2005.

15.Mean importance = 6.51.

16.Mean importance = 6.64.

17.Mean importance = 6.07 in 1996 and 6.08 in 2007.

18.Mean importance = 6.32.

19.Mean importance = 6.25.

20.Mean importance = 6.39.

21.Mean importance = 5.79 in 1996; mean importance = 5.89 in 2007.

22.University B, 2005.

23.Millennium Point is a large multi-purpose building in a regeneration zone in the heart of Birmingham. It was built as a major millennium project in 2000 and houses the city’s science museum, ThinkTank, and the UCE Engineering Faculty.
