Chapter 4
Evaluation for Policy

Rhetoric and political reality

James, M. (1993) ‘Evaluation for Policy: rhetoric and political reality: the paradigm case of PRAISE’. In R. Burgess (ed.) Evaluation for Policy and Practice (Basingstoke: Falmer Press): 119–138.

Argument

The message of this chapter is not very optimistic. In the light of my experience as an evaluator in an area of national policy, I am forced to acknowledge what I probably always knew deep down: that the classic definition of the role of evaluation as providing information for decision-makers (Cronbach, 1963; Stufflebeam et al., 1971; Davis, 1981) is a fiction if this is taken to mean that policy-makers who commission evaluations are expected to make rational decisions based on the best (valid and reliable) information available to them. Policy-making is, by definition, a political activity and political considerations will always provide convenient answers to awkward questions thrown up by research. Thus the ideal of rational decision-making based on information derived from research-based evaluation is probably illusory. The best that evaluators can hope to do, if they discover that their findings are at variance with the current political agenda, is to make public their evidence. This will give those who are affected by policy a chance to understand the real basis on which decisions are made, and to become aware of the alternatives, some of which may still be realized if there is a will to work in the interstices of policy, or towards changing the policy-makers by means of electoral procedures.

This is not a new idea. It is at the heart of the concept of ‘democratic evaluation’ (MacDonald, 1977; Simons, 1987) which regards evaluation as ‘an information service to the community’ underpinned by the basic value of an ‘informed citizenry’. As such it is potentially subversive of the hegemony of those with the most power to determine policy, which is tantamount to adopting a political (small ‘p’) role itself. This vision of evaluation is a far cry from conceptions of a value-free science, but it acknowledges an important dimension of the contexts in which evaluators work and which acts as an inevitable and powerful constraint on their work.

Although I came across the concept of democratic evaluation fifteen years ago, it was only as a direct result of my experience of policy evaluation that I fully appreciated its import. The reality of the strength of political imperative over the ability of research evidence to persuade policy-makers was brought home to me especially forcefully simply because the outlook for the Pilot Records of Achievement In Schools Evaluation (PRAISE), with which I was involved from 1985 to 1990, was so propitious. In many ways it promised to provide a model for the way that evaluation could feed into policy-making, yet the fulfilment of this promise was undermined by changes in political priorities. As they say, ‘a week is a long time in politics’ and, except in the most limited circumstances, the time-scale that evaluation demands renders it relatively powerless to influence policy-making at the level at which it is commissioned. Let me explain by way of a brief history.

Evidence

In 1985 a team of researchers from the Open University School of Education and Bristol University School of Education was commissioned by the Department of Education and Science and the Welsh Office to evaluate progress and results in nine pilot schemes set up, with the support of Education Support Grants, to develop records of achievement in secondary schools.

In the preceding years, interest in records of achievement had been stimulated in a number of ways. In 1938, 1943 and 1963, respectively, the Spens, Norwood, and Newsom Reports had all commented on the failure of school examination certificates to provide information about the full range of pupils’ experiences and achievements over ten years of compulsory schooling. Indeed it was deplored that many pupils left school with nothing at all to show for what they had done. The problem became even more acute with the proposal to raise the school leaving age to 16 in 1973; after all, how could anyone expect youngsters to be well motivated if they received no recognition for the extra time spent in school? These twin issues of motivation and recognition of achievement were perceived as increasingly important by a growing number of schools, local authorities and consortia, including examinations boards, who, by developing pupil profiles and records of achievement, sought to tackle the problem and provide more meaningful experiences for pupils, particularly those unlikely to achieve recognition for their efforts through the conventional examination route. In a period of approximately ten years, from the early 1970s to the early 1980s, these grass-roots developments grew into a substantial profiling and records of achievement movement, underpinned by a fairly coherent set of principles. Eventually it came to the attention of the Government (see Fairbairn, 1988, for an historical review, and Brown and Black, 1988, for more detail on the Scottish experience).

The interest taken by Sir Keith Joseph, a former Conservative Secretary of State for Education, in profiling and records of achievement (the two terms were often used interchangeably) was very much related to his particular interest in doing something for the ‘bottom 40 per cent’ of pupils, that is, those pupils who at the time were not expected to achieve GCE or CSE qualifications. Thus he pursued the idea of a government-sponsored records of achievement initiative alongside the Lower Attaining Pupils Programme (LAPP) and the first phase of the Technical and Vocational Education Initiative (TVEI). The first pronouncement to emerge from the Department of Education and Science (apart from an HMI survey in 1982) was a draft statement of policy on records of achievement which was subject to widespread consultation in 1983. While welcomed in principle, teachers and educationists pressed for substantial changes; in particular they sought to shake off the ‘less able only’ label that had been adopted in the 1970s but which was now ideologically unacceptable. The result of this consultation process was a much-revised and well-received Statement of Policy which was published by the DES and Welsh Office in 1984.

The stated intention of the Secretaries of State in 1984 was that a framework of national policy for records of achievement should be established by the end of the decade, and that these national arrangements should be designed to meet four main purposes:

  1. Recognition of achievement. Records and recording systems should recognize, acknowledge and give credit for what pupils have achieved and experienced, not just in terms of results in public examinations but in other ways as well. They should do justice to pupils’ own efforts and to the efforts of teachers, parents, ratepayers and taxpayers to give them a good education.
  2. Motivation and personal development. They should contribute to pupils’ personal development and progress by improving their motivation, providing encouragement and increasing their awareness of strengths, weaknesses and opportunities.
  3. Curriculum and organization. The recording process should help schools to identify the all-round potential of their pupils and to consider how well their curriculum, teaching and organization enable pupils to develop the general, practical and social skills which are to be recorded.
  4. A document of record. Young people leaving school or college should take with them a short summary document of record which is recognized and valued by employers and institutions of further and higher education. This should provide a more rounded picture of candidates for jobs or courses than can be provided by a list of examination results, thus helping potential users to decide how candidates could best be employed, or for which jobs, training schemes or courses they are likely to be suitable (DES, 1984, para. 11).

In fulfilment of these purposes the Secretaries of State expected that, among other things, records of achievement systems should:

  • cover a pupil's progress and activities across the whole educational programme of the school, both in the classroom and outside the school (para. 16);
  • concentrate on positive aspects of a young person's school career and personal qualities (para. 13);
  • closely involve pupils in the recording process by giving them an opportunity to make a contribution of their own (para. 36) and by involving them in a process of regular teacher-pupil dialogue (para. 16);
  • ensure that the summary document of record becomes the property of pupils who would be free to decide whether or not to show it to prospective employers and others (para. 40);
  • be based on national guidelines which would provide for a common format for the summary documents of record and establish appropriate forms of validation and accreditation (para. 31). This was regarded as essential if records of achievement were to have credibility in the eyes of potential users (para. 33).

At this point little guidance was given concerning the relationship with other forms of internal recording and reporting to parents, although it was noted that these were important issues for consideration:

An issue which the pilot schemes will need to address is the relationship between the internal reporting, recording and dialogue discussed in this statement and the procedures for the end of term (or end of year) reports which schools send to parents. Also relevant are other reporting processes for parents or for careers purposes. It would clearly be neither sensible nor economic to treat all these activities as totally separate. There may be scope for improving the coverage of end of term reports, and these reports may form a valuable item of agenda for internal discussions between teacher and pupil.

(DES, 1984, para. 38)

Broad coverage and interactive processes were, therefore, seen to be key features to be developed in all recording and reporting systems.

On the basis of this 1984 policy statement, nine pilot schemes were set up involving twenty-two local education authorities and approximately 250 schools and colleges. In the first instance these were to run for three years from April 1985 to March 1988. At the same time the PRAISE national evaluation was established, with funding from the DES and Welsh Office, and a Records of Achievement National Steering Committee (RANSC) was convened. The latter was chaired by an Assistant Secretary at the DES and its membership included teachers, educationists, industrialists and examinations board representatives. RANSC was expected to ‘steer, monitor and evaluate the pilot schemes’ and ‘to oversee the work of the professional evaluating team’ (RANSC, 1989, p. 30). On the basis of the information received direct from the pilot schemes and from the national evaluation, RANSC was asked to

report to the Secretaries of State in the autumn of 1988 on the experience gained and on the implications for introducing records of achievement for all pupils in secondary schools in England and Wales, and to prepare draft national guidelines for such records and recording systems.

The work of independent evaluators was therefore integrated into the structures for policy-making in a way that has rarely been the case. (It also contrasts sharply with the introduction of the National Curriculum which has been subject to no comparable piloting or evaluation.) Indeed the introduction of a national system for records of achievement was to be the culmination of a lengthy and thorough process of piloting, evaluation, deliberation, recommendation, consultation and legislation. However, new circumstances, notably the provisions of the 1988 Education Reform Act, significantly altered the predicted course of policy-making with respect to records of achievement.

The delivery of reports from the main phase of work by PRAISE (1985–88) was timed to feed into the deliberations of RANSC. Moreover, since the co-directors of PRAISE were also members of that committee they were able to draw the attention of RANSC to items of evidence with a bearing on particular policy questions. The relationship between PRAISE and RANSC was therefore highly productive and PRAISE findings were able to inform the recommendations of RANSC in a very direct way.

On the whole the RANSC report was well received and in his letter to the Secretary of State dated 13 July 1989, which accompanied the report of the consultation exercise, the Chairman of the School Examinations and Assessment Council recommended that RANSC-style records of achievement should be ‘required for use with pupils across the age range 5–16’ although he thought it would be undesirable to prescribe all the detailed content and procedures.

On the basis of this, one might expect that the eventual outcome of five years of piloting and evaluation – which had cost over ten million pounds of public money – and the report of an official committee, would be a national scheme for records of achievement, as more or less envisaged in the RANSC report. Although questions persisted concerning certain issues and matters of detail, the principles on which records of achievement schemes were founded achieved a remarkable degree of support. Despite this support, it soon became quite evident that a national system of ‘RANSC-style’ records of achievement was not to be introduced and supported by regulations. The reason for this was, I believe, simply that the political agenda had changed.

Shortly after the establishment of the records of achievement pilot schemes and the national evaluation, Sir Keith Joseph was replaced as Secretary of State by Kenneth Baker. Unlike his predecessor, Kenneth Baker was less concerned with single issues, such as the achievements of the less able, and more concerned with root and branch reform of the whole education service. Thus records of achievement ceased to be a priority as far as ministers were concerned. This allowed the pilot schemes, PRAISE and RANSC to get on with their work relatively undisturbed but it always held the possibility that, at the point when policy decisions had to be made, records of achievement would be accommodated to newer initiatives or be lost altogether.

In July 1987, as PRAISE was about to publish an interim report, the DES published its consultation document on a proposed National Curriculum for 5 to 16 year olds. As a consequence the national evaluation team was asked to draw attention in its report to any issues arising for records of achievement. Subsequently, RANSC also offered evidence to the Task Group on Assessment and Testing (TGAT, 1988, Appendix I) and drew attention to common aims and possible tensions between Records of Achievement and National Curriculum Assessment. In particular, RANSC indicated that some of the most central principles and procedures of records of achievement could become a source of tension, namely:

  • the emphasis on the involvement of pupils in assessment and recording;
  • the emphasis on positive achievement;
  • the emphasis on formative, developmental purposes rather than the accountability of schools and local authorities;
  • the emphasis on the accreditation of recording processes;
  • the emphasis on recording pupils' total achievements, including personal qualities and general skills, rather than overemphasizing attainment in single subjects;
  • the emphasis on continuous assessment and recording.

Given the level of awareness at the time, these tensions between records of achievement and proposed National Curriculum assessment arrangements did not receive detailed attention in either the PRAISE final report or the RANSC report although the persistence of these issues was mentioned in both. In fact PRAISE was, on this occasion, ‘steered’ to confine its remarks to what had been observed in pilot records of achievement schemes rather than to speculate over-much on the likely effects of the introduction of national assessments (which, it was now argued, was outside its brief). The content of the RANSC report, which confined itself to rather weak statements such as: ‘Further consideration will need to be given to the role of records of achievement in relation to the national curriculum information requirements’ (RANSC, 1988, p. 17), indicates that similar pressures were operating at this level also.

Strategically it may have been wise not to draw too much attention to ideological tensions at this stage because the student-centred philosophy, which underpinned many records of achievement schemes, no longer had any obvious supporters in the Conservative Government. Had RANSC been more explicit about the philosophical contradictions between the two initiatives, there was a strong possibility that records of achievement, as embodied in the early statement of DES policy and developed in pilot schemes, would simply be dropped. It was already clear that nothing was going to be allowed to stop the National Curriculum and assessment juggernaut that was now on the road and had so much riding on it politically. The best hope was that some system could be worked out that would meet the new demands of Government for a means of reporting National Curriculum assessments as well as fulfilling the original aspirations of records of achievement schemes.

Shortly after SEAC had offered the results of its consultations on the RANSC report to the DES, and before ministers had had a chance to make the expected announcement regarding regulations, the Cabinet reshuffle on 24 July 1989 installed John MacGregor as the new Secretary of State for Education. He then went on holiday and the task of making the announcement fell to Angela Rumbold, Junior Minister of Education for schools. On the grounds that the requirements of regulations should be kept to a minimum ‘so that they do not add to the volume of work already undertaken by schools in recording and reporting pupils’ achievements’, Mrs Rumbold’s announcement was, as half expected, solely concerned with proposed regulations for annual reports to parents concerning the progress of pupils on National Curriculum subjects in both primary and secondary schools. She did not endorse the whole of SEAC’s advice and she appeared careful to avoid using the term ‘records of achievement’; instead she used the phrase ‘recording and reporting pupils’ achievements’ which had less specific connotations.

This change in emphasis did not go unnoticed by education professionals who interpreted Mrs Rumbold’s statement to mean that records of achievement were being killed off. In response, many groups and individuals who felt that this was one government initiative (alongside teacher appraisal) that had won the support of the profession, fired off letters of protest to both the DES and the educational press. By all accounts ministers were surprised by these brickbats and decided to mollify the profession by commending records of achievement. When the Draft Orders were published for consultation on 5 January 1990, they still remained free of any reference to records of achievement and prescribed only a system of annual reports to parents on achievements within National Curriculum subjects. However, they were now accompanied by a draft circular which referred both to PRAISE (in a footnote) and to RANSC and ‘applauded’ records of achievement which, by recognizing positive achievement and seeking to bring schools’ policies on assessment, recording and reporting into a coherent whole, were seen to be ‘very much in the spirit of the National Curriculum’ (paras 27–9).

At this time additional pressure appears to have been brought to bear on the DES from the direction of the Training Agency. In November 1989 the Confederation of British Industry had called for a ‘skills revolution’ in which records of achievement were seen to have a crucial role (CBI, 1989, paras 39–41). On 5 December 1989 the Training Agency responded to the CBI initiative by sending its own guidelines on the use of records of achievement within TVEI to all Chief Education Officers in England and Wales, who were invited to consider them alongside the draft orders on reporting which were about to emerge from the DES. There were marked contrasts between the two sets of documents and, although the DES would probably have denied disharmony between itself and the Training Agency, any observer might have been forgiven for interpreting the sequence of events as a case of the DES being upstaged. The Training Agency guidelines were very much closer to the recommendations of RANSC than either the draft orders or the draft circular from the DES. Moreover, the influence of the Training Agency on the future development of records of achievement was likely to be substantial. By 1990 all local authorities were involved with TVEI and the idea of profiling and individual action planning was built into TVEI contracts. Therefore, the Training Agency was in a position to insist on ‘RANSC-style’ records of achievement for all 14–18-year-olds involved with TVEI. Although the Training Agency was eventually disbanded, responsibility for TVEI continued within the new Training, Enterprise and Education Division of the Department of Employment.

Undoubtedly, the DES became aware that observers were interpreting these events as evidence that two government departments were pulling in different directions. Thus it became concerned to present a united front at the point when regulations on reporting were finally published. Significantly, on 10 July 1990 when the statutory instrument (no. 1381) was issued with Circular (8/90) entitled ‘Records of Achievement’, the Training Agency also issued its own ‘Guidance for those managing TVEI’, with respect to recording achievement.

Despite appearances, the final orders had not changed a great deal from the draft order stage. The regulations prescribed only the minimum requirements for annual reports to parents on individual pupils’ achievements. At the ends of Key Stages these entailed reporting the results of statutory assessments by profile component and National Curriculum subject in terms of the level (1–10) achieved. In addition, parents were given the right to request information on achievement in the separate attainment targets. In the intervening years, when there was no statutory assessment, schools were required to give a written report on achievement in foundation subjects, although how this was done was a matter for discretion provided that it was made clear that any pupil scores given were not based on ‘statutory arrangements’ for assessments. In all years, schools were also expected to provide ‘brief particulars’ of achievement in other subjects and activities and the results of any public examination taken.

Within the accompanying Circular (8/90) the DES went somewhat further by endorsing the principles of records of achievement, as set out in the RANSC report, and commending the practice of fuller reporting on achievement. The Secretary of State for Education also joined with the Secretary of State for Employment in commending the guidelines for those working with the 14–18 age group in the context of TVEI. These set down principles intended to promote consistency in recording achievement and to establish a common format for documentation.

While the Government had moved back to a position of more overt support for records of achievement, its endorsement of principles, commendation of practice and applause for developments fell a long way short of the introduction of a national framework for records of achievement, supported by appropriate forms of validation or accreditation which were, in 1984, thought to be so essential for securing credibility. What we had, then, was a case of selective attention by the Government to evidence and advice, which it solicited, in order to fit policy to a new and different political agenda.

Undoubtedly, anxiety about the increased workload for schools without increased resources was a major issue. So too were the legal difficulties which would surround the introduction of a full-blown national system of records of achievement. (DES officials said that their legal department threw up their hands in horror at the thought of legislating for processes.) However, the DES must have been aware of the legal issues before it published its policy statement in 1984; how else could it have been made public and used as a basis for extensive pilot work supported by public funds? The education profession was not unaware of these difficulties but, for many, the principles of records of achievement, such as pupil involvement and broad-based positive recording, were fundamental and the feeling was that there should be no turning back. It seems therefore that the most serious challenge to Conservative Government policy (circa 1984) was from the Conservative Government of 1990.

Soon there were indications that, in the absence of clear proposals for a national scheme, some schools and LEAs were pulling back from development. Elsewhere, where commitment was high, the dominant view was that National Curriculum reporting requirements could, and would, be accommodated within the broader and more ‘educational’ conception of appropriate assessment, recording and reporting processes which records of achievement provided. If records of achievement were to survive, it now seemed unlikely to be as a result of government policy but because they returned to the grass-roots whence they came. But this was not quite the end of the story. Towards the end of 1990 the Education Secretary announced that a proposed core format for a National Record of Achievement (NRA) would be sent out for limited consultation. This happened on 5 December and by February 1991 the National Record of Achievement had been formally launched by the DES and the Employment Department with the expectation that it would be used by schools with school leavers in the summer of 1991.

This train of events could lead one to assume that PRAISE, through RANSC, had been influential in the policy-making process after all. However, the NRA was principally the brainchild of the Employment Department, not the DES, and the emphasis was on building up a lifelong record from age 16. The stated aims of the NRA indicated direct descent from the 1984 DES policy statement (compare the following with those quoted earlier in this chapter).

  • to recognize and value the individual's learning and achievement
  • to motivate the learner and support continuing development
  • to act as a summary record for use with third parties
  • to support the management of learner centred delivery.

But the way that these aims were to be put into practice received little attention at this time. To all intents and purposes the NRA was an artefact: a set of documents in a plastic, burgundy-coloured, gilt-edged folder which rapidly became known as the ‘wine list’. All the advice contained in the PRAISE and RANSC reports concerning recording processes and the need for a system of quality assurance to give the document credibility seemed to have been ignored. Even the very practical point that school leavers in July need to complete their records by the preceding January or February, if they are to be used in applications to colleges or prospective employers, seemed to have been overlooked at the NRA launch. Understandably, early take-up of the NRA was patchy, although there was considerable incentive in that it was offered free to schools.

After responsibility for the NRA was handed over to the National Council for Vocational Qualifications (NCVQ) in April 1992, attention turned to incorporating action planning, to revising the document to give less of a ‘school-feel’ and to developing a model for quality assurance. At the time of writing, the results of this NRA development programme are yet to be revealed. For many of those who were closely involved in the DES pilots and PRAISE there is a sense of déjà vu and a strong urge to ask whether the government had learned anything from the £10m of public funds spent on piloting and evaluation in the 1980s.

The evidence suggests that if RANSC, and by implication PRAISE, had an influence, it was less on policy-making within the DES than on thinking, policy and practice within schools, LEAs, and TVEI projects. As someone said to me, ‘PRAISE informed the Zeitgeist’ and we need to acknowledge that this may, in the long run, be as important as having the direct impact on government policy-making that we were led to expect. Interestingly, in a paper produced for colleagues in TEED, Macintosh (1992) made the point that: ‘As far as schools are concerned it is likely that NCVQ will go back to RANSC whose recommendations are of course squarely in tune with current best practice in schools’ (p. 16). So PRAISE may have an impact on policy-making after all, but in a very indirect and delayed way that contradicts received wisdom on the role of evaluation.

An evaluation of the evaluation

So what judgment of the ultimate value of the national evaluation of records of achievement does all this lead to? One judgment would be that despite its considerable achievement in informing RANSC, PRAISE has simply shared the fate of many other evaluations. The weight of evidence gathered in independent inquiry has rarely been able to influence the ultimate shape of policy in the face of strong political imperative; therefore we have to countenance the possibility that the PRAISE report (and even the RANSC report) was consigned to the filing cabinet as far as our sponsor (the DES) was concerned. If this was the case then the other audiences for our report – LEAs, schools and, indeed, bodies such as the Employment Department – may emerge as more important, as we probably suspected all along. Since legislation does not proscribe learner-centred, RANSC-style records of achievement, it is likely that a good proportion of schools and LEAs have drawn on the information in the PRAISE and RANSC reports when making decisions about their assessment, recording and reporting systems. In other words there are decision-makers, other than those at the DES, who have made use of the RANSC report and the PRAISE evaluation.

A desire to provide information for these other audiences was, in fact, a main motivation for including detailed illustrations of practice in the PRAISE report and for publishing detailed whole-school case-studies. We knew that much of this kind of information would be of limited interest to the DES, or even RANSC, but we thought it would provide schools and LEAs with some answers to their questions about what records of achievement systems might look like in particular contexts. Of course, this additional use of the PRAISE evaluation was only made possible because we were able to make public our findings. For a number of reasons PRAISE enjoyed a generally good relationship with DES officials. One reason was undoubtedly the relative lack of political sensitivity concerning the records of achievement initiative in its initial stages; another was the channels of regular communication maintained between members of the team and the DES, which resulted in very practical assistance in the printing and distribution of reports.

We were well aware that we could not take for granted this kind of support in disseminating our findings beyond the DES and RANSC. As is now common in government-funded research, we had received a contract which gave the DES the power to impose certain restrictions on the conduct of the inquiry, to claim ownership of the data and to limit our freedom to publish and disseminate findings. We had secured agreement that the DES should not unreasonably withhold publication, and, at the first meeting of the evaluation sub-committee of RANSC, we had tabled a set of ethical guidelines for the conduct of the research which stated that we should not expect to release data or reports without first negotiating clearance with participants. However, had the DES wished to exercise its power to impose restrictions, the status of PRAISE as an independent evaluation, and our desire to communicate our findings to other audiences, would clearly have been put in jeopardy.

For the reasons already mentioned, PRAISE emerged relatively unscathed; none of our reports, or substantial sections of our reports, was suppressed by our sponsor. This is not to say that publication was entirely plain sailing. We encountered a few situations where we were under pressure, usually from senior management of schools or pilot schemes, but sometimes from the DES, to amend or abandon reports. We had anticipated this kind of situation, hence the ethical guidelines which, for example, made provision for a dialogue to validate and ‘clear’ findings with participants. These procedures also allowed them a right of reply if, in the last analysis, our interpretation of the evidence still differed from theirs. Although negotiation over clearance proved to be a very lengthy process, it ensured that detailed accounts of records of achievement policy and practice, at all levels, became publicly available at various points in the life of the project (a kind of controlled ‘leaking’). It also, I think, strengthened our accounts by helping us to ensure that we represented the range of perspectives and value-positions taken in relation to the initiative. Although there were areas in which we undoubtedly fell short, we felt that we had done as much as we were able to do to fulfil MacDonald’s (1977) aspiration for democratic evaluation to provide an ‘information service to the community about the characteristics of an educational programme’.

Implications

The lesson which can be drawn from this experience is quite simple and probably ‘what everybody knows’ anyway. The power of research-based evaluation to provide evidence on which rational decisions can be expected to be made is quite limited. Policy-makers will always find reasons to ignore, or be highly selective of, evaluation findings if the information does not support the particular political agenda operating at the time when decisions have to be made.

Given this reality, the best that evaluators can do is to try to ensure that their accounts are made public so that the ‘citizenry’, and especially those whose lives or practice will be affected by policy decisions, have the kind of information which will enable them to make considered judgments about the wisdom of decisions taken on their behalf. This aspiration therefore defines the essential role of evaluation as supportive of the democratic ideal.

Of course, at a time when increasing restrictions are being placed on funded research, especially government-funded research, it is not always easy to ensure that research-based evaluation reports pass into the public domain. There are limits to what can be achieved by diplomacy, although I think this was a fairly successful aspect of the strategy adopted by PRAISE. Moreover, the extent to which each project can be expected to work out its own salvation is also limited.

In March 1988, a British Educational Research Association (BERA) seminar was convened in Cambridge to consider issues concerning contractual arrangements between researchers and sponsors and to begin to develop a code of practice for funded educational research. The provisional code that emerged from these deliberations addressed the issues concerning publication – which have been a main theme in this chapter – and recommended that research proposals should have reference to:

  • The principle that researcher(s) have a duty to report to the sponsor and to the wider public, including educational ‘practitioners’ and other interested parties. The right to publish is therefore entailed by this duty to report.
  • The right to publish is essential to the long-term viability of the research activity, to the credibility of the researcher (and of the sponsor when it seeks to use research findings in its support) and to the interest of an open society.

(Elliott, 1989: 16)

This provisional code of practice also considered the conditions under which the right to publish might be legitimately restricted, the right of researchers to dissociate themselves from misleadingly selective accounts of the research, and the principle of dissemination as an integral and ongoing part of the research process (not simply as a final act which can fall victim to suppression).

Members of the BERA seminar were also aware of the need for a collective effort on the part of educational researchers and research institutions in order to encourage support for such a code of practice and to put pressure on sponsors to accept it. For this reason a number of recommendations were made to BERA to engage in dialogue with other relevant agencies, including research sponsors, to promote the adoption of a code of practice and to support arguments against restrictive sponsor control of educational research.

One cannot be entirely sanguine about the likely outcome of such moves, even if they receive widespread support in principle, because research sponsors hold most of the important cards – notably the money in which resides the power. Little can now be achieved in research terms without sponsorship, and even funds for non-sponsored research will be received by universities in proportion to judgments of their research output and their ability to attract sponsorship. Thus researchers have been put in the market-place and are expected to compete against one another for scarce resources.

The strategy of divide-and-rule is not conducive to collective effort in support of academic freedom and the principle of intellectual integrity in a free society. Instead it encourages unreasonable cost-cutting and compliance in order to secure scarce research contracts – and often the jobs of the insecure workforce of contract researchers. Thus, those who would wish to promote an agreed code of practice for educational research and evaluation are not exactly arguing from a position of great strength. However, since one cannot expect to win the war without doing battle, it must be worth working to create a climate of opinion, and a critical mass of support, that will reaffirm the values underpinning the pursuit of truth in a democratic and open society. Unless this happens, most research-based evaluation on policy issues will almost inevitably become a bureaucratic device for rationalizing policy decisions already made or in the offing: ‘an unconditional service to those government agencies which have major control over the allocation of educational resources’, lacking independence and having no control over the use made of information (MacDonald, 1977, p. 226).

If, however, we are able to ensure that the information gets out and, as with PRAISE, has some opportunity to inform the Zeitgeist, then we can take some satisfaction from the knowledge that knowledge itself cannot be unknown. Once in the public domain it has the possibility of influencing both practice and policy, although the decision-makers might be other than those to whom the evaluation was initially supposed to be addressed.

References

Brown, S. and Black, H. (1988) ‘Profiles and Records of Achievement’. In S. Brown (ed) Assessment: A Changing Practice. Edinburgh: Scottish Academic Press.

CBI (1989) Towards a Skills Revolution: Report of the Vocational Education and Training Task Force. London: CBI.

Cronbach, L. (1963) ‘Course Improvement through Evaluation’, Teachers College Record, 64, pp. 672–83.

Davis, E. (1981) Teachers as Curriculum Evaluators. London: Allen and Unwin.

DES (1984) Records of Achievement: A Statement of Policy.

Elliott, J. (1989) ‘Towards a Code of Practice for Funded Educational Research’, Research Intelligence, February, pp. 14–18.

Fairbairn, D. (1988) ‘Pupil Profiling: New Approaches to Recording and Reporting Achievement’. In R. Murphy and H. Torrance (eds) The Changing Face of Educational Assessment. Milton Keynes: Open University Press.

MacIntosh, H. (1992) Update on Current Assessment and Accreditation Issues (mimeograph, 25 March).

MacDonald, B. (1977) ‘A Political Classification of Evaluation Studies’. In D. Hamilton et al. (eds) Beyond the Numbers Game. London: Macmillan.

Newsom, J. (1963) Half Our Future. London: HMSO.

Norwood, C. (1943) Curriculum and Examinations in Secondary Schools. London: HMSO.

PRAISE (Broadfoot, P., James, M., McMeeking, S., Nuttall, D. and Stierer, B.) (1988) Records of Achievement: Report of the National Evaluation of Pilot Schemes. London: HMSO.

RANSC (1988) Records of Achievement: Report of the Records of Achievement National Steering Committee. London: DES/WO.

Simons, H. (1987) Getting to Know Schools in a Democracy: The Politics and Process of Evaluation. London: Falmer Press.

Spens, W. (1938) Report on Consultative Committee on Secondary Education with Special Reference to Grammar Schools and Technical High Schools. London: HMSO.

Stufflebeam, D. et al. (1971) Educational Evaluation and Decision-making. Itasca, IL: F.E. Peacock for Phi Delta Kappa Study Committee on Evaluation.

TGAT (1988) National Curriculum: Task Group on Assessment and Testing: A Report. London: DES/WO.
