Andrew Likierman
As the use of performance indicators has spread throughout the UK public sector, academic writing has tended to focus on the implications and consequences. The concerns of practitioners, on the other hand, have often been centred on the technical qualities of the indicators and the costs of implementation and operation. Much less has so far been publicly discussed about the application of indicators and their effect, or what officials and managers have learned so far about devising, implementing and using them.
The early lessons outlined below are based on discussions and written feedback from over 500 middle- and senior-grade managers from all parts of the public sector. The text is illustrated by quotations from some of the written responses to requests for comments on the list, which has been tested and refined in the light of comments and suggestions from 20 groups of officials and managers over the same period. Problems of confidentiality (and potential embarrassment) mean that only some of the quotations and examples are attributed to organizations. In other cases the sector is identified, but not the organization.
A number of caveats are necessary in interpreting the list:
The list has also been tested on groups of private sector managers. Apart from the few elements, such as accountability, which are specific to the public sector, the vast majority of the lessons were seen to apply to private sector organizations, although there was a different emphasis on which were seen as the most important.
As performance indicators have been introduced, the phrase: “What gets measured, gets done” has been increasingly heard, often with concern about the implications for what is not measured. This puts the onus on those who devise the measures to ensure that they are appropriately comprehensive. There are certainly some who are reasonably satisfied. A Chief Probation Officer judged that his report to the Probation Committee was “ ‘good enough’ to give ourselves a broad indication as to the key outcomes we are seeking” and an official in the Department of Social Security extended the principle to individuals in emphasizing that individual objectives “should be linked to job purpose and should involve all the key areas for which a jobholder is accountable and which are within his/her control”.
The problems in this area arise when there are elements of the task which are not included in the list of measures. Thus in the Prison Service, one of the goals [sic] is “helping prisoners prepare for their return to the community”. But the 1993/94 Business Plan acknowledges that the key performance indicator – the proportion of prisoners held in establishments where prisoners have the opportunity to exceed the minimum visiting requirements – measures only one aspect of the Service's performance in achieving the goal. More generally, unless the set of indicators chosen covers all elements essential to completing the task, there will be a danger that “What gets measured, gets done” will backfire and that performance will be skewed towards what is being measured. The Chief Executive of a central government agency commented: “An important factor in setting targets, we have found, is that the whole of resource in a particular area should be covered by targets. If an area is not so covered, there is scope for performance to be manipulated by misallocating costs to areas where no penalty is incurred to the advantage of the area where performance is measured”. This is particularly important when some aspects of an organization's operations, including quality, are difficult to measure. A civil servant noted that “A management unit may cover not only policy work, but also routine administration of settled policies, which may be more readily susceptible to targets. There is a danger that the targets may mislead both the staff inside and managers outside the unit to accord greater importance to the routine work”.
The appropriate number of indicators will be particular to the organization and its aims. It will also need to take account of the diversity of its operations. As a Health Department official observed, "… some parts of the Department's work are more susceptible to the use of performance indicators than others". For some organizations, many indicators will be required, for others few. Too many will make it difficult to focus on what is important; too few will distort action. The dangers of the latter have been exhibited to the point of parody many times – as in the preoccupation of the command economies of the former Soviet Union with output, regardless of quality or demand. Nearer home was the notorious case of a member of the Kent Constabulary who encouraged those charged with some offences to confess to others which they had not committed in order to "improve" the clear-up rate. Certainly the Audit Commission, in consulting on Citizen's Charter indicators, recognized the need to reconcile having enough indicators to reflect sufficient diversity with few enough that the "big picture" did not get lost.
However, the concerns expressed by those contributing to this study focused almost entirely on the dangers of excessive numbers. The need for sampling was emphasized by a senior health executive worried about "the massive public sector tendency to try to measure everything all of the time" and the same sentiments were echoed by a central government agency ("performance indicators should not spawn their own cottage industry of forms and monitoring returns"); by a probation officer ("One of my fears is that we now have so much information being made available that we are unable to use most of it"); and by a local authority ("There is a tendency to have too many indicators … This tendency needs to be resisted, otherwise the indicators lose their impact and value"). From another part of the Health Service came a specific reason for parsimony: "The fewer there are, the better, as this makes them easier to promote and explain".
The difficulties of measuring certain aspects of an organization's operations mean that there is a tendency for the more easily measurable to push out what is not. By extension, within what is more easily measurable, the tendency is for financial to push out non-financial indicators. Quality has proved notably difficult for organizations to measure, and great care needs to be taken to give it proper weight. A central government department emphasized the need to set and assess quality standards “against agreed competence frameworks. This reduces subjectivity in areas where mechanically measurable criteria are absent”. However, an official in one of the armed services commented gloomily: “We are trying to develop the more difficult ‘quality’ PIs and are not yet in a position to offer any up for general consumption. It is possible we will never be, as the differences between organizations and the nature of services provided by them will profoundly influence their design and use of PIs”.
Quality measures may well take more time than others to develop. Social Services in Enfield have a three-year programme for implementing quality assurance, and an example of successfully completed implementation is Down and Lisburn Health and Social Services. They have developed a system of multidisciplinary quality standards for multidisciplinary staff teams. The method in the service for people over 75 was to ask key groups (including the clients) for the valued features of service and to convert these into quality standards. Process, measurement technique and required records were linked to each standard. The system has been extended to services including child health and personal social services for children aged 0 to 5. Still in the middle of the process is the Central Office of Information, which is replacing quality measures based on timeliness of delivery and conformity with specification (“both of which have been problematical”) by a new indicator based on customer satisfaction. Peer review is also increasingly in evidence as a means of measuring quality. For example this has now become an essential element of the Higher Education Funding Council's periodic reviews of universities.
Since public bodies operate in a political context, care needs to be taken that the indicators reflect political constraints and pressures. The balance here is between adequate accountability and recognition of the pressures of political life. “Data can be misused, particularly by the media” was the comment of a senior manager of a National Health Service (NHS) Trust. A consultant within public health medicine dismissed targets for perinatal and infant mortality as “a classic example of performance indicators … which have been included for political reasons only”. The public impact of league tables in sectors such as education and local government has given rise to particular concern, and caution is also common across Whitehall about the activities of parliamentary select committees, notably the Public Accounts Committee. Whitehall departments enjoying good relationships with “their” select committees felt themselves to be in a better position to avoid misunderstandings than those where relationships were distant. On the other hand, there can be political gains from the adoption of measures. Home Office proposals for key performance indicators for the probation service pointed out: “you will remember that the Audit Commission has commented that a robust system of performance indicators could make a major contribution to enhancing the credibility of the probation service”.
"Attitudes change depending on whether you are calling for PIs or you feel they are being imposed on you" commented the finance officer of a quango. A senior executive of a central government agency emphasized: "Agreeing performance indicators is a negotiating process in the broadest sense. If this is ignored it will lead to poor commitment and sense of ownership", and went on: "People must understand what is expected of them, and how this was decided. They must be allowed to contribute to the decision-making process". The Director of Social Services of the London Borough of Hackney emphasized that in involving staff further and further down the organization: "the process of getting them involved and thinking in terms of monitoring and evaluating is, I think, as important as the final document itself". A regional health authority is "currently negotiating with General Managers of DHAs and FHSAs a shared view of what is effective, appropriate and reasonable … agreement by General Managers is considered necessary in order to secure co-operation and a sense of ownership".
It will almost always be difficult for those who are not involved in an operation to understand the potential pitfalls of implementing a new system of indicators, and in a number of cases middle managements have effectively sabotaged imposed systems by not pointing out the pitfalls of implementation. In others, they have helped to ensure that systems are successful by working closely with those charged with implementation. In the City of Sheffield, the Director of the Arts and Museums Department pointed to two theatres where the move was "from grant aid as a simple annual ritual to action with clearly designated outputs … they themselves have come up with a list of nine policy areas, to which they will be applying quantitative measures, and which will form the basis of a service level agreement between the theatres and the City Council". For local government more generally, the finding of Palmer in this issue that a high proportion of indicators have been introduced as a result of internal management proposals bodes well for success, at least in this aspect of implementation.
Since many indicators are set on an annual cycle, the effect of first introducing them may well be to alter the time-scale of managerial effort to a shorter term and to focus on achievement of success on a year-by-year basis. The Operations Director of the Transport Research Laboratory suggested that, as an alternative to targets which are unrealistically long term, this may not necessarily be bad in itself. However, many organizations require a long-term perspective, and this needs to be recognized in the nature of, and timescales set for, the indicators chosen. Remaining in the transport field, short-term performance measured by the number of miles of road built may show an improvement if resources are diverted from maintenance. But, if this results in existing roads having to be expensively rebuilt because the lack of maintenance means that they can no longer be patched up, the short-term improvement will be at a considerable cost. The link can be revealed by road-condition indicators, which can be juxtaposed with measures of additions to the road stock.
Unless managers' efforts are fairly reflected in the indicators, they will be seen at best as not relevant, at worst as unfair and/or potentially distorting to the managerial process. This may mean thinking about indicators as applying in different ways. Thus the Central Statistical Office (CSO) noted that some performance measures successfully motivated the generality of staff by creating a sense of achievement and improvement or by directing attention to the public image. Other measures had more impact on senior managers by highlighting policy issues.
Staffordshire's Chief Probation Officer commented on one of the major difficulties in identifying true indicators of performance: "we are not necessarily measuring the performance of our organization, but more the decisions of sentencers within the criminal justice system". Imprecision in the measures chosen may be an indication that there is a problem. As a civil servant pointed out: "Some objectives may be almost entirely within the control of the unit, and others lying outside its control would not be chosen for performance indicators. But many objectives lie in between … it is not easy to find a formula which fairly reflects the efforts of the unit without sliding into the subjectivity of 'trying to achieve', 'seeking to influence', or 'facilitating others to …'".
Linked to the previous lesson, whatever the organization, there will be events which are outside the control of managers. The way the indicators are operated needs to combine the technical requirements of control with credibility to managers in recognizing the impact of such events. One way of accommodating the problem is to make the measures more sophisticated. The Employment Service (ES) at first had performance targets which did not take account of labour market conditions. “More recently”, as a section head explained, “placing indicators have been formulated so as to be immune from external factors outside ES control. The headline unemployed placing target is based on an assumed level of vacancies; if actual vacancies differ from this assumption, the level of placings achieved can be viewed in context”. Another way of coping with uncertainty is to recognize that revisions of targets may be essential in the light of experience.
If neither of the above is possible, as Lessons 17 and 18 below indicate, the essential element in maintaining the integrity of the indicators is to ensure that the results are not misused. As a senior officer of Copeland Borough Council put it: “We have encouraged managers not to regard indicators as necessarily reflecting their own performance. While in some cases they do, in many others there are outside influences which, at least in the short term, are outside their control. We do not want to discourage the monitoring of service standards because some of the factors are partly or completely outside the short term control of the manager.”
"We have pushed for equalization of performance within the Area and as a result the worst performing offices have generally come up to the standard of the best" (Area Director, Benefits Agency). "The need to secure effectiveness and efficiency in the contracting process (for purchaser/provider contracts) has led to sharing with General managers a summary of all Health Authority performance in this area" (regional health authority). "We have made significant changes in the light of our experience and that of others, particularly other local authorities" (local authority). Such use of comparisons does not appear to be as routine as would be desirable in the public sector. Many organizations have devised and used indicators with very little reference to others within the same sector, or even to other parts of the same organization, and the tendency to reinvent wheels seems to be common in public sector performance measurement. Experience of the appropriate type of indicators, or of their use, may well be available elsewhere, including from comparable organizations in other countries. (Indeed, it may even be available within the UK – a central government department found comparisons between England and Wales, Scotland and Northern Ireland worthwhile in at least raising questions about the reasons for differences.) Even if no comparable organizations are available, the process of finding the reasons why no lessons can be learned has often been useful.
If an indicator has not been used before, it is likely to be very difficult to know what level to set. Careful preparatory work is necessary to see whether a level can be found so that the indicators themselves are not brought into disrepute by being shown to be unattainable or untesting. Many managers feel very strongly on this issue and the words “realistic” and “achievable” occur again and again. “One or two targets which are impossible or too easy to achieve could undermine the whole system” observes the manual of Tonbridge and Malling Borough Council, and from a civil servant: “Successful performance measurement is based on setting realistic and measurable objectives which are clearly linked to the business objectives of the organization and against which assessable success criteria can be established at the outset”. If it is not possible to find a realistic level, a trial period or periods should be used to allow the appropriate levels to be established.
As with any new managerial tool, performance indicators need time to develop. Even with careful preparation, it will take time to discover if there are flaws and unintended side-effects from the way the indicators have been constructed. Thus the Inspectorate of Constabulary started to develop indicators in 1983, and the matrix of 456 indicators first used to make comparisons has been progressively refined. Revisions to central government agency framework documents also indicate the value of the learning process, and in local government a district council official commented that "our prime aim is to continuously develop the criteria for success". In part, revision may also be due to outside circumstances – "What we do and how we measure it is constantly changing" noted a central government agency. The British Airports Authority, when in the public sector, measured passenger service quality with two indicators, one of which was written complaints per 100,000 passengers. The two were seen to be unsatisfactory (for example, negative comments alone are not a good basis for inferring levels of quality), and, over the years, a sophisticated questionnaire procedure has been instituted.
For a variety of reasons, including caution about whether they will be effective and the difficulties of integrating them into existing information and control procedures, indicators have often been introduced into organizations in parallel with existing systems. The result has been costly in time and money and has engendered resentment among managers who have seen the new indicators as an unwelcome (and not necessarily useful) addition to their existing burdens. Early experience of performance indicators in the Health Service was blighted by their apparent irrelevance to many managers. By contrast, one local authority has linked the city council charter to annual service plans and also intends to tie it into the Citizen's Charter. In another, "performance service contracts are now the norm for our managers and in some areas … are used for all staff. This enables us to build PIs into the fabric of the organization and, to coin the hackneyed phrase, the way we do things round here". Three examples from very different parts of the public sector are those in Brignall's article on Solihull MBC¹; the Lord Chancellor's Department, which has integrated circuit objectives into the planning and budgeting processes; and Liverpool's Maritime Housing Association, which, as part of a comprehensive review of the staff structure, has linked individual targets to the business plan.
In order to ensure that the indicators are trusted, the basis on which they are compiled, as well as the message from the outcomes, needs to be understood so that discussion about any action which needs to be taken is well-informed. The Employment Service Office for Scotland emphasized that “good communications to all levels is an important element in the preparation and implementation of performance indicators”, and the area director of a central government agency wrote: “we are conscious of the need to make those measures that bit more … credible with staff and managers”.
Over-complex indicators can distance those who need to take action from the indicators. A local authority manager complained that “the Arts Council and the Regional Arts Boards have spent a considerable amount of time attempting to develop performance indicators, many of which, in my opinion, are over-complex”. In the case of another local authority, review of the annual service plan after a year showed that “some lessons are clear – terminology needs to be clearly understood …”.
Since the outcomes of many public sector activities cannot be easily measured, proxy indicators will be necessary. But unless they are chosen with care, the proxies can distort the decision-making process by emphasizing an inappropriate outcome. As a Health Department official pointed out: "Within the Health Service, while we are always looking for outcome measures, what we have tend to be proxies for them, in some cases process measures, in others output measures – and we need to be very careful to be clear what is being measured and the extent to which it can bear further interpretation". In a similar vein, an official of the Management Executive of the NHS in Scotland indicated the importance of moving from measurement of processes towards "whether patients are healthier as a result of (NHS) interventions and whether they are being dealt with in a way they are entitled to expect". For the Driver and Vehicle Licensing Agency, speed of response and turnaround time were taken as proxies for quality. But, as the Executive Director, Operations noted, market research has indicated that speed is not regarded as the main requirement by customers, and new performance measures are being devised. Even after careful review, the final result may still not be wholly satisfactory. The Director of Education for the London Borough of Croydon pointed out that "they are often the outcome of much painstaking failure to find something better". Finally, it is worth noting an imaginative use of proxies by the Customs and Excise. While drug seizures make the headlines, and the value of drugs prevented from entering the country is monitored, both are acknowledged to be unrelated to the total flow. So the street price is taken as one of the proxies of success.
The introduction of performance indicators can be used to change, and may itself require changes in, an organization's internal and external relationships. For example: "The performance targets introduced when the CSO became an Executive Agency", explained the Head of the Policy Secretariat, "have had a profound positive effect on the way we view our customers and the way they view us. Most revealing of all was the difficulty our customers had in specifying what level of performance they really wanted of us". Internal reassessment of existing procedures should be welcomed as a manifestation of the fact that the measures are proving their worth. The Chief Executive of Hertfordshire County Council observed that "without other management changes, for example financial devolution, the performance indicator culture will not flourish … PIs would merely bob along the top of the organization making the occasional appearance on the agenda of management teams".
“Yes, but are the figures right?” is a constant refrain from those who take performance indicators seriously, or need to do so because their own performance is being measured. “The reliability of available data is a considerable factor in the preparation of performance indicators” commented an official of the Department of the Environment's Property Holdings Finance Division. “We need to be careful in the interpretation of data, particularly where there are a small number of samples” wrote an NHS Trust's personnel director, adding, in response to criticism for having too many trained district nurses at a certain grade, “It is questionable whether the use of this tool in order to prepare league tables of providers' performance is valuable since so much of the data is unreliable”. A Chief Probation Officer indicated that some of the data “is quite unreliable since main grade staff have yet to cotton on to the importance of filling in the forms necessary to make this information available”. Without data which is not only accurate, but trusted to be so, many of the potential benefits from introducing performance indicators may be dissipated.
The appropriate managerial response to performance measurement results is to use them as a basis for discussion among managers with a view to taking action. Results cannot, on their own, provide "definitive" evidence of success or failure and should be used to raise questions, not to provide answers. "… it is the dialogue that arises from review that is the important message" was the advice of the Chief Executive of Arun District Council. In the case of Birmingham Social Services, questions led to a review of objectives after it was found that nursery occupancy could range from 30% to 90%. Analysis of the reasons for the range showed that the nurseries were also providing other services and that these in part depended on "the objectives and professional leadership of local managers". Once objectives had been defined more clearly, occupancy levels rose by 15%, despite staffing shortages. Above all, according to the Assistant Director of Social Services, the discussion about the objectives of day nurseries refocussed the tasks and "finally led to a clear definition of a community day nursery and family centre". More generally, the Audit Commission has long used the technique of publishing profiles of performance of specific activities as a means of encouraging discussion of the reasons for differences between organizations.
The results should always be accompanied by a commentary so that the figures can be analysed and put in context. For example one local authority includes the minimum requirement of “An overview of the quarter … which is a brief summary of the main points of note … accompanied by some short text picking out the main points to note about the indicators and indicating action being taken”. Another, Bracknell Forest Borough Council, has comments in the reports by chief officers where appropriate, as when the improvement in turnround of building control applications was realistically explained by “the better relationship of workload to resources that now exists compared with the peak of the 1980s' boom”. Without a commentary there is considerable danger that the figures will be misused or at least misunderstood, as when, after great efforts were made in one police force to respond to a dramatic increase in car crime, further analysis through the matrix of indicators referred to in Lesson 11 revealed Volkswagen badge stealing as the main reason for the increase.
The level of outside or senior managerial attention to the results indicates how performance indicators are regarded and gives powerful messages to all those involved in preparing and using them. Figures on submission rates in higher education may once have been regarded as “academic” but are now of enormous importance to universities since they affect future allocations of studentships. A central government department noted that for one of their sections “the sensible setting of targets, and senior management's clear interest in their setting and monitoring, has enhanced the section's motivation”. In a central government agency the board holds quarterly performance dialogues with each regional director in preparation for a discussion with the Minister about the results.
Silence in response to results is likely to mean that the figures come to be regarded as irrelevant by those whose performance is being measured. Continued silence will mean that little trouble is taken to fix realistic target levels or to respond to the results. Thus when senior officials in a government department who filled in regular annual reviews found that no action or feedback was forthcoming, they first paid less attention to the results and then attached little importance to the target levels set each year. The North East Thames Regional Health Authority, on the other hand, emphasized the importance not only of the information, but of getting it in on time: "We have initiated a system of penalty points for lateness and incompleteness of routine returns".
If the only reaction of senior management is to emphasize the aspects of results involving failure, the message to those whose performance is being measured will swiftly be that great care is necessary to ensure that targets are set at a level that can be achieved. There is also the danger that considerable time will be spent in making sure that the figures "look right", regardless of the underlying performance, with elaborate alibis to protect those involved. A common criticism among those subject to scrutiny by the Public Accounts Committee is that the Committee has inhibited the development of a more managerial climate by not recognizing the greater element of risk involved in the changing nature of public organizations and by continuing to focus on mistakes rather than on a balance between successes and failures.
Emphasis on one aspect of performance will almost inevitably have effects on other aspects. The use of performance indicators needs to accommodate such effects and the accompanying trade-offs. Failure to do so can result in the performance measurement system as a whole being by-passed or discredited. For example the Employment Service "expresses three placing targets as percentages of the headline unemployed placing target. One consequence of this is that better than expected performance on the headline target can result in under achievement of the percentage subsidiary targets. This slightly paradoxical relationship needs to be clearly understood by managers in order to optimise their ability to meet all the targets and not just the headline target. Similar trade-offs exist between ES's benefit-related targets. Striving to pay people promptly may have consequences for the accuracy with which payments are made … To concentrate single-mindedly on one indicator runs the risk of failing to achieve the other; this may represent less success than narrowly failing to achieve both". Another agency commented on the indicators that "their real strength lies in their overall effect which is particularly important given the highly interrelated nature of our work". A third agency recognized the element of trade-off in ensuring that not all indicators can or should carry the same weight and altered the number of indicators after finding that the weighting towards one activity "sent the wrong messages round the organization". A Ministry of Defence official counselled the importance of careful interpretation: "Apply PIs to those activities that are carrying out the most important operations and not necessarily those where it is easiest to notch up a big score".
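A purely hypothetical illustration of the arithmetic may help; the figures below are invented, and the reading on which they rest (that the subsidiary percentages are assessed against placings actually achieved) is an assumption, not a description of ES practice. Suppose the headline target is 100,000 placings and one subsidiary target is that 25% of placings should be of long-term unemployed people, while the out-turn is 120,000 placings, of which 27,000 are of long-term unemployed people. Then

\[
\frac{27{,}000}{120{,}000} = 22.5\% < 25\%, \qquad \text{even though} \qquad 27{,}000 > 25\% \times 100{,}000 = 25{,}000 .
\]

On these figures the headline target is comfortably exceeded and placings of the priority group rise in absolute terms, yet the percentage subsidiary target is missed: exactly the kind of relationship which, as the Employment Service warns, managers need to understand.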
Managers unable to understand the results of performance indicators will not be able to take appropriate action. What is done need not be very sophisticated, as indicated by a local authority's comment, "Emphasis has been placed on good presentation, particularly the appropriate use of graphs and tables etc., so as to make information as easy to assimilate as possible". The results also need to be presented at the right level of detail – not so aggregated that important trends are masked, nor so disaggregated that the manager is overwhelmed by a mass of detail.
Since indicators may be relevant to different time-scales, the timing of results needs to reflect time-scales for decision. "One of the biggest criticisms levelled at the Health Service Indicators over the years have been that by the time they are disseminated to the service the information is out of date and the world has moved on" commented an official. And a central government agency admitted that a yearly, national survey of customer satisfaction was "fairly useless" because action was not possible as a result – response times were too long to be meaningful, and disaggregation below national level was not possible.
The results for some indicators will need to be reviewed each month, for others the time-scale may be a year. North West Thames Regional Health Authority quotes quarterly immunization uptake rates as the relevant timescale. For some, highly specific time-periods are relevant, for example at particular points during a contract. The London Docklands Development Corporation has established a system that focuses on the elements of project management – input, time and output. Assuming that counters to short-term focus have been built in, this lesson reinforces the importance of identifying the appropriate time-scales for the indicators chosen. The managerial response to the results should also reflect the appropriate time-scales.
Consideration of the early lessons outlined above should help organizations to use performance indicators to better effect. They will not guarantee success, but failure to take them into account could mean not only a waste of managerial time and cash resources but also, potentially more serious, a distortion of managerial action. It could also mean a wasted opportunity for the use of a valuable managerial tool.
1. See S. Brignall, “Performance Measurement and Change in Local Government: A General Case and a Childcare Application”, Public Policy and Management (Oct–Dec 1995).
“Performance Indicators: 20 Early Lessons from Managerial Use”, by Andrew Likierman reprinted with permission from Public Money and Management, Oct–Dec. Copyright 1993 CIPFA.