CHAPTER 9

The new public management: a recipe for disaster?

CHRISTOPHER HOOD and MICHAEL JACKSON

 

The ‘new public management’ (NPM)

The term ‘new public management’ (NPM) denotes an administrative philosophy which came into the ascendancy in the 1980s. NPM dominated the public administration agenda in Australia, New Zealand and the UK (cf. Aucoin 1990; Hood 1990, 1991; Hood and Jackson 1991, 177–97), with variants in Canada and the USA (see Pollitt 1990). NPM's doctrines included the following:

A renewed emphasis on the separation of management or ‘delivery’ responsibility from policy or target-setting responsibilities. Within that, there was a renewed emphasis on ‘management mathematics’ (technical methods of investment appraisal and efficiency criteria) rather than on qualitative or ‘people’ issues.

A disaggregated approach to public sector management, involving the break-up of administrative units into separately managed and self-sufficient entities, corporatised and privatised and dealing with one another at arm's length, on a user-pays basis.

A strong emphasis on cost-cutting rather than overall bureaucratic expansion. Much of NPM's popular rhetoric was taken from what private sector management practice is supposed or portrayed to be. But the emphasis in practice was far more on discipline than on investment, even though the latter is often more characteristic of ‘advanced’ private corporate personnel management practices (cf. Nethercote 1989, 14; Baker 1989, 129).

A management style reflecting traditional private sector corporate practice, in the sense of a preference: for output targets over traditional ‘process’ controls in public bureaucracies; for short-term contract hiring and performance-linked pay over traditional career tenure and uniform fixed salaries; for top-down managerial ‘freedom to manage’ over a pluralist, cross-checking approach to public sector management which limits the power of line managers.

A style of policy-making increasingly based on the ‘new machine politics’, in the sense of a preference for policy driven by opinion polling targeted on marginal constituencies or key electoral groups rather than by paying attention to policy arguments coming from within the public bureaucracy on the basis of long-standing experience (Mills 1986; Hood 1990, 206).

A preference for deregulation, at least in the sense of the replacement of ‘classical’ regulation (Breyer 1982) by ‘light’ regulation, often involving fewer regulators, more ‘self-regulation’ and a conscious attempt to achieve closer cooperation of government and business (corporatism as distinct from corporatisation).

Many of these doctrines, taken separately, are much less new than they appear. They have had their day before in different guises (see Hood and Jackson 1991, 194–5). But even if NPM doctrines are in large part a rediscovery of earlier ideas, there is no doubt that they provide a new fashion, with new slogans and battle-cries.

To the extent that the components of NPM described above represent a radical break with the practices of the immediate past, there are plenty of administrative conservatives who view the new approaches as a recipe for disaster in a metaphorical sense. What is to be discussed here is whether NPM is a recipe for disaster in a literal sense. Does the new approach help to create or develop a set of organisational conditions conducive to the occurrence of socially-created disasters?

Without the ‘hard data’ (Atkin 1977, 1) which would allow us to answer that question definitively, this chapter is devoted to an exploratory discussion of the possibility that NPM doctrines may contain some of the organisational ingredients for the production of socially-created disasters. To develop the argument, we first look at the literature on the social organisation of disaster. We then consider four aspects of NPM which might have disaster-producing effects, looking particularly at the new stress on objectives to the neglect of process. Our conclusion is that there is at least a prima facie case for considering NPM to be in part ‘a recipe for disaster’.

Disasters – and how to organise them

We use the term disaster in its conventional sense, to mean out-of-the-ordinary events which result in extensive death, injury, damage or economic loss, and which reveal the weakness of social arrangements designed or believed to prevent such occurrences. Traditionally, such events were viewed as acts of God. Even so, such ‘acts of God’ were seen in ancient times as ‘socially created’ in the sense that they represented God's anger against avoidable human sin, and were not simply random shifts of nature. In that sense, someone (the victim or enemies) was to blame for every disaster.

Later, European civilisation gained what Elias (1956) terms detachment from moral responsibility for the creation of disasters. Detachment involved relating the occurrence of disasters to ‘impersonal’ physical, geological, climatic, geophysical, chemical and other laws rather than to human responsibility.

Today, we are perhaps returning to the ancient world's perception of disasters as socially created. From a cultural theory perspective, Mary Douglas has suggested that patterns of blame may be shifting from a ‘no fault’ style to an ‘adversarial’ style associated with a ‘sectarian’ culture, for instance in the association of pollution with the sinfulness of corporate greed in contemporary environmentalist rhetoric (cf. Douglas and Wildavsky 1982). More specifically, there is increasing awareness of ‘man-made’ or socially created disasters. After a decade of disasters which include the world's worst-ever industrial accident (the release of methyl isocyanate from Union Carbide's Bhopal plant in 1984, killing 2,000 people), and the world's worst-ever nuclear accident (the Chernobyl nuclear reactor explosion in 1986, which killed 31 people and released radioactive particles across Europe) there is increasing consciousness of risks from potential disasters which are purely social, not divine, in origin.

The distinction between socially-created and natural disasters is admittedly a fuzzy one. Instead of a sharp dichotomy, it is helpful to think of a continuum stretching from purely natural disasters (quite unpredictable events which are solely created by forces outside human control) to purely social disasters (events which are solely the product of decisions made about and within human organisations rather than of shifts in the forces of nature). Between those two extremes is a large group of ‘hybrid’ disasters which are a compound of human decisions and volatile ‘natural’ forces. We illustrate the spectrum in Table 9.1. Probably the bulk even of ‘natural’ disasters today are in the centre rather than on the edge of that continuum – as in the case of bush fires which are started (as they often are) by pyromaniacs or flood damage which is caused by inappropriate zoning decisions or even by flood-abatement works in other localities.

What is of interest here is the way that social disasters are ‘organised’. It is perhaps paradoxical to think of social disasters as ‘organised’, for they are typically associated with the reverse of what we normally think of as ‘organisation’, but it is possible to see them as the product of elaborate human organisation, interacting with natural forces and technology – often of a type which would be quite difficult to achieve or replicate deliberately (Turner 1976, 56).

TABLE 9.1  The continuum from ‘natural’ to ‘social’ disaster

Purely natural
  Causation: not caused by any human activity and not predictable by current technology.
  Example: a meteorite strikes the earth without warning.

Hybrid
  Causation: caused by the interaction of human activity and natural forces.
  Example: floods ravage a community built in a known floodplain.

Purely social
  Causation: caused by human activity alone.
  Example: multiple failure causes a nuclear reactor to melt down.

Researchers and analysts concerned with socially-created disasters have suggested some of the ways in which such events are organised. Turner (1978, 189–90), an early researcher in the field, identified the two basic ingredients for socially-created disasters as misinformation coupled with a means of discharging large amounts of energy. Perrow (1984), in a later influential book on high-risk technologies, added a third condition which is perhaps a refinement of Turner's first one – an organisational structure which combines highly complex interactions among the parts with ‘tight coupling’. Tight coupling is a condition in which recovery from error is hard to achieve and in which malfunctions in one part of a system immediately affect other parts of the system.

Wildavsky (1985) argues that major disaster can be created by a structure in which emphasis is placed on making each individual part of a system secure, rather than on building ‘resilience’ into the system as a whole. Wildavsky's argument is directed against strategies based on ‘anticipation’ of disaster, which are designed to exclude the possibility of it ever occurring, and in favour of constructing a social system which is ‘resilient’ against the inevitably unanticipated disasters that will occur, through the avoidance of ‘tight coupling’ in social organisation. We doubt whether the distinction between ‘anticipation’ and ‘resilience’ is entirely robust, but Wildavsky is certainly far from alone in arguing that measures designed to prevent the occurrence of disasters may, paradoxically, create such disasters.1

If these analysts are correct in their observations, a recipe for creating hybrid or purely social disasters must involve:

(i) coupling up misinformation with sources of energy discharge;

(ii) choosing organisational structures or technologies which combine complex interactions with tight coupling;

(iii) a ‘segmented’ approach to handling social risk.

Energy discharge is an ingredient in plentiful supply, in that today's developed societies are historically distinctive in the degree to which they use energy (see Chateau and Lapillone 1982, 201). Energy can come from ‘natural’ forces (earthquakes, storms) or from socially managed ones (explosions, chemical or biological reactions). We tend to think of ‘energy discharge’ as meaning spectacular explosions, but massive effects can be produced from the chaining of apparently low-energy sources, such as biological or computer viruses.

The misinformation part of the recipe involves the holding of assumptions which prove to be unfounded, crucial inaccuracies, and the lack of a synoptic perspective on a system which is breaking down. Complex bureaucratic and inter-bureaucratic organisation structures are in principle likely sources of this ingredient, provided they are ‘tightly coupled’. The traditional machine-like compartmentalisation of bureaucracies is accentuated further by the advent of sophisticated but failure-prone computer systems on which public and private bureaucracies nowadays depend. If there is any truth in the adage that ‘to err is human, but to really foul things up, you need a computer’, this ingredient of system failure is becoming increasingly plentiful.

Common to many socially-created and hybrid disasters seems to be an accumulation of minor irregularities or deviations, each of no great moment in itself, but which combine to produce a dramatic failure. An adequate ‘incubation period’ (Turner 1978) seems to be a key element in organising a major disaster, for three reasons. First, there needs to be enough time for a series of crucial signals to be misread or for accumulating evidence to be ignored or misinterpreted through failure to bring it to a single point where its significance could be appreciated.2 Second, there needs to be enough time for organisations to work their way into ‘competency traps’, in which they have learned how to do the wrong thing better (Levitt and March 1988). Third, there needs to be enough time for all the minor events to accumulate and interact before they are ready collectively to produce major system failure. A text-book case is the series of erroneous corrections following a misreading of a default signal for current flowing over a feeder line, leading to the total shutdown of the electric power system in New York City within 20 minutes in 1977 (Pavlak 1988, 22–3).3 The total shutdown of the French national power grid for several hours in 1978, following the failure of only one feeder line (between Paris and Lorraine), is an even more dramatic example of the same kind.

The third organisational property – high interactive complexity linked with ‘tight coupling’ – is treated by Perrow essentially as an integral part of certain technologies rather than as built into organisational styles per se (his examples include space missions, nuclear accidents, military early warning systems, genetic engineering, chemical plants, aircraft: see Perrow 1984, 327, Table 9.1) (see also Figure 10.2 below). Wildavsky extends the argument to a broader sphere by suggesting that strategies designed to deal with anticipated risks on a segmented basis may in fact help to create, or at least exacerbate, disasters. This is because such strategies create a false sense of invulnerability and prevent social learning, by failing to recognise the ‘tight coupling’ built into modern urban/industrial society, which makes many kinds of disaster unanticipatable.

TABLE 9.2  Hypothetical link between organisation and social/hybrid disaster, using Perrow's (1984, 327) technology matrix

Tight coupling, linear interactions
  Degree of ‘integration’ may help to shape disaster?

Tight coupling, complex interactions
  Accidents ‘normal’ with this technology: organisation irrelevant?

Loose coupling, linear interactions
  Little scope for organisation to create disaster here, e.g. through communication failure.

Loose coupling, complex interactions
  Culture and integration may help to shape disaster.

If such analysis has any force, three consequences follow. The first is that organisation matters for the occurrence or non-occurrence of at least some kinds of disasters. This conclusion is opposed to the ‘null hypothesis’ that organisation makes no difference and that disasters are like teenage acne – something that will happen and will have to be put up with, irrespective of what patterns of organisation we adopt.

The second (more tentative) implication is that organisation is likely to matter more for some kinds of potential disasters than others. Probably organisation – particularly public organisation – does not matter much for the hazards described by Perrow as associated with tightly coupled and interactively complex production processes. Here the crucial decision seems to lie in whether or not to adopt the technology, since on Perrow's argument major disasters are ‘normal’ for that kind of technology. Nor is it clear that organisation necessarily matters for the kinds of hazards in the opposite corner of Perrow's matrix – i.e. those associated with loose coupling and linear interactions. In this area, there is not much scope for disaster creation through organisational overlaps and underlaps. It is the other two quadrants of Perrow's matrix – loose coupling associated with complex interactions or tight coupling associated with linear interactions – where organisation seems most likely to matter. It could tentatively be suggested that these quadrants contain the elements of many middle-level technological disasters (e.g. Bhopal) and of what we have termed hybrid disasters – i.e. those created by the interaction of human choice and natural hazards. We spell out this tentative idea in Table 9.2.

Third, we can use these ideas to build up an exploratory picture of what kind of organisation would be needed to avoid the creation of social and hybrid disasters. To some extent, this parallels at a different level the literature on organisational ‘safety culture’ (cf. Westrum 1987; Turner et al. 1989). The recipe would seem to require:

(i) A low-energy context – little conjunction of people with volatile sources of ‘natural’ energy discharge, or small amounts of energy under the direct or indirect control of the organisation.

(ii) An information structure placing strong emphasis on reliability and based on a single interactive system rather than a set of decomposable units.

(iii) An authority structure in which warning bells and challenges to received ideas can be initiated from several points, and in which there are multiple redundant units, to strengthen resilience and work against the effects of tight coupling.

(iv) An incentive structure in which output imperatives coexist with procedural and safety goals, and in which counterweights to information-distortion are developed.

The argument which we will develop below is that government organisation, at least in the civil sphere, traditionally possessed many of these four properties, but that it decreasingly does so. Some of the change is certainly attributable to technological change. Some of it, however, may be attributable to the doctrines of NPM.

Government as disaster manager – and creator?

Government has a long-standing role in each of the four conventional stages of disaster management: mitigation, preparation, response and recovery. That role remains controversial to the extent that new types of hazard create new problems of response strategy, new technologies for predicting or preventing hazards develop, and traditional dilemmas remain unsolved – for example, in deciding the mix between anticipation and resilience, or the degree of conservatism to adopt in condoning risk.

In spite of such debates, few would deny government any role at all in responding to disasters, even in today's era of ideological preference for ‘small government’. But what passes in relative silence even today is the potential of government itself to create disasters, either by its own organisation or by the way that it combines with the wider society.

The lack of attention to this potential is hardly surprising. Until quite recently, government organisation in Australia, New Zealand and the UK had comparatively few of the three ingredients of the recipe for socially-created disaster discussed earlier, at least in the civil sphere. The main energy source directly controlled by government was its military power. Here, it is true, government could create disasters which dwarfed most natural disasters. However, government organisations in the main involved low complexity and loose coupling. They followed what Kochen and Deutsch (1980, 35) call the principle of pluralisation – that is, they consisted of a large number of parallel units operating more or less independently in different places. Hence if one post office or customs house was burned down, the rest of the system was only slightly affected and recovery was straightforward. Misinformation was limited by a traditional system of organisation in which procedural rather than output control was emphasised, experience carried weight, detached career officials (and later, entrenched union representatives) were expected to challenge the beliefs of elected politicians and vice versa, and lifetime career service rewarded trustworthiness over time and reduced incentives for bending the rules for short-term advantage. Of course, even that traditional, process-oriented approach did not avoid misinformation altogether or entirely prevent the kind of egregious decisions described by Neustadt and Fineberg (1983).

It is true that government always had the capacity to create civil disasters, sometimes in efforts to avert them – for example by attempts to concentrate the sick which brought on ‘relief-induced deaths’, as in the case of European treatment of Australian Aborigines (Bates 1966).4 Today, however, government's capacity to manufacture social disasters has greatly increased.

There are at least four reasons for making this suggestion. First, government now directly or indirectly controls a host of energy sources far more lethal than traditional military power. Second is the effect of growing urban density and population, which raises government's disaster-creating capacity in the way that it interacts with society – for example, by permitting real-estate development in hazardous areas, or by failure effectively to regulate hazardous processes. Third is the effect of the new technology of administration, notably in automated information processing. Interactive complexity and tight coupling have been built into public administration by the replacement of the old pluralised structure of independent units by large and highly volatile computer systems that have impressive capacity to create disaster through the effects of human error, malicious damage and fraud, as well as of machine failure (Bennett 1983, Gall 1988).

Fourth – more to the point, for the thesis of this chapter – there is a strong case for arguing that the potential for misinformation within government has been increased by the breakdown of the traditional Public Administration paradigm. Two reasons could be advanced for this assertion. One is that the old Public Administration had a ‘system’ view of government organisation, whereas NPM sees it as a set of discrete organisations to be managed in isolation from one another on broadly private-sector lines. The other is that the old Public Administration put the emphasis on a ‘public service ethic’ among senior administrators (by career tenure and seniority rules which were designed to promote trust and a long-term view) and the cultivation of a broad sense of social responsibility by those called to government service, rather than the narrow concept of ‘bottom line ethics’ (Jackson 1989, 173) fostered by NPM.

Much of this growth in government's capacity to engender socially-created disaster comes from the changing context and technological development. But what we want to suggest here is that NPM adds to these factors some of the organisational ingredients for the production of disasters discussed above, and that this may be of particular importance for ‘hybrid’ disasters and those which have the characteristics of the top-left and bottom-right quadrants of Table 9.2. Four features of NPM in particular seem to have this property. These are:

corporatisation and privatisation;

deregulation and accommodation to business;

cost-cutting and single-goal management;

downgrading of policy expertise by top public managers.

Corporatisation and privatisation

Central to NPM, as explained above, are strategies of corporatisation and privatisation. These could help to create the organisational conditions for major disaster in two ways.

First, by its emphasis on breaking up government organisation into separate autonomous cost centres, NPM may further accentuate the tendency, always present in large-scale organisation, to divide responsibilities for handling services which potentially interact in extreme conditions.5 It may well be that corporatised or privatised entities will be pressured by insurers (provided that these do not act as passive conduits passing on liability for risk) to attend closely to system-specific risks, and privatisation will tend to mean corporation-specific insurance rather than the traditional government policy of acting as its own insurer. Yet such a strategy may also move the focus of attention away from risks which are not system- or corporation-specific but which arise from the interaction of different systems. An extreme example is the management of groundwater and the management of nuclear power plants, which the Chernobyl disaster brought into sudden and dramatic conjunction.

Not only do NPM doctrines reduce the incentives for the ‘corporatised’ component units managing what may be a tightly coupled social or environmental system to focus on interaction effects between services in abnormal conditions, but strategies of corporatisation and privatisation may also erect further substantial barriers to intercommunication, carving the compartmentalisation endemic to all bureaucratic organisation even deeper into stone. In particular, heavy emphasis on profit or output targets, user-pays doctrines, commercial secrecy and even competition may accentuate such tendencies. For example, if members of a government scientific organisation are put into competition with one another, as NPM requires, they will no longer have the collegial incentive to share information. If each national weather bureau treats its information as a commercial resource, available to others only at monopoly prices, international pooling of meteorological information may become more difficult, such that the overall quality of weather forecasting might fall in spite of technological improvements.

Second, NPM may serve to increase the likelihood of misinformation by setting up substantial incentives for distortion and concealment of information about organisational functioning. Again, this is a tendency which is endemic to some degree in all organisations, but NPM tends to exaggerate it by the way that it seeks to link reward to performance, as measured by a few key output indicators, both in replacing direct service by contracting (where the contractor has substantial incentives to lie about the level of performance and to conceal information on a ‘commercial secrecy’ basis) and in the management of bureaucracies themselves. Such a ‘bottom line’ management approach exaggerates the incentives, always present in large organisations, to ‘massage’ statistics for the record or to say what superordinates want to hear. Such tendencies often help to lead up to the ‘cultural surprise’ that disaster involves.

Deregulation and accommodation of business

Another central thrust of NPM is a mood of deregulation and greater accommodation of government to business. The doctrine – or ‘mood’ – is that too much regulation by government stifles development, needlessly adding to the ‘compliance costs’ of business. With growing development of high-risk technologies and increasing movement of human populations into hazardous areas, this change could be another powerful trigger for disaster. An obvious example is the relaxation of zoning restrictions on real-estate and other development in hazard-prone areas, and the relaxation of regulatory procedures for hazardous technologies.6 Another case is the nuclear accident at Three Mile Island, Pennsylvania in 1979 (releasing radioactive gases into the atmosphere and bringing the reactor core within 30 minutes of complete meltdown), which has been attributed in part to weak inspection and enforcement by the US Nuclear Regulatory Commission (NRC), coupled with the designation of cooling systems as ‘non-nuclear’ and hence not subject to NRC inspection (Toth 1986).

Again, what this mood does is to accentuate the tendency toward ‘capture’ and lax enforcement which is already endemic in government regulation. It accentuates it both by making a virtue of what was traditionally seen as a failing, and by making regulatory structures take more weight as a result of privatisation strategies.

Emphasis on cutting costs and on management by simple objectives

A third part of NPM – perhaps its strongest feature – is its emphasis on vigorous cost-cutting and on setting clear-cut output targets for the public managers to achieve. Both of these are potential ingredients for the production of what Perrow (1984, 172) terms an ‘error-inducing’ system.

Cost-cutting pressures have the potential to contribute to the occurrence of such disasters in at least four ways. First, such pressures usually result in the heaviest cutbacks being made at the lowest level of government, where first responsibility for disaster mitigation, preparation and response is typically placed.

Second, hazard mitigation measures are likely to be in the front line for funding cutbacks (except in the immediate aftermath of a disaster), since this kind of government activity has little voter appeal or political constituency during ‘normal times’ (Ender and Kim 1988, 72). It produces no photo-opportunities. Hence the NPM cost-cutter, swinging the axe at activities which lack visible effects or which are hard to reduce to some easily measurable bottom line, is likely to see hazard mitigation as a suitable case for pruning. The ethos and incentives of NPM discourage pouring resources into ‘unproductive’ insurance against a worst case that may never happen. In this sense, it does not matter whether one takes the side of ‘resilience’ or ‘anticipation’ as the better disaster strategy. This sort of cost-cutting is likely to be inimical to both.

Third, cost-cutting pressures invariably fall most heavily on back-up and maintenance activity, including the maintenance of human resources through staff training or development measures. Such cost-cutting decisions may not even be consciously taken; they may merely be implicit in a strategy of accepting the lowest tender for a contract (perhaps in order to be able to demonstrate cost savings from contracting out work formerly performed directly by public bureaucracies), which puts the contractor under such cost pressures that corners may have to be cut. Yet it is staff competence and maintenance activity (as with Chernobyl, Bhopal and Three Mile Island) which is frequently at the heart of the series of minor errors or irregularities which combine to trigger a major disaster. Back-up facilities are costly, and are often not provided in private business. For example, it was reported in 1986 that only one in 25 Fortune 500 companies had taken precautions against computer disasters (Sydney Morning Herald, 6 October 1986). To the extent that NPM advocates government following standard private sector practice, it might question the logic of elaborate back-up or expensive maintenance of computer systems for air traffic control or road traffic control. But the ‘downside’ here is more than a profits crash.

If prospective back-up facilities are a vulnerable target for the NPM cost-cutter with eyes firmly fixed on the short-term bottom line, maintenance is another soft target, and here again private sector practice often shows the way to save money in the short term. Particularly in a poll-centred policy-making system which suppresses the influence of people with experience within the bureaucratic system, maintenance activity is likely to be seen as unexciting and unproductive; yet such activity is often crucial to the safe management of hazardous technology. The obvious private-sector example is the gas release at the Union Carbide plant in Bhopal in 1984. According to Morone and Woodhouse (1985, 171), this accident was caused by a set of factors which included inadequate training of maintenance workers (which resulted in cleaning workers letting water into a chemical storage tank) and cost-cutting measures (the cooling system for the storage tank had been turned off, which had the effect of speeding up the subsequent chemical reaction). Similar causes were alleged to have magnified the effects of the fire at King's Cross underground railway station in London in late 1987, in which 31 people died, mainly from the effects of cyanide fumes from burning plastic wall panels. The inquiry into the disaster found that the station staff had no training in how to evacuate the station in an emergency, and that the physical condition of the station site had at least 40 shortcomings, 22 in the escalator shaft alone (Sydney Morning Herald, 22 June 1988). Whether or not economy had in fact been put before safety in this particular case is debatable, but cutting down on the easy targets has all too much potential to produce this kind of result.

Fourth, cost-cutting can easily be associated with other potentially disastrous factors. One is a drive to reduce redundancy in organisation in order to save costs. Among the quick fixes to show results beloved of NPM, ‘rationalisation’ of duplicated facilities is a tempting way to save money. But it is also a good way to increase the odds of a disaster occurring, and perhaps to reduce ‘resilience’ if it does. It can lead to the generation of misinformation by cutting down on ‘multiplexing’ (‘loose coupling’, multiple redundant channels of organisation).

In addition to the mood of tough cost-cutting built into NPM, its preference for setting clear single-target objectives for managers to achieve also contains potential for disaster. Trying to reduce public management to a simple McDonald's-hamburger formula can easily set up the ‘cultural’ conditions often associated with the creation of disasters by large organisations. This is what Ackoff (1974) calls Type II error, i.e. getting solutions to the wrong problems rather than the wrong solutions to the right problems, and it is the problem of ‘competency traps’ mentioned earlier. It takes a degree of slack, ambiguity and ‘peripheral vision’ to avoid this kind of problem: asking the right questions is the hard part. NPM not only ignores this problem, but downgrades it by its shift of emphasis from ‘policy’ to ‘management’ (cf. Boston 1989, 114).

Downgrading the policy competence of top public managers

The final dimension of NPM which deserves comment in this context is its emphasis on management by executives hired on term contracts rather than with traditional career tenure (sometimes with some regard, at least, to political sympathy). This is coupled with the downgrading of the policy competence of top public managers relative to their ‘managing’ skills, and with a new style of policy-making which relies on paying close attention to public opinion polling rather than to experience developed within the bureaucracy. Both could be recipes for disaster.

Emphasis on term contracts for public managers weakens the impact of experience and independent advice on public policy making. At worst, such a system can produce the conditions for ‘groupthink’ – a well-known term coined by Irving Janis (1972, 8) to denote the deterioration of mental efficiency, reality testing, moral judgement and independent critical thinking that can result from excessive in-group cohesion. It can occur in even the most talented group of individuals. Indeed, it is most likely to occur in an able and cohesive group.

The symptoms of groupthink are: illusions of invulnerability leading to imprudent risk-taking; rationalising and discounting of warnings that might lead to reconsideration of policy; uncritical assumption of the in-group's morality; stereotyped views of out-groups as evil, weak and stupid; pressure on dissentients to conform; self-censorship by group members to prevent deviation from the perceived group consensus; a shared illusion of unanimity among group members; and ways of screening out information that might destroy the group's shared but untested beliefs (Janis 1972, 197–8). Using examples which include dramatic errors in US foreign policy and spectacular corporate misjudgments, Janis argues that such processes often lie behind the misreading of information which leads to the ‘cultural surprise’ constituted by disasters. But a term-contract approach to hiring public managers, as espoused by NPM, is unlikely to create the qualities of independence and dissent that are needed as antidotes to groupthink.

A style of policy-making which downgrades the voice of experience from the bureaucracy and emphasises the voice of the marginal voter through targeted opinion polling is another aspect of NPM that may contribute to the creation of disaster. It is notoriously difficult to assess risks which have low probability but high consequence (a definitional feature of disaster). This difficulty may account for the complacency of citizens in relation to risks of major disaster, in contrast to the competence and realism which they exhibit in weighing up risks involving higher probability and lower consequences.

Such complacency, channelled and amplified into the central arena of policy-making through opinion polling, is likely to reinforce the tendency to under-provide disaster prevention measures. Such measures, like defence in peacetime, are liable to seem unpopular on the basis of shallow, instant-response questioning, because of their tax cost, their regulatory intrusiveness and their tendency to create alarm (as in the case of emergency drills and warning systems). A poll-centred approach to policy-making is likely to reinforce the low priority already placed on disaster mitigation and preparedness, even though it will not prevent government from getting the blame when avoidable disasters inevitably occur. Ironically, the polls which NPM tends to treat as the oracle for public policy often carry a margin of error that exceeds the differences they reveal between preferences for alternative courses of action.
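The point about polling error can be made concrete with a standard back-of-envelope calculation. The sketch below (ours, not from the original argument; the poll figures are hypothetical) computes the conventional 95 per cent margin of error for a sampled proportion: a poll of 1,000 voters splitting 52/48 carries a margin of roughly ±3 percentage points on each figure, so the apparent 4-point gap lies within the combined sampling error.

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a proportion p
    estimated from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical poll: 1,000 respondents, 52% favouring option A.
moe = margin_of_error(0.52, 1000)
print(f"+/- {100 * moe:.1f} percentage points")  # about +/- 3.1
```

Real polls targeted on marginal constituencies typically use far smaller sub-samples, which widens the margin further (halving n inflates it by roughly 1.4).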

Conclusion

NPM is presented by its advocates as a recipe for more efficient government and more politically responsive bureaucracy. Whether it will actually achieve these goals remains to be seen. No hard-data evidence has yet been produced to back up these claims: NPM relies, just as did the Old Public Administration before it, on selectively drawn case histories and lines of argument. It is built, like all rhetorical argument, on the logic of what Aristotle (1984, 2157, 2221) called enthymeme rather than dialectic. Enthymeme is a maxim followed by a reason, not an exhaustive investigation of cases.

Even if it turned out to be demonstrably true that NPM is a reliable recipe for cheaper and more responsive government, it is still relevant to ask what might be the side-effects of such a ‘wonder drug’ for public management. One of the possible side-effects, on the arguments presented here, may be a heightened tendency to produce socially-created disasters, particularly of the ‘hybrid’ type. Unless we are prepared to carry Wildavsky's argument to its logical conclusion and argue that the creation of avoidable social disasters is actually beneficial (because every disaster which does not actually destroy the entire human race strengthens ‘resilience’), that is an additional cost which would need to be set against the benefits allegedly delivered by NPM. At the least, there seems to be a prima facie case to answer, since NPM does appear to contain several of the organisational ingredients which have been associated with socially-created disasters. At the worst, NPM could be a disaster waiting to happen.

Acknowledgements

This paper was first presented at a seminar on ‘New Directions in Civil Disaster Management’ at the University of Sydney in May 1989. Thanks are due to the editor of the Canberra Bulletin of Public Administration for permission to include a revised version of that paper in this volume. In addition to the participants at the University of Sydney seminar, we are grateful to Martin Painter (Sydney), Herbert Kitschelt (Duke) and Wolfgang van den Daele (Bielefeld) for helpful comments on a previous draft of this chapter.

Notes

1. An example, on a small scale, is Adams' (1985) ‘risk compensation hypothesis’ as applied to automobile safety measures. Similar phenomena are referred to by Perrow (1984, 131 and 179).

2. The stock example is the information available to the US intelligence services in 1942, which would have enabled them to predict the Japanese attack on Pearl Harbour if the relevant information had been identified and assembled (see Turner 1976, 56).

3. Wildavsky (1985, 7) sees this as an example which supports his ‘resilience’ thesis, in that it was the product of a procedure which attempted to prevent failure in any one part of the city's electric power system, rather than ‘dumping’ loads by blacking out one entire neighbourhood in an emergency.

4. A more recent case of this ‘traditional’ kind of government-created disaster is the measures adopted by the Ethiopian government in 1975. In order to save nomad families from death by famine, famine victims were placed in camps located near to main roads in order to facilitate supply. But the camps lacked proper sanitation, and the nomad peoples had no resistance to mundane diseases such as measles or gastroenteritis. The inevitable result was that the relief operation killed almost as many people as the famine (Knightley 1978).

5. Of course, this is a problem with disaster management organisation itself (see Britton 1986, 120).

6. An extreme example, given by Ruchelman (1988, 53) is the case of those California nuclear power plants which have been permitted to locate on active geological fault lines.

REFERENCES

Ackoff, R.L. (1974). Redesigning the Future: A Systems Approach to Societal Problems, Wiley, New York.

Adams, J. (1985). Risk and Freedom, Transport Publishing Projects, London.

Aristotle (1984). The Complete Works of Aristotle, (the revised Oxford translation, ed. J. Barnes, Bollingen Series LXXI.2), Princeton University Press, Princeton.

Atkin, R.H. (1977). Combinatorial Connectivities in Social Systems: An Application of Simplicial Complex Structures in the Study of Large Organizations, Birkhauser Verlag, Basel.

Aucoin, P. (1990). ‘Administrative Reform in Public Management: Paradigms, Principles, Paradoxes and Pendulums’, Governance, 3(2), 115–37.

Baker, J. (1989). ‘Management Improvement: The Human Dimension’ in Davis et al, 126–37.

Bates, D. (1966). The Passing of the Aborigines, 2nd ed., John Murray, London.

Bennett, J.M. (1983). ‘Large Computer Project Problems and their Causes’, Australian Computer Bulletin 7(3), 18–27.

Boston, J. (1989). ‘Corporate Management: The New Zealand Experience’ in Davis et al, 103–25.

Breyer, S. (1982). Regulation and Its Reform, Yale University Press, New Haven.

Britton, N. (1986). ‘An Appraisal of Australia's Disaster Management System following the “Ash Wednesday” Bushfires in Victoria, 1983’, Australian Journal of Public Administration XLV, 112–27.

Chateau, B. and Lapillonne, B. (1982). Energy Demand: Facts and Trends, Springer-Verlag, New York.

Comfort, L.K., ed. (1988). Managing Disaster: Strategies and Policy Perspectives, Duke University Press, Durham.

Davis, G., Weller, P. and Lewis, C., eds (1989). Corporate Management in Australian Government, Macmillan, Melbourne.

Douglas, M. and Wildavsky, A. (1982). Risk and Culture, University of California Press, Berkeley.

Elias, N. (1956). ‘Involvement and Detachment’, British Journal of Sociology 7, 226–52.

Ender, R.L. and Kim, J.C.K. (1988). ‘The Design and Implementation of Disaster Mitigation Policy’ in Comfort, 67–85.

Gall, J. (1988). Systemantics, General Systemantics Press, Ann Arbor.

Hood, C.C. (1990). ‘De-Sir-Humphrey-fying the Westminster Model of Bureaucracy’, Governance 3, 205–14.

Hood, C.C. (1991). ‘A Public Management for All Seasons?’, Public Administration 69, 3–19.

Hood, C.C. and Jackson, M.W. (1991). Administrative Argument, Dartmouth, Aldershot.

Jackson, M.W. (1989). ‘Immorality May Lead to Greatness’ in Prasser et al, 160–77.

Janis, I. (1972). Victims of Groupthink: A Psychological Study of Foreign Policy Decisions and Fiascoes, Houghton Mifflin, Boston.

Knightley, P. (1978). ‘Disasters: How the Helpers Make Things Worse’, Sunday Times (London) 25 June, 13.

Kochen, M. and Deutsch, K.W. (1980). Decentralization, Oelschlager, Gunn and Hain, Cambridge, Mass.

Landau, M. (1969). ‘Redundancy, Rationality and the Problem of Duplication and Overlap’, Public Administration Review 29(4), 346–58.

Levitt, B. and March, J.G. (1988). ‘Organizational Learning’, Annual Review of Sociology 14, 319–40.

Mills, S. (1986). The New Machine Men, Penguin, Ringwood, Vic.

Morone, J.G. and Woodhouse, E.J. (1985). Averting Catastrophe: Strategies for Regulating Risky Technologies, University of California Press, Berkeley.

Nethercote, J. (1989). ‘Public Service Reform: Commonwealth Experience’, paper presented to conference of Academy of Social Sciences of Australia on the Public Service, University House, Australian National University, 25 February 1989.

Neustadt, R. and Fineberg, H. (1983). The Epidemic that Never Was, Vintage, New York.

Pavlak, T.J. (1988). ‘Structuring Problems for Policy Action’, in Comfort, 22–36.

Perrow, C. (1984). Normal Accidents: Living with High-Risk Technologies, Basic Books, New York.

Pollitt, C. (1990). Managerialism and the Public Services, Blackwell, Oxford.

Prasser, S., Wear, R. and Nethercote, J., eds (1989). Corruption and Reform: The Fitzgerald Vision, University of Queensland Press, St. Lucia.

Ruchelman, L. (1988). ‘Natural Hazard Mitigation and Development: An Exploration of the Roles of the Public and Private Sectors’ in Comfort, 53–66.

Toth, I.M., ed. (1986). The Three Mile Island Accident, American Chemical Society, Washington D.C.

Turner, B.A. (1976). ‘How to Organize Disaster’, Management Today March, 56–7 and 105.

Turner, B.A. (1978). Man-Made Disasters, Wykeham Publications, London.

Turner, B.A., Pidgeon, N., Blockley, D. and Toft, B. (1989). ‘Safety Culture: Its Importance in Future Risk Management’, position paper for the second World Bank Workshop on Safety Control and Risk Management, Karlstad, Sweden, 6–9 November.

Westrum, R. (1987). ‘Management Strategies and Information Failure’ in Wise and Debons, 109–27.

Wildavsky, A. (1985). ‘Trial without Error: Anticipation vs. Resilience as Strategies for Risk Reduction’, CIS Occasional Papers 13, Centre for Independent Studies, Sydney.

Wise, J.A. and Debons, A., eds (1987). Information Systems Failure Analysis, Springer, Berlin.
