Chapter 8

ABSTRACT

Economic growth and development today is limited less by the availability of physical resources than by environmental constraints on their utilization. Increasingly, it is the output rather than the input side of the physical transformation process that is causing concern. Putting the lid on economic growth, however, would create intense conflicts between societies at different levels of development. Yet this need only be so if we continue to think of our planetary resource endowments in purely physical terms.

In Chapter 1 we saw that under certain circumstances information resources can effectively be substituted for physical ones but that whatever substitution occurs is the fruit of a learning process. Orthodox economics cannot convincingly handle such learning since it takes the information environment of economic agents as being exogenously given. It is, of course, this givenness that allows market institutions to be cast in such a favourable light.

Our analysis does not call into question the utility of market institutions. Problems only appear when these are handled centripetally and set in competition with alternative institutional arrangements. The information perspective that we have adopted in this book points to a need to broaden the economic agenda to accommodate non-market institutional forms as complements rather than alternatives to markets.

The new technologies introduced by the information revolution point in the same direction. They are increasing our cognitive capacity to process and transmit data and thus effectively reducing the pressure on us to economize on data processing resources. As a consequence it is becoming ever more feasible to transact efficiently from positions lower down the I-space where relations can once more be personalized. In effect, in pursuit of such personalized transactions, western firms are building up the processes of network capitalism inside their organizations.

Our theorizing in this book has been applied at the level of individual human agents and of groupings of such individuals. We hypothesize that it also applies at the level of processes – physical, biological, and psychological – that go to make up such individuals. If so, then the I-space becomes a tool for exploring the way that nature as a whole economizes on information processing.

8.1: THE LIMITS TO NEOCLASSICAL GROWTH

The edition of The Economist that spanned the two weeks from 21 December 1991 to 3 January 1992 – a bridge between two fateful years in Europe's postwar history – commented that environmentalists

are suspicious of economic growth, because they think it will use up too much of the world's natural resources. This is turning many of them against free trade, because free trade means more growth. They are appalled by the thought that the world's population will double in the coming century, because that will eat up resources even faster. The physical earth is becoming more important to them than the people who live on it.1

Citing the collapse of communism in Eastern Europe and the Soviet Union, the leader went on to assert, however, that ‘there is now no alternative to the free market as the way to organize economic life’. Another article in the same edition helped to clarify for the attentive reader specifically how free markets ‘organize economic life’, claiming that ‘Neoclassical economics is now international orthodoxy’.2

The fixity of physical resources, however, has always sat uncomfortably with the horizontal supply curves required by competitive markets. These promise a level of abundance that the earth is in no position to deliver. Orthodox economists, of course, are in no way discomfited since they can always retreat behind the usual ‘as if’ assumptions: in competitive markets economic agents will behave as if supplies were infinitely elastic. Environmentalists, however, could then reply with some justification that since the supply of physical resources is manifestly not elastic to the extent required by efficient markets, economic agents are behaving as if they were being irrational. Economic rationality, they will argue, at least that of the neoclassical variety, may not offer the surest path to ecological salvation.

The argument in favour of limiting our consumption of natural resources has been around since the time of Malthus.

In the past two decades it has undergone a subtle shift. In 1971, a study carried out by Donella and Dennis Meadows for the Club of Rome3 forecast a gradual exhaustion of certain key resources on which successful industrialization was thought to depend. The limits to economic growth, the authors hypothesized, were set by the finite availability of specific resources. The case was well presented and well received by the general public; for some, however, its point was somewhat blunted by the realization that technological change could always replace physical inputs by more abundant ones as they increased in scarcity. After all possibilities for technical substitution have run their course, the world may indeed one day run out of resources; but not in our lifetime; not even in our children's.

Nevertheless, in the late 1980s new constraints on humanity's rate of economic growth and development appeared, this time not linked to the availability of physical resources but to their transformation into consumable goods and services. Greenhouse gases, which threaten to significantly raise the surface temperatures of our planet, and the depletion of the life-protecting ozone layer in the upper atmosphere by the over-production of CFCs are both the by-products of excessive and ecologically destabilizing output levels rather than of any growing scarcity of inputs. According to the new arguments, then, even if physical resources were as abundant as horizontal supply curves pretend that they are, and whether on account of technological substitution or not, we would still face limits to economic growth based on the fact that their unrestricted conversion into outputs has been poisoning our planet for some time and is continuing to do so at an increasing pace. Environmental Malthusianism is thus even more restrictive than the earlier version: the limits to growth are not set by supply constraints so much as by the irreducible waste generated by these supplies; it is not so much a question of what the earth can provide but of what it can absorb.

Putting a lid on growth or indeed reversing it – as some environmentalists advocate – would have a devastating effect on global social and economic development. At a stroke it would convert the economic process into a vast zero-sum game both within and between different societies. Suddenly we would all find ourselves fighting for a share of a fixed or a shrinking pie, and although demographic growth every year would continue to push up the number of claimants – some demographers forecast a doubling of the world population between 1990 and the year 2020 – the size of the pie, if it is fixed by the new ecological imperative, could no longer be increased without making it inedible for all.

The deep irony is that at the very moment that, with the demise of Marxism, many thought that distributional issues could finally be taken off the economic agenda, they are once more being thrust under our nose. Socialism had made the equitable distribution of wealth primarily an ethical issue; the environmental revolution appears set to convert it increasingly into a practical matter of survival if a Hobbesian ‘war of all against all’ is not to tear the planet apart. Unsurprisingly, many people remain reluctant to abandon the positive-sum game perspective fostered by neoclassical and Keynesian models of economic growth.

The basic message of this book is that over the long term they may not have to. It is only our obsessive preoccupation with physical resources that makes future economic growth appear zero-sum, and neither a neoclassical, nor a Keynesian, nor indeed a Marxist approach to the problem offers any useful change of perspective.

Yet go back for a moment to the production function of Figure 1.8. It describes the basic premise around which the concepts presented in this book have been built. What does it tell us? Simply this: that in producing a given level of output, any constraints on the consumption of physical inputs can be circumvented by an increased consumption of data inputs and that any restrictions imposed on the use of the latter by the limitations of our data processing apparatus – i.e., bounded rationality, etc. – can in turn be overcome through codification and abstraction by converting data into information (see Figure 8.1). The limitations on global output imposed upon us by emerging ecological constraints on physical processes – associated with greenhouse gases, pollution, the depletion of the ozone layer, etc. – are certainly not imaginary; they have to be taken seriously. But they are far from constituting the absolute limitations on growth that a conventional economic perspective makes them out to be. They can be circumvented by our proven ability to make novel and effective use of data and information; that is to say, by our ability to learn.


Figure 8.1 The information production function

Figure 8.1 shows the required learning to be of two kinds: (1) Experiential learning described by a gradual substitution of data for physical resources; this is the kind of integrative and incremental learning described by learning or experience curves in manufacturing and it moves one up the curve AA’ towards the left. (2) Conceptual learning in which information is extracted from data in a discontinuous fashion – a downward movement in the diagram from A’ to B’. The first kind of learning economizes on physical resources through a gradual process of data substitution; the second economizes on the data itself. Combined, they lead us towards the origin in the diagram and thus over time towards an ever-smaller consumption of physical and data inputs for a given level of output.
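The two movements just described can be made concrete with a toy numerical sketch. This is purely illustrative: the function names, rates, and starting quantities are hypothetical, not values given in the text; the point is only that the two kinds of learning compose into a joint movement towards the origin.

```python
# Toy model of the two learning modes of Figure 8.1.
# All numbers are arbitrary illustrations.

def experiential_learning(physical, data, rate=0.1):
    """Movement up the curve AA': gradually substitute data inputs
    for physical inputs, output held constant."""
    substituted = physical * rate
    return physical - substituted, data + substituted

def conceptual_learning(data, compression=0.5):
    """Discontinuous jump from A' to B': codification and abstraction
    extract information from data, shrinking the data volume required."""
    return data * compression

# A hypothetical resource mix for one unit of output.
physical, data = 100.0, 20.0

physical, data = experiential_learning(physical, data)  # gradual substitution
data = conceptual_learning(data)                        # data itself compressed

# Both inputs have fallen relative to (100.0, 20.0): a combined
# movement towards the origin of the diagram.
print(physical, data)
```

Run repeatedly, both operations drive the input pair towards the origin, which is the diagram's rendering of an ever-smaller consumption of physical and data inputs for a given output.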

I believe that Figure 8.1 provides the foundations for a new way of thinking about the role of knowledge in economic processes, one whose constituent elements are described in Chapters 2 to 6 of this book. Let us briefly recapitulate what these are.

8.2: RECAPITULATION

In conformity with the Principle of Least Action, physical systems act to preserve themselves and survive over time, i.e., to fight the forces of entropy that threaten their integrity. Data processing systems – cells, individual human beings, social systems – can effectively reduce their rate of entropy production by minimizing the consumption of physical resources such as space, time, or energy that they require for a given level of activity.4 As we have seen, they do this firstly by increasing their level of data processing activity – i.e., by moving up the transformation curve AA’ of Figure 8.1 and replacing physical inputs by data inputs in their productive activities – and secondly, within the data processing operations themselves, by reducing the volume of data to be handled. Through acts of codification and of abstraction, data processing systems metabolize the data they encounter into information and knowledge, thus keeping themselves decongested and open to the absorption of new experiences. Their metabolism functions all the better where the data inputs are themselves already low in entropy. Properly deployed, codification and abstraction give a data processing system access to low entropy sources; that is, those with a greater initial capacity to do work.5 Such systems are not omnivorous, however, since their ability to process particular kinds of data is partly dictated by the physical substrate of their cognitive apparatus – frogs, for example, do not see the same world as we do,6 and our own has been greatly expanded by the technological amplification of our five senses – and partly conditioned by prior data processing experiences held in memory.

The E-space, presented and discussed in Chapter 2, takes codification and abstraction as two distinct yet interrelated ways of economizing on cognitive effort. Codification economizes on the quantity of data required to form categories; abstraction economizes on the number of categories required to apprehend phenomena. The first gives form to the particular case and the second, by giving generality to a particular form, extends its applicability to new cases. These will now be apprehended by the cognitive apparatus from the outset as more ‘organized’ phenomena and hence as low entropy inputs. The mix of cognitive economies achieved through codification and abstraction by the individual data processor over time, as well as their extent, expresses its learning style.

The codification and abstraction of experience, by converting it into a low entropy input, i.e., knowledge, makes it simultaneously more accessible and useful to others as well as inherently more diffusible. Other things being equal, knowledge that has achieved a highly codified and abstract state will diffuse through a population of data processors faster than knowledge that remains immersed in the data that gave rise to it. The U- and C-spaces of Chapter 3 show why: not only does codified and abstract knowledge enjoy a greater degree of utility, but it is easier to transmit down a communication channel at speed without information losses.

The U- and C-spaces also show, however, that other things are in fact rarely equal and that in addition to the cognitive bias of senders that may place them in a part of the E-space from which effective communication becomes problematic, receivers may either lack the necessary prior investments in codes and concepts required to understand a message, or lack the values that may attune them to the potential relevance of a message – in short, the right orientation is wanting.

Social information processing, therefore, far from being able to assume a frictionless and instantaneous ‘Newtonian’ flow of well-codified abstract data in the information plenum as required by the prevailing economic orthodoxy, is in fact characterized by numerous blockages that express what are often irreducible differences in cognition and values between individuals and groups.

In Chapter 4 we brought individual (E-space) and social information processing (U- and C-spaces) together to create a three-dimensional I-space through which to apprehend the spatiotemporal distribution and flow of data in the data field. By doing this it becomes possible to examine the creation of new knowledge as an emergent property of the field itself. Whether generated through an import of fresh data into the field from outside or through a reconfiguration of data already in the field, it was hypothesized that radically new knowledge would generally flow clockwise in the I-space through a social learning cycle (SLC) made up of six components: scanning, problem-solving, abstraction, diffusion, absorption and impacting. The cycle might be described as an attractor that could be either shaped or dissolved by the pattern of institutional structures that channel the flow of data in the I-space. It has points both of minimum entropy at which new knowledge structures are most articulated and coherent, and of maximum entropy at which such structures are absorbed and digested by a population of data processors, getting dissolved in the process.
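The cyclical character of the SLC can be sketched in a few lines of code. The sketch below is an illustrative convenience only: the phase ordering comes from the text, but the function and its stepping mechanism are hypothetical.

```python
# Minimal sketch of the social learning cycle (SLC) as a cyclic
# sequence of six phases; the itertools-based stepping is illustrative.
from itertools import cycle

SLC_PHASES = ["scanning", "problem-solving", "abstraction",
              "diffusion", "absorption", "impacting"]

def slc_walk(start="scanning", steps=7):
    """Return `steps` successive SLC phases beginning at `start`,
    wrapping around so that impacting feeds back into scanning."""
    i = SLC_PHASES.index(start)
    phases = cycle(SLC_PHASES[i:] + SLC_PHASES[:i])
    return [next(phases) for _ in range(steps)]

# Seven steps: one full circuit plus re-entry into scanning,
# mirroring the clockwise flow through the I-space.
print(slc_walk())
```

The wrap-around is the essential feature: the cycle has no terminal phase, which is what allows it to act as an attractor rather than a one-way progression.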

The SLC is a creative destroyer: in order to accommodate new knowledge that is epistemologically incompatible or incommensurate with what is currently on offer, it often dislodges existing cognitive investments distributed throughout the I-space, eroding or weakening the institutional structures that house them as it does so.

The institutional structures through which social information processing takes place were presented and discussed in Chapter 5. They are themselves partly products and partly causes of the data field. Although in practice many institutional possibilities and combinations of possibilities are discernible we distinguished four ‘ideal types’:7 markets, bureaucracies, clans, and fiefs. Markets and bureaucracies deal with well-codified and abstract data, differing only with respect to its diffusion – markets favour information sharing, bureaucracies not – whereas fiefs and clans work with uncodified and concrete, contextual data, with fiefs exhibiting a centralizing bias that is much weaker in clans.
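The mapping of the four ideal types onto the codification and diffusion dimensions can be expressed as a simple classification. In the sketch below the 0.5 thresholds and the function name are arbitrary assumptions introduced for illustration; the text treats these dimensions as continua, not binary switches.

```python
# Hedged sketch: placing the four ideal types of Chapter 5 in the
# codification/diffusion plane of the I-space. Threshold values are
# illustrative, not drawn from the text.

def institutional_type(codification: float, diffusion: float) -> str:
    """Map a (codification, diffusion) position, each taken in [0, 1],
    to one of the four ideal types."""
    if codification >= 0.5:
        # Well-codified, abstract data: markets share it, bureaucracies do not.
        return "market" if diffusion >= 0.5 else "bureaucracy"
    # Uncodified, concrete, contextual data: fiefs centralize, clans share.
    return "clan" if diffusion >= 0.5 else "fief"

assert institutional_type(0.9, 0.9) == "market"
assert institutional_type(0.9, 0.1) == "bureaucracy"
assert institutional_type(0.1, 0.9) == "clan"
assert institutional_type(0.1, 0.1) == "fief"
```

In practice, of course, real institutions occupy intermediate positions and combine features of several types, which is precisely why the text insists on a mix of institutional forms in any social system of size.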

All social systems of any size will require a mix of these institutional types. The values and beliefs that reside in a system, however, when combined with the distribution of power within it, will give prominence to certain institutional forms over others and confer on the SLC that activates the system a distinctive signature. In Chapter 5 we also took social interests as important determinants of institutional choice. Effective governance requires a good fit between the generality of social interests pursued – i.e., their degree of abstraction – and the location in the I-space of the institutional structures selected to serve them.

It might be said that the more abstract and universal the interests to be served by a given set of institutions, the larger the population of data processors that can potentially be brought under a unified system of governance, always providing that the institutions so activated have actually been located in the I-space and integrated with each other in a way that favours unified governance. Where this is not the case, the SLC can become blocked or stunted and certain groups will evolve outside the governance structure and in opposition to it.

In Chapter 6, the structuring and sharing of information were taken to be the defining characteristics of a cultural process. Codification, abstraction, and diffusion underlie the production and exchange of information, so that a political economy of information and a theory of culture become in effect indistinguishable from each other.

Cultural systems act centripetally or centrifugally in the data field – i.e., their constituent institutions compete or collaborate from different locations in the I-space with varying degrees of strength. A culturally inspired configuration of institutions provides data processors with situated responses as well as itself making a contribution to the interplay of forces and flows in the data field. Yet, by making the ubiquity of well-codified abstract information a prerequisite for efficient economic transactions, neoclassical economics either confines the economic problems of society exclusively to the market region of the I-space, thus relegating the study of other regions to sociologists or anthropologists, or, if it does explore other regions, it is mainly with a view to finding ways of shifting any plausibly economic transactions that inhabit them into the market region. The efficiency available in bureaucracies, fiefs, or clans is then defined as suboptimal or second best with respect to economic outcomes available in markets, outcomes that, as we have seen, implicitly depend on the nature of the information environment that regulates transactions. How the definition of economic efficiency itself might vary with changes in the information environment is not addressed. Thus whereas a theory of culture makes economic explanation contingent on the information environment, orthodox economic theory, by parametrizing the information environment at the outset, effectively converts a theory of culture into an alternative to economic explanation.

Parametrizing transactionally relevant information in this way has been one of the main reasons that the neoclassical orthodoxy has been unable to cope with economic and institutional change.8

8.3: N- AND S-LEARNING

What consequences flow from the occultation of information by orthodox economics? Firstly, by taking the information environment as exogenously given, it takes market exchange by assumption as being the only transactionally efficient institutional order on offer. Other forms of exchange might still have to be tolerated but only in the absence of a viable market option. If, it is argued, the economic problem of society is taken to be the allocation of scarce resources,9 then let us not add to society's problems by turning information, the vital lubricant of the allocation process, into a resource that is itself subject to scarcities. Dealing with physical scarcities is challenge enough as it is.

Unsurprisingly, given the nature of the basic assumptions, efficient markets usually turn out to be the most desirable of locations in the I-space. And since casual empiricism indicates that industrialized countries have more of them than do pre-industrial ones – i.e., stock exchanges, commodity exchanges, bond markets, etc. – it seems reasonable to assume that efficient markets are what development effectively requires. Taken as it stands, the belief is uncontroversial: in many countries, adding efficient markets to the panoply of existing institutions is unlikely to do much harm. If intelligently followed it is even likely to do some good. It is when the plea for efficient markets goes centripetal, when markets are viewed as adequate alternatives to most if not all existing institutions, that the problems start. All non-market solutions become instances of ‘market failure’, a situation to be tolerated rather than sought. Williamson's quip that ‘in the beginning there were markets’10 says it all.

The belief that development, whether short term or long term, is about ever more codification, abstraction, and diffusion of information – i.e., a conscious attempt to move away from the origin in the I-space – is what I have labelled in Chapter 7 the N-learning hypothesis. It finds its way into theories of short-term economic behaviour such as the rational expectations hypothesis11 or the more evolutionary stage-models of economic growth, whether these move us towards a bureaucratic order but no further (as with Marx's or Durkheim's) or beyond it towards a market one (as with, say, Rostow).12 In the first case we move towards greater codification and abstraction but not necessarily towards greater diffusion. In the second we progress along all three dimensions of the I-space, thus completing half an SLC. In the latter case, the emerging market order becomes a focus for teleonomic explanations of development processes13 and implicitly underpins all beliefs in the convergence hypothesis.

N-learning was contrasted in Chapter 7 with S-learning where development, instead of constituting an adaptive response to the irresistible pull of centripetal forces emanating from the market region of the I-space – a transactional black hole that vacuum cleans the I-space of all other forms of exchange – expresses a steady densification of transactions and transactional structures throughout the I-space. Development in an S-learning regime thus becomes a product of centrifugalism and of the SLCs that make it possible. It requires a continuing evolution and renewal of most existing structures rather than their elimination.

N-learning and the economic orthodoxy to which it gave rise have proved to be a powerful paradigm and one that even in the face of a rising tide of criticism has proved hard to dislodge.14 Its detractors account for its continued incumbency by the lack of a convincing alternative that might compete with it; after all, they argue, it takes a theory to beat a theory.15 Like a Ptolemaic model barnacled with epicycles, therefore, it will continue to serve until something better comes along.

The absence of a competing theory, however, is not the whole story. The neoclassical model's staying power is in no small part due to the fact that it effectively remains a convincing description of efficient market processes. The dissatisfaction it gives rise to is not due to its failure to adequately account for market behaviour: it is traceable to the implicit assumption that in accounting for market behaviour it has put forward a satisfactory explanation of the economic problem of society. Non-market forms of exchange are then either treated as ‘second best’ solutions – the ‘market failure’ approach – or they become the sociologist's or the anthropologist's problem, not the economist's.

It is only if the economic agenda can be broadened to accommodate transactional orders other than markets – i.e., fiefs, clans, bureaucracies – as economically worthy in their own right that the explanatory scope of the neoclassical model will be judged insufficient. Not wrong; just insufficient.

A start has been made in the direction of agenda broadening by the new institutional economics, and the award of the Nobel Memorial Prize for Economics in 1991 to Ronald Coase, one of the founders of the new field, is a pointer to things to come. Yet there is still a long way to go. Institutional economics posits internal organization as an alternative to markets for the governance of economic transactions but does so largely within a neoclassical equilibrium framework. Furthermore, it rests its case on the scarcity of information, not on its form. It is a theory of information asymmetry – and hence of diffusion – which frequently alludes to the problems of codification and abstraction but does not articulate them theoretically. For this reason, when Ouchi talks of clans, for example, he places this transactional form along a one-dimensional continuum that has markets at one end and hierarchies at the other. In effect, Williamson and Ouchi16 each invoke the diffusion dimension of the I-space in their analysis, but the diffusion dimension alone. It offers them a theory of information exchange which they are unable to relate to a theory of information production. The transactional contingencies that we have described in this book and which give rise to different institutional orders can only be fully apprehended if they rest on an integrated and articulate theory of information production and exchange. We have traced the contours of what such a theory might look like in these pages but much work remains to be done to give it formal coherence.

If and when a new theory emerges, N-learning will not thereby have been confuted; neoclassical economics will continue to offer us the most convincing account we have to date of market-driven behaviour. But it will have to be incorporated into a broader explanation of economic processes, one that is free to roam throughout the I-space, unbeholden to the market region. To the extent that theories with greater explanatory power are to be preferred to those with less, one that convincingly endogenizes information and at the same time extends the reach of economic theorizing to forms of exchange that had been left out of account by the neoclassical orthodoxy inevitably threatens the incumbency of the latter.

So what might the I-space explain that the neoclassical model does not? Obviously any answer given here has a highly provisional character since refutable hypotheses have yet to be derived from the framework and tested. But we can at least try to motivate the enterprise.

In three areas, the I-space promises a richer and more realistic account of economic exchange than neoclassical models:

1. By converting the data of economic transactions into variables operating along the dimensions of codification, abstraction, and diffusion, the I-space endogenizes information in a way that restores a certain realism to the behaviour of the economic agent. He/she is still an economizer but now aims to save on data processing as well as on physical effort since he/she is constrained to do so both by personal bounded rationality and by limits on the computability of the problems encountered.17 This much, of course, was already known. What the I-space highlights is how far the agent's strategies for economizing on data processing reflect the contingencies of his/her information environment and the institutional arrangements available.

2. The convenient equating of transactions external to organizations such as firms with market governance and transactions internal to them with hierarchical or bureaucratic governance disappears in the I-space. The challenge of effective governance is to define and serve the interests of a given transactional population and to align its members behind a particular interpretation of such interests. Yet these interests can in theory be served by any of the transactional forms identified in the I-space, whether singly or in combination. Just as there is competition and collaboration between different institutional forms in the I-space, so there is between different levels of governance, each being a function of the generality of the interests to be served and the size of the population that can be aligned behind it.18 The complexities of economic agency cannot be wished away through the reductionist strategy of first converting a diversity of transactional populations into a unitary economic actor – i.e., the firm or the individual – and then deciding that the actor's data processing is strictly ‘black box’ and ceases to be the economist's problem where it departs from assumptions of perfect rationality.19

3. By taking the level of governance – i.e., the generality of the interests defined and the size of the transactional population whose interests are to be served – as a variable rather than as an institutional given we effectively activate a competition between the particularistic and universalistic values that serve a given set of interests as well as between institutional configurations that might express such values. Many models of development assume that in such a contest universalistic values will come to predominate because procedural rationality has the edge over substantive rationality in complex situations.20 Japan's own experience of modernization challenges this view, as does the particularistic brand of network capitalism emerging in China and discussed in the previous chapter. Universalistic values, to be sure, have emerged in both countries. But do they predominate?21

It is the flow of data along the three dimensions of the I-space which, by activating the SLC and promoting S-learning, makes it difficult to durably associate a particular level of governance with a particular institution or set of institutions. S-learning is creative destruction at work. It is the essence of capitalism as described by Schumpeter and as understood by Marx. The neoclassical orthodoxy, by equating capitalism with markets – a single staging post along the SLC – ignores its evolutionary character. Dynamic competition brings about irreversible transformations which are not captured by equilibrium models.22 Indeed, Fernand Braudel, studying capitalism from a historical perspective, presents it as the antithesis of efficient markets. His whole thesis is that capitalism cannot be reduced to a market order tout court, that it is an ancient, multifaceted phenomenon that effectively coexisted with various modes of production, sometimes working in harmony with them and at others not.23 Pelikan describes capitalism as a class of regimes, taking a regime to be a set of institutional constraints on the decision-making of agents.24 Braudel cites the case of Imperial China to show that the existence of markets may be a necessary but hardly a sufficient condition for the emergence of capitalism – a thesis that the current Chinese leadership would doubtless warmly applaud. Markets in China have always been strictly local and small scale; above them the state has kept a close watch. Capitalism, the ability of individuals to exploit new knowledge in ways that transcend the limitations of space and time, never had much chance. If it has a chance today, it will be building, as we saw in Chapter 7, on markets that continue to approximate local networks, albeit now augmented by the possibilities of information technology, rather than on markets of the neoclassical variety. Such network capitalism is centred on the clan region of the I-space.

Economic evolution, like other forms, is driven by the generation of novelty and variety and then by selection from it. Economic models that focus centripetally on one transactional form to the exclusion of others – bureaucracies in the case of mercantilism and Marxism-Leninism, markets in the neoclassical case – perform a premature selection on a necessary institutional variety that is never given the opportunity to emerge. Our case study of the preceding chapter indicated that premature selection may have important practical consequences. The single-minded pursuit of a neoclassical market order in Eastern Europe and the ex-Soviet Union, for example, by stifling the institutional variety necessary for S-learning to take root, may turn out to inhibit rather than to stimulate capitalist processes. And paradoxically, reforming China, if it succeeds in effectively accommodating embryonic markets in a culture that still remains strongly invested in clans and fiefs, may get closer to successfully operating an SLC than post-communist Europe. The Marxist-Leninist proclamations of the ageing Chinese leadership notwithstanding, China will harbour the institutional variety necessary for genuine capitalist evolution. At the time of writing, neither China nor Vietnam is making explicit institutional choices. But by allowing a variety of new and old institutional forms to coexist – albeit in a muddled sort of way – they are also allowing selection mechanisms to work their magic over a wider set of options than a more centripetal approach would allow. In Vietnam, a totally moribund state-owned sector is being allowed to wither away as market processes are taking root. In China, state-owned firms are for the most part in intensive care and many are gradually hollowing themselves out deliberately by contracting out an increasing proportion of their activities to collective or private firms.
Markets in the latter case are not being rammed down the throats of state-owned firms through a mechanical process of privatization; they are coming in, so to speak, by invitation. And nowhere faster than in the southern provinces of Guangdong and Fujian, today almost beyond the reach of a conservative central bureaucracy. The visit of Deng Xiaoping to these provinces in January 1992, and in particular to the quasi-capitalistic enclave of Shenzhen bordering Hong Kong, was a nice way of letting it be known that some ideological neglect can be benign.25

8.4: A THEORY FOR THE AGE OF INFORMATION

Beyond opening up new theoretical perspectives on certain established economic problems, the I-space also throws light on one of the most important phenomena of our time, one that has as yet received no satisfactory treatment by the neoclassical orthodoxy: the information revolution.

At first sight the claim is hardly surprising since the I-space purports to be a framework for describing the production and exchange of information. If it cannot handle the information revolution, it might be asked, what can it handle? What might surprise, however, given some of the basic characteristics of this revolution, is the type of prediction derivable from the framework. Let us examine one or two of these by way of illustration.

Two simple features underpin what Carlota Perez has labelled the new ‘techno-economic paradigm’:26

1 A massive increase in the volume of data that can be processed by technological means per unit of time.

2 A massive increase in the volume of data that can be transmitted by technological means per unit of time.

The physical evidence for the information revolution can be seen everywhere: from the ubiquitous PCs that populate the desks of every office to the myriad electronic games on sale for six-year-olds and older in western department stores.

How do we register the effects of the information revolution in the I-space? By a shift of the codification-abstraction-diffusion curve towards the right in the space. In the language of economists we can think of this shift as an extension of a given population's transaction possibility frontier. It has two consequences, shown in Figure 8.2.

The first is that for a given level of codification and abstraction a larger diffusion population can be reached in a given unit of time. This is shown as line A in the figure. The consequence of this interpretation is that fewer relays are necessary in the data transmission process to eliminate noise and ambiguity from the system but that those that remain need to share a common set of codes and abstract concepts to be effective.

The second consequence derives from the fact that increases in data processing and transmission capacities reduce cognitive pressures to economize. With no extra effort the production and exchange of information between agents can thus occur at a lower level of codification and abstraction than hitherto: videoconferencing replaces the written report, and the hand-scribbled fax replaces the laconic telex. The new computing and communication technologies are avid consumers of data and they can digest great quantities of it in an unrefined form.27 Thus for a specified diffusion population with a given degree of spatial scattering, communication will be increasingly repersonalized. Many of the qualities of face-to-face exchange that had been lost through codification and abstraction will now be restored independently of spatial distance. The idea is depicted by line B in the figure. It has in fact already been given a name: high tech – high touch.28


Figure 8.2 Shifting the codification-abstraction-diffusion curve to the right in the I-space

In economic organizations both consequences come together: fewer communication relays mean fewer people between the top and the base to receive, interpret, reformulate, and retransmit messages – i.e., fewer middle managers and much flatter yet reachable hierarchies; but they also mean a greater capacity at the base to absorb and master at an implicit level the abstract codes used at the top and hence a much better educated labour force.29 Simplifying somewhat, we might say that if the organizational revolution which brought forth deep managerial hierarchies in the first seventy years of the twentieth century aimed at de-skilling the workforce through a systematic codification of organizational practice and a depersonalization of organizational relationships, a full exploitation of the information revolution will require a re-skilling of the workforce in order to manage deep SLCs through S-learning strategies.

The cultural implications of this transformation are only just now coming to be understood. It was the advent of the railways and the telegraph that created the conditions for the development of large managerial hierarchies.30 These two communication technologies facilitated moves up the codification scale and instituted bureaucratic and market transactions on a scale that had been hitherto unimaginable. The large-scale organization that emerged from this process generated its own distinctive technological trajectory since certain types of production required capital and managerial inputs that were quite beyond the reach of the small or even medium-sized firm.

The PC, the fax, and the videorecorder, by contrast, are technologies which, on account of their data processing and transmission capacities, favour moves back down the codification scale and towards more concrete forms of exchange – into the region of the I-space where fief and clan transactions predominate. The production systems that they are associated with – flexible manufacturing, CIM, etc. – are correspondingly less concerned with standardization (codification and abstraction) and mass production (diffusion) and more concerned with flexibility and small-scale production.

Firms which today find themselves overinvested in the more codified regions of the I-space are paying a price. Neither bureaucracies nor markets can deal comfortably with discontinuous change. They reduce rather than absorb uncertainty by trying to bracket it and convert it into calculable, i.e., codifiable, risk. Clans and fiefs on the other hand, operating lower down the I-space, confront uncertainty on its own terms, absorbing it through social relationships that promote trust and commitment rather than a narrow adherence to rules.

Correcting for this overinvestment does not require organizations to jettison the gains they have secured for themselves over the past hundred years or so in the upper region of the I-space. It calls on them rather to expand their overall transactional capacities through the internal development of what we have called a centrifugal culture, one in which personal and impersonal forms of exchange can mutually invigorate each other. Many large firms in quest of entrepreneurial renewal in the lower part of the I-space are today exploring clan and fief forms of organizational practice through networking and ‘intrapreneurial’ practices.31 They are in effect building network capitalism from the inside and, in contrast to the Chinese case, they are approaching it from the market rather than from the clan region.

Our earlier discussion of convergence cautions against viewing network capitalism as an ‘attractor’, a stable institutional order towards which both industrialized western firms and Chinese family firms will be propelled by the imperatives of information technology. As we saw in the previous chapter, it is at best one transactional option among others in modern capitalism's ever-evolving institutional repertoire.

8.5: EXTENSIONS

One final thought before bringing this concluding chapter to a close.

The basic proposition that has guided the theory outlined in this book is that nature economizes and that if we, as data processing animals, are found to economize it is because we are part of nature.32 In spite of parallels that might be drawn, therefore, between our own attempts at theorizing and, say, Giddens's theory of structuration or Habermas's theory of communicative action,33 the I-space aims to capture territory that extends beyond the social. It would be reasonable, therefore, to probe its claim to generality by asking to what extent it has application to data processing phenomena beyond the human sphere. Does the SLC, for example, have its counterpart in either biological or purely physical processes? Do the transactional structures of markets, bureaucracies, clans, and fiefs appear in some guise or other outside human society?

To answer such questions satisfactorily would require much research and probably several other books. We can, however, map out some future avenues of enquiry by setting down markers.

A point to note is that we have been at pains to use the term ‘data processing agents’. Although the term can apply to individual human beings, in economics it also applies to firms and in some theories of international relations it is made to apply to nation-states.34 The assumption of methodological individualism in economics, however, is that these uses of the term are but a linguistic convenience since the situated individual remains ‘where the action is’. Methodological individualism denies the explanatory power of social facts: institutions and all forms of organization above the level of the individual emerge as an unintended consequence of human action.35 It also denies, by a timely invocation of the rationality assumption, the relevance of forms of biological and physical organization below the level of the individual. The rationality invoked does not have to be of the synoptic kind required by the neoclassical approach – a kind that amounts to a denial of historicity and of the unique and irreversible role of individual acts. Yet even Hayek's more situated rationality ignores the stochastic effects of infra-human processes on human action.36

A second restriction placed on the use of the term ‘data processing agent’ in economics complements the first and originates in the discipline's perception of itself as a social science. Unlike the natural sciences, which deal with the objective interaction of inert matter and energy, the social sciences deal with meaning and interpretation. One cannot infer where an individual is heading simply by looking back at where he has come from and extrapolating forward, as in a mechanical explanation: one has to interpret his intentions. The constant temptation for neoclassical economics to forget the problem of meaning in its pursuit of a Newton-inspired market equilibrium has made it a target of criticism both from within the discipline and from outside it.

Economic agents are ‘intentional systems’37 capable of representing the world to themselves with varying degrees of adequacy. Since the thinking subject can be de-coupled from the immediacy of his environment, his thinking has the possibility of becoming more self-contained; he consequently acquires a measure of autonomy with respect to his environment.38 Yet the thinking subject not only represents the actual world to himself, but also, through his imagination, possible worlds. Their elicitation, however, may owe more to the eruption of internally driven mental or biological processes than to any externally imposed situational rationality.39 Human creativity, for example, may thus originate in singularities which are rooted in the physical rather than the social and which for that reason escape the restrictions of methodological individualism.

If we are to use the I-space outside the human sphere, we may be required to dissolve the boundary that separates the physical from the social. At a methodological level, neoclassical economics did so a century ago in order to allow some of the models and metaphors of the natural sciences to encroach upon the realm of the social sciences. We might wish to go further, but in doing so we must realize that this may allow meaning, hitherto a distinctive – indeed a defining – characteristic of the human realm, to slip its social mooring and wander about through the physical world. Thus not only would physical and biological processes shape the evolution of meaning, but the latter would also be able to act upon the former.

Let us be cautious here. It is easy to fall prey to notions of cosmic consciousness which would impart to our evolutionary production function an unsustainable and quite unnecessary teleological quality. Meaning as used here is nothing other than the restoration of context to data processing, the application of sensitivity to a total situation. In biological processes, for example, sensitivity to context gives rise to the idea that environmental pressures compel selection and that the search processes are directed, or orthogenetic.40

In Chapter 1 we argued that much of modern molecular biology is built upon the concept of data processing and that so is a good part of modern physics.41 But are biological and physical data processors capable of producing and absorbing meanings as well as data? The question is a controversial one since it seems to impute a capacity for intelligence to non-human agents, an idea that today creates fewer problems in biology than it does in physics, where it remains associated with the animism of a Whitehead or a Waddington. Recall from Chapter 3 that data was considered meaningful if it brought about changes in a data processing agent's overall disposition to respond – that is to say, durable changes in its internal states. In this sense even machines might be considered primitive manufacturers and consumers of meaning in so far as they exhibit a capacity to learn. If we choose to deny this approach to meaning then we are making it, almost by definition, a property of consciousness and of consciousness alone.

The ability of inanimate data processors to cope with context has become a complex and controversial issue within the artificial intelligence community.42 Without getting involved – the end of a long book hardly seems the place to do so – we leave the reader with the thought that if nature acts informationally as well as energetically, the point of departure that we gave ourselves in Chapter 1,43 then might it not be our incurable anthropocentrism that makes us persist in believing that we are the sole and privileged recipients of whatever meaning naturally available information imparts? Yet if data processors other than man are to be found in nature, and if these are indeed discovered to be producers and exchangers of meaning as well as of raw data, then we hypothesize that the theoretical perspectives offered by the I-space, and, albeit suitably modified, by the structures and processes that it activates, will apply. The codification, abstraction, and diffusion of data will thus turn out to be fundamental expressions of the way that nature, as well as man within it, chooses to economize.
