Chapter 27
Global Risks: Cause and Consequence of the New Interactions Between Science, Technology, and Society

Jean-Yves Heurtebise

Introduction: The “Anthropogenicity” of Global Social-Environmental Risks

This paper aims to understand in what sense Global Risks can be seen as both the cause and the consequence of the new interactions between science, technology, and society that shape contemporary societies.1 Let us consider Climate Change.

Climate Change is widely recognized today as being caused by human activities: “Emissions from motor vehicles, power plants, deforestation, and other human sources are warming the Earth and damaging ecosystems and human well-being” (Gonzales 2010). Most scientists agree that Climate Change is among the negative ecological consequences of the diffusion across the whole planet of the patterns of development that emerged with the industrial revolution. The industrial revolution itself was rooted in the scientific revolution of the seventeenth century:2 the application of the principles of mechanics to industrial processes in capitalistic economies led to the invention of engines that multiplied human work capacity. The engines producing this surplus of work required fossil fuels (coal and oil), whose very consumption contributed to the emission of the greenhouse gases from which Climate Change mostly derives.3

The industrialization process changed the whole social landscape. By attracting people from the countryside to cities, where factories were located, it induced a new trend toward urbanization. Urbanization, by gathering large numbers of people in the same place, created new problems of hygiene and sanitation, whose resolution induced the medical revolution, which in turn produced an unprecedented demographic expansion. This demographic expansion then sustained the global trend toward world industrialization by providing both the workforce needed for production and the subjects needed for consumption.

Thus Climate Change illustrates that Global Risks come, in the first place, as a consequence of new interactions between science (experimental physics), technology (thermodynamic and cybernetic engines), and society (urbanization, medicine, and capitalism). However, Climate Change also demonstrates today that the Global Risks it induced are among the causes of the establishment of new interactions between science, technology, and society: in order to assess the gravity as well as the propensity of Climate Change, substantial progress will be needed in both the ecological and meteorological sciences;4,5 to mitigate Climate Change, innovation to improve the efficiency of renewable energies and the reliability of geo-engineering tools is required6 (Robock 2008); finally, new taxes on carbon-emitting industries and new financial incentives for eco-friendly devices must be implemented,7 and a new “behavioral awareness” should be developed so that world citizens adopt new ways of living that save energy and resources.8,9

Saying that Global Risks are at the same time cause and consequence of new interactions between science, technology, and society means that human beings are both their actual producers and their potential resolvers. Risk is indeed an anthropocentric concept. As Ortwin Renn says in the introduction to his co-edited book Global Risk Governance: “Risks are created and selected by human actors” (Renn 2008). However, stating that risks are anthropocentric in order to emphasize their “anthropogenicity” does not amount to saying that risks are subjective. Risks and the perception of risks are not the same. What we consider to be risky, for personal, social, cultural, economic, and/or scientific reasons, does frame our assessment and management of risks;10 but it is equally obvious that the absence of the perception of risks does not equate to the absence of risks. Thus, if risks are created by human beings, this does not imply that they are simply a social construct; it means that they are first and foremost a technological product.

This paper will start by discussing Ulrich Beck’s and Mary Douglas’s conceptions of risk and will then move on to redefining risk from a naturalistic perspective (neither realist nor constructivist). On the basis of this redefinition, it will analyze the interactions between science, technology, and society and, finally, the political consequences for risk management.

Ulrich Beck’s and Mary Douglas’s Understanding of Risks: A Critical Review

One of the first academic accounts of the importance of risks in contemporary societies comes from the works of Ulrich Beck (influenced by the German school of sociology), alongside those of Mary Douglas (influenced by the American school of psychology).

Ulrich Beck’s concept of Risk Society (Risikogesellschaft) stems from a historical reflection on modernity and “post-modernity” or “second modernity.” Modernity means two things in Beck’s interpretation: Enlightenment and Industrialization. Enlightenment implies a renovation of humanity’s cultural agenda defined in two points: the deconstruction of regional traditional beliefs and the reconstruction of human hopes on the rational basis of techno-scientific progress. Industrialization applied the Enlightenment’s theoretical agenda: peasants’ agrarian life, tuned to the natural cycle of the seasons, was replaced by workers’ existence, shaped by the continuous rhythm of industrial production.

Second modernity emerged as an “unintended consequence” of modernity; it denotes “a social transformation within modernity, in the course of which people will be set free from the social forms of industrial society” (Beck 1992a: 87). The industrial economy ruled by patriarchal stratification is overturned by the service economy shaped by individual innovations. Politically speaking, the struggle for (majority) political rights is replaced by the fight for (minority) civil rights. But the most important change concerns risk insurance. During the period of first modernity, industrial risks were covered by the state and by private companies as a collective compensation for workers’ labor. However, with scientific-technological progress giving humanity a capacity of action exceeding its capacity of control and prediction, global risks emerged as uninsurable: “The residual risk society has become an uninsured society, with protection paradoxically diminishing as the danger grows” (Beck 1992b). Thus the struggle for sharing industrial goods and social benefits (positives) gives way, in the age of second modernity, to the debate about allocating ecological risks and financial “bads” (negatives).11

In Mary Douglas’s interpretation, risks are definitely a social-cultural construction: “There could be no risks, illnesses, dangers, or any reality, knowledge of which is not construed” (Douglas 1997). To illustrate this point, in her book Purity and Danger, she draws a parallel between taboo and pollution: “pollution is a type of danger which is not likely to occur except where the lines of structure, cosmic or social, are clearly defined” (Douglas 1984: 117). Both taboo and pollution express a symbolic breach of social-natural laws: they are contaminating powers, conveying ideas of fault and abnormality. The difference between them is that taboo is meant to protect the community, while risks are meant to protect individuals (Douglas 1992: 28). But, precisely because they aim at protecting individuals, risk policies pave the way for their own impossible resolution: “Risk is … immeasurable and its unacceptability is unlimited … There can never be sufficient holiness and safety” (Douglas and Wildavsky 1983: 184). For Mary Douglas, the more society agrees to implement policies to deal with risks, the more the “sectarian groups” of political ecologists and environmental activists will raise new demands. Her view expresses a strong criticism of the ethical framework of risk analysis, which is “using nature in the old primitive way: impurities in the physical world or chemical carcinogens in the body are directly traced to immoral forms of economic and political power” (Douglas and Wildavsky 1983: 47).

Douglas’s analysis is based on a typology of social behaviors, named the “grid-group typology” (“group” denoting the degree of social participation, and “grid” the degree of social obedience), expressed by four profiles: the individualist (concerned with personal freedom), the isolated (who primarily cares about not being involved in anything whatsoever), the ‘hierarchist’ (believing that authority stands for truth), and the sectarian (thinking that truth alone holds real authority) (David 2005: 41). As for environmental policies, the individualist believes in regulation driven by the market economy; the isolated thinks that things are too complicated to get involved with; the hierarchist affirms that administrative regulation supported by rational decisions will provide proper solutions; the sectarian assumes that changing the whole mindset of the dominant group is urgently needed.

It is common to oppose Mary Douglas’s “subjective” conception of risks, framed by psycho-social deviant behaviors, to Ulrich Beck’s “objective” interpretation of risk, rooted in “techno-economic development.”12 However, this opposition is largely overstated. Beck frequently stresses that there is no increase in actual risks but rather an increase in “risk consciousness.”13 In “Risk Society Revisited” he clearly states: “it is cultural perception and definition that constitutes risks. ‘Risks’ and ‘the (public) definition of risks’ are one and the same” (Beck 2000).

Interestingly enough, critics of both Douglas and Beck stress that their constructivist or culturalist understanding of risks neglects the sociological underpinnings of risks. Robbins, Hintz, and Moore (2010: 88) rightly pointed out the limits of Douglas’s psycho-social typological definition of risks: “Confirming the actual real-world existence of these ideal types of groups has proven difficult.” Similarly, Engel and Strasser (1998) disagree with Beck’s hypothesis that risk conflicts have replaced class conflicts and with his idea that “risks display an equalizing effect within their scope and among those affected by them”: risk conflicts are not replacing class conflicts but are a new form of their complex materialization.

On this account, Avner de-Shalit showed clearly that the distribution of bads (pollution and toxics) is as unequal as the distribution of goods (income and wealth) and that sociological and ecological unevenness mutually reinforce each other.14 In “risk societies,” social inequalities have become “embodied” in the food people from different ethnic groups and social classes eat, in the water to which they have access, in the air they breathe, and in the places where they work and dwell, so that the poor and minorities are not only more exposed to risks but also have fewer means to avoid them (Athanasiou 1996). Defining risks as a social construct is correct insofar as this “construction” is not simply a matter of mental perception but rather of material production: the point is not that different people see risks differently but that different people are affected by them unevenly.

Redefining Risks from an Epistemological Point of View

It is thus necessary to propose a redefinition of the notion of risk. According to the editors of Global Environmental Risk, “Global environmental risk refers to threat (to human beings and what they value) resulting from human-induced environmental change, either systemic or cumulative, on the global scale” (Kasperson, Kasperson, and Dow 2001). This definition can be misleading because we need to distinguish between risk and hazard: hazard is “an object, condition, or process that threatens individuals and society in terms of production or reproduction,” while risk is “the known (or estimated) probability that a hazard-related decision will have a negative consequence” (Robbins et al. 2010: 81). While this latter definition correctly differentiates hazard from risk, it still lacks an important element: a risk is the assessment of an event that is considered to be hazardous insofar as it endangers human or social life.

Eugene A. Rosa rightly defined risk as “a situation or an event where something of human value (including humans themselves) is at stake and where the outcome is uncertain” (Rosa 2003). Rosa’s definition stresses the fact that the notion of risk is anthropocentric: an earthquake occurring in the depths of the Pacific Ocean is a natural event that may interest the geologist; it does not constitute a risk insofar as it does not constitute a threat to human society. The definition proposed by Ortwin Renn emphasizes this anthropocentric nature of risk: “risk is an uncertain consequence of an event or an activity with respect to something that humans value” (Renn 2008). But the limit of these two definitions is that the notion of uncertainty and the notion of risk are not clearly distinguished. As Frank H. Knight noticed, uncertainty and risk are different concepts that require different kinds of probabilistic tools.15 Risk refers to the probability of the occurrence of the (hazardous) event (objective or frequency probability), while uncertainty refers to the quantity of knowledge we have access to for supporting our belief in its happening (subjective or Bayesian probability). As Terje Aven rightly noticed, it is important not to confuse the notion of subjective uncertainty defined by probabilities with the constructivist interpretation of “perceived risk” illustrated by Ulrich Beck or by Paul Slovic:16

subjective probabilities and related risk assignments are not the same as risk perception. The main difference is that risk perception is based on personal beliefs, affects and experiences irrespective of their validity. Subjective probabilities used in risk assessments are representations of individual and collective uncertainty assessments based on available statistical data, direct experience, models and theoretical approximations which all need justification that must be plausible to others.

(Aven 2010: 87)

The construction of a risk assessment definitely also contains a part that comes from the cultural and sociological environment of the observer.17 Risk is defined by our perception of it but cannot be equated to it (unless we think, with Berkeley, that esse est percipi). The dual nature of “risks,” material and socio-psychological, is something that should be acknowledged: “environmental risks are viewed as partly biophysical threats of harm to people and partly as socially constructed, a product of culture and social experience” (Kasperson et al. 2001). More precisely, risk is threefold: it refers to the severity, the probability, and the uncertainty of a hazardous event impacting human life.

The severity of a risk depends on the spatial scope (area of effect) and on the temporal impact (duration of effect) of the hazard. Defining risk as identical to the perception of risk is valid only insofar as one limits the definition to the individual or collective perception of the severity of a hazard. The probability of a risk refers to the likelihood of a hazardous event and, more precisely, to the frequency of its (more or less inevitable) occurrence: the realist conception of risk focuses on this aspect of risk assessment. The uncertainty of a risk usually refers to the amount of information a subject has about a potentially hazardous event. It is commonly assumed that “if a person’s knowledge was complete, that person would have no uncertainty” (Windschitl and Wells 1996). However, as far as risk as uncertainty is concerned, one must take into account not only the subjective uncertainty that depends on the degree of our knowledge but also the objective uncertainty that is entailed by the inherent “chaotic” complexity of the world. Contemporary sciences have shown that non-linear deterministic causes can have unpredictable consequences because of interdependent processes of actualization.18 Thus “complete” knowledge of the present does not imply total certainty about future outcomes.19

Such intrinsic uncertainty is both a specific feature of risks and a factor qualifying their other components: risk-uncertainty means uncertainty regarding both the likelihood and the severity of risks. Subjective uncertainty affects risk-likelihood by implying that there exists an irreducible margin of error regarding the probability of the occurrence of a risk; objective uncertainty affects risk-severity by implying that it is not possible to state a priori whether a risk will be global or local, temporary or durable. Because of non-linear chains of causality, a local event can have global consequences: a risk thought to be local can become global, or “glocal” (Beck 2000), that is, have different local consequences at diverse places all over the world. In this sense, it follows from the very definition of risk that any risk is virtually global. Thus Global Risks are not a specific kind of risk but a potential feature of risk itself.

Finally, we notice that common definitions of risk describe it as something “bad” happening to human beings. However, risks cannot be equated with harming hazards and “bads” alone, insofar as they also represent opportunities for change: in Mary Douglas’s typology, seeing risk as opportunity is the psychological characteristic of “individualists” (Palmer 1996); in terms of management, it is frequently said that the public sector is risk-averse while the private sector sees risk as an opportunity for higher returns on investment (Cowan 2005: 66). More importantly, defining global risks as both cause and consequence of science, technology, and society interactions implies that human beings are not only the victims of hazards that they try to prevent and manage but also the very producers of these hazards. As Nick Bostrom and Milan M. Ćirković say in the introduction to Global Catastrophic Risks: “The most likely global catastrophic risks all seem to arise from human activities, especially industrial civilization and advanced technologies” (Bostrom and Ćirković 2008).

Thus we propose to define risk as: the severity, probability, and uncertainty of a hazardous event whose management aims at preventing this hazard from harming (unevenly) social collectivities and/or resulting (involuntarily) from human activities.

Consequences of the Definition of Risk for Understanding the New Interactions Between Science, Technology, and Society

The first consequence of our definition of risk is that risk is structural. Different social, geographical, and cultural backgrounds do induce, in different groups or persons, different perceptions of the severity of a hazard’s outcome (Sivak et al. 1989). But whatever the probability of their occurrence, whatever the uncertainty of their nature, risks are unavoidable insofar as the planet on which we live is not a mechanistic engine but an evolving ecosystem.

This paradigmatic shift from a mechanistic vision of the world to a post-Newtonian, ecological one20,21,22 changes not only our understanding of reality but also our perception of the human relation to nature as well as our definition of science. As Philippe Descola (2006) and Bruno Latour demonstrated, the emergence of classical physics went hand in hand with the dissociation of the domain of nature from the dominion of culture, and with the separation of the scientific from the sociological realm. Nature was equated with the universality of (single) objective physical laws, while culture was equated with the relativity of (multiple) subjective human customs.23 As a consequence, scientists were to be isolated from social interference in order to understand an objective nature separated from subjective cultures. This dual separation (of nature from culture and of science from society) was obtained at the price of mechanizing the world. Conversely, understanding the world as an open and interconnected ecosystem leads us to realize that human cultures do have an impact on natural systems and that scientific experiments and theories are embroiled in social experiences and practices.24

Man is indeed an essential part of our continuously evolving planetary ecosystem: “Even in pre-modern societies, human activities may have affected global environment and participated in the shaping of actual landscape and actual atmospheric composition” (Hibbard et al. 2007). Furthermore, with the development of science and technology and with their industrial application to the fabric of modern societies, human actions did have non-linear, irreversible impacts on the environment.25 This new state of things creates the conditions for global risks to emerge: “the birth of risk analysis lay in the systematic appraisal of highly complex problems involving large uncertainties associated with human interactions with nature and technology” (Kasperson et al. 2001).

Our understanding of the function and position of science in society is thus transformed. The image of science expressed by Robert Merton’s values of normal science seems today largely mythical (Bell 2006: 25): scientific universalism, the first “Mertonian value,” is widely criticized by feminists pointing out the patriarchal organization of the scientific community (Glover 2002); scientific communalism is difficult to reconcile with the privatization of public research; scientific disinterestedness is undermined by conflicts of interest between laboratories and companies (Frankel 1996); and organized skepticism is overshadowed by the surge in fraudulent papers (Fanelli 2009).

Because of the limits of Merton’s normal science, Funtowicz and Ravetz (1993) developed the concept of post-normal science. Post-normal science implies that since science is rooted in objective uncertainty, it cannot provide factual predictions but only probabilistic forecasts. Since science is rooted in subjective probability, it needs to include social stakeholders in its deliberative process (extended peer communities). Moreover, since science participates in shaping society through technology, it can have both positive and negative impacts; it can not only reduce but also produce risks: “scientific and technological practices are among the main world uncertainty producers, introducing novel and emergent technologies, organisms and forms of life” (Funtowicz and Strand 2007).

Science is not only a problem-solver but can also be a risk-producer, for structural as well as conjunctural reasons. That science is a risk-producer is perhaps not something that can simply be avoided by “rationalizing” its practice, since it is structurally linked to the fact that important scientific discoveries can also be made by chance; science as a risk-producer is the negative counterpart of scientific serendipity (see Gillies in this volume, Chapter 25). Both serendipity and risks illustrate the fact that the parameters that define science’s mechanisms are not themselves mechanistic. Science being a risk-producer is, conjuncturally, a result of the reshaping of the relations between science, technology, and society in the contemporary world economy. Moreover, and as a direct consequence, when science is no longer linked to closed mechanistic systems but to open ecosystems, when science is no longer separated from society but becomes a part of its edification and a condition of its development, experiments are no longer confined to closed laboratories subject to controlled parameters but can take place in open fields, conducive to uncontrollable effects:

“One of the main feature of contemporary high-power technoscience is that research and implementation are undertaken outside of the secured, controlled and simplified setting of the laboratories, and directly experimented – or we should say experienced – on the socioenvironmental systems of the planet”

(Benessia 2009).

This point is particularly obvious in the case of Genetically Modified Organisms (GMOs) (Berlan et al. 2001: 74–75). According to the 2003 Cartagena Protocol on Biosafety, “an estimation of the overall risk posed by the living modified organism based on the evaluation of the likelihood and consequences of the identified adverse effects” should be carried out before approving its large-scale commercial use. However, on the one hand, scientific expertise regarding the direct or indirect adverse impacts of GMOs on humans and ecosystems is still being developed; on the other hand, GMOs are already massively produced, planted, and consumed in the United States, Argentina, Brazil, India, and elsewhere. As Angelica Hilbeck et al. (2011) said: “despite over 10 years of large scale commercial production of GM crops in at least five countries, no consensus on the applied ERA [Environmental Risk Assessment] methodologies, let alone agreed standardised testing procedures exist.” Thus, whether the use of GMOs proves innocuous or not for human health or ecosystem equilibrium, the example of GMOs reflects the actual fashion of the new interactions between science, technology, and society, where use is contemporaneous with testing and consequences may come before evidence.26

It is for this reason that, on the basis of the acknowledgment of the inherent complexity of social-ecological systems and of the unquantifiable degree of uncertainty attached to current scientific knowledge, the precautionary principle was promoted to minimize risks by implementing measures to prevent (human-generated) hazards from happening: “The emergence of the Precautionary Principle has marked a shift from postdamage control (civil liability as a curative tool) to the level of a pre-damage control (anticipatory measures) of risks” (COMEST 2005). However, the application of the precautionary principle has proved difficult for at least two reasons: first, different interpretations of its meaning can lead to opposite safety policies; second, the scientific expertise involved in a risk assessment may add to its inconclusiveness (the precautionary principle acts upon the acknowledgment of scientific uncertainty, but the degree to which science is uncertain depends itself on prior scientific experimental results).27

The dispute between the United States and the EU about the use of hormonal substances in the rearing of beef cattle illustrates the difference in interpretations of the Precautionary Principle (PP). Preventing damage can have a strong or a weak interpretation. The PP can mean: (1) it is not possible to do y without proof that no damage will emerge from doing y (strong interpretation); (2) it is possible to do y unless we already have proof that damage will result from doing y (weak interpretation). Regarding the beef hormone case, the United States endorsed a weak interpretation of the PP while the EU endorsed a strong one (Rakel 2004). On the basis of the “belief” that hormone-raised beef was safe since no damage had been reported from its consumption, the United States challenged the EU import ban on hormone-raised beef at the WTO. Eventually, the WTO supported US claims on the basis that EU scientists failed to provide evidence “that an identifiable risk arises from the use of any of the hormones at issue for growth promotion purposes in accordance with good practice” (WTO 1997). It is interesting to note that the WTO’s condemnation of the EU import ban clearly shows that the WTO opted for the US interpretation of the PP and thus adopted the procedural culture of its most powerful member. Besides, it is also likely that the EU ban on imports of hormone-raised beef was motivated not only by the safety of its consumers but also by the protection of its producers. In any case, this example shows that risk assessments are not simply a matter of scientific knowledge influenced by “cultural worldviews” but also a matter of power relations influenced by political and economic factors.

Another difficulty with the precautionary principle comes from the nature of science’s involvement in risk assessments. Risk assessment is rooted in policymakers’ belief that scientific expertise will reduce the uncertainties that plague the decision. However, as Dale Jamieson (1996) said, “rather than being a cause of controversy, scientific uncertainty is often a consequence of controversy,” since “scientific uncertainty is constructed by both science and society in order to serve certain purposes.” Science can reduce uncertainty insofar as there is no uncertainty about the social function of scientists and their interests in providing the risk assessment. Science cannot help reduce uncertainties that come not from objective causes but from the socio-economic factors shaping the uneven distribution of risks.28 Thus, finally, it is not so much risks as uncertainties that are the target of power relations among different social actors: either uncertainty is invoked by environmental activists to prohibit the use of a new technology in the name of partial knowledge, or it is used by industry to postpone the legal resolution of trials by asking the plaintiffs to provide ever more information in the name of incomplete data (Jobin and Tseng 2014).

Redefining Risks and Risk Society: Political Challenges of Risk Management

The fact that Global Risks are at the same time cause and consequence of the new modes of interaction between science, technology, and society implies that progress is non-linear (see Figure 27.1). On one hand, the progress of science and technology has increased the ability to predict and, to a certain extent, to prevent global natural risks; on the other, it has increased the possibility of generating, either intentionally or unintentionally, global anthropological risks (Goussot 2009). The risk of a nuclear war or of a terrorist use of nuclear waste comes from the invention of nuclear power and human energy needs; the risk of global epidemic disease comes from the overuse of synthetic antibiotics and the potential use of sequenced viruses stored in open databases (Kilbourne 2008). Risk as a concept refers to the analysis of the severity, probability, and uncertainty of hazards. Risk as an outcome proceeds from the irreducible indeterminacy between real and possible, true and false, safe and unsafe.

Because events are unpredictable due to objective world complexity and data are uncertain due to subjective mental limitations, political decisions have to deal with ambivalent risks. Moreover, not only do complexity and uncertainty put decision-making at stake, but socio-political disputes themselves reinforce uncertainty (by opening new debates about the value of scientific expertise) and complexity (by creating new technologies of management). Uncertainty is not only the reason for the need for science-based risk assessment but also the consequence of value-based risk management (Di Lucia, Ahlgren, and Ericsson 2012).

Figure 27.1 The ontology of social risks and its cognitive basis.

This dual nature of uncertainty leads to intractable difficulties for risk policies. Risk policies based on the extended peer communities of post-normal science (i.e., including stakeholders) can have contradictory results (Friedrichs 2011): on one hand, they may broaden the awareness of the general public (and potentially reduce the amount of involuntary risk); on the other, they can weaken people’s belief in the objective nature of scientific expertise (and thus reinforce mistrust of recommendations to avoid risks). Similarly, risk policies based on the precautionary principle lead to a political dilemma: either governments prohibit the use of potentially hazardous (“risky”) new scientific-technological products or services, at the expense of the research opportunities and economic benefits that could come from them; or they allow their use, at the expense of social-political stability and with the irreversible bio-ecological consequences that may result from them.

The problems of risk societies cannot be resolved by participatory mechanisms and access to information alone. First, because participatory debates are limited in both scope and impact (Petts 2004); second, because thinking that the more people are informed by enlightened experts, the more they will accept governmental policies, supposes that public opposition comes from ignorance; however, “conflicts and controversies surrounding risk management are not due to public ignorance or irrationality but, instead, are seen as a side effect of our remarkable form of participatory democracy” (Slovic 1993). The increase of protests against perceived risky facilities (from nuclear power plants to wind turbines to cell sites) is one of the consequences of “democracy” and access to information – and not of their (discursive) deficit.

But that does not mean that “the only solution is a sufficient measure of coercion” (Ophuls 1977: 150). On the contrary, good governance is the institutional condition of possibility of efficient risk management: openness, transparency, and accountability are necessary to identify risks, to integrate information coming from both civil society and scientific experts, and to react to it in a responsible manner (Heldeweg 2005). Ortwin Renn rightly stressed that “these requirements are important for all countries but, in particular, for many transitional and most developing countries” (Renn 2008). In developing countries, the fact that the rules of good governance are not strictly applied can amplify the potential emergence of risks: in the case of China, political pressure on scientific expertise can bias the risk assessment of many infrastructure projects (Boland 1998). Moreover, since local risks can have global consequences, a failure of risk assessment in one country can affect the others. Hence the emerging need for universal norms of risk assessment. But is a global governance of global risks possible and, if possible, is it even desirable?

First, the systemic character of risks prevents us from believing that global risks can be addressed by simply targeting individual risky behaviors. As Elizabeth Shove argued regarding Climate Change policies, the problem is not to persuade individual actors to “behave responsibly” (whether according to “universal” criteria of economic efficiency or to moral standards of generational responsibility) but to change the rules of the game within which they define their conduct: “one key condition is to shift the focus away from individual choice and to be explicit about the extent to which state and other actors configure the fabric and the texture of daily life” (Shove 2010). Risks are not subjective, in the sense that risky individual behaviors are not the main factor in their emergence and transmission. Social collectives – whether public (states and governments at the regional and national levels) or private (national and international for-profit organizations) – are the primary producers of risks.

Second, as shown by the cases of GMOs and hormone-treated beef, different decisions can proceed from a similar epistemological basis: it is ultimately for the political power to decide whether the risk should be “taken” (i.e., supported by society) or not. This is why global governance is not the ideal solution, since it can itself become a vector of global risks. Global risk management can induce the establishment of new mechanisms of global control of society (a risk-panopticon)29 that could lead to global civil disobedience. Moreover, as demonstrated by the recent “panarchy theory” (a theory of complexity), the more “organized” a system, the less resilient it is. A global governance of risks based on a continuous worldwide survey of pre-identified loci of vulnerability not only may fail to identify risks coming from unexpected places but will also imply a bureaucratic, centralized decision structure that is itself highly vulnerable to external disturbance: “part of the puzzle of adaptive management is how to build a non-bureaucratic bureaucracy” (Pritchard and Sanderson 2002).

Finally, since risks are a social and technological construct and product, they cannot be addressed without deeply amending what is dysfunctional in the social-economic structure in and from which they emerge: “Wealth production within a ‘risk society’ typically depends on production technologies that expose citizens to dangerous substances” (Cable, Shriver, and Mix 2008). Addressing the consequences without rectifying the causes will only raise the level of risks. Democratizing discourses about risks (rationally and openly discussing their potential severity) is a necessary but not sufficient condition of global risk management; democratizing the economy of risks (fairly distributing hazards’ actual impacts) will be more efficient in the long term: if everyone bears the same amount of risk, there will no longer be any incentive for their asymmetric accumulation in the social-ecological system.

References

  1. Alexander, Jeffrey C., and Philip Smith. 1996. “Social Science and Salvation: Risk Society as Mythical Discourse.” Zeitschrift für Soziologie 25(4): 251–262.
  2. Athanasiou, Tom. 1996. Divided Planet: The Ecology of Rich and Poor. Boston, MA: Little, Brown and Company.
  3. Aven, Terje. 2010. Misconceptions of Risk. Chichester: John Wiley & Sons Ltd.
  4. Beck, Ulrich. 1992a. Risk Society: Towards a New Modernity. London: Sage.
  5. Beck, Ulrich. 1992b. “From Industrial Society to the Risk Society: Questions of Survival, Social Structure and Ecological Enlightenment.” Theory, Culture and Society 9(1): 97–123.
  6. Beck, Ulrich. 1995. Ecological Enlightenment: Essays on the Politics of the Risk Society. Atlantic Highlands, NJ: Humanities Press International.
  7. Beck, Ulrich. 2000. “Risk Society Revisited.” In The Risk Society and Beyond: Critical Issues for Social Theory, ed. Barbara Adam, Ulrich Beck, and Joost Van Loon, 211–229. London: Sage.
  8. Bell, David. 2006. Science, Technology and Culture. Maidenhead, UK: Open University Press.
  9. Benessia, Alice. 2009. “From Certainty to Complexity.” In Science, Society and Sustainability, ed. Donald Gray, Laura Colucci-Gray, and Elena Camino, 10–26. New York: Routledge.
  10. Berlan, Jean-Pierre, et al. 2001. La guerre au vivant. Marseilles: Agone.
  11. Boland, Alana. 1998. “The Three Gorges Debate and Scientific Decision-Making in China.” China Information 13: 25–42.
  12. Bostrom, Nick, and Milan M. Ćirković (eds.). 2008. Global Catastrophic Risks. Oxford: Oxford University Press.
  13. Cable, Sherry, Thomas E. Shriver, and Tamara L. Mix. 2008. “Risk Society and Contested Illness: The Case of Nuclear Weapons Workers.” American Sociological Review 73(3): 380–401.
  14. Capra, Fritjof. 1996. The Web of Life: A New Scientific Understanding of Living Systems. New York: Anchor Books.
  15. Carpenter, Stephen R., et al. 2006. “Millennium Ecosystem Assessment: Research Needs.” Science 314(5797): 257–258.
  16. COMEST. 2005. The Precautionary Principle. Paris: UNESCO, World Commission on the Ethics of Scientific Knowledge and Technology.
  17. Cowan, Neil. 2005. Risk Analysis and Evaluation. Canterbury: Institute of Financial Services.
  18. David, Matthew. 2005. Science in Society. London: Palgrave Macmillan.
  19. Descola, Philippe. 2006. Par-delà Nature et Culture. Paris: Gallimard.
  20. de-Shalit, Avner. 2000. The Environment Between Theory and Practice. Oxford: Oxford University Press.
  21. Di Lucia, Lorenzo, Serina Ahlgren, and Karin Ericsson. 2012. “The Dilemma of Indirect Land-Use Changes in EU Biofuel Policy – An Empirical Study of Policy-Making in the Context of Scientific Uncertainty.” Environmental Science Policy 16: 9–19.
  22. Dodson, John. 2010. “Introduction.” In Changing Climates, Earth Systems and Society, ed. John Dodson, xix–xx. New York: Springer.
  23. Douglas, Mary. 1984. Purity and Danger: An Analysis of the Concepts of Pollution and Taboo. New York: Routledge.
  24. Douglas, Mary. 1992. Risk and Blame: Essays in Cultural Theory. London: Routledge.
  25. Douglas, Mary. 1997. “The Depoliticisation of Risk.” In Culture Matters: Essays in Honor of Aaron Wildavsky, ed. R.J. Ellis and M. Thompson, 121–132. Boulder, CO: Westview Press.
  26. Douglas, Mary, and Aaron Wildavsky. 1983. Risk and Culture: An Essay on the Selection of Technological and Environmental Dangers. Berkeley and Los Angeles: University of California Press.
  27. Emmeche, Claus, Simo Køppe, and Frederik Stjernfelt. 1997. “Explaining Emergence: Towards an Ontology of Levels.” Journal for General Philosophy of Science 28: 83–119.
  28. Engel, Uwe, and Hermann Strasser. 1998. “Global Risks and Social Inequality: Critical Remarks on the Risk-Society Hypothesis.” The Canadian Journal of Sociology/Cahiers canadiens de sociologie 23(1): 91–103.
  29. Fanelli, Daniele. 2009. “How Many Scientists Fabricate and Falsify Research?” PLoS ONE 4(5): e5738.
  30. Frankel, Mark S. 1996. “Perception, Reality, and the Political Context of Conflict of Interest in University-Industry Relationships.” Academic Medicine 71(12): 1297–1304.
  31. Friedrichs, Jörg. 2011. “Peak Energy and Climate Change: The Double Bind of Post-Normal Science.” Futures 43(4): 469–477.
  32. Funtowicz, Silvio, and Jerome R. Ravetz. 1993. “Science for the Post-Normal Age.” Futures 25(7): 735–755.
  33. Funtowicz, Silvio, and Roger Strand. 2007. “Models of Science and Policy.” In Biosafety First: Holistic Approaches to Risk and Uncertainty in Genetic Engineering and Genetically Modified Organisms, ed. Terje Traavik and Lim Li Ching. Trondheim: Tapir Academic Press.
  34. Glover, Judith. 2002. “Women and Scientific Employment: Current Perspectives from the UK.” Science Studies 15(1): 29–45.
  35. Gonzalez, Patrick. 2010. “Impacts of Climate Change on Terrestrial Ecosystems and Adaptation Measures for Natural Resource Management.” In Changing Climates, Earth Systems and Society, ed. John Dodson, 5–20. New York: Springer.
  36. Goussot, Michel. 2009. “La représentation du changement.” In Le changement en environnement: Les faits, les représentations, les enjeux, ed. Martine Tabeaud, 69–80. Paris: Publications de la Sorbonne.
  37. Harrison, Ellen Z., Murray B. McBride, and David R. Bouldin. 1999. “Land Application of Sewage Sludges: An Appraisal of the US Regulations.” International Journal of Environment and Pollution 11(1): 1–36.
  38. Heldeweg, Michiel. 2005. “Towards Good Environmental Governance in Europe.” European Energy and Environmental Law Review 14(1): 2–24.
  39. Hibbard, Kathy A., et al. 2007. “Group Report: Decadal-Scale Interactions of Humans and the Environment.” In Sustainability or Collapse? An Integrated History and Future of People on Earth, ed. Robert Constanza, Lisa J. Graumlich, and Will Steffen. Cambridge, MA: MIT Press.
  40. Hilbeck, Angelica, et al. 2011. “Environmental Risk Assessment of Genetically Modified Plants – Concepts and Controversies.” Environmental Sciences Europe 23: 13.
  41. Jamieson, Dale. 1996. “Scientific Uncertainty and the Political Process.” Annals of the American Academy of Political and Social Science 545(1): 35–43.
  42. Jasanoff, Sheila. 1999. “The Songlines of Risk.” Environmental Values 8(2): 135–152.
  43. Jasanoff, Sheila. 2003. “Technologies of Humility: Citizen Participation in Governing Sciences.” Minerva 41: 223–244.
  44. Jobin, Paul, and Tseng Yu-Hwei. 2014. “Guinea Pigs Go to Court: Epidemiology and Class Actions in Taiwan.” In Powerless Science? The Making of the Toxic World in the Twentieth Century, ed. Soraya Boudia and Nathalie Jas. Oxford: Berghahn Books.
  45. Kasperson, Jeanne X., Roger Eugene Kasperson, and Kirstin Dow. 2001. “Global Environmental Risks and Society.” In Global Environmental Risk, ed. Jeanne X. Kasperson and Roger Eugene Kasperson, 1–48. London: United Nations University Press.
  46. Kilbourne, Edwin Dennis. 2008. “Plagues and Pandemics: Past, Present and Future.” In Global Catastrophic Risks, ed. Nick Bostrom and Milan M. Ćirković, 218–231. Oxford: Oxford University Press.
  47. Knight, Frank H. 1921. Risk, Uncertainty, and Profit. Boston, MA: Houghton Mifflin Company.
  48. Latour, Bruno. 1993. We Have Never Been Modern. Cambridge, MA: Harvard University Press.
  49. Latour, Bruno. 2004. Politics of Nature. Cambridge, MA: Harvard University Press.
  50. Luke, Timothy W. 1995. “On Environmentality: Geopower and Ecoknowledge, in the Discourses of Contemporary Environmentality.” Cultural Critique 31: 57–82.
  51. Mokyr, Joel. 2005. The Gifts of Athena: Historical Origins of the Knowledge Economy. Princeton, NJ: Princeton University Press.
  52. Ophuls, William. 1977. Ecology and the Politics of Scarcity. San Francisco, CA: W.H. Freeman and Company.
  53. Palmer, Christina G.S. 1996. “Risk Perception: An Empirical Study of the Relationship Between Worldview and the Risk Construct.” Risk Analysis 16(5): 717–723.
  54. Petts, Judith. 2004. “Barriers to Participation and Deliberation in Risk Decisions: Evidence from Waste Management.” Journal of Risk Research 7(2): 115–133.
  55. Pritchard, Lowell Jr., and Steven E. Sanderson. 2002. “The Dynamics of Political Discourses in Seeking Sustainability.” In Panarchy: Understanding Transformations in Human and Natural Systems, ed. Lance H. Gunderson and C.S. Holling, 147–172. Washington, DC: Island Press.
  56. Rakel, Horst. 2004. “Scientists as Expert Advisors: Science Cultures Versus National Cultures?” In Experts in Science and Society, ed. Elke Kurz-Milcke and Gerd Gigerenzer, 3–26. New York: Springer.
  57. Reid, W.V., et al. 2010. “Earth System Science for Global Sustainability: Grand Challenges.” Science 330(6006): 916–917.
  58. Renn, Ortwin. 2008. “White Paper on Risk Governance: Towards an Integrative Framework.” In Global Risk Governance: Concept and Practice Using IRGC Framework, ed. Ortwin Renn and Katherine D. Walker, 3–73. Berlin and Heidelberg: Springer.
  59. Robbins, Paul, John Hintz, and Sarah A. Moore. 2010. Environment and Society: A Critical Introduction. Oxford: Wiley-Blackwell.
  60. Robock, Alan. 2008. “20 Reasons Why Geoengineering May Be a Bad Idea.” Bulletin of the Atomic Scientists 64(2): 14–18.
  61. Rockström, Johan, et al. 2009. “A Safe Operating Space for Humanity.” Nature 461(7263): 472–475.
  62. Rosa, Eugene A. 2003. “The Logical Structure of the Social Amplification of Risk Framework (SARF): Metatheoretical Foundations and Policy Implications.” In The Social Amplification of Risk, ed. Nick Pidgeon, Roger E. Kasperson, and Paul Slovic, 47–79. Cambridge: Cambridge University Press.
  63. Semenza, Jan C., et al. 2008. “Public Perception of Climate Change: Voluntary Mitigation and Barriers to Behavior Change.” American Journal of Preventive Medicine 35(5): 479–487.
  64. Shove, Elizabeth. 2010. “Beyond the ABC: Climate Change Policy and Theories of Social Change.” Environment and Planning A 42: 1273–1285.
  65. Sivak, M., J. Soler, U. Trankle, and J.M. Spagnhol. 1989. “Cross-Cultural Differences in Driver Risk-Perception.” Accident Analysis and Prevention 21(4): 355–362.
  66. Slater, Don, and George Ritzer. 2001. “Interview with Ulrich Beck.” Journal of Consumer Culture 1(2): 261–277.
  67. Slovic, Paul. 1992. “Perception of Risk: Reflections on the Psychometric Paradigm.” In Social Theories of Risk, ed. S. Krimsky and D. Golding, 117–152. Westport, CT: Praeger.
  68. Slovic, Paul. 1993. “Perceived Risk, Trust, and Democracy.” Risk Analysis 13(6): 675–682.
  69. Smolin, Lee. 2009. “The Unique Universe: Against the Timeless Multiverse.” Physics World June: 21–26.
  70. Stern, Nicholas, and Laurence Tubiana. 2008. “A Progressive Global Deal on Climate Change.” In A Progressive Agenda for Global Action, ed. Policy Network. London: Policy Network.
  71. Ulanowicz, Robert E. 2000. “Life after Newton: An Ecological Metaphysic.” In The Philosophy of Ecology: From Science to Synthesis, ed. David R. Keller and Frank B. Golley, 81–100. Athens and London: University of Georgia Press.
  72. Wigley, T.M.L. 2006. “A Combined Mitigation/Geoengineering Approach to Climate Stabilization.” Science 314(5798): 452–454.
  73. Windschitl, P.D., and G.L. Wells. 1996. “Measuring Psychological Uncertainty: Verbal Versus Numeric Methods.” Journal of Experimental Psychology: Applied 2(4): 343–364.
  74. WTO. 1997. EC Measures Concerning Meat and Meat Products (Hormones): Complaint by the United States: Report of the Panel. Geneva: World Trade Organization (WT/DS26/R/USA).

Notes
