4
The Data World

With Three Days of the Condor (1975), American filmmaker Sydney Pollack (1934–2008) made a film very much rooted in its time: one of contestation of power fueled by mistrust – and of a salutary counter-power embodied by the press. Joseph Turner, the character played by American actor Robert Redford, works in a CIA office in New York. What does he do for a living? He reads vast quantities of information found in different media (books, newspapers, reports, etc.) in order to discover unexpected relationships – the strategic intentions of states, organizations, companies, etc. Without knowing it, he uncovers one relationship that precipitates him into a race for his life. He gets out of difficult situations thanks to an imagination that takes his pursuers by surprise. One of them wonders how he does it. “He reads a lot”, replies another. This film anticipated by a few decades the importance of data* and the use that can be made of them for different purposes: understanding, learning, knowing, anticipating and acting.

Putting the world into equations in order to understand it, in the way we approached it in the first chapter of this volume, is limited to physical phenomena that are sufficiently “regular”. The limits of equation-based models are reached for some processes encountered in physics, biology or chemistry that are too complex to allow for effective mathematical or numerical modeling. They are also rapidly reached in the human and social sciences, owing to the complexity of the entities studied – these are not as easy to break down into simple elements as physical systems potentially are. The French physicist Pablo Jensen explains this as follows:

[In social systems], it is generally impossible to isolate the effect of a single factor without destroying the [studied] system; and their combination, made up of many interactions, [remains] complex. [JEN 18]

Complementing the mathematical language of equations, data science thus contributes to the production of knowledge about a system, with data allowing, in the same way as equations, the construction of predictive mathematical models.

4.1. Big data

Data-based modeling is at the heart of Big Data* (Figure 4.1). It involves using data from various sources to identify relationships between pieces of information and to make predictions where equation-based modeling is not possible.

image

Figure 4.1. The 4 Vs of Big Data (Source: www.shutterstock.com). For a color version of this figure, see www.iste.co.uk/sigrist/simulation1.zip

COMMENT ON FIGURE 4.1.– Volume, Velocity, Variety and Veracity are the four main qualities of the data used in Big Data. Data production has nowadays become inexpensive, and digital techniques make available an ever-increasing amount of data (Volume). Data are available on different media (distributed files, disks, networks, etc.) at an ever lower storage cost, derived for instance from sensors that observe the functioning of an object or provide information about its environment. They are created, stored and processed using different digital systems (personal computers, smartphones, supercomputers, etc.) which are becoming increasingly fast (Velocity). Falling telecommunication costs and increasing data rates reduce the time between the production of data and their availability for use, simplifying their collection. Coded using binary digits, 0 or 1, the data come from different sources and are of a diverse nature: texts, sounds, images, etc. (Variety). In order to be useful in the development of numerical models that contribute to decision support, they must provide the most accurate and verifiable information possible (Veracity), which remains one of the challenges of Big Data. Other data characteristics are necessary for their exploitation, such as the possibility of using them (Validity), representing them (Visualization), protecting them (Vulnerability), knowing how long they remain relevant (Volatility), or detecting their inconsistencies (Variability).

Data are the essential raw material for algorithms that contribute to decision support – for example, to provide targeted service offers to companies and individuals, to anticipate traffic difficulties on a road network, to optimize energy availability, or to assist people in their jobs:

  • – a surgeon or doctor for diagnosis or treatment;
  • – a lawyer practicing law;
  • – an insurance agent in risk management;
  • – a human resources manager in a recruitment process;
  • – an investigator in their investigations;
  • – a journalist in the presentation of statistical results (electoral contests or sporting confrontations, in particular);
  • – an engineer in the predictive maintenance of installations (monitoring the ageing of a bridge, the operation of a machine, etc.).

In the following, we will focus on how data can be used to build models – and how these models can interact with simulations based on mathematical models. We will start by discussing the link between equations and data, then recall some statistics on the world of digital data, and finally present elementary concepts of data analysis. This will lead us to discuss the technique of neural networks, as well as artificial intelligence*.

4.2. Data and networks

For American filmmaker David Fincher, romantic disappointment and social ambition are at the origins of Facebook, whose story is told in romanticized form in The Social Network [FIN 10]. Created in 2004 by American entrepreneur Mark Zuckerberg, then a student at Harvard University, the social network had by 2018 nearly two billion users worldwide – nearly a third of humanity! Access to the network has been free since its inception. In exchange, its users post a great deal of information about their private lives, interests, tastes and opinions on a wide range of topics. They react to information published by other users with whom they are in contact. Initially expressed simply by the famous “Like” button, this possibility has been enriched by other icons expressing emotions. Facebook users thus give away a great deal of information about themselves. This digital information has potentially considerable technical and economic value, as it is of interest to many commercial companies hoping to use it to understand consumer habits. For some economic analysts, the value of a social network is based on its knowledge of its users: Facebook was listed on the New York Stock Exchange in 2012 and reached an estimated capitalization of more than $400 billion five years later.

In 2018, the company found itself at the center of a political and economic scandal, when an investigation by British journalist Carole Cadwalladr suggested that information collected on Facebook had been used without the knowledge of some users [CAD 18]. In the context of the 2016 US presidential election, data from the social network were used, among other things, to establish the psychological profile of voters identified as undecided, in order to send them targeted messages. Known as the Cambridge Analytica Files2, the journalist’s investigation aimed to alert citizens to the dangers that certain data analysis techniques can pose, both to democratic processes and to individual freedoms [NOU 19]. The case was taken seriously by elected representatives: Mark Zuckerberg was heard on the matter in April 2018 by a US Senate committee of inquiry [NEW 18]. In May 2018, the European General Data Protection Regulation (GDPR) came into force. Adopted two years earlier, and conceived before the issues raised by the scandal mentioned above came to light, it aims to regulate the data collection and use practices enabled by new technologies, in order to protect Internet users. Despite limitations to its real effectiveness, the GDPR highlights the importance of data in modern, connected societies – and raises the question of collective, as well as individual, means of action to guide their use [SCH 18].

Data are also used to develop artificial intelligence programs. In 2015, Facebook opened a research laboratory in Paris dedicated to these techniques. Integrated with other teams in the company in the United States, it aims to develop tools for voice and image recognition, for example, or machine translation. With the quantity of data, especially images, that its users have posted on this site (and others), sometimes identifying people and places, the company is able to develop these image recognition algorithms. Perhaps we participate collectively, and without fully knowing it, in the research and development programs of Facebook and the other digital giants [ONE 19], by allowing them to use data that another organization (company or research laboratory) would probably have been unable to collect without significant financial resources. Hosting nearly 250 billion images, uploaded at a rate of 200 to 350 million per day, Facebook is the third most popular site on the Web after the Google search engine and the YouTube video channel. The Internet as a whole handles an ever-increasing amount of data [BUR 17], corresponding to the exponential digital activity of Internet users (Figure 4.2): nearly 2.5 EB (2.5 billion billion bytes) of data were created every day on the Internet in 2018!

image

Figure 4.2. 60 seconds of Internet (Source: www.virtualcapitalist.com). For a color version of this figure, see www.iste.co.uk/sigrist/simulation1.zip

COMMENT ON FIGURE 4.2.– Monthly Internet activity amounts to nearly 42 billion connections on the Facebook social network, 160 billion requests on the Google search engine, 1,650 billion messages exchanged on the WhatsApp application and 8,000 billion emails sent on the various digital messaging systems! (Source: www.virtualcapitalist.com).

For comparison, let us set out some rough ideas on information storage capacities. A byte encodes a character (letter, number, symbol) using eight bits (0 or 1). A written page, containing on average 250 words or 1,500 characters, represents about 1 kB (one thousand bytes). In the Bibliothèque de la Pléiade edition, Alexandre Dumas’ novel The Count of Monte Cristo [DUM 81] runs to about 1,500 pages: they represent 1.5 MB (1.5 million bytes). Coding genetic information in a four-letter alphabet, the DNA molecule contains 3.2 billion base pairs, or about 3.2 GB. In comparison, a single-layer DVD has a storage capacity of 4.7 GB, a double-layer DVD 17 GB and a typical memory card about 32 GB. In a prospective book, American engineer and researcher Raymond Kurzweil estimates the functional memory of human beings at 10 TB (ten million million bytes):

Based on my own experience in designing systems that can store knowledge in either rule-based expert systems or self-organizing pattern-recognition systems, a reasonable estimate (of a human’s total functional memory capacity) is 10^13 bytes [KUR 05].
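
These orders of magnitude lend themselves to simple arithmetic. The following lines – a minimal sketch, with the figures quoted above hard-coded as assumptions – make the comparisons explicit:

```python
# Back-of-the-envelope arithmetic for the storage figures quoted above.
KB, MB, GB, TB = 1e3, 1e6, 1e9, 1e12    # decimal units, in bytes

page = 1 * KB            # one page: ~250 words, ~1,500 characters
novel = 1_500 * page     # The Count of Monte Cristo: ~1,500 pages
dna = 3.2 * GB           # 3.2 billion base pairs, counted ~1 byte each
memory = 10 * TB         # Kurzweil's estimate of functional human memory

print(f"one novel: {novel / MB:.1f} MB")              # 1.5 MB
print(f"DNA molecule: {dna / GB:.1f} GB")             # 3.2 GB
print(f"human memory: {memory / novel:,.0f} novels")  # ~6.7 million
```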

In high-performance computing, storage needs are counted in PB (petabytes, 10^15 bytes, almost a million times the capacity of a smartphone) – the archiving capacity that data centers offer nowadays (Figure 4.3). The latter meet the needs of many sectors of the digital economy, and their number is expected to increase significantly with the development of emerging digital technologies, such as artificial intelligence and blockchain*. Storing, sharing and reusing data are also among the challenges associated with the practice of numerical simulation.

image

Figure 4.3. Map of data center locations in Western Europe (Source: www.datacentermap.com). For a color version of this figure, see www.iste.co.uk/sigrist/simulation1.zip

COMMENT ON FIGURE 4.3.– The number of data centers around the world is growing to meet the needs of a variety of users. While the storage capacity of a given data center is often confidential, it is typically measured in tens of PB. Storing information has a significant energy cost – in the first place to ensure the cooling of storage bays. The operation of declared data centers worldwide would require more than 400 TWh, that is, nearly three-quarters of France’s annual electricity production capacity, estimated in 2018 at nearly 550 TWh (Source: www.rte-france.com, [BAW 16]).

What can be done with this mass of information, of which currently less than 10% is estimated to be exploited? Learn! Different strategies can serve this purpose – and the mathematical tools used by engineers are often based on concepts that we implement in our own lives to solve the problems we encounter. Linking data helps to develop predictions from observations, and some statistical tools are useful here, with many limitations that should be kept in mind, some of which are discussed below (Box 4.1).

4.3. Learning from data

With digital techniques, learning is entrusted to machines operating data processing algorithms. Mathematically, learning is about predicting an event y from data X = (x_n)_{1≤n≤N} (like a doctor making a diagnosis based on clinical observations and analytical data, or a player betting on the victory of the English rugby team based on the state of the field, Jonny Wilkinson’s physical fitness, the team’s past results, etc.). It is thus a question of establishing a formal relationship y = ψ(X), where ψ designates either an explicit mathematical function (a “simple” expression is proposed, using known functions, for example polynomials) or an implicit one (a relationship is found that cannot be written as a simple closed-form expression, but is represented by combining known functions).

Starting from a history consisting of a collection of data and associated events, learning techniques seek to develop the function ψ to predict the event using future data:

  • the learning phase implements an optimization algorithm, in order to find, among different candidate functions, the one that best fits the data. Different methods contribute to this objective;
    • - learning can be “supervised” (realizations of the event y and the corresponding data X = (x_n)_{1≤n≤N} are known) or “unsupervised” (only the data X are known). There are also intermediate configurations, where learning is “partially supervised”: in these cases, the data X or the events y are only incompletely known, for some quantities or for all of them;
    • - learning can be “passive” (data come from measurement or survey results, and their origin, formatting, etc. is not controlled) or “active” (data are selected, they can be obtained, for example, by means of measurements or simulations, which in the latter case makes it possible to study particular situations and test the system in known configurations);
    • - learning can be accomplished “sequentially” (by processing data in batches) or “continuously” (by processing data as a stream);
  • the performance evaluation phase aims to ensure that the function ψ is able to predict the data “correctly” (in the sense of an error criterion).

In general, the process is iterative: the data are divided into two groups, the first used for learning and the second for evaluation. A correction is made to the function ψ when it does not achieve the desired performance, and the process is repeated until the expected performance is reached. Learning strategies implement a variety of algorithms and generally strike a compromise between the complexity of the function sought and the quality of the prediction it makes (Figure 4.7).

image

Figure 4.7. Finding a balance between the complexity and representativeness of an explanation (Source: https://www.geckoboard.com/learn/data-literacy/statistical-fallacies/overfitting/). For a color version of this figure, see www.iste.co.uk/sigrist/simulation1.zip

COMMENT ON FIGURE 4.7.– A complex model generally allows for a more accurate representation of initial data than a simpler model, at the risk of being less robust, that is, not being able to account for random variations and/or giving poor results for new data. A simpler model, on the other hand, can balance precision and robustness.
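
The compromise illustrated by Figure 4.7 can be made concrete with a small numerical experiment. In the sketch below – a toy example, in which the data, the noise level and the choice of polynomials as candidate functions ψ are all illustrative assumptions – the data are split into a learning set and an evaluation set, and polynomials of increasing degree are fitted to the first and tested on the second:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical observations: a smooth trend plus measurement noise.
X = np.linspace(0, 1, 40)
y = np.sin(2 * np.pi * X) + rng.normal(0, 0.2, X.size)

# Split the history into a learning group and an evaluation group.
train, test = np.arange(0, 40, 2), np.arange(1, 40, 2)

for degree in (1, 3, 9, 15):
    # Learning phase: find the polynomial psi that best fits the data.
    psi = np.polynomial.Polynomial.fit(X[train], y[train], degree)
    # Evaluation phase: measure the prediction error on unseen data.
    err_train = np.sqrt(np.mean((psi(X[train]) - y[train]) ** 2))
    err_test = np.sqrt(np.mean((psi(X[test]) - y[test]) ** 2))
    print(f"degree {degree:2d}: train error {err_train:.3f}, "
          f"test error {err_test:.3f}")
```

Typically, the error on the learning set keeps decreasing with the degree, while the error on the evaluation set ends up increasing: the signature of the overfitting of Figure 4.7 – the complex model captures the noise rather than the trend.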

There are many learning and data processing techniques – an example of learning from data obtained from a calculation is provided in Box 4.2.

Some learning techniques take advantage of algorithms that produce knowledge in a way similar to what our brain is able to achieve: neural networks, which are particularly effective for learning over a very large amount of data.

4.4. Biological and digital brains

In the introduction to his short story The Murders in the Rue Morgue, published in 1841, American poet and novelist Edgar Allan Poe (1809–1849) wrote that the intellectual faculties have a particular appeal: they are not fully understood and are manifested by their effects:

The mental features discoursed of as the analytical are, in themselves, but little susceptible of analysis. We appreciate them only in their effects. We know of them, among other things […]. His results, brought about by the very soul and essence of method, have, in truth, the whole air of intuition [POE 41].

Understanding these intellectual faculties requires, for some, a thorough study of the functioning of the brain, often considered as the noblest organ of the human body, and long conceptualized as the unique seat of thought and consciousness – or even the soul.

As with other organs, knowledge about the brain is built starting with the knowledge of its anatomy. While it began with the first doctors of Antiquity, such as Hippocrates (460–377 BC), it developed in Europe in the 17th Century, as dissection practices became widespread in academic and legal medicine. At that time, the French mathematician and philosopher René Descartes (1596–1650) referred to the pineal gland as “the seat of the soul”. He put forward an anatomical argument: this small gland has a singular shape and thus stands out in a brain formed of two symmetrical hemispheres – it must therefore have a special function, that of hosting the soul!

Neuroscience emerged as a specific branch of biology and medicine in the 19th Century, as scientific discoveries in various fields (electricity, magnetism, chemistry, optics, etc.) made it possible to go beyond the mere anatomical description of the brain (Figure 4.12). The first “neuroscientists” highlighted the existence of brain areas favored in the performance of certain tasks. In 1861, the French doctor and anthropologist Paul Broca (1824–1880) noted that a lesion in a region of the left part of the brain can cause a significant alteration in patients’ speech production. A few years later, the German neurologist and psychiatrist Carl Wernicke (1848–1905) observed that the understanding of language was compromised by lesions in another region of the same hemisphere. The Broca and Wernicke areas, located in the left hemisphere, became “the language region”.

A contemporary of Broca and Wernicke, the British neurologist John Hughlings Jackson (1835–1911) reported in the 1870s the cases of patients suffering from a lesion affecting their right cerebral hemisphere: they could no longer identify people around them, got lost in familiar places or failed to orient themselves correctly. Jackson thus demonstrated a right-hemisphere dominance for visual and spatial functions, which contribute to the understanding of space. Using a technique developed by the Italian doctor Camillo Golgi (1843–1926) to visualize nerve cells, the Spanish histologist Santiago Ramón y Cajal (1852–1934) observed the ramifications of nerve cells in the brain. The two scientists thus showed that neurons are the basic structural and functional units of the nervous system in the brain – in 1906, they shared the Nobel Prize in Medicine for this discovery.

image

Figure 4.12. Brain anatomy, Herbert Mayo (1796–1852), engraving, 1827 (Source: E. Finden/Wellcome Collection)

Neuroscience then underwent constant development, integrating advances in medical imaging, data processing and, perhaps tomorrow, numerical simulation (Chapter 7, volume two)! It contributes to the development of many human sciences (medicine, biology, psychology, psychiatry, etc.) as well as digital sciences (computer science and artificial intelligence in particular).

4.4.1. Biological neurons

Two categories of tissue, grey matter and white matter, constitute the brain. The first is made up of the cell bodies of neurons, their dendrites and other cells; it is responsible for sensory and motor activity, as well as cognitive functions (e.g. reading, arithmetic, attention and memory). The second is made up of axons and of glial cells, which form the environment of neurons. Wrapped in a fatty sleeve of myelin, the axons connect the different regions of grey matter so that they can exchange information.

4.4.1.1. Neurons and their connections

A basic cell of the nervous system, a neuron receives and transmits information contained in a bioelectrical signal, the nerve impulse (Figure 4.13). The dendrites connected to the neuron’s cell body receive this information by stimulation, and the axon emits it in the form of an action potential.

image

Figure 4.13. Simplified diagram of a neuron (Source: Natalia Romanov/www.123rf.com). For a color version of this figure, see www.iste.co.uk/sigrist/simulation1.zip

The axons are surrounded by myelin, a substance that insulates and protects nerve fibers, like the plastic sheath around conductive wires. In some respects, a neuron functions as a logic gate: normally closed, it opens above a stimulation threshold. The human brain has just over 85 billion neurons, the intestine nearly 500 million. Neurons connect to each other to exchange these signals through synapses, about 10,000 per neuron – and these connections are dynamic (Figure 4.14).

image

Figure 4.14. Neurons in the brain (Source: Dr. Jonathan Clarke, University College London). For a color version of this figure, see www.iste.co.uk/sigrist/simulation1.zip

COMMENT ON FIGURE 4.14.– This photomicrograph shows neurons from a region within the forebrain of the Ground Squirrel. The large bright cells are pyramidal neurons, forming a network in the brain. These are nerve cells from the cerebral cortex that have one large apical dendrite and several basal dendrites (Source: Wellcome Collection).

Neural plasticity is one of the most famous recent discoveries in neuroscience. It refers to the brain’s ability to reorganize its neurons and their connections, and therefore to the ability of all human beings to learn throughout their lives; it is one of the current questions researchers are asking about brain function and intelligence. Plasticity and elasticity are terms initially used in materials science, elasticity referring to reversible mechanisms, plasticity designating irreversible modes of transformation involving a reordering of the material. Neural plasticity calls into question certain conceptions of the functioning of the brain – among them, the ideas that we are born with a stock of neurons that we then inexorably lose, that the structure of the brain is established definitively in our early years, and that we use only a fraction of our mental capacities.

Our brain is thus able to adapt permanently, and its plasticity involves chemical (at the neuron scale), structural (at the scale of neural connections) and functional (in given areas) changes. These take place at different times and interact. Chemical transformations are observed over short periods of time (from about a second to a minute) and are associated with short-term memory, while structural or functional changes, which organize long-term memory, take place over long periods of time (from days to years).

The mechanisms at play and their effects are specific to each individual, and their great variability reflects the richness and diversity of human skills! Neuroplasticity suggests that our learning ability is a dynamic process. It can be expressed throughout life – but it loses flexibility and effectiveness as we age. What we practice – and what we do not practice – somehow determines our new skills.

4.4.1.2. Brain areas and their communications

Reinforced by increasingly powerful experimental techniques (electroencephalography, magnetic resonance imaging, magneto-encephalography, etc.) and combined with other sciences, neuroscience provides a better understanding of the enigmas of the brain, learning mechanisms and consciousness [KOC 12]. Visualization techniques allow us to map the brain and allow neuroscientists to begin to understand it as a whole (Figure 4.15).

image

Figure 4.15. Healthy adult human brain viewed from the side, tractography (Source: Henrietta Howells, NatBrainLab). For a color version of this figure, see www.iste.co.uk/sigrist/simulation1.zip

COMMENT ON FIGURE 4.15.– Side view of connections in the brain of a healthy 29 year old human female. The brain is viewed as if looking through the head from a person’s right ear. The front of the brain is facing the right side of the image and the back of the brain is on the left. Brain cells communicate with each other through these nerve fibers, which have been visualized using diffusion imaging tractography. Diffusion weighted imaging is a specialized type of magnetic resonance imaging (MRI) scan which measures water diffusion in many directions in order to reconstruct the orientation of bundles of axons. Tractography is used to indirectly model these bundles of axons (nerve fibers), which transmit information between cortical regions at the brain’s surface. The brain measures approximately 18 cm from front to back (Source: Wellcome Collection).

These techniques make it possible to establish scientific results that shed new light on the brain, and to dispel myths about it – the most emblematic being perhaps that of the “brain dichotomy”. Assuming a specialization of the hemispheres in learning processes and personality development, this model of the brain spread in the 20th Century on the basis of misinterpretations of the discoveries of Broca, Wernicke and Jackson. It still persists today even though it is not based on scientific results [VER 17a]. While some mental tasks involve specific brain areas (for example, language learning mainly involves the left hemisphere3), the idea of a specialization of the two hemispheres is simply wrong: “Even if some tasks or subtasks are rather carried out in one of the two hemispheres, connections run between the two and ensure that the overall task is completed, that the brain as a whole functions well” [PAS 15].

Scientists arrive at these conclusions, among other things, through research using imaging techniques. In 2013, for example, a team of researchers visualized the activated areas in the brains of a thousand volunteers, between the ages of 7 and 77, who were asked to perform all kinds of mental tasks. By studying neural connections and their statistical distribution within or between hemispheres, they showed that the tasks assigned to the left or right brain involve connections that are not exclusively located in one hemisphere or the other. Thus, all regions of the brain are involved in our mental activities at different times (Figure 4.16).

While the brain mechanisms that contribute to learning still remain to be explored in detail, they have long inspired computer scientists, who for decades have been developing formal neural networks and connecting them to obtain “digital brains”4. As a very schematic model of brain dynamics, neural networks are used in many computer applications, as Google researcher Ian Goodfellow explains:

Neural networks are nowadays used in many computer applications. Computers are able to recognize formal content: image or sound, even human language. They can also analyze the content of a text – for example, they are able to understand whether a film review is positive or negative overall. They are widely used in basic sciences, such as astrophysics, where image analysis makes it possible to identify new celestial bodies (planets, galaxies)…

image

Figure 4.16. “Right/left Brain”: from metaphor to scientific study. For a color version of this figure, see www.iste.co.uk/sigrist/simulation1.zip

COMMENT ON FIGURE 4.16.– Observations of brain functioning suggest that the two hemispheres of the brain are jointly solicited for many tasks: “Our analyses suggest that an individual brain is not ‘left-brained’ or ‘right-brained’ as a global property. […] The lateralization of brain connections appears to be a local rather than global property of the brain networks, and our data are not consistent with a whole-brain phenotype of greater left-brained or greater right-brained network strength across individuals” [NIE 13]. Visualization techniques allow new research on the brain to unfold. For instance, Mina Teicher, an Israeli mathematician and neuroscientist, is investigating how some scientists use their mental faculties in the practice of mathematics. In 2017, she undertook research in an attempt to gain a better understanding of the reflective processes they use to solve abstract problems. “Do algebra and geometry require the same brain areas? What are the mechanisms at work in a brain mobilized by the practice of mathematics? We seek to answer these questions in order to better understand the ways of learning and reasoning in mathematics. To this end, we use MEG to visualize brain activity as a whole and its evolution over time on very short timescales…” Many such studies are developing: they will lead to a better understanding of the dynamics of mental processes – and may change the way we understand human learning. They are made possible by visualization techniques whose spatial and temporal resolution is fine enough to produce results that can be used by neuroscientists.

4.4.2. Digital neural networks

How do digital neural networks learn from the data they analyze? Let us use the works of German photographers Hilla Becher (1935–2015) and Bernd Becher (1931–2007) as an illustration of our remarks. These artists spent a large part of their shooting time documenting urban and industrial landscapes (Figure 4.17).

image

Figure 4.17. Poster of an exhibition dedicated to Hilla and Bernd Becher at the Centre Pompidou from October 20, 2004 to January 3, 2005 (Source: © Centre Pompidou, Paris)

Neutral light conditions offered by an overcast sky; an immutable, frontal and centered framing; a camera equipped with a lens that reduces distortion. The Bechers thus created typologies of constructions that highlight both their common points and their differences – a systematic approach whose esthetics gradually came to be appreciated. Gas tanks, water towers, grain silos, mine shafts, factories, houses: by looking at the attributes of these constructions, we recognize them by their similarities and we are able to identify them in new environments.

4.4.2.1. Learning by communicating

Neural networks have the ability to learn from data – as if, after analyzing all of Hilla and Bernd Becher’s photographs, they could identify a mine shaft in a new image with a very high success rate, and could also tell it apart from a gas tank! Ian Goodfellow, one of the inventors of modern neural network learning techniques, explains:

The academic results that boosted interest in neural networks were obtained, among others, by Geoffrey Hinton, a North American researcher, in 2006. Their first concrete applications date back to 2012. A neural network is a calculation function defined by millions of parameters determined in a learning phase. Based on the analysis of a large number of cases, the latter consists in making the gap between the analyzed data and the mathematical representation made of it as small as possible.

The recent success of neural networks and artificial intelligence programs sometimes makes us forget that their origin dates back to the 1940s. It lies in the works of the American scientists Warren Sturgis McCulloch (1898–1969) and Walter Pitts (1923–1969). Researchers in neurology and cognitive psychology respectively, they proposed in 1943 a theoretical formulation of neural activity [CUL 43], which found applications in many fields: psychology, philosophy, neuroscience, computer science, cybernetics and artificial intelligence. Marc Duranton, IT expert at the CEA Research and Technology Department, comments:

In particular, the work of McCulloch and Pitts establishes that any computable mathematical function can be approximated by a neural network of finite size. This means that the latter are, in theory, ‘universal approximators’: they are capable of performing any mathematical operation, to a given precision, by means of a determined number of elementary functions.

The elementary unit of the network is the formal neuron. Modeled on the organization of the human brain, in which a neuron is connected to its neighbors, it receives information from them, combines it according to the strength of the synaptic connections, and triggers an action potential if the received signal exceeds an activation threshold. This dynamic can be modeled by a mathematical function H(ϕ), where H is a given activation function, simple in form – such as a threshold – or more complex, and ϕ is the weighted sum of the signals received by the neuron, which is written:

ϕ = v_1 ϕ_1 + v_2 ϕ_2 + … + v_N ϕ_N

where v_n represents the importance of the neuron’s connection with its nth neighbor, from which it receives the signal ϕ_n.

It is possible to interpret the functioning of a formal neuron with a geometric analogy. In the case of a two-input neuron, the weighted sum of the two input signals represents the equation of a line in a plane, and the activation function, if taken as the sign of the result, then indicates whether a point characterized by its input values lies above or below that line. For example, by combining three such neurons, a triangle is constructed, and the network thus formed is able to isolate a point – that is, to indicate whether it is inside or outside the triangle.
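
This geometric reading translates directly into a few lines of code. In the sketch below – where the triangle, the weights and the thresholds are illustrative choices – three threshold neurons each test one side of a line, and a fourth combines their outputs:

```python
import numpy as np

def neuron(inputs, weights, threshold):
    """Formal neuron: fires (returns 1) if the weighted sum of its
    inputs exceeds its activation threshold, 0 otherwise."""
    phi = np.dot(weights, inputs)
    return 1 if phi > threshold else 0

def inside_triangle(x, y):
    """Three two-input neurons, one per edge of the triangle with
    vertices (0, 0), (1, 0) and (0, 1); each one reports on which
    side of its line the point lies. A fourth neuron combines them."""
    above_x_axis = neuron([x, y], [0, 1], 0)       # y > 0
    right_of_y_axis = neuron([x, y], [1, 0], 0)    # x > 0
    below_diagonal = neuron([x, y], [-1, -1], -1)  # x + y < 1
    return neuron([above_x_axis, right_of_y_axis, below_diagonal],
                  [1, 1, 1], 2.5)                  # all three must fire

print(inside_triangle(0.2, 0.2))  # 1: inside the triangle
print(inside_triangle(0.8, 0.8))  # 0: outside the triangle
```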

The combination of neurons thus theoretically makes it possible to locate a point in a space – in practice, one of very large dimension, the number of inputs of the system – in order to say whether an element belongs to a given set: for example, whether a photograph by Hilla and Bernd Becher is that of a gas tank or not.

The assignment of synaptic weights is inspired by biology and the mechanism of neuronal plasticity. If a neuron fires just after an upstream neuron to which it is connected, this may indicate a causal dependence between the two, and therefore justify a strong connection between them; otherwise, the connection is weak or non-existent. The synaptic weight increases when a connection is solicited: this is how networks reorganize themselves dynamically:

It is possible to program this learning mechanism within networks connecting formal neurons. The system thus obtained evolves by itself by assigning synaptic weights to the neurons that constitute it, according to the examples that contributed to this configuration. However, the process requires a lot of information… and in the 1950s, databases were not as extensive as they are today!
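
The reinforcement principle described above can be sketched as follows – a toy Hebbian update, not any particular historical algorithm; the learning rate and the activity patterns are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

n_inputs = 4
weights = np.zeros(n_inputs)   # synaptic weights, initially silent
eta = 0.1                      # learning rate

# Toy stream of binary activity: inputs 0 and 1 always fire together
# with the downstream neuron; inputs 2 and 3 fire at random.
for _ in range(200):
    post = rng.integers(0, 2)                    # downstream neuron fires?
    pre = np.array([post, post,
                    rng.integers(0, 2), rng.integers(0, 2)])
    # Hebbian rule: strengthen a connection when the upstream and
    # downstream neurons are active together.
    weights += eta * pre * post

print(weights)  # connections 0 and 1 end up about twice as strong as 2 and 3
```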

Another limitation was encountered by researchers at that time: mathematical results showed that some logical functions, such as the “exclusive or” (XOR), could not be represented by a simple neural network!
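
This classic limitation is easy to verify numerically. The sketch below – in which the search grid and the two-layer wiring are illustrative choices – finds no single threshold neuron computing XOR, while two stacked layers succeed:

```python
import itertools
import numpy as np

cases = [(0, 0), (0, 1), (1, 0), (1, 1)]
xor = [0, 1, 1, 0]

def fires(x1, x2, w1, w2, t):
    """A two-input threshold neuron."""
    return 1 if w1 * x1 + w2 * x2 > t else 0

# One neuron: scan a grid of weights and thresholds. No combination
# works, because no single line separates (0,1),(1,0) from (0,0),(1,1).
grid = np.linspace(-2, 2, 21)
found = any(
    [fires(x1, x2, w1, w2, t) for x1, x2 in cases] == xor
    for w1, w2, t in itertools.product(grid, grid, grid)
)
print("single neuron can represent XOR:", found)  # False

# Two layers: h1 = "x1 AND NOT x2", h2 = "x2 AND NOT x1", y = h1 OR h2.
def two_layer_xor(x1, x2):
    h1 = fires(x1, x2, 1, -1, 0.5)
    h2 = fires(x1, x2, -1, 1, 0.5)
    return fires(h1, h2, 1, 1, 0.5)

print([two_layer_xor(x1, x2) for x1, x2 in cases])  # [0, 1, 1, 0]
```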

4.4.2.2. Learning by watching

Artificial intelligence then entered a “first winter”. Meanwhile, some researchers continued to work on these structures based on formal neurons. In 1980, for example, Japanese researcher Kunihiko Fukushima proposed the architecture of “deep neural networks” [FUK 80]. It is based on knowledge of the organization and functioning of the visual cortex (the set of brain regions involved in visual perception), in particular the work of Canadian neurophysiologist David Hubel (1926–2013) and American psychologist Frank Rosenblatt (1928–1971). In the late 1950s and early 1960s, the former elucidated the functioning of the visual cortex by studying the process of vision in animals [HUB 59], while the latter developed the “Perceptron” model. This two-layer neural network (one layer performing a perception, the other a decision) was the first artificial system capable of learning by experience [ROS 58]:

This work opened the way to new techniques: the idea is to stack layers of neural networks and couple them with algorithms that ‘teach’ the system thus obtained, i.e. determine the synaptic weights. In the mid-1980s, the Frenchman Yann Le Cun and the American Geoffrey Hinton developed a particularly effective algorithm to operate this learning in a network consisting of three layers. Their discoveries led to a major development in artificial intelligence techniques: automatic shape recognition then made spectacular progress!

Signal processing algorithms developed in the mid-1990s, such as “support vector machines”, proved more efficient than neural networks in their ability to make predictions with a low error rate, for example in image recognition. Neural networks thus became the subject of less interest, and artificial intelligence entered a “second winter”, which lasted over a decade, until Geoffrey Hinton’s pioneering work in 2006 on “convolutional neural networks”.

Built on the model of the visual cortex, these neural networks are also thought of as mathematical functions. Organized in successive layers and parameterized using data analysis, they are able to produce an accurate answer to a question about new data. Equipped with calculation rules and a decision parameter, which makes them all the more responsive the more they have been solicited, mathematical neurons can be stacked in layers. Each layer filters the information before sending the most relevant part of it to the next layer [ALL 17]. Receiving information represented by pixels, a first layer of neurons becomes sensitive to repetitions and perceived similarities in images. For example, in the photographs of Hilla and Bernd Becher, the network would first learn globally the triangular shape of the mine shafts, then the details of their geometry, the presence of pillars and ladders – marked by vertical and horizontal traces and the contrasts they induce on the image. A neuron can be activated to detect any of these recurring features and, together with the other neurons, form a global piece of information, transmitted to the next layer.

image

Figure 4.18. Image representation of the functioning of a neural network by stacking layers (Source: Dimitri Korolev, www.123rf.com). For a color version of this figure, see www.iste.co.uk/sigrist/simulation1.zip

From layer to layer, the network filters statistically repeated information. The last layer thus records global data, forming the learning: exposed to a new image, the network determines its nature. In our example, the network configured to recognize a mine shaft from a multitude of images is thus able to identify a new one in any photograph:

The neural networks that carry out this ‘deep’ learning stack nearly ten layers. In 2017, it is shown that their error rate in some image ‘recognition tests’ is less than 2%… when that of humans is around 5%! These performances are acquired at the cost of a significant calculation effort. Consisting of some 650,000 neurons, some networks have nearly 60 million parameters and learn from a database of nearly 15 million images… Assembling such information requires highly sophisticated image processing and storage techniques. It is within the reach of major digital companies, such as GAFAs: Facebook, for example, handles a daily flow of more than 2 billion images!
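
The filtering performed by a convolutional layer can be sketched in a few lines: a small kernel slides over the image, and each output value measures how strongly the corresponding neighborhood matches the pattern encoded by the kernel. The toy image and the kernel below, which responds to vertical contrasts, are illustrative assumptions:

```python
import numpy as np

def conv2d(image, kernel):
    """Slide a small kernel over the image; each output value is the
    weighted sum of a neighborhood - the phi of a formal neuron."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

# Toy image: a bright square on a dark background.
image = np.zeros((8, 8))
image[2:6, 2:6] = 1.0

# Kernel responding to vertical edges (left/right contrast).
kernel = np.array([[1, 0, -1],
                   [1, 0, -1],
                   [1, 0, -1]])

activation = np.maximum(conv2d(image, kernel), 0)  # threshold (ReLU)
print(activation)
# Positive activations line up along one vertical edge of the square;
# the opposite edge gives negative values, removed by the threshold.
```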

4.4.2.3. Learning by playing

Neural network learning algorithms are not infallible. They suffer, for example, from “learning bias”, induced by the data they analyze. Trained on the photographs of Hilla and Bernd Becher, an algorithm can apparently “confuse” gas tanks with mine shafts, because the common element it “identifies” across the series of images is the white sky, used in both cases as a background for the photographed industrial constructions! The database contains a systematic element that introduces a disturbance into the automatic learning process:

The most recent algorithms are based on ‘reinforcement learning’. The artificial intelligence system is designed to learn, from experiments, the actions to be performed in order to optimize a quantitative reward over time. There is not necessarily a need to collect data: they are generated by the algorithm itself… This makes it possible, among other things, to eliminate many biases. An example of these systems is the AlphaZero program, which breaks all records in the game of Go and in chess, based only on the rules of these games.

Some advanced algorithms, based on these reinforcement learning techniques, can produce increasingly realistic images that can easily deceive not only the human eye but also image authentication systems. By working on counter-examples to test the robustness of neural networks, Google researchers showed that it is possible, for example, to fool a system by slightly perturbing the data it analyzes [GOO 15]. They illustrate their demonstration in several cases, including the following one. An image representing a panda is identified as such by a neural network, with a high level of confidence in the calculation. The digital coding of this image is then perturbed by a given signal, a carefully designed mathematical function intended to fool the program. Presented with the modified image – which remains identical to our eyes – the program claims with even greater confidence to have identified a gibbon (Figure 4.19)!

image

Figure 4.19. The trompe l’oeil technique is not only for humans! A form of digital simulation can lure a neural network… and help improve its robustness (Example adapted from [GOO 15] and illustrated with photographs of a panda [Source: Volodymyr Goinyk, www.123rf.com] and a gibbon [Source: Komkrit Tonusin, www.123rf.com]). For a color version of this figure, see www.iste.co.uk/sigrist/simulation1.zip
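
The perturbation used in [GOO 15], the “fast gradient sign method”, has a remarkably simple form: the input is shifted by a small step ε in the direction that most increases the classifier’s error, x′ = x + ε · sign(∇x L). The sketch below reproduces the mechanism on a toy logistic classifier – the model, the data and ε are illustrative assumptions, not the original experiment:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy classifier: logistic regression on 100 "pixels", with weights w
# assumed already trained; x is an input it classifies correctly.
w = rng.normal(0, 1, 100)
x = 0.05 * w + rng.normal(0, 0.1, 100)   # a clean, well-classified input

def confidence(x):
    """Probability assigned by the classifier to the true class."""
    return 1 / (1 + np.exp(-np.dot(w, x)))

# The gradient of the loss with respect to x is proportional to -w;
# stepping by epsilon against its sign implements the attack
# x' = x + epsilon * sign(grad_x L).
epsilon = 0.1
x_adv = x - epsilon * np.sign(w)

print(f"confidence on clean input:     {confidence(x):.3f}")    # high
print(f"confidence after perturbation: {confidence(x_adv):.3f}")  # low
print(f"largest pixel change: {np.max(np.abs(x_adv - x)):.2f}")   # tiny
```

Although no pixel changes by more than 0.1, the classifier’s confidence collapses – the same mechanism, applied to networks with millions of parameters, produces the panda/gibbon illusion of Figure 4.19.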

Researchers use this type of approach to produce more robust AI algorithms, for example by pitting two networks against each other. The first network, called the “generator”, produces a sample, for example an image, while the second, called the “discriminator”, tries to detect whether this sample is real or the result of a calculation by the opposing network. This mechanism allows the two networks to learn from each other and is a type of so-called “unsupervised” learning, in which the algorithm is programmed to perform its own configuration process.
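
The alternating game between the two networks can be sketched on a one-dimensional toy problem: a generator tries to turn random noise into samples resembling a target distribution, while a discriminator learns to tell the two apart. Everything below – the linear generator, the small discriminator, the finite-difference gradients and the learning rate – is a deliberately simplified illustration, not a production implementation:

```python
import numpy as np

rng = np.random.default_rng(3)
sigmoid = lambda t: 1 / (1 + np.exp(-t))

def discriminate(x, p):
    """Discriminator: probability that sample x is real."""
    return sigmoid(p[0] * x + p[1] * x**2 + p[2])

def generate(z, p):
    """Generator: turns standard Gaussian noise into candidate samples."""
    return p[0] * z + p[1]

def grad(f, p, eps=1e-4):
    """Finite-difference gradient, to keep the sketch free of calculus."""
    g = np.zeros_like(p)
    for i in range(p.size):
        step = np.zeros_like(p); step[i] = eps
        g[i] = (f(p + step) - f(p - step)) / (2 * eps)
    return g

p_d = np.array([0.0, 0.0, 0.0])   # discriminator parameters
p_g = np.array([1.0, 0.0])        # generator parameters
lr = 0.01

for _ in range(5000):
    real = rng.normal(3.0, 0.5, 64)       # target distribution to imitate
    z = rng.normal(0.0, 1.0, 64)
    fake = generate(z, p_g)
    # Discriminator step: score real samples 1, generated samples 0.
    loss_d = lambda p: (-np.mean(np.log(discriminate(real, p) + 1e-9))
                        - np.mean(np.log(1 - discriminate(fake, p) + 1e-9)))
    p_d -= lr * grad(loss_d, p_d)
    # Generator step: fool the current discriminator into answering "real".
    loss_g = lambda p: -np.mean(
        np.log(discriminate(generate(z, p), p_d) + 1e-9))
    p_g -= lr * grad(loss_g, p_g)

samples = generate(rng.normal(0, 1, 10000), p_g)
print(f"generated: mean {samples.mean():.2f}, std {samples.std():.2f}")
# The generated distribution drifts toward the target's mean 3.0 and
# std 0.5 - toy dynamics, so expect oscillations from run to run.
```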

4.4.2.4. Learning by separating scales

Computer scientists observe the effectiveness of neural networks and their applications; we have examples of this in our daily lives – one of the new features introduced in 2017 on the iPhone X was, for example, the device’s recognition of its owner’s face [LEL 17]. Mathematicians cannot yet fully explain this effectiveness. Bruno Bachimont, a researcher in documentary computing and digital philosophy, explains that the fuzziness surrounding the functioning of certain algorithms, such as those of neural network learning, creates a kind of illusion: computer scientists and mathematicians do not yet have enough references to interpret some of the decisions taken in the deep layers of networks [BAC 10a]. This is one of the challenges facing researchers and engineers in mastering algorithmic learning techniques.

With some neural network learning techniques, computer scientists are experiencing what other humans have experienced in the history of mathematics [STE 12]: an intellectual amazement before an illumination of understanding that can take time! As with the universe of complex numbers, once called impossible numbers because they braved a prohibition of algebraic calculation – that the square of a number could be negative was unthinkable until their discovery (or invention?). The construction of complex numbers, the discovery of their properties and their use in different fields of mathematics were progressive (complex numbers proved invaluable in formalizing certain physical theories, such as electromagnetism and quantum mechanics). Such time may be lacking when we have to live through the rapid evolution of new technologies… Far from being a danger, this lack of understanding is an opportunity: to learn – to advance knowledge, for the benefit of everyone, and to rationally explain an observed behavior within a theoretical framework consistent with experience.

Some mathematicians interpret the algorithmic processes of neural networks by explaining that they are able to separate scales – the large ones containing the main information, the small ones the secondary information (Figure 4.20) – and to prioritize them in such a way as to reproduce the information learned and then recognize it in new situations. The regularity in question is that of the characteristics common to the processed data – the shape of the mine shafts photographed by the Bechers, for example; the large amount of data comes from the diversity of situations encountered. Neural networks seem to reconcile these two aspects (regularity and variety). Their current effectiveness is due to their versatility: a generic architecture makes it possible to solve problems of very different natures.

image

Figure 4.20. Example of image processing by convolutional neural network (Source: Zachi Evenor, Günther Noack, www.commons.wikimedia.org). For a color version of this figure, see www.iste.co.uk/sigrist/simulation1.zip
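
The simplest illustration of this scale separation is a Haar-like average/detail split, iterated from level to level like the filters of Figure 4.20; the signal below is an arbitrary example:

```python
import numpy as np

def split_scales(signal):
    """One level of a Haar-like transform: the pairwise average keeps
    the large-scale trend, the pairwise difference keeps the details."""
    pairs = signal.reshape(-1, 2)
    coarse = pairs.mean(axis=1)               # large scale: main information
    detail = (pairs[:, 0] - pairs[:, 1]) / 2  # small scale: secondary
    return coarse, detail

# Arbitrary example: a slow trend plus small rapid oscillations.
t = np.linspace(0, 1, 16)
signal = np.sin(2 * np.pi * t) + 0.05 * np.cos(32 * np.pi * t)

coarse, detail = split_scales(signal)
print("energy at large scale:", np.sum(coarse**2).round(3))
print("energy at small scale:", np.sum(detail**2).round(3))
# Most of the energy sits at the large scale; iterating the split on
# 'coarse' builds the hierarchy of scales mentioned above.
```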

While weighing the expectations raised by the potential of this technique, some researchers nevertheless point out that the intelligence of neural networks remains precarious today! A two-year-old child recognizes a dog after their first encounter with the animal; neural networks must be exposed to millions of images before they can recognize one by themselves in a new image! However, they learn faster and faster… at a speed that humans cannot match! One of the limitations mentioned by many AI researchers is that machines still lack an equivalent of common sense – specific to humans? – which learning from millions of examples cannot yet teach them [BRI 15].

The techniques of the 19th and early 20th Centuries helped to assist human beings in their work, pushing back certain limits of their bodies; those of the late 20th and 21st Centuries are intended to help them understand, potentially pushing back certain limits of their minds [BAC 10a]. Just as Pierre-Simon de Laplace believed that, once accurately modeled, no phenomenon in the physical world could escape analysis, the AI pioneers speculated in the 1950s that if the abilities of the mind could be accurately described, they would lend themselves to computational reproduction [BRI 15, SAD 13]. Will digital techniques make it possible to simulate human intelligence? Is human intelligence so reducible that it can be modeled, imitated or even totally replaced by a machine (Figure 4.21)?

image

Figure 4.21. “Will artificial intelligence supersede human intelligence?” “Calm down, let us not fall into science fiction”. (Source: © FiX, www.fix-dessinateur.com). For a color version of this figure, see www.iste.co.uk/sigrist/simulation1.zip

4.5. Are humans overwhelmed by machines?

In 1941, a strange game of chess was played on a liner bound for Buenos Aires, pitting an international champion against an unknown passenger capable of competing with the best players. This is the focus of The Royal Game [SWE 43, SAL 17], the famous novella by Austrian writer Stefan Zweig (1881–1942). Exploring the powers of thought, will and the survival instinct, it echoes another confrontation between pieces of black and white wood, this time a historical one. In the 1972 World Chess Championship, the “match of the century”, American Bobby Fischer (1943–2008) faced the defending champion, Russian Boris Spassky. The meeting, to which the leaders of the United States and the Soviet Union gave a political dimension, was held during the summer in Reykjavik (Iceland) and won by the American. Bobby Fischer, a tortured genius whose complex personality is depicted in the film Pawn Sacrifice (2014), won a match that had begun in incredible conditions…

4.5.1. All-round machine victories

Nearly 20 years later, in 1997, another game pitted the United States against Russia: the DeepBlue system defeated Russian world champion Garry Kasparov at chess, the victory of IBM’s program occurring a few years after the dissolution of the USSR. In 2011, the Watson system outperformed the participants of Jeopardy!. In this game show, the presenter reads an answer and participants are asked to guess the related question – a mode of operation close to inductive reasoning (from data, find a general principle), which is the strength of algorithmic learning. Like the other participants, Watson proved capable of understanding the questions formulated in natural language, answering them in a few seconds and proposing a theme for new questions, according to the rules of the game… It was also wrong several times! In 2016, Korean Lee Sedol, one of the world’s best Go players, lost 4-1 to AlphaGo. The program is based on learning and graph exploration techniques. It is designed to perfect its learning by playing many games against humans and against itself. While the human defeat by the machine was experienced as a humiliation [ALE 17, TIS 18], Sedol still won the fourth game of the match – a victory that saved the honor of humans:

I heard people shouting in joy when it was clear that AlphaGo had lost the game. I think it is clear why people felt helplessness and fear. It seemed like we humans are so weak and fragile. And this victory meant we could still hold our own. As time goes on, it will probably be very difficult to beat AI. But winning this one time, it felt like it was enough. One time was enough… (remarks reported in [KOH 17])

In 2017, the Libratus system defeated four card players. The program, developed at Carnegie Mellon University in Pittsburgh, was designed to play poker. It beat some of the best players in this game in a competition organized in a casino in the city. Unlike Go or chess, poker involves a certain amount of luck: this randomness seems to be mastered by the calculation program. One of the human opponents had the impression of facing a program capable of hiding its game, of bluffing (simulating, in order to deceive!) by producing unexpected moves. The role of the developers of these programs is also crucial: the researchers and engineers who design them improve them continuously.

AI questions the place of humans in a world where technology evolves at a rate beyond our usual perceptions and immediate understanding. We have computers and algorithms with remarkable predictive, even cognitive, capabilities, whose effectiveness is demonstrated by feats we had believed, until now, to be our specificity. We see computers supplanting humans in many material tasks and learning with some form of initiative. We discover robotic systems becoming capable of reproducing certain gestures that only humans have been able to perform to date. We develop software designed to recognize facial expressions, and models of human behavior based on an increasing number and diversity of data. Will we see the emergence of computers or robots that can understand humans, adapt their behavior accordingly… or even consciously modify it?

4.5.2. A war of intelligence?

More discreet, smaller, quieter, machines are designed to be forgotten – and it is perhaps this invisibility that may pose a problem for humans: “Tomorrow, reality will always be coupled with a technological base of intelligence and invisible knowledge […] Accepting that things are partly governed by forces that exceed us, this is the bet that is proposed to us” [RAZ 18]. The prowess of the algorithms and the computing machines that operate them never ceases to fascinate us – or worry us! For some, it is becoming relevant, even urgent, to think of a measure of the adaptability of human intelligence to digital intelligence [ALE 17]. However, in the context of questions about the performance of algorithms, Antoine Bordes, a French AI researcher at Facebook, explains: “The machine can create a masterpiece, but it is unable to explain why it did it…” (remarks collected by [ETC 18]).

Humans, probably more specifically those in industrialized and rich countries, seem to think of intelligence as their specificity. However, the optimal coordination of work performed by entities carrying out multiple tasks, without knowing all the data of the problem they are solving, is also a reality in the animal world [JEN 18, SEE 10]. The Dutch biologist and animal behavior specialist Frans de Waal [WAA 17] and the German forester Peter Wohlleben [WOH 17] also invite us to rethink our view of the intelligences of the living world – those of animals and plants – understood as their ability to adapt to their environment. If humans are overwhelmed by machines in the performance of certain tasks, they can also be outdone by animals in certain faculties they believed to be exclusively theirs! Frans de Waal reports the example of a chimpanzee passing memory tests with consistent success, superior to that of humans: “This great ape has already disproved the principle that all intelligence tests, without exception, must confirm human superiority…” [WAA 17]. A counter-example challenging certain conceptions within a scientific community that is sometimes reluctant to accept it. The scientific approach requires taking new data into account and updating ways of thinking. This is not easy for anyone – including minds used to this process: “We are used to analyzing and exploring the world, but we panic when the data threaten not to validate our expectations…” [WAA 17].

In the late 1980s, New Zealand political science researcher James Flynn analyzed IQ data and found an increase in IQ in many industrialized countries [FLY 87]. This trend has been attributed to improved nutrition and longer, more widespread schooling. It has also been explained by the generalization of information and communication technologies, which can increase general knowledge, abstract reasoning and intellectual agility. More recent work suggests that this trend is reversing. British anthropologist Edward Dutton claims to have shown a significant decrease in IQ during the first decade of the 2000s, and observes this degradation of certain intellectual skills in many Western countries [DUT 16]. These conclusions, widely reported in the press, make humans fear in particular that their intelligence will collapse inexorably [ENG 18], with some scientists suggesting environmental causes for this finding [GIL 17]. For other researchers, these results were published prematurely – they rest on weak and scattered data, and on a methodology leading to erroneous conclusions [RAM 18b]. According to the latter, observations over longer periods of time and on larger samples are necessary in order to consolidate or invalidate this finding. It is more likely that humans will see their intellectual capacities reach an overall limit – a worrying prospect, because computer programs know no such limit… Some imagine that with AI the use of technical solutions to increase the physical and mental capacities of humans will become the norm [ALE 17]; others, that we could entrust part of our fate to AI. Indeed, if algorithms become better than humans at prediction, it may be wiser to use them to help us make the best decisions for the future of humanity, say some AI researchers, advocating the development of “advanced artificial intelligence”. An artificial intelligence so advanced that it becomes out of control?

4.5.3. Science fiction

On a planet that was then that of the apes, primates discover a monolith, a black stone of rectangular shape, radiating an unknown energy: that of knowledge. One of the primates grabs a bone, discovers that it can become a tool – or a weapon – and throws it up in the air. Its rotation in the sky is accompanied by The Blue Danube waltz; in the next shot, the bone becomes an orbital station spinning in the dark vastness of space. The art of elliptical storytelling, telescoping two images. In this scene of his science fiction masterpiece, 2001: A Space Odyssey, American filmmaker Stanley Kubrick (1928–1999) exploits a power specific to cinema: that of freeing itself from the constraints of time [KUB 68]. Produced in the late 1960s, the film has become one of the major works in the history of cinema. One of its characters is the HAL-9000 supercomputer, equipped with artificial intelligence, whose red eye watches over the spacecraft of the Discovery One mission. Intelligence… and artificial consciousness. With the consciousness of being alive come the illusion of the power of the ego and the anguish of death. HAL develops them and becomes paranoid because of the humans who force him to lie and hide information, actions that are not part of his original mechanisms. He causes breakdowns, lies, spies on secret conversations and endangers the safety of the crew… who finally manage to disconnect his circuits. In a last digital breath, HAL confides to a survivor of his attacks an ultimate emotion: “I am afraid, Dave…”

Science fiction has magnificently taken up the theme of AI, whose future challenges and stimulates the researchers working to develop it. Simplifying their views somewhat, it should be recalled that they formulate two hypotheses, two avenues of research on the subject:

– “weak” or “specific” AI: constrained by the environment of the programs that make it up, it is able to perform calculations with an efficiency that is inaccessible to humans, but in a very limited field. Delivered through interfaces and tools usable by human beings, these capabilities assist them (and can replace them more or less partially) in solving different problems. These techniques are a reality. We find them in simple form in various applications available on our smartphones, including writing aids or automatic guidance systems – while some brands make AI a selling point for their new models! They contribute to the development of connected and automatic vehicles and to the progress of robotics. They are at work in programs that beat people at strategy games, and in image analysis software, used in medicine for example. In some respects, the power of numerical simulation codes and algorithms is part of this set;
– “strong” or “generalist” AI: capable of reproducing all human learning processes – and thus of developing conceptual intelligence, emotions, sensitivity, even consciousness? Current artificial intelligence algorithms have the ability to solve specific problems very effectively (playing a game, recognizing shapes, talking to humans, driving a vehicle, detecting an emotion, adapting to an environment, etc.), but not more generic problems. Developing a strong AI would require the convergence of many integrated digital capabilities on a single system. This is a very ambitious task that some researchers, entrepreneurs and engineers are working on. To this day, it remains out of their reach! Some argue that many of the obstacles could be overcome or removed – but it is still difficult for humans to grasp the complexity and completeness of such a system. For decades, strong AI has stimulated the cinematographic imagination, embodying itself in various ways and taking on different qualities. The HAL-9000 computer mentioned previously, Blade Runner’s “Replicants” [SCO 84], Her [JON 13], A.I. [SPI 01], the Matrix [WAC 99], Robocop [VER 87] or Terminator [CAM 84] are some of the representations of a strong AI5. Its advent would undoubtedly be for humanity a milestone as significant as the encounters caused by the European conquests of new worlds [BOO 85, JOF 86, MAL 05], or those, imaginary, of humanity with an otherness from space [CAM 89, CAM 09, KUB 68, NYB 51, SCO 89, SPI 77, SPI 82]. With the strong AI hypothesis, some intellectuals, researchers and entrepreneurs believe that a technical changeover is possible: humans would be irremediably surpassed by machines driven by algorithms that have become autonomous – able to make decisions on their own and to reproduce themselves. An artificial intelligence with this quality would carry as many dangers (becoming uncontrollable, even turning against humans and destroying them?) as opportunities (pushing back the current limits of the human being?). This scenario is a hypothesis contained in the Singularity theory.

4.5.4. Science without fiction

We can expect technical innovations to lead to high-performance machines that constantly surprise us with their capacity for action, decision-making or reflection (depending on the meaning we want to give these words). Brighton and Selina [BRI 15] point out, however, that some AI researchers have at times made very early and bold predictions about the evolution of their technology. Many of these predictions have still not been realized and remain speculative to this day… Some are occasionally misrepresented as scientific results – and it is still difficult to say whether, and how, they will be realized. The risks and opportunities associated with the emergence of a new technique go hand in hand – and the risks are real. Technological progress has always been accompanied by justified uncertainties and fears, as well as more speculative ones. Accidents, underestimated consequences and uses beyond the initial purpose of an invention are a reality. Just think of our knowledge of the atom: the 20th Century has shown that all three are possible (atomic weapons, major industrial accidents, hazardous waste).

Understanding and knowing a technique helps in assessing its risks in the most informed way possible. Thus, in 2015, prominent scientists and entrepreneurs published an open letter to warn of potential risks related to one branch of AI, referring in particular to applications of computer science, automation and robotics to the development of new weapons [FUT 15]. This call helped to raise awareness among the general public of certain technical and ethical issues that are often confined to a small community. Some of the risks, which it is imperative to imagine and evaluate, are often presented in an exclusively anxiety-provoking way, and the comments of media personalities are sometimes relayed without the hindsight required to understand a technique [NAU 19]:

By far the greatest danger of artificial intelligence is that people conclude too early that they understand it. The field of AI has a reputation for making huge promises and then failing to deliver on them. Most observers conclude that AI is hard; as indeed it is. But […] the critical inference is not that AI is hard, but that, for some reason, it is very easy for people to think they know far more about artificial intelligence than they actually do… [YUD 08]

Many AI researchers and engineers, as well as some thinkers, help to explain the inner workings of this technique and its current limitations6 [HAR 18a, TEG 18, POR 18]. It should be noted that, to date, most risks are associated with uses decided by humans [CON 18a, CON 18b] and not by machines that have consciously become autonomous.

Singularity is a mathematical term that refers, for example, to a break in a regular curve – such as a reversal or sudden change of course. The Dutch draftsman Maurits Cornelis Escher (1898–1972) used geometric singularities to create optical illusions (Figure 4.22). The French researcher Jean-Gabriel Ganascia, specialist in artificial intelligence and president of the CNRS ethics committee in France, clearly explains that the probability of seeing a strong AI emerge in the sense of a Singularity is very small [GAN 17]. To date, it remains a hypothesis of reflection and imagination – a useful hypothesis, for example, to contribute to debates on the use of techniques.

image

Figure 4.22. Waterfall, Maurits Cornelis Escher, 1961 (Source: © 2019 The M.C. Escher Company/Netherlands. All rights reserved)

As he explains, the machines remain subject to the conditions of their programming:

Even if they are gifted with learning and the ability to develop their programs, machines do not acquire autonomy because they remain subject to the categories and purposes imposed by those who have annotated the examples used during the learning phase […] Machines do not by themselves modify the language in which the observations that feed their learning mechanism and the knowledge they build are expressed [GAN 17].

Ganascia offers a documented critique of the Singularity hypothesis, providing a measured perspective on the fears and speculations associated with it. According to him, it is more a matter of storytelling than of scientific reality, maintained for various reasons. In particular, he puts forward an economic argument to explain the popularity of the Singularity hypothesis. The financial resources necessary for the development of AI are considerable, and some entrepreneurs engaged in fierce economic competition need intense communication [NAU 19] in order to stimulate investment – which may dry up if some of the announced results do not quickly become reality: it is a matter of avoiding a “third winter” of artificial intelligence… Ganascia also sees a political risk for citizens, in a context where AI techniques can strongly influence human life. With their economic power and the databases on which some AI programs feed, large digital companies may be in a position to impose development choices for AI techniques on their own. This would deprive organized societies of legitimate political and ethical debates and orientations on the use of AI – to which it is desirable, if not essential, that scientists independent of political and economic powers contribute [SAD 18].

In the film Her, the character played by the American actor Joaquin Phoenix falls in love with an artificial intelligence with a sensual voice and a vocabulary that stimulates the imagination [JON 13]. The use of techniques is also the result of our unconscious, of our projections, sometimes unrelated to their realities. Artificial intelligence is, to date, an emerging technique. As it matures, its development raises many scientific, technical, ethical and political questions [CON 17, HAR 18a, PAL 17]. One of the most important issues for individuals is undoubtedly the relationship between artificial and human intelligence.

4.5.5. Complementarity of intelligence

The French philosopher and writer Éric Sadin believes that conceptual intelligence is not accessible to machines in the same way as it is to humans:

If human intelligence is virtually infinite in some of its capacities, artificial intelligence is virtually unlimited in the indefinitely open horizon of its evolution (…) The faculty of reflexive abstraction is a character of human intelligence, artificial intelligence does not share this disposition… [SAD 13]

Human and digital intelligences can be conceptualized as different and, above all, complementary. After losing the first three games of Go against an AI system, Lee Sedol won the fourth by playing a masterstroke, against which the program could not find a convincing parry. Adapting to his silicon opponent, whom he found in some respects “creative” and “surprising”, the human champion found his style again. Playing against an AI program seems to have strengthened his determination; the amazement of defeat and his memorable games against the machine changed his view of the game of Go, his way of playing, and himself [KOH 17]. Mathematicians, physicists and engineers who have long used the power of computers may have an older experience of the complementarity between their intelligence and that of the machine. This is illustrated by an example from mathematics. Some of its disciplines, such as combinatorics or arithmetic, use computational power to implement algorithms that help to establish proofs of conjectures. Proving mathematical results remains, to this day, the exclusive preserve of human intelligence, able to formalize and handle abstract concepts. Digital intelligence makes an indispensable contribution through its ability to handle a large number of cases in a short period of time… an area in which human intelligence is surpassed.

One of the first collaborations between human and digital intelligence in this field dates back to the late 1970s. Four colors are sufficient to color a map (Figure 4.23): this conjecture was formulated in 1852, when an English cartographer noticed that he needed only four shades to color the counties of England without giving the same color to two counties sharing a border.

image

Figure 4.23. Map of Europe colored red, yellow, green and blue – ocean and seas combined are not represented by a color (Source: www.123rf.com). For a color version of this figure, see www.iste.co.uk/sigrist/simulation1.zip

The theorem was established more than a hundred years later. In 1976, two mathematicians working in the United States, Kenneth Appel and Wolfgang Haken, used computing power to support their proof. They first established theoretical results showing that it was necessary to study a large number of particular configurations, nearly 1,500, in order to demonstrate the conjecture. Performing the calculations required by these studies would be tedious for humans! It involved years of systematic work, carried out by a machine whose efficiency in producing the required calculations is unparalleled – the computer programmed for this purpose neither shied away from the task nor weakened in its determination to accomplish it!
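Appel and Haken’s program checked so-called reducible configurations rather than coloring maps directly, but the flavor of delegating systematic case analysis to a machine can be conveyed by a much simpler program. Here is a minimal sketch in Python of a backtracking search that 4-colors a small border graph; the adjacency data are illustrative and deliberately incomplete:

def four_color(neighbors, colors=("red", "yellow", "green", "blue")):
    # Try to assign one of four colors to every region, backtracking
    # whenever a region has no color compatible with its neighbors.
    regions = list(neighbors)
    assignment = {}

    def backtrack(i):
        if i == len(regions):
            return True
        region = regions[i]
        for c in colors:
            # A color is allowed if no already-colored neighbor uses it.
            if all(assignment.get(n) != c for n in neighbors[region]):
                assignment[region] = c
                if backtrack(i + 1):
                    return True
                del assignment[region]
        return False

    return assignment if backtrack(0) else None

# Hypothetical (and simplified) borders, listed symmetrically.
borders = {
    "France": ["Spain", "Italy", "Germany", "Belgium"],
    "Spain": ["France"],
    "Italy": ["France"],
    "Germany": ["France", "Belgium"],
    "Belgium": ["France", "Germany"],
}
print(four_color(borders))  # e.g. France: red, Spain: yellow, ...

The machine explores cases tirelessly; deciding that four colors always suffice, for every conceivable map, remained the mathematicians’ theoretical work.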

Another example where the complementarity of human and digital intelligence is at work is linguistic analysis. While the ability to learn and use language – literary, scientific, artistic, etc. – may seem specific to human intelligence, it is, in a way, being inculcated in machines! With signal processing techniques, logic and machine learning tools, researchers and engineers are developing algorithms to give meaning to the information contained, for example, in texts written in natural language. Bertrand Delezoïde, multimedia and knowledge engineering expert at CEA List, explains the principle:

Finding information in sources of information as varied as written documents, sound or video recordings and image files, uses ‘unstructured data processing techniques’. Identifying relevant information in these contents in order to exploit it is a strategic issue that interests technical intelligence or economic intelligence, for example.

The analysis of a written text is based on syntactic and semantic tools. The former identify the global structure of a sentence; they are based on theories derived in particular from the work of the American linguist Noam Chomsky. The latter deal with the diversity of the words used, their relationships and their meaning. The semantic framework is built with experts in a particular field, holders of its knowledge and vocabulary:

Inspection reports on installations or machines contain a wealth of valuable information for many industrial sectors, in particular construction (shipbuilding, automotive, aeronautics or civil engineering, etc.). The automatic analysis of this documentation allows them to understand practices, anticipate maintenance operations and initiate process improvement actions.

For the design of new installations, data from a diversified corpus of documents offers engineers the opportunity to identify good practices. The practice of numerical simulation (Chapter 2), for example, produces many written documents. These calculation notes thus contain a great deal of relevant information: one of the current areas of research is to use this data to improve the know-how accumulated by engineers.
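To make the syntactic layer described above tangible, here is a minimal sketch using the open-source library spaCy – our choice for the illustration, not necessarily the tool used at CEA List – assuming spaCy and its small English model en_core_web_sm are installed; the sentence is invented:

import spacy

# Load a small, general-purpose English pipeline.
nlp = spacy.load("en_core_web_sm")
doc = nlp("The inspection revealed a crack on the pump casing.")

# Syntactic analysis: each word receives a part-of-speech tag
# and a grammatical role relative to its head word.
for token in doc:
    print(f"{token.text:12} {token.pos_:6} {token.dep_:10} -> {token.head.text}")

# A first, shallow semantic layer: named entities, if any.
print([(ent.text, ent.label_) for ent in doc.ents])

The semantic framework proper – linking “crack” and “casing” to a domain vocabulary of defects and components – is precisely what is built with the field experts mentioned above.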

Digital tools are evolving, moving from a binary logic, in which analysis answers simple questions, to a mode of dialogue with humans that requires the latter to interpret the results proposed by AI. One of the fundamental issues for humans is thus behavioral: it is a question of learning to use these tools, for example conversational interfaces or chatbots (Figure 4.24), of adapting to them and of integrating them into a practice:

Learning to formulate a question for an AI and to interpret its answer is essential for users. The AI may be wrong or may not answer the question exactly – with Google, for example, the first answers to a query are not necessarily the most relevant…

Adopting a critical approach to the digital tool and learning to collaborate with it remains one of the most significant challenges in the current development of AI.

image

Figure 4.24. A chatbot offers an interactive dialog window: it is the visible tip of an algorithmic iceberg with which the user must learn to communicate (image representation of a conversational agent [Source: www.123rf.com]). For a color version of this figure, see www.iste.co.uk/sigrist/simulation1.zip

Let us return to scientific computation carried out by computers. In the 21st Century, will it do without engineers and be entirely produced by machines? The community of simulation practitioners considers this today only as a prospective or theoretical question. To date, artificial intelligence systems do not interpret simulation results for engineers – and it is still the engineers’ knowledge of models (their validity assumptions and limitations) that allows them to produce meaningful physical analyses in order to understand, design and optimize! A calculation performed by numerical simulation always produces a result, but the software that performs it does not vouch for its validity. Let us illustrate this point with an example. Tower, chimney or cable in the wind: the shape of the flow downstream of an obstacle determines its aerodynamics and its resistance to weather hazards. Around a cylinder, the air particles separate into vortices driven by the flow. An aerodynamic calculation code used with two different numerical methods can give different results for this simple configuration (Figure 4.25). While this difference is of no consequence in an academic case, it obviously is in an industrial application – for example, in the choice of the body shape of a mass-produced car.

image

Figure 4.25. Two flow calculations on the same configuration, with two different methods [MEL 06]. For a color version of this figure, see www.iste.co.uk/sigrist/simulation1.zip

COMMENT ON FIGURE 4.25.– The calculation shown on the left is not in agreement with the results of experiments documented in the scientific literature. The one on the right has a flow shape that is in line with reality. Both calculations are performed with the same tool, but different methods. In both cases, the simulation provides a numerical result. It is not in a position to certify its validity: only the engineer can!
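The same phenomenon can be felt on a problem far simpler than a flow calculation. In the toy sketch below (in Python, unrelated to the aerodynamic code of Figure 4.25), the same oscillation equation is integrated with two classical methods: the explicit Euler scheme makes the amplitude grow spuriously, while a fourth-order Runge–Kutta scheme preserves it. Both runs print a result; only the user can say which one is physical:

import numpy as np

def f(y):
    # Undamped oscillator y'' = -y, written as a first-order system.
    return np.array([y[1], -y[0]])

def euler_step(y, dt):
    return y + dt * f(y)

def rk4_step(y, dt):
    k1 = f(y)
    k2 = f(y + 0.5 * dt * k1)
    k3 = f(y + 0.5 * dt * k2)
    k4 = f(y + dt * k3)
    return y + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

dt, steps = 0.1, 600
for step_fn, name in [(euler_step, "explicit Euler"), (rk4_step, "Runge-Kutta 4")]:
    y = np.array([1.0, 0.0])
    for _ in range(steps):
        y = step_fn(y, dt)
    # The exact amplitude stays equal to 1; Euler inflates it spuriously.
    print(f"{name}: amplitude after t = 60 is {np.hypot(y[0], y[1]):.2f}")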

The scientific computing community is keenly interested in ever more efficient techniques and is integrating innovations from machine learning and data mining in order to develop new computational methods. A path has been traced, and only the future will tell which routes computer simulation will actually take to benefit from it [BOD 17].

4.5.6. Complexity and robustness

In some tasks, digital intelligence surpasses human intelligence. Could it get out of control? The digital scientific community is currently concerned about this possibility, because of the complexity of the systems involved. Machines constantly interact with humans and with other machines. With algorithms designed by different people, each following their own rules, it can be difficult to predict the effect of all their interactions. This is above all a technical issue, with multiple consequences for the industries that wish to implement these systems in the long term7. Philippe Watteau, former director of the List Institute at the CEA, explains this with the example of automatic vehicles:

An automobile has more than a hundred ECUs on board, which will be concentrated in one or two new-generation units in the new electrical and electronic architectures of vehicles. They are dedicated to specific tasks: for example, the ABS braking system or the ESP trajectory corrector. These will be operated on the same system, which will perform many other functions, such as real-time environmental analysis, on the connected and autonomous car. The complexity of these systems is nowadays literally ‘unthinkable’! This is a technical challenge requiring the development of new electronic architectures, whose security and vulnerability (to hacking intrusions) constitute one of the challenges of current research in automation. We can no longer think of AI these days without the issue of cybersecurity…

Formal methods are one of the means available to researchers to deal with the risk of cyber-hacking:

After the certification of the physical world, the certification of the digital world is nowadays developing. Formal analysis is the mathematical proof that a program meets the expected safety and reliability requirements… It is the first line of defense against cyber-attacks to identify the digital flaw in a set of algorithms. For example, it has long been used in the control and command systems of sensitive industrial installations.
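To give a concrete idea of what such a proof looks like, here is a toy example written for the Lean proof assistant – our choice of tool for the illustration; industrial formal analysis relies on dedicated frameworks. The property, that reversing a list never changes its length, is established for every possible input, not merely tested on a few examples:

-- A toy machine-checked guarantee, in the spirit of formal analysis:
-- the statement holds for *every* list l, and the proof is verified
-- mechanically by Lean rather than exercised on test cases.
theorem reverse_preserves_length (l : List Nat) :
    l.reverse.length = l.length := by
  simp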

The robustness of neural networks – that is, their ability to give a safe result in a wide variety of situations – remains a crucial research area for artificial intelligence, particularly for security reasons. In some respects, the current robustness of AI algorithms is still a development issue. Thus Geoffrey Hinton specifies: “A real intelligence doesn’t break when you slightly change the problem…” (remarks reported by [SOM 17]).
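The fragility in question can be felt on a deliberately tiny example. In the sketch below (a hypothetical linear “classifier” with made-up weights, not a neural network), a perturbation of one tenth on each input feature is enough to flip the decision; the same mechanism, scaled up, underlies adversarial attacks on image classifiers:

import numpy as np

w = np.array([1.0, -1.0, 0.5])   # hypothetical "learned" weights
x = np.array([0.2, 0.1, 0.1])    # input scored slightly positive: w.x = 0.15

def predict(v):
    return "class A" if np.dot(w, v) > 0 else "class B"

print(predict(x))                 # class A
x_adv = x - 0.1 * np.sign(w)      # nudge each feature against the weights
print(np.abs(x_adv - x).max())    # perturbation of at most 0.1 per feature
print(predict(x_adv))             # class B: the decision flips (w.x_adv = -0.1)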

4.5.7. Imitation game

“Imitation game” is the term Alan Turing used to characterize the test bearing his name, which evaluates the ability of a machine to imitate human conversation and deceive an interlocutor who cannot see it [TUR 50]. The mathematician thus anticipated a question that has become central today: can machines reproduce human thought? Various machines have been put to the test, and the teams involved in their development claim that some have passed it; the most recent claim dates from 2014 and a British team [HUN 14]. The scientific value of this highly subjective test is the subject of controversy, which attests to questions that continue to divide the research community. Recent advances in AI systems, such as conversational robots, now feed this debate. In 2009, the founders of the company Siri stated: “within five years everyone will have a virtual assistant, to whom (we will delegate) all kinds of tasks […] and (who will be) able to solve any problem in our place…” (remarks reported by [SAD 13]). Users of this assistant on some smartphones have since been able to experience the limitations of this statement! However, in 2018, Google announced that its Duplex artificial intelligence system was capable of communicating with humans without its interlocutors realizing that they were talking to a machine – suggesting that the system successfully passes a Turing test [CHE 19]! Duplex draws its capabilities from a learning process based on data from human telephone conversations. It is able to sustain simple exchanges, such as scheduling an appointment at a hairdresser or obtaining a reservation in a restaurant – impressive feats, yet still far removed from the imagined faculties of HAL-9000, able to discuss metaphysical questions with its interlocutors [THU 15]!

The project of artificial intelligence is to imitate humans, to digitally simulate their capabilities, as Ian Goodfellow sums it up: “Artificial intelligence programs will in the future be able to achieve everything that people are capable of nowadays; the question is simply when!” The AI project offers as many opportunities as risks and, to date, it depends on humans. It is their reflection. Humans propose the models, program the calculation codes and algorithms – and decide, at design time, on certain criteria governing their choices:

[A robot may hesitate.] It is taught to make the best possible choice based, of course, on elements formulated for it – let us say, on an elementary value system that is inculcated in it […]. It is impossible to predict its choice with certainty! (remarks by the French astrophysicist Michel Cassé, reported in [CAR 17])

What representations of the world do humans teach machines? They are not only physical or mathematical; they are also ethical or philosophical. The development of automatic cars, surgical robots and autonomous weapons is at the heart of these questions. These representations of the world come from the humans who conceive them, so “the real problem of robots is not their artificial intelligence, but rather the stupidity and natural cruelty of their human masters” [HAR 18a]. Some programmers’ preferences are instilled in algorithms. A learning program has neither consciousness nor values. If we make it learn a process from data derived, for example, from speeches or writings that common decency finds discriminatory, violent or offensive, the artificial intelligence will reproduce them in its digital dialogue with humans. Using data reflecting the worldview of programmers (that of men, mostly white, living in the Western world and representing only a small fraction of humanity), we will see the consequences at work in the outputs of algorithms [ONE 16, NOB 17] – at the risk of leaving no room for difference and diversity [BUR 17].
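A caricatural sketch makes the mechanism visible. With made-up, deliberately biased “historical” decisions as training data, even the most naive learner reproduces the bias verbatim (everything below – data, groups, labels – is hypothetical):

from collections import Counter

# Invented hiring decisions, biased against group "B".
training_data = [
    ({"group": "A", "score": 7}, "hire"),
    ({"group": "A", "score": 5}, "hire"),
    ({"group": "B", "score": 7}, "reject"),
    ({"group": "B", "score": 8}, "reject"),
]

# A naive "learner": take the majority decision per group.
decisions = {}
for features, label in training_data:
    decisions.setdefault(features["group"], Counter())[label] += 1

model = {g: c.most_common(1)[0][0] for g, c in decisions.items()}
print(model)  # {'A': 'hire', 'B': 'reject'} -- the bias, learned verbatim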

This poses ethical and political problems if an algorithm is entrusted with decisions that affect our lives (delivering a judgment, making a diagnosis, recruiting, granting a loan, a scholarship or a place at university, matching people, etc.) without our having the opportunity to understand the reasons for the decision, or without our giving ourselves the collective or individual means to retain final decision-making power. Thus, Serge Abiteboul and Gilles Dowek, French researchers in computer science and automation, affirm: “It is essential to be able to have recourse to a responsible person who must be able to oppose his decision to that of the algorithm…” [ABI 17].

A technique matures gradually. Humans may lack the time necessary to master it when it is not designed to adapt to them, or when announcements are relayed in a way that is out of step with the hindsight required by scientific validation. Adaptability, however, remains an essentially human quality: that of AI is still limited to this day. According to Yann Le Cun, one of the coming scientific issues is that of unsupervised learning, which, as we have mentioned, would give machines a capacity to adapt to new situations in a way that approaches that of humans. Research in this area is only just beginning:

Until the problem of unsupervised learning is solved, we will not have truly intelligent machines. It is a fundamental, scientific and mathematical question, not a technical one. Solving this problem may take many years or decades. The truth is, we don’t know… [CUN 16]
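The term itself also covers techniques that are already well understood – what remains unsolved is learning rich representations of the world without labels. As a reminder of what “learning without labels” means in its simplest form, here is a toy sketch of k-means clustering (illustrative data, k = 2):

import numpy as np

rng = np.random.default_rng(0)
# Two unlabeled clouds of 2D points.
points = np.vstack([rng.normal(0, 0.5, (50, 2)),
                    rng.normal(3, 0.5, (50, 2))])

k = 2
centers = points[rng.choice(len(points), k, replace=False)]
for _ in range(10):
    # Assign each point to its nearest center...
    labels = np.argmin(np.linalg.norm(points[:, None] - centers, axis=2), axis=1)
    # ...then move each center to the mean of its points.
    centers = np.array([points[labels == j].mean(axis=0) for j in range(k)])

print(np.round(centers, 2))  # the two cloud centers, found without any labels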

Technical developments often precede ethical reflection and legal regulation – the concrete instruments that humans have at their disposal to organize their communities: those of the law.

4.5.8. Priority to the law!

The immediacy to which information in the digital age has accustomed us is difficult to reconcile with the time for reflection required by scientific evidence or political choices. Mélanie Clément-Fontaine, Professor of Law at the University of Versailles Saint-Quentin, specializes in new techniques. She offers this insight:

When a technology is developed, only the opportunities it offers are generally highlighted: improving living conditions, providing a technical or human advantage, etc. Entrepreneurs often invoke a legal vacuum, or a delay in the law, in order to circumvent it and quickly market a new product or service. And it is not uncommon for the law to be asked to encourage the emergence of an innovation… at the risk of forgetting that it also has an ‘ethical cost’! Regulation is essential to mitigate the potential risks of an emerging technology. Since societies do not all share the same political project, the same ethical culture or the same appreciation of the benefits associated with an innovation, one of the major challenges of the law is to think about the regulation of technologies on an international scale…

The role of the legal researcher is to develop a broad knowledge of his or her discipline. Mastering legal mechanisms made complex by an ever-increasing number of scattered texts, he or she can propose analyses, give an opinion on a new issue and help assess the importance of a risk. These analyses are used by the legislator to draft a new law, by the judge faced with an unprecedented situation, or by the magistrate to anticipate the scope of a decision. A new technique seeks its place within an existing framework that it often challenges. Resulting from automation techniques to which digital models, in the broad sense, contribute, the autonomous vehicles of the 21st Century are likely to reshape urban space (Figure 4.26).

The development of the autonomous vehicle raises many ethical questions:

Under French law, for instance, the driver is responsible for their vehicle: this principle no longer applies to autonomous vehicles! This technical innovation requires a rethinking of the principle of responsibility. The first level of reflection is that of the decision criteria. Through carelessness, inattention or any other reason, a pedestrian crosses the road unexpectedly in front of an autonomous vehicle. What decision should the driving algorithm make: avoid the pedestrian by making a sudden change of trajectory, endangering the lives of the four passengers in the vehicle – or run over the pedestrian? Ethics is an integral part of the design of algorithms! A second level concerns the ability of algorithms to learn and propose a solution that engineers have not foreseen: where does human responsibility lie in this case?

image

Figure 4.26. How will mobility in large urban areas be modified by the arrival of autonomous vehicles? For a color version of this figure, see www.iste.co.uk/sigrist/simulation1.zip

The safety of a connected vehicle is based on the data that the embedded software is able to analyze in order to adapt the vehicle’s behavior:

The data useful to the autonomous vehicle come from the environment (infrastructures, signaling, communications, meteorology, etc.) and from humans themselves! The former are of public interest: who can have access to them, and under what conditions? The latter concern vehicle users and potentially include personal data – such as health data. Protecting personal data and disseminating public data are among the most important legal issues.

What place will autonomous vehicles occupy in the composite landscape of urban mobility? The challenges associated with the emergence of this new entrant are multiple, as explained by Jakob Puchinger, Anthropolis Chairholder at IRT System-X:

Manufacturers and companies in the transport sector are seeking to develop services and products that meet new mobility needs. Emerging technical solutions, such as autonomous vehicles, open up new opportunities. When used in shared mode or in conjunction with other modes of travel, they can help reduce the number of vehicles on the road and parked in heavily congested urban areas…

The service offer based on autonomous vehicles could also be pulled in opposite directions: between equitable, shared access adapted to the constraints of urban traffic, and a premium service reserved for an economic elite – at the risk of contributing, for example, to a new urban segregation and to an increase in the number of vehicles travelling in a city:

To what extent will the supply of new services be regulated by public authorities? The choices imposed by the development of these technologies are among the most important and will determine part of the life of citizens in large urban areas.

There is no way, today, of predicting what the city of tomorrow will look like. Regulation (of speed, trajectory or consumption) is at the heart of the algorithms operating autonomous transport. How will it guide political choices concerning digital techniques?

According to some thinkers, the idea of regulating the development of artificial intelligence (and other techniques) would be doomed to failure. However, the project of international regulation of this technique, like others whose development is disrupting our lives, is not naïve. Despite its limitations, it is a necessity and a matter of collective choices [SAD 18].

In the last century, the international community, whether diplomatic or scientific, gave itself the means to propose a regulatory and legal framework around nuclear energy and its civil and military uses. Words do not prevent industrial accidents or the development of weapons programs in violation of international treaties – but their absence might give rise to even greater fears.

4.5.9. More human than human?

Let us conclude by asking ourselves about the human mind, which still holds as many enigmas as turbulence, if not more [SAC 85, SAC 10]. The French engineer and mathematician François Le Lionnais (1901–1984) helped found the OuLiPo literary movement [OUL 73], of which the French writers Raymond Queneau (1903–1976) and Georges Perec are the most famous contributors. He is the author of numerous essays and a dictionary devoted to mathematics – as well as texts on chess and literature. From his passion for knowledge, mathematics and especially the arts, including painting, he derived a life force that also allowed him to survive. Deported to the Dora concentration camp, he drew from his memories and knowledge the material to imagine, in order to withstand the conditions of an extreme daily life. In a moving testimony, reminiscent of the chess player, he writes:

Now well practiced at my game, I hardly needed the canvases painted by these painters to create my universe of shapes and colors. […] I dream of frescoes that would include poles at infinity, of others whose lines would be functions without derivatives, of others still, multivalued, whose complexity could only be managed by means of kinds of ‘Riemann surfaces’, and of a thousand not-so-serious spells… [LIO 16]

François Le Lionnais’ words testify to the extraordinary ability of humans to adapt and evolve – and to what sometimes makes them unique. The future will tell what capabilities we will lend or confer on robots and artificial intelligence; imagination allows us to access some of them. The film Blade Runner [SCO 85] tells how “Replicants”, robots equipped with AI and now harmful to humans, are hunted down. They are designed by the Tyrell Corporation, whose motto is: “More human than human”. In order to ensure their emotional stability, their designers implant an artificial past in their algorithms… The robots are able to feel emotions; they also have a finite lifespan. The last of them, Roy Batty, hunted by Deckard (to whom the American actor Harrison Ford gives life in the film), evokes in a last breath his digital memories. The story of the filming tells us that these words were improvised by the Dutch actor Rutger Hauer (1944–2019), embodying this last “Replicant”, a Nexus-6 model:

I have seen things you people wouldn’t believe… Attack ships on fire off the shoulder of Orion. I watched C-beams glitter in the dark near the Tannhäuser Gate. All those moments will be lost in time, like tears in rain…

Advances in biosciences, cognitive sciences, nanotechnologies and informatics are receiving great attention nowadays. They open up new fields of research and raise great hopes, including that of improving the capacities of human beings. The advantages of these prodigious advances are obvious when they make it possible to correct deficiencies, improve living conditions and relieve suffering. They also raise new ethical and political questions.

Considering the human being incomplete and imperfect, the transhumanist project aims, among other things, to increase our physical and mental faculties by means of the various techniques mentioned above [ALE 17, HAR 16]. Running ahead of ethical and political reflection, it challenges the relationship we have with our nature and our future. Some transhumanists are inspired by the reflections and predictions of Raymond Kurzweil, an American engineer, author and entrepreneur, who published a prospective book in 2005 in which he imagines the convergence of these different techniques. The following sentence summarizes the human–machine synthesis to which some transhumanists aspire:

Downloading a human brain means scanning all its essential details and then installing them on a sufficiently powerful computing system. This process would capture a person’s entire personality, memory, talents and history… [KUR 05]

For some neuroscientists, this conception is a mirage:

New technologies aim to transform us into pure minds. […] And this movement tends to intensify with the desire expressed by some to directly control computers through thought, to save more time and eliminate the last constraints imposed by the body. […] But the limit of attention [is physical] because it concerns the very structure of the brain. This attempt to dematerialize us […] is a decoy. [LAC 18]

According to transhumanists, these techniques will provide a solution to the challenges facing humanity and will offer new potentialities (primarily to those who have the means and the desire to acquire them). To date, there is no indication that the project of a humanity endowed with the means to live for more than a hundred years is out of reach of the 21st Century. This idea also raises the question of a project of society in which the human being should be ever more productive, efficient and perfect. For what purpose? Is this a desirable social project? If it were to become technically possible, would the transhumanist project win the support of humanity? The question remains open and calls for ethical and political choices. “What a human being is able to imagine, algorithms will one day be able to accomplish”, we could write, paraphrasing Jules Verne. Reproducing human intelligence ever more effectively, AI gives us an illusion of perfection, which would lead us to believe that a robot can pass itself off as a human – and that we should fear these metal “Replicants”:

It is not robots that we should fear – but their manufacturers, who, in order to sell as many as possible, risk offering us machines that make us lose the taste for humans [TIS 18].

Observing the current development of AI as lucidly as possible [HAR 18a] and questioning its future properties, we question our own choices and capacities as humans – and the uses we make (and will make) of the techniques and the mathematics that contribute to them.

1. Remember that equation-based models also use data, such as the mechanical characteristics of materials (metal, wood, concrete, composites) or the initial conditions of air and water flow, the presence of a pollutant in the atmosphere, the distribution of matter in the universe, etc. Equation-based models also produce data, such as physical quantities calculated at different points in space and over time, which are useful for understanding simulated phenomena.
2. Named after Cambridge Analytica, which asserts its expertise in political and commercial communication strategy and offers its clients new ways to predict and influence the behavior of consumers and voters. While some social science and psychology researchers suggest that the data we leave on social networks such as Facebook can be used to discern our personality traits [BAC 10b, HIR 12], it should be noted that the real effect of the tools developed for political marketing is still a subject of study and controversy [AGO 18, CHU 11, HEL 17, MEN 12, HOW 18]. Voters’ choices are the result of complex decision-making processes and global social contexts that algorithms cannot fully capture. In commercial marketing, the success rate of algorithms designed to suggest a product to a consumer based on their personality or purchasing behavior does not need to be very high: as soon as a large mass of consumers is targeted, sufficient profitability can be achieved.
3. Decoding language is part of learning to read and involves the cooperation of different activities, involving different brain areas. Combining language with emotions, sensations and imagination, reading mobilizes a real neural network. Neuroscientists are beginning to understand these mechanisms well and have discovered, for example, that this network does not change with language [ZIE 18].
4. The neural networks we are talking about here are numerical and not biological entities. Medical scientific research focuses on artificial neurons, devices that can be implanted in the brain and deliver molecules in a controlled way, as needed, for example, to help mitigate the effects of neurodegenerative diseases [SIM 15].
5. The first an upholder of the law, the second a mercenary, Robocop and Terminator are equipped with extraordinary mechanical abilities (speed, precision, resistance, etc.). Surpassing the humans they hunt down, and beyond the control of their creators, they prefigure robots programmed to accomplish lethal missions, whose development is a cause for concern in today’s scientific and diplomatic communities [DEL 19]. For the American director Steven Spielberg, A.I. is embodied in an emotionally endowed robot child who never stops pursuing his quest for maternal love, like an iterative algorithm programmed without a stopping criterion. Like The Truman Show [WEI 98], inspired by the reality TV shows of the late 1990s, the Matrix revisits, at the dawn of the Internet explosion, questions raised by Platonic, Cartesian – or Buddhist – philosophy. What is reality? How do we access it: through our thoughts, our sensations, our emotions – all of these at the same time? The famous Matrix is used neither to calculate nor to guide humans: it is a virtual reality program – i.e. a life-size numerical simulation – designed by an artificial intelligence. Intended to keep human beings in ignorance and enslave them, it extracts from them the most precious thing they have: their life energy! In Her, the character played by the American actor Joaquin Phoenix falls in love with an artificial intelligence with a sensual voice and a vocabulary that stimulates the imagination. Thanks to a particularly effective learning program, she knows everything about him (his habits, his way of thinking, his tastes, his fears, etc.) and constantly adapts to his expectations – as well as to those of millions of other humans with whom she interacts, while each one imagines his relationship with her to be unique…
6. See, for example, the websites: https://futureoflife.org/ai-news/ (scientific reflection and foresight), https://www.sciencedaily.com/news/computers_math/artificial_intelligence/ (scientific popularization) and https://www.theguardian.com/technology/artificialintelligenceai/ (general technical journalism).
7. The contributions of AI techniques are being evaluated in different industrial sectors, in particular nuclear and oil energy, which have been interested in them for decades [ABB 83, ALE 91, BER 89, FER 17, MOH 95, UHR 91, OLI 13], as well as, more recently, “conventional” energies [AHM 19] and water production [ANN 19].