The Commissariat à l’Énergie Atomique et aux Énergies Alternatives (French Atomic Energy and Alternative Energies Commission) is a French public research organization of a scientific, technical and industrial nature. The CEA’s 16,000 employees, nearly a third of whom are women, work in its nine French centers. They are involved in four areas of research, development and innovation: defense and security, nuclear and renewable energies, technological research for industry, and fundamental research (material and life sciences).
Drawing on its recognized expertise, the CEA takes part in collaborative projects with numerous academic and industrial partners. It is intended to contribute to technological development and to the transfer of knowledge, skills and techniques to industry.
The creation of the CEA was decided in 1945 by Charles de Gaulle (1890–1970), then head of the French Provisional Government, in the aftermath of the Second World War, when the various uses of nuclear energy took on a strategic character. France’s political ambition was to carry out “scientific and technical research with a view to the use of atomic energy in the various fields of science, industry and national defense”.
Nowadays, CEA’s missions have been adapted to many new scientific challenges – in particular those of life sciences and the environment.
The Centre d’Études et d’Expertise sur les Risques, l’Environnement, la Mobilité et l’Aménagement is a French public institution whose work addresses the major societal challenges of sustainable development and the management of territories and cities.
CEREMA is a technical expert in various fields (development, transport, infrastructure, risks, construction, environment, etc.) and contributes its knowledge and know-how to improve the living conditions of citizens.
Focused on supporting the French State’s public policies, CEREMA is placed under the dual supervision of the Ministry for the Ecological and Solidary Transition and the Ministry of Territorial Cohesion.
The Centre Européen de Recherche et de Formation Avancée en Calcul Scientifique is a fundamental and applied research center in France. Created in 1987, it specializes in numerical modeling and simulation. Through its resources and know-how in high-performance computing, CERFACS addresses the major scientific and technical problems of public and industrial research.
CERFACS teams include physicists, applied mathematicians, numerical analysts and computer scientists. They design and develop innovative methods and software solutions that meet the needs of the aeronautics, space, climate, energy and environment sectors [HUE 98].
The Centre Technique des Industries Mécaniques is one of the leading industrial technical centers in France. Created in 1965 at the request of French mechanical engineering manufacturers, CETIM’s mission is to provide companies with the means and skills to increase their competitiveness, to participate in standardization, to make the link between scientific research and industry, to promote technological progress and to help improve efficiency and quality assurance.
CETIM has three main missions: carrying out and taking part in shared research and development activities; providing a comprehensive, personalized range of services; and supporting SMEs.
Devoting more than half of its human (700 people including 400 engineers) and technical resources to innovation, CETIM organizes its R&D in four areas: manufacturing processes and materials science, design-simulation-testing loop, sustainable development and expertise in controls and measurements.
The Centre de Coopération Internationale en Recherche Agronomique pour le Développement is the French organization for agricultural research and international cooperation for the sustainable development of the tropical and Mediterranean regions.
Its activities are in the life sciences, social sciences and engineering sciences applied to agriculture, food, the environment and land management.
It works on major themes such as food security, climate change, natural resource management, inequality reduction and poverty reduction.
The Centre National de la Recherche Scientifique is a public research organization. Placed in France under the supervision of the Ministry of Higher Education, Research and Innovation, it produces knowledge and puts it at the service of society.
Present throughout France, CNRS’s 32,000 researchers work in all fields of knowledge, in the organization’s 1,100 research and service units. Physicists, mathematicians, computer scientists: many CNRS researchers contribute to the development of digital simulation and its applications for the benefit of this scientific community.
CNRS was created in 1939 at the initiative of Jean Zay (1904–1944), Minister of National Education and Fine Arts in the Popular Front government, assisted by Irène Joliot-Curie (1897–1956) and Jean Perrin (1870–1942), Nobel Prize winners in Chemistry and Physics in 1935 and 1926, respectively.
It is one of the first global research organizations whose contributions cover both fundamental knowledge – an instrument of scientific sovereignty that is useful to all citizens – and its applications to the economic innovation of France and its partners.
The Direction Générale de l’Armement is a department of the French Ministry of the Armed Forces whose main mission is to prepare the future of France’s defense systems. Within its technical department, various centers of expertise contribute to the development of digital simulation techniques.
It supervises engineering schools, some of whose research laboratories are contributors to current innovations in numerical simulation. It participates in the financing of research organizations such as ONERA, CEA and CNES.
The European Space Agency is the third largest space agency in the world after NASA and the Russian Federal Space Agency. It is an intergovernmental space agency that coordinates space projects carried out jointly by some 20 European countries.
Founded in 1975, ESA pools the financial and intellectual resources of its members, and can thus conduct space programs or develop scientific activities beyond the means available to any single European country.
Supporting Europe’s space projects, ESA also ensures that investments in this field contribute to European citizens and humanity as a whole, collaborating with many space agencies around the world. ESA’s research programs aim to produce knowledge about the Earth and its near-space environment, our solar system and the Universe. ESA’s scientific activities contribute to the development of services offered by satellite technologies and support European industries.
Scientists, engineers, information and communication technology specialists, administrative staff: ESA’s teams consist of about 2,200 people representing each of the member countries.
ESA has an annual budget of €5.75 billion, funded by contributions from each member country in proportion to its GDP (this budget represents an average annual contribution of around €20 per citizen of the member countries).
COMMENT ON FIGURE G.1.– As tourist space travel projects are developing [DAV 18, CAV 18], the experience of going into space, for example aboard the international orbital station, remains to this day the privilege of extraordinary personalities. It mobilizes substantial financial resources, accompanying a range of techniques operated by a human chain with multiple skills [MON 17].
The Grand Équipement National de Calcul Intensif is a French public company owned by the Ministry of Higher Education, Research and Innovation, the CEA and the CNRS. It was created in 2007 by the French government and aims to democratize the use of computer simulation and intensive computing to support French competitiveness in all fields of science and industry.
GENCI provides powerful computers (more than 5 Pflop/s) for French scientists to carry out advanced work that requires the use of digital simulation. It also carries out three complementary missions.
The Institut National de la Recherche Agronomique is a French public institution of a scientific and technological nature. Under the dual supervision of the Ministry of Research and the Ministry of Agriculture, it is the leading agricultural research institute in Europe and the second largest in the world in agricultural sciences.
Founded in 1946 in response to a social demand, that of “feeding France” in the aftermath of the Second World War, INRA conducts research in the service of major social issues. They cover three highly intertwined areas: food, agriculture and the environment.
Its ambition is to contribute to the development of an agriculture that is competitive, respectful of the environment, territories and natural resources, and better adapted to human nutritional needs, as well as to new uses of agricultural products. In the 21st Century, the objective is now to “feed the world sustainably”.
INRA’s missions, like those of many public institutions, are multiple.
The study and in situ experimentation of agricultural practices is at the heart of INRA’s research (Figure G.2). It makes it possible to evaluate production methods and collect data that could be useful for the validation of simulation models.
INRA is located in France in 17 regional centers where just over 8,150 employees (researchers, engineers, technicians, etc.) work in the fields of life sciences, material sciences and human sciences.
The Institut National de Recherche en Informatique et Automatique is a French public research institution dedicated to digital sciences. It promotes scientific excellence in the service of technology transfer and society.
INRIA’s 2,600 employees explore original techniques with its industrial and academic partners.
INRIA thus responds to the multidisciplinary and applicative challenges of the digital transition. At the origin of many innovations that create value and employment, it transfers its results and skills to companies (start-ups, small and medium-sized enterprises and large groups) in areas such as health, transport, energy, communication, security and privacy, the smart city and the factory of the future.
INRIA was created in 1967 as part of the Plan Calcul [MOU 10]. A French government strategy decided by Charles De Gaulle at the initiative of a group of senior civil servants and industrialists, the plan was designed at the time to ensure France’s autonomy in information technologies and to develop a European IT system.
The Institut Français de Recherche pour l’Exploitation de la Mer is a French public institution under the supervision of the Ministry of Ecology, Sustainable Development and Energy and the Ministry of Higher Education, Research and Innovation. The Institute’s research supports the deployment of the French State’s maritime policies, the Common Fisheries Policy of the European Union and national biodiversity strategies.
Through its research and expertise, IFREMER designs and implements observation, experimentation and monitoring tools, and manages oceanographic databases.
Placed under the joint supervision of the French Ministry for the Ecological and Solidary Transition and the Ministry of Higher Education, Research and Innovation, the Institut Français des Sciences et Technologies des Transports, de l’Aménagement et des Réseaux is a French public scientific and technological institution.
IFSTTAR conducts finalized research and expertise in the fields of transport, infrastructure, natural hazards and the city to improve the living conditions of citizens and more broadly to promote the sustainable development of societies.
Its missions are carried out, in particular, for the benefit of the services of the line ministries, other administrations and bodies attached to them, local authorities, European and international institutions, professional associations, companies and users’ associations.
The Instituts de Recherche Technologique are intended to support an industrial innovation strategy in promising markets for French companies. Their purpose is to support long-term partnerships between higher education and research institutions and companies.
Spread throughout France, the eight IRTs (created by the French government in 2010) address cutting-edge techniques (Table G.1), including the digital engineering of the systems of the future, carried out in the Paris region by the IRT System-X.
Table G.1. The eight French IRTs cover advanced technical fields and are distributed throughout the national territory (Data: Ministère de l’Enseignement Supérieur, de la Recherche, et de l’Innovation in France, http://www.enseignementsup-recherche.gouv.fr/)
| IRT | Location | Field |
|---|---|---|
| BIOASTER | Paris, Lyon | Infectious diseases and microbiology |
| B-Com | Rennes | Images and digital technologies |
| Jules Verne | Nantes | Structural production technologies (composite, metal and hybrid) |
| M2P | Metz, Belfort | Materials, processes and metallurgy |
| Nanoelec | Grenoble | Nanoelectronics |
| Railenium | Valenciennes | Railway infrastructure |
| Saint-Exupéry | Bordeaux, Toulouse | Aeronautics, space and embedded systems |
| System-X | Saclay | Digital engineering of the systems of the future |
NAFEMS (National Agency for Finite Element Methods and Standards), a not-for-profit organization established in 1983, is the International Association for the Engineering Modeling, Analysis and Simulation Community. NAFEMS aims to establish best practice in engineering simulation and improve the professional status of all persons engaged in the use of engineering simulation. It provides a focal point for the dissemination and exchange of information and knowledge relating to engineering simulation, and also acts as an advocate for the deployment of simulation.
NAFEMS promotes collaboration and communication between communities of industrial and academic practitioners of numerical simulation, by continuously improving the education and training in the use of simulation techniques. It is today recognized as a valued independent authority that operates with neutrality and integrity.
NAFEMS focuses on the practical application of numerical engineering simulation techniques, such as the Finite Element Method for Structural Analysis, Computational Fluid Dynamics and Multibody Simulation. In addition to end users from all industry sectors, NAFEMS’ stakeholders include technology providers, researchers and academics.
Created on July 29, 1958, when US President Dwight D. Eisenhower (1890–1969) signed the National Aeronautics and Space Act, the National Aeronautics and Space Administration is a government agency responsible for executing the US civil space program. It absorbed all US scientific and technical expertise in the field, including that of NACA, the US federal agency responsible for aeronautical research since 1915.
In the context of military supremacy and technical rivalry between the USSR and the USA, the launch of Sputnik-1, the first artificial satellite in history, achieved a few months earlier by the Soviets, caused a real shock in the USA. The creation of NASA met the US objective of closing the gap in the race for the control of space.
The lunar program, announced by US President John F. Kennedy (1917–1963) in 1961, led to NASA’s real expansion. That same year, on April 12, 1961, cosmonaut Yuri Gagarin (1934–1968) became the first man in space, followed by astronaut Alan Shepard (1923–1998) on May 5, 1961. The US made its first successful orbital flight a year later, when astronaut John H. Glenn (1921–2016) made three revolutions around the Earth aboard the Mercury capsule Friendship 7. After flying nearly 130,000 kilometers in space over the 4 hours and 56 minutes of flight, the capsule landed in the sea east of the Bahamas, close to the point calculated by NASA engineers. This long-awaited American success is recounted in the films The Right Stuff [KAU 89] and Hidden Figures [MEL 16], among others.
The latter evokes the personalities of Mary Jackson (1921–2005), Katherine Johnson and Dorothy Vaughan (1910–2008). Black women gifted in science, they contributed to NASA’s programs in the 1960s, at a time when racial segregation was being fought by personalities committed to the struggle for equal rights and when the best interests of space programs required the talents of everyone.
COMMENT ON FIGURE G.3.– The work on aerodynamics done by engineer Mary Jackson is a direct contribution to numerical simulation as we know it today. Mathematician and engineer Katherine Johnson helped to calculate the trajectories and launch windows of many space flights – using the method developed by Euler to solve differential equations. Mathematician Dorothy Vaughan specialized in computer science and programming in FORTRAN, one of the languages used to develop the first layers of computational code. Hidden Figures, which is both entertaining and thought-provoking, covers topics as diverse as the place of women in science, the role of humans in major engineering projects and changes in societal attitudes (portraits of Mary Jackson, Katherine Johnson and Dorothy Vaughan drawn by Zelda Bomba).
The US space conquest was accomplished at the cost of technical and human risks – some test pilots and astronauts paid the ultimate price for the space dream. Human spaceflight is the hallmark of the US, remaining NASA’s main activity for many years and accounting for a significant portion of its annual budget (on average $8 billion each year). It suffered painful failures from the beginning, with, for example, the death of the three members of the first Apollo mission: Roger Chaffee (1935–1967), Virgil Grissom (1926–1967) and Ed White (1930–1967) died on January 27, 1967 in a fire in their capsule, on the launch pad of the rocket that was to propel them to the Moon. In 2011, the United States ended the space shuttle program. Two major accidents probably contributed to this decision: the breakup of the Challenger shuttle shortly after liftoff on January 28, 1986 and the disintegration of Columbia on its re-entry into the atmosphere on February 1, 2003, after a 16-day mission. Thirteen astronauts and a teacher, a passenger on the Challenger flight, died in these two accidents. The first accident is attributed to the failure of a seal on one of the solid rocket boosters, the second to damage to the heat shield on the wing’s leading edge.
NASA carries out space and aerospace exploration and research missions by various means: probes, satellites, robotic missions, etc. On April 24, 1990, space shuttle Discovery launched the Hubble telescope, named in memory of US astronomer Edwin Hubble (1889–1953), to observe the universe from space. Despite technical difficulties encountered at the beginning of its commissioning, this orbital observatory provides scientists with data that allows them to better understand how the Universe works.
Current missions include the Mars exploration programs, helping to uncover the secrets of the Red Planet (Figure G.4): on November 26, 2018, the InSight probe, part of NASA’s Discovery program, landed on Mars [MAL 18]. Earlier in 2018, NASA unveiled its plans for human exploration of the Moon and the Martian system. It published a calendar of assembly, logistics and manned missions, with the ambition of going to Mars in the 2030s.
COMMENT ON FIGURE G.4.– A self-portrait of NASA’s Curiosity Mars rover shows the robot at a drilled sample site called Duluth on the lower slopes of Mount Sharp. A Martian dust storm reduced sunlight and visibility in the Gale Crater. The north-northeast wall and rim of the crater lie beyond the rover, their visibility obscured by atmospheric dust (Source: https://marsmobile.jpl.nasa.gov/).
NASA, which also has a science department dedicated to studying the effects of climate change, employs about 20,000 people in 20 institutions, such as the Goddard Space Flight Center, the Jet Propulsion Laboratory, the Langley Research Center and the Kennedy Space Center. Scientific computation as a whole is one of the major techniques contributing to all NASA space projects, as well as to other global space agencies.
In a context of strong economic and scientific rivalry between the US, China and other emerging countries, NASA is once again becoming the voice of the US ambition to send humans to the Moon [MIN 19]. On January 2, 2019, China succeeded in placing a probe on the far side of the Moon [JON 19, WAL 19]. On February 22, 2019, Israel launched a spacecraft with the objective of placing a probe on the Moon. The project, led by a non-governmental organization [HOL 19], would have cost less than $100 million [CLA 19] and ended in failure [CHA 19].
The National Oceanic and Atmospheric Administration is a US scientific agency within the United States Department of Commerce. It focuses on the conditions of the oceans, major waterways and the atmosphere. NOAA warns of dangerous weather, charts seas, guides the use and protection of ocean and coastal resources, and conducts research to provide understanding and improve stewardship of the environment.
NOAA plays several specific roles in society, the benefits of which extend beyond the US economy and into the larger global community:
The main activities of NOAA are monitoring and observing Earth systems with instruments and data collection networks; understanding and describing Earth systems through research and analysis of that data; assessing and predicting the changes of these systems over time; engaging, advising and informing the public and partner organizations with important information; managing resources for the betterment of society, economy and environment.
The Office National d’Études et de Recherches Aérospatiales is a French public establishment of an industrial and commercial nature. It has nearly 2,000 employees contributing to its missions, which aim to:
ONERA has a fleet of wind tunnels contributing to the qualification of simulation methods and aircraft prototypes. It was created in the aftermath of the Second World War by decree of Charles Tillon (1897–1993), then French Minister of Armaments.
ONERA has contributed to the implementation of many French industrial programs: Concorde or Airbus civil aircraft, Ariane rockets – to name the most prominent.
Computational Fluid Dynamics (CFD) simulation involves using a computational code to solve the equations governing the flow of a fluid, which is also described by its law of behavior and the volumes in which it flows. The finite volume technique is the most commonly used in CFD for applications of interest to engineers.
The simulation of structural dynamics (Computational Structural Dynamics – CSD) consists of using a computational code that takes into account the geometry of the system under study, mathematical laws that translate the mechanical behavior of the materials of which it is made, and solving the equations of motion. Finite element technology is the most widely used in CSD for applications of interest to engineers.
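The finite element principle mentioned above can be sketched in a few lines. The following example (a minimal static illustration with assumed values for the material, section and load, not taken from any particular code) assembles a stiffness matrix for a 1D elastic bar, clamped at one end and pulled at the other, and recovers the analytical tip displacement F·L/(E·A):

```python
import numpy as np

# Minimal finite element sketch: a 1D elastic bar clamped at node 0 and
# loaded by a force F at the free end. E, A, L and F are assumed values.
E, A, L, F = 210e9, 1e-4, 1.0, 1000.0   # Young's modulus, section, length, load
n_el = 4                                 # number of elements
h = L / n_el                             # element length

# Assemble the global stiffness matrix from identical element matrices
k_el = (E * A / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
K = np.zeros((n_el + 1, n_el + 1))
for e in range(n_el):
    K[e:e + 2, e:e + 2] += k_el

# Load vector: force applied at the free end
f = np.zeros(n_el + 1)
f[-1] = F

# Apply the clamped boundary condition (u = 0 at node 0) and solve
u = np.zeros(n_el + 1)
u[1:] = np.linalg.solve(K[1:, 1:], f[1:])

# For this linear problem, the tip displacement matches F*L/(E*A)
print(u[-1])
```

Real structural codes follow the same assembly-and-solve pattern, with 2D or 3D elements and the equations of motion added for dynamics.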
The Direct Numerical Simulation (DNS) of flows consists of solving the conservation equations describing a turbulent fluid flow using a numerical method.
Fluid–Structure Interaction (FSI) refers to the exchange of mechanical energy between a vibrating structure and a fluid; the latter can be stagnant or in flow. In the first case, the interaction results in inertial effects for the structure, to which rigidity and dissipation effects can be added, depending on the frequency range considered. The latter are represented by means of added mass, stiffness and damping operators, which can be calculated numerically or analytically. In the second case, the interaction results in a transfer of energy from the flow to the structure, which can lead, for example, to vibration instability.
The Finite Volume Method (FVM) is a numerical method based on the writing of a conservation balance of a given physical quantity on a set of elementary volumes constituting the mesh of a domain in which a fluid flows, for example. The balance states that the variation of a quantity in a volume is the difference between the inflows and outflows in that volume. The method is widely used in fluid dynamics calculation codes.
High Parallel Computing (HPC) aims to use supercomputers (computers with outstanding computing performance) in order to perform simulations that require significant computing time. This computing power is used, for example, in the fields of meteorology, climatology and fluid mechanics, particularly for turbulent flows. The basic sciences (astrophysics, physics, chemistry) are among the main users of HPC computing resources.
The Lattice Boltzmann Method (LBM) is a fluid dynamics method. By solving Boltzmann’s discrete equation, it simulates the behavior of Newtonian and non-Newtonian fluids on a “mesoscopic” scale. At this scale of description, the fluid is described as a set of particles whose dynamics is rendered by the Boltzmann equation, which Boltzmann proposed on the basis of the work of Bernoulli and Maxwell. A distribution function ϕ models the kinetics of the fluid particles. Depending on time, the position and the velocity of the particles, its evolution is described by a “transport equation”:

$$\frac{\partial \phi}{\partial t} + \mathbf{v} \cdot \nabla_{\mathbf{x}}\, \phi = \Omega(\phi)$$
On the left-hand side, the first term describes the unsteadiness of the flow and the second term the advection, reflecting the fact that particles move at a constant speed between two collisions. The right-hand side accounts for the collisions between particles; these are rendered using different shock models. In the LBM, the equation is solved using a collision-propagation scheme, allowing complex fluid behaviors to be reproduced.
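The collision-propagation scheme can be sketched on the simplest lattice. The example below (an illustration with assumed parameters, not a production solver) uses a three-velocity 1D lattice (D1Q3) with the BGK relaxation model to simulate the diffusion of a density peak: each time step relaxes the distributions toward equilibrium (collision), then shifts them along their lattice velocities (propagation):

```python
import numpy as np

# Minimal lattice Boltzmann sketch: D1Q3 lattice, BGK collision operator,
# simulating pure diffusion of a density peak on a periodic 1D domain.
# The lattice size, relaxation time tau and step count are assumed values.
n, tau, n_steps = 64, 0.8, 50
w = np.array([4 / 6, 1 / 6, 1 / 6])    # weights for velocities 0, +1, -1

rho0 = np.ones(n)
rho0[n // 2] = 2.0                      # initial density peak
f = w[:, None] * rho0                   # start from the equilibrium state

for _ in range(n_steps):
    rho = f.sum(axis=0)                 # macroscopic density
    f_eq = w[:, None] * rho             # equilibrium distributions
    f += (f_eq - f) / tau               # collision (BGK relaxation)
    f[1] = np.roll(f[1], 1)             # propagation of velocity +1
    f[2] = np.roll(f[2], -1)            # propagation of velocity -1

rho = f.sum(axis=0)
# Mass is conserved while the peak spreads out by diffusion
print(rho.sum(), rho.max())
```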
Large Eddy Simulation (LES) is a method of calculating flow to solve some of the turbulent scales (the large scales) and to model the influence of the smaller ones.
Reynolds-Averaged Navier–Stokes (RANS) methods solve the turbulent flow equations in an averaged sense, separating the evolution of the mean velocity and pressure fields from the contribution of the fluctuations around this average.
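The averaging at the heart of these methods is the Reynolds decomposition, u = U + u′, which can be demonstrated on a synthetic signal (the mean value and fluctuation level below are assumed, purely for illustration):

```python
import numpy as np

# Reynolds decomposition on a synthetic "turbulent" velocity signal:
# u is split into its mean U and a fluctuation u' of zero average.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 10_000)
u = 2.0 + 0.3 * rng.standard_normal(t.size)   # assumed mean flow + fluctuations

U = u.mean()            # mean field: the quantity solved for in RANS methods
u_prime = u - U         # fluctuation around the average

# By construction the fluctuation averages to zero; its variance feeds the
# Reynolds stresses that RANS turbulence models must represent.
print(U, u_prime.mean(), (u_prime ** 2).mean())
```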
The Smoothed Particle Hydrodynamics (SPH) method is a flow simulation method. Based on tracking the movement of individual fluid particles, it makes it possible to represent physical phenomena that are inaccessible to simulation methods computing velocities on a fixed mesh. It was developed in the late 1970s in astrophysics to simulate phenomena such as the formation and evolution of stars and galaxies. It then underwent a major expansion through its application in fluid dynamics, being applied to the calculation of compressible, incompressible and multiphase flows.
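The basic SPH operation is the smoothed density estimate: the density at a particle is the sum of its neighbors' masses weighted by a smoothing kernel W. The 1D sketch below (particle spacing and smoothing length are assumed values) uses the classic cubic spline kernel and recovers the unit density away from the domain ends:

```python
import numpy as np

# SPH density estimate in 1D: rho_i = sum_j m_j * W(x_i - x_j, h).
def cubic_kernel(r, h):
    """1D cubic spline smoothing kernel with smoothing length h."""
    q = np.abs(r) / h
    sigma = 2.0 / (3.0 * h)              # 1D normalization constant
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
                 np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return sigma * w

# Particles evenly spaced on a line, each carrying the same mass
n, dx, h = 50, 0.1, 0.1
x = np.arange(n) * dx
mass = 1.0 * dx                          # mass per particle (unit density)

r = x[:, None] - x[None, :]              # pairwise particle separations
rho = (mass * cubic_kernel(r, h)).sum(axis=1)

# Away from the domain ends, the estimate recovers the unit density
print(rho[n // 2])
```

A full SPH solver then computes pressure and viscous forces from the same kernel sums and moves the particles accordingly.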
An algorithm is a procedure describing, using a specific sequence of elementary operations (arithmetic or logical), a systematic approach to solving a problem or performing a task in a given number of steps.
We use algorithms in our daily lives: when following a recipe, or a route proposed by our navigation system or application (Figure G.5), or by making a purchase on the Internet.
Algorithms are becoming increasingly important, particularly in their ability to perform complex calculations, store and transmit information and learn from data [ABI 17].
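A classic example of such a procedure is Euclid's algorithm for the greatest common divisor: a systematic sequence of elementary operations that is guaranteed to terminate in a finite number of steps:

```python
def gcd(a, b):
    """Return the greatest common divisor of two positive integers
    using Euclid's algorithm."""
    while b != 0:
        a, b = b, a % b   # replace (a, b) by (b, remainder of a by b)
    return a

print(gcd(1071, 462))   # → 21
```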
Artificial intelligence (AI) refers to all the theories, models, and numerical and algorithmic techniques used to produce machines capable of simulating (in the sense of reproducing) human intelligence. Drawing up a complete panorama of AI goes far beyond the scope of this book; the examples to which we refer here are mainly those of machine learning.
Many algorithms can be implemented in machine learning, particularly depending on the objectives assigned to the AI and the data available. In this book we discuss one of the most well-known techniques, that of neural networks.
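The principle of a neural network can be shown in miniature. The sketch below (an illustration with assumed hyperparameters, not the code of any particular library) trains a network with one hidden layer by gradient descent to approximate the XOR function:

```python
import numpy as np

# A tiny neural network: 2 inputs, one hidden layer of 8 sigmoid units,
# 1 sigmoid output, trained by full-batch gradient descent on XOR.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # hidden layer parameters
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # output layer parameters
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(10_000):
    # Forward pass: propagate the inputs through the two layers
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: gradients of the squared error, layer by layer
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

mse = float(np.mean((out - y) ** 2))
print(out.ravel(), mse)
```

The "learning" is nothing more than the repeated adjustment of the weights in the direction that reduces the error on the data.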
The term artificial intelligence first appeared at a conference dedicated to this technique, organized in the 1950s by American computer scientists John McCarthy (1927–2011) and Claude Shannon (1916–2001). At that time, when the best minds were wondering about the possibility of building thinking machines, Alan Turing proposed a test to determine if a machine showed “intelligence”. The device he imagined is as follows. A human interrogator, interacting with two entities without seeing them, must determine which one is the human and which one is the machine. If he is mistaken more often than when he has to distinguish (in the same circumstances) a woman from a man, then the machine passes the test. The Turing test is first of all valid for the empirical purpose it assigns to AI – to make the machine’s performance rival that of a human in different registers deemed to require intelligence. For more than 50 years, Turing’s questions about the possibility of building thinking machines have continued to stimulate the AI research community.
Big Data refers to a set of techniques aimed at collecting and storing data of various kinds, available in large quantities and sometimes in a fragmented way. Algorithms for processing these data aim in particular to establish the links between these data (Figure G.7), in order to propose models that make predictions and contribute to decision support.
Data is the raw material of Big Data, as well as of the calculations performed by digital simulation and artificial intelligence algorithms. However, having a lot of data is not enough! In particular, developing predictive models requires the use of structured databases that synthesize the experts’ knowledge of a subject or system.
On physical systems, data can come from experimental devices, opportunity measurements (airplane in flight, ship at sea, etc.) and numerical simulations, which allow a broad operational domain to be explored at a lower cost and risk.
Annotating raw data (images, sounds, texts, physical signals, etc.), that is, linking it to a context in order to give it meaning, is thus one of the challenges of Big Data: data acquires greater value for predictive use.
Blockchain is a technique for storing and transmitting information. Transparent and secure, it operates without a central control system. Blockchain implementations contribute to the increased demand for data processing and storage.
A blockchain is a database containing the history of all exchanges between its users since its creation. The blocks contain transactions, writing operations performed in a specific order. Shared by its various users, without intermediaries, this database is secure and distributed. This allows everyone to check the validity of the chain.
The uses of the blockchain are potentially varied. For example, it contributes to the transfer of assets (currencies, securities, votes, shares, bonds, etc.) and improves their traceability. It allows smart-contracts, autonomous programs that automatically execute the terms and conditions of a contract, without requiring human intervention once they have started. The fields of exploitation of the blockchain are numerous, particularly in sectors requiring transactions (banking, insurance and real estate), data exchanges (pharmaceutical and artistic industries) or product exchanges (agri-food, aeronautics and automotive industries).
COMMENT ON FIGURE G.8.– The operation of the blockchain is schematically as follows. A user (A) makes a transaction to another user (B). This, like all transactions carried out by users, is distributed on different nodes of a network that ensures validation according to techniques and algorithms depending on the type of blockchain. Once the block is validated, it is time-stamped and added to the block chain. The transaction is then visible to the receiver and the entire network. The decentralized nature of the blockchain, coupled with its security and transparency, promises much broader applications than the monetary field in which, with Bitcoin, it has emerged (Source: https://blockchainfrance.net).
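The chaining principle described above can be sketched in a few lines. The toy ledger below (an illustration only: real blockchains add distributed consensus, signatures and much more) links each time-stamped block to its predecessor through a cryptographic hash, so that any tampering breaks the chain:

```python
import hashlib
import json
import time

# A toy hash-chained ledger illustrating the blockchain principle.
def make_block(transactions, previous_hash):
    """Create a time-stamped block linked to its predecessor by its hash."""
    block = {
        "timestamp": time.time(),
        "transactions": transactions,
        "previous_hash": previous_hash,
    }
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

# Build a short chain: a genesis block followed by two transaction blocks
chain = [make_block(["genesis"], "0" * 64)]
chain.append(make_block(["A pays B 5"], chain[-1]["hash"]))
chain.append(make_block(["B pays C 2"], chain[-1]["hash"]))

def is_valid(chain):
    """Every block must reference the hash of the previous one."""
    return all(chain[i]["previous_hash"] == chain[i - 1]["hash"]
               for i in range(1, len(chain)))

print(is_valid(chain))   # the untampered chain is valid
```

Rewriting an old transaction would change that block's hash and invalidate every later link, which is what makes the history hard to falsify.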
Representation of information in a program: either in the program text (source code) or in memory during execution. Data, often coded, describes the elements of the software such as an entity (thing), interaction, transaction, event, subsystem, etc.
Data corresponds to an elementary description of a reality: for example, an observation or a measurement. It is devoid of any reasoning, supposition, recognition or probability. Unquestionable or undisputed, it can be used as a basis for research.
Information results from the processing of raw data, that is, data that comes from a primary source and has not yet been interpreted. Processing gives this data a meaning and thus produces information.
A differential equation has one or more mathematical functions as unknowns. It takes the form of a relationship between these functions and their successive derivatives – their variations over time. Mathematicians write differential equations as follows:

$$\frac{d\phi}{dt}(t) = f(\phi(t), t)$$
The left-hand member of the equation represents the variation in time of a function and the right-hand member represents a relationship between this function and time. Newton and Leibniz gave mathematical meaning to the writing of the derivative, which relates minute variations of two quantities (as expressed by the first member of the equation).
Differential equations were used to construct mathematical models of mechanics. For example, they make it possible to express the law of motion linking the acceleration of a body with the forces exerted on it (acceleration is the variation of speed, itself a variation of displacement). They also represent many physical phenomena (electricity, chemistry, heat, electromagnetism, etc.). They are also used to describe biological, chemical or demographic evolution processes, for example. The Lorenz equation (Chapter 1, first volume) and the Lotka-Volterra equations (Chapter 1, second volume) are examples of differential equations.
The solutions of differential equations can be represented in different ways, for example by plotting the evolution of each of the components of ϕ(t) as a function of time. In some cases, this process makes it possible to visualize a strange attractor, the set of points towards which a system evolves and the dynamics of which is represented by a differential equation: the arabesques obtained could compete with some artists’ creations (Figure G.9)!
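As an illustration (a sketch, not taken from the text), the Lorenz system mentioned above can be integrated numerically with a classical fourth-order Runge–Kutta scheme; plotting the (x, y, z) points of the resulting trajectory reveals the strange attractor. The parameter values below are the usual ones.

```python
def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz differential equations."""
    x, y, z = state
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

def rk4_step(f, state, dt):
    """One step of the classical fourth-order Runge-Kutta scheme."""
    k1 = f(state)
    k2 = f(tuple(s + 0.5 * dt * k for s, k in zip(state, k1)))
    k3 = f(tuple(s + 0.5 * dt * k for s, k in zip(state, k2)))
    k4 = f(tuple(s + dt * k for s, k in zip(state, k3)))
    return tuple(s + dt / 6.0 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

# Integrate from an arbitrary starting point; the trajectory winds
# around the two lobes of the attractor without ever repeating itself.
state = (1.0, 1.0, 1.0)
trajectory = [state]
dt = 0.01
for _ in range(5000):
    state = rk4_step(lorenz, state, dt)
    trajectory.append(state)
```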
Disruption is a noun of Latin origin in which the prefix dis reinforces the intensity of the verb rumpere, to break. A disruption therefore refers to a sudden break. It is used in geology, for example, to refer to the mechanisms of cracking and dislocation of rock layers. Surprisingly enough, the word “disruption” has been turned into a registered concept for marketing purposes [CHR 15, NOR 16]!
Disruption then refers to a so-called “breakthrough” innovation, one that challenges an established practice or market by offering a new good or service. The breakthrough innovation, promoted by the media, would be opposed to incremental innovation, presented as an improvement of a technique or practice.
An equation is a mathematical writing that translates a concrete problem or a physical (or other) principle into abstract language. Solving an equation consists of providing an answer to this problem. This makes it possible, among other things, to understand a physical phenomenon and/or to carry out different virtual experiments using computation.
An equation works like a scale linking quantities separated by a symbol of equality. It involves unknown quantities to be determined. These depend on variable quantities and known data (parameters). The unknown of an equation can be a number (the purchase price of an object or service) or a more complex mathematical entity (e.g. a function, giving the evolution of one physical quantity as a function of another, such as the consumption of an automobile during a journey).
Let us solve an arithmetic problem proposed by French writer Jean-Louis Fournier using an equation:
A salmon leaves Saumur via the Loire at 9:30 am, and it reaches a speed of three knots. A second leaves Saumur at 10 am in the same direction. At what time will the second salmon, which is travelling at four knots, reach the first fish’s tail? [FOU 93]
To answer this question, let us remember that distance is the product of speed and travel time. Let us count the latter from the time of the second salmon’s departure and note it t. The first salmon starts 0.5 hours ahead of the second and swims at a speed of 3 knots; the second catches up with it at a speed of 4 knots. The distance between them is then (t + 0.5) × 3 – 4 × t. It is nil when:

$$3(t + 0.5) - 4t = 0$$
We have written an equation that enables us to answer the question. Its unknown is the travel time; its parameters are the swimming speeds (3 and 4 knots), as well as the head start of the first salmon (0.5 hours). We solve the equation using basic algebraic rules and we determine:

$$t = 1.5 \ \text{hours}$$
This allows us to answer the question posed: the second salmon catches up with the first an hour and a half after leaving, at 11:30 am.
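The computation can be checked with a few lines of Python (the helper name is ours, chosen for illustration):

```python
# Hypothetical helper: time for a faster pursuer to close a head start.
def catch_up_time(head_start_h, v_leader, v_pursuer):
    """Solve v_leader * (t + head_start_h) - v_pursuer * t = 0 for t."""
    return head_start_h * v_leader / (v_pursuer - v_leader)

t = catch_up_time(0.5, 3, 4)
print(t)  # 1.5 hours after the 10 am departure, i.e. at 11:30 am
```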
The finite element method is used to solve partial differential equations numerically. It appeared at the end of the 19th Century, particularly with the work of Maxwell. Known for his contributions to electromagnetism, Maxwell is also a precursor of the finite element method. The method then developed with the need to analyze structures and the strength of the materials used to build them.
The first studies in this field were carried out in particular by:
These scientists used equations solved analytically, using hand-operated calculations. The mathematical formalization of the finite element method was carried out later, around the middle of the 20th Century. Various mathematicians and engineers contributed to its industrial development, in particular Olek Cecil Zienkiewicz (1921–2009), a Welsh engineer, who devoted most of his scientific life to supporting its development in different fields of modern mechanics [ZIE 67].
Numerical calculation combined with high-performance computer-based resolution techniques nowadays ensures the generalization of the method in many technical fields.
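A minimal example of the method in one dimension: solving −u″ = 1 on [0, 1] with homogeneous boundary conditions, using linear finite elements. This sketch assembles the classical tridiagonal stiffness matrix and solves it with the Thomas algorithm; it is an illustration, not an industrial code.

```python
def fem_poisson_1d(n_elements, f=1.0):
    """Linear finite elements for -u'' = f on [0, 1], u(0) = u(1) = 0."""
    n = n_elements
    h = 1.0 / n
    # Stiffness matrix (tridiagonal: 2/h on the diagonal, -1/h off it)
    # and load vector, assembled for the n - 1 interior nodes.
    diag = [2.0 / h] * (n - 1)
    off = [-1.0 / h] * (n - 2)
    rhs = [f * h] * (n - 1)
    # Thomas algorithm: forward elimination, then back substitution.
    for i in range(1, n - 1):
        w = off[i - 1] / diag[i - 1]
        diag[i] -= w * off[i - 1]
        rhs[i] -= w * rhs[i - 1]
    u = [0.0] * (n - 1)
    u[-1] = rhs[-1] / diag[-1]
    for i in range(n - 3, -1, -1):
        u[i] = (rhs[i] - off[i] * u[i + 1]) / diag[i]
    return u  # nodal values at x = h, 2h, ..., (n-1)h

u = fem_poisson_1d(8)
# The exact solution is u(x) = x(1 - x)/2; at x = 0.5 its value is 0.125.
print(u[3])
```

For this simple problem the finite element solution coincides with the exact solution at the nodes, which makes it a convenient test case.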
Modeling is using mathematics to represent the world or some of its particular aspects. Abstract objects play the role of real objects and from knowledge of them it is hoped that we can draw an understanding of the world. When the modeling is correct, the study of the mathematical model provides information on the situation, object or structures that the model targets. Under these conditions, it can be used for virtual experiments: testing theoretical hypotheses, checking the functioning of a system, ensuring its reliability, trying to anticipate a phenomenon, etc.
Model reduction methods have been a significant innovation in scientific computation in recent years. They have helped the practice evolve towards greater efficiency without any loss of precision. Industrial numerical simulations very often use models with a large number of unknowns: they deal with very large systems that require long computation times and take a multitude of physical parameters into account. Model reduction methods are receiving increasing attention from the industrial world, which uses them more and more systematically.
In mathematical terms, reducing a model consists of retaining from a complete model (containing all the information necessary to describe the system under study) only the contributions of certain quantities, those that are most relevant for the desired solution of the problem. The reduced-order model thus obtained contains information that is incomplete but sufficient to describe the overall behavior of the object. It is as if, in the recording of a concert, only the contribution of the main instruments were kept – while correcting for the missing ones, so as not to offend music lovers’ ears! This is like moving from a problem written on a large matrix (several hundred thousand unknowns):

$$K q = F$$
to a similar problem, written on a matrix of small size, or even very small size (several tens of unknowns):

$$K_M q_M = F_M$$
The index M indicates that the matrices used in the calculation are constructed from the vibration modes. In some cases, the reduction is spectacular: from a model containing tens of thousands of degrees of freedom (or unknowns), it is possible to build a model reduced to only a few tens of unknowns!
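The idea can be illustrated on a toy problem (an assumed example, not from the text): the vibration modes of the classical tridiagonal stiffness matrix are known analytically, so a static solution can be approximated by projecting the full problem onto a handful of modes.

```python
import math

# Full model: K q = f, with K the N x N tridiagonal matrix (2 on the
# diagonal, -1 off it) and f a uniform load of 1 on every unknown.
N = 50
M = 5  # number of vibration modes kept in the reduced model

# The modes of this matrix are known analytically:
# phi_k(i) = sin(k*pi*i/(N+1)), with eigenvalue 2 - 2*cos(k*pi/(N+1)).
def mode(k):
    return [math.sin(k * math.pi * i / (N + 1)) for i in range(1, N + 1)]

def eigenvalue(k):
    return 2.0 - 2.0 * math.cos(k * math.pi / (N + 1))

# Reduced model: project f onto each mode. Since the modes diagonalize K,
# the reduced matrix K_M is diagonal and trivial to invert.
q_reduced = [0.0] * N
for k in range(1, M + 1):
    phi = mode(k)
    f_M = sum(phi)                         # f = (1, ..., 1) projected on phi
    K_M = eigenvalue(k) * sum(p * p for p in phi)
    q_M = f_M / K_M
    q_reduced = [q + q_M * p for q, p in zip(q_reduced, phi)]

# Exact solution of the full model: q_i = i*(N+1-i)/2.
q_exact = [i * (N + 1 - i) / 2.0 for i in range(1, N + 1)]
error = max(abs(a - b) for a, b in zip(q_reduced, q_exact)) / max(q_exact)
print(f"relative error with {M} of {N} modes: {error:.3f}")
```

Keeping 5 of the 50 modes already reproduces the full solution to within a few percent, which is the essence of the method: a small, cheap system that captures the overall behavior.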
A numerical simulation is a series of calculations performed on a computer to reproduce a physical phenomenon. It leads to a description of how this phenomenon unfolded, as if it had actually occurred. A numerical simulation can represent complex physical phenomena and is based on a mathematical model with equations.
A partial differential equation involves the variations (derivatives), in time and space, of a given physical quantity u, depending on a time variable (noted t) and space variables (noted x, y in two dimensions). The derivatives in question may be noted $\partial u/\partial t$ and $\partial^2 u/\partial t^2$ (first and second order derivatives with respect to time), $\partial u/\partial x$ and $\partial u/\partial y$ (first order derivatives with respect to each of the space variables), $\partial^2 u/\partial x^2$ and $\partial^2 u/\partial y^2$ (second order derivatives in space), etc.
Partial differential equations are found in many mechanical (solid, fluid) or electromagnetic models: the d’Alembert or Schrödinger equations (Chapter 1, first volume), the Maxwell equation (Chapter 2, first volume) or the Navier–Stokes equations (Chapter 2, second volume) are examples of partial differential equations.
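A minimal illustration (a sketch under simple assumptions): the one-dimensional heat equation, $\partial u/\partial t = \partial^2 u/\partial x^2$, is a typical partial differential equation; it can be solved with an explicit finite-difference scheme in a few lines.

```python
# Explicit finite differences for du/dt = d2u/dx2 on a bar of length 1.
nx, dx = 51, 1.0 / 50
dt = 0.4 * dx * dx          # stability requires dt <= 0.5 * dx^2
u = [0.0] * nx
u[nx // 2] = 1.0            # initial hot spot in the middle of the bar

for _ in range(200):
    new_u = u[:]
    for i in range(1, nx - 1):
        # Second derivative in space approximated by a centered difference.
        new_u[i] = u[i] + dt * (u[i - 1] - 2 * u[i] + u[i + 1]) / (dx * dx)
    u = new_u  # the ends of the bar stay at temperature 0

# The initial peak diffuses: the profile spreads and flattens over time.
print(max(u))
```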
Information concerning ethnic origin, political, philosophical or religious opinions, trade union membership, health or sex life. In principle, sensitive data can only be collected and used with the explicit consent of the individual(s) concerned.
Research and development (R&D) encompasses creative work undertaken systematically to increase the amount of knowledge available to human communities (companies, communities, states) and the use of this amount of knowledge to develop new applications.
R&D work exclusively includes the following activities:
While the expenses incurred by these various communities to carry out this work are often presented as costs, they contribute above all to their future-oriented development.
Part of this investment is assumed by the citizens themselves, thus participating in the research effort of the communities to which they belong. In OECD countries, in 2016, the number of researchers represented just over 4.5 million full-time equivalent jobs – including nearly 2 million in the European Union (Table G.2).
Table G.2. The number of researchers in different OECD countries in 2016: full-time equivalent jobs, share of researchers in companies and public research (Data: Ministry of Higher Education, Research and Innovation in France/http://www.enseignementsup-recherche.gouv.fr/)
| Country | Number of employees | Private research | Public research |
COMMENT ON FIGURE G.11.– The world’s major economies devote a significant share of their wealth to R&D investment. The research effort can be assessed by considering the ratio between the wealth produced annually by a country and the total amount of investment. This ratio is 2.34% for the OECD as a whole (2.22% for France). In 2018, the United States remained the largest investor in R&D, with just over $475 billion, followed by China ($370 billion) and Japan ($170 billion). The 28 countries of the European Union together invested nearly 350 billion dollars. The two largest investors are Germany (110 billion) and France (60 billion). On average in OECD countries, public research accounts for nearly 30% of total R&D expenditure, including 35% in France (Source: Ministry of Higher Education, Research and Innovation in France, http://www.enseignementsup-recherche.gouv.fr/).
Virtual reality refers to a set of techniques and systems that give humans the feeling of entering an artificial universe. It gives them the possibility to perform in real time a certain number of actions defined by one or more computer programs and to experience a certain number of sensations (auditory, visual or haptic, for example).
Augmented reality refers to a virtual interface, in two or three dimensions, that enriches reality by superimposing additional information on to it.
Virtual or augmented reality also allows manufacturers to simulate operating conditions or machine assembly conditions, for example. These digital techniques make it possible, for example, to train operators in delicate operations and to carry them out with increased safety and ergonomics.