The technological building blocks that we have just described can be found, in whole or in part, in the various fields of computer use. We have selected seven of them: robots, virtual reality (VR) and augmented reality (AR), health, the connected (and soon autonomous?) car, the smart city, smart mobility and the factory of the future.
This sentence of Aristotle (384–322 BCE) holds our attention: “For if each instrument could accomplish its own work, obeying or anticipating the will of others, like the statues of Daedalus, or the tripods of Hephaestus, which, says the poet, ‘of their own accord entered the assembly of the Gods’; if, in like manner, the shuttle would weave and the plectrum touch the lyre without a hand to guide them, chief workmen would not want servants, nor masters slaves” (Book I, Part IV)1.
The word robot is a Czech word derived from robota (hard work, drudgery) and was used in 1920 by the writer Karel Čapek. From time immemorial, humans have sought to design tools capable of facilitating their activities. But they have also tried to create machines with capabilities resembling those of living beings. The history of robotics is part of the development of these two approaches.
There are three main stages in the development of robots.
Without going back to Antiquity, let us see how automatons have multiplied and been perfected over time.
The first animated clocks were created towards the end of the first millennium. They can still be seen today on certain bell towers, such as the astronomical clock in Strasbourg Cathedral, created in 1352: at the stroke of 12:30, the automatons shake, the apostles parade, a rooster crows and flaps its wings.
In 1781, Jacquard’s loom, which we mentioned in Chapter 1, selected the loom’s needles according to a precise geometric pattern thanks to a string of punched cards. This was already an important improvement, since the pattern could be changed simply by changing the cards.
In 1805, Henri Maillardet built a spring automaton capable of making drawings. This idea of having paintings created by automatons was taken up again, including recently in the exhibition “Artists and Robots” in 2018 at the Grand Palais in Paris.
The ATILF (Analyse et traitement informatique de la langue française) research unit at the CNRS defines a robot as “a device performing, thanks to a microprocessor-based automatic control system, a precise task for which it has been designed in the industrial, scientific, military or domestic field”. Robotics is therefore the set of sciences and techniques allowing the design and realization of robots.
Unlike automatons, robots are sensor-equipped systems capable of acting autonomously. They have sensory organs collecting information about their environment that will influence their activity; this activity is driven by increasingly sophisticated software.
They first appeared with the Industrial Revolution and then invaded many areas that we will cover shortly. From heavy industrial robotics to the medical or military field, via domestic robotics, machines are part of our daily lives.
The vast majority of today’s robots perform repetitive tasks without dynamic learning. The next step is to give the machine a certain autonomy in various environments, and an ability to adapt to unforeseen situations rather than merely follow a pre-set program. These so-called intelligent devices are capable of collecting information from their environment, the processing of which will influence their operation.
Research efforts focus on new fields of application with a stronger concern for improving the learning and intelligence capabilities of today’s robots through sophisticated algorithms.
Robots are used in more and more fields, with constantly evolving characteristics: miniaturization, mobility, adaptability, working environment (land, sea, air), etc.
The first robots appeared in the 1960s, but for a long time they were confined to industrial use (automotive, aerospace, etc.). They were essentially manipulative robots installed in closed spaces (cages). They have multiplied and become more complex; some factories are now increasingly robotized (we can say that our cars are largely built by robots).
There has also been an increase in the automatic management of warehouses and logistics centers, with the use of mobile carts. In October 2019, Amazon opened its first robotized distribution center in France, covering 152,000 m². On a smaller scale, pharmacies install a robot at the back of their premises that stores boxes of medicines and delivers them directly to the dispensary counter.
Some production sites also use robots to replace humans in hostile environment interventions, as is the case in the nuclear industry.
The representation of the environment is an essential aspect of mobility, and the information from the various sensors (radar, cameras, tactile sensors, etc.) must be interpreted as well as possible. Location and navigation strategies, allowing a mobile robot to move to reach a goal, are extremely diverse, and account must be taken of possible unforeseen elements such as moving obstacles. Learning is part of the research related to mobile robotics.
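Localization and navigation strategies are a research field in their own right; as a toy illustration only (the grid size, obstacle positions and greedy strategy are all invented here), here is a sketch of how sensor hits can update an occupancy grid that a simple planner then avoids:

```python
# Toy sketch: a mobile robot marks sensor-detected obstacles in an occupancy
# grid, then takes a greedy step toward its goal that avoids occupied cells.
# Grid size, obstacle positions and the strategy are invented for illustration.

def update_grid(grid, hits):
    """Mark the cells reported as occupied by the sensors."""
    for (x, y) in hits:
        grid[y][x] = 1
    return grid

def next_step(grid, pos, goal):
    """Greedy one-cell move toward the goal, avoiding occupied cells."""
    x, y = pos
    gx, gy = goal
    candidates = []
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx, ny = x + dx, y + dy
        if 0 <= nx < len(grid[0]) and 0 <= ny < len(grid) and grid[ny][nx] == 0:
            candidates.append((abs(gx - nx) + abs(gy - ny), (nx, ny)))
    return min(candidates)[1] if candidates else pos  # stay put if boxed in

grid = [[0] * 5 for _ in range(5)]
update_grid(grid, [(2, 0), (2, 1), (2, 2)])  # a wall seen by the sensors
print(next_step(grid, (1, 0), (4, 2)))       # -> (1, 1): steps around the wall
```

Real robots replace this greedy step with full path planners and re-plan continuously as moving obstacles change the grid.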
Today, a large number of the developments concern the field of service robotics. A service robot operates autonomously, or semi-autonomously, to provide services useful for the well-being of humans or the proper functioning of equipment, excluding manufacturing operations. Although it still seems far away, the prospect of the massive arrival of robots in our daily lives is no longer science fiction.
Domestic robots intended for the general public made their appearance at the beginning of the 21st century, with, for example, vacuum cleaners (beginning in 2002 with the Roomba robot from the company iRobot). They are multiplying, especially in the field of leisure (toys, companion robots, etc.).
Another sector of service robotics is set to experience strong growth in the coming years: medical robots and more broadly all those robots for assisting medical personnel as well as the elderly or disabled (automated wheelchairs, motor assistance robots, etc.).
Delivery robots are appearing, and the distribution giants are preparing for them. In some healthcare facilities, a robot transports medications from the central pharmacy to the appropriate departments. Our favorite pizza may soon be delivered to us by a drone.
Finally, in addition to the complexity of tasks and environments, there is an essential factor in service robotics when operating in the presence of humans: how can we guarantee total safety for humans when robots are in their homes?
We have mentioned many types of robots, each of which is intended to perform more or less complex tasks and to replace, or at least help, the human hand in the execution of these tasks. But there is a considerable distance between the capabilities of these robots, even very sophisticated ones, and human capabilities. The dream of roboticists is to be able to create robots “in the image of humans”.
Roboticists would like their robot to have the motor skills of humans because they are essential in many applications. Industry needs robots that can kneel or bend like a human, move around by resting not only on their feet, but also on their knees, hands or elbows, just as we do when we crawl.
Humanoid robotics faces a huge challenge: it is a question of getting as close as possible to the capacities of a complex living being, their motor abilities, as well as their social capacities and, ultimately, their cognitive capacities. Human appearance, which is what is put forward for publicity reasons, is neither the most important nor the most complicated point for roboticists.
There are many public-facing jobs in which appearance counts for a lot, and we prefer that the robot welcoming us look like a human rather than an assembly of metal parts. This is true, for example, of the hotel receptionist, who must be welcoming, the TV news presenter or the museum guide. Appearance, the quality of communication and the ability to express emotions all become important criteria.
Cognitive abilities are by far the most complex. Today, we can tell the robot to “go get a bottle of water from the fridge”; tomorrow, we can tell it to “make me lunch”. We will be able to endow robots with a certain intelligence (learning, ability to adapt to the unexpected), but we will still be very far from a robot capable of feeling, opinion, creativity, living in a community, etc. (see section 5.2).
The main obstacle to the widespread use of these robots will not be technological or economic but social. These robots will have to be able to express emotions and react in a way close to that of a real human to be accepted in our daily life.
In addition to technical questions (locomotion, relationship with humans, intelligence, appearance, autonomy, ability to adapt to the unexpected, etc.), there are also philosophical questions (can humans accept being accompanied by robots that resemble them? Does a humanoid have the same responsibility as a human?).
Jobs and skills will continue to evolve, but the human element will always be present. Cobotics (a neologism derived from the words “cooperation” and “robotics”), or collaborative robotics, aims to develop robotic technologies in continuous interaction with humans. The robot is no longer intended to replace a human, but to help him/her, to assist him/her. The human remains focused on the most complex part of the task requiring dexterity, perception, analysis, learning and experience. Thus, a cobot is a robotic device designed, manufactured and used to interact with a human operator.
The fields of application of cobotics are varied: it is very present in industry, but it is also an important perspective in health (surgery, rehabilitation, assistance and substitution), home automation, the military field and training.
Research focuses on the safety and efficiency of human–robot interaction (HRI) and on new cobot architectures, from force amplification systems that add power and endurance to human gestures, to exoskeletons (articulated, motorized equipment attached to the body via the legs and pelvis, or even the shoulders and arms, to facilitate movement by adding the force of electric motors).
Robots were first designed to operate in environments where all parameters were precisely controlled, which was the case in an assembly line in the automotive industry. These systems were unable to cope with changes in task and environment structure without reconfiguration or reprogramming. Today, robots can evolve in dynamic spaces in which they interact with other robots and humans.
The question of communication arises, just as it arises between humans in everyday life. If I do not know the language of my interlocutors, if I am visually (or hearing) impaired, my ability to interact is more limited, unless ad hoc devices can compensate for this disability.
Let us take the example of cobots or, more generally, industrial robots when communication with humans is essential. What are the main modalities of an HRI?
Environments linking several robots and several humans exist (warehouses, etc.) and will multiply. Communication will be even more complex.
The development of robots and their ability to replace us in certain tasks can worry us.
A primary concern is that robotization would create fewer jobs than it would destroy. The OECD (Organisation for Economic Co-operation and Development) estimated in a May 2019 report that about 14% of existing jobs are at high risk of disappearing through automation over the next 20 years. But many studies tend to show that job creation compensates, at least partially, for the jobs destroyed and that the jobs created are better qualified and better paid (which cannot satisfy those who have lost their jobs).
In several countries, automation technologies are seen as a solution to population decline. This is the case in Japan, and also in Germany where an annual net immigration of 400,000 people would be needed over the next two decades to compensate for the natural decline of the working-age population. Even if automation and AI progress, human intervention, in one way or another, will still be necessary. Some companies such as Toyota have reintroduced humans into production alongside robots in a continuous quality improvement process.
Another problem seems to be of concern: military applications and more specifically “killer drones”. This must be the subject of a real democratic debate. The decision to kill a human should not be made by an algorithm.
Can robots escape the control of humans, or even take power? For me, the answer is no. This is a fantasy that probably relies on imagery from American science fiction films.
Robotics is a multidisciplinary science that mobilizes many research teams, public laboratories and companies. Here are some lines of research:
My years at Irisa allowed me to work with researchers and engineers who develop software and deploy applications based on what we call virtual reality (VR for short). It seems useful to me to reposition this scientific and technological field, and show that it is complex and has multiple facets.
This term has become fashionable with the widespread distribution of video headsets. But watching a 360-degree film or video with such a headset is not considered VR by researchers, because this configuration lacks several components, chiefly the possibility of interacting directly with the content.
Here is a definition: “Virtual reality is the set of sciences and technologies that allow a user to feel present and interact in an artificial environment. Thus the purpose of virtual reality is to allow one or more users a sensory-motor and cognitive activity in an artificial world, created digitally, which can be imaginary, symbolic or a simulation of certain aspects of the real world.”
Two concepts are at the basis of VR: immersion, that is, the use of stereoscopy, eye tracking and other techniques to give the illusion that one is inside a synthetic landscape, and interaction, that is, the possibility for the user to move around and inside the modeled objects, be it a molecule or an entire city, and to interact with these objects.
The very first immersive VR system dates back to the 1950s with Morton Heilig’s Sensorama, which already allowed the display of stereoscopic images, sound, smell and motion effects for a multi-sensory user experience.
Ivan Sutherland proposed the concept of the Ultimate Display in 1965 and, by 1970, a first prototype of a visualization headset whose display was controlled by the wearer’s head movements. He is the co-founder, with David Evans, of the company Evans & Sutherland, specialized in graphics software and simulation.
In 1983, Jaron Lanier and Thomas Zimmerman invented the Dataglove, a fabric glove with optical fibers that let more or less light through depending on the angle when bending the fingers; in 1984, they founded the company VPL, which designed and marketed the first complete VR equipment.
In 1984, Michael McGreevy piloted the Virtual Workstation program at NASA in preparation for the exploration of the planet Mars. This NASA program was continued by Scott Fisher, another major VR inventor, with, in particular, the integration of several interfaces: VPL gloves, HMD (head-mounted display), 3D audio system, etc. Scott Fisher then founded Telepresence Research, a company specializing in consulting and implementation of virtual environment and telepresence systems for industry and leisure.
At the end of the 1990s, VR began to convince large industrial groups, for example, in the automotive or aeronautics sectors. Since then, VR applications have become affordable for small and medium enterprises.
The CAVE (Cave Automatic Virtual Environment) is undoubtedly the solution offering the most impressive VR immersion.
A user can move around within a room-sized cube whose six faces are back-projected screens. On these screens are projected two images presenting two points of view of the same scene, slightly shifted by the interocular distance. Appropriate glasses associate one point of view with each eye, offering the user omnidirectional stereoscopic vision. Other devices such as data gloves or haptic arms (force feedback arms) allow the user to interact with the environment. A set of algorithms and software provides a realistic, interactive physical simulation that immerses the user in the 3D virtual environment with which he/she interacts.
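The two shifted viewpoints can be sketched with a little vector arithmetic. This is an illustrative toy only (the flat 2D layout and the numeric values are assumptions; real CAVE software also handles projection matrices and head tracking):

```python
# Toy sketch in the horizontal plane: the two stereo viewpoints are the head
# position shifted by half the interocular distance along the "right" axis,
# perpendicular to the viewing direction. Values are invented for illustration.
import math

def stereo_eyes(head, forward, ipd=0.065):
    """Return (left, right) eye positions in meters for a viewer at `head`
    looking along the 2D direction `forward`."""
    fx, fy = forward
    norm = math.hypot(fx, fy)
    fx, fy = fx / norm, fy / norm
    rx, ry = fy, -fx              # "right" = forward rotated by -90 degrees
    half = ipd / 2
    hx, hy = head
    left = (hx - rx * half, hy - ry * half)
    right = (hx + rx * half, hy + ry * half)
    return left, right

left, right = stereo_eyes(head=(0.0, 0.0), forward=(0.0, 1.0), ipd=0.06)
print(left, right)   # eyes at x = -0.03 m and x = +0.03 m
```

Rendering the scene once from each of these two positions, and routing each image to the matching eye through the glasses, is what produces the stereoscopic depth effect.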
Developed at the University of Illinois in 1992, the CAVE was the world’s first VR technology allowing multiple users to immerse themselves in the same virtual environment at the same time. However, this type of system is not only complex to configure but also very expensive.
To meet (in part) the constraints of cost and space, an intermediate system has been developed: the SAS Cube, consisting of a floor projection surface and three vertical projection panels.
The workbench is one of the lightest configurations based on projections on large screens. It is composed of two large rear-projected screens that form an L. Ideal for interactive manipulation, this configuration offers semi-immersive visualization. It fits into the work environment like a drawing table. The projection, stereoscopy and head movement recording technologies are the same as for a CAVE.
They are used for scientific visualization, as well as in the automotive industry.
Opaque headsets were the first and virtually the only VR configuration in existence until the early 1990s. While military applications have long been instrumental in advancing this technology, it is now being used in a variety of civilian applications. Visualization is carried out on two small screens, each placed in front of one eye.
Optical see-through headsets superimpose the virtual on reality thanks to an optical system that lets reality be seen through a semi-transparent display. Unlike projection-based virtual environments, reality is here necessarily behind the virtual and therefore cannot occlude it.
The dataglove is a sensor-filled glove that allows a user to almost naturally grasp a virtual object and manipulate it, by digitizing the hand’s movements in real time.
A force feedback (haptic) arm allows users to design, model and manipulate objects in a virtual environment with tactile (touch) and kinesthetic (force feedback) perception.
The 3D mouse is a pointing device with six degrees of freedom: three translations and three rotations. Compared to the traditional mouse, which translates a two-dimensional input movement, the 3D mouse adds depth.
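As a toy illustration of the six degrees of freedom (the event format and units are invented for the example), a handler that accumulates a 3D mouse’s six input deltas into an object pose might look like:

```python
# Toy sketch: a 2D mouse event carries (dx, dy); a 3D mouse event carries six
# values. This hypothetical handler accumulates them into an object pose
# (position in millimeters, orientation as Euler angles in degrees).

class Pose:
    def __init__(self):
        self.position = [0.0, 0.0, 0.0]      # x, y, z translations
        self.orientation = [0.0, 0.0, 0.0]   # yaw, pitch, roll rotations

    def apply_3d_mouse_event(self, dx, dy, dz, dyaw, dpitch, droll):
        """Accumulate the six degrees of freedom of one input event."""
        for i, d in enumerate((dx, dy, dz)):
            self.position[i] += d
        for i, d in enumerate((dyaw, dpitch, droll)):
            self.orientation[i] = (self.orientation[i] + d) % 360

pose = Pose()
pose.apply_3d_mouse_event(1.0, 0.0, -2.0, 90.0, 0.0, 0.0)   # push and twist
pose.apply_3d_mouse_event(0.0, 0.5, 0.0, 300.0, 0.0, 0.0)   # keep twisting
print(pose.position, pose.orientation)  # [1.0, 0.5, -2.0] [30.0, 0.0, 0.0]
```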
The applications of VR are numerous and in full expansion.
VR has made it possible to reconstitute many buildings that have disappeared partially or totally, for example, abbeys (Clairvaux, Cluny, etc.). The 3D reconstruction of the Boullongne, a ship of the East India Company launched in 1758, based on plans and historical data, allows historians to discover the real living conditions on board.
With 3D glasses on your face, you can survey the deck and holds as if the ship were real, and even take the helm to steer the boat and climb the mast. This work was done in the SAS Cube of the Immersia VR room (Irisa/Inria) of the Beaulieu university campus in Rennes.
A virtual environment can facilitate the education of nurses, taking into account interpersonal relationships and cognitive aspects. Multidisciplinarity is very common in the design of VR applications. VR can be very useful for surgical or dental simulation, as well as for re-education and rehabilitation.
We can also think of applications in psychotherapy. VR techniques have been tested and evaluated in order to treat certain phobias: fear of spiders, vertigo, fear of flying, social phobia, etc. The patient is subjected to dynamic and interactive 3D stimuli, and his/her cognitive, behavioral and functional performance can then be evaluated and treated.
In various fields, such as automotive and aviation, engineers use VR for engine and part design, reducing testing. VR thus complements modeling and simulation software.
Operators can use the same technology to practice before they start using a new machine, saving time on the production line. The entire workstation is modeled in a realistic 3D that immerses the learner and can integrate the simulation of situations such as incidents, anomalies, etc.
Other examples of VR applications include the following:
Augmented reality (AR) aims to increase the perception of an individual by adding elements in his/her field of vision that allow him/her a better understanding of his/her environment.
While VR is based on the creation of an environment, AR starts from a real environment: it involves visualizing a real image (most often the user’s immediate physical surroundings) on which virtual objects are superimposed. AR embeds or superimposes virtual still or animated images, in real time, onto the real scene of a video stream captured by a camera. The term AR is not entirely accurate, because it is not reality that is augmented but the user’s perception. These are therefore two different approaches, despite the proximity of the two names. A particular difficulty of AR is the constraint of positioning these virtual objects perfectly inside the real images (we speak of tracking technologies, or position tracking).
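The registration problem can be hinted at with the standard pinhole camera model: once tracking has estimated the camera pose, every virtual point must be projected with the camera’s parameters so the overlay lands on the right pixels. A minimal sketch with assumed intrinsics (the focal length and image center below are invented values):

```python
# Minimal sketch of the registration step, with invented camera intrinsics:
# a virtual 3D point expressed in camera coordinates (meters, Z pointing
# forward) is projected onto the video image with the pinhole model
# u = f * X/Z + cx, v = f * Y/Z + cy, so the overlay tracks the real scene.

def project(point, f=800.0, cx=640.0, cy=360.0):
    """Project a 3D point to pixel coordinates in a 1280x720 image."""
    x, y, z = point
    if z <= 0:
        return None          # behind the camera: nothing to draw
    return (f * x / z + cx, f * y / z + cy)

# A virtual label anchored 2 m in front of the camera, 0.5 m to the right:
print(project((0.5, 0.0, 2.0)))   # -> (840.0, 360.0)
# The same label twice as far away lands closer to the image center:
print(project((0.5, 0.0, 4.0)))   # -> (740.0, 360.0)
```

As the camera moves, tracking re-estimates the pose many times per second and the projection is recomputed, which is why any tracking error immediately shows up as the virtual object “drifting” off its real anchor.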
AR and VR applications overlap in part. For AR, these include, in particular:
The advent of smartphones and tablets has made the miniaturization of these devices possible: camera, screen and embedded computing have enabled the development of truly mobile and relevant applications. AR can pose a safety problem: for example, a cyclist wearing AR glasses can be distracted by the information displayed on the screen and risk an accident.
In 1946, the World Health Organization (WHO) defined health as follows: “A state of complete physical, mental and social well-being and not merely the absence of disease or infirmity.” As health is a key societal and economic issue, health actors have sought to use information and communication technologies to improve our well-being and the effectiveness of the care provided by health professionals. As this is a very vast field, we will distinguish here between medical informatics, which is therefore linked to illness, and the contributions of informatics to daily life.
E-health (or digital health) refers to “the application of information and communication technologies to all health-related activities”.
Embedded technologies in medical devices have led to significant improvements in diagnosis and care processes.
Medical imaging, discussed in Chapter 5, is probably the oldest computer technology used in medicine; it has progressed considerably over the last 20 years. Discovered more than a century ago, radiography uses X-rays. Ultrasonography explores the inside of the body using ultrasound. The scanner also uses X-rays; it sweeps the area to be explored and reconstructs “slices” of the body. Magnetic resonance imaging (MRI) makes it possible to visualize details that are invisible on standard X-rays, ultrasound or CT scans. And there are many other techniques! The images can be computer-processed to obtain a 3D representation of an organ, an animation showing its evolution, etc. They allow a better diagnosis and facilitate surgical interventions when necessary.
Robot-assisted surgery, introduced in the 1980s, is now recognized for its great advantage and is spreading in hospitals. A surgical robot is essentially a system to assist the surgeon’s gesture; it can be coupled with a medical imaging system. The instruments are directed with extreme precision and the robot can be used to facilitate access to difficult areas, limiting the risk of complications and allowing a faster recovery for patients. Remote telesurgery operations are now possible using sophisticated robots during surgical procedures where doctor and patient are in different locations. Physician expertise and robot technology can be combined using VR and sensors.
Telemedicine connects one or more healthcare professionals to each other or to a patient. It covers several types of acts: teleconsultation allows a medical professional to give a remote consultation; tele-expertise allows a medical professional to seek the advice of one or more professionals remotely; telemonitoring allows a medical professional to remotely interpret data collected where the patient lives; and medical teleassistance allows a medical professional to remotely assist another health professional during the performance of an act. Telemedicine is particularly useful for people who live far from health professionals.
The Vitale health card, the first version of which dates back to 1998, is a smart card certifying citizens’ rights to French health insurance. It contains only administrative information. In 2019, the French Ministry of Health announced the launch of a trial of the Health Insurance e-card, an application that can be downloaded onto a smartphone or tablet.
The addition of a medical record to the card had been considered, but was abandoned in favor of a more ambitious project, the Shared Medical Record (dossier médical partagé, DMP), a digital health record operational since 2011. Confidential and secure, it stores health information online (care history, medical history such as allergies, test results, hospitalization reports, etc.). It enables this information to be shared with the attending physician and all the healthcare professionals who take care of the patient, even in hospital. It can be consulted on a website or via a smartphone application.
Healthcare professionals have access, via the Internet, to many sources of information. But we can all find documentation on health problems that concern us by using search engines.
In all countries, health data is growing exponentially; it is the Big Data phenomenon we have already spoken about. These data have very different typologies (clinical, biological, social, behavioral, demographic, etc.) and also very different formats (text, numerical value, signal, 2D and 3D images, genomic sequence, etc.). Finally, they come from a variety of sources: medical records, clinical trials, administrative databases, patient data (connected objects, applications), social networks, etc. The implementation of coherent databases is an important issue for all healthcare stakeholders. Data Mining or artificial intelligence methods can be used to analyze these large amounts of data, for example, for research organizations, healthcare manufacturers, epidemiological surveillance or diagnostic support.
In France, the Système national des données de santé (SNDS), effective since April 2017, brings together the main existing public health databases. The SNDS aims to improve knowledge on medical care and to broaden the scope of research, studies and evaluations in the field of health. It connects health insurance and hospital data, medical causes of death and data relating to disability.
The creation of the Health Data Hub, planned for 2021, is part of a dynamic of enrichment of the SNDS. Its objective is to promote usage and multiply the possibilities for exploiting health data. It will enable the development of new techniques, in particular those related to artificial intelligence. It will also have a role in promoting innovation in the use of health data. The structuring of digital health data and their semantic coding is one of the key elements of the Health Data Hub’s offering.
Our health can benefit daily from information technology.
Connected objects can help people know themselves better and monitor and improve their health. The most common are watches and bracelets that measure the number of steps taken and kilometers traveled, the speed and type of movement, the level of sun exposure, heart rate, blood pressure, etc. Others, more sophisticated, help people who are ill or have just undergone surgery: a pillbox that sends an alert (sound, SMS) if you forget to take your treatment; connected patches that allow Alzheimer’s patients to be geolocated; devices that detect the fall of a dependent person and send an alert. More recently, there have been trials of drugs containing a sensor in the pill that emits a signal when ingested, making it possible to know when the patient has taken his or her treatment. There is no shortage of conceivable applications.
Robots make life easier for elderly and/or dependent people who wish to stay at home as long as possible, as we have already seen. They can also simplify hospitalization at home, and intervene in the treatment of illnesses such as autism, as various experiments have shown.
Another example of the contribution of IT in the health field is the use of AR glasses allowing visually impaired people to regain some independence.
The connection of all these devices is of course essential because it allows the collection of data, some of which can be used by medical staff.
Computers appeared in vehicles in the late 1980s. Today, the automotive industry is one of the sectors that make extensive use of on-board technologies, combining pure computing (programming, design of applications) and electronics (sensors, interfaces, etc.). On average, there are between 40 and 60 computers in cars, and up to 80 for high-end models, integrating data transmission systems. A mid-range car can carry about 150–200 million lines of code.
Engines, temperature sensors, vehicle air conditioning, on-board controls, navigation devices, radar (for reversing and more), braking assistance (ABS), voice recognition, permanent vehicle diagnostics, leisure equipment and park assist are increasingly common. The passenger compartment should also become more intelligent and more comfortable (air purity, temperature, lighting, ambient fragrances) depending on the conditions detected: pollution, heat and the mood of the occupants.
The first tests of an autonomous (and therefore driverless) car date back to the late 1970s. In the 1980s, research labs specializing in robotics tested prototypes, but it was not until 2010 that the subject hit the media headlines, when Google announced that it was working on this technology, first with modified production vehicles and then with the Google Car designed entirely by Google. Other companies have embarked on projects, such as Uber or the manufacturer Tesla. A few accidents, including two fatalities, made the headlines, but we forget the thousands of deaths on the roads each year in France alone! Now the time has come to focus on autonomous shuttles operating in a secure environment, away from traffic, even if experiments are still being carried out on open roads.
This is a huge market and all the big names in the IT and automotive industries are working in this sector, not to mention a myriad of start-ups. The connected car is not yet a standalone car; there is still a lot to be done and the manufacturers are getting ready, forging alliances among themselves and with the GAFA.
There are generally six levels of autonomy, from level 0, where the human driver does everything, through increasing degrees of assistance (cruise control, lane keeping, supervised autonomous driving on certain roads), up to level 5, where the vehicle drives itself in all circumstances without any human intervention.
We have mentioned several times the growing importance of electronics, software and communication systems in vehicles, especially in cars. For a vehicle to be able to circulate in a totally autonomous way, without any human intervention, it must be able to 1) perceive its environment, 2) analyze and interpret the data it receives, 3) make the right decisions on how to drive the vehicle, all this with 4) guaranteed operational safety.
The autonomous vehicle must be able to identify all fixed or mobile objects in its environment (signs, pedestrians, other vehicles, etc.), to predict the movement of mobile objects, to build a map of its environment and to locate itself within it. To do this, the car must be equipped with a multitude of sensors: cameras operating in the visible and infrared, radars, lasers such as lidars (light detection and ranging) and ultrasonic sensors. It is of course essential to process the data from these sensors in real time and to merge them, because the information given by sensors of different physical natures is complementary, and fusing it yields more relevant information.
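One classic way to merge readings from heterogeneous sensors is inverse-variance weighting, where each sensor’s estimate counts in proportion to its reliability. A minimal sketch with invented figures (real systems use full Kalman-style filters over many state variables):

```python
# Minimal sketch with invented figures: distance estimates of the same
# obstacle from three different sensors are fused by inverse-variance
# weighting, so the most reliable sensor contributes the most to the result.

def fuse(measurements):
    """measurements: list of (value, variance) pairs. Returns the fused value."""
    weights = [1.0 / var for _, var in measurements]
    weighted_sum = sum(w * v for w, (v, _) in zip(weights, measurements))
    return weighted_sum / sum(weights)

# Distance (in meters) to the car ahead, as seen by three sensors:
readings = [
    (25.4, 0.04),   # lidar: very precise
    (25.9, 0.25),   # radar: less precise, but robust to bad weather
    (24.8, 1.00),   # camera depth estimate: noisiest
]
print(round(fuse(readings), 2))   # -> 25.45, dominated by the lidar
```

This also illustrates why the fusion is more than averaging: when fog degrades the lidar, raising its variance automatically shifts the weight toward the radar.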
Computer software will give meaning to the data collected. This software has first undergone a learning phase, using deep learning technologies, to be able to correctly analyze the external environment and recognize, for example, a face or understand a road sign. It has also learned to memorize numerous scenarios. Methods have been developed to refine the location of the vehicle with an accuracy of the order of a meter or even a decimeter.
Depending on the result of the analysis of the data by the software, the fully autonomous car has to make a driving decision. Here too, the software plays a central role in choosing the route to be taken or the maneuver to be carried out (braking, etc.).
The operational reliability of all the components mentioned is of course essential in this context. It relies first of all on sensor redundancy (any single sensor can fail). As software is at the heart of decisions, it must be validated by formal proof methods (we refer to Chapter 3, devoted to software). Finally, communications between the car, other vehicles and the environment must be efficient and reliable, taking into account the diversity of networks (cellular, Wi-Fi, Bluetooth, etc.).
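Sensor redundancy can be sketched as a simple voting scheme: a reading that disagrees strongly with the others is assumed faulty and discarded. The tolerance and example values below are illustrative assumptions, not a real safety mechanism.

```python
# Sketch: consolidating three redundant readings of the same quantity.
# A reading far from the median is treated as a sensor fault and dropped;
# the remaining readings are averaged.

from statistics import median

def vote(readings, tolerance=0.5):
    """Return (consolidated value, list of rejected readings)."""
    m = median(readings)
    valid = [r for r in readings if abs(r - m) <= tolerance]
    rejected = [r for r in readings if abs(r - m) > tolerance]
    return sum(valid) / len(valid), rejected

value, rejected = vote([12.1, 12.3, 47.0])  # third sensor is faulty
print(value, rejected)  # the 47.0 outlier is rejected
```

Real systems combine this kind of voting with diagnostics and fallback behavior; the sketch only shows the basic idea of tolerating one faulty sensor among several.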
The arrival of the autonomous vehicle will have advantages and disadvantages, and will have an impact on society, for example, on the organization of urban space.
Everyone agrees that the number of accidents should fall, since an autonomous vehicle will not drive under the influence of alcohol, fall asleep at the wheel, speed, etc. The road network should be safer and traffic should flow more smoothly, although the road infrastructure will need to be adapted. Less time will be lost for those who spend long hours in their cars, especially in traffic jams. Carpooling will be facilitated, as autonomous vehicles can easily pick up their users, thus reducing overall energy consumption.
The arrival of the autonomous car brings not only advantages but also challenges. How will the large amounts of data produced by these systems be secured and used? Who will own the data collected? Who will be able to know that I went on such and such a day, at such and such a time, to such and such an address? It will also affect the labor market: truck, bus and cab drivers are directly concerned, because their jobs could disappear in the long term. The question of cyber security will arise: hackers can already capture the signal emitted by car key fobs at a distance (within a radius of about 10 meters) and use it to break into, or even steal, the car; with autonomous cars, computer hacking, targeted at a single vehicle or an entire fleet simultaneously, will be a major risk. The transfer of liability from the driver to the manufacturer(s) of the autonomous vehicle’s components will be a question for lawyers and insurers.
And, finally, how will the public receive it? The extra cost will be high (several tens of thousands of euros, at least initially). Are we ready to entrust ourselves to a vehicle over which we have no control? More simply, the pleasure of driving, real for many people, will disappear.
Massive urbanization poses many problems, both for those in charge of cities and for their inhabitants. The aim of the smart city is to improve the quality of life of city dwellers and to reduce costs and energy consumption by making the city more adaptive and efficient, using new information and communication technologies. The concept is not new: the pioneering cities in this field are the megacities of Asia, such as Hong Kong and Singapore. Since then, hundreds of cities around the world, including in France, have launched programs with this objective.
New information and communication technologies (home automation, smart sensors and meters, digital media, information devices, networks, etc.) will be at the heart of the city of tomorrow. This concept of the smart city is very global, and each city can focus its intelligence on aspects such as energy savings, public transportation, innovative projects, communication between citizens and their elected officials, etc. There is no single model for a smart city, because all cities draw on their history, geography and multiple specificities.
We will now give some concrete examples of the role of information technology in innovations that can make cities smarter and more at the service of their citizens. The reader interested in a global view of this subject may consult Francis Pisani’s report “Voyage dans les villes intelligentes : entre datapolis et participolis5”.
Saving energy and reducing the carbon footprint are among the objectives of all cities. Better energy management is possible: diversifying energy sources and managing them in a global, optimized way thanks to a smart grid, and taking advantage of local resources. Every territory has natural energy sources of its own; wind, sun, waves, ground heat and biomass are free energies just waiting to be exploited.
Some cities are installing smart street lamps. Equipped with LEDs instead of conventional bulbs, they consume less. Sensors detect the proximity of pedestrians: when the streets are empty, the brightness is dimmed to save energy. These lamps can also collect information on air pollution and noise levels, and provide Wi-Fi access to passers-by. The information gathered is centralized and allows for better overall management.
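The dimming logic of such a street lamp can be sketched very simply; the brightness levels, detection range and sensor interface below are illustrative assumptions, not a real product specification.

```python
# Sketch: a smart street lamp runs at full brightness when a pedestrian
# is detected nearby and drops to a low standby level otherwise.

def brightness(pedestrian_distance_m, detection_range_m=10.0,
               full=100, standby=20):
    """Return LED brightness (percent) given the nearest pedestrian distance.

    pedestrian_distance_m is None when no pedestrian is detected.
    """
    if pedestrian_distance_m is None:        # empty street
        return standby
    if pedestrian_distance_m <= detection_range_m:
        return full                          # someone is close by
    return standby

print(brightness(4.0))    # pedestrian nearby: full brightness
print(brightness(None))   # nobody around: dimmed to save energy
```

In a deployment, the same controller would be fed by the proximity sensor and could ramp brightness gradually rather than switching between two levels.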
The Internet of Things and data analysis can be used to optimize energy consumption inside public buildings, for example, by adapting lighting and heating to the lifestyle of their occupants.
The applications of computing are numerous, from the individual home to office buildings or factories, and we have mentioned several with the Internet of Things or robots.
We can say that home automation brings together technologies in the field of electronics, information and telecommunications, designed to make a house smarter. It provides functions related to comfort, energy management, home security, etc.
Let us imagine a smart house in a not-so-distant future. I have been woken up by soft music, with the room’s atmosphere adjusted just right. The shutters have opened automatically. I take my shower, whose flow and temperature are adapted to my wishes (these settings will be different for my wife). Meanwhile, breakfast has been prepared and I can watch the news I have programmed (weather, national and international headlines). Once dressed, I find my car (perhaps autonomous), which has also been prepared (temperature, atmosphere); the garage has opened and will close again as soon as I am gone. Once the last inhabitant has left, the heating will be adjusted to optimize energy consumption and the alarm system will be activated. The cleaning will then be done by robots, and the washing machine will choose the right program by analyzing the laundry it contains. I can make sure that my children have arrived at school, and I will be able to check that they came home on time. If something happens in the house, I will be informed in real time on my smartphone. A drone delivers my lunch at the exact time I choose. When I get home, the garage opens; the security system was deactivated when the first member of the family arrived. The groceries have been delivered and the house has switched to “evening mode”, with a warm atmosphere (lights, temperature, music). After dinner, sophisticated multimedia systems allow everyone to do what they want: watch a movie, play with the family or online, chat with a friend, etc. When I decide to go to bed, my environment adapts, unused rooms are switched off and the appropriate alarms are activated.
Of course, some of these functions are also useful in any type of building. You can also imagine what life can be like in the office, what life can be like for the children at school, how other activities such as sports will be carried out, etc. Science fiction? Not totally, because some of these functions already exist. And, do we want to experience this type of scenario?
The first question that arises concerns architecture and urban design. What kind of urbanization do we want? Do we give carte blanche to real estate developers and companies? This question arises as soon as a city plans the renovation of a neighborhood. It is a question of inventing new ways of working together and designing the city. Home, employment, shops and leisure activities must be approached in a coherent way, limiting passenger transportation and thus reducing the energy and environmental impact of mobility. We will discuss mobility again in section 6.6.
This equipment must be accessible to everyone, with smartphones making it possible to locate it and to find the best way to reach it given each person’s constraints.
Networks (water, sewers, etc.) can be better managed thanks to sensors that detect leaks and alert maintenance departments. Waste collection services know in real time how full the containers are, again thanks to sensors, which allows them to optimize their rounds.
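The optimization of collection rounds can be sketched as follows: only containers above a fill threshold are visited, ordered here by a greedy nearest-neighbor heuristic (real systems use more sophisticated routing). The coordinates, threshold and depot position are illustrative assumptions.

```python
# Sketch: planning a waste-collection round from fill-level sensor data.
# Containers below the threshold are skipped; the rest are visited in
# greedy nearest-neighbor order starting from the depot.

import math

def plan_round(containers, threshold=0.7, depot=(0.0, 0.0)):
    """containers: list of (name, (x, y), fill_ratio). Returns visit order."""
    to_visit = [c for c in containers if c[2] >= threshold]
    route, pos = [], depot
    while to_visit:
        nearest = min(to_visit, key=lambda c: math.dist(pos, c[1]))
        route.append(nearest[0])
        pos = nearest[1]
        to_visit.remove(nearest)
    return route

bins = [("A", (1, 0), 0.9), ("B", (5, 5), 0.3), ("C", (2, 1), 0.8)]
print(plan_round(bins))  # B is only 30% full and is skipped
```

Skipping half-empty containers is where the savings come from; the routing heuristic only orders the remaining stops.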
A virtual public service assistant can help and respond to constituents who may encounter problems in the course of their day-to-day activities, such as renewing their identity card or registering for school.
Urban signage is an effective means of communicating with citizens: technologies now make it possible to tailor the information displayed to characteristics such as the age, gender and interests of the people viewing the sign.
Communication between citizens and city officials can be made easier, in particular thanks to smartphones: reporting an incident, receiving an alert on the air quality in a particular neighborhood according to the person’s vulnerability, etc.
The fight against noise pollution is facilitated by the installation of sensors, for example, on street lamps, whose signals are continuously analyzed. Personal safety is also part of governance for local authorities. Camera networks are only one facet, and the analysis of the numerous data collected (history of events, messages from inhabitants, local contexts) should enable the anticipation of possible difficulties.
Does the smart city herald a multi-speed city? The increasing digitization of cities risks creating new inequalities and increases the risk of exclusion for certain groups of people, caused by the dematerialization of urban services and administrative procedures.
Smart cities raise other issues, such as data governance and privacy. The smart city is driven by data. Optimizing urban management, inventing new services and responding to the individual needs of residents relies on the collection, storage and processing of increasingly massive amounts of data. To a large extent, even if the data is subsequently anonymized or aggregated, it is personal data. The implementation of an ethical charter to establish rules for data exchange and sharing is necessary. It is a matter of putting in place the necessary safeguards to ensure that this evolution towards smarter cities is in the interest of all and not just for the benefit of a few.
This also raises the question of cyber security. Because of the growing number of devices (connected objects, etc.) and data traffic, smart cities are exposed to many potential security breaches that can impact not only their urban infrastructures, but also hospitals, transportation systems or all kinds of structures they manage.
Whether in the city or in the countryside, mobility is too often associated with congestion, the difficulty of finding efficient means of transport, pollution, the cost of individual transport (vehicle, fuel, tolls) or public transport, wasted time, and so on. People who are isolated, elderly or disabled are finding it increasingly difficult to get around. Some large cities are introducing measures such as alternate traffic patterns or city tolls. Local authorities are seeking to improve the mobility of citizens; IT and telecommunications can help implement solutions leading to what is called smart mobility.
Limiting the construction of new infrastructure by optimizing the use and performance of existing transportation systems, improving road safety, enhancing service quality through real-time information, reducing inequalities by offering mobility opportunities to all, and protecting the environment: these are the main areas of application of ITS (Intelligent Transportation Systems). Many cities have launched projects using new information and communication technologies.
The Mobility 3.0 initiative, led by the ATEC ITS France association, which promotes exchanges between mobility professionals, expresses the desire of French players to take up the digital challenge and deliver on its promises in terms of traffic optimization, economic performance, respect for the environment, quality of life, the fight against climate change and road safety.
Ad hoc solutions exist. Traffic lights whose timing plans adapt automatically to traffic conditions help regulate flow, using sensors and communication with vehicles. Improving public transportation, including autonomous shuttles at sites where they add value, reduces the use of private cars. User information can also be improved: knowing in real time where I can find an available parking space, thus freeing up traffic and saving fuel, or where to find the nearest free terminal to recharge my electric vehicle. In Santander, in northern Spain, the city has created an AR application that lets anyone point their smartphone at a street to view the bus stops nearby, the lines that stop there and the time before the next bus arrives.
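The principle of an adaptive light plan can be sketched by allocating green time in proportion to the queue lengths reported by sensors, within fixed bounds. The queue counts and timing parameters are illustrative assumptions, not a real controller specification.

```python
# Sketch: splitting a fixed signal cycle between approaches in
# proportion to measured queue lengths, clamped to safe bounds.

def green_times(queues, cycle=60, g_min=10, g_max=40):
    """queues: {approach: vehicles waiting}. Returns green seconds per approach."""
    total = sum(queues.values()) or 1      # avoid division by zero
    times = {}
    for approach, q in queues.items():
        share = cycle * q / total          # proportional allocation
        times[approach] = round(min(max(share, g_min), g_max))
    return times

print(green_times({"north-south": 18, "east-west": 6}))
# the busier north-south approach gets the larger share of the cycle
```

A real controller would also coordinate neighboring intersections and handle pedestrian phases; the sketch only shows the proportional idea.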
Since the solutions proposed for cities are not easily transposable to the countryside, innovative ideas adapted to that particular context must be found.
The concept of MaaS (Mobility as a Service), born in 2014 in Scandinavia, is based on the principle of conceiving mobility as a service that allows people to go from point A to point B. Like a personal assistant, a smartphone application offers the fastest, cheapest or most comfortable routes, these routes being combinations of multiple modes of transportation, whether public, private or shared (autonomous cabs, metro, carpooling, self-service bicycles, etc.), all with a single subscription and platform, as is the case in Île-de-France with the Navigo card. This requires strong coordination between transportation operators (public or private, cabs, VTC, self-service bicycles, etc.) and companies that integrate their different services. One of the success factors of intermodality is information, and a predictive layer is needed in the applications to anticipate the availability of each mode and guide the user to the most relevant mode of transport at the time of request. Data and predictive algorithms will play a decisive role in making mobility ever more active, fluid and connected. Operators, integrators and a number of start-ups are working to make this concept a reality.
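The MaaS principle of offering the fastest or cheapest multimodal route can be sketched as ranking candidate door-to-door itineraries against the user’s preference. The routes, durations and prices below are illustrative assumptions, not real operator data.

```python
# Sketch: a MaaS application holds candidate routes, each combining
# several transport modes, and picks the best one for the user's
# stated preference.

routes = [
    {"legs": ["walk", "metro", "walk"],       "minutes": 35, "euros": 2.10},
    {"legs": ["bike-share", "train", "walk"], "minutes": 28, "euros": 4.50},
    {"legs": ["taxi"],                        "minutes": 22, "euros": 18.00},
]

def best_route(routes, preference="fastest"):
    """Rank routes by the chosen criterion and return the winner."""
    key = {"fastest": lambda r: r["minutes"],
           "cheapest": lambda r: r["euros"]}[preference]
    return min(routes, key=key)

print(best_route(routes, "fastest")["legs"])   # ['taxi']
print(best_route(routes, "cheapest")["legs"])  # ['walk', 'metro', 'walk']
```

The predictive layer mentioned above would, in practice, adjust the durations and availabilities of each leg before this ranking step.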
Industry 4.0, the factory of the future, the intelligent factory… So many terms to describe this new model of factory born from the 4th Industrial Revolution, following three major phases of evolution, also called revolutions: mechanization, driven by the steam engine (second half of the 18th century), mass production, driven by electrical energy (second half of the 19th century) and automation, supported by electronics and computers (second half of the 20th century).
The 4th Industrial Revolution reorganizes production processes around innovations linked to the Internet of Things and digital technologies. Industry 4.0 corresponds, in a way, to the digitization of the factory. The objectives are numerous: responding to consumers’ growing demand for personalized products, addressing current issues of resource and energy management, optimizing production cycles and making them more flexible, creating a logistics process capable of rapidly exchanging information with all the company’s partners, etc. What often goes unsaid is that reducing employment is also among these objectives.
The factory of the future is based on many interconnected technologies related to digital technology, which we have already discussed in previous chapters.
Robots will play an increasingly important role; they will be more autonomous and will communicate with each other and with humans. Cobots, or collaborative robots, will assist human operators. All are equipped with sensors and software.
The design of a new product will increasingly involve numerical modeling (structure, materials) and simulation of its behavior and qualities (resistance, etc.). Computer-aided design and manufacturing tools (CAD/CAM) are now widespread. This considerably reduces the time and cost of product development, often making it possible to skip the lengthy prototyping phase. The same modeling can be applied to the entire production process.
The factory of the future encourages a new form of collaboration, articulating vertical integration (integration of players throughout the value chain, from supplier to customer) and horizontal integration (reinforced collaboration between different departments, from marketing to quality control). Information sharing is therefore essential.
Connected objects, embedded in parts and machines and, more broadly, at every stage of the production cycle, provide a large amount of information that facilitates the monitoring of production rates, the reaction to incidents and machine maintenance.
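The maintenance side can be sketched as simple anomaly detection on a machine’s sensor stream: a reading far above the recent average triggers an alert. The window size, threshold factor and vibration values are illustrative assumptions, not real equipment data.

```python
# Sketch: comparing each new sensor reading with a rolling baseline of
# recent readings; a value well above the recent average is flagged as
# a possible sign that the machine needs maintenance.

from collections import deque

def monitor(stream, window=5, factor=1.5):
    """Yield (index, value) for readings far above the recent average."""
    recent = deque(maxlen=window)
    for i, value in enumerate(stream):
        if len(recent) == window and value > factor * (sum(recent) / window):
            yield i, value
        recent.append(value)

vibration = [1.0, 1.1, 0.9, 1.0, 1.1, 1.0, 2.4, 1.0]
print(list(monitor(vibration)))  # the 2.4 spike is flagged
```

Feeding such alerts into a maintenance dashboard is one simple way connected objects help anticipate incidents rather than merely record them.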
The Cloud enables users to benefit from the computing and storage power of remote computer servers, at lower costs than those of internal IT systems. It must ensure the security and integrity of all the data that can be transferred between the different systems.
Additive manufacturing, also known as 3D printing, allows the production of complex, custom-shaped parts in record time and with great precision.
AR can facilitate industrial maintenance. VR allows the development of manufacturing processes and facilitates the training of the personnel who will contribute to them.
All of these technologies require data and produce large amounts of data that must be able to be analyzed in real time. The role of data and communication systems is therefore essential in the factory of the future.
The objectives of Industry 4.0 are associated with several major issues.
Is this vision of the factory of the future optimistic? The issues just mentioned are not simple and correspond to challenges that will have to be met. But where do human beings fit into this vision?
Beyond its technical advances, Industry 4.0 will be marked by a complete disruption of the production process: the disappearance of medium-skilled jobs and their replacement by automated systems. The appearance of new professions (computer scientists, engineers, network experts, etc.) will not compensate for these losses and will have to be accompanied by intensive, long-term training. Work and its pace will be controlled by algorithms. There will be new modes of collaboration and cooperation between employees. The quality of social dialogue will be essential in this transition.
Today’s factory is also a social space in which the staff rub shoulders, exchange ideas in front of the coffee machine and build solidarity. What will the factory of the future be like?