5
Technology Building Blocks

It is important to understand the main technological trends that allow us to build the services we have today and that profoundly change our daily lives. We will detail some applications and uses in Chapter 6, as they often combine several of these technologies.

5.1. Embedded systems

The explosion of the Internet of Things (IoT) has given special importance to the technology of embedded systems: autonomous electronic and computing systems dedicated to a specific task, often operating in real time, with limited size and low energy consumption. They are called embedded systems because they are generally not visible. An embedded system is therefore not a traditional computer that can be used for many applications.

An embedded system is often a component of a larger system. It interacts with the external environment, retrieving information via sensors and acting on this environment via actuators to produce an action: an “information ↔ reaction” scheme. Embedded systems are numerous in transportation (automotive, aviation, etc.), electrical and electronic products (digital watches, telephony, television, washing machines, home automation, etc.), telecommunications, medical equipment, process control (production lines, etc.), smart cards, consumer toys, etc.
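To make this concrete, here is a minimal sketch, in Python, of the “information ↔ reaction” loop of a hypothetical thermostat; read_temperature() and set_heater() are made-up names standing in for real sensor and actuator drivers, not part of any specific library.

    import time

    TARGET = 20.0                        # desired temperature (degrees Celsius)

    def read_temperature():              # hypothetical sensor driver
        return 18.5

    def set_heater(on):                  # hypothetical actuator driver
        print("heater", "on" if on else "off")

    while True:                          # the endless loop typical of embedded code
        temperature = read_temperature()        # information: sense
        set_heater(temperature < TARGET)        # reaction: act
        time.sleep(1.0)                         # respect the control period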

The characteristics of an embedded system can be summarized as follows:

  • – it is generally built to perform a single task or function, although more general-purpose embedded systems are emerging;
  • – it must comply with certain constraints: small size, performance, reliability, cost;
  • – it must be able to react continuously, often within specified time frames, to changes in its environment (we discussed real time in Chapter 3);
  • – it has one (or more) microprocessor(s) or microcontroller(s);
  • – it has an internal memory (ROM) to store its software;
  • – it has input/output ports to communicate with the outside world;
  • – in many cases, it must be robust and reliable, in terms of both hardware and software, because its operation may affect the safety of people and property, security, or the accomplishment of missions;
  • – its energy consumption may be constrained in order to maximize autonomy.

The importance of these criteria varies from one area of use to another.

5.1.1. Specific architectures

Like any computer system, an embedded system is organized into three levels as described at the beginning of Chapter 3: hardware, operating system and application.


Figure 5.1. Basic architecture of an embedded system

The hardware level consists, in most cases, of one or more microprocessor(s) or microcontroller(s), integrated on a chip with memory and input–output ports, often with converters to switch from the analog to the digital world and vice versa. A simple system may need only an 8- or 16-bit microcontroller, but in some systems the processors can be very complex, to the point of having massively parallel architectures. More than 90% of the processors manufactured worldwide go into embedded systems, so this is a huge market.

The operating system should be as small as possible and highly reliable. Several examples of operating systems are given in Chapter 3.

The application must obey the same constraints: efficiency, reliability and compactness. Its development is closely linked to these constraints, the hardware and the operating system.

The design of an embedded system often uses co-design techniques, in which the hardware and the software are designed together for the functionality to be implemented. The steps are specification (an abstract list of the system's functionalities), hardware and software modeling, hardware/software partitioning, synthesis of the hardware and software (often leading to a system-on-chip), and testing.

Training courses exist to strengthen expertise in engineering connected objects, critical embedded real-time systems, security, joint software/hardware design and modeling of complex systems.

5.1.2. Some fields of use

The impressive growth in the number of connected objects (more than 80 billion in 2020, according to various sources) shows that the embedded systems industry is constantly producing new techniques and opportunities. Embedded systems can be found in transportation, homes and offices, health, communicating objects (starting with our telephones), industry, etc. Here is an overview of their place in a few areas; more will be found in Chapter 6.

5.1.2.1. Transportation

All means of transport use embedded systems: planes, cars, trains, subways and even electric scooters. We will talk about cars in Chapter 6 and will simply mention avionics here.

Avionics covers the techniques related to the electrical, electronic and computer equipment used to fly an aircraft. Embedded systems have radically changed the way an aircraft is piloted: the pilot no longer directly controls the separate elements of the aircraft (engines, ailerons, flaps), but controls the aircraft at a higher level of abstraction. Depending on its function, each of the aircraft's computers interacts with a number of sensors and actuators, by means of dedicated acquisition and command electronics.

The latest generation of aircraft, recently represented by the Airbus A350, has led to an accelerated increase in the number of application functions to be embedded: flight management, fuel management, anti-collision system, ground proximity warning, equipment monitoring to improve maintenance, cabin environment management, etc. There are now more than 100 computers and software representing tens of millions of lines of code in a modern aircraft.

The importance of software and sensors in airliners may raise questions. The two Boeing 737 MAX crashes in October 2018 and March 2019 involved the MCAS stall protection system and highlighted shortcomings in the development and verification of MCAS software. Humans, in this case the pilot, must be able to regain control of the machine.

5.1.2.2. Home automation

Household appliances have long been equipped with embedded systems: washing machines, induction hobs, refrigerators, etc. Home automation increasingly integrates connected objects, most often using wireless protocols such as ZigBee, Bluetooth or Wi-Fi. In the kitchen, connected appliances can make the different elements (hob and hood, for example) talk to each other. We can imagine a refrigerator that knows what it contains and alerts us that there is no more milk!

The living room is the preferred space for multimedia: TV, connected speakers, games consoles, wall-mounted video projectors, voice assistants, etc. Automatic openings and closings (gate, garage door, shutters, blinds) are increasingly manageable remotely. We can control security cameras and receive their images on our phones.

With connected lighting, the dream of controlling all your light bulbs by voice or from your smartphone and tablet becomes a reality. There is nothing like a Wi-Fi or Bluetooth light bulb to create a lighting ambience adapted to everyday life.

Energy management is also an interesting sector, especially if several energy sources are used (photovoltaic, heat pump, gas, etc.). We can optimize our consumption and energy needs for heating, air-conditioning, domestic hot water and lighting.

And there are, and will be, many other applications, especially with the objects we wear: phones with their multiple services, watches that record our heartbeat, etc.

5.1.2.3. Infrastructure monitoring

The maintenance of infrastructure such as large networks, or of engineering structures such as bridges and dams, requires regular monitoring work, the cost of which represents a high fraction of the maintenance budget. It is a matter of coping with wear and aging and of planning maintenance in the best possible conditions. Bridges are subject to vehicle traffic and climatic actions (frost, heat, wind, etc.). They are monitored, in particular by means of sensor networks, according to precise procedures, in order to prevent and diagnose possible problems and to plan the necessary repairs.

The monitoring and maintenance of offshore infrastructures involves the use of numerous sensors (pressure, vibration, corrosion, etc.) necessary to monitor their operating condition and wear and tear.

The French railway company SNCF deploys sensors on its infrastructure, catenaries and rails: data are collected in real time and processed to warn maintenance departments and even anticipate operations to be carried out on the network, which does not prevent us from experiencing delays due to various causes.

Industry makes massive use of embedded systems: for process control through production robotics, for monitoring possible anomalies, for securing the working environment of personnel, for energy management, for security (intrusion alerts), etc. Industrial sites with major accident risks, known as Seveso sites, are of course particularly concerned.

5.2. Artificial intelligence (AI)

5.2.1. A bit of history

What is called artificial intelligence has an increasingly important role in various applications. In recent years, it has been of interest to many scientific and economic players and has made the headlines.

This is a subject that has long interested me since I was one of the first students to complete a PhD in this field. It was in 1969 at the Faculty of Sciences in Paris. I was interested in the automatic demonstration of mathematical theorems, while others focused on games such as chess. The idea was to show that it was possible to design algorithms and write software that would allow a computer to find solutions in activities considered to be specific to human intelligence.

At that time, it was a real challenge to show that a computer could play chess correctly, that is, by respecting its rules. Since then, the performance of computers and the progress of research, especially in learning, have reached such a level that a computer can beat the best players.

The Larousse encyclopedia defines AI as “a set of theories and techniques implemented to create machines capable of simulating human intelligence.” We will see that this definition is too restrictive. The notion of AI was born in the 1950s thanks to the mathematician Alan Turing, already quoted in Chapter 1 of this book. In his article “Computing Machinery and Intelligence”, he raises the question of bringing a form of intelligence to machines. But this raises another question: is there a quantifiable definition of intelligence? Creativity and consciousness are part of intelligence! For John McCarthy1, “all intellectual activity can be described with sufficient precision to be simulated by a machine,” which restricts the notion of intelligence. This discipline is therefore at the crossroads of computer science, electronics and cognitive science.

Until the mid-1970s, work on AI was plentiful (research on machine translation, robotics, vision, etc.) and produced some results. But AI soon fell victim to its original promises, which were not kept despite significant funding, from the American agency DARPA in particular. AI went through its first winter in the mid-1970s.

With the advent of expert systems (programs that answer questions in a given field of knowledge by applying logical rules derived from the knowledge of human experts in that field) and of machine learning algorithms, which allow computers to train on data sets and use statistical analysis to produce results, AI got a second wind. It lasted until the end of the 1990s when, once again, everything came to a standstill, for much the same reasons: the results did not live up to expectations. This was the second winter.

It was in the mid-1990s that AI began to regain an important place thanks to the Internet, the massive amount of data it provides, and the increasing performance of computers. In 1997, the victory of IBM’s Deep Blue computer against Garry Kasparov, the world chess champion, popularized the idea that a computer can be smart, even though the Deep Blue program was essentially a set of rules backed by a huge memory storing thousands of chess games along with the different paths to victory.

We then witnessed a commercial repackaging of AI, which became a marketing argument. There are many examples: “a robot vacuum cleaner that is able to clean a room by itself” (course material from a French school); “intelligent washing machine: this intelligent technology detects the laundry load and automatically adjusts the washing settings, to guarantee you perfect results” (advertisement from a major brand). Where is the intelligence? In September 2019, a Google AI manager described an example: a computer is given 2,000 images of clouds, each labeled with its cloud type (cirrus, cumulus, etc.), and is then presented with a new image whose cloud type it must find; the answer owes nothing to intelligence, but to algorithms that compare two images and calculate a similarity rate.
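To give an idea of what “calculating a similarity rate” can mean, here is a minimal sketch in Python: cosine similarity between two images seen as vectors of pixel values. Real systems compare learned features rather than raw pixels, and the tiny arrays below are made up for illustration.

    import numpy as np

    def similarity(a, b):
        # Cosine similarity: 1.0 means identical direction, 0.0 means unrelated
        a, b = a.ravel().astype(float), b.ravel().astype(float)
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    cirrus = np.array([[200, 210], [205, 215]])     # hypothetical 2 x 2 "images"
    new_image = np.array([[198, 212], [207, 214]])
    print(similarity(cirrus, new_image))            # close to 1.0: very similar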

5.2.2. Intelligence or statistics?

If I use the recipe example of an algorithm from Chapter 1, everything goes well if I have all the ingredients indicated in the recipe. But it turns out that my grocer does not have the chervil that is part of this list. What should I replace it with? My idea is to replace the chervil with flat parsley. I think this is the best solution; it is a “heuristic”, because it carries no absolute guarantee, and another person may make another choice.

In chess, I have the choice between several moves; for each of them my opponent has several choices of reply, and so on. I cannot imagine all the possible moves when I continue this analysis, which results in a tree structure that very quickly becomes extremely complex. At some point, I will have to make a choice that I feel is the best possible based on my experience. It may not be the best choice, but I base my choice on “heuristics”, which is different from chance because my choice is based on experience gained from many games. As such, we are in the field of statistics.
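The kind of reasoning just described can be sketched in a few lines of Python: a depth-limited minimax search, where moves() and evaluate() are hypothetical placeholders standing in for the real rules of the game and for the “experience” encoded in a heuristic evaluation.

    def minimax(state, depth, maximizing, moves, evaluate):
        successors = moves(state)
        if depth == 0 or not successors:
            return evaluate(state)      # heuristic estimate, not a certainty
        values = [minimax(s, depth - 1, not maximizing, moves, evaluate)
                  for s in successors]
        # my best move assumes my opponent will in turn choose their best reply
        return max(values) if maximizing else min(values)

The depth limit is exactly the point where the combinatorial tree is cut off and experience, in the form of evaluate(), takes over from exhaustive analysis.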

The example of the recipe is simple; the second example is less simple because it requires planning. And some tasks, such as translating a text or conducting dialogue, are much more complicated because they require a lot of knowledge, experience and common sense.

Can we develop algorithms that allow machines to solve these types of problems? What is called AI today is the design of algorithms that enable the extraction of a property or a piece of information from successive experiments on large quantities of data (Big Data). AI came out of its second winter thanks to the progress made in learning, an ability associated with intelligence.

5.2.3. Important work around machine learning

AI is based on an ambitious goal: to understand how human cognition works and to reproduce it; to create cognitive processes comparable to those of humans. Our knowledge is the result of complex learning processes that take place throughout our lives. Can computers learn? The answer is yes. Can they invent, like humans? The answer is no.

We are still a far cry from a general AI capable of solving a wide variety of problems, adapting and learning; AI specialized for a specific class of problems is much more promising and therefore attracts the bulk of research efforts.

It is the progress of machine learning methods that has allowed AI to develop and find an important place in many fields. This is because we have more and more data and more and more powerful computers. We can distinguish two levels of learning that we are going to summarize: machine learning and deep learning.

Machine learning has been used for many years. To summarize with one example, it consists of providing the machine with thousands of different images of objects (animals, cars, etc.) and telling it each time whether it is a dog, a giraffe, a car, etc. The machine gradually adapts its parameters until it can distinguish which type of object is in an image. This is a bit like the trial-and-error method (we learn little by little from our mistakes as well as from our successes). The same applies to words, sounds, etc. This method is also called supervised learning.
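Here is a minimal sketch of supervised learning, assuming the scikit-learn library is available; the two-number “images” are made-up features, whereas real systems train on thousands of labeled photographs.

    from sklearn.neighbors import KNeighborsClassifier

    X_train = [[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]]   # example features
    y_train = ["dog", "dog", "car", "car"]                       # label given each time

    model = KNeighborsClassifier(n_neighbors=1)
    model.fit(X_train, y_train)               # learning phase: adapt to the examples
    print(model.predict([[0.85, 0.15]]))      # -> ['dog'], for an unseen example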

Figure 5.2. Learning levels: deep learning is a subset of machine learning, which is itself a subset of artificial intelligence. For a color version of this figure, see www.iste.co.uk/delhaye/computing.zip

The machine will then be able to correctly classify images of cars or dogs that it has never seen during the learning phase. But can we really talk about intelligence? A system has to analyze tens of thousands of images of dogs in order to recognize a dog in a collection of images with good precision, whereas a very young child can distinguish a cat from a dog.

In the 1990s, work on deep learning, using neural networks (highly simplified models of neurons in the human brain), gave a new perspective to AI research.

This learning and classification system, based on networks of artificial digital neurons, is composed of thousands of units (neurons), each of which performs small, simple calculations. A neural network is composed of tens or even hundreds of layers of neurons, each receiving and interpreting the information from the previous layer. The results of the first layer of neurons are used as input for the calculations of the next. For example, the system will learn to recognize letters before tackling the words in a text. This layered operation is what makes this type of learning “deep”.
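The layered computation can be sketched as follows in Python; the weights here are random placeholders (training would adjust them), and the layer sizes are arbitrary.

    import numpy as np

    rng = np.random.default_rng(0)
    layers = [rng.standard_normal((4, 8)),    # layer 1: 4 inputs -> 8 neurons
              rng.standard_normal((8, 8)),    # layer 2: feeds on layer 1's output
              rng.standard_normal((8, 3))]    # output layer: 3 classes

    def forward(x):
        for w in layers:
            x = np.maximum(0, x @ w)          # each neuron: weighted sum, then ReLU
        return x

    print(forward(rng.standard_normal(4)))    # scores for the 3 classes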

It is this functioning that allowed the AlphaGo program, developed by Google’s DeepMind, to beat the world champion of the game of Go, Ke Jie, on May 27, 2017. This victory was obtained after many training sessions against humans, against other computers and, above all, against itself.

5.2.4. A multiplication of applications

From the beginning of the 2010s, we have witnessed an explosion of work that has led to very diverse applications.

These so-called intelligent systems allow us to talk to our smartphones relatively naturally (Google Assistant or Siri, for example), ask them questions and listen to answers (more or less relevant) with increasingly realistic voices.

Image recognition is already widely used in fields such as security, (semi-)autonomous cars and medical image analysis. Automatic text translation is commonplace (even if it is doubtful that an IT system can translate a poem by Louis Aragon better than professional translators), as are health applications (predictive medicine, access to “virtual doctors” in remote areas) and the automatic moderation of social networks. AI is being introduced into connected objects, whether intelligent vehicles, smart homes, smart cities, surveillance systems, drones or robots. There are many uses: characterization (gender, age, attributes) or identification of users in real time, and behavior analysis (emotion, fatigue, attention). In mass retail, it can provide a tool for analyzing consumer behavior. On the stock market, algorithms have become the leading buyers of bonds, stocks and commodities. Is this really intelligence?

A big question is usually overlooked: can we trust the decision made by an AI-based system? It cannot be formally proven that AI provides safe results. This shows that trusting it blindly when the decision may have serious consequences (as in the case of autonomous cars, for example) is a gamble. But humans themselves do not always make the right decision!

In addition, we should not underestimate the ethical issues that AI can raise, especially because of the accumulation of data stored on individuals. The use of so-called predictive technologies in the field of law enforcement or justice raises the problem of individual liberties. My health insurance company could change the amount of my contribution based on predictions from an analysis of my profile. We must therefore be careful that the use of algorithms does not transform our choices of society.

5.2.5. The challenges of AI

The opportunities are such that AI, especially deep learning, is seen as one of the strategic technologies of the future. The few examples of applications mentioned above show this.

All the major Internet groups (Google, Facebook, Apple, IBM, Microsoft, etc.) have launched research programs with funding in the billions of euros, and have created start-ups targeting specific applications.

Several countries, including the United States, China, South Korea and Russia, have embarked on ambitious programs in this area.

The European Union strongly supports activities in the field of AI, with three priorities:

  • – strengthening the EU’s scientific, technological and industrial base;
  • – preparing for socio-economic changes related to AI;
  • – ensuring an appropriate legal and ethical framework.

In France, on November 28, 2018, the government presented a national strategy for research in AI based on the report delivered in the spring of 2018 by the MP and mathematician Cédric Villani. Funded by the State to the tune of 665 million euros through 2022, this strategy aims to establish France durably among the top five countries in AI worldwide. Utopia? In any case, this strategy must be part of a European framework if we want to achieve critical mass in the face of the Americans and the Chinese.

5.2.6. What about intelligence?

Although intelligence has been much studied scientifically, there is no clear definition of it, and many questions remain unanswered. But intelligence cannot be reduced to the ability to solve a specific problem, for example, beating the world chess champion. Real life requires making many more choices than playing chess. Goals are often vague and evaluation difficult. We must take into account the sensitive experience of the world and reject the idea of knowledge reduced to mind and reason.

AI has become much more efficient thanks to the development of learning theories and the tremendous growth in computer performance. Today’s systems, however powerful they may be, are specialized: they can only do what they were created to do. They lack the ability to acquire new skills in any field, which is a characteristic of human intelligence. Our own learning is largely unsupervised: it allows us to discover and understand the world in all its dimensions.

The field of computer science inspired by the human brain is still in its infancy. Although deep learning has become a buzzword in less than three years, there is still much work to be done in this exciting field. Yann LeCun2, one of the pioneers of deep learning, is pragmatic and reminds us that the AI field has often suffered from disproportionate expectations.

This notion of AI is the subject of many debates. Personalities such as Bill Gates, astrophysicist Stephen Hawking, and Tesla CEO Elon Musk expressed their concerns in 2015 about the progress made in the field of AI, which they considered potentially dangerous. But Bill Gates has since reportedly revised his position.

In reality, AI is a set of technologies that are becoming essential: they assist us in many tasks. They rely on increasingly sophisticated algorithms to provide an environment for developing services and applications that help us in decision-making. Luc Julia (2019) prefers to talk about “augmented intelligence” in his book L’intelligence artificielle n’existe pas, rejecting the idea that machines will be able to take power over humans.

Will the virtual human brain, the goal of projects such as the Brain Initiative in the United States or the Human Brain Project in Europe, be intelligent?

5.3. The Internet

Chapter 2 gave a brief presentation of the Internet, its history, the protocols used, etc. But the Internet has and will increasingly mark our personal environments and society as a whole. It is therefore an essential technological base.

The evolution of the Internet can be summarized in five phases. The first was the connection of two and then several computers. Then the Web allowed access to shared services. The possibility of connecting mobile devices marked a new stage, the mobile Internet. The arrival of social networks and the communication of groups of people was the fourth phase. Finally, the Internet of Things is the stage we are currently experiencing.

5.3.1. Mobility

Mobile Internet is the set of technologies designed to access the Internet using mobile networks, in particular the networks accessible by our phones. Its very rapid development has been made possible by the development of networks (3G, 4G and soon 5G), on the one hand, and of terminal equipment, on the other.

Tablets, and especially smartphones with their high-definition screens, are displacing computers as the means of browsing the Web. Arcep (Autorité de régulation des communications électroniques et des postes, the French regulatory authority for electronic communications and postal services) indicated, in the 2018 edition of its “Baromètre du numérique”, that 46% of French people over the age of 12 use their smartphone to access the Internet.

After 3G networks, 4G networks (introduced in 2013) increased mobile data speeds considerably. 4G allows web pages to be displayed almost instantaneously, HD videos to be streamed without difficulty, etc.

We must not forget Wi-Fi technology, which makes it possible to avoid using the services of an operator. When American or Brazilian friends arrive at my house, they ask me for the password to my Wi-Fi box, which allows them to communicate with their country for free from their phone! In addition, Wi-Fi access points are multiplying in all public places. Websites are developing interfaces adapted to telephones, particularly in e-commerce.

5.3.2. Social networks

The Internet has witnessed the rise of social networks, some of which have become real social media, allowing Internet users and professionals to create a profile page and share information, photos and videos with their network. The mobile is omnipresent: more than five out of six users use their mobile phone to access social networks.

We can distinguish several categories of social networks: generalist networks (Facebook, Twitter, etc.), professional networks (LinkedIn, Viadeo, etc.), video networks (YouTube, Periscope, etc.), visual networks (Instagram, Pinterest, etc.), and community networks of all kinds. Facebook is the most active network with 2.3 billion users per month worldwide (35 million alone in France).

Online conversations allow us to understand new democratic balances, as well as the consumption trends of millions of individuals. This is why specialized companies offer services that allow businesses to take advantage of the power of social media. However, we can have the feeling that these platforms are useless when delirious emotional flows pour out and vulgarity, manipulation and aggressiveness between members reign supreme. Be that as it may, they have been unavoidable platforms for several years, even if some countries, such as China, try to control them.

Dangers associated with social networks exist, such as:

  • – addiction: likes and requests from friends lead to spending more and more time in front of the screen; young people are particularly vulnerable;
  • – cyber-bullying or harassment: the media frequently report news stories, especially in schools, with sometimes dramatic outcomes;
  • – the misappropriation or theft of personal data communicated imprudently by Internet users.

5.3.3. The Internet of Things

This is the world in which objects are able to exchange information and communicate with each other, as well as to communicate and interact with their users, using the Internet and also other less well-known but still efficient communication networks (see Chapter 2).

5.3.4. The Cloud

Also explained in Chapter 2, the Cloud allows us to store and access our data (such as our photos) on any computer or smartphone connected to the Internet, anywhere on the planet! But the protection and use of our data remain open questions.

5.3.5. Blockchain

Blockchain is a technology created in 2008, together with its first application Bitcoin (the much-talked-about cryptocurrency), by an individual known as Satoshi Nakamoto, whom no one has ever seen. Simply put, a blockchain is a distributed database that makes the recorded transaction history forgery-proof. Blockchain is “a technology for storing and transmitting information that is transparent, secure and operates without a central control body” (definition of Blockchain France).

Blockchain allows a transfer of value (money or other) without an intermediary (bank or other). It allows data to be recorded that are authenticated, certified and cannot be repudiated.

There are public blockchains, open to all, and private blockchains, whose access and use are limited to a certain number of actors. The decentralized nature of the blockchain, coupled with its security and transparency, promises much broader applications than the monetary domain.

The Internet plays a very important role in the implementation of this technology. One of the problems encountered is energy consumption, because the “miners”, who are responsible for validating transactions, run computation-intensive algorithms on a great many computers.
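Here is a minimal sketch in Python of the two ideas at play: each block stores the hash of the previous one (which makes tampering detectable), and a valid block must satisfy a costly condition (a toy “proof of work”, here a hash starting with several zeros). The difficulty value is arbitrary; real blockchains use far harder targets.

    import hashlib, json

    def block_hash(block):
        return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

    def mine(prev_hash, transactions, difficulty=4):
        nonce = 0
        while True:
            block = {"prev": prev_hash, "tx": transactions, "nonce": nonce}
            h = block_hash(block)
            if h.startswith("0" * difficulty):   # found only after many attempts
                return block, h
            nonce += 1                           # this trial-and-error is the "mining"

    genesis, h0 = mine("0" * 64, ["Alice pays Bob 5"])
    block1, h1 = mine(h0, ["Bob pays Carol 2"])
    # Altering genesis would change h0 and break block1's "prev" link: every
    # block after a forgery would have to be re-mined, and that cost is what
    # makes the history forgery-proof.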

5.3.6. Vulnerabilities

We have discussed security in the different areas of IT technologies and applications, and will continue to do so in the remainder of this book. A vulnerability may have a minor impact, for example in connected objects (although the impact could be serious if a hacker took control of your car!).

More generally, what would be the consequences of a coordinated attack on global banking networks or on government networks or those used by airlines? The military are of course very concerned and also use specific networks that are not connected to the Internet, while at the same time developing very advanced research in the field of security (DGA Maîtrise de l’information, located near Rennes, in France, is a very important research center in this field).

5.4. Image processing and vision

Image processing is a discipline of computer science and applied mathematics that studies digital images and their transformations, with the aim of improving their quality or extracting information from them. It began long before the appearance of digital photography, which we all know, and which has gradually taken the place of analog photography (born at the beginning of the 19th century), with electronic sensors (CCD or CMOS) replacing film.

It is a technology that we will find in a large number of applications.

5.4.1. A bit of history

Image processing began to be studied in the 1920s for the transmission of images via the submarine cable linking New York and London. The first image digitization with data compression reduced the transfer time from more than a week to less than three hours, but the first computers powerful enough to carry out image processing appeared in the 1960s.

Since the beginning of the conquest of space, more than 60 years ago, space imagery has changed our representations of the planet. Satellite instruments provide a multitude of useful information to meet the major scientific and socio-economic challenges of our time, including climate change.

While the image made its appearance in medicine with the discovery of X-rays by Wilhelm Röntgen in 1896, the computer processing of medical images developed from the end of the 1960s.

Since the 1970s, there has been a diversification in the use of images: geography, biology, astronomy, medicine, agronomy, nuclear, robotics, surveillance and security, industrial control, television, satellite, microscopy, multimedia, etc. This development is closely linked to the progress made by research in the fields of mathematics, computer science and electronics.

5.4.2. Image sources and their uses

We are used to images that represent visible scenes; these are the images of everyday life. But there are many sources of images, associated with physical phenomena and adapted sensors.

X-rays are one of the oldest sources of electromagnetic radiation used in imaging. They are used to locate pathologies (infections, tumors) using radiography or CT scans. They are also used in industry and astronomy. Ultraviolet, which is not visible, is used in the analysis of minerals and gems, or for the authentication of all kinds of items such as banknotes. Infrared is particularly used in remote sensing (geology, cartography, weather forecasting), microscopy, photography, etc. Radars use microwaves. Radio waves are used in medicine for magnetic resonance imaging and in astronomy. Ultrasound is used in the exploration of oil deposits, or to monitor a pregnancy (obstetrics).

Devices can combine several types of spectra, for example on Earth observation satellites that differentiate between soils, vegetation, snow and clouds, areas with different temperatures, etc.

5.4.3. The digital image

The computer representation of an image is necessarily discrete (made up of separate, distinct values), whereas the image itself is continuous in nature (“smooth” variations); the digital image is represented by a set of numbers. It is thus necessary to digitize the analog image in order to visualize it, print it, process it, store it on a computer medium and transmit it over a network.

Digitization requires both a discretization of space (sampling) and a discretization of intensities and colors (quantization).

Sampling (Figure 5.3) defines the spatial resolution of the image. A digital image is composed of a finite set of elements, called picture elements or pixels (in 3D, voxels). The more dots per inch (dpi), the better the resolution and therefore the quality of the image.

Each pixel is located by two coordinates, x and y, in the image frame. A 2D image is therefore an object represented by a two-dimensional array of elementary surfaces (the pixels).


Figure 5.3. Sampling an image

A digital grayscale image is an array of integers between 0 and 255 (the value 0 corresponds to black, and the value 255 corresponds to white), which are therefore encoded on 8 bits (1 byte).

A color image is composed of three independent images representing the three primary colors (red, green and blue). Each pixel of the color image thus contains three numbers (r, g, b), each being an integer between 0 and 255. We therefore have 24 bits per pixel.
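A minimal sketch of this representation, assuming the numpy library; the arithmetic also explains the file sizes discussed below.

    import numpy as np

    img = np.zeros((600, 800, 3), dtype=np.uint8)   # a black 800 x 600 color image
    img[0, 0] = (255, 0, 0)                         # top-left pixel set to pure red

    print(img.nbytes)   # 1,440,000 bytes: 800 * 600 pixels * 3 bytes (24 bits) each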

Figure 5.4 shows three examples of displaying an image with different resolutions and therefore different qualities: 1,159 x 298 pixels, 800 x 206 pixels, 320 x 82 pixels.


Figure 5.4. Three different resolutions (source: Wikimedia Commons). For a color version of this figure, see www.iste.co.uk/delhaye/computing.zip

A video sequence (2D) is a dynamic scene with moving 2D objects. 2D video sequences are a juxtaposition of 2D images, where time is seen as a third dimension.

A volume image (3D) is an object represented by a three-dimensional array of elementary volumes (voxels). A volume can be seen as a stack of 2D images (e.g. scanner sections for 3D reconstruction). A 3D sequence is a dynamic scene with moving 3D objects.

5.4.4. Image storage and compression

We store the images as files. The information that will be stored is the width, height and pixel values. We will also be able to save the name of the author, date, acquisition conditions, etc.

The volume of these files is very large, which poses serious problems, especially when it comes to transmitting them. For example, a color image of 800 x 600 pixels occupies about 1.4 million bytes (800 × 600 × 3). Images are therefore compressed to reduce the amount of information needed to represent them, while minimizing the loss of information as much as possible. Several compression formats exist, and we encounter them in our everyday use of image-manipulating applications.

The main uncompressed image formats are BMP (bitmap; no loss of quality, but large files) and TIFF (Tagged Image File Format, a format recognized by all operating systems, also with large files).

The main lossy compressed image formats are JPEG (with several possible quality levels depending on the compression ratio), whose successor JPEG 2000 provides better image quality, and GIF (Graphics Interchange Format), which reduces images to a palette of 256 colors.
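As a sketch of the quality/size trade-off, assuming the Pillow library and a local file photo.jpg (a made-up name): the lower the quality setting, the smaller the file and the greater the degradation.

    from PIL import Image

    img = Image.open("photo.jpg")
    for quality in (95, 50, 10):                    # three compression levels
        img.save(f"photo_q{quality}.jpg", quality=quality)
        # photo_q95.jpg is near-original; photo_q10.jpg is small but degraded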

The images in Figure 5.5 show the degradation of quality as a function of the JPEG compression rate, with the volume occupied by this image decreasing from 2.88 MB for the actual size to 38.66 KB (the degradation is particularly visible on the clown’s nose).


Figure 5.5. Quality loss and compression ratio (source: Jean-Loïc Delhaye). For a color version of this figure, see www.iste.co.uk/delhaye/computing.zip

In addition, certain standards have been defined for specific areas. This is the case of DICOM, an international standard for the computerized management of medical imaging data.

5.4.5. Computing and images

Computers allow us, on the one hand, to process images (i.e. to act on the components of the image), and, on the other hand, to analyze the images (i.e. to extract information from them). All of this calls upon various scientific fields: signal processing, computer science, statistics, optics, electronics, information theory, etc.

5.4.5.1. Image processing

The aim here is to obtain a new image with different characteristics. Examples include:

  • – restoration, which aims to compensate for damage (noise, blur, etc.) or for defects due, for example, to the shooting conditions or the sensor;
  • – enhancement, to increase the quality of the visual perception of the image (brightness, contrast, etc.), as in the sketch after this list;
  • – compression, as we have discussed, which provides an image that can be stored and transferred more efficiently, at the cost of some degradation;
  • – watermarking, which allows information, visible or not, to be added to the image;
  • – a large number of transformations such as merging images, special effects, adding or removing elements, and image retouching with software such as Photoshop.
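Here is a minimal sketch of enhancement, assuming the Pillow library and the same hypothetical file name photo.jpg: increasing brightness and contrast.

    from PIL import Image, ImageEnhance

    img = Image.open("photo.jpg")
    brighter = ImageEnhance.Brightness(img).enhance(1.3)    # +30% brightness
    crisper = ImageEnhance.Contrast(brighter).enhance(1.2)  # +20% contrast
    crisper.save("photo_enhanced.jpg")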

5.4.5.2. Image analysis

The purpose of image analysis is to extract information from the image. Among the tools used are filtering, segmentation and contour extraction.
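As an illustration of contour extraction, one of the tools just mentioned, here is a minimal sketch assuming Pillow and the hypothetical photo.jpg:

    from PIL import Image, ImageFilter

    img = Image.open("photo.jpg").convert("L")    # convert to grayscale first
    edges = img.filter(ImageFilter.FIND_EDGES)    # simple contour extraction
    edges.save("photo_contours.png")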

The final step is the semantic analysis of the image in order to give meaning to it. It uses techniques, such as AI algorithms, to interpret the image by locating, characterizing and recognizing objects and other elements in the scene.

5.4.6. Some applications

Image processing is used in many applications. Here are some of them (some of them will be detailed in Chapter 6):

  • – aerial and spatial imagery, with various objectives: monitoring, analysis of natural resources (deforestation, for example), meteorology, mapping, etc.;
  • – medicine: image analysis (cytology, tomography, ultrasound), which facilitates the work of doctors, and telesurgery;
  • – industry, with robotic vision or the control of manufacturing and quality stages;
  • – science: interventions in confined environments (nuclear power plants, for example), astronomy, biology;
  • – indexing of digital multimedia databases, which consists of characterizing the content of documents and the information they contain;
  • – the military field: surveillance, automatic guidance of vehicles, topography;
  • – smart cities: analysis of traffic, pollution, etc.;
  • – vehicles, whose increasing autonomy relies on analysis of the environment to detect obstacles.

5.5. Conclusion

Algorithms and software are the basis of these technologies, which are found in many applications that we will see in Chapter 6 and that we use daily or that are hidden from us for various reasons (industrial competition, surveillance, defense secrecy, etc.).

These technologies are the subject of intensive research in public and private organizations and are not without consequences for the evolution of our societies; we will discuss them again in Chapter 7.

  1. John McCarthy (1927–2011) was, with Marvin Lee Minsky, one of the main pioneers of artificial intelligence. He received a PhD in mathematics from Princeton University in 1951. From 1962 until his retirement in 2000, McCarthy was a professor at Stanford University. He received the Turing Award in 1971 for his work in artificial intelligence.
  2. Yann LeCun, a French researcher in artificial intelligence, was awarded the Turing Award (considered the equivalent of the Nobel Prize in the field of computer science) in 2019. See his latest book (LeCun 2019).