2
Computer Networks and Their Applications

2.1. Introduction

Communication is an essential aspect of life for all species, both within a species (human to human) and between species (human to animal, human to plant, etc.). This communication can involve two actors, one actor and one group, several groups, etc.

We will limit ourselves to communication centered on the transmission of information, as defined in Chapter 1.

Communicating information involves several elements:

  • – the sender: the entity that sends a message. In this chapter, it will generally be computer equipment;
  • – the receiver(s): the entity/entities receiving the message;
  • – the message: an element that carries information of any kind. This message is transmitted from the sender to the receiver(s);
  • – the channel: the path through which the message will circulate, between the transmitter and the receiver. This channel can be a cable, the air, etc.;
  • – the code: the language in which the message is formulated. This language must be understandable, or translated to be understandable by the receiver;
  • – a protocol: this ensures the correct organization of the transmission. In a telephone conversation, not everyone speaks at the same time;
  • – other elements such as the identification of the actors, confidentiality conditions, etc.

We are entering a complex world of hardware, algorithms, software, conventions and much more. This world fascinates many research teams, both in universities and in the telecommunications industry.

After mentioning a few dates that have marked their development, this chapter will approach computer networks from three angles:

  • – the hardware infrastructure that provides the links between the entities constituting the networks;
  • – the protocols that organize communications over these infrastructures;
  • – the major types of applications that we commonly use, leaving more precise descriptions of the use of information technology for Chapter 6.

We will finish on an aspect that concerns us all: security.

2.2. A long history

Computer networks have undergone tremendous development, from the first connection between two machines to cloud computing. We will limit ourselves to the aspects that have structured this development and to those that we may encounter in our activities.

On the basis of what we know today, the first communication systems date back to the 19th century. Here are some key dates:

  • – 1791: Frenchman Claude Chappe invented the semaphore, also known as the optical telegraph. The semaphore makes it possible to send messages quickly over a long distance using, as its infrastructure, a network of towers surmounted by an articulated arm to transmit coded signals on sight;
  • – 1838: Samuel Morse, an American inventor, developed the system of dots and dashes (we have heard the ti-ta-ta-ti sound in various films!), which is known throughout the world as Morse code. In 1844, he sent the first message on a telegraph line between Baltimore and Washington. From 1846, the Morse telegraph was developed by private companies. It was perhaps the first electrical communication system;

    Figure 2.1. Morse code manipulator (source: Musée des Arts and Métiers). For a color version of this figure, see www.iste.co.uk/delhaye/computing.zip

  • – 1850: William Thomson (Lord Kelvin), a British physicist, imagined the construction of the first transatlantic cable. Cyrus Field, an American businessman and financier, laid the first transatlantic telegraph cable in 1858;
  • – 1876: Alexander Graham Bell, a Scottish-Canadian scientist, engineer and inventor, filed a patent for the invention of the telephone, a system for transmitting voice over electrical wires (although this authorship is controversial). The invention quickly met with resounding success, leading to the creation of the Bell telephone company in 1877;
  • – 1897: Guglielmo Marconi, an Italian physicist, inventor and businessman, considered one of the inventors of radio, filed patents on wireless telegraphy. In 1901, the first wireless telegraphic transmission across the Atlantic was made. The wireless transmission opened up a new era in telecommunications by using the waves for transmission;
  • – 1906: Canadian Reginald Fessenden invented the radio and made the first wireless transmission of speech across the Atlantic in both directions;
  • – 1907: Frenchman Édouard Belin invented the belinograph, a system for the remote transmission of photographs, which was used in the press in the 1930s, until its replacement by the transmission of digital files 50 years later;
  • – 1925: John Baird, a Scottish engineer, demonstrated the transmission of moving images. Baird’s device would become known as “mechanical television”;
  • – 1930: the first telex networks were set up, with teletypewriters now capable of reproducing text typed on a keyboard automatically and remotely on a typewriter. Telex was used all over the world until the 1990s and gradually disappeared with the arrival of fax machines, computer networks and electronic messaging.

During World War II, the laboratories of the warring parties perfected new applications:

  • – 1935: the first radar network was commissioned by the British. The walkie-talkie made its appearance in 1941, in the form of a portable radio transceiver for radio links over short distances;
  • – 1950s: the model of networks as nodes connected by links appeared with the birth of probabilistic queuing theory;
  • – 1960: Joseph Licklider, from the Massachusetts Institute of Technology, published the article “Man-Computer Symbiosis”, emphasizing the need for simpler interaction between computers. During the Cold War, he worked on the SAGE (Semi-Automatic Ground Environment) project, which was designed to create an air defense system. Designed around a network of computers and numerous radar station sites, this system made it possible to produce, in real time, a unified image of the airspace over the entire US territory and to be able to provide an immediate tactical response in the event of danger. It was operational from 1952 to 1984. SAGE was probably the first computer network. This network was adapted to set up a computer reservation system for American Airlines, the SABRE network, which was deployed in the 1960s;
  • – 1961: Leonard Kleinrock identified a key point that would enable the application of these theories: the concept of a router, that is, a node capable of storing a message while waiting for the link on which it is to be retransmitted to become free. Routers play an essential role in today’s networks;
  • – 1964: the principle of packet switching was published for the first time. Packet switching consists of segmenting information into data packets, transmitted independently by intermediate nodes and reassembled at the receiver level. This is an important advance compared to the previously used circuit switching, which required the reservation of communication resources for the entire duration of the conversation and over a complete path between the two machines involved in the dialogue. We will come back to these switching modes later;
  • – 1969: the first packet-switched computer network, ARPAnet, was launched. It connected four American research laboratories and was developed by DARPA, the technology research agency of the US Department of Defense, in close collaboration with major universities (UCLA, Stanford, etc.);

    Figure 2.2. ARPAnet in March 1972 (source: Wikimedia Commons)

  • – 1971: creation of Cyclades, an experimental French telecommunications network using packet switching, designed by Louis Pouzin with IRIA (Institut de recherche en informatique et en automatique);
  • – 1974: Americans Vint Cerf and Robert Kahn published a paper describing TCP/IP (Transmission Control Protocol/Internet Protocol), the protocol that allows heterogeneous networks to communicate with each other;
  • – 1976: establishment of the X.25 standard for packet-switched networks, developed by the CNET (Centre national d’études des télécommunications). In 1978, Transpac, a subsidiary of the telecommunications operator France Télécom, was created to operate the first commercial packet data transmission network in France. The Télétel network used by the Minitels and distributed by France Télécom was based on Transpac’s X.25 network. Operation of the X.25 network ended in 2012;
  • – 1982: creation of the term Internet: a set of interconnected networks using the common TCP/IP protocol. ARPAnet officially adopted the TCP/IP standard;
  • – 1991: invention of the World Wide Web by Tim Berners-Lee, at CERN (European Organization for Nuclear Research, located in Geneva). In the following years, the first web browsers (Mosaic, Netscape Navigator) and search engines (Yahoo, then Google, etc.) appeared;
  • – 1992: start of the French network RENATER (Réseau national de communications électroniques pour la technologie, l’enseignement et la recherche);
  • – 1990s: development of wired (ADSL) or wireless (Wi-Fi and Bluetooth) broadband networks and mobile Internet (WAP);
  • – 2000+:
  • - the appearance of mobile applications linked to the arrival and democratization of mobile terminals: smartphones, touch tablets, etc.,
  • - the Internet of Things appeared thanks to RFID (Radio Frequency Identification) technology. These connected objects, with their own digital identity, communicate with each other and with people.

2.3. Computer network infrastructure

A computer network is a set of computers that connect to each other and exchange information. In addition to computers, a network can also contain specialized equipment such as modems, hubs, routers and many others that we will discuss in this section.

Computer networks are composed of three elements:

  • – communication media (cables, optical fibers, radio links, etc.);
  • – interconnection equipment (nodes, routers, bridges, gateways, etc.);
  • – terminal equipment (computers, workstations, servers, peripheral devices, etc.).

Figure 2.3. Network diagram with stations and routers. For a color version of this figure, see www.iste.co.uk/delhaye/computing.zip

There are three important characteristics of network infrastructures:

  • throughput, which is the amount of information transmitted through a communication channel in a given time interval. The throughput of a connection is generally expressed in bits per second (with the multiples K for thousands, M for millions, G for billions);
  • transmission delay, which is the time it takes for data to travel from the source to the destination over the network;
  • error rate, which is the ratio of the number of bits received in error to the total number of bits transmitted.
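
To make these three quantities concrete, here is a small illustrative calculation in Python; the link characteristics and the file size are purely hypothetical values chosen for the example:

# Hypothetical link characteristics, chosen only for illustration.
throughput_bps = 100_000_000       # 100 Mbit/s
transmission_delay_s = 0.020       # 20 ms to cross the network
bit_error_rate = 1e-9              # 1 erroneous bit per billion bits transmitted

file_size_bits = 8 * 500_000_000   # a 500 MB file, expressed in bits

# Time needed to push all the bits onto the channel, plus the transmission delay.
transfer_time_s = file_size_bits / throughput_bps + transmission_delay_s

# Expected number of erroneous bits for this transfer.
expected_bit_errors = file_size_bits * bit_error_rate

print(f"Transfer time: {transfer_time_s:.1f} s")
print(f"Expected erroneous bits: {expected_bit_errors:.0f}")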

2.3.1. Geographic coverage: from PAN to WAN

Networks can be classified according to their extent, with four very common terms summarizing this extent.

Personal Area Networks (PANs) are restricted networks of computer equipment that are usually for personal use (computer, printer, telephone, etc.).

Local Area Networks (LANs) are mainly intended for local communications, generally within the same entity (company, administration, school, home, etc.), over short distances (a few kilometers maximum). They can connect from two to a few hundred computers with cables or wireless connections. Ethernet LANs are the most common, thanks to the simplicity of their implementation and the gradual increase in connection speeds, from 10 Mbit/s, then 100 Mbit/s, to 1 Gbit/s, then 10 Gbit/s.

Metropolitan Area Networks (MANs) are generally the size of a city and interconnect networks (LANs or other networks) using dedicated high-speed lines (especially optical fibers).

Wide Area Networks (WANs) interconnect multiple LANs or MANs over large geographic distances, on a national or global scale. The largest WAN is the Internet.

2.3.2. Communication media

To connect two distant forms of computer equipment, a transmission medium is required.

Figure 2.4. Communication media. For a color version of this figure, see www.iste.co.uk/delhaye/computing.zip

Communication media can be cables carrying electrical signals, the atmosphere (or space vacuum) where radio waves circulate, or optical fibers that propagate light waves.

These media have quite different characteristics in terms of useful throughput and reliability, and they “cohabit” in today’s computer networks according to various conditions and constraints (costs, distances, required throughput, etc.). Here are the most commonly used media.

2.3.2.1. Copper cables

The Public Switched Telephone Network (PSTN) was the first medium used, because it existed even before computers. On the subscriber’s side, the network ends with a pair of copper wires connected to a switch. The telephone connected to it converts the speech signal into an electrical signal. The signal then reaches the switch, which directs it to another subscriber, possibly through other switches. France is planning to shut down its PSTN-type telephone network, but this does not mean the end of fixed telephone service: it will continue to be provided over next-generation networks (voice over IP), copper or fiber.

In order to connect computer equipment and transmit/receive digital data independently of conventional (i.e. analog) telephone services, a modem (modulator/demodulator) is used to convert the digital data from the equipment into a modulated signal that can be transmitted over an analog network, and vice versa.

The Integrated Services Digital Network (ISDN) is the digital equivalent of the analog telephone network. It uses the same physical infrastructure, but all signals remain in a digital form, making it more convenient for non-voice applications. It is therefore an extension of digital access to the subscriber.

Dedicated telephone lines, that is, telephone lines reserved for this purpose, have become necessary and popular since the 1970s. They make it possible to link several sites of a company, or the university campuses of a city, for example. They were also the basis for interconnecting large networks before fiber optics became essential.

The Power Line Carrier (PLC) has been used for some time, at low speeds, for industrial applications and home automation (devices in the home are integrated into systems that need to communicate with each other in order to manage automation). The principle of PLC consists of superimposing a higher-frequency, low-energy signal on the 50 or 60 Hz alternating current. The existing electrical wiring is therefore reused.

The twisted pair consists of two insulated copper wires about 1 mm thick. These wires are helically wound around each other to reduce the electromagnetic interference that affects parallel wires. The twisted pair can be used to transmit analog or digital signals and offers a bandwidth of several Mbit/s over a few kilometers. Due to its satisfactory performance and low cost, the twisted pair is still widely used.

The coaxial cable is a cable with two conductors of opposite poles separated by an insulating material. The cable consists of a central conductor called the core, usually made of copper, which is embedded in a dielectric insulating material. The core is surrounded by a shield, which acts as a second conductor. In computer networks, the coaxial cable has been gradually replaced by fiber optics (for long-distance use, more than one kilometer) since the end of the 20th century.

Figure 2.5. Twisted pairs and coaxial cable. For a color version of this figure, see www.iste.co.uk/delhaye/computing.zip

2.3.2.2. The optical fiber

An optical fiber is a very thin glass or plastic wire that can be a conductor of light and is used in data transmission. By convention, a pulse of light indicates a bit with a value of 1, and the absence of light, a bit with a value of 0. It is increasingly used by operators, in buildings, cities and even in underwater cables, to allow the interconnection of networks worldwide. Its throughput can reach 1 million Gb/s, which is its great advantage.

Figure 2.6. Fiber optic bundle. For a color version of this figure, see www.iste.co.uk/delhaye/computing.zip

2.3.2.3. Wireless transmission

Cables have a major drawback: they are fixed and do not meet our mobility needs.

A wireless network is a network in which at least two devices can communicate without a wired connection. With wireless networks, a user has the ability to stay connected while traveling within a reasonably large geographical area.

Wireless technologies mainly use electromagnetic waves as a medium. The transmission and reception of these waves is carried out by antennas, integrated in wireless cards. The waves have a defect: they attenuate with the distance they travel and with the obstacles (walls, etc.) they encounter. The use of radio waves for data transmission is becoming increasingly widespread: cell phones, satellite communications, connected objects, etc.

There are several wireless network technologies, differing in the frequency of transmission used and the speed and range of transmissions. The three main standards can be selected according to the geographical area offering connectivity (coverage area): Bluetooth, Wi-Fi and GSM.

Bluetooth is a communication standard that allows the bidirectional exchange of data over very short distances. It has a theoretical data rate of up to 2 Mb/s and a range of 50 m to 100 m. It is often present on devices that operate on battery power and wish to exchange a small amount of data over a short distance: cell phones, laptops and various peripherals (mouse, keyboard, etc.). We are therefore in the field of wireless personal area networks (WPAN).

Wi-Fi is a set of wireless communication protocols governed by the IEEE 802.11 group of standards. With Wi-Fi standards, it is possible to create high-speed wireless local area networks. The range can reach several dozen meters indoors (generally between 20 and 50 meters), if there are no obstructions (concrete walls, for example) between the transmitter and the user. In practice, Wi-Fi makes it possible to connect laptops, office machines, personal digital assistants (PDAs), communicating objects or even peripheral devices, with a very high-speed connection. It is the protocol we use most often in our homes.

GSM (Global System for Mobile Communications) is a digital standard for mobile telephony. The third generation of mobile telephony (3G), whose main standard is UMTS, has made it possible to significantly increase the available bandwidth. Finally, 4G technology is the new generation, which is expanding around the world, with a throughput of up to 1 Gbit/s. 5G technology, which is in preparation, will make it possible to download a film in a few seconds and will open up the market to new applications. We are in the field of wireless wide area networks (WWAN).

Even if the main activity of telecommunication satellites is the broadcasting of television programs, they are also used for mobile applications, such as communications to ships or airplanes. However, this could soon change, with operators in many countries launching into the race for mega satellite constellations providing Internet coverage to the entire world from space.

New technologies (e.g. the Loon project launched by Google, which involves deploying Internet coverage to areas that are very difficult to access, via balloons floating at an altitude of 20 kilometers), as well as new uses are being prepared.

2.3.3. Interconnection equipment and topologies

2.3.3.1. Most common equipment

The network card is the most important component. It is indispensable: all the data that a computer sends to and receives from a network passes through it. The MAC address (Media Access Control), composed of 12 hexadecimal characters, is the physical address of the card, a unique worldwide address assigned when the card is manufactured. Your personal computer, smartphone and Wi-Fi box each have a MAC address, which should not be confused with the network-level address (e.g. the IP address, which we will discuss in section 2.4.5).
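
To give a small illustration (not taken from the original text), the following Python sketch simply checks that a character string has the usual written form of a MAC address, that is, 12 hexadecimal characters grouped in pairs:

import re

# A MAC address: six pairs of hexadecimal digits, usually separated by ':' or '-'.
MAC_PATTERN = re.compile(r"^([0-9A-Fa-f]{2}[:-]){5}[0-9A-Fa-f]{2}$")

def is_mac_address(text: str) -> bool:
    """Return True if the string looks like a MAC address."""
    return MAC_PATTERN.match(text) is not None

print(is_mac_address("00:1A:2B:3C:4D:5E"))  # True: 12 hexadecimal characters
print(is_mac_address("86.212.113.159"))     # False: this is an IP-style address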

A repeater is an electronic device combining a receiver and a transmitter, which compensates for the transmission losses of a medium (line, fiber or radio) by amplifying and possibly processing the signal, without modifying its content. It is used to duplicate and readapt a digital signal, to extend the maximum distance between two nodes in a network.

A hub (or concentrator) is a piece of hardware that concentrates network traffic from multiple hosts. It has a number of ports (usually 4, 8, 16 or 32) for connecting machines to each other and acts as a multi-socket, broadcasting the information it receives on one port to all of the other ports. Thus, all the machines connected to the concentrator can communicate with each other.

Whereas the hub is unable to filter the information and transmits it to all the machines connected to it, the switch directs the data only to the destination machine, based on its address. If computer 1 sends data to computer 2, only computer 2 will receive it.

A router is a piece of computer network interconnection equipment used to route data between two or more networks, determining the path that each data packet will take. It connects two different networks and is, for example, the boundary between the local network and the external network (the Internet or another network).

2.3.3.2. Some interconnection topologies

The topology of a network corresponds to its physical architecture. We can retain the following main topologies.

A bus topology is the simplest organization of a network. In a bus topology, all computers share a single transmission line (the bus) via a cable, usually coaxial. This is the common topology of an Ethernet-type local area network.

Figure 2.7. Bus topology

It has the advantage of being easy to implement and simple to operate. On the other hand, it is very vulnerable, because if one of the connections is faulty, the whole network is affected. In addition, the transmission speed is low because the cable is shared.

In a star topology, the computers in the network are connected to a central hub or switch system. Networks with a star topology are much less vulnerable because one of the connections can be disconnected without crippling the rest of the network. However, communication becomes impossible if the central element is no longer working.

Figure 2.8. Star topology

In a ring network, all entities are connected together in a closed loop. Data flows in a single direction, from one entity to the next. At any given moment, only one node can transmit on the network and there can be no collision between two messages, unlike the bus-type network. This topology is used by the Token Ring and FDDI networks.

Figure 2.9. Ring topology

In a hierarchical topology, also called a tree topology, the network is divided into levels. The top is connected to several nodes lower down in the hierarchy. These nodes can themselves be connected to several nodes below them. The weak point of this type of topology is the “parent” computer in the hierarchy, which, if it fails, paralyzes part of the network.

Figure 2.10. Tree topology

In a meshed topology, each terminal is connected to all of the others. The disadvantage is that the number of connections required becomes very high. Indeed, the number of cables is n(n - 1)/2, where n is the number of computers. For example, it takes 28 cables to interconnect 8 computers, so this topology is used very little.

Figure 2.11. Mesh topology
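
The formula above can be checked with a few lines of Python (a purely illustrative sketch):

def mesh_links(n: int) -> int:
    """Number of point-to-point links needed to fully mesh n computers."""
    return n * (n - 1) // 2

for n in (4, 8, 16):
    print(n, "computers ->", mesh_links(n), "links")
# 8 computers -> 28 links, as stated above.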

Hybrid topologies, combining several different topologies, are the most common. The Internet is an example.

2.3.3.3. Addressing in networks

The presence of a multitude of terminal equipment makes it necessary to define a coherent identification system within the network to differentiate them; this is called addressing. In addition, the network must be able to route information to any addressee according to their address: this is the routing function. When you put a letter in a mailbox, with the recipient’s address, this letter will be picked up by an employee of the company in charge of its routing (e.g. the postal service) and transported to a sorting center. Routing operations, sometimes complex, will allow this letter to arrive at the sorting center, in which the recipient is identified by their postal code. Therefore, a destination address and routing system are required. If you add your address on the back of the envelope, the addressee will be able to reply in the same way.

Early computer networks shared the same protocol and namespace. Each computer had a name, and all of the names were collected in tables that were installed on all members of these networks, which allowed routing. In order to communicate with another network, a computer that was a member of both networks had to act as a gateway and translate addresses from one to the other. It was fairly simple because there were only a few thousand computers at most. This is what I experienced in the late 1980s with the interconnection of the IBM and Digital Equipment “worlds” in universities.

The arrival of the Internet and the passage to millions of interconnected devices complicated the situation. Very precise addressing rules were developed little by little. We will discuss this further in section 2.4.5.

2.3.4. Two other characteristics of computer networks

2.3.4.1. Switching technologies

Switching is necessary when a call is made over several links in succession. Intermediate equipment associates an (inbound) link with another (outbound) link among those available.

In circuit switching (an analog process), all of the links (the circuit) used for one communication are reserved for that communication for its entire duration. The simplicity of its concept and implementation made it successful in the first communication networks, such as the telephone. In the early decades of the telephone, it was the responsibility of the operators of a telephone switchboard to establish communications between users.

In message switching (a digital process), there is no reservation of resources. A connection is only used by a communication during the periods when its messages are being transmitted. Messages from other communications can use the same links in the meantime. Messages arriving at a switching node are processed in their order of arrival, which can generate queues.

Packet switching (digital process) uses the same principle as message switching, but the messages are made up of a succession of packets, whose size is better suited to the efficiency of the transmission. It is the most commonly used process in networks like the Internet. The problem that needs to be solved is the reassembly of the packets that make up the message.
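
To make the idea more concrete, here is a minimal Python sketch, purely illustrative and not a real network protocol, which segments a message into numbered fixed-size packets and reassembles it at the receiver, even if the packets arrive out of order:

import random

MESSAGE = b"Packet switching splits a message into independently routed packets."
PACKET_SIZE = 16  # bytes of payload per packet (arbitrary value for the example)

# Segmentation: each packet carries a sequence number and a slice of the message.
packets = [
    (seq, MESSAGE[i:i + PACKET_SIZE])
    for seq, i in enumerate(range(0, len(MESSAGE), PACKET_SIZE))
]

# Simulate independent routing: the packets may arrive in any order.
random.shuffle(packets)

# Reassembly at the receiver: sort by sequence number and concatenate the payloads.
reassembled = b"".join(payload for _, payload in sorted(packets))
assert reassembled == MESSAGE
print(reassembled.decode())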

2.3.4.2. Client–server and peer-to-peer architectures

There are two types of network architecture: client–server and peer-to-peer.

In client–server architecture, client machines (machines that are part of the network, such as a personal computer) communicate with a server (usually a powerful machine) that provides services (such as access to files, or a mail server). When the server has responded to the client’s request, the connection is terminated. There are countless examples of these communications, such as consulting a train schedule on the railway company server from your personal computer, or your phone. The client/server model has become one of the main concepts in network architectures.

The term peer-to-peer is often abbreviated to P2P. In a peer-to-peer system, nodes are simultaneously clients and servers of other nodes on the network, unlike client–server systems. The particularity of peer-to-peer architecture is that data can be transferred directly between two stations connected to the network, without passing through a central server. Peer-to-peer systems therefore facilitate the sharing of information and can be used, for example, for file sharing or distributed computing.

Figure 2.12. Client–server versus peer-to-peer
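
As an illustration of the client–server model described above, here is a minimal exchange written in Python with the standard socket library; the local address, port number and messages are arbitrary values chosen for the example:

import socket
import threading

HOST, PORT = "127.0.0.1", 50007   # arbitrary local address and port for this demo
ready = threading.Event()

def server():
    """A minimal server: accept one client and echo back what it sends."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        ready.set()                       # tell the client it can now connect
        conn, _ = srv.accept()
        with conn:
            data = conn.recv(1024)
            conn.sendall(b"echo: " + data)

threading.Thread(target=server, daemon=True).start()
ready.wait()

# The client opens a connection, sends a request and waits for the response.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect((HOST, PORT))
    cli.sendall(b"hello server")
    print(cli.recv(1024).decode())        # prints: echo: hello server

Once the server has responded, the connection is closed on both sides, exactly as described above.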

2.3.5. Quality of service

Imperfections in telephone conversations are usually not a problem, but this is not the case for data transmission, as the data must arrive at its destination complete and intact. The equipment involved (transmitter, receiver) must ensure this. The quality of service (QoS) of a data circuit is measured using several criteria:

  • – the error rate: the ratio between the number of erroneous bits received and the number of bits transmitted;
  • – packet loss: the non-delivery of data packets, mostly due to network congestion;
  • – availability: the proportion of time during which communication is possible;
  • – the throughput: the number of bits transmitted per second;
  • – the response time: related to the network throughput and the capacity of the equipment involved in the transmission.

The quality of service is subject to precise technical measurements, but for a user, it is quite subjective because it depends on their expectations and the type of network usage they have at any given time. For example, response time may seem acceptable if the user is looking at bus schedules, but completely unacceptable if they are participating in a videoconference, because it can greatly disrupt the flow of exchanges.

If an operator tells the users that the fiber optic connection in their home provides a speed of several hundred million bits per second, has the response time, in their use of the network, improved significantly compared to the connection they had previously? Not necessarily, because any network transaction involves a lot of intermediate equipment and many network sections with different speeds and congestion rates. The “effective” throughput will therefore depend on many parameters and may vary depending on the period and type of transaction.

2.4. Communication protocols and the Internet

2.4.1. The first protocols

Communication media “physically” connect equipment. As in any communication, a method is needed so that two entities can understand each other. A communication protocol is a set of rules that define how communication between two entities in a network should take place. Some of the important functions of a protocol include:

  • – address management (sender and receiver);
  • – management of the format of the exchanged data;
  • – routing between different networks;
  • – detection of transmission errors;
  • – management of information losses when they occur;
  • – flow management (the receiver must not be saturated if it is slower than the transmitter).

The protocols are hierarchically layered, with each one having to deal with specific functions.

In the 1960s, computing was centralized; that is, data was managed on “mainframes” that could be accessed by remote stations. These computers were linked together by networks operating on the basis of protocols developed by their manufacturers. The two most important protocols of this type are DECnet and SNA.

DECnet is a layered network architecture, based on a protocol defined by the Digital Equipment Corporation, the first version of which, in 1975, allowed two PDP-11 minicomputers to communicate. Large networks of computers from this manufacturer, particularly VAX machines, were deployed until the arrival of TCP/IP protocols.

SNA (Systems Network Architecture) is a layered network architecture defined by IBM in 1974. It is a functional architecture of the same type as the OSI reference model (which it precedes by seven years) and is also part of the IBM product family. Like DECnet, it is a proprietary architecture. SNA has been widely used by computer centers in banks, financial institutions and research centers equipped with IBM hardware.

The major flaw of proprietary architectures, such as those just mentioned, is that it is not easy to make them communicate with each other, unless an agreement is reached and a communication protocol is written between these architectures.

2.4.2. The OSI model

To solve this problem, in the 1970s, the ISO (International Organization for Standardization) developed a reference model called the OSI (Open Systems Interconnection). This model described the concepts used to standardize the interconnection of systems. It was organized in seven distinct layers, each bearing a number, ranging from the most abstract data (layer number seven) to physical data (layer number one). The OSI standard was published in 1984.

Figure 2.13. The OSI model. For a color version of this figure, see www.iste.co.uk/delhaye/computing.zip

Let us quickly describe the seven layers:

  1) the physical layer provides the means to activate, maintain and deactivate the physical connections necessary for the transmission of groups of bits;
  2) the data link layer provides the transmission of information between two (or more) immediately adjacent systems and fragments the data into frames;
  3) the network layer takes care of routing the data from point A to point B and of addressing. The objects exchanged are often called packets;
  4) the transport layer provides end-to-end data transmission and maintains a certain transmission quality. The objects exchanged are often called messages (as they are for the upper layers);
  5) the session layer provides the means for cooperating entities to synchronize, interrupt or resume their dialogues while ensuring the consistency of the data exchanged;
  6) the presentation layer is in charge of the representation of the information that the entities exchange. It takes care of semantics, syntax, encryption and decryption, in short, any “visual” aspect of the information;
  7) the application layer acts as an interface to provide access to network services. It includes numerous protocols adapted to the different classes of applications (file transfer, e-mail, etc.).

2.4.3. The history of the Internet

In the United States, the Defense Advanced Research Projects Agency (DARPA), which is responsible for military defense research projects, launched a computer network project in 1966 linking certain American universities. In 1980, this network, called ARPAnet, became a military issue and was divided into two: the university network became NSFnet, funded by the NSF (National Science Foundation). ARPAnet became the heart of the future Internet and a tool for the development of this new technology.

The NSFnet network opened up to the world, and interconnection problems soon emerged. Communication between networks using different architectures (proprietary or not) became too complex.

Extensive research and development work, in which the differences between the protocols were hidden by the use of a common communication protocol, led to the demonstration of a prototype, called TCP/IP, in 1977. On January 1, 1983, TCP/IP officially became the only protocol on ARPAnet. The Internet (from “inter-network”) came to mean a worldwide network using the TCP/IP protocol.

The Internet is not a new type of physical network. It offers, through the interconnection of multiple networks, a global virtual network service based on TCP (Transmission Control Protocol) and IP (Internet Protocol) protocols. This virtual network is based on a global addressing that is placed above the different networks used. The various networks are interconnected by routers. Thanks to the growing interest in vast communication networks and the arrival of new applications, Internet techniques have spread to the rest of the world.

2.4.4. The TCP/IP protocol

The OSI model was developed with a normative vocation (i.e. to serve as a reference in the course of communication between two hosts), whereas the TCP/IP model has a descriptive vocation (i.e. it describes the way in which communication takes place between two hosts).

TCP/IP actually refers to two closely related protocols: a transmission protocol, TCP (Transmission Control Protocol), which is used on top of a network protocol, IP (Internet Protocol). The term also covers a set of protocols generally used at the application layer, on top of TCP/IP.

The TCP/IP model is simpler than the OSI model, with only four layers:

1) the network layer includes the physical and data link layers of the OSI model. The only constraint of this layer is to allow a host to send IP packets over the network;

2) the Internet layer is the cornerstone of the architecture. Its role is to allow the injection of packets into any network and the routing of these packets, independently of each other, to their destination. The Internet layer has an official implementation: the IP protocol;

3) the transport layer has the same role as that of the OSI model: to allow peer entities to carry on a conversation. This layer has one main official implementation: the TCP protocol, a reliable, connection-oriented protocol that allows the error-free routing of packets from one machine on the Internet to another;

4) the application layer contains all high-level protocols, such as Telnet, SMTP (Simple Mail Transfer Protocol) and HTTP (HyperText Transfer Protocol). Experience has shown that network software rarely uses the presentation and session layers of the OSI model.

TCP/IP is an open protocol that is independent of any particular architecture or operating system. This protocol is also independent of the physical medium of the network. This allows TCP/IP to be carried by different media and technologies.
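
As a sketch of this layering, the following Python code sends a raw HTTP request (application layer) over a TCP connection (transport layer), itself carried by IP (Internet layer); the host name is only an example and running the code requires Internet access:

import socket

HOST = "example.com"  # example host; any reachable web server would do

# Transport layer: open a TCP connection to port 80 of the host
# (IP addressing and routing are handled by the lower layers).
with socket.create_connection((HOST, 80), timeout=5) as sock:
    # Application layer: speak HTTP over the TCP byte stream.
    request = f"GET / HTTP/1.1\r\nHost: {HOST}\r\nConnection: close\r\n\r\n"
    sock.sendall(request.encode("ascii"))
    response = b""
    while chunk := sock.recv(4096):
        response += chunk

print(response.split(b"\r\n")[0].decode())  # the status line, e.g. HTTP/1.1 200 OK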

2.4.5. IP addressing

Each piece of equipment on a network is identified by an address, called the IP address. The addressing mode is common to all TCP/IP users regardless of the platform they use.

The MAC address, already mentioned, is a unique identifier assigned to each network card, but in a large network, there is no central element that knows the location of the recipient and can send the data accordingly. The IP address system, on the other hand, is used in a process called routing to ensure that the data reaches the recipient. Currently, two versions of IP coexist: IPv4 and IPv6.

In IPv4, the addresses are exactly 32 bits (4 bytes): enough to code 4,294,967,296 different IP addresses. The IP address is composed of four groups of decimal digits, each ranging from 0 to 255 and separated by dots (e.g. 86.212.113.159). It is the most widely used protocol in the world. It is used for both local IP addresses and public IP addresses.

To identify each other, the computers that make up the Internet network essentially use a series of numbers, each number (IP address) corresponding to a separate machine. The Internet Corporation for Assigned Names and Numbers (ICANN) coordinates these unique identifiers internationally and brings together, in a non-profit partnership, people from around the world who work to maintain the security, stability and interoperability of the Internet.

Often, in order to connect to a computer server, the user does not give the server’s IP address, but its domain name. A domain is a set of computers connected to the Internet with a common characteristic. The domain name system is hierarchical, allowing the definition of sub-domains whose codes (levels) are separated by a dot. For example, the domain inp.cnrs.fr designates the CNRS Institute of Physics in France. The rightmost part, such as “com”, “net”, “org” and “fr”, is called the top-level domain. The domain name is then resolved to an IP address by the user’s computer using the Domain Name System (DNS). A connection can only be initiated once the address has been obtained.
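
Here is a minimal illustration of this resolution step in Python; the domain name is simply an example (taken from the figure credits above) and the call requires Internet access:

import socket

domain = "www.iste.co.uk"  # example domain, taken from the figure credits above

# The resolution performed by the user's computer: domain name -> IP address(es).
addresses = {info[4][0] for info in socket.getaddrinfo(domain, None)}
print(domain, "resolves to", addresses)

# Only once an address is known can a connection to the server be opened.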

As the structure of the IPv4 address no longer made it possible to respond to all address requests, it was necessary to develop a new structure called IPv6, with 128-bit addresses. This makes it possible to have about 3.4 × 10^38 different IP addresses, in other words over 340 billion billion billion billion!
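
The standard Python ipaddress module can be used to make these orders of magnitude concrete (an illustrative sketch; the IPv6 address shown is a reserved documentation address):

import ipaddress

v4 = ipaddress.ip_address("86.212.113.159")  # the IPv4 example given above
v6 = ipaddress.ip_address("2001:db8::1")     # an IPv6 documentation address

print(int(v4))        # the same IPv4 address seen as a 32-bit integer
print(2 ** 32)        # 4,294,967,296 possible IPv4 addresses
print(2 ** 128)       # about 3.4 x 10**38 possible IPv6 addresses
print(v6.exploded)    # the full written form of the 128-bit address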

2.4.6. Management and use of the Internet

The growing importance of the Internet has led to its very precise organization. The three main regulatory bodies are as follows:

  • – the Internet Society (ISOC) is dedicated to promoting and coordinating the development of computer networks worldwide. It is the most influential moral and technical authority in the world of the Internet;
  • – the Internet Activity Board (IAB) provides guidance in coordinating much of the research and development related to TCP/IP protocols;
  • – the Internet Engineering Task Force (IETF) is an international open community of about 100 working groups that develop new Internet standards.

The growth of the Internet has been extraordinary, with 10,000 computers connected in 1987, 2.5 million in 1994, 17 million in 1997, 400 million in 2000 and 3.5 billion in 2017. This development concerns the entire planet, as shown in Figure 2.14.

For France, in 2017 the ARCEP (Autorité de Régulation des Communications Electroniques et des Postes) announced 25 million people connected to high-speed (ADSL) or very-high-speed (fiber optic) Internet, and about 70 million SIM cards providing access to the Internet.

Figure 2.14. Extension of the Internet

2.4.7. Evolving technologies

E-mail and file transfer are the oldest applications on the Internet. But the service that made the Internet popular with the general public is the World Wide Web, which began to spread in 1993 (more details on this are given in section 2.5.1). The rapid increase in the capabilities of computers meant that they were capable of encoding and processing sound or voice, as well as still images or video.

But interactive multimedia applications, such as videoconferencing, need efficient group (multi-user) transmission on the one hand, and performance guarantees on the other hand. Since the Internet is a network that provides a routing service without any guarantee of performance (Best Effort Principle), it was necessary to develop control mechanisms that allowed multimedia applications to adapt their behavior according to the conditions of the network.

The evolution of the Internet is taking place in parallel with an explosion of new applications, which seek to make the best use of the services available at any given moment. Examples include games distributed over the Internet (in which several thousand players around the world can compete on a battlefield or in a board game), or collaboration tools (distance learning, collaboration of doctors around scanner/X-ray images visible and writable by all). More generally, we can expect the Internet to represent a revolution that is at least comparable to the telephone revolution that began in the last century.

2.4.8. What future?

We are seeing that computer networks, especially the Internet, are having an ever-increasing impact on our daily lives. For better or for worse? In 2017, the Internet Society released a report entitled “Paths to our Digital Future”.

This report analyzes the key driving forces that will have a profound impact on the future of the Internet in the coming years:

  • – the convergence of the Internet and physical worlds with the deployment of the Internet of Things (IoT). When everything that can be connected is connected, entire economies and societies will be transformed. However, acute security threats and device vulnerabilities, as well as incompatible standards and lack of interoperable systems could undermine the promise of technology;
  • – the advent of artificial intelligence (AI) promises new opportunities, ranging from new services and scientific breakthroughs to the increase of human intelligence and its convergence with the digital world. Ethical considerations must be prioritized in terms of the design and deployment of AI technologies;
  • – the most pressing danger for the future of the Internet is the growing scope of cyber threats. As new technologies such as AI and IoT increase our dependency on the network, the severity of security challenges and vulnerabilities increases in parallel;
  • – these technological transformations will disrupt economic structures and force companies to think and act like technology companies as billions of devices and sensors connect to the network;
  • – as the Internet grows and extends to more sectors of our economy and society, governments will be faced with a host of new and complex issues that will challenge all aspects of their decision-making. Their responses to these challenges will influence not only freedoms, rights and the economy but also the Internet itself.

The report analyzes three areas of impact:

  • – as the Internet continues to transform all sectors of the global economy, the digital divides of the future are not only about access to the Internet but also about the gap between the economic opportunities available to some and not to others. These new divides will create disparities not only between countries but also within countries;
  • – the future of the Internet is closely linked to people’s ability to see it as a means of improving society and promoting individual rights and freedoms. This trust needs to be confirmed and strengthened;
  • – the march towards greater connectivity will continue to drive new changes in media and society. While democratizing access to information, the whirlwind of information and misinformation that exists online raises real concerns about the long-term effects of new trends such as fake news. Unfettered online extremism and behavior that breaks social conventions will erode social cohesion, trust in the Internet and even political stability.

2.5. Applications

The above tells the story that led to the standardization of communications between computers of all sizes. But networks have changed profoundly: the volume of data traffic, the very rapid increase in the number of sites, broadband (20 Mbit/s at home), transporting multimedia data on the same medium (telephone, television, games, information, etc.), wireless mobile access, etc.

Here are some major areas of application, of which we will see more specific examples in Chapter 6.

2.5.1. The World Wide Web

The Web was invented in 1989 at CERN (European Organization for Nuclear Research), based in Geneva, by a British physicist, Tim Berners-Lee. Originally, the project, called the World Wide Web or W3, was designed and developed with his colleague Robert Cailliau so that scientists working in universities and institutes around the world could exchange information instantaneously. On April 30, 1993, CERN put the World Wide Web software in the public domain. Tim Berners-Lee left CERN to go to the Massachusetts Institute of Technology (MIT) in 1994, where he founded the World Wide Web Consortium (W3C), an international community dedicated to the development of open web standards.

We all use the Web, without necessarily knowing it. A website is nothing more or less than a collection of files stored on a web server. Web browsers are applications that retrieve the content of pages located on web servers and display them on the user’s computer, the latter being called a web client.

The Web is based on three main ideas: hypertext navigation, multimedia support and integration of pre-existing services (e-mail, file transfer, etc.). When writing a document (called a page) on the Web, certain words can be identified as access keys and a pointer to other documents can be associated with them. These other documents can be hosted on computers on the other side of the world.

In October 1990, Tim Berners-Lee described the three technologies that remain the foundation of today’s Web:

  • HTML (HyperText Markup Language): this language, standardized by the W3C, enables formatting documents for the Web;
  • URI (Uniform Resource Identifier): a kind of address that is unique and used to identify each resource on the Web. It is also commonly referred to as a URL;
  • HTTP (HyperText Transfer Protocol): standardized by the IETF, this network protocol allows the recovery of linked resources across the Web.

2.5.1.1. HTML

HTML was invented to allow writing hypertextual documents, also called web pages, linking the different resources of the Internet with hyperlinks. It is a so-called markup language (or structuring language), whose role is to formalize the writing of a document with formatting tags. The tags make it possible to indicate the way in which the document should be presented and the links it establishes with other documents.

Here is an example of an HTML file, with a title and a body of two paragraphs, one of which contains a hyperlink:


<!DOCTYPE html>
<html>
 <head>
  <title>Example HTML file</title>
 </head>
 <body>
  <p>
   A sentence with a <a href="target.html">hyperlink</a>.
  </p>
  <p>
   A paragraph where there is no hyperlink.
  </p>
 </body>
</html>

Since HTML does not attach to the final rendering of the document, the same HTML document can be viewed using a wide variety of hardware (computer, tablet, smartphone, etc.), which must have the appropriate software (web browser, for example) to provide the final rendering.

A web browser is a software program designed to retrieve and display web pages, especially HTML pages. Technically, it is at least an HTTP client. Let us mention a few web browsers: Netscape (1994), Internet Explorer (1995), Mozilla (1998), Safari (2003), Firefox (2004), etc.

The standards for this language have evolved (successive versions) to take into account the new possibilities offered by Internet navigation.

2.5.1.2. HTTP

HTTP (Hypertext Transfer Protocol) is a communication protocol developed for the World Wide Web. It was designed for the transfer of hypermedia documents such as HTML. It follows the classic “client–server” model, with a client that opens a connection to send a request, then waits until a response is received. HTTPS (with an S for secured) is the secure HTTP variant.

The best known HTTP clients are web browsers. The user’s computer uses the browser to send a request to a web server. This request asks for a document (e.g. an HTML page, an image, a file). The server looks up the requested information and then sends back the response.

Figure 2.15. HTTP request
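
As an illustration of this request–response exchange, here is a short Python sketch using the standard http.client module; the host name is only an example and running it requires Internet access:

import http.client

# Open a connection to a web server; HTTPSConnection would be used for HTTPS.
conn = http.client.HTTPConnection("example.com", 80, timeout=5)

# The client sends a request for a document ...
conn.request("GET", "/")

# ... then waits until the server's response is received.
response = conn.getresponse()
print(response.status, response.reason)  # e.g. 200 OK
body = response.read()
print(len(body), "bytes received")
conn.close()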

2.5.1.3. URL addresses

Website addresses, also called URL (Uniform Resource Locator) addresses, look more or less like this: http://www.example.com. Every document, image or web page has a URL address, which is often used to link to it.

A URL consists of at least the following parts:

  • – the name of the protocol, that is, the language used to communicate on the network. The most widely used protocol is the HTTP protocol that we have just talked about, but many other protocols are useful (FTP, News, Mailto, etc.);
  • – the name of the server: this is the domain name of the computer hosting the requested resource;
  • – the access path to the resource: this last part allows the server to know the location of the resource, that is, in general, the directory and the name of the requested file.

For example, “http://www.xxxx.fr/” identifies company server xxxx and leads to the site’s home page. The URL “http://www.xxxx.fr/presentation.html” identifies the company’s presentation page.
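
Here is a minimal sketch showing how these parts can be extracted with Python's standard urllib module, using the fictitious URL from the example above:

from urllib.parse import urlparse

url = "http://www.xxxx.fr/presentation.html"  # fictitious example from the text
parts = urlparse(url)

print(parts.scheme)  # 'http'                -> the name of the protocol
print(parts.netloc)  # 'www.xxxx.fr'         -> the name of the server
print(parts.path)    # '/presentation.html'  -> the access path to the resource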

We use URLs directly less and less, due to our intensive use of search engines such as Google or Yahoo.

2.5.2. Cloud computing

The electrical equipment in our homes uses energy that comes from “somewhere”. We do not have to worry about the source of this energy, which is diversified and can change depending on the period, since these sources (nuclear power plants, hydroelectric power plants, etc.) are interconnected by networks that guarantee that we are always supplied.

Cloud computing uses the metaphor of clouds to symbolize the dematerialization of computing. It involves moving IT services to remote servers, managed by suppliers and accessible via the Internet, and thus having access to virtually infinite services and resources (storage, computing).

Figure 2.16. Cloud computing

Cloud computing is of interest to individuals (e.g. to store photos and videos), small businesses (which thus have access to resources they could not afford) and large companies alike.

There are three types of cloud computing:

  • – IaaS (Infrastructure as a Service): the operating system and applications are installed by the client on servers to which he connects to work as if it were his own computer. Physical resources are shared by several virtual machines;
  • – PaaS (Platform as a Service): the service provider manages the operating system and associated tools on its platform. The client develops, installs and uses his own applications;
  • – SaaS (Software as a Service): applications are provided as turnkey services to which users connect via dedicated software or a web browser.

The client cannot locate the physical sites that host these services, and these sites are subject to change. The advantages are numerous: cost reduction by shifting IT to the provider, ease of use from any location thanks to the Internet, quality of service, flexibility to manage peak loads, etc. Application and data security is, of course, a critical aspect and it is essential to address it. Some companies avoid this solution for storing and processing highly sensitive data.

The cloud computing market is huge, and there are many solution providers: major computer manufacturers (IBM, HP, etc.), Amazon, Google, Microsoft, OVH in France to name only the most significant. According to Synergy Group, the turnover of cloud computing suppliers reached 180 billion dollars for the period October 2016–September 2017 with an overall growth of 24%.

2.5.3. The Internet of Things

While the Internet was designed for humans to communicate and access information, the idea is that objects can exchange information and humans can acquire information through objects.

Imagine a world where all objects are able to exchange information and communicate with each other, as well as to communicate and interact with their users through the Internet and other less well-known but equally effective communication networks. This is the world of the Internet of Things.

A connected object has the ability to capture data and send it, via the Internet or other technologies, for analysis and use. These billions of connected objects will create an exponential volume of data that will need to be stored, analyzed, secured and restored for various uses.

Analysts predict more than 50 billion connected objects in a few years. This connected and intelligent world is expected to explode the volume of data from 8 zettabytes (8 trillion billion) in 2015 to 180 zettabytes in 2025, 95% of which will be unstructured data (text data, JPEG images, MP3 audio files, etc.); a volume that is expected to be 92% processed in the Cloud (Huawei’s prospective report).

2.5.3.1. Main technologies

In terms of technologies, standardized wireless access (such as Wi-Fi and Bluetooth) currently dominates due to the strong development of consumer applications (connected home, sports/wellness, electronic gadgets) and of course its low cost. New technologies and protocols have been and are still being developed to take into account the constraints and specificities of the many areas of IoT use (energy consumption, for example). We mention below the most used ones.

NFC (Near-Field Communication) is a technology that allows data to be exchanged over a distance of less than 10 cm between two devices equipped with it. NFC is integrated in most of our mobile terminals in the form of a chip, as well as on certain transport, payment or access control cards for restricted-access premises. The reader can simply trigger the unlocking, or it can be connected to a network to transmit the information corresponding to your entry. In the latter case, you enter the IoT domain.

RFID (Radio Frequency Identification) is a technology that enables data to be stored and retrieved remotely using radio tags (RFID tags) that can be stuck on or embedded in objects, and even implanted in living organisms (animals, the human body). Depending on the type of tag, the read range extends from a few centimeters to as much as 200 meters. This technology is widely used in business.

Bluetooth Low Energy (also known by the acronym BLE, or as Bluetooth Smart) is replacing NFC for some uses and is mainly intended for connected objects where the need for throughput is low and battery life is crucial, as well as for nomadic equipment such as smartphones, tablets, watches, etc. Its range is a few dozen meters.

Short-range radio protocols (ZigBee, Z-Wave) are intended for the creation of private local area networks (for home automation, for example). They are energy efficient and offer higher data rates than the long-range, low-speed protocols described next.

Low-speed, long-range radio protocols (Sigfox, LoRa) are particularly suitable for energy-efficient equipment that transmits only periodically, such as sensors.

LTE-A, or LTE Advanced, is a fourth-generation cell phone network standard. It offers the IoT much higher performance, and its most important applications concern vehicles and other terminals in motion.

2.5.3.2. Fields of use

The value added by the Internet of Things is in the new uses it will bring. Let us retain the most important ones; we will detail some of them in Chapter 6.

The sector is driven by the smart home: connected security devices (wireless surveillance cameras, alarms, etc.) and devices dedicated to home automation (thermostats, locks, intercoms), not forgetting large connected household appliances (refrigerators, washing machines, etc.) and robots. The “smart city” is another important area: road traffic, transportation, waste collection, various mappings (noise, energy, etc.).

Wearable technologies are developing: connected watches and glasses, smart clothing, etc. They are also found in the monitoring of our health (connected scales, monitoring of patients with chronic diseases), in leisure and sports and in many toys.

Other areas include: environmental monitoring (earthquake prediction, fire detection, air quality, etc.), industry (measurement, prognosis and prediction of breakdowns), logistics (automated warehouses), increasingly autonomous vehicles and robots operating in various environments.

Here is a now very common example: contactless payment using a bank card, a cell phone or a bracelet that communicates with payment terminals using the NFC protocol already mentioned; we thereby avoid inserting the bank card and entering a confidential code. This protocol allows data to be exchanged over a very short distance (a few centimeters): a chip and an antenna are integrated into your bank card, your cell phone, etc. On a smartphone, the applications dedicated to NFC payment can also offer a number of additional and very useful functions, such as the automatic handling of store loyalty cards. But do we know all the features, and what is done with the information collected?

2.5.3.3. Towards the interoperability of objects

Each object often has a simple function. But if several objects can be made to collaborate, to make them interoperable, their capabilities are considerably increased. This is, for example, the ability of industrial robots to communicate directly with each other, or that of the various connected objects involved in flow management (in a factory, hospital, etc.) to cooperate.

A major obstacle: the connectivity of objects is dominated by proprietary technologies, often developed without technical or legal standardization.

2.5.3.4. Confidentiality and security

Whether in the medical field (patient tracking devices), the automotive industry (connected cars), agriculture (precision farming) or home automation, devices that take advantage of the Internet of Things generate an unprecedented amount of data. These data are often confidential and personal. Are they really protected?

When you turn on your connected speaker, what is said in the room is recorded somewhere. Your question, “What will the weather be like tomorrow?” will be analyzed by the computer system to understand it and provide you with the answer. But your conversation will also be recorded if you are not careful. Who can use this information and for what purpose?

Moreover, connected objects represent a risk in terms of cybersecurity. These devices are designed to be as simple as possible, in order to limit their cost and facilitate their use. However, this simplicity also makes them more vulnerable than other electronic devices such as smartphones.

Gartner Inc., a US-based advanced technology consulting and research firm, announced that global spending on Internet of Things security was expected to reach $1.5 billion in 2018.

2.5.4. Ubiquitous computing and spontaneous networks

2.5.4.1. Ubiquitous computing

The multiplication of connected objects leads to an important aspect of the development of computing. We have gone from “mainframe computers” in the hands of specialists (computer scientists) to personal computers that can be used simply by anyone thanks to highly efficient graphical interfaces. We are entering a third era, one in which computers are disappearing, leaving us in a hyper-connected world in which the computer is invisible.

This vision of ubiquitous computing (the term is derived from the Latin ubique meaning “everywhere”), which is constantly available, was first formulated in 1988 by Mark Weiser of the Xerox Palo Alto Research Center. It is also referred to as ambient intelligence and pervasive computing.

In Mark Weiser’s idea, computer tools are embedded in everyday objects. The objects are used both at work and at home. According to him, “the deepest technologies are those that have become invisible. Those which, knotted together, form the fabric of our daily life to the point of becoming inseparable from it”.

Today’s IT systems are decentralized, diverse, highly connected, easy to use and often invisible. A whole range of discrete devices communicate discreetly through a fabric of heterogeneous networks.

2.5.4.2. Spontaneous and autonomous networks

Spontaneous and autonomous networks, also called ad hoc networks, have an important place in this development. Ad hoc networks (from the Latin for “for this purpose”, i.e. “formed for a specific purpose”, as in an ad hoc commission set up to solve a particular problem) are wireless networks capable of organizing themselves without a predefined infrastructure.

The first research on multi-hop ad hoc networks dates back to DARPA’s packet radio work in the early 1970s, as the military was very interested in this approach for the battlefield.

Ad hoc networks, in their most common mobile configuration, are known as MANET (for Mobile Ad hoc NETworks). MANET is also the name of an IETF working group, created in 1998–1999, tasked with standardizing IP-based routing protocols for wireless ad hoc networks.

A MANET network is characterized by:

  • – the lack of a centralized infrastructure. Nodes have one or more wireless interfaces and routing features that allow a packet to reach its destination hop by hop, without a designated router (a small sketch of this node-to-node forwarding follows this list);
  • – a dynamic topology. The mobile units of the network join, leave and move freely and arbitrarily; the topology can therefore change quickly and randomly, at unpredictable moments;
  • – the heterogeneity of the nodes;
  • – an energy constraint. Mobile equipment generally runs on limited batteries; since part of the energy is already consumed by the routing function, the services and applications supported by each node are limited;
  • – limited bandwidth;
  • – vulnerability. It is easier to insert a rogue node into the network, the detection of an intrusion or a denial of service is more delicate, and the absence of centralization makes it difficult to consolidate intrusion detection information;
  • – the system can operate in isolation or interface to fixed networks through gateways.
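
To make this node-to-node forwarding concrete, here is a minimal sketch in Python, assuming a snapshot of a small MANET represented as an adjacency dictionary (the node names and topology are invented for illustration). It floods a route request breadth-first until the destination is reached, which is the basic intuition behind reactive MANET routing; real protocols such as AODV add sequence numbers, route maintenance and loop prevention.

from collections import deque

# Hypothetical snapshot of a MANET topology: each node only knows its
# current radio neighbors; there is no designated router.
neighbors = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}

def discover_route(source, destination):
    # Breadth-first flooding of a route request: each node passes the
    # request on to its neighbors until the destination is reached.
    queue = deque([[source]])
    visited = {source}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == destination:
            return path                      # hop-by-hop route found
        for nxt in neighbors.get(node, []):
            if nxt not in visited:           # avoid re-flooding a node
                visited.add(nxt)
                queue.append(path + [nxt])
    return None                              # destination unreachable

print(discover_route("A", "E"))              # for example: ['A', 'B', 'D', 'E']

When a node moves and the topology changes, such a route becomes stale and must be rediscovered, which is precisely why dynamic topology and limited bandwidth are central constraints of MANETs.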

These technologies are particularly used in sensor networks. The sensor is a device that transforms the state of an observed physical and/or logical quantity into a usable quantity. Wireless sensor networks are considered a special type of ad hoc networks where fixed communication infrastructure and centralized administration are absent and nodes play the role of both hosts and routers.

This type of network consists of a set of micro-sensors scattered across a geographical area called the catchment field, which defines the terrain of interest for the phenomenon being captured. The deployed micro-sensors are capable of continuously monitoring a wide variety of ambient conditions.

Sensor networks respond to the emergence of an increased need for diffuse and automatic observation and monitoring of complex physical and biological phenomena in various fields: industrial (quality control of a manufacturing chain), environmental (monitoring of pollutants, seismic risk, etc.), security (risk of failure of large-scale equipment such as a dam), etc.

2.6. Networks and security

2.6.1. Vulnerabilities

Networks are fuelling ever-increasing cybercrime, not to mention their use for malicious political purposes, which has been in the news since 2017.

In May 2017, hackers attacked thousands of governments and businesses around the world with malware, blocking the use of computers and demanding a ransom. Considered the largest ransomware cyberattack in history to date, WannaCry infected more than 300,000 computers across more than 150 countries in a matter of hours.

The US-based credit reporting company Equifax was the target of a massive hacking attack in 2017. The personal information of more than 140 million Americans and more than 200,000 consumer credit card numbers were accessed by hackers. This attack exploited a vulnerability in one of the company’s applications, allowing access to certain confidential files.

The main objectives of computer attacks are:

  • – to get access to your computer or system;
  • – to steal information, including personal data and bank data;
  • – to disrupt the proper functioning of the service;
  • – to use the system as a stepping stone (relay) for attacking other systems.

There are several types of malware:

  • – viruses are software programs that can be installed on a computer without the knowledge of its legitimate user;
  • – reticular viruses (botnets) spread across millions of computers connected to the Internet;
  • – a Trojan horse is a software program that presents itself in an honest light and, once installed on a computer, performs hidden actions on it;
  • – a backdoor is hidden communication software, installed for example by a virus or a Trojan horse, which gives an external attacker access to the victim computer through the network;
  • – spyware collects, without the knowledge of the legitimate user, information and communicates it to an external agent;
  • – unsolicited electronic mail (spam) consists of massive electronic communications, in particular e-mails, sent without the recipients’ solicitation, for advertising or dishonest purposes. It is a scourge: it is estimated that 70% of the e-mail circulating around the world is spam;
  • – phishing is a fraudulent technique used by hackers to retrieve information (usually banking information) from Internet users. It consists of deceiving Internet users by means of an e-mail that appears to come from a trusted company, such as a bank or a trading company.

A denial of service attack is a type of attack designed to make an organization’s services or resources (usually servers) unavailable for an indefinite period of time. There are two types: denial of service by saturation and denial of service by exploiting vulnerabilities.

It is essential to put in place measures to secure networks to ensure:

  • – confidentiality, which aims to ensure that only authorized persons have access to the resources and information to which they are entitled;
  • – authenticity, which makes it possible to verify the identity of the actors of a communication;
  • – integrity, which aims to ensure that resources and information are not corrupted, altered or destroyed (a small illustration follows this list);
  • – availability, which is intended to ensure that the system is ready for use and that services are accessible.
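
As a small, hedged illustration of the integrity (and authenticity) goals, here is a sketch using Python’s standard hashlib and hmac modules: the sender attaches a message authentication code computed with a shared secret key, and the receiver recomputes and compares it. The key and messages are invented for the example.

import hashlib
import hmac

# Shared secret between sender and receiver (illustrative value only).
SECRET_KEY = b"a-shared-secret-key"

def tag(message: bytes) -> str:
    # HMAC-SHA256 tag: protects the message's integrity and authenticates
    # its origin, since nobody without the key can forge a valid tag.
    return hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()

def verify(message: bytes, received_tag: str) -> bool:
    # Recompute the tag and compare it in constant time.
    return hmac.compare_digest(tag(message), received_tag)

original = b"transfer 100 euros to account 42"
t = tag(original)

print(verify(original, t))                              # True: message intact
print(verify(b"transfer 900 euros to account 42", t))   # False: altered in transit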

2.6.2. The protection of a network

We have the ability to protect our personal network, or that of our organization, against certain attacks from outside. Several complementary methods are available. We can limit communication and visibility from the outside. The most commonly used method is the implementation of a firewall, which forces all traffic through a single point of control (inbound and outbound: who? for what?). Its main task is to control traffic between different trust zones by filtering the data flows that pass between them.
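
To fix ideas, here is a minimal sketch of the filtering logic a firewall applies, written in Python with invented rules and packet fields; a real firewall operates in the operating system or in a dedicated appliance, with much richer rules and stateful inspection.

from dataclasses import dataclass

@dataclass
class Packet:
    src_ip: str
    dst_ip: str
    dst_port: int
    protocol: str   # "tcp" or "udp"

# Ordered rule list: the first matching rule wins, the default is to deny.
RULES = [
    {"action": "allow", "dst_port": 443, "protocol": "tcp"},   # HTTPS
    {"action": "allow", "dst_port": 53,  "protocol": "udp"},   # DNS
    {"action": "deny",  "dst_port": 23,  "protocol": "tcp"},   # Telnet blocked
]

def filter_packet(packet: Packet) -> str:
    # Return "allow" or "deny" according to the first matching rule.
    for rule in RULES:
        if (packet.dst_port == rule["dst_port"]
                and packet.protocol == rule["protocol"]):
            return rule["action"]
    return "deny"   # default policy: whatever is not explicitly allowed is refused

print(filter_packet(Packet("192.168.1.10", "93.184.216.34", 443, "tcp")))  # allow
print(filter_packet(Packet("192.168.1.10", "93.184.216.34", 23, "tcp")))   # deny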

Figure 2.17. Firewall diagram

2.6.3. Message encryption

Encryption is a process of cryptography through which we make a document impossible to understand for anyone who does not have the decryption key. Cryptography is very old, and children even today can still have fun encrypting messages with simple codes so that their parents do not understand their meaning!

There are two main families of encryption: symmetric and asymmetric. Symmetric encryption encrypts and decrypts content with the same key, known as the secret key. It is particularly fast, but requires the sender and receiver to agree on a common secret key beforehand or to transmit it via another channel.
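
As an illustration, here is a minimal symmetric-encryption sketch using the third-party Python package cryptography (its Fernet recipe combines AES encryption with an integrity check); the package must be installed separately, and the message is invented for the example. The same key serves for both encryption and decryption.

# Requires the third-party package: pip install cryptography
from cryptography.fernet import Fernet

# The single secret key, shared by sender and receiver.
secret_key = Fernet.generate_key()
cipher = Fernet(secret_key)

token = cipher.encrypt(b"meeting at 10:00, room B")   # encryption
print(token)                                          # unreadable ciphertext

plain = cipher.decrypt(token)                         # decryption with the same key
print(plain)                                          # b'meeting at 10:00, room B'

Anyone who intercepts the token but not the secret key cannot read the message, which is why the key exchange itself must be protected.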

Asymmetric encryption assumes that the (future) recipient has a pair of keys (private key, public key) and has ensured that potential senders have access to its public key. In this case, the sender uses the recipient’s public key to encrypt the message while the recipient uses his private key to decrypt it.
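
Continuing with the same package, here is a hedged sketch of asymmetric encryption with an RSA key pair: anyone holding the recipient’s public key can encrypt, but only the holder of the private key can decrypt.

# Requires the third-party package: pip install cryptography
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# The recipient generates a key pair and publishes only the public key.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# The sender encrypts with the recipient's public key...
ciphertext = public_key.encrypt(b"the code is 4521", oaep)

# ...and only the recipient's private key can decrypt it.
print(private_key.decrypt(ciphertext, oaep))   # b'the code is 4521'

In practice, the two families are often combined: asymmetric encryption is used to exchange a temporary secret key, and the bulk of the data is then encrypted symmetrically because it is much faster.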

2.6.4. Checking its security

It is essential for everyone to ensure the security of their IT environment, even if it is just a personal computer. Here are a few simple rules:

  • – back up your data; do not keep the backups in the same place; check that they can actually be restored;
  • – keep the anti-virus up to date;
  • – check the security of access (wired or Wi-Fi) to your environment;
  • – handle spam and phishing attempts properly (vigilance, systematic deletion, reporting to the operator and to the organizations in charge of fighting cybercrime).
  1. Hexadecimal characters are numbers from 0 to 9 and letters from A to F.
  2. The Internet Society regularly publishes its “Global Internet Reports”, which are available at https://future.internetsociety.org/.
  3. Source: Internet Society.
  4. Source: http://www.internetworldstats.com/stats.htm.
  5. Available at: https://future.internetsociety.org/2017/wp-content/uploads/sites/3/2017/09/2017-Internet-Society-Global-Internet-Report-Paths-to-Our-Digital-Future.pdf.
  6. A British engineer born in 1955, he invented the World Wide Web, its protocols and languages (HTTP, HTML), and the URLs used to uniquely identify sites. Creator of the W3C, the organization that regulates the Web, he is also at the origin of the semantic web, which can be interpreted by machines thanks to semantic technologies.