3
Network Optimization using Artificial Intelligence Techniques

Asma AMRAOUI and Badr BENMAMMAR

Abou Bekr Belkaid University, Tlemcen, Algeria

3.1. Introduction

The telecommunications field has greatly progressed in recent years due to the booming markets of mobile phones and the Internet and the deployment of broadband and intelligent networks. Thanks to these developments, network environments have become more complex, as they continuously process a huge amount of information; this renders network management very difficult.

Communication services providers must currently meet increasing customer demands for better quality services and better customer experience. Telecommunication companies seize these opportunities and are taking advantage of the large amount of data collected throughout the years from their broad customer base. These data are extracted from equipment, networks, mobile applications, geolocalizations and detailed customer profiles.

In order to process and analyze such huge volumes of data and extract useful information, telecommunication companies take advantage of artificial intelligence (AI) to provide better customer experience, improve operations and increase company revenues with new products and services.

Indeed, AI can help identify the anomalies and proactively solve the problems before they affect customers, and hence optimize the network. Network optimization, predictive maintenance and virtual assistants are examples of cases in which AI had an impact on the telecommunications sector.

This chapter deals with AI in general, and defines the various intelligent techniques commonly used in the telecommunications sector, including expert systems (ESs), machine learning, multiagent systems (MASs), but also the Internet of Things (IoT) and big data, which are very trendy and successful in telecommunications companies.

This chapter focuses on four aspects of network optimization: network performance, quality of service (QoS), security and energy consumption. For each of these criteria, an explanation is provided of what their optimization involves and how AI can contribute to it.

3.2. Artificial intelligence

3.2.1. Definition

A person’s intelligence is often associated with his/her capacity for reasoning and thinking; it is opposed to instinct, which is associated with reflex rather than with elaborate thought. As a science, AI was founded after the Second World War, once the first electronic computers had been invented. This science had a twofold objective: to simulate human capacities in an attempt to better understand human intelligence, and to replace human labor in some automatic and repetitive tasks.

The term AI was coined by John McCarthy and Marvin Lee Minsky, who defined it as:

“the construction of computer programs that engage in tasks that are currently more satisfactorily performed by human beings because they require high-level mental processes such as: perceptual learning, memory organization and critical reasoning”.

AI could be considered to help in the design of systems that are able to replicate human behavior in their reasoning activities.

For several years now, intelligence has almost always been associated with learning capacities. Through learning, an intelligent system can execute a task and improve its performance with experience.

3.2.2. AI techniques

3.2.2.1. Expert systems

An expert system (ES) is a tool that can replicate the cognitive mechanisms of a human expert in a particular domain. It is one of the paths potentially leading to AI. More precisely, an ES is software that can answer questions by means of reasoning based on facts and known rules.

ESs are generally composed of:

  • knowledge base;
  • interface;
  • inference engine.

The knowledge base is the set of data used by the inference engine. It stores the field-specific knowledge of the system, gathering all the knowledge of an expert in the respective field.

The knowledge base contains:

  • engagement standards (expert knowledge): basic information and system configuration information, measures, laws, parameters and contractual data;
  • inference rules (know-how): set of logic deduction rules used by the inference engine;
  • facts (experience) base: set of data based on which the system starts to operate. This base is enriched as the system makes deductions. This work space is a sort of short-term memory, where the system also stores pending rules, subproblems, etc.

Interfaces are used for the dialogue between the expert, who is in charge of creating the knowledge base, and the machine.

The inference engine is the mechanism that enables the inference of new knowledge from the system’s knowledge base. It is the system’s brain and it is used for triggering the rules and chaining them one after the other.

The two most commonly employed mechanisms for triggering rules are:

  • forward chaining;
  • backward chaining.
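As a minimal sketch of the first mechanism, forward chaining repeatedly fires every rule whose premises are already known facts until nothing new can be inferred. The facts and rules below are hypothetical, not taken from a real diagnostic system:

```python
# Minimal forward-chaining sketch. The facts and rules below are
# hypothetical, not taken from a real diagnostic system.

def forward_chain(facts, rules):
    """Repeatedly fire rules whose premises are all known facts."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and premises <= facts:
                facts.add(conclusion)  # new knowledge is inferred
                changed = True
    return facts

rules = [
    ({"link_down"}, "alarm_raised"),
    ({"alarm_raised", "no_redundancy"}, "service_degraded"),
]
print(forward_chain({"link_down", "no_redundancy"}, rules))
```

Backward chaining would instead start from a goal (e.g. "service_degraded") and work back to the facts that could establish it.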

An ES is different from classical software. Indeed, classical software is developed around a set of algorithmic processes. Problem resolution follows a sequence of stages that is well defined by the programmer. An ES can integrate the capacity to determine by itself the processes adapted to a given state of the input parameters, in other words a sequence of stages that was not predefined by the programmer for this state. This difference between classical software and an ES is essentially due to the method of organization and use of specialized knowledge.

The main advantage of an ES is its very high performance in solving the problems covered by the rules formulated during the knowledge-acquisition phase. Nevertheless, for a large domain, the number of rules increases significantly and their maintenance becomes increasingly difficult. Indeed, it should be possible to continue the expertise of the studied domain, formulate new rules and manually correlate them with all existing rules.

An ES is therefore highly adapted to fields that change to a very little extent. On the other hand, if the field is very dynamic, certain expert rules may very rapidly become obsolete and weaken the system, rendering it unable to solve certain problems. These weaknesses of rule-based ESs led to the development of a new approach to the representation of expert knowledge.

There are two types of ESs:

  • rule-based classical ESs, which formulate rules in order to describe and understand the propagation of faults and alarms in a telecommunication network;
  • model-based evolutionary ESs, which draw their inspiration from AI sciences and consider a phenomenon as understood only if it can be replicated or simulated. This category includes model-based diagnostic methods that develop reasoning on an explicit representation of the network structure and operation and the methods that try to artificially learn the network behavior without modeling it.

3.2.2.2. Case-based reasoning

Case-based reasoning (CBR) is an AI paradigm that involves solving a new problem, referred to as the “target problem”, based on a set of already solved problems. CBR is analogical reasoning that globally satisfies what is known as the “analogy square”, as illustrated in Figure 3.1.

Schematic illustration of the analogy square of CBR.

Figure 3.1. Analogy square of CBR

The retrieval of similar source cases is naturally essential in the cycle. Recall that the source case to be chosen is normally the one whose problem description is as close as possible to the description of the target problem.

Reuse involves reusing a similar case in order to obtain a reasoning trace for the target case, while revision enables a correction, so that the case has a correct solution.

Learning a new solved case is an opportunity for the knowledge base to become enriched.

The principle of CBR involves the recovery, adaptation and execution of the solutions to previous problems in order to evaluate current problems. The past diagnostic solutions are stored as cases in a knowledge base. The cases contain the most relevant characteristics of past diagnostic solutions; they are adapted and used to solve the new problems.

The experience acquired through the diagnosis of these new problems constitutes new cases stored for future use. This system integrates the capacity to learn not only from its previous correct diagnostic solutions, but also from its failures. Indeed, when the attempt to diagnose a situation fails, the system identifies and logs the reason of this failure, so that it can remember it during future diagnoses.

A CBR system has a case base. For each case, there is a detailed description of the problem and a solution. Moreover, an engine is needed in order to use this information. The engine finds the cases that are similar to the new problem to be solved. After analysis, the engine provides an adapted solution that must be validated. Finally, the engine adds the problem and its solution to the case base.
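The retrieve, reuse, revise and retain steps just described can be sketched in a few lines; the cases, the distance measure and the solutions below are purely illustrative:

```python
# Toy CBR cycle: retrieve the nearest stored case, reuse its solution,
# optionally revise it, then retain the new solved case. Cases, the
# distance measure and the solutions are purely illustrative.

def distance(p, q):
    """Squared Euclidean distance between two problem descriptions."""
    return sum((a - b) ** 2 for a, b in zip(p, q))

def solve(case_base, target, revise=lambda s: s):
    _, solution = min(case_base, key=lambda c: distance(c[0], target))  # retrieve
    adapted = revise(solution)            # reuse and revise
    case_base.append((target, adapted))   # retain the new solved case
    return adapted

cases = [((0.9, 0.1), "replace_card"), ((0.1, 0.8), "reset_link")]
print(solve(cases, (0.2, 0.9)))
```

Note how the case base grows with every solved problem, which is exactly the learning behavior described above.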

3.2.2.3. Machine learning

These are techniques arising from AI, enabling machines to learn, in a more or less autonomous manner, to accomplish tasks without being explicitly programmed.

Machine learning refers to the development, analysis and implementation of methods enabling a machine to evolve through a learning process, and hence fulfill tasks that are difficult or impossible to accomplish by more classical algorithmic means.

There are three big types of machine learning:

  • supervised learning: the algorithm attempts to predict a phenomenon or a measure based on past, labeled examples of that phenomenon. The database is formed of labeled data;
  • unsupervised learning: does not involve the prediction of a measure; the algorithm rather attempts on its own to detect the characteristic structures or groups in a given set of observations. Data are not labeled; the objective is then to find a relation between data;
  • reinforcement learning: the intelligent agent observes the effects of its actions and deduces the quality of its actions in order to improve its future actions. The action of the algorithm on the environment generates a return value (reward or punishment) that guides the learning algorithm.
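As an illustration of supervised learning in its simplest form, the sketch below predicts a label from labeled history with a 1-nearest-neighbor rule; the data points are invented:

```python
# Supervised learning in its simplest form: a 1-nearest-neighbor
# classifier predicts a label from labeled history. The data points
# are invented for illustration.

def predict(labelled, x):
    """Return the label of the closest labeled example."""
    nearest = min(labelled,
                  key=lambda p: sum((a - b) ** 2 for a, b in zip(p[0], x)))
    return nearest[1]

history = [((1.0, 1.0), "normal"), ((8.0, 9.0), "congested")]
print(predict(history, (7.5, 8.0)))  # closest to the "congested" example
```

An unsupervised algorithm would instead receive the points without their labels and try to discover the two groups by itself.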

To conclude, the main objective of machine learning is to automatically extract and use the information present in a dataset. But the actual potential of machine learning resides in processing data that were never seen previously, while still finding the correct answers. For this reason, the core of machine learning is the amount and quality of data, as well as the choice of the machine learning algorithm best suited to our data.

3.2.2.4. Neural networks

The human brain is composed of a set of interconnected neurons transmitting elaborated models of electrical signals. Dendrites receive the input signals and based on these inputs, a neuron produces an output signal via an axon (Shiffman 2012).

Artificial neural networks draw their inspiration from the biological operation of the human brain, and therefore, by analogy with a biological neuron, an artificial neuron is perceived as an autonomous processor with unidirectional channels for communication with other neurons connected to it.

An artificial neuron has several input channels operating as dendrites, and only one output channel operating as an axon. The connection points between neurons are known as “synapses”. The typical operation of an artificial neuron is to calculate a weighted sum of the input signals and generate an output signal if this sum exceeds a certain threshold. The weighted sum of input signals is done by the combination function, which multiplies the input vector by a transformation matrix. The output signal is generated by the activation function.
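The behavior just described, a combination function followed by a step activation against a threshold, can be sketched as follows; the weights and threshold are arbitrary:

```python
# The artificial neuron described above: a weighted sum of the inputs
# (combination function), then a step activation against a threshold.
# Weights and threshold are arbitrary.

def neuron(inputs, weights, threshold):
    s = sum(w * x for w, x in zip(weights, inputs))  # combination function
    return 1 if s > threshold else 0                 # step activation function

# With weights (0.5, 0.5) and threshold 0.7, both inputs must be active,
# which realizes a logical AND.
print(neuron((1, 1), (0.5, 0.5), 0.7))  # 1
print(neuron((1, 0), (0.5, 0.5), 0.7))  # 0
```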

Figure 3.2 represents the structure of an artificial neural network.

Schematic illustration of artificial neural network.

Figure 3.2. Artificial neural network (Decourt 2018)

3.2.2.5. Multiagent systems

A multiagent system (MAS) is a group of agents, each of which has one or several elementary competences. The purpose is to have these agents work together in order to solve a problem or accomplish a specific task. It is a sort of intelligence distribution, each autonomous agent having only a local view of the problem or an elementary task of the work to be done.

Ferber and Perrot (1995) define an MAS as follows:

“A multi-agent system is composed of the following elements:

  • an environment, which is a space generally having a metrics;
  • a set of objects located in space; they are passive; they can be perceived, destroyed, created and modified by the agents;
  • a set of agents, which are the active entities of the system;
  • a set of relations, which interconnect the objects;
  • a set of operations enabling agents to perceive, destroy, create, transform and manipulate the objects;
  • a set of operators in charge of representing the application of these operations and the reaction of the world to this modification attempt (laws of the universe)”.

MASs are generally used when the problem is too complex to be solved by a single system, due to several software or hardware limitations. A specific case of use is when components have multiple interrelations. MASs are an excellent tool to ensure autonomous control in a widely distributed system with very dynamic characteristics.

MASs are certainly the ideal solution for scenarios requiring a system that must dynamically adapt when new components are added or removed and these components must easily adapt when the environment undergoes modifications. It should be kept in mind that one of the most important advantages of MASs is their modularity, which enables simpler programming, in the sense that adding new agents to an MAS poses no significant problem; this explains their scalability (Amraoui 2015).

The interest of the agent-based solution resides in the complete absence of a central entity controlling agent operation, which provides high robustness and reliability (if an agent breaks down, the system continues to operate).
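A minimal sketch of this decentralized behavior is given below: each agent has only a local view (its own load) and decides independently, so removing one agent does not stop the others. The agent names and the load threshold are illustrative:

```python
# Decentralized control sketch: each agent has only a local view (its own
# load) and decides independently; removing an agent does not stop the
# others. Agent names and the threshold are illustrative.

class Agent:
    def __init__(self, name, load):
        self.name, self.load = name, load

    def act(self):
        # purely local decision: shed traffic when the local load is high
        return f"{self.name}: shed" if self.load > 0.8 else f"{self.name}: ok"

agents = [Agent("A", 0.9), Agent("B", 0.4)]
print([a.act() for a in agents])
agents.pop(0)                      # one agent breaks down...
print([a.act() for a in agents])   # ...the rest keep operating
```

Adding a new agent is just as simple as removing one, which reflects the modularity and scalability mentioned above.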

3.2.2.6. Internet of Things

The IoT is a term that generally describes a system in which physical objects are connected to the Internet, forming what is now known as an “ecosystem of connected objects”.

The IoT starts in the physical world with sensors that gather information; this information is then forwarded thanks to system interconnection and integration; the data are finally processed and stored in order to be analyzed and used.

An essential characteristic is that the IoT can transform ordinary objects into intelligent devices. They can be identified by an IP address, record states via sensors and have memory capacity via microchips. The integrated minicomputers enable them to control themselves, manage their environment and automatically exchange data. Thanks to machine learning, they are sometimes even capable of recognizing and generalizing patterns and drawing conclusions in order to adapt to situations and optimize continuously.

3.2.2.7. Cloud computing

The official definition of cloud computing was given by Mell et al. (2011): “Cloud Computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction”.

Cloud computing stands for providing various hardware and software solutions via the Internet. Processor performance, storage space and software environments can be rented by users in order to extend or replace their own infrastructure. Cloud computing gives users the possibility to store an enormous amount of data and to access it from any place at any moment. In order to use the stored information, users only need a basic Internet connection.

3.2.2.8. Big data

Big data is a term that describes any collection of data whose volume and complexity are such that it is difficult to process it by classical tools for application processing. It is a generic term employed to designate the strategies and technologies used for gathering, organizing and analyzing vast sets of data.

3.3. Network optimization

Network optimization involves improving network operation in terms of security and reliability, performance and rapidity, QoS and, obviously, energy consumption.

This section gives an outline on how AI optimizes networks.

3.3.1. AI and optimization of network performances

With the growing demand for Wi-Fi and the integration of connected objects into our lives, the lack of visibility and network control are major challenges in network management. Therefore, continuously ensuring the same QoS and the same performance is a challenge to be met. One promising solution is the implementation of a cognitive, or intelligent, network.

Cognitive networks are a specific type of networks capable of learning, predicting and improving performances, security and user experience. These solutions use the cloud, data analysis, machine learning and AI to determine basic performances, follow activity and identify problems.

The large amount of data entering the network from different nodes inevitably requires very high computation power. With the use of AI, all these data can be studied through machine learning. Hence, the network can understand, for example, the moment when applications reach non-optimal performance, or compare a constant flow of historical and current data.

3.3.2. AI and QoS optimization

Recent technological progress has enabled the low-cost manufacturing of small elements, such as small wireless sensors used for on-site measurement of ambient conditions. These sensors are used due to their low energy consumption, short radio range, limited memory and low cost.

It is also worth noting that multimedia traffic has recently increased significantly, as recent technologies (the IoT, for example) add new traffic types, and especially video flows. According to a Cisco report1, video traffic in 2021 was forecast to be three times that of 2016, representing 82% of the total Internet traffic in 2021.

This indicates that the network QoS will continue to be a requirement for the real-time transmission of unpredictable data. Consequently, heavy tasks will still have to be executed in the Cloud.

This being said, ISPs (Internet service providers) have no control over, and generally no knowledge of, the Wi-Fi access points used by mobile device users; therefore, they cannot guarantee that QoS is provided as promised.

The term QoS does not only concern throughput, packet loss, latency or jitter; it is also a matter of availability. To further optimize networks and enable efficient energy management, high reliability and availability, it is important to secure communications, and this requires AI integration for better dynamic management of the network traffic.

With the use of AI techniques, it is possible to discover the various types of flows being transmitted in the network. Traffic models can thus be obtained, helping in the decision-making process.

In Nowicki and Uhl (2017), the authors consider that multimedia traffic can be more efficiently managed using AI techniques. Their paper proposes an intelligent system to provide QoS and Quality of Experience (QoE) in the video monitoring of the traffic generated by the equipment in the IoT.

Once lost, network quality cannot be reestablished. But a possibility exists to integrate quality and cognitive intelligence at each end of the connection.

Machine learning can also be useful for mitigating the risks related to network unavailability or to security exploits. For example, with cognitive radio (CR) (Benmammar et al. 2012), the application knows that you are about to go through a coverage dead zone and can consequently be proactive.

Further works in the literature rely on meta-heuristics, such as Benmammar (2017), which uses the Shuffled Frog Leaping Algorithm (SFLA) meta-heuristic to improve QoS in a CR network. The objective is to maximize throughput and minimize error rate and energy consumption in this type of network.

3.3.3. AI and security

According to a study conducted by Cigref (2018), over one in two companies have been hit by cyberattacks, which is enormous in terms of costs.

In France, only 29% of the companies consider cybersecurity a high priority challenge2. Even more worrying, only one company in two has implemented a strategy focusing on the fight against cyber risks.

In order to counteract this type of attack, employees must be aware of basic security measures and of the need to use antivirus software and firewalls. But this is obviously not enough to provide full security, and for this purpose there are other more evolved, higher-performance solutions based on AI, and more specifically on machine learning. These new methods enable the easier detection of anomalies and generate alerts quite rapidly in order to inform system administrators.

Unlike classical methods such as antivirus software, which check whether the machine contains a specific signature (an indication of whether a program is considered malicious), machine learning-based systems learn to search for the various characteristics of malware, learning their behavior in order to detect them more rapidly; these systems are therefore more flexible.

Techniques known as intelligent are intensively used in the fight against spam and phishing and have yielded good results. These techniques can also be used to protect the system against attacks coming from the inside (from ill-intentioned employees). In this sense, AI can conduct a behavioral analysis: it studies the behavior of a computer and makes it possible to alert those in charge of system security in case of deviant computer behavior (a data leakage attempt, for example).
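A behavioral analysis of this kind can be sketched as a simple statistical baseline that flags activity deviating strongly from learned history; the traffic volumes below are invented for illustration:

```python
# Behavioral baseline sketch: flag activity that deviates strongly from
# the learned history. The traffic volumes are invented for illustration.
import statistics

def is_anomalous(history, value, k=3.0):
    """Flag values more than k standard deviations from the historical mean."""
    return abs(value - statistics.mean(history)) > k * statistics.stdev(history)

baseline = [100, 110, 95, 105, 98, 102]   # normal daily outbound traffic (MB)
print(is_anomalous(baseline, 104))  # False: a usual volume
print(is_anomalous(baseline, 900))  # True: possible data leakage attempt
```

Real systems use far richer features and models, but the principle is the same: learn what "normal" looks like, then alert on deviations.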

This being said, while the operation of intelligent techniques enables them to analyze situations and behaviors, a large amount of data is required for them to yield efficient and satisfactory results. A lack of data may therefore lead to false results and hence to false alerts; this is why full automation of a security system is impossible, as human intervention remains essential in some cases.

The current AI engines use statistical data for classification (malicious or honest), but their capacity may also become a weakness. Indeed, the machine learning engine is equivalent to humans in terms of learning capacity, but at a larger scale and with much higher speed.

For an AI to be efficient, its learning engine is fed a very large amount of information. As it receives information, the engine builds a statistical model that enables it to autonomously determine when the sought-for phenomenon occurs.

In fact, since the strength of statistical algorithms lies in their capacity to recognize patterns, attackers may progressively adapt their behaviors to appear normal, or act in a manner that induces confusion.

Moreover, many systems may detect anomalies at the beginning, but after a while they learn to accept them as normal behaviors. This offers the attackers an advantage, as they can mask their activities by observing normal behaviors, such as the use of the “https” protocol to send data to a server.

Attackers may add execution stages that do not contribute to reaching their objective, but are designed to make the process appear normal. Moreover, weak signals that seem harmless to the human analyst may prove efficient in deceiving machine learning algorithms.

Finally, it can be said the AI brings an additional security layer and it can significantly slow down computer hackers. Indeed, even though a human can deal with many threats per hour, he can be rapidly overtaken by a significant flow of threats. AI is helping humans in their processing of security incidents and can even rapidly suggest or apply remediation actions.

Intelligent systems are frugal consumers of CPU and RAM resources compared to traditional antivirus software and do not necessarily need an Internet connection. They no longer need to know a threat in order to block it, nor do they require permanent updates, as the model relies on a statistical approach. The system analyzes a large amount of data describing the various characteristics of a file: its potential signature, its size and its code, all those recurrent series of bits. The file is then assigned a score, which determines whether it may be executed or not.

3.3.4. AI and energy consumption

When used in energy production or consumption, AI operates by means of sensors installed in the control systems. This enables real-time processing of data. Because of this, system anomalies or malfunctions are detected and dealt with much more rapidly. Once the problems are highlighted, the faulty system or equipment can be replaced; this enables maximum optimization of energy efficiency.

The development of connected objects associated with the use of AI technologies enables the deployment of tools aiding the intelligent consumption and management of energy. It also enables the deployment of systems for real-time prediction and management based on storage and self-consumption.

Energy consumption and production prediction for real-time energy management purposes can be done using regression algorithms such as random forest or restricted Boltzmann machine. In terms of energy efficiency improvement, k-means clustering methods can be used.
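As a sketch of the clustering approach mentioned above, a one-dimensional k-means with two clusters can separate two energy-consumption regimes (e.g. base load versus peak usage); the readings below are invented:

```python
# One-dimensional k-means with two clusters, separating two invented
# energy-consumption regimes (e.g. base load vs. peak usage).

def kmeans_1d(values, iters=20):
    centers = [min(values), max(values)]  # simple initialization, two clusters
    for _ in range(iters):
        groups = [[] for _ in centers]
        for v in values:
            # assign each reading to its nearest center
            idx = min(range(len(centers)), key=lambda i: abs(v - centers[i]))
            groups[idx].append(v)
        # move each center to the mean of its group
        centers = [sum(g) / len(g) if g else c for g, c in zip(groups, centers)]
    return centers, groups

consumption = [1.1, 0.9, 1.0, 5.2, 4.8, 5.0]  # daily readings in kWh
centers, groups = kmeans_1d(consumption)
print(centers)  # close to [1.0, 5.0]
```

In practice, libraries and multidimensional profiles (hourly load curves, temperatures, etc.) would be used, but the assign-then-update loop is the same.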

3.4. Network application of AI

3.4.1. ESs and networks

3.4.1.1. ES for machine maintenance

An expert diagnostic system is a series of computer applications that integrate a large base of knowledge or expert reasoning and automatically infer the root causes of the observed anomalies.

It is a computer system intended to determine what causes an equipment failure by analyzing and representing the knowledge and reasoning of one or several maintenance specialists. It makes it possible for an operator with average or even poor technical knowledge to look for the probable cause of a failure, by providing the system with minimum information, such as the type of defective machine and the observed failure mode.

Knowledge modeling is the most important part in the design of a computer-aided diagnosis system, therefore the elements to be studied and their relations should be properly defined.

The facts base and the work space of the computer-aided diagnosis system are enriched as the system is deployed in terms of selecting the most probable cause of the failure and its adequate remedy.

To build a base of valid reasoning, diagnosis expert knowledge can be formalized as rules, decision trees, propositional logic, etc.

In the case of ES aiding machine maintenance (Kaushik et al. 2011; Raja’a and Jassim 2014), the knowledge base contains the machine-specific knowledge provided by maintenance experts. This knowledge takes the form of facts and rules.

In this case, learning relies on the data generated by the equipment of the network to be diagnosed and it essentially involves interpolation or induction-based solving of the reverse problem of fault and alarm propagation in a telecommunication network.

3.4.1.2. ES for the diagnosis of a multiplexer network

In Lor (1993), the authors developed a system for the diagnosis of multiplexer networks. Diagnostic expert knowledge is classified into two categories: generic expert knowledge and precise diagnostic task-specialized expert knowledge. The ES uses a database of static and dynamic information required during the diagnosis process. This information refers to the relations between logical entities (channel groups) and the physical entities (equipment and links), such as routing information, attributes of physical entities and incidence relations between network nodes and links.

A line can be diagnosed in two stages. The first step is to collect the available data on the line, such as the power levels transmitted and received by the equipment of the line, supply voltages, polarization currents, temperatures of the line equipment, transmission error counters and observed alarms. Each datum is stored in a key performance indicator (KPI). Then the predefined expert rules use these KPIs to generate an indication or a final diagnostic decision referred to as a “conclusion”.
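The two-stage diagnosis just described can be sketched as follows; the KPI names, thresholds and conclusions are hypothetical, not taken from a real ES:

```python
# Two-stage diagnosis sketch: KPIs collected on the line, then simple
# expert rules producing a "conclusion". KPI names, thresholds and
# conclusions are hypothetical, not taken from a real ES.

def diagnose(kpi):
    if kpi["rx_power_dbm"] < -30:          # very weak received power
        return "conclusion: line attenuated or cut"
    if kpi["error_count"] > 100:           # many transmission errors
        return "conclusion: noisy line"
    return "conclusion: line OK"

print(diagnose({"rx_power_dbm": -35, "error_count": 3}))
```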

Diagnostic expert de la ligne cliente - DELC (Expert Diagnosis of the Client Line) is a rule-based ES developed by Orange Labs France for the automated diagnosis of the Digital Subscriber Line (xDSL) and of the Gigabit Passive Optical Network (GPON) of Fiber To The Home (FTTH) type.

3.4.2. CBR and telecommunications networks

It often happens, particularly in a complex CBR system, such as the one involved in the diagnosis of a telecommunications network with a broad diversity of anomaly signatures, that an adaptation of preexisting solutions is required.

A CBR system named DUMBO for fault diagnosis in computer networks was proposed in Melchiors and Tarouco (1999). This system uses knowledge on diagnostic cases stored in an incident ticket system in order to propose diagnostic solutions to new anomalies. It aims to facilitate the stages of diagnosis and resolution of network management problems.

The knowledge unit of a CBR system is the case and not the rule. A case is easier to articulate, examine and evaluate than a rule (Hounkonnou 2013). A CBR system is also capable of learning from its own errors/failures and improving its performances. The phase of evaluation of solutions to new problems is worth paying attention to. Indeed, poor evaluation may drive the integration of erroneous cases in the knowledge base and thus cause the entire system to drift.

3.4.3. Automated learning and telecommunications networks

Machine learning can be employed to diagnose a larger number of faults than rule-based ESs can; it can diagnose problems outside its field of expertise, although in such cases its performance is lower.

The diagnosis of a telecommunications network requires a comprehension of the phenomenon of fault and alarm propagation in this network. This comprehension enables the acquisition of relevant knowledge in order to automatically solve the reverse problem of fault and alarm propagation.

To be able to diagnose anomalies occurring in a telecommunications network, the diagnosis system may be a learning system, which has the induction capacities enabling it to use its knowledge base to find the root causes of new anomalies, previously unknown to it.

This method no longer uses a reasoning base specialized on accurate diagnosis tasks, such as ES and CBR, but knowledge on the behavior or operation of a telecommunications network. This knowledge is used to build a structured and explicit representation of the network operation. The complexity involved by the development of a model-based diagnosis system is due to the fact that a large-scale telecommunications network is very often heterogeneous and dynamic, with a large number of equipment of various types.

In Steinder and Sethi (2004), the authors explain that model building is only the first stage in network diagnosis based on a model of the respective network. The second stage involves the development or implementation of an algorithm based on the model. The algorithm starts with the entities that triggered the alarms and explores the relations between the network entities formalized by the model. Their algorithm is also able to determine the correlated alarms and thus localize the offending entities of the network.

In Yu et al. (2009) and Fan et al. (2012), the authors explain how automated learning, and especially artificial neural networks, can be used for intrusion detection.

The model-based approach is easy to deploy and modify and it is appropriate for a large-scale network if the information related to network resources is available.

3.4.4. Big data and telecommunications networks

3.4.4.1. Big data and customer service improvement

Telecommunications companies collect enormous amounts of data from call recordings, mobile phone use, network equipment, server logs, invoicing and social networks, thus providing much information on their customers and their network. With big data technologies, telecommunications companies will use these data to improve their activity through advanced analyses.

With the rapid expansion of smartphones and other connected mobile devices, providers of communication services must rapidly process, store and draw information from the diversified volume of data going through their networks. Big data analyses are used in order to:

  – help improve efficiency by optimizing network use, improving customer experience and strengthening security;
  – predict the periods of intensive network use and target steps to reduce congestion;
  – identify the customers most likely to churn and target steps to prevent this turnover;
  – identify the customers most likely to have difficulties paying their invoices and target steps to improve payment collection.
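
The churn use case above can be sketched as a simple risk score; all weights, thresholds and field names below are invented for illustration, whereas real operators learn such scores from historical data:

```python
# Illustrative churn scoring sketch (weights and thresholds invented):
# combine usage signals available to an operator -- dropped calls,
# support tickets and a declining usage trend -- into a risk score.

def churn_risk(customer):
    score = 0.0
    score += 0.4 * min(customer["dropped_calls"] / 10, 1.0)
    score += 0.3 * min(customer["support_tickets"] / 5, 1.0)
    score += 0.3 * (1.0 if customer["usage_trend"] < 0 else 0.0)
    return score                      # 0.0 (loyal) .. 1.0 (likely to churn)

def at_risk(customers, threshold=0.5):
    return [c["id"] for c in customers if churn_risk(c) >= threshold]

customers = [
    {"id": "A", "dropped_calls": 12, "support_tickets": 6, "usage_trend": -0.2},
    {"id": "B", "dropped_calls": 1,  "support_tickets": 0, "usage_trend": 0.1},
]
print(at_risk(customers))             # ['A']
```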

Due to the significant volume of data, it is important to process them near the source, and then efficiently transfer them to various data centers for further use.

Real-time analysis of events is key to timely analysis of network services in order to improve customer satisfaction. Abandoned calls, locations with average network coverage quality, low download speed, unacceptable waiting time, etc., are examples of potential analysis subjects.
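
One of these analyses, abandoned (dropped) calls, can be sketched as a sliding-window check; the window size and alert threshold below are assumptions for the example:

```python
# Sketch of a real-time check on one of the metrics mentioned above:
# alert when the dropped-call rate over a sliding window of recent
# calls exceeds a threshold (window size and threshold are assumed).

from collections import deque

class DroppedCallMonitor:
    def __init__(self, window=100, threshold=0.05):
        self.calls = deque(maxlen=window)   # 1 = dropped, 0 = completed
        self.threshold = threshold

    def record(self, dropped):
        self.calls.append(1 if dropped else 0)

    def alert(self):
        if not self.calls:
            return False
        return sum(self.calls) / len(self.calls) > self.threshold

monitor = DroppedCallMonitor(window=20, threshold=0.1)
for _ in range(18):
    monitor.record(False)
for _ in range(4):
    monitor.record(True)                     # recent spike of dropped calls
print(monitor.alert())                       # True: 4/20 = 20% > 10%
```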

In network applications, the key to successful use of big data is to focus on problems and not on data points.

In terms of network administration, big data are collected from probes deployed at various points, as well as by network layer software installed on client and server equipment. When presented within a standard administration infrastructure, part of this information can feed common management practices.

3.4.4.2. Big data and security

Big data involves not only larger volumes of data, but data that grow exponentially, vary in nature and come from different sources.

Since the company has a view of the entire volume of data carried daily by its information system, instead of waiting for problems to occur before processing them, it can attempt to identify all the events that may signal a problem.

Once a risk is identified, protection is implemented to prevent its propagation. A proactive view of computer security can then be adopted, all the more so as the accuracy of the collected information enables better identification of threats by tracing them back to their source.
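
A minimal sketch of tracing events back to their source, assuming a hypothetical event log format and an invented alert threshold:

```python
# Sketch: correlate security events by source address and flag sources
# whose suspicious activity crosses a threshold, so a threat can be
# traced back to its origin before it spreads (threshold is assumed).

from collections import Counter

def suspicious_sources(events, threshold=3):
    counts = Counter(e["src"] for e in events if e["kind"] == "auth_failure")
    return {src for src, n in counts.items() if n >= threshold}

events = [
    {"src": "10.0.0.5", "kind": "auth_failure"},
    {"src": "10.0.0.5", "kind": "auth_failure"},
    {"src": "10.0.0.5", "kind": "auth_failure"},
    {"src": "10.0.0.9", "kind": "auth_failure"},
]
print(suspicious_sources(events))            # {'10.0.0.5'}
```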

3.4.5. MASs and telecommunications networks

The main task of agent techniques is to develop knowledge engineering that reduces information processing to knowledge-based reasoning. These techniques also enable the development of software engineering techniques adapted to service delivery.

The telecommunications field offers the perspective of open environments, either in the Web or in future network services. Moreover, it enables the exploration of various agent techniques: mobile agents, web assistants and knowledge reasoning agents.

Several decades ago, when companies wanted a private telecommunications network, they used a telecommunications infrastructure of their own. Later, these needs were met by private networks composed of connections leased from an operator; the management of these connections can be subcontracted to the operator. These connections, which are not part of the public networks, ensure a QoS, for example the bandwidth required between several given points, and full confidentiality of exchanged data.

Virtual private networks (VPNs) are offered as private networks implemented on public networks. With a VPN, any increasing demand for temporary connections can be met and the bandwidth that is not used by a company at a given instant is potentially available for another use. MAS can be used for the automation of VPN supply requiring several network services providers and for the automation of network resources negotiation in this context.
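
As an illustration of automated VPN supply, the following sketch (with invented provider names, capacities and prices) shows a customer agent collecting quotes from several provider agents and accepting the cheapest offer that meets its bandwidth requirement; real MAS negotiations involve iterated proposals and counter-proposals:

```python
# Sketch of automated bandwidth negotiation for a VPN: a customer agent
# asks several provider agents for a quote and accepts the cheapest
# offer meeting its bandwidth requirement (all figures are invented).

class ProviderAgent:
    def __init__(self, name, capacity_mbps, price_per_mbps):
        self.name, self.capacity, self.price = name, capacity_mbps, price_per_mbps

    def quote(self, demand_mbps):
        if demand_mbps > self.capacity:
            return None                       # cannot serve the request
        return (self.name, demand_mbps * self.price)

def negotiate(providers, demand_mbps):
    offers = [p.quote(demand_mbps) for p in providers]
    offers = [o for o in offers if o is not None]
    return min(offers, key=lambda o: o[1]) if offers else None

providers = [ProviderAgent("op1", 100, 2.0),
             ProviderAgent("op2", 50, 1.5),
             ProviderAgent("op3", 200, 1.8)]
print(negotiate(providers, 80))               # ('op3', 144.0): op2 lacks capacity
```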

The agents designate software components for the decentralized control or monitoring of network resources. They are used for the development of cooperation strategies enabling the coordination of the assignment or supervision of resources depending on various authorities, as well as for the development of strategies for the control of network overload that could be generated by the signaling related to new services.

3.4.5.1. MAS and CR

In Mir (2011), the author proposes cooperation between PUs (Primary Users) and SUs (Secondary Users), and among SUs only. Agents are deployed on user terminals to cooperate and agree on contracts governing spectrum assignment. SU agents coexist and cooperate with PU agents in an ad hoc CR environment using messages and decision-making mechanisms. Since the internal behaviors of the agents are cooperative and selfless, they can maximize the utility function of other agents without additional costs in terms of exchanged messages.

Nevertheless, resource allocation is an important challenge in CR systems. It can be realized by negotiation between secondary users (Li 2009; Qian et al. 2011). In Qian et al. (2011), the authors propose an agent-based model for spectrum negotiation in a CR network. In this model, instead of direct spectrum negotiation between PUs and SUs, a broker agent is included. This means that the PU or SU equipment does not require high intelligence, since it does not need to conduct spectrum detection or other more complicated CR tasks. The objective of this negotiation is to maximize the benefits and profits of the agents in order to satisfy the SUs. The authors propose two situations: in the first, a single broker agent dominates the network, while in the second, several agents compete.
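
The role of the broker agent can be sketched as a matching step between PU offers and SU requests; the channels, prices and the greedy cheapest-first policy below are invented simplifications, whereas the cited model maximizes the agents' profits through actual negotiation:

```python
# Sketch of the broker-based model: instead of direct PU-SU negotiation,
# a broker agent matches SU spectrum requests to PU channel offers,
# assigning each request the cheapest channel still free (toy data).

def broker_match(pu_offers, su_requests):
    """pu_offers: {channel: price}; su_requests: {su: budget}."""
    assignments = {}
    free = dict(pu_offers)
    for su, budget in su_requests.items():
        affordable = [(price, ch) for ch, price in free.items() if price <= budget]
        if affordable:
            price, ch = min(affordable)
            assignments[su] = ch
            del free[ch]                      # channel is now occupied
    return assignments

offers = {"ch1": 5, "ch2": 3, "ch3": 8}
requests = {"su1": 4, "su2": 6}
print(broker_match(offers, requests))         # {'su1': 'ch2', 'su2': 'ch1'}
```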

Xie et al. (2007) studied CR in wireless local area networks (WLANs) and the possibility of introducing agent technology; in other words, they try to solve the problem of radio resource allocation by managing WLAN resources in a decentralized environment by means of a MAS. For this purpose, they propose an agent-based approach for sharing information and distributing decisions among multiple WLANs.

In Amraoui (2015), a multiagent architecture is proposed involving three levels: the physical level, where the author makes several remarks on the type of terminal used; the cognitive level, where a MAS-based modified cognition cycle is proposed; and the behavioral level, where various potential behaviors of the agents during spectrum negotiation are studied.

3.4.5.2. MAS and transport networks

Transposing agent-oriented notions to the transport field is consistent with the characteristics of the two fields: autonomy, distributed behaviors and partially observable environments are in fact present in both. Existing approaches focus on the properties of MASs: emergence, self-organization and cooperation. Moreover, the evolution of a large number of vehicles on a shared road network corresponds perfectly to the resource conflict problems studied via MASs (Guériau 2016).

Intelligent transport systems offer a set of tools relying on the latest advances in computation power, communication and perception in order to produce a supervised, integrated, universal and accessible system. MASs seem to be the best path to improvement, both in terms of interaction and of distributed computation.

Intersection management can also be improved by the use of MAS. Indeed, the agents can be deployed at the intersection level or for each light, and their cooperation enables the optimization of cycles in response to the demand.
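
Such cooperation can be sketched with a simple local policy; the approaches, queue figures and the "serve the longest queue" rule are invented for illustration:

```python
# Sketch of agent-based intersection control: each traffic-light agent
# reports the queue length on its approach, and the intersection grants
# the next green phase to the most congested approach (toy policy).

class LightAgent:
    def __init__(self, approach):
        self.approach = approach
        self.queue = 0

    def sense(self, queue_length):            # e.g. from an induction loop
        self.queue = queue_length

def next_green(agents):
    """Cooperative decision: serve the most congested approach first."""
    return max(agents, key=lambda a: a.queue).approach

agents = [LightAgent("north"), LightAgent("east"), LightAgent("south")]
agents[0].sense(4)
agents[1].sense(9)                            # east is the most congested
agents[2].sense(2)
print(next_green(agents))                     # east
```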

MAS can also prove practical for traffic control and congestion management, when agents can cooperate and negotiate to provide better road traffic management and a more intelligent transport network.

3.4.6. IoT and networks

In order to be able to adapt in real time to a given situation, connected devices must understand the value of the information they gather and learn from each other. Thanks to AI and analytics, they then make adequate decisions within autonomous systems.

Nowadays, the objects we use in our homes, offices, hospitals and factories are in the connection phase. By providing them with autonomous learning and personalization capacities, AI will bring them into a disruption phase.

The combination of IoT and AI leads to an actual change of perspective. Sectors such as security, health, industry and energy are potential beneficiaries of the advantages offered by this combination.

An example of application of the AI + IoT combination in the field of security is that of security software embedded in connected cameras featuring computer vision, which can identify a person in a crowd and alert the competent authorities thanks to shape recognition techniques. In particular, the software can count the number of people in a room, identify a person with a criminal record or a wanted person, authorize a person's access to confidential areas, etc.

In the health field, connected objects combined with AI have a place in most situations. For example, thanks to a connected camera or a connected pair of glasses, they enable the identification of disease symptoms. Connected sensors can transmit a patient's vital data to a platform featuring AI, which calls a nurse or a care assistant if needed and is able to provide all the details of the event.

In Srinidhi et al. (2018), the authors review several AI-based algorithms for IoT network optimization. They cover several types of algorithms, such as genetic algorithms, which use multiobjective criteria to select the best sensors while maximizing available storage space.
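
As a rough illustration of the genetic-algorithm approach (with invented coverage and storage figures, and a single-objective fitness in place of the multiobjective criteria used in the paper):

```python
# Illustrative genetic-algorithm sketch for the sensor-selection problem:
# pick a subset of sensors maximizing coverage while keeping total
# storage use under a budget (all data and the fitness are invented).

import random
random.seed(0)                                 # reproducible toy run

SENSORS = [                                    # (coverage, storage_cost)
    (5, 2), (3, 1), (8, 4), (2, 1), (7, 3), (4, 2)
]
BUDGET = 7

def fitness(bits):
    cov = sum(s[0] for s, b in zip(SENSORS, bits) if b)
    cost = sum(s[1] for s, b in zip(SENSORS, bits) if b)
    return cov if cost <= BUDGET else 0        # infeasible => worthless

def evolve(pop_size=20, generations=40):
    pop = [[random.randint(0, 1) for _ in SENSORS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]       # selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(SENSORS))
            child = a[:cut] + b[cut:]          # one-point crossover
            if random.random() < 0.2:          # mutation
                i = random.randrange(len(SENSORS))
                child[i] ^= 1
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```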

3.5. Conclusion

With the acceleration and transformation of telecommunication networks driven by new technologies, operators must improve the efficiency of their services while reducing costs.

In our opinion, the use of AI and of data science can improve network performances, reliability and security.

Indeed, with AI, the network is able to react automatically to any significant overload that may occur. A network will be able to detect an overload and automatically create the number of virtual machines required for the amount of input traffic.
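
This overload reaction can be sketched as a simple autoscaling rule; the per-VM capacity and traffic figures below are assumptions for the example:

```python
# Sketch of the overload reaction described above: from the incoming
# traffic level, compute how many virtual machines to run, given an
# assumed per-VM capacity, and scale the pool up or down.

import math

def required_vms(traffic_mbps, vm_capacity_mbps=500, minimum=1):
    return max(minimum, math.ceil(traffic_mbps / vm_capacity_mbps))

class VmPool:
    def __init__(self):
        self.running = 1

    def autoscale(self, traffic_mbps):
        target = required_vms(traffic_mbps)
        if target > self.running:
            action = "scale_up"
        elif target < self.running:
            action = "scale_down"
        else:
            action = "steady"
        self.running = target
        return action

pool = VmPool()
print(pool.autoscale(2200))                   # scale_up -> 5 VMs (2200/500)
print(pool.autoscale(400))                    # scale_down -> 1 VM
```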

Fault diagnosis in a large-scale telecommunications network is a complex problem of interest to both telecommunications operators and the AI community. This problem is the object of many research works, and various approaches have been proposed relying on ESs, CBR systems and machine learning-based systems.

In our view, future telecommunications networks will become fully autonomous and will no longer depend on human intervention, thanks to AI and especially to big data technologies.

3.6. References

Amraoui, A. (2015). Vers une architecture multiagents pour la radio cognitive opportuniste. PhD Thesis, University of Tlemcen, Algeria.

Benmammar, B. (2017). Optimisation de la QoS dans un réseau de radio cognitive en utilisant la métaheuristique SFLA (Shuffled Frog Leaping Algorithm) [Online]. arXiv:1703.07565.

Benmammar, B., Amraoui, A., and Baghli, W. (2012). Performance improvement of wireless link reliability in the context of cognitive radio. International Journal of Computer Science and Network Security, 12(1), 15–22.

Cigref (2018). Cybersécurité: visualiser, comprendre, décider [Online]. Report. Available at: https://www.cigref.fr/wp/wp-content/uploads/2018/10/Cigref-Rapport-Cybersecurite-Visualiser-Comprendre-Decider-Octobre-2018.pdf.

Decourt, O. (2018). Les réseaux de neurones expliqués à ma fille [Online]. Available at: https://od-datamining.com/knwbase/les-reseaux-de-neurones-expliques-a-ma-fille/.

Fan, W., Bouguila, N., and Ziou, D. (2012). Variational learning for finite Dirichlet mixture models and applications. IEEE Transactions on Neural Networks and Learning Systems, 23(5), 762–774.

Ferber, J. and Perrot, J.-F. (1995). Les systèmes multiagents: vers une intelligence collective. InterEditions, Paris.

Guériau, M. (2016). Systèmes multiagents, auto-organisation et contrôle par apprentissage constructiviste pour la modélisation et la régulation dans les systèmes coopératifs de trafic. PhD Thesis, University Claude Bernard Lyon 1.

Hounkonnou, C. (2013). Active self-diagnosis in telecommunication networks. PhD Thesis, European University of Brittany and University of Rennes 1.

Kaushik, A., Barnela, M., Khanna, S., and Kumar, H. (2011). A Novel Expert System for PC Network Troubleshooting and Maintenance. International Journal of Advanced Research in Computer Science (IJARCS), 2(3), 201–203.

Steinder, M. and Sethi, A.S. (2004). A survey of fault localization techniques in computer networks. Science of Computer Programming, 53(2), 165–194.

Li, H. (2009). Multi-agent Q-learning of channel selection in multi-user cognitive radio systems: A two by two case. IEEE International Conference on Systems, Man and Cybernetics, San Antonio, Texas, USA, 1893–1898.

Qian, L., Ye, F., Gao, L., Gan, X., Chu, T., Tian, X., Wang, X., and Guizani, M. (2011). Spectrum trading in cognitive radio networks: an agent-based model under demand uncertainty. IEEE Transactions on Communications, 59(11), 3192–3203.

Lor, K.W.E. (1993). A network diagnostic expert system for Acculink multiplexers based on a general network diagnostic scheme. Proceedings of the 3rd IFIP/IEEE International Symposium on Integrated Network Management. San Francisco, USA.

Melchiors, C. and Tarouco, L.M.R. (1999). Fault management in computer networks using case-based reasoning: DUMBO system. International Conference on Case-Based Reasoning. Springer, Berlin, Heidelberg, 510–524.

Mell, P. and Grance, T. (2011). The NIST definition of cloud computing [Online]. Available at: https://nvlpubs.nist.gov/nistpubs/Legacy/SP/nistspecialpublication800-145.pdf.

Mir, U. (2011). Utilization of cooperative multiagent systems for spectrum sharing in cognitive radio networks. PhD Thesis, University of Technology of Troyes.

Nowicki, K. and Uhl, T. (2017). QoS/QoE in the heterogeneous Internet of Things (IoT). In Beyond the Internet of Things, Batalla, J.M., Mastorakis, G., Mavromoustakis, C.X., and Pallis, E. (eds). Springer, Basel, 165–196.

Raja’a, A.K. and Jassim, R.O. (2014). Expert system to troubleshoot the wireless connection problems. International Journal of Science, Engineering and Computer Technology, 4(8), 238.

Shiffman, D. (2012). The Nature of Code: Simulating Natural Systems with Processing. Shannon Fry, USA.

Srinidhi, N.N., Dilip Kumar, S.M., and Venugopal, K.R. (2018). Network optimizations in the Internet of Things: A review. Engineering Science and Technology, 22(1), 1–21.

Xie, J., Howitt, I., and Raja, A. (2007). Cognitive radio resource management using multiagent systems. 4th IEEE Consumer Communications and Networking Conference. Las Vegas, USA.

Yu, W., He, H., and Zhang, N. (2009). Advances in neural networks. 6th International Symposium on Neural Networks (ISNN 2009). 26–29 May 2009. Wuhan, China.

1 Cisco Visual Networking Index, Forecast and Methodology, 2016-2021.

2 According to the 2017 IPSOS reference document and the 2018 PwC report, available at: https://www.pwc.fr/fr/assets/files/pdf/2018/10/pwc-barometre-cybersecurite-septembre-2018.pdf.
