4
Management and Orchestration of Network Slices in 5G, Fog, Edge, and Clouds

Adel Nadjaran Toosi, Redowan Mahmud, Qinghua Chi, and Rajkumar Buyya

4.1 Introduction

The major digital transformation happening around the world has introduced a wide variety of applications and services, ranging from smart cities and vehicle‐to‐vehicle (V2V) communication to virtual reality (VR)/augmented reality (AR) and remote medical surgery. Designing and implementing a network that can simultaneously provide the essential connectivity and performance requirements of all these applications with a single set of network functions is not only massively complex but also prohibitively expensive. The 5G infrastructure public–private partnership (5G‐PPP) has identified various use case families of enhanced mobile broadband (eMBB), massive machine‐type communications (mMTC), and ultra‐reliable low‐latency communication (uRLLC), or critical communications, that would simultaneously run on and share the 5G physical multi‐service network [1]. These applications have very different quality of service (QoS) requirements and transmission characteristics. For instance, video‐on‐demand streaming applications in the eMBB category require very high bandwidth and transmit large amounts of content. By contrast, mMTC applications, such as the Internet of Things (IoT), typically involve a multitude of low‐throughput devices. The differences between these use cases show that the one‐size‐fits‐all approach of traditional networks cannot satisfy the differing requirements of all these vertical services.

A cost‐efficient solution toward meeting these requirements is slicing the physical network into multiple isolated logical networks. Similar to the server virtualization technology used successfully in the cloud computing era, network slicing builds a form of virtualization that partitions a shared physical network infrastructure into multiple end‐to‐end logical networks, allowing for traffic grouping and the isolation of tenants' traffic. Network slicing is considered a critical enabler of the 5G network, where vertical service providers can flexibly deploy their applications and services based on their service requirements. In other words, network slicing provides a network‐as‐a‐service (NaaS) model, which allows service providers to build and set up their own networking infrastructure according to their demands and customize it for diverse and sophisticated scenarios.

Software‐defined networking (SDN) and network function virtualization (NFV) can serve as building blocks of network slicing by facilitating network programmability and virtualization. SDN is a promising approach to computer networking that separates the tightly coupled control and data planes of traditional networking devices. Thanks to this separation, SDN can provide a logically centralized view of the network in a single point of management to run network control functions. NFV is another trend in networking gaining momentum quickly, with the aim of transferring network functions from proprietary hardware to software‐based applications executing on general‐purpose hardware. NFV intends to reduce the cost and increase the elasticity of network functions by building virtual network functions (VNFs) that are connected or chained together to build communication services.
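To make the idea of chaining VNFs into a communication service concrete, the following is a minimal, hypothetical sketch in Python. It is not the API of any real NFV platform; the function names, the packet-as-dict model, and the two toy VNFs (a firewall and a NAT) are all illustrative assumptions.

```python
# Minimal sketch of VNF chaining: a communication service is modeled as an
# ordered chain of software-based network functions applied to a packet.
# All names and the packet model are illustrative, not a real NFV API.

def make_vnf(name, transform):
    """A VNF is modeled as a named function applied to a packet (a dict)."""
    return {"name": name, "apply": transform}

def run_chain(chain, packet):
    """Pass a packet through each VNF in order, as a service chain would."""
    for vnf in chain:
        packet = vnf["apply"](packet)
    return packet

# Two toy VNFs: a firewall that drops traffic to a blocked port, and a NAT
# that rewrites the source address of packets that survive the firewall.
firewall = make_vnf("firewall",
                    lambda p: None if p and p["dst_port"] in {23} else p)
nat = make_vnf("nat",
               lambda p: {**p, "src": "203.0.113.1"} if p else None)

service = [firewall, nat]  # chained VNFs forming a communication service
out = run_chain(service, {"src": "10.0.0.5", "dst_port": 80})
dropped = run_chain(service, {"src": "10.0.0.5", "dst_port": 23})
```

The key point of the sketch is that the service is just an ordered list: reordering, inserting, or removing VNFs changes the service without touching any hardware.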

With this in mind, in this chapter we review the state‐of‐the‐art literature on network slicing in 5G, edge/fog, and cloud computing, and identify the spectrum of challenges and obstacles that must be addressed to fully realize this concept. We begin with a brief introduction to 5G, edge/fog, and clouds and their interplay. Then, we outline the 5G vision for network slicing and identify a generic framework for 5G network slicing. We then review research and projects related to network slicing in the cloud computing context, with a focus on SDN and NFV technologies. Further, we explore network slicing advances in emerging fog and edge computing. This leads us to identify the key unresolved challenges of network slicing within these platforms. Based on this review, we discuss the gaps and trends toward realizing the network slicing vision in fog, edge, and software‐defined cloud computing. Finally, we conclude the chapter.

Table 4.1 lists acronyms and abbreviations referenced throughout the chapter.

Table 4.1 Acronyms and abbreviations.

5G 5th generation mobile networks or 5th generation wireless systems
AR augmented reality
BBU baseband unit
CRAN cloud radio access network
eMBB enhanced mobile broadband
FRAN fog radio access network
IoT Internet of Things
MEC mobile edge computing
mMTC massive machine‐type communications
NaaS network‐as‐a‐service
NAT network address translation
NFaaS network function as a service
NFV network function virtualization
QoS quality of service
RRH remote radio head
SDC software‐defined clouds
SDN software‐defined networking
SFC service function chaining
SLA service level agreement
uRLLC ultra‐reliable low‐latency communication
V2V vehicle to vehicle
VM virtual machine
VNF virtualized network function
VPN virtual private network
VR virtual reality

4.2 Background

4.2.1 5G

The renovation of telecommunications standards is a continuous process. Following this practice, the 5th generation mobile network or 5th generation wireless system, commonly called 5G, has been proposed as the next telecommunications standard beyond the current 4G/IMT‐Advanced standards [2]. The wireless networking architecture of 5G follows the IEEE 802.11ac wireless networking standard and operates on millimeter‐wave bands. It can exploit extremely high frequencies (EHF) from 30 to 300 gigahertz (GHz), which ultimately offer higher data capacity and low‐latency communication [3].

The formalization of 5G is still in its early stages and is expected to mature by 2020. However, the main intentions of 5G include enabling Gbps data rates in a real network with minimal round‐trip latency and offering long‐term communication among a large number of connected devices through a highly fault‐tolerant networking architecture [1]. 5G also targets improved energy usage, both for the network and for the connected devices. Moreover, it is anticipated that 5G will be more flexible, dynamic, and manageable than the previous generations [4].

4.2.2 Cloud Computing

Cloud computing is expected to be an inseparable part of 5G services, providing an excellent backend for applications running on accessing devices. During the last decade, the cloud has evolved into a successful computing paradigm for delivering on‐demand services over the Internet. Cloud data centers adopted virtualization technology for the efficient management of resources and services. Advances in server virtualization have contributed to the cost‐efficient management of computing resources in cloud data centers.

Recently, thanks to advances in SDN and NFV, the virtualization notion in cloud data centers has been extended to all resources, including compute, storage, and networks, forming the concept of software‐defined clouds (SDC) [5]. SDC aims to utilize advances in the areas of cloud computing, system virtualization, SDN, and NFV to enhance resource management in data centers. In addition, the cloud is regarded as the foundation block of the cloud radio access network (CRAN), an emerging cellular framework that aims to meet ever‐growing end‐user demands in 5G. In CRAN, traditional base stations are split into radio and baseband parts. The radio part resides in the base station in the form of the remote radio head (RRH) unit, and the baseband part is placed in the cloud, creating a centralized and virtualized baseband unit (BBU) pool for different base stations.

4.2.3 Mobile Edge Computing (MEC)

Among the user‐proximate computing paradigms, MEC is considered one of the key enablers of 5G. Unlike CRAN [6], in MEC, base stations and access points are equipped with edge servers that take care of 5G‐related issues at the edge network. MEC facilitates a computationally enriched, distributed RAN architecture on top of LTE‐based networking. Ongoing research on MEC targets real‐time context awareness [7], dynamic computation offloading [8], energy efficiency [9], and multimedia caching [10] for 5G networking.

4.2.4 Edge and Fog Computing

Edge and fog computing were coined to complement the remote cloud in meeting the service demands of a large number of geographically distributed IoT devices. In edge computing, the embedded computation capabilities of IoT devices, or local resources accessed via ad‐hoc networking, are used to process IoT data. Usually, the edge computing paradigm is well suited to performing light computational tasks and does not probe the global Internet unless the intervention of the remote (core) cloud is required. However, not all IoT devices are computationally capable, nor are local edge resources always rich enough to execute different large‐scale IoT applications simultaneously. In such cases, executing latency‐sensitive IoT applications in the remote cloud can degrade QoS significantly [11]. Moreover, the huge amount of IoT workload sent to the remote cloud can flood the Internet and congest the network. In response to these challenges, fog computing offers infrastructure and software services through distributed fog nodes to execute IoT applications within the network [12].

In fog computing, traditional networking devices such as routers, switches, set‐top boxes, and proxy servers, along with dedicated nano‐servers and micro‐data centers, can act as fog nodes and create wide‐area cloud‐like services in either an independent or a clustered manner [13]. Mobile edge servers or cloudlets [14] can also be regarded as fog nodes that conduct their respective jobs in fog‐enabled mobile cloud computing (MCC) and MEC. In some cases, edge and fog computing are used interchangeably, although, from a broader perspective, edge is considered a subset of fog computing [15]. In both edge and fog computing, the integration of 5G has already been discussed in terms of bandwidth management during computing‐instance migration [16] and SDN‐enabled IoT resource discovery [17]. The concept of the fog radio access network (FRAN) [18] is also receiving attention from both academia and industry, where fog resources are used to create the BBU pool for the base stations.

The working principles of these computing paradigms largely depend on virtualization techniques. The alignment of 5G with different computing paradigms can therefore be analyzed through the interplay between network and resource virtualization techniques. Network slicing is one of the key features of 5G network virtualization, and computing paradigms can extend the vision of 5G network slicing into data centers and fog nodes. By the latter, we mean that the vision of network slicing can be applied to shared data center network infrastructure and fog networks to provide an end‐to‐end logical network for applications by establishing a full‐stack virtualized environment. This form of network slicing can also be expanded beyond a data center network into multiclouds or even clusters of fog nodes [19]. Whatever the extension, this creates a new set of challenges for the network, spanning wide area network (WAN) segments, cloud data centers (DCs), and fog resources.

4.3 Network Slicing in 5G

In recent years, industry and academia have undertaken numerous research initiatives to explore different aspects of 5G. Network architecture and its associated physical and MAC layer management are among the prime focuses of current 5G research. The impact of 5G on different real‐world applications, sustainability, and quality expectations is also gaining predominance in the research arena. Among the ongoing research in 5G, however, network slicing is drawing particular attention, since this distinctive feature of 5G aims at supporting diverse requirements at the finest granularity over a shared network infrastructure [20, 21].

Network slicing in 5G refers to sharing a physical network's resources among multiple virtual networks. More precisely, network slices are regarded as a set of virtualized networks on top of a physical network [22]. Network slices can be allocated to specific applications/services, use cases, or business models to meet their requirements. Each network slice can be operated independently, with its own virtual resources, topology, data traffic flow, management policies, and protocols. Network slicing usually requires implementation in an end‐to‐end manner to support the coexistence of heterogeneous systems [23].

Network slicing paves the way for customized connectivity among a high number of interconnected end‐to‐end devices. It enhances network automation and leverages the full capacity of SDN and NFV. It also helps to make the traditional networking architecture scalable according to the context. Since network slicing shares a common underlying infrastructure among multiple virtualized networks, it is considered one of the most cost‐effective ways to use network resources and reduce both capital and operational expenses [24]. Besides, it ensures that the reliability issues and limitations (e.g., congestion and security problems) of one slice do not affect the others. Network slicing also supports the isolation and protection of the data, control, and management planes, which enforces security within the network. Moreover, network slicing can be extended to multiple computing paradigms such as edge [25], fog [13], and cloud, which eventually improves their interoperability and helps bring services closer to the end user with fewer service‐level agreement (SLA) violations [26].

Apart from these benefits, however, network slicing in the current 5G context is subject to diverse challenges. Resource provisioning among multiple virtual networks is difficult to achieve, since each virtual network has a different level of resource affinity, which can change over time. Besides, mobility management and wireless resource virtualization can intensify the network slicing problems in 5G. End‐to‐end slice orchestration and management can also make network slicing complicated. Recent research in 5G network slicing mainly focuses on addressing these challenges through efficient network slicing frameworks. Extending the literature [26, 27], we depict a generic framework for 5G network slicing in Figure 4.1. The framework consists of three main layers: the infrastructure layer, the network function layer, and the service layer.


Figure 4.1 Generic 5G slicing framework.

4.3.1 Infrastructure Layer

The infrastructure layer defines the actual physical network architecture. It can expand from the edge cloud to the remote cloud through the radio access network and the core network. Different software‐defined techniques are encapsulated to facilitate resource abstraction within the core network and the radio access network. Besides, in this layer, several policies are applied to deploy, control, manage, and orchestrate the underlying infrastructure. This layer allocates resources (compute, storage, bandwidth, etc.) to network slices in such a way that the upper layers can access and handle them according to the context.

4.3.2 Network Function and Virtualization Layer

The network function and virtualization layer executes all the operations required to manage the virtual resources and the network function life cycle. It also facilitates the optimal placement of network slices onto virtual resources and the chaining of multiple slices so that they can meet the specific requirements of a particular service or application. SDN, NFV, and different virtualization techniques are considered the significant technical aspects of this layer. This layer explicitly manages the functionality of the core and local radio access networks, and it can handle both coarse‐grained and fine‐grained network functions efficiently.

4.3.3 Service and Application Layer

The service and application layer can be composed of connected vehicles, virtual reality appliances, mobile devices, and so on, each having a specific use case or business model and representing certain utility expectations of the networking infrastructure and the network functions. Based on the requirements or a high‐level description of the service or application, virtualized network functions are mapped to physical resources in such a way that the SLA for the respective application or service is not violated.

4.3.4 Slicing Management and Orchestration (MANO)

The functionality of the above layers is explicitly monitored and managed by the slicing management and orchestration layer, which has three main tasks:

  1. Create virtual network instances upon the physical network by using the functionality of the infrastructure layer.
  2. Map network functions to virtualized network instances to build a service chain with the association of network function and virtualization layer.
  3. Maintain communication between service/application and the network slicing framework to manage the life cycle of virtual network instances and dynamically adapt or scale the virtualized resources according to the changing context.
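The three MANO tasks above can be sketched as a minimal control loop: instantiate a virtual network, map functions onto it, and scale it as the observed load changes. This is a hypothetical sketch; the function names, the instance dictionary, and the 20% scaling headroom are invented for illustration and do not come from any MANO specification.

```python
# Hypothetical sketch of the three slicing MANO tasks: instantiate a virtual
# network, chain network functions onto it, and adapt its resources to load.

def create_instance(slice_name, resources):
    # Task 1: create a virtual network instance on the physical network.
    return {"slice": slice_name, "resources": dict(resources), "functions": []}

def map_functions(instance, functions):
    # Task 2: map network functions onto the instance to build a service chain.
    instance["functions"].extend(functions)
    return instance

def adapt(instance, observed_load_mbps, headroom=1.2):
    # Task 3: dynamically scale virtual resources to the changing context,
    # here by provisioning 20% above the observed load (an assumed policy).
    instance["resources"]["bandwidth"] = int(observed_load_mbps * headroom)
    return instance

inst = create_instance("mMTC-sensors", {"bandwidth": 100})
inst = map_functions(inst, ["firewall", "nat"])
inst = adapt(inst, observed_load_mbps=200)   # load grew, so the slice scales up
```

A production orchestrator would run the third task continuously against monitoring data, but the life cycle (create, chain, adapt) follows the same order.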

The logical framework of 5G network slicing is still evolving. Retaining the basic structure, extension of this framework to handle the future dynamics of network slicing can be a potential approach to further standardization of 5G.

According to Huawei's high‐level perspective of the 5G network [28], the Cloud‐Native network architecture for 5G has four characteristics:

  1. It provides cloud data center–based architecture and logically independent network slicing on the network infrastructure to support different application scenarios.
  2. It uses Cloud‐RAN to build radio access networks (RAN) to provide a substantial number of connections and implement the on‐demand deployment of RAN functions required by 5G.
  3. It provides simpler core network architecture and provides on‐demand configuration of network functions via user and control plane separation, unified database management, and component‐based functions.
  4. It implements the network slicing service in an automated manner to reduce operating expenses.

Figure 4.2 Taxonomy of network‐aware VM/VNF management in software‐defined clouds.

In the following section, we review the state‐of‐the‐art work on network slice management in the cloud computing literature. Our survey in this area can help researchers apply advances and innovations in 5G and clouds reciprocally.

4.4 Network Slicing in Software‐Defined Clouds

Virtualization technology has been the cornerstone of resource management and optimization in cloud data centers for the last decade. Many research proposals have been put forward for virtual machine (VM) placement and VM migration to improve the utilization and efficiency of both physical and virtual servers [29]. In this section, we focus on state‐of‐the‐art network‐aware VM/VNF management, in line with the aim of this chapter, i.e., network slicing management for SDCs. Figure 4.2 illustrates our proposed taxonomy of network‐aware VM/VNF management in SDCs. Our taxonomy classifies existing works based on the objective of the research, the approach used to address the problem, the exploited optimization technique, and, finally, the evaluation technique used to validate the approach. In the remaining parts of this section, we cover network slicing from three different perspectives and map them to the proposed taxonomy: network‐aware VM management, network‐aware VM migration, and VNF management.

4.4.1 Network‐Aware Virtual Machines Management

Cziva et al. [29] present an orchestration framework to exploit time‐based network information to live migrate VMs and minimize the network cost. Wang et al. [30] propose a VM placement mechanism to reduce the number of hops between communicating VMs, save energy, and balance the network load. Remedy [31] relies on SDN to monitor the state of the network and estimate the cost of VM migration. Their technique detects congested links and migrates VMs to remove congestion on those links.

Jiang et al. [32] worked on the joint VM placement and network routing problem in data centers to minimize network cost in real time. They proposed an online algorithm to optimize VM placement and data traffic routing under dynamically changing traffic loads. VMPlanner [33] also optimizes VM placement and network routing. The solution includes VM grouping, which consolidates VMs with high inter‐group traffic; VM group placement within a rack; and traffic consolidation to minimize rack traffic. Jin et al. [34] studied the joint host‐network optimization problem, formulated as an integer linear program that combines the VM placement and routing problems. Cui et al. [35] explore the joint policy‐aware and network‐aware VM migration problem and present a VM management scheme that reduces the network‐wide communication cost in data center networks while considering the policies regarding network functions and middleboxes. Table 4.2 summarizes the research projects on network‐aware VM management.
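The common thread in these works is placing communicating VMs close together in the network. The following toy greedy heuristic captures that idea: each VM is placed on the feasible host that minimizes the traffic-weighted hop distance to its already-placed peers. It is a sketch in the spirit of this line of work, not a reimplementation of any cited algorithm; the topology, traffic rates, and slot capacities are invented.

```python
# Toy greedy network-aware VM placement: put each VM on the host that
# minimizes traffic-weighted hop distance to already-placed communication
# peers. Topology, rates, and capacities below are illustrative only.

def place_vms(vms, traffic, hosts, hops):
    """
    vms: list of VM names; traffic: {(vm_a, vm_b): rate};
    hosts: {host: free slots}; hops: {(h1, h2): hop count}.
    Returns a {vm: host} mapping built greedily (capacity assumed sufficient).
    """
    placement, free = {}, dict(hosts)
    for vm in vms:
        best, best_cost = None, None
        for h, slots in free.items():
            if slots == 0:
                continue
            # Communication cost from host h to every peer already placed.
            cost = sum(rate * hops[(h, placement[peer])]
                       for (a, b), rate in traffic.items()
                       for vm_, peer in ((a, b), (b, a))
                       if vm_ == vm and peer in placement)
            if best is None or cost < best_cost:
                best, best_cost = h, cost
        placement[vm] = best
        free[best] -= 1
    return placement

# Two hosts, two hops apart; VMs a and b exchange heavy traffic,
# so the heuristic consolidates them on the same host.
hops = {("h1", "h1"): 0, ("h1", "h2"): 2, ("h2", "h1"): 2, ("h2", "h2"): 0}
result = place_vms(["a", "b"], {("a", "b"): 10}, {"h1": 2, "h2": 2}, hops)
```

The cited works add routing, energy, and policy constraints on top of this core cost model, typically via ILP formulations or Markov approximation rather than plain greedy search.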

Table 4.2 Network‐aware virtual machines management.

Project Objectives Approach/Technique Evaluation
Cziva et al. [29] Minimization of the network communication cost VM migration – Framework Design Prototype
Wang et al. [30] Reducing the number of hops between communicating VMs and network power consumption VM placement – Heuristic Simulation
Remedy [31] Removing congestion in the network VM migration – Framework Design Simulation
Jiang et al. [32] Minimization of the network communication cost VM Placement and Migration – Heuristic (Markov approximation) Simulation
VMPlanner [33] Reducing network power consumption VM placement and traffic flow routing ‐ Heuristic Simulation
PLAN [35] Minimization of the network communication cost while meeting network policy requirements VM Placement ‐ Heuristic Prototype/Simulation

4.4.2 Network‐Aware Virtual Machine Migration Planning

A large body of literature has focused on improving the efficiency of the VM migration mechanism [36]. Bari et al. [37] propose a method for finding an efficient migration plan. They try to find a sequence of migrations that moves a group of VMs to their final destinations while minimizing total migration time. In their method, they monitor the residual bandwidth available on the links between source and destination after performing each step in the sequence. Similarly, Ghorbani et al. [38] propose an algorithm that generates an ordered list of VMs to migrate and a set of forwarding flow changes. They concentrate on imposing bandwidth guarantees on the links to ensure that link capacity is not violated during the migration. The VM migration planning problem is also tackled by Li et al. [39], who address the workload‐aware migration problem and propose methods for selecting candidate virtual machines, destination hosts, and the migration sequence. All these studies focus on the migration order of a group of VMs while taking network cost into account. Xu et al. [40] propose an interference‐aware VM live migration plan called iAware that minimizes both migration interference and co‐location interference among VMs. Table 4.3 summarizes the research projects on VM migration planning.
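The ordering problem these works solve can be illustrated with a deliberately simplified greedy planner: at each step, migrate next the pending VM whose migration path currently has the most residual bandwidth, so that each transfer finishes as quickly as possible. This is only in the spirit of the sequence-planning work cited above (the real algorithms also update residual bandwidth after each move and handle dependencies); the path identifiers and bandwidth figures are invented.

```python
# Simplified migration-sequence planning: repeatedly pick the pending
# migration whose path has the widest residual bandwidth. A real planner
# would also recompute residual bandwidth after each completed transfer.

def plan_migrations(migrations, residual_bw):
    """
    migrations: {vm: path_id}; residual_bw: {path_id: Mbps}.
    Returns the order in which the VMs should be migrated.
    """
    order, pending = [], dict(migrations)
    while pending:
        # Greedy step: widest residual path first.
        vm = max(pending, key=lambda v: residual_bw[pending[v]])
        order.append(vm)
        pending.pop(vm)
    return order

# vm2 travels over the 400 Mbps path, so it is scheduled before the two
# VMs sharing the congested 100 Mbps path.
order = plan_migrations({"vm1": "pA", "vm2": "pB", "vm3": "pA"},
                        {"pA": 100, "pB": 400})
```

The essential insight carried over from the literature is that migration order matters: a bad sequence can saturate links and stretch the total migration time even when every individual move is feasible.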

Table 4.3 Virtual machine migration planning.

Project Objectives Approach/Technique Evaluation
Bari et al. [37] Finding sequence of migrations while migration time is minimized VM migration – Heuristic Simulation
Ghorbani et al. [38] Finding sequence of migrations while imposing bandwidth guarantees VM migration – Heuristic Simulation
Li et al. [39] Finding sequence of migrations and destination hosts to balance the load VM migration – Heuristic Simulation
iAware [40] Minimization of migration and co‐location interference among VMs VM migration – Heuristic Prototype/Simulation

4.4.3 Virtual Network Functions Management

NFV is an emerging paradigm in which network functions such as firewalls, network address translation (NAT), and virtual private networks (VPNs) are virtualized and divided into building blocks called virtualized network functions (VNFs). VNFs are often chained together into service function chains (SFCs) to deliver a required network functionality. Han et al. [41] present a comprehensive survey of the key challenges and technical requirements of NFV, along with an architectural framework for NFV. They focus on the efficient instantiation, placement, and migration of VNFs, and on network performance.

VNF‐P is a model proposed by Moens and Turck [42] for the efficient placement of VNFs. They consider an NFV burst scenario in a hybrid environment, in which the base demand for a network function service is handled by physical resources while the extra load is handled by virtual service instances. Cloud4NFV [43] is a platform following the NFV standards of the European Telecommunications Standards Institute (ETSI) to build network function as a service on top of a cloud platform. Its VNF orchestrator exposes RESTful APIs that allow VNF deployment, while a cloud platform such as OpenStack supports the management of the virtual infrastructure in the background. vConductor [44] is another NFV management system, proposed by Shen et al. for end‐to‐end virtual network services. vConductor has simple graphical user interfaces (GUIs) for the automatic provisioning of virtual network services and supports the management of VNFs and existing physical network functions. As part of vConductor, Yoshida et al. [45] proposed MORSA, a multi‐objective genetic algorithm for placing the virtual machines (VMs) that build the NFV infrastructure in the presence of conflicting objectives involving stakeholders such as users, cloud providers, and telecommunication network operators.

A service chain is a series of VMs hosting VNFs in a designated order, with a flow going through them sequentially to provide the desired network functionality. Tabular VM migration (TVM), proposed in [46], aims at reducing the number of hops (network elements) in service chains of network functions in cloud data centers. It uses VM migration to reduce the number of hops a flow must traverse in order to satisfy SLAs. SLA‐driven ordered variable‐width windowing (SOVWin) is a heuristic proposed by Pai et al. [47] to address the same problem, but via initial static placement. Similarly, Clayman et al. [48] propose an orchestrator for the automated placement of VNFs across the resources.

The EU‐funded T‐NOVA project [49] aims to realize the NFaaS concept. It has designed and implemented integrated management and orchestration platforms for the automated provisioning, management, monitoring, and optimization of VNFs. UNIFY [50] is another EU‐funded FP7 project, aimed at supporting automated, dynamic service creation based on a fine‐granular SFC model, SDN, and cloud virtualization techniques. For more details on SFC, interested readers are referred to the literature survey by Medhat et al. [51]. Table 4.4 summarizes the state‐of‐the‐art projects on VNF management.

Table 4.4 Virtual network functions management projects.

Project Objectives Approach/Technique
VNF‐P Handling burst in network services demand while minimizing the number of servers Resource allocation ‐ Integer linear programming (ILP)
Cloud4NFV Providing network function as a service Service provisioning – Framework design
vConductor Virtual network services provisioning and management Service provisioning – Framework design
MORSA Multi objective placement of virtual services Placement ‐ Multi‐objective genetic algorithm
TVM Reducing number of hops in service chain VNF migration ‐ heuristic
SOVWin Increasing user requests acceptance rate and minimization of SLA violation VNF placement ‐ heuristic
Clayman et al. Providing automatic placement of the virtual nodes VNF placement ‐ heuristic
T‐NOVA Building a marketplace for VNF Marketplace – framework design
UNIFY Automated, dynamic service creation and service function chaining Service provisioning– framework design

4.5 Network Slicing Management in Edge and Fog

Fog computing is a new trend in cloud computing that attempts to address the quality of service requirements of applications requiring real‐time and low‐latency processing. While fog acts as a middle layer between edge and core clouds to serve applications close to the data source, core cloud data centers provide massive data storage, heavy‐duty computation, or wide area connectivity for the application.

One of the key visions of fog computing is to add compute capabilities, or general‐purpose computing, to edge network devices such as mobile base stations, gateways, and routers. On the other hand, SDN and NFV play key roles in prospective solutions to facilitate the efficient management and orchestration of network services. Despite the natural synergy and affinity between these technologies, little research exists on the integration of fog/edge computing and SDN/NFV, as both are still in their infancy. In our view, the interaction between SDN/NFV and fog/edge computing is crucial for emerging applications in IoT, 5G, and stream analytics. However, the scope and requirements of such interaction are still an open problem. In the following, we provide an overview of the state of the art within this context.

Lingen et al. [52] define a model‐driven and service‐centric architecture that addresses the technical challenges of integrating NFV, fog, and 5G/MEC. They introduce an open architecture based on the NFV MANO framework proposed by the European Telecommunications Standards Institute (ETSI) and aligned with the OpenFog Consortium (OFC) reference architecture, which offers uniform management of IoT services spanning from the cloud to the edge. A two‐layer abstraction model, along with IoT‐specific modules and an enhanced NFV MANO architecture, is proposed to integrate cloud, network, and fog. As a pilot study, they present two use cases involving the physical security of fog nodes and sensor telemetry through street cabinets in the city of Barcelona.

Truong et al. [53] are among the first to propose an SDN‐based architecture to support fog computing. They identify the required components and specify their roles in the system, and they show how the system can provide services in the context of vehicular ad‐hoc networks (VANETs). They demonstrate the benefits of their proposed architecture using two use cases in data streaming and lane‐change assistance services. In their architecture, the central network view of the SDN controller is used to manage resources and services and to optimize their migration and replication.

Bruschi et al. [54] propose a network slicing scheme for supporting multidomain fog/cloud services. Their SDN‐based scheme builds an overlay network for geographically distributed Internet services using non‐overlapping OpenFlow rules. Their experimental results show that the number of unicast forwarding rules installed in the overlay network drops significantly compared to the fully meshed and OpenStack cases.
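The "non-overlapping rules" idea can be illustrated with a small pairwise check: two match rules overlap (i.e., some packet could match both, breaking slice isolation) unless at least one field that both rules specify takes different values. This is a deliberately simplified sketch, not the scheme of Bruschi et al.: the field names are assumptions, wildcards are modeled as absent fields, and prefix comparison is reduced to exact-match equality.

```python
# Simplified overlap check for OpenFlow-style match rules used to isolate
# slices. Wildcarded fields (absent keys) match anything; a field present
# in both rules with different values separates the two slices. Prefixes
# are compared by string equality here, a simplification of real matching.

FIELDS = ("src_prefix", "dst_prefix", "vlan")

def overlaps(rule_a, rule_b):
    """True if some packet could match both rules (slice isolation broken)."""
    for f in FIELDS:
        a, b = rule_a.get(f), rule_b.get(f)
        if a is not None and b is not None and a != b:
            return False  # this field separates the two slices
    return True

slice1 = {"vlan": 10, "dst_prefix": "10.1.0.0/16"}
slice2 = {"vlan": 20}                    # disjoint: different VLAN tag
slice3 = {"dst_prefix": "10.1.0.0/16"}   # VLAN wildcarded: could collide with slice1
```

A slicing controller would run such a check before installing a new slice's rules, rejecting (or rewriting) any rule set that could capture another tenant's traffic.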

Inspired by the Open Network Operating System (ONOS) SDN controller, Choi et al. [55] propose a fog operating system architecture called FogOS for IoT services. They identify four main challenges of fog computing:

  1. Scalability for handling a significant number of IoT devices,
  2. Complex inter‐networking caused by diverse forms of connectivity, e.g., various radio access technologies,
  3. Dynamics and adaptation in topology and quality of service (QoS) requirements, and
  4. Diversity and heterogeneity in communications, sensors, storage, and computing powers, etc.

Based on these challenges, their proposed architecture consists of four main components:

  1. Service and device abstraction
  2. Resource management
  3. Application management
  4. Edge resource: registration, ID/addressing, and control interface

They also present a preliminary proof‐of‐concept of their system for a drone‐based surveillance service.

In a recent work, Diro et al. [56] propose a mixed SDN and fog architecture that prioritizes critical network flows while maintaining fairness among the other flows in fog‐to‐things communication, aiming to satisfy the QoS requirements of heterogeneous IoT applications. They target QoS and performance measures such as packet delay, packet loss, and throughput. Results show that the proposed method serves critical and urgent flows more efficiently while still allocating network slices to the other flow classes.
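A capped‐priority bandwidth split illustrates the general idea of privileging critical flows while preserving some fairness for the rest. This is a toy policy with made‐up parameters, not the differential flow space allocation scheme of Diro et al.:

```python
def allocate_bandwidth(flows, capacity, critical_share=0.6):
    """Serve critical flows first, up to a capped share of link capacity,
    then split the remainder among best-effort flows in proportion to
    their demand.

    flows: list of (name, demand, is_critical) tuples.
    Returns {name: allocated bandwidth}.
    """
    critical = [f for f in flows if f[2]]
    best_effort = [f for f in flows if not f[2]]
    alloc = {}
    budget = capacity * critical_share          # cap on priority traffic
    for name, demand, _ in critical:
        alloc[name] = min(demand, budget)
        budget -= alloc[name]
    remaining = capacity - sum(alloc.values())  # leftover for fairness
    total_demand = sum(d for _, d, _ in best_effort) or 1
    for name, demand, _ in best_effort:
        alloc[name] = remaining * demand / total_demand
    return alloc
```

The cap keeps critical flows from starving the best‐effort classes, which is the fairness concern the paragraph above raises.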

4.6 Future Research Directions

In this section, we discuss open issues in software‐defined clouds and edge computing environments along with future directions.

4.6.1 Software‐Defined Clouds

Our survey of network slicing management and orchestration in SDC shows that the community recognizes the problem of joint provisioning of hosts and network resources very well. Earlier research devoted a vast amount of attention to cost/energy optimization focusing only on either the host [57] or the network [58], not both. However, it is essential for the management component of the system to take both network and host costs into account at the same time; otherwise, optimizing one can exacerbate the situation for the other.

To address this issue, many research proposals have focused on joint host and network resource management. However, most of the proposed approaches suffer from high computational complexity or are suboptimal. Therefore, it is important to develop algorithms that manage joint host and network resource provisioning and scheduling. In joint host and network resource management and orchestration, two conditions must be satisfied: finding the minimum subset of hosts and network resources that can handle a given workload, and meeting SLA and users' QoS requirements (e.g., latency). The problem becomes even more sophisticated when the SDC supports VNFs and SFC.
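The joint provisioning problem can be illustrated with a toy greedy heuristic that scores each feasible host by combined host cost and network cost to the workload's traffic source. This is only a sketch of the idea, not an optimal or published algorithm, and all data structures and names are assumptions:

```python
def joint_placement(workloads, hosts, link_cost):
    """Greedily place each workload on the feasible host that minimises
    host cost plus network cost to the workload's traffic source.

    workloads: list of (name, cores, traffic_source) tuples.
    hosts: {host_id: {"cpu": free_cores, "cost": cost_per_core}} (mutated).
    link_cost: {(traffic_source, host_id): network cost}.
    """
    placement = {}
    for name, cores, source in workloads:
        candidates = [
            (spec["cost"] * cores
             + link_cost.get((source, host_id), float("inf")), host_id)
            for host_id, spec in hosts.items()
            if spec["cpu"] >= cores
        ]
        if not candidates:
            raise RuntimeError(f"no feasible host for {name}")
        _, best = min(candidates)
        hosts[best]["cpu"] -= cores  # reserve capacity for later workloads
        placement[name] = best
    return placement
```

Scoring hosts by a single combined metric is exactly the coupling the text calls for: a cheap host far from the traffic source can lose to a pricier host on a short path.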

SFC is a hot topic attracting a significant amount of attention from the community. However, little attention has been paid to VNF placement that meets the QoS requirements of the applications. PLAN [35] aims to minimize network communication costs while meeting network policy requirements; however, it considers only traditional middleboxes and does not take the option of VNF migration into account. Therefore, one of the areas requiring more attention and the development of novel optimization techniques is the management and orchestration of SFCs. This has to be done in a way that optimizes the placement and migration of VNFs while minimizing SLA violations and cost/energy.

Network‐aware virtual machine management is a well‐studied area. However, the majority of works in this context consider VM migration and VM placement to optimize network costs. Traffic engineering and dynamic flow scheduling combined with the migration and placement of VMs also provide a promising direction for minimizing network communication cost. For example, the SDN, management, and orchestration modules of the system can install flow entries on the switches of the shortest path with the lowest utilization to redirect VM migration traffic to an appropriate path.
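The path‐selection step above can be sketched as a label‐setting search that, among minimum‐hop paths, prefers the one whose most utilized link is least loaded, so migration traffic avoids hotspots. The graph and utilization encodings are illustrative assumptions:

```python
import heapq

def least_loaded_shortest_path(graph, util, src, dst):
    """Among minimum-hop paths from src to dst, return the one whose most
    utilised link carries the least load.

    graph: {node: [neighbours]}; util: {(u, v): link utilisation in [0, 1]}.
    """
    # Lexicographic label: (hop count, max link utilisation on the path).
    # Both components are non-decreasing along a path, so a Dijkstra-style
    # label-setting search pops labels in optimal order.
    best = {src: (0, 0.0)}
    heap = [((0, 0.0), src, [src])]
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == dst:
            return path
        for nxt in graph.get(node, []):
            load = util.get((node, nxt), util.get((nxt, node), 0.0))
            ncost = (cost[0] + 1, max(cost[1], load))
            if nxt not in best or ncost < best[nxt]:
                best[nxt] = ncost
                heapq.heappush(heap, (ncost, nxt, path + [nxt]))
    return None
```

The returned path can then be translated into per‐switch flow entries for the migration traffic.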

The analytical modeling of SDCs has not been investigated intensely in the literature. Therefore, research is warranted that focuses on building a model based on priority networks that can be used to analyze the SDC network and to validate results from experiments conducted via simulation.

Auto‐scaling of VNFs is another area that requires more in‐depth attention from the community. VNFs providing networking functions for applications are subject to performance variation due to factors such as the load of the service or overloaded underlying hosts. Therefore, the development of auto‐scaling mechanisms that monitor the performance of the VMs hosting VNFs and adaptively add or remove VMs to satisfy the SLA requirements of the applications is of paramount importance for the management and orchestration of network slices. In fact, efficient placement of VNFs [59] on hosts near the service component producing data streams, or near the users generating requests, minimizes latency and reduces overall network cost; however, placement on a more powerful node deeper in the network might improve processing time [60]. Existing solutions mostly focus on either scaling without placement or placement without scaling. Moreover, existing auto‐scaling techniques for VNFs typically target a single network service (e.g., a firewall), while in practice auto‐scaling of VNFs must be performed in accordance with SFCs. In this context, node and link capacity limits must be considered, and the solution must maximize the benefit gained from existing hardware using techniques such as dynamic path selection. Therefore, one of the promising avenues for future research on auto‐scaling of VNFs is to explore optimal dynamic resource allocation and placement.
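A minimal threshold‐based scaler conveys the decision logic of the monitoring loop described above for a single VNF; the thresholds and the single CPU metric are illustrative assumptions, and a production scaler would act per SFC rather than per service:

```python
def autoscale(instances, cpu_samples, high=0.8, low=0.3, min_instances=1):
    """Threshold-based horizontal scaler for one VNF: add a VM replica when
    average CPU utilisation exceeds `high`, remove one when it falls below
    `low`, and never drop below `min_instances`.
    """
    avg = sum(cpu_samples) / len(cpu_samples)
    if avg > high:
        return instances + 1
    if avg < low and instances > min_instances:
        return instances - 1
    return instances
```

The dead band between `low` and `high` damps oscillation; extending the decision to chained VNFs, subject to node and link capacity, is the open problem the paragraph identifies.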

4.6.2 Edge and Fog Computing

In both edge and fog computing, the integration of 5G has so far been discussed within a very narrow scope. Although 5G network resource management and resource discovery in edge/fog computing have been investigated, many other challenging issues in this area remain unexplored. Mobility‐aware service management in 5G‐enabled fog computing, and forwarding large amounts of data from one fog node to another in real time while overcoming communication overhead, can be very difficult to ensure. In addition, due to decentralized orchestration and heterogeneity among fog nodes, the modeling, management, and provisioning of 5G network resources are not as straightforward as in other computing paradigms.

Moreover, compared to mobile edge servers, cloudlets, and cloud datacenters, the number of fog nodes and their probability of failure are very high. This can significantly obstruct the implementation of SDN (one of the foundation blocks of 5G) in fog computing. On the other hand, fog computing enables traditional networking devices to process incoming data, and with 5G this data volume can be extremely large. In such a scenario, adding more resources to traditional networking devices is costly, less secure, and hinders their inherent functionalities such as routing and packet forwarding, which in turn affects the basic commitments of the 5G network and NFV.

Furthermore, fog infrastructures can be owned by different providers, which significantly hinders the development of a generalized pricing policy for 5G‐enabled fog computing. Prioritized network slicing for forwarding latency‐sensitive IoT data can also complicate 5G‐enabled fog computing. Opportunistic scheduling and reservation of virtual network resources are difficult to implement in fog, as it deals with a large number of IoT devices whose data sensing frequency can change over time. Load balancing across different virtual networks, and QoS, can degrade significantly unless efficient monitoring is imposed. Since fog computing is a distributed computing paradigm, centralized monitoring of network resources can intensify the problem. Distributed monitoring can be an efficient solution here, although it can fail to capture the whole network context in one place; extensive research is required to solve this issue. Besides, in promoting the fault tolerance of 5G‐enabled fog computing, topology‐aware application placement, dynamic fault detection, and reactive management can play a significant role, subject to the uneven characteristics of fog nodes.

4.7 Conclusions

In this chapter, we investigated research proposals for the management and orchestration of network slices on different platforms. We discussed emerging technologies such as software‐defined networking (SDN) and NFV. We explored the 5G vision of network slicing and discussed some of the ongoing projects and studies in this area. We surveyed state‐of‐the‐art approaches to network slicing in software‐defined clouds and the application of this vision to the cloud computing context. We discussed the state‐of‐the‐art literature on network slices in emerging fog/edge computing. Finally, we identified gaps in this context and provided future directions toward the notion of network slicing.

Acknowledgments

This work is supported through the Huawei Innovation Research Program (HIRP). We also thank Wei Zhou for his comments on and support of this work.

References

  1. J. G. Andrews, S. Buzzi, W. Choi, S. V. Hanly, A. Lozano, A. C. K. Soong, and J. C. Zhang. What Will 5G Be? IEEE Journal on Selected Areas in Communications, 32(6): 1065–1082, 2014.
  2. D. Ott, N. Himayat, and S. Talwar. 5G: Transforming the User Wireless Experience. Towards 5G: Applications, Requirements and Candidate Technologies, R. Vannithamby and S. Talwar (eds.). Wiley Press, Hoboken, NJ, USA, Jan. 2017.
  3. J. Zhang, X. Ge, Q. Li, M. Guizani, and Y. Zhang. 5G millimeter‐wave antenna array: Design and challenges. IEEE Wireless Communications, 24(2): 106–112, 2017.
  4. S. Chen and J. Zhao. The requirements, challenges, and technologies for 5G of terrestrial mobile telecommunication. IEEE Communications Magazine, 52(5): 36–43, 2014.
  5. R. Buyya, R. N. Calheiros, J. Son, A. V. Dastjerdi, and Y. Yoon. Software‐defined cloud computing: Architectural elements and open challenges. In Proceedings of the 3rd International Conference on Advances in Computing, Communications and Informatics (ICACCI'14), pp. 1–12, New Delhi, India, Sept. 24–27, 2014.
  6. M. Afrin, M. A. Razzaque, I. Anjum, et al. Tradeoff between user quality‐of‐experience and service provider profit in 5G cloud radio access network. Sustainability, 9(11): 2127, 2017.
  7. S. Nunna, A. Kousaridas, M. Ibrahim, M. M. Hassan, and A. Alamri. Enabling real‐time context‐aware collaboration through 5G and mobile edge computing. In Proceedings of the 12th International Conference on Information Technology — New Generations (ITNG'15), pp. 601–605, Las Vegas, USA, April 13–15, 2015.
  8. I. Ketykó, L. Kecskés, C. Nemes, and L. Farkas. Multi‐user computation offloading as multiple knapsack problem for 5G mobile edge computing. In Proceedings of the 25th European Conference on Networks and Communications (EuCNC'16), pp. 225–229, Athens, Greece, June 27–30, 2016.
  9. K. Zhang, Y. Mao, S. Leng, Q. Zhao, L. Li, X. Peng, L. Pan, S. Maharjan, and Y. Zhang. Energy‐efficient offloading for mobile edge computing in 5G heterogeneous networks. IEEE Access, 4: 5896–5907, 2016.
  10. C. Ge, N. Wang, S. Skillman, G. Foster, and Y. Cao. QoE‐driven DASH video caching and adaptation at 5G mobile edge. In Proceedings of the 3rd ACM Conference on Information‐Centric Networking, pp. 237–242, Kyoto, Japan, Sept. 26–28, 2016.
  11. M. Afrin, R. Mahmud, and M. A. Razzaque. Real time detection of speed breakers and warning system for on‐road drivers. In Proceedings of the IEEE International WIE Conference on Electrical and Computer Engineering (WIECON‐ECE'15), pp. 495–498, Dhaka, Bangladesh, Dec. 19–20, 2015.
  12. A. V. Dastjerdi and R. Buyya. Fog computing: Helping the Internet of Things realize its potential. IEEE Computer, 49(8): 112–116, 2016.
  13. F. Bonomi, R. Milito, J. Zhu, and S. Addepalli. Fog computing and its role in the internet of things. In Proceedings of the First Edition of the MCC Workshop on Mobile Cloud Computing (MCC'12), pp. 13–16, Helsinki, Finland, Aug. 17, 2012.
  14. R. Mahmud, M. Afrin, M. A. Razzaque, M. M. Hassan, A. Alelaiwi, and M. A. AlRubaian. Maximizing quality of experience through context‐aware mobile application scheduling in Cloudlet infrastructure. Software: Practice and Experience, 46(11): 1525–1545, 2016.
  15. R. Mahmud, K. Ramamohanarao, and R. Buyya. Fog computing: A taxonomy, survey and future directions. Internet of Everything: Algorithms, Methodologies, Technologies and Perspectives. Di Martino Beniamino, Yang Laurence, Kuan‐Ching Li, and Esposito Antonio (eds.), ISBN 978‐981‐10‐5861‐5, Springer, Singapore, Oct. 2017.
  16. D. Amendola, N. Cordeschi, and E. Baccarelli. Bandwidth management VMs live migration in wireless fog computing for 5G networks. In Proceedings of the 5th IEEE International Conference on Cloud Networking (Cloudnet'16), pp. 21–26, Pisa, Italy, Oct. 3–5, 2016.
  17. M. Afrin and R. Mahmud. Software defined network‐based scalable resource discovery for Internet of Things. EAI Endorsed Transactions on Scalable Information Systems, 4(14): e4, 2017.
  18. M. Peng, S. Yan, K. Zhang, and C. Wang. Fog‐computing‐based radio access networks: issues and challenges. IEEE Network, 30(4): 46–53, 2016.
  19. R. Mahmud, F. L. Koch, and R. Buyya. Cloud‐fog interoperability in IoT‐enabled healthcare solutions. In Proceedings of the 19th International Conference on Distributed Computing and Networking (ICDCN'18), pp. 1–10, Varanasi, India, Jan. 4–7, 2018.
  20. T. D. P. Perera, D. N. K. Jayakody, S. De, and M. A. Ivanov. A survey on simultaneous wireless information and power transfer. Journal of Physics: Conference Series, 803(1): 012113, 2017.
  21. P. Pirinen. A brief overview of 5G research activities. In Proceedings of the 1st International Conference on 5G for Ubiquitous Connectivity (5GU'14), pp. 17–22, Akaslompolo, Finland, Nov. 26–28, 2014.
  22. A. Nakao, P. Du, Y. Kiriha, et al. End‐to‐end network slicing for 5G mobile networks. Journal of Information Processing, 2: 153–163, 2017.
  23. K. Samdanis, S. Wright, A. Banchs, F. Granelli, A. A. Gebremariam, T. Taleb, and M. Bagaa. 5G Network Slicing: Part 1 — Concepts, Principles, and Architectures [Guest Editorial]. IEEE Communications Magazine, 55(5): 70–71, 2017.
  24. S. Sharma, R. Miller, and A. Francini. A cloud‐native approach to 5G network slicing. IEEE Communications Magazine, 55(8): 120–127, 2017.
  25. W. Shi, J. Cao, Q. Zhang, Y. Li, and L. Xu. Edge computing: vision and challenges. IEEE Internet of Things Journal, 3(5): 637–646, 2016.
  26. X. Foukas, G. Patounas, A. Elmokashfi, and M. K. Marina. Network slicing in 5G: Survey and challenges. IEEE Communications Magazine, 55(5): 94–100, 2017.
  27. X. Li, M. Samaka, H. A. Chan, D. Bhamare, L. Gupta, C. Guo, and R. Jain. Network slicing for 5G: Challenges and opportunities. IEEE Internet Computing, 21(5): 20–27, 2017.
  28. Huawei Technologies' white paper. 5G Network Architecture: A High‐Level Perspective, http://www.huawei.com/minisite/hwmbbf16/insights/5G‐Nework‐Architecture‐Whitepaper‐en.pdf (last visited: Mar. 2018).
  29. R. Cziva, S. Jouët, D. Stapleton, F. P. Tso, and D. P. Pezaros. SDN‐based virtual machine management for cloud data centers. IEEE Transactions on Network and Service Management, 13(2): 212–225, 2016.
  30. S. H. Wang, P. P. W. Huang, C. H. P. Wen, and L. C. Wang. EQVMP: Energy‐efficient and QoS‐aware virtual machine placement for software defined datacenter networks. In Proceedings of the International Conference on Information Networking (ICOIN'14), pp. 220–225, Phuket, Thailand, Feb. 10–12, 2014.
  31. V. Mann, A. Gupta, P. Dutta, A. Vishnoi, P. Bhattacharya, R. Poddar, and A. Iyer. Remedy: Network‐aware steady state VM management for data centers. In Proceedings of the 11th International IFIP TC 6 Conference on Networking (IFIP'12), pp. 190–204, Prague, Czech Republic, May 21–25, 2012.
  32. J. W. Jiang, T. Lan, S. Ha, M. Chen, and M. Chiang. Joint VM placement and routing for data center traffic engineering. In Proceedings of the IEEE International Conference on Computer Communications (INFOCOM'12), pp. 2876–2880, Orlando, USA, March 25–30, 2012.
  33. W. Fang, X. Liang, S. Li, L. Chiaraviglio, and N. Xiong. VMPlanner: Optimizing virtual machine placement and traffic flow routing to reduce network power costs in Cloud data centers. Computer Networks, 57(1): 179–196, 2013.
  34. H. Jin, T. Cheocherngngarn, D. Levy, A. Smith, D. Pan, J. Liu, and N. Pissinou. Joint host‐network optimization for energy‐efficient data center networking. In Proceedings of the 27th IEEE International Symposium on Parallel and Distributed Processing (IPDPS'13), pp. 623–634, Boston, USA, May 20–24, 2013.
  35. L. Cui, F. P. Tso, D. P. Pezaros, W. Jia, and W. Zhao. PLAN: Joint policy‐ and network‐aware VM management for cloud data centers. IEEE Transactions on Parallel and Distributed Systems, 28(4): 1163–1175, 2017.
  36. W. Voorsluys, J. Broberg, S. Venugopal, and R. Buyya. Cost of virtual machine live migration in clouds: a performance evaluation. In Proceedings of the 1st International Conference on Cloud Computing (CloudCom'09), pp. 254–265, Beijing, China, Dec. 1–4, 2009.
  37. M. F. Bari, M. F. Zhani, Q. Zhang, R. Ahmed, and R. Boutaba. CQNCR: Optimal VM migration planning in cloud data centers. In Proceedings of the IFIP Networking Conference, pp. 1–9, Trondheim, Norway, June 2–4, 2014.
  38. S. Ghorbani and M. Caesar. Walk the line: consistent network updates with bandwidth guarantees. In Proceedings of the 1st Workshop on Hot Topics in Software Defined Networks (HotSDN'12), pp. 67–72, Helsinki, Finland, Aug. 13, 2012.
  39. X. Li, Q. He, J. Chen, and T. Yin. Informed live migration strategies of virtual machines for cluster load balancing. In Proceedings of the 8th IFIP International Conference on Network and Parallel Computing (NPC'11), pp. 111–122, Changsha, China, Oct. 21–23, 2011.
  40. F. Xu, F. Liu, L. Liu, H. Jin, B. Li, and B. Li. iAware: Making live migration of virtual machines interference‐aware in the cloud. IEEE Transactions on Computers, 63(12): 3012–3025, 2014.
  41. B. Han, V. Gopalakrishnan, L. Ji, and S. Lee. Network function virtualization: Challenges and opportunities for innovations. IEEE Communications Magazine, 53(2): 90–97, 2015.
  42. H. Moens and F. D. Turck. VNF‐P: A model for efficient placement of virtualized network functions. In Proceedings of the 10th International Conference on Network and Service Management (CNSM'14), pp. 418–423, Rio de Janeiro, Brazil, Nov. 17–21, 2014.
  43. J. Soares, M. Dias, J. Carapinha, B. Parreira, and S. Sargento. Cloud4NFV: A platform for virtual network functions. In Proceedings of the 3rd IEEE International Conference on Cloud Networking (CloudNet'14), pp. 288–293, Luxembourg, Oct. 8–10, 2014.
  44. W. Shen, M. Yoshida, T. Kawabata, et al. vConductor: An NFV management solution for realizing end‐to‐end virtual network services. In Proceedings of the 16th Asia‐Pacific Network Operations and Management Symposium (APNOMS'14), pp. 1–6, Hsinchu, Taiwan, Sept. 17–19, 2014.
  45. M. Yoshida, W. Shen, T. Kawabata, K. Minato, and W. Imajuku. MORSA: A multi‐objective resource scheduling algorithm for NFV infrastructure. In Proceedings of the 16th Asia‐Pacific Network Operations and Management Symposium (APNOMS'14), pp. 1–6, Hsinchu, Taiwan, Sept. 17–19, 2014.
  46. Y. F. Wu, Y. L. Su, and C. H. P. Wen. TVM: Tabular VM migration for reducing hop violations of service chains in cloud datacenters. In Proceedings of the IEEE International Conference on Communications (ICC'17), pp. 1–6, Paris, France, May 21–25, 2017.
  47. Y.‐M. Pai, C. H. P. Wen, and L.‐P. Tung. SLA‐driven ordered variable‐width windowing for service‐chain deployment in SDN datacenters. In Proceedings of the International Conference on Information Networking (ICOIN'17), pp. 167–172, Da Nang, Vietnam, Jan. 11–13, 2017.
  48. S. Clayman, E. Maini, A. Galis, A. Manzalini, and N. Mazzocca. The dynamic placement of virtual network functions. In Proceedings of the IEEE Network Operations and Management Symposium (NOMS'14), pp. 1–9, Krakow, Poland, May 5–9, 2014.
  49. G. Xilouris, E. Trouva, F. Lobillo, J. M. Soares, J. Carapinha, M. J. McGrath, G. Gardikis, P. Paglierani, E. Pallis, L. Zuccaro, Y. Rebahi, and A. Koutis. T‐NOVA: A marketplace for virtualized network functions. In Proceedings of the European Conference on Networks and Communications (EuCNC'14), pp. 1–5, Bologna, Italy, June 23–26, 2014.
  50. B. Sonkoly, R. Szabo, D. Jocha, J. Czentye, M. Kind, and F. J. Westphal. UNIFYing cloud and carrier network resources: an architectural view. In Proceedings of the IEEE Global Communications Conference (GLOBECOM'15), pp. 1–7, San Diego, USA, Dec. 6–10, 2015.
  51. A. M. Medhat, T. Taleb, A. Elmangoush, G. A. Carella, S. Covaci, and T. Magedanz. Service function chaining in next generation networks: state of the art and research challenges. IEEE Communications Magazine, 55(2): 216–223, 2017.
  52. F. van Lingen, M. Yannuzzi, A. Jain, R. Irons‐Mclean, O. Lluch, D. Carrera, J. L. Perez, A. Gutierrez, D. Montero, J. Marti, R. Maso, and A. J. P. Rodriguez. The unavoidable convergence of NFV, 5G, and fog: A model‐driven approach to bridge cloud and edge. IEEE Communications Magazine, 55(8): 28–35, 2017.
  53. N. B. Truong, G. M. Lee, and Y. Ghamri‐Doudane. Software defined networking‐based vehicular adhoc network with fog computing. In Proceedings of the IFIP/IEEE International Symposium on Integrated Network Management (IM'15), pp. 1202–1207, Ottawa, Canada, May 11–15, 2015.
  54. R. Bruschi, F. Davoli, P. Lago, and J. F. Pajo. A scalable SDN slicing scheme for multi‐domain fog/cloud services. In Proceedings of the IEEE Conference on Network Softwarization (NetSoft'17), pp. 1–6, Bologna, Italy, July 3–7, 2017.
  55. N. Choi, D. Kim, S. J. Lee, and Y. Yi. A fog operating system for user‐oriented IoT services: Challenges and research directions. IEEE Communications Magazine, 55(8): 44–51, 2017.
  56. A. A. Diro, H. T. Reda, and N. Chilamkurti. Differential flow space allocation scheme in SDN based fog computing for IoT applications. Journal of Ambient Intelligence and Humanized Computing, DOI: 10.1007/s12652‐017‐0677‐z.
  57. A. Beloglazov, J. Abawajy, and R. Buyya. Energy‐aware resource allocation heuristics for efficient management of data centers for cloud computing. Future Generation Computer Systems, 28(5): 755–768, 2012.
  58. B. Heller, S. Seetharaman, P. Mahadevan, Y. Yiakoumis, P. Sharma, S. Banerjee, and N. McKeown. ElasticTree: Saving energy in data center networks. In Proceedings of the 7th USENIX Conference on Networked Systems Design and Implementation (NSDI'10), pp. 249–264, San Jose, USA, April 28–30, 2010.
  59. A. Fischer, J. F. Botero, M. T. Beck, H. de Meer, and X. Hesselbach. Virtual network embedding: A survey. IEEE Communications Surveys & Tutorials, 15(4): 1888–1906, 2013.
  60. S. Dräxler, H. Karl, and Z. A. Mann. Joint optimization of scaling and placement of virtual network services. In Proceedings of the 17th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid'17), pp. 365–370, Madrid, Spain, May 14–17, 2017.
