Chapter 7. Virtual Networking Services and Application Containers

This chapter covers the following topics:

■ Virtual Networking Services

■ Virtual Application Containers

This chapter covers the following exam objectives:

■ 4.2 Describe Infrastructure Virtualization

■ 4.2.d Virtual networking services

■ 4.2.e Define Virtual Application Containers

■ 4.2.e.1 Three-tier application container

■ 4.2.e.2 Custom container

Although many data center professionals may regard networking solely as “data plumbing,” the importance of reliable information sharing continues to grow as IT establishes a stronger alignment with business.

More than ever, modern network devices can offer sophisticated functionalities with much more value to business than simply forwarding packets. In that sense, networking services have established themselves as an integral part of data centers since the blooming of Internet commerce in the 1990s.

Inhabiting the gray area between applications and network, networking services can be defined as a set of repetitive operations normally carried out by application servers (or client devices) but actually implemented on specialized network devices. The most common data center networking services are firewalls, server load balancers, and WAN accelerators. And as server virtualization has evolved, many of these services have been packaged in virtual machines with comparable performance to some physical devices.

In Chapter 6, “Infrastructure Virtualization,” you learned the fundamental principles and were introduced to multiple solutions for virtual networking. One of the most innovative solutions, Cisco Nexus 1000V, represents the company’s approach to virtual machine traffic control, with advanced features and remarkable consistency with physical network operations. Today, this virtual switch provides a comprehensive framework for virtual networking services developed both in-house and by third-party vendors.

The CLDFND exam requires awareness of the most common virtual networking services, including compute and edge firewalls, advanced routing, server load balancing, and WAN acceleration. This chapter first introduces these services, through real examples from the Cisco Nexus 1000V portfolio, and then turns its attention to virtual application containers, a concept that introduces standardization and consistency for cloud computing virtual networks.

“Do I Know This Already?” Quiz

The “Do I Know This Already?” quiz allows you to assess whether you should read this entire chapter thoroughly or jump to the “Exam Preparation Tasks” section. If you are in doubt about your answers to these questions or your own assessment of your knowledge of the topics, read the entire chapter. Table 7-1 lists the major headings in this chapter and their corresponding “Do I Know This Already?” quiz questions. You can find the answers in Appendix A, “Answers to Pre-Assessments and Quizzes.”


Table 7-1 “Do I Know This Already?” Section-to-Question Mapping

1. Which of the following is not a data center networking service?

a. ADC

b. WAN acceleration

c. Network access control

d. Firewall

e. Intrusion prevention system

2. Which of the following are enhancements of vPath over service insertion methods such as VLAN manipulation, PBR, and WCCP? (Choose all that apply.)

a. Performance

b. Service chains

c. One-arm mode

d. Policy-based forwarding

e. Traffic offload

3. Which of the following are differences between VSG and ASAv? (Choose all that apply.)

a. VSG policies can be executed inside the hypervisor kernel.

b. ASAv policies can be executed inside the hypervisor kernel.

c. ASAv must analyze every packet from a connection.

d. VSG must analyze every packet from a connection.

e. VSG supports security policies with VM attributes.

4. Which network operating system does CSR 1000V run?

a. NX-OS

b. IOS

c. IOS XR

d. IOS XE

e. ASR-OS

5. Which of the following is not a benefit applications gain from the use of ADCs?

a. Scaling

b. High-availability

c. Content switching

d. Clustering

e. Acceleration

6. Which of the following are required configuration elements when deploying server load balancing in Citrix NetScaler 1000V? (Choose all that apply.)

a. Stickiness table

b. Virtual IP address

c. Monitor

d. Servers

e. DNS

7. Which of the following is not a WAN acceleration method available on vWAAS?

a. TFC

b. Windows printing AO

c. DRE

d. PLZ

e. TFO

8. Which of the following virtual networking services support vPath? (Choose all that apply.)

a. VSG

b. CSR 1000V

c. ASAv

d. vWAAS

e. NetScaler 1000V

9. Which of the following solutions are components of Cisco Virtual Application Cloud Segmentation? (Choose all that apply.)

a. Nexus 1000V

b. PNSC

c. UCS Director

d. CSM

e. VSG

f. CSR 1000V

10. Which of the following are differences between three-tier and custom virtual application containers? (Choose all that apply.)

a. Additional security zones

b. Zone-based firewall

c. Number of application tiers

d. Use of VXLAN

e. Number of segments

Foundation Topics

Virtual Networking Services

There are many ways to provide specialized services to applications. For example, one can install agents on application servers to achieve user session authorization or encryption according to a defined security policy. However, as the number of servers increases in a data center site (especially under the influence of server virtualization), such agents may easily become an operational challenge.

If a specific service is based on open standards and requires exhaustive repetition of the same operations, a dedicated network device is probably the best way to perform that service. These specialized devices are generically called networking services.

Offering predictable performance for predefined operations, networking services are usually deployed in a centralized position in a data center network while executing their functions transparently to both application servers and clients.

The most popular data center networking services are

■ Firewalls

■ Advanced routers

■ Server load balancers (SLBs) or application delivery controllers (ADCs)

■ Wide-area network (WAN) accelerators

Over the past two decades, networking services have been deployed in different formats, such as dedicated network appliances, hardware-based device insertion modules, or additional features on a network operating system. But with the increasing adoption of server virtualization, many networking services started to be commercialized in a virtual format. Also labeled virtual networking services, the first virtual appliances were essentially a repackaging of physical appliances, with virtual machines hosting the exact same software that ran on physical networking services.

Nonetheless, new approaches were developed as more networking services leveraged the flexibility of virtual switching. Providing a firm architecture for virtual networking designs, Cisco Nexus 1000V has aggregated an enviable portfolio of virtual networking services, which will be discussed at length in the following sections.

But before you delve into these services, you must first be acquainted with some of the “old-school” techniques that are still used to insert networking services in physical structures.

Service Insertion in Physical Networks

If a networking service must be deployed for an application, the traffic exchanged between clients and servers must be guided to the network device implementing such service. As an illustration, Figure 7-1 depicts some of the most traditional solutions for traffic steering deployed in data center networks.


Figure 7-1 Traffic Steering Techniques in Physical Networks

The topology on the left in Figure 7-1 depicts a method called VLAN manipulation, where two VLANs are used to drive traffic through a networking service. This arrangement works for inline appliances, which can bridge Ethernet frames (or route IP packets) between both VLANs, allowing servicing of all traffic that uses this path. While this technique is successfully used for security services such as firewalls and intrusion prevention systems (IPSs), it may not be ideal for networking services that must only be applied to selected traffic.

Depicted in the middle topology of Figure 7-1, policy-based routing (PBR) enables networking services in “one-arm mode,” where the specialized device is not positioned as a mandatory hop between clients and servers. Although it has the advantage of not overloading the appliance with traffic that should not be serviced, this technique requires manual configuration in centralized points of the network to allow precise traffic steering. PBR is most commonly applied to server load-balancer designs.
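To make the PBR approach concrete, the following IOS-style sketch steers HTTP responses from a hypothetical server VLAN back to a one-arm SLB. All names and addresses are illustrative, and a production design would refine the matching criteria:

Example 7-1 One-Arm SLB Return Traffic with PBR (Hypothetical Addressing)

! Match traffic sourced by the web servers (10.1.20.0/24, TCP port 80)
access-list 110 permit tcp 10.1.20.0 0.0.0.255 eq www any
!
! Send matched traffic to the one-arm SLB at 10.1.10.5
route-map TO-SLB permit 10
 match ip address 110
 set ip next-hop 10.1.10.5
!
! Apply the policy on the server-facing gateway interface
interface Vlan20
 ip policy route-map TO-SLB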

In 1997, Cisco developed the Web Cache Communication Protocol (WCCP) to detour client HTTP requests to web caches, providing bandwidth savings and faster responses on a remote branch. The topology on the right in Figure 7-1 depicts an alternative WCCP design called reverse-proxy, where the web cache is not close to the client but rather located in the data center network. In this case, a strategically positioned network device (router or switch) detects incoming client HTTP traffic and, through WCCP encapsulation, steers it to the cache with the objective of offloading servers from having to send the same web objects repeatedly.

In more detail, WCCP in reverse-proxy carries out the following operations:

Step 1. The router (or switch) receives IP packets from a client.

Step 2. If the packets belong to TCP port 80 (web traffic), they are encapsulated into WCCP packets and sent to the web cache. The encapsulation guarantees that the IP packets reach the web cache in their original format and without any manual configuration on intermediary network devices. If the requested web object is present in the cache, the cache sends it to the client without bothering the web servers.

Step 3. If the requested web object is not present in the cache, the cache transparently retrieves the object from the server, sends it to the client, and stores it for future sessions.

When compared to other interception methods, WCCP offers the simplicity of configuring fewer devices to provide networking services for select applications. Although WCCP demands support on both network devices and web caches, its elegance has allowed its natural extension to other services. For example, TCP traffic steering through WCCP is now a usual service insertion technique for WAN accelerators.
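As a minimal illustration, the classic IOS configuration for the standard web-cache service requires only two elements: registering the service on the device and marking the interface where client HTTP requests arrive (the interface name here is hypothetical):

Example 7-2 Basic WCCP Web-Cache Redirection

! Enable the standard web-cache service (TCP port 80) on the router
ip wccp web-cache
!
interface GigabitEthernet0/1
 description Receives incoming client HTTP traffic
 ip wccp web-cache redirect in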

Virtual Services Data Path

Thanks to the considerable flexibility brought by server virtualization, networking services can be inserted in virtual networks with far less complexity than with the techniques explained in the prior section. Taking advantage of this flexibility, Cisco Nexus 1000V incorporates a feature called Virtual Services Data Path (vPath). In a nutshell, vPath avoids convoluted network configurations and deploys networking service insertion through port profiles.


Note

As you have learned in Chapter 6, Nexus 1000V port profiles are interface configuration templates that can be inherited by distributed virtual switch interfaces that are connected to VMs.


As a visual aid, Figure 7-2 explores the working principles behind vPath.


Figure 7-2 vPath in Action

In Figure 7-2, a special port profile is created and associated with two virtual machines connected to distinct Virtual Ethernet Modules (West VEM and East VEM). Denoted as a small circle, this port profile essentially signals to a VEM that frames to (or from) these virtual machines deserve distinctive traffic handling, rather than plain Layer 2 switching.

Afterward, the VEM encapsulates these frames into vPath packets destined to virtual networking services that can be reached through a shared Layer 2 segment (VLAN or VXLAN) or a remote IP subnet.

Figure 7-2 depicts vPath steering two frames to a virtual networking service: Frame1 (sent by VM A) and Frame2 (destined to VM D). Depending on the virtual appliance’s specialized service, these packets are processed accordingly and sent back to the VEM connected to the virtual machine to continue their original path.

Similarly to WCCP, vPath employs encapsulation to simplify traffic interception configurations. But as an enhancement, vPath introduces the concept of forwarding policies, where a virtual machine attribute (such as associated port profile) takes precedence over its pure networking characteristics (such as VLAN or IP address) to define service insertion. Furthermore, vPath enables virtual networking services to program Nexus 1000V VEMs to better serve target applications.

Knowing that examples speak much louder than abstractions, I will introduce a very illustrative vPath-enabled security service in the next section.

Cisco Virtual Security Gateway

Released in 2010, Cisco Virtual Security Gateway (VSG) brought the concept of a compute firewall to virtual networks based on Cisco Nexus 1000V. With VSG, traffic between two virtual machines can be permitted or blocked within the hypervisor kernel or, more specifically, the Virtual Ethernet Module.

But first, allow me to present the components that comprise the VSG architecture:

■ Cisco Prime Network Services Controller (PNSC): Deployed as a virtual machine, this software is responsible for the creation of security profiles, the configuration of VSG (as well as other service devices), and the establishment of a multitenant hierarchy defined by tenants, virtual data centers (vDCs), virtual applications (vApps), and application tiers.

■ Cisco Virtual Security Gateway (VSG): This virtual appliance executes the rules defined in the PNSC security profiles on traffic from (or to) VMs that belong to a PNSC tenant, vDC, vApp, or tier.

■ Cisco Virtual Supervisor Module (VSM): In the Nexus 1000V supervisor module, VSG reachability is configured and PNSC security profiles are inserted into vEthernet port profiles. As explained in Chapter 6, these port profiles generate connectivity policies that can be associated to virtual machines and, thereafter, drive VEM behavior in order to correctly steer traffic to VSG.

■ VM Manager: Actively associates connectivity policies (Port Groups in the case of VMware vSphere) to a VM network adapter card.

As you may have already noticed, these components work collaboratively to deploy the concept of a compute firewall. Further detailing this service, Figure 7-3 examines a VSG scenario at the moment when a new connection reaches a Nexus 1000V instance.


Figure 7-3 VSG Ready to Process Traffic

The following is the complete provisioning process used to build the scenario depicted in Figure 7-3:

■ The security administrator creates a VSG security profile called “WEB-SP” in PNSC. This policy basically blocks all traffic except TCP connections whose destination port is 80 or 443.

■ The network administrator defines how the VEMs can reach VSG (VLAN, VXLAN, or IP address), creates a port profile called “WEB-PP,” and inserts security profile “WEB-SP” into this port profile (see the configuration sketch after this list).

■ The configuration of port profile “WEB-PP” automatically creates a connectivity policy (or Port Group in VMware-speak) in the VM manager. Thus, the virtualization administrator assigns this policy to VM A, spawning vEthernet 10 in West VEM.
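The second step above could look roughly like the following Nexus 1000V sketch, which defines VSG reachability and binds security profile WEB-SP to port profile WEB-PP. All names and addresses are hypothetical, and the exact commands vary across Nexus 1000V releases; treat this as an outline rather than a definitive configuration:

Example 7-3 Hypothetical VSG Binding in a Nexus 1000V Port Profile

! Tell the VEMs how to reach the VSG instance (Layer 2 adjacency on VLAN 90)
vservice node VSG1 type vsg
  ip address 10.10.10.201
  adjacency l2 vlan 90
!
! Port profile inherited by the protected web VMs
port-profile type vethernet WEB-PP
  vmware port-group
  switchport mode access
  switchport access vlan 100
  org root/Tenant-A
  vservice node VSG1 profile WEB-SP
  no shutdown
  state enabled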


Note

For the sake of simplicity, and because it is a topic beyond the scope of this book, I will not represent how exactly each virtual networking service deploys high availability in the examples discussed in this chapter. Please refer to each vendor’s product documentation for more detailed information.


This bucolic scene is disturbed by an HTTP connection sent toward VM A. Figure 7-4 shows what happens when the first packet of the connection reaches West VEM, regardless of its source (B, C, D, or an external host).


Figure 7-4 First Packet Steered to VSG

Figure 7-4 captures the instant when West VEM sends the connection’s first packet (encapsulated in a vPath frame) after noticing that it was supposed to reach a “protected” interface (vEthernet 10).

As VSG receives the packet, it enforces security profile WEB-SP rules on it. Because the packet belongs to a TCP port 80 connection, it conforms to WEB-SP rules. Consequently, VSG resends the encapsulated packet back to the West VEM, as displayed in Figure 7-5.


Figure 7-5 West VEM Sending First Packet to VM A

Besides traffic redirection, vPath also carries out more operations between VSG and West VEM. In fact, VSG caches the security profile decision into the Nexus 1000V module, allowing the remaining packets from that specific connection to be freely exchanged between VM A and the original source.

To recognize which packets belong to this same connection, VEMs with protected VMs maintain a flow table. Inside these tables, allowed and blocked connections are identified via destination IP address, source IP address, protocol (TCP, UDP, and ICMP, among others), destination port (in the case of TCP or UDP), and source port (in the case of TCP or UDP).

The VEM also identifies the state of each offloaded flow. Therefore, after a FIN or RST is seen on a TCP connection, the VEM automatically purges the corresponding offload entry. For connectionless protocols and interrupted TCP connections, Nexus 1000V has predefined timeouts for the offloaded traffic entries configured by VSG.


Note

VSG does not offload to the VEM the handling of some protocols that require inspection of all packets of a flow, such as File Transfer Protocol (FTP), Trivial FTP (TFTP), and Remote Shell (RSH).


Because VSG only analyzes the first packet of a standard connection, it provides a scalable solution to control VM-to-VM (or “east–west,” as I intended to subconsciously suggest to you) traffic in cloud environments.

You can easily deduce that security rules based on IP addresses may not be the best way to secure such extremely dynamic networks. As a general rule in these scenarios, any security policy should be designed to be as reusable as possible. Hence, a cloud network architect should always strive to eliminate specific arguments in any policy or template.

With this objective, VSG supports the creation of security rules beyond IP addresses. As Figure 7-6 displays, PNSC provides flexible traffic classification methods that can be easily reused in automated environments.


Figure 7-6 VSG Security Profile Rule in PNSC

Figure 7-6 exhibits security profile APP-SP, which contains a single rule that only allows VMs characterized as web servers to use TCP port 3349. Rather than defining these VMs through their IP addresses, the rule uses the VM name prefix as a source attribute, allowing a much more efficient method to build security rules.

Besides virtual machine naming, PNSC can define the following VM attributes for VSG security profiles:

■ Guest operating system

■ Hostname

■ Cluster name

■ Port profile

■ VM DNS name

Besides prefixes, PNSC also allows the use of additional regular expressions and operators such as contains, equals, and not equals to recognize parts of these VM attributes.


Note

The joint capability of handling traffic within the hypervisor kernel and using classification based on server virtualization attributes (even inside the same network segment) is informally defined as micro-segmentation.


Virtual zones (vZones) are yet another PNSC resource that greatly facilitates security policy writing. In a nutshell, a vZone defines a logical group of VMs with multiple common attributes that can be referenced, as a single parameter, on any security profile rule. As an example, a virtual zone called vZone-DB can aggregate all VMs whose names start with the prefix “SQL” and whose IP addresses belong to a subnet 192.168.1.0/24.


Note

More details related to VSG scalability, performance, and licensing can be found in Chapter 13, “Cisco Cloud Infrastructure Portfolio.”


Cisco Adaptive Security Virtual Appliance

A compute firewall such as Cisco Virtual Security Gateway (VSG) is carefully designed to enforce security policies within a defined organization unit such as a cloud tenant. Yet, additional protection measures are required to harden the organization unit from what is considered to be “the external wild world.” After all, attacks and exploits may come from the Internet or even from other tenants sharing the same infrastructure.

Also known as ASAv, the Cisco Adaptive Security Virtual Appliance is composed of the market-leading Cisco ASA firewall software ported into a virtual machine. Although ASAv may replace some physical Adaptive Security Appliance (ASA) models in server virtualization environments, it was originally designed to perform the role of an edge firewall for cloud tenant resources.

Figure 7-7 illustrates how ASAv can protect VMs from a cloud tenant in a Nexus 1000V scenario.


Figure 7-7 ASAv Deployment Example

In both topologies in Figure 7-7, ASAv is protecting all virtual machines connected to VXLAN 9000 (VMs A, B, and C) from outside traffic coming through VLAN 50. Because edge firewalls must deploy Layers 4 to 7 security rules to avoid more sophisticated attacks, traffic is always steered to ASAv through VLAN (and VXLAN) manipulation. Routing IP packets between both inside and outside interfaces, ASAv can work as a default gateway for the protected VMs.


Note

At the time of this writing, ASAv does not support vPath. For this reason, it depends on other traffic steering techniques, such as embodying the VMs default gateway, to enforce security policies on all traffic from a cloud tenant (or select resources from it).


The topology on the right side of Figure 7-7 hides the representation of Layer 2 switches and virtualization hosts to clarify the objective of an edge firewall such as ASAv. In my opinion, such a “broadcast domain view” can be an excellent alternative to characterize security domains in virtual network topologies.

ASAv can deploy up to ten virtual interfaces and, for that reason, may also filter intra-tenant traffic and deploy virtual demilitarized zones (DMZs). Besides traffic filtering, ASAv also provides the following capabilities for cloud tenant resources (a configuration sketch follows the list):


■ Site-to-site virtual private networks (VPNs): Using IPsec, the edge firewall can securely connect external networks with compatible routers or firewalls.

■ Remote VPNs: This feature allows individual hosts deploying VPN software (such as Cisco AnyConnect) to remotely connect to ASAv as if they were located in a local security zone. With this feature, IT administrators can perform maintenance operations on protected VMs, for example.

■ Network Address Translation (NAT): ASAv can perform translation of private IP addresses to public IP addresses so internal VMs can be externally reached. This capability also allows the reuse of private IP subnets and addresses for multiple tenants.

■ Application inspection: Required for services that embed IP addressing information in user data or open secondary connections using dynamically assigned ports. Through this feature, ASAv performs deep packet analysis in protocols such as Domain Name Service (DNS), FTP, HTTP, Instant Messaging (IM), RSH, Session Initiation Protocol (SIP), SQL*Net, and TFTP, among others.

■ Authentication, Authorization, and Accounting (AAA): A set of services to identify ASAv administrators and application users, define what type of resources they can access, and register the procedures they have executed.
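The sketch below combines two of these capabilities, traffic filtering and static NAT, using standard ASA operating system commands. Interface names, addresses, and object names are hypothetical:

Example 7-4 ASAv Filtering and Static NAT Sketch

interface GigabitEthernet0/0
 nameif outside
 security-level 0
 ip address 198.51.100.1 255.255.255.0
!
interface GigabitEthernet0/1
 nameif inside
 security-level 100
 ip address 10.0.0.1 255.255.255.0
!
! Publish an internal web VM through a static NAT entry
object network WEB-VM
 host 10.0.0.11
 nat (inside,outside) static 198.51.100.11
!
! Only allow HTTP and HTTPS from the outside to the web VM
access-list OUTSIDE-IN extended permit tcp any object WEB-VM eq www
access-list OUTSIDE-IN extended permit tcp any object WEB-VM eq https
access-group OUTSIDE-IN in interface outside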

At the time of this writing, ASAv can be managed through the following tools and methods:

■ Command-line interface (CLI): This configuration method uses commands from the well-known ASA operating system.

■ Cisco Adaptive Security Device Manager (ASDM): Graphical user interface that allows the easy creation of security rules and policies on a single ASAv instance. Its administration can also be offered to tenants’ administrators, if desired.

■ Cisco Security Manager (CSM): Management tool that enforces security policy consistency across multiple ASAv instances through object management, event monitoring, reporting and troubleshooting tools, image control, and health and performance monitoring. CSM can be managed through a GUI or a REST-based XML API.

■ Application programming interface (API): ASAv also provides an individual API based on RESTful principles, which is ideal for cloud computing and other automated environments.


Note

More details related to ASAv scalability, performance, and licensing can be found in Chapter 13.


Cisco Cloud Services Router 1000V

The vast majority of virtual switches only offer Layer 2 forwarding (bridging) between virtual machines and the physical network. Nevertheless, rather than limiting Layer 3 forwarding (routing) to physical network devices, cloud tenants can benefit from the insertion of advanced routing functions closer to their virtual machines.

Cisco Cloud Services Router (CSR) 1000V expands Cisco’s initiative of building virtual appliances based on its most flexible and popular physical devices. More specifically, CSR 1000V runs Cisco IOS XE, a robust and complete variation of the most popular network operating system in the world.

Deploying CSR 1000V, cloud tenants can leverage advanced Layer 3 features such as


■ IP versions 4 and 6, including migration tools such as NAT64

■ Unicast routing protocols, including Border Gateway Protocol (BGP), Open Shortest Path First (OSPF), and Enhanced Interior Gateway Routing Protocol (EIGRP), as well as PBR

■ High-availability gateway protocols such as Hot Standby Router Protocol (HSRP), Virtual Router Redundancy Protocol (VRRP), and Gateway Load Balancing Protocol (GLBP)

■ IP multicast protocols, including Internet Group Management Protocol (IGMP) and Protocol Independent Multicast (PIM)

■ Virtual Routing and Forwarding (VRF) for routing and forwarding table segmentation

■ Multiprotocol Label Switching (MPLS) services, including Layer 3 VPNs (L3VPNs), Ethernet over MPLS (EoMPLS), and Virtual Private LAN Service (VPLS)

■ Zone-based firewall

■ IPsec VPNs, including Dynamic Multipoint VPN (DMVPN), Easy VPN, and FlexVPN

■ Access control lists (ACLs)

■ WCCP for an easier integration of myriad virtual networking services such as web caches and WAN accelerators


Note

CSR 1000V also deploys advanced data center features such as Overlay Transport Virtualization (OTV), which will be explained in more detail in Chapter 10, “Network Architectures for the Data Center: Unified Fabric.”


As a visual aid, Figure 7-8 presents a use case for CSR 1000V in an IaaS-based cloud.


Figure 7-8 CSR 1000V Deployment Example

In this scenario, a company deploys Layer 3 VPNs for traffic isolation between corporate users (CORPORATE) and partners (PARTNER). After hiring IaaS services from a cloud provider, the company wants to enforce these security measures to virtual machines deployed in its brand new cloud tenant environment.

As Figure 7-8 shows, two CSR 1000V instances are deployed inside the cloud tenant for redundancy purposes (ideally, each of these virtual routers should be located on a distinct host through anti-affinity rules defined in a virtualization cluster). CSR1 and CSR2 deploy HSRP to implement an “always active” default gateway for the virtual machines. Moreover, these tenant-controlled routers provide route advertisement throughout the company WAN with OSPF. Security is further tightened with the use of IPsec tunnels providing encryption to all data exchanged between remote branches and the cloud tenant.
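A fragment of CSR1’s configuration in this scenario might resemble the following IOS XE sketch, with hypothetical addressing; CSR2 would mirror it with a lower HSRP priority:

Example 7-5 HSRP and OSPF Fragment for CSR1 (Hypothetical Addressing)

interface GigabitEthernet1
 description Tenant-facing segment
 ip address 172.16.1.2 255.255.255.0
 standby 1 ip 172.16.1.1
 standby 1 priority 110
 standby 1 preempt
!
router ospf 1
 network 172.16.1.0 0.0.0.255 area 0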

Deploying MPLS in these IOS-based virtual appliances, the cloud tenant has separate VMs assigned to the CORPORATE and PARTNER VPNs, depending on whether they are connected to VLAN 500 or VXLAN 5000, respectively. Consequently, desktops from end users connected to the PARTNER VPN can only access virtual machines that are connected to VXLAN 5000 in the cloud tenant.


Note

At the time of this writing, CSR 1000V does not require vPath because it is usually positioned to deploy advanced routing features for all traffic exchanged between the cloud tenant domain and external networks.


The instantiation of an advanced router in a cloud tenant is such a compelling concept that public cloud providers, such as Amazon Web Services (AWS), are already offering CSR 1000V as an additional service for their customers.

CSR 1000V can be managed through the following methods:

■ CLI: Using Telnet or SSH, a network administrator can use the familiar commands from Cisco IOS. As a result, service providers and large enterprise networks can easily leverage CLI-based provisioning tools for CSR 1000V instances running in a cloud environment.

■ Cisco PNSC: This management tool can install and license CSR 1000V instances, as well as configure features according to their location within a created PNSC tenant hierarchy.

■ API: CSR 1000V additionally provides an API based on RESTful principles. It is ideal for cloud computing and other automated environments.


Note

More details related to CSR 1000V scalability, performance, and licensing can be found in Chapter 13.


Citrix NetScaler 1000V

With the massive popularity of e-business applications in the late 1990s, data center architects quickly realized that a web application running over a single server raises two immediate risks:

■ Lackluster performance: Whenever the server hits a saturation point defined by its hardware-software combination

■ Poor availability: In the case of a major hardware or software failure

To mitigate these risks, Cisco created the concept of the server load balancer (SLB) appliance in 1998. In essence, an SLB is a network device that can receive end-user traffic and send it to a selected server according to a predefined load balancing policy.

Figure 7-9 displays the main components on a typical SLB deployment, described in the following list.


Figure 7-9 SLB Basic Architecture


■ Servers: Basically, the IP addresses of the servers that will receive the connections dispatched by the SLB.

■ Service: The IP address, port, and protocol combination used to route requests to a specific load-balanced application server. A service is the logical representation of an application running on a server.

■ Monitors: Synthetic requests the SLB creates to check whether a service is available on a server. They can be as simple as an Internet Control Message Protocol (ICMP) Echo request or as sophisticated as a Hypertext Transfer Protocol (HTTP) GET operation bundled with a database query.

■ Virtual IP (VIP): An SLB internal IP address that is specifically used to receive end-user connections. This address is usually registered with DNS servers to be advertised to end users.

■ Virtual server: Combines a VIP, transport protocol (TCP or UDP), and port to which a client sends connection requests for a particular load-balanced application. It is associated with a set of services to which the SLB will dispatch end-user connections.

■ Stickiness table: An optional SLB component that stores client information. The SLB uses this data to consistently forward subsequent end-user connections to the server that was selected during the first access, thus maintaining user session state inside the same server. Examples of stored client information are source IP address, HTTP cookies, and special strings excised from user data.

■ Load balancing algorithm: The configured method of user traffic distribution among the servers deploying the same application. A wide variety of algorithms are available today for these devices, including round robin, least connections, and hashing.

Generally speaking, an SLB provides server load balancing through the following process:

Step 1. The SLB receives the client connection request on its VIP address, identifies the virtual server that will take care of the connection, and checks whether the client is already in the stickiness table.

Step 2. If the client information is not already present in the stickiness table, the SLB looks at the services associated with the virtual server and determines, through their monitor results, which real servers are currently running the application in a healthy state.

Step 3. Using the virtual server’s configured load balancing algorithm, the SLB selects the server that will receive the user connection.

Step 4. The SLB saves the client and server information in the stickiness table and coordinates both ends of the connection until it eventually ends.

An interesting analogy for an SLB would be an airport control tower, which must identify the main characteristics of a landing airplane (user connection) before deciding which runway (server) it can use. The control tower usually applies a predefined method (algorithm) to sequence the arriving planes and must already know (monitor) if a runway is in maintenance or not.


Note

Please do not confuse the concept of a server load balancer with that of server cluster software, which essentially allows servers to work in tandem, providing a scale-out solution for a specific application. Although SLBs may replace specific end-user distribution functions of clustering software, only the latter can manage user session synchronization among cluster members and provide shared access to stored data.


With widespread adoption in most data centers in the 2000s, SLBs were aptly renamed application delivery controllers (ADCs) as they expanded their capabilities with features such as the following:

■ Content switching: Server selection based on Layers 5 to 7 parameters from HTTP, FTP, RTSP, DNS, as well as user data.

■ Secure Sockets Layer (SSL) acceleration: Hardware-assisted encryption to offload secure web servers.

■ TCP connection reuse: Multiplexing of numerous TCP connections from clients onto a small number of connections between the ADC and a server. This feature offloads web servers from the management of multiple TCP connections.

■ Object compression: Decreases bandwidth requirements by compressing web objects in the ADC and delivering them to the application clients, whose web browsers perform the decompression.

■ Web acceleration: Includes several distinct mechanisms whose objective is improving application response time for web applications.

■ Application firewall: Embedded analysis tools that prevent security breaches, data loss, and unauthorized customizations to web applications with sensitive business or customer information.

Similar to other networking services, ADCs were eventually introduced as virtual appliances targeting server virtualization and cloud deployments. In both contexts, this virtual networking service is generically used to increase capacity of applications when virtual machines are scaled out through cloning or template instantiation.

Citrix NetScaler (NS) 1000V embodies the packaging of Citrix NetScaler ADCs in virtual appliances. And as Figure 7-10 depicts, NetScaler 1000V provides vPath integration with virtual networks based on Nexus 1000V.


Figure 7-10 NetScaler 1000V Deployment Example

In the described scenario, a cloud tenant has already deployed two virtual machines (A and C) to host a web application. In the situation displayed on the left side of the figure, NetScaler 1000V receives the requests from clients using a VIP address. As a consequence, the virtual ADC load balances the client connections between both VMs, providing more user capacity and availability for the application.

Because the company’s business department foresees a surge of user interest during an upcoming limited promotion, the cloud administration team decides to deploy two more virtual machines (B and D) for that period. The right side of the figure showcases this situation, where NetScaler 1000V is automatically configured to load balance client traffic to all four VMs, doubling the user capacity of the application.
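In NetScaler terms, the initial setup on the left side of the figure maps directly to the configuration elements presented earlier: servers, services, a monitor, and a virtual server listening on the VIP. The following CLI sketch uses hypothetical names and addresses:

Example 7-6 NetScaler 1000V Load-Balancing Sketch

add server vm-a 192.168.10.11
add server vm-c 192.168.10.13
add service svc-vm-a vm-a HTTP 80
add service svc-vm-c vm-c HTTP 80
add lb monitor mon-web HTTP
bind service svc-vm-a -monitorName mon-web
bind service svc-vm-c -monitorName mon-web
add lb vserver vip-web HTTP 203.0.113.10 80 -lbMethod LEASTCONNECTION -persistenceType SOURCEIP
bind lb vserver vip-web svc-vm-a
bind lb vserver vip-web svc-vm-c

Scaling out for the promotion then boils down to adding and binding services for VMs B and D in the same fashion, which is exactly the kind of repetitive task a cloud orchestrator can automate.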

If you are wondering about the value vPath is adding to this scenario, first you have to understand a classic problem related to ADCs connected in one-arm mode: how to guarantee that response traffic from the servers reaches the ADC.

In Figure 7-10, when NetScaler 1000V sends the connection request to the virtual machine, by default, the client IP address is used as the connection source address. Therefore, the VM response has the client IP address as its destination. And because Nexus 1000V is a Layer 2 switch, it would naturally send the VM response to the VM default gateway, hiding the server response from the ADC. Moreover, the connection would be instantly terminated because the client would not see the original destination IP address (VIP) as the source address of the response.

Traditionally, two methods are deployed in physical networks to solve this challenge, with variable benefits and drawbacks:

■ Source NAT: The ADC replaces the client IP address with an internal address (for example, the VIP), forcing the return traffic to be directed back to it. As a disadvantage, the application servers only detect the ADC as the origin of all connections and consequently lose track of client accesses for monitoring and accountability purposes.

■ Policy-based routing: As previously explained in the section “Service Insertion in Physical Networks,” a network device may deploy this non-default method to route server responses back to the ADC, demanding an exclusive IP subnet for the virtual service. However, this method is not possible in pure Layer 2 scenarios such as the one depicted in Figure 7-10.

Nexus 1000V and NetScaler 1000V can avoid this conundrum through vPath encapsulation. When this virtual ADC is assigned to a vPath-enabled port profile, Nexus 1000V can register load-balanced connections coming from NetScaler 1000V and automatically steer the return traffic to the ADC, avoiding the drawbacks from both source NAT and PBR.


Note

More details related to Citrix NetScaler 1000V scalability, performance, and licensing can be found in Chapter 13.


Cisco Virtual Wide Area Application Services

The two main reasons why applications usually suffer from performance issues in wide-area networks are latency and bandwidth starvation:

■ Latency can be defined as the time spent to transmit any signal through a communication channel. In any client/server application transaction, application response time can be roughly derived from the multiplication of the round-trip time (two times the latency) by the number of messages exchanged between client and server with reception confirmation. For example, if a server needs to send 1000 messages to a client and requires reception confirmation for each of them before sending the next message, a latency of 1 millisecond will result in 2 seconds for the whole transaction. On the other hand, with a latency of 100 milliseconds, the same transaction would need 3 minutes and 20 seconds (or 200 seconds) to finish.

TCP connections behave differently from the data exchange I just described. In summary, TCP employs the concept of a transmission window, which represents the unidirectional amount of connection data that can be transmitted after a reception confirmation. In a TCP connection, the transmission window increases whenever the last transmitted data chunk is successfully acknowledged. Nevertheless, most hosts present a transmission window size limitation of 64 KB, which may compromise TCP connections in high-latency links, regardless of their available bandwidth (see the worked example after this list).

■ Because it defines how Internet access is usually billed, bandwidth starvation is arguably the most intuitive cause attributed to unsatisfactory application performance. For example, in a low-bandwidth WAN connection, traffic queuing in network devices increases, causing packet loss and retransmissions on TCP connections.
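To make the window limitation concrete, the throughput of a single TCP connection is bounded by the transmission window divided by the round-trip time. With the 64 KB limit mentioned above and a 200-millisecond RTT:

\[
\text{max throughput} \approx \frac{\text{window}}{\text{RTT}} = \frac{64 \times 1024 \times 8\ \text{bits}}{0.2\ \text{s}} \approx 2.6\ \text{Mbps}
\]

In other words, such a connection cannot exceed roughly 2.6 Mbps, no matter how much WAN bandwidth is available.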

To support the consolidation of servers in centralized data centers, a new networking service called WAN acceleration was conceived. In essence, WAN acceleration aims to transparently assuage the effects of latency and bandwidth starvation on applications traversing WAN links.

Cisco Wide Area Application Services (WAAS) epitomizes Cisco’s innovative and scalable WAN acceleration solution. WAAS is a symmetrical acceleration solution, demanding the deployment of an accelerator device at each end of a WAN link: one close to the client and another in the vicinity of the application server. With this arrangement, application connections are intercepted by both WAAS devices, which in turn apply acceleration algorithms to decrease the application response time and increase the WAN link capacity.

In summary, Cisco WAAS offers the following acceleration algorithms:


■ TCP Flow Optimization (TFO): Each WAAS device acts as a TCP proxy, providing local acknowledgements to the host that is closest to the device on behalf of the remote end. With this artifice, WAAS “fools” both client and server through the illusion that both are connected to the same LAN. Then, the devices deploy an optimized version of TCP between them to leverage the most from the WAN bandwidth for each connection.

■ Data Redundancy Elimination (DRE): WAAS inspects TCP traffic to identify redundant data patterns at the byte level and quickly replace them with 6-byte signatures that are automatically indexed and recognized by both WAAS devices.

■ Persistent Lempel-Ziv (PLZ): Standards-based compression that can be applied (in conjunction with DRE or not) to further reduce the amount of bandwidth consumed by a TCP flow.

■ Application optimization (AO): WAAS provides specific optimization algorithms for message-intensive applications based on SSL, HTTP, Microsoft Server Message Block (SMB), Network File System (NFS), Messaging Application Programming Interface (MAPI), Citrix Independent Computing Architecture (ICA), and Windows Printing.


Note

You will receive a more detailed explanation of both SMB and NFS in Chapter 9, “File Storage Technologies.”


Full WAAS services can be deployed in two different physical formats: Cisco Wide Area Virtualization Engine (WAVE) appliances and service modules for Cisco Integrated Services Routers (ISR). In both formats, traffic interception is performed through PBR, inline pass-through network adapters, WCCP, ADCs, or a specialized WAAS clustering solution called Cisco AppNav.
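For instance, WCCP interception for WAAS typically relies on service groups 61 and 62, which together redirect both directions of a TCP flow. The following IOS sketch, with hypothetical interface roles, illustrates the idea:

Example 7-7 WCCP Service Groups 61 and 62 for WAAS Interception

ip wccp 61
ip wccp 62
!
interface GigabitEthernet0/1
 description LAN-facing interface
 ip wccp 61 redirect in
!
interface GigabitEthernet0/0
 description WAN-facing interface
 ip wccp 62 redirect in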


Note

Select Cisco routers can also deploy an IOS feature called WAAS Express, which implements a smaller set of WAAS acceleration algorithms.


In 2010, Cisco WAAS was also released as a virtual appliance called Virtual Wide Area Application Services (vWAAS). Similarly to other virtual networking services, this format allows the deployment of WAN acceleration for cloud computing environments, which are intrinsically exposed to WAN latency and client bandwidth restrictions.

Since its first version, vWAAS has incorporated vPath into its set of supported traffic interception methods. Hence, Nexus 1000V can easily steer virtual machine traffic that requires WAN acceleration to a vWAAS instance.

Figure 7-11 examines a vWAAS deployment using vPath.


Figure 7-11 vWAAS Deployment Scenario

In Figure 7-11, one end user is accessing an application hosted in VM A while another end user is requesting services from an application running on VM B. A physical WAAS appliance in branch A receives WCCP-redirected traffic and therefore can negotiate WAN acceleration algorithms with another WAAS device installed in the path to the application server. The depicted vWAAS covers this function, receiving steered traffic when the Nexus 1000V VEM perceives that VM A is connected to a vPath-enabled interface.

vPath may also be used to offload vWAAS from non-accelerated traffic. In Figure 7-11, the connection between end-user B and VM B cannot be accelerated because branch B does not have a WAAS device. Consequently, when vWAAS does not detect a remote device, it can program the VEM on host 2 to not steer subsequent packets from this connection.

As Figure 7-11 also demonstrates, a specific virtual appliance called vWAAS Central Manager (vCM) is responsible for managing a complete WAAS system. At the time of this writing, vCM can configure and monitor thousands of WAAS devices.

Figure 7-12 displays one of the multiple reporting capabilities of vCM. In this specific screen capture, vCM provides information related to the whole WAN acceleration deployment, including traffic volume, data reduction, and the top 10 most compressed applications.


Figure 7-12 vWAAS Central Manager GUI

Besides a GUI, vCM also provides an API that cloud computing implementations can use for monitoring purposes.


Note

More details related to vWAAS scalability, performance, and licensing can be found in Chapter 13.


vPath Service Chains

In previous sections, you have learned how vPath simplifies virtual networking insertion in Nexus 1000V scenarios. This section explores how this technology can help when multiple services should handle the traffic of a single VM. With this goal, Nexus 1000V supports service chains, in which a sequence of vPath-enabled virtual networking services is faithfully followed whenever a VEM detects a connection to certain virtual machines.

Figure 7-13 details how Nexus 1000V builds a vPath service chain for three different virtual networking services.


Figure 7-13 vPath Service Chain in Action

In the figure, Nexus 1000V is configured to implement a vPath service chain enforcing the following order:

1. NetScaler 1000V

2. vWAAS

3. VSG

A CSR 1000V instance routes the first packet from a client request to a VIP configured in NetScaler 1000V (solid arrow). After the virtual ADC load balances the connection to VM A, it uses vPath to encapsulate the result and send it back to the VEM (dashed arrows represent vPath-encapsulated traffic).

Following the service chain order associated with VM A, the Nexus 1000V module forwards the packet to vWAAS to verify whether its WAN acceleration algorithms may be applied to the connection.

Again encapsulated in vPath, the original packet is steered to VSG for security policy checking. When the packet finally reaches VM A in its original form, the VEM is already programmed for the return traffic, where

■ The VEM may already deploy VSG’s security rule decision.

■ It may not send more packets from that specific connection to vWAAS, in case the connection cannot be accelerated.

■ The module will surely steer the server response to NetScaler 1000V.

Most importantly, this rather complex traffic management is completely hidden under the service chain definition, which is inserted in a Nexus 1000V port profile. All the steering and offload decisions are implicitly executed, and will continue to happen even if any of the virtual services or the VM live migrates to another host.

Fundamentally, a vPath service chain provides policy-defined service insertion for virtual machines. If desired, other port profiles may implement distinct service chains: for example, a second service chain may only include vWAAS and VSG services for select VMs.
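In configuration terms, a chain is essentially an ordered list of vservice nodes referenced by a port profile. The sketch below outlines a two-node chain (vWAAS followed by VSG); all names are hypothetical and the exact syntax varies across Nexus 1000V releases, so treat this strictly as an outline:

Example 7-8 Hypothetical vPath Service Chain Outline

vservice node VWAAS1 type vwaas
  ip address 10.10.10.202
  adjacency l2 vlan 90
!
vservice node VSG1 type vsg
  ip address 10.10.10.201
  adjacency l2 vlan 90
!
! Ordered chain: vWAAS first, then VSG with security profile WEB-SP
vservice path WEB-CHAIN
  node VWAAS1 order 10
  node VSG1 profile WEB-SP order 20
!
port-profile type vethernet WEB-PP
  vservice path WEB-CHAIN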

vPath service chains make it extremely easy for clouds to employ virtual networking services as “add-ons” for each application tier on a cloud tenant.

In the case of a failure of any virtual networking service in the chain, Nexus 1000V detects the lack of connectivity probe (keepalive) responses from the failed service and no longer steers traffic to it. However, the fail mode configured on that service defines how the entire service chain behaves:

■ Fail-mode close: If the virtual networking service fails or loses connectivity to Nexus 1000V, all port profiles associated with the service will drop every packet and the whole service chain will stop working.

■ Fail-mode open: If the virtual networking service fails or loses connectivity to Nexus 1000V, all port profiles associated with the service will perform frame forwarding as if the service were not included in the service chain.

Undoubtedly, because of its central position for vPath-enabled services, Nexus 1000V is the most important tool during troubleshooting processes involving service chains.

Virtual Application Containers

Business applications rely on servers, storage, and networking, which includes segment creation and additional networking services (routing, security, and load balancing, among others). As a consequence, the application provisioning process is severely challenged by the complexity of installing, configuring, and licensing all of these components.

By definition, some level of application isolation is required in any multitenant domain, for multiple reasons, such as security, compliance, or service-level agreements (SLAs). As a simple example, you can imagine that in a service provider hosting applications for multiple customers, a single tenant may want to separate applications for internal employees from those for external partners.

With both situations in mind, data center architects embraced the concept of a network container to speed up application provisioning and reinforce network isolation in multitenant environments. In summary, a network container can be defined as a set of networking services configured in a standardized manner.

During the 2000s, to avoid deploying dedicated network devices (switches, routers, firewalls, and ADCs) for each tenant, most data center service providers built network containers using a virtualization technique called device partitioning to better leverage the usage of networking resources. The following elements represent common network device partitions that were heavily used during that period:

■ Virtual Routing and Forwarding (VRF) instance: A routing instance that can coexist with several others on the same routing equipment. It is composed of independent routing and forwarding tables, a set of interfaces, and optional routing protocols to exchange routing information with other peers (see the sketch after this list).

■ Firewall context: An independent virtual firewall, with its own security policy, interfaces, and administrators. From an administration perspective, deploying multiple contexts is similar to having multiple standalone devices. Cisco ASA supports multiple contexts deploying separate routing tables, firewall features, IPS, and management capabilities.

■ ADC context: An abstraction of an independent load balancer with its own interfaces, configuration, policies, and administrators. This technology was originally deployed in the former Cisco ADC solution called Application Control Engine (ACE).
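As a quick illustration of the first partition type, a VRF instance on a classic IOS router needs little more than a name, a route distinguisher, and member interfaces (all values below are hypothetical):

Example 7-9 Minimal VRF Instance Definition

ip vrf Tenant-A
 rd 65000:101
!
interface GigabitEthernet0/0.101
 encapsulation dot1Q 101
 ip vrf forwarding Tenant-A
 ip address 10.101.0.1 255.255.255.0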

Figure 7-14 illustrates four network container examples composed of network partitions and offered to tenants of a data center service provider.


Figure 7-14 Network Container Examples

In all four container options depicted in Figure 7-14, VRF instances provide Layer 3 services to external networks, while both firewall and SLB contexts provide their specialized networking services to one or more applications from a tenant. Using these virtual partitions, a service provider can logically provision selected networking services without undertaking the manual procedures that would be necessary if physical appliances were deployed.

As you previously learned in the section “Service Insertion in Physical Networks,” this scenario uses VLAN manipulation to insert services between clients and servers from an application. For this objective, an additional VLAN must be provisioned to connect two networking services (or one service to the application servers). Thus, the bronze, silver, gold, and diamond network containers, respectively, consume 2, 2, 3, and 5 VLANs from the 4094 that are available on a single service provider network infrastructure.


Note

For service providers interested in developing such designs, Cisco offers an extremely useful standardization tool in the form of the Virtualized Multiservice Data Center (VMDC) reference architecture. VMDC essentially provides a framework for building multitenant data centers with focus on the integration of networking, computing, server virtualization, security, load balancing, and system management.


Being also a multitenant environment, cloud computing recycles many principles and best practices learned from data center service providers. But with the benefits brought by server virtualization, cloud providers could evolve network containers into virtual application containers through the addition of three new ingredients:

■ Virtual machines: The simplicity of VM provisioning allows virtual servers to be added to these templates.

■ VXLAN: Used to scale and facilitate network provisioning for tenants.

■ Virtual networking services: Can replace device partitions, decreasing cloud orchestration operations with the physical network and expanding their capabilities to scale out (create more virtual appliances) and scale up (allocate more resources to a virtual appliance).

As discussed in Chapter 4, “Behind the Curtain,” standardization is a mandatory requirement for ease of automation. And for this reason, virtual application containers are considered a key element of cloud computing architecture.

Using its broad portfolio of virtual networking services, Cisco streamlined the creation of virtual application containers through a solution called Cisco Virtual Application Cloud Segmentation (VACS). This software package basically automates installation, licensing, and configuration of multiple virtual services to enable an easy and efficient setup of virtualized applications.

Cisco VACS provisions application environments through virtual application container templates, which are used for the instantiation of identical virtual application containers. The solution architecture has three main software components:

■ Cisco Nexus 1000V: Provides network segmentation through VLANs and VXLANs, and offers vPath-based service insertion

■ Cisco PNSC: Controls the installation, licensing, and configuration of virtual networking services

■ Unified Computing System (UCS) Director: Cisco orchestration solution that provides the management interface to deploy, provision, and monitor the whole VACS solution

As Figure 7-15 demonstrates, VACS relieves the cloud orchestrator from executing the repetitive (and strongly correlated) tasks required to instantiate a virtual application container.


Figure 7-15 VACS Architecture

Figure 7-15 highlights two ways to manage VACS:

■ GUI: Used to install VACS components, license virtual networking services, create virtual application container templates, and instantiate containers from them, if necessary

■ REST API: Interface that allows a cloud orchestrator to instantiate virtual application containers based on virtual application container templates, as well as decommission containers that are no longer needed

Regardless of its origin, when a request for a new container reaches UCS Director, it follows preexistent workflows that interact with PNSC (to provision the required virtual networking services), a VM manager (to spawn virtual machines), and Nexus 1000V (to correctly connect these VMs, virtual appliances, and external networks). Assembling like the Avengers, these elements form a brand new virtual application container.

VACS provides wizards to create “almost-ready-to-go” virtual application container templates, which are commonly referred to as three-tier templates. Figure 7-16 illustrates this construct.


Figure 7-16 Three-tier Container Template in VACS

As you can see, this predefined template enforces VSG security policies in three application tiers (web, application, and database) that are connected to a shared segment (VLAN or VXLAN). It also achieves segregation through CSR 1000V acting as a zone-based firewall and uses EIGRP as the default routing protocol to advertise the public subnet assigned to the virtual machines. It can also deploy static routes and use NAT to publish internal services to external networks.

A three-tier template can be defined as internal (where external networks can only access the web tier) or external (where all three application tiers can be externally accessed).

To guarantee the uniqueness of addressing and segment IDs, this container template must be configured with the pools described in Table 7-2 before it can instantiate any application container.


Table 7-2 Three-tier Virtual Application Container Template Parameters

Finally, the three-tier container template also provides predefined VSG vZones for web, application, and database virtual machines.


Note

As an exclusive add-on to the web tier from the three-tier container template, you can add a redundant pair of SLB virtual networking services based on open source HAProxy (http://www.haproxy.org/).


Unlike three-tier container templates, custom virtual application container templates are used by VACS to address specific requirements of a cloud computing environment. In addition to all parameters defined in a three-tier template, a custom template requires the parameters listed and described in Table 7-3.


Table 7-3 Custom Virtual Application Container Template Additional Parameters


Note

As an exclusive add-on to any tier in the custom container template, you can add a redundant pair of open source HAProxy SLBs.


After you create these container templates, they become available for container creation requests. And as I have commented before, these requests will generate isolated virtual application containers.

Although most VACS deployments use the REST API to accept inbound requests from cloud portals and orchestrators, a simple example of container creation can be demonstrated through the UCS Director GUI, as shown in Figure 7-17.


Figure 7-17 Requesting an Application Container

In Figure 7-17, a non-administrative user (“demouser”) is requesting the creation of an application container called App-Cont. Because the container template already exists, the user request generates a workflow iteration to instantiate the template. Figure 7-18 displays the workflow in action.


Figure 7-18 VACS Workflow

After all tasks related to the components of App-Cont are executed in the workflow shown in Figure 7-18, the application container is finally ready to be used. Figure 7-19 depicts the final outcome of the request, including the VMs and virtual networking services originally specified in the three-tier container template.


Figure 7-19 Virtual Machines from App-Cont

Around the Corner: Service Insertion Innovations

Throughout this chapter, I have explored many service insertion methods, including VLAN manipulation, policy-based routing (PBR), Web Cache Communication Protocol (WCCP), and Virtual Services Data Path (vPath). While the advantages of the first three approaches deserve extended discussions in physical network designs, their benefits tend to pale when compared to the flexibility and simplicity of vPath.

As previously explained, vPath permits the insertion of multiple virtual networking services through the use of granular policies that can discriminate virtual machines with flexibility beyond addresses and subnets. But if you think vPath is the last word on service insertion technologies, you are quite wrong.

Take for example the Cisco Remote Integrated Services Engine (RISE), which is depicted in action in Figure 7-20.


Figure 7-20 Cisco RISE in Action

RISE is intended to simplify one-arm mode implementations of networking services (such as ADCs), abstracting these appliances as “remote modules” of a physical Nexus data center switch.

This perception is achieved through a tight integration between the devices, which enables select SLB configurations to be automatically reflected in the switch. As Figure 7-20 exemplifies, the configuration of a VIP load balancing user sessions to real servers produces two consequences:

■ Auto-PBR: The Nexus switch automatically configures policy-based routing to steer server responses to the ADC.

■ Route Health Injection (RHI): As soon as there is at least one active server, the ADC advertises the VIP address through a dynamic routing protocol, allowing end users to be preferably routed to the ADC site.

Both RISE and vPath depend on joint efforts between Cisco and other vendors, such as Citrix. Cisco is also currently leading the development of the Network Service Header (NSH), a service chaining protocol based on the early success of vPath. In a nutshell, NSH includes the following enhancements:

■ It is an open standard, motivating a broad acceptance from a wide variety of networking services and vendors.

■ It is designed for both physical and virtual networks.

■ It is transport-independent because it can be inserted between the original packet and any outer network transport encapsulation such as MPLS, VXLAN, or Generic Routing Encapsulation (GRE).

Along with other vendors, Cisco has proposed an IETF draft describing all details of the header, allowing software and hardware vendors to develop NSH solutions before the publication of the final standard.


Note

Cisco Application Centric Infrastructure (ACI) also implements an innovative service insertion technique called service graphs. Because this feature is a component of this paradigm-shift architecture, I will save this discussion for Chapter 11, “Network Architectures for the Data Center: SDN and ACI.”


Further Reading

■ Deliver the Next-Generation Intelligent Data Center with Cisco Nexus 7000 Series Switches, Citrix NetScaler Application Delivery Controller, and RISE Technology: http://www.cisco.com/c/dam/en/us/products/collateral/switches/nexus-7000-series-switches/white-paper-c11-731370.pdf

■ Network Service Header: https://tools.ietf.org/html/draft-ietf-sfc-nsh-01

Exam Preparation Tasks

Review All the Key Topics

Review the most important topics in this chapter, denoted with a Key Topic icon in the outer margin of the page. Table 7-4 lists a reference of these key topics and the page number on which each is found.


Table 7-4 Key Topics for Chapter 7

Complete the Tables and Lists from Memory

Print a copy of Appendix B, “Memory Tables” (found on the CD), or at least the section for this chapter, and complete the tables and lists from memory. Appendix C, “Answers to Memory Tables,” also on the CD, includes completed tables and lists so that you can check your work.

Define Key Terms

Define the following key terms from this chapter, and check your answers in the glossary:

networking service

firewall

server load balancer (SLB)

WAN acceleration

service insertion

policy-based routing (PBR)

Web Cache Communication Protocol (WCCP)

virtual networking service

Virtual Services Data Path (vPath)

Virtual Security Gateway (VSG)

Adaptive Security Virtual Appliance (ASAv)

Cloud Services Router (CSR) 1000V

NetScaler 1000V

Virtual Wide Area Application Services (vWAAS)

Virtual Application Cloud Segmentation (VACS)
