Chapter 2. Appraising Virtual Private LAN Service

High-availability requirements of virtualization technologies, the costs associated with deploying and maintaining a dedicated fiber network, and technology limitations have led to the development of Virtual Private LAN Service (VPLS)-based solutions. These solutions provide multipoint Layer 2 (L2) connectivity services across geographically dispersed data centers over a Layer 3 (L3) network infrastructure.

Key objectives for using L3 routing rather than L2 switching to interconnect data centers include the following:

• Resolve core link-quality problems using L3 routing technology

• Remove data center interdependence using spanning-tree isolation techniques

• Provide storm-propagation control

• Allow safe sharing of infrastructure links for both L2 core and L3 core traffic

• Adapt to any data center design such as Rapid Spanning Tree Protocol (RSTP)/Rapid Per-VLAN Spanning Tree Plus (RPVST+), Multiple Spanning Tree (MST), Virtual Switching System (VSS), Nexus, and blade switches

• Allow VLAN overlapping and therefore virtualization in multitenant or service provider designs

The following sections discuss virtual private network (VPN) technologies and some associated challenges in detail.

VPN Technology Considerations

Ethernet switching technology has a long and successful record of delivering high bandwidth at affordable prices in enterprise LANs. With the introduction of 10-Gbps Ethernet switching, Ethernet offers even greater price/performance advantages for both enterprise and service provider networks.

In attempting to deliver Ethernet-based multipoint services, service providers are evaluating and deploying two types of multipoint architectures: L3 VPNs and L2 VPNs.

Layer 3 Virtual Private Networks

The most common L3 VPN technology is Multiprotocol Label Switching (MPLS), which delivers multipoint services over an L3 network architecture. MPLS L3 VPNs offer many attractive characteristics for delivery of multipoint services, including the following:

• The use of a variety of user network interfaces (UNI), including Frame Relay, Asynchronous Transfer Mode (ATM), and Ethernet

• The scalability of MPLS for wide-area service deployments

• The capability to access multiple L3 VPNs from the same UNI

• The ability for an end user to add new customer edge (CE) nodes without having to reconfigure the existing CE nodes

• Robust quality of service (QoS) implementation, allowing differentiated packet forwarding according to application characteristics

• The capability to support stringent service level agreements (SLA) for high availability

However, MPLS L3 VPNs do impose certain provisioning requirements on both service providers and enterprises that sometimes are unacceptable to one or the other party. Some enterprises are reluctant to relinquish control of their network to their service provider. Similarly, some service providers are uncomfortable with provisioning and managing services based on L3 network parameters as is required with MPLS L3 VPNs.

Layer 2 Virtual Private Networks

For multipoint L2 VPN services, service providers have frequently deployed Ethernet switching technology using techniques such as 802.1Q tunneling (also referred to as tag stacking or QinQ) as the foundation for their metro Ethernet network architecture.

Switched Ethernet service provider networks support multipoint services, also referred to as transparent LAN services (TLS), which provide a good example of L2 VPN service supported over L2 network architecture.

Switched Ethernet network architectures have proven successful in delivering high-performance, low-cost L2 VPN multipoint services for service providers in many countries. However, as the size of these switched Ethernet networks has grown, the scalability limitations of this architecture have become increasingly apparent:

• Limited VLAN address space per switched Ethernet domain

• Scalability of spanning-tree protocols (IEEE 802.1D) for network redundancy and traffic engineering

• Ethernet MAC address learning rate, which is important for minimizing the flooding that results from unknown destination MAC addresses

These limitations, which are inherent in Ethernet switching protocols, preclude the use of Ethernet switching architectures to build L2 VPN services that scale beyond a metropolitan-area network (MAN) domain.

To address the limitations of both MPLS L3 VPNs and Ethernet switching, and because classical L2 switching was not built for extended, large-scale networks, innovations in network technology have led to the development of VPLS.

VPLS Overview

Virtual Private LAN Service (VPLS) is an architecture that provides multipoint Ethernet LAN services, often referred to as transparent LAN services (TLS), across geographically dispersed locations, using MPLS as transport.

Service providers often use VPLS to provide Ethernet multipoint services (EMS). VPLS is being adopted by enterprises on a self-managed MPLS-based MAN to provide high-speed any-to-any forwarding at L2 without the need to rely on spanning tree to keep the physical topology loop free. The MPLS core uses a full mesh of pseudowires (PWs; described in the next section) and split horizon to avoid loops.

To provide multipoint Ethernet capability, Internet Engineering Task Force (IETF) VPLS drafts describe the concept of linking virtual Ethernet bridges by using MPLS PWs. At a basic level, VPLS is a group of virtual switch instances (VSI) that are interconnected by using Ethernet over MPLS (EoMPLS) circuits in a full-mesh topology to form a single, logical bridge.

A VLAN can be extended across multiple segments by using a bridge or a switch, so it is not constrained to a single wire. A bridge or switch examines each incoming Ethernet frame to determine its source and destination MAC addresses. It uses the source MAC address to build a table of the device addresses attached to its ports, and it uses the destination address to make a forwarding decision. If the frame is a broadcast or multicast frame, or if its destination address is unknown, the frame is forwarded (flooded) to all ports except the receiving port. This learn-and-flood process is known as the learning (transparent) bridge mechanism.

In concept, a VSI is similar to this bridging function. In both technologies, a frame is switched based on the destination MAC address and membership in an L2 VPN (a VLAN). VPLS forwards Ethernet frames at L2, dynamically learns source MAC address-to-port associations, and forwards frames based on the destination MAC address. If the destination address is unknown, or is a broadcast or multicast address, the frame is flooded to all ports associated with the virtual bridge. In operation, therefore, VPLS offers the same connectivity a device would experience if it were attached to an Ethernet switch, by linking VSIs with MPLS PWs to form an “emulated” Ethernet switch.

Because of this flooding behavior, loops in bridged networks are disastrous. There is no counter at the frame level, such as the Time-To-Live (TTL) field in an IP header, to indicate how many times a frame has circulated the network. Frames continue to loop until the entire network is saturated and the bridges can no longer forward frames. To prevent loops in a network, bridges or switches use Spanning Tree Protocol (STP) to block any ports that might cause a loop. VPLS offers an alternative to the STP control plane and limits STP to local sites.

Compared to traditional LAN switching technologies, VPLS is more flexible in its geographic scaling. Therefore, CE sites might be within the same metropolitan domain, or might be geographically dispersed on a regional or national basis. The increasing availability of Ethernet-based multipoint service architectures from service providers for both L2 VPN and L3 VPN services is resulting in a growing number of enterprises transitioning their WANs to these multipoint services. VPLS is playing an increasingly important role in this transition.

The virtual forwarding instance (VFI) identifies a group of PWs that are associated with a VSI.

Figure 2-1 shows how VFIs are linked by MPLS PWs to form an “emulated” Ethernet switch.

Figure 2-1 Schematic representation of VPLS components.

VPLS is an emerging technology that provides an emulated LAN service across an MPLS core network. The service uses a multipoint architecture to connect remote sites at L2.

VPLS consists of three primary components (a configuration sketch follows the list):

Attachment circuits: Connections to CE devices or aggregation switches; usually Ethernet, but ATM or Frame Relay is also possible

Virtual circuits (VCs or PWs): Connections between network-facing provider edge (N-PE) routers across the MPLS network, based on draft-martini-l2circuit-trans-mpls-11

VSI: A virtual L2 bridge instance that connects attachment circuits to VCs
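
The following is a minimal VFI configuration sketch for an N-PE, mapping directly to these three components; the VFI name, VPN ID, peer loopback addresses, and VLAN number are hypothetical:

! VSI: define the virtual bridge and its full mesh of PWs to peer N-PEs
l2 vfi VFI-GREEN manual
 vpn id 100
 neighbor 10.100.1.2 encapsulation mpls
 neighbor 10.100.1.3 encapsulation mpls
!
! Attachment circuit: bind the local bridge domain (VLAN 100) to the VSI
interface Vlan100
 no ip address
 xconnect vfi VFI-GREEN

Each neighbor statement builds one PW of the full mesh; split horizon on these PWs keeps the mesh loop free without STP.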

The VPLS specification outlines five components specific to its operation:

VPLS architecture (VPLS and Hierarchical VPLS [H-VPLS]): The Cisco 7600 series routers support flat VPLS architectures and Ethernet edge H-VPLS topologies.

Autodiscovery of PEs associated with a particular VPLS instance: The Cisco 7600 series routers support manual neighbor configuration and Border Gateway Protocol (BGP) autodiscovery.

Signaling of PWs to interconnect VPLS VSIs: The Cisco 7600 series routers use Label Distribution Protocol (LDP) to signal draft-martini PWs (VCs).

Forwarding of frames: The Cisco 7600 series routers forward frames based on destination MAC address.

Flushing of MAC addresses upon topology change: In Cisco IOS Software Release 12.2(33)SRC1 and later, Cisco 7600 series routers support MAC address flushing and allow configuration of a MAC address aging timer.

Note

For more information about VPLS, refer to the Cisco document “Cisco IOS MPLS Virtual Private LAN Service: Application Note,” available at http://tinyurl.com/c6k6ot.

Understanding Pseudowires

A PW is a point-to-point connection between a pair of PE routers. It is bidirectional and consists of a pair of unidirectional MPLS VCs. The primary function of a PW is to emulate services such as Ethernet, ATM, Frame Relay, or time-division multiplexing (TDM) over an underlying MPLS core network through encapsulation into a common MPLS format. The PW is a mechanism that carries the elements of an emulated service from one PE router to one or more PEs over a packet-switched network (PSN). A PW is a virtual connection that, in the context of VPLS, connects two VFIs.

In Ethernet switching extension, PWs are used to cross-connect (xconnect) physical ports, subinterfaces, or VLANs (SVI or VFI) over MPLS transport.

When used as a point-to-point (P2P) Ethernet circuit connection, the PW is known as EoMPLS, where the PW is created using targeted LDP. This P2P connection can be used in the following modes (a configuration sketch follows the list):

Xconnect ports (EoMPLS port mode), regardless of the frame format on the port (that is, with or without the dot1q header): In this mode, the xconnected port does not participate in any local switching or routing.

Xconnect port VLAN (also known as subinterfaces): Allows selective extraction of a VLAN based on its dot1q tag and P2P xconnect of its packets. Local VLAN significance is usually required in this mode and is supported today only on an Ethernet Services (ES) module facing the edge, also known as facing the aggregation.

Xconnect SVI (interface VLAN) in a P2P fashion: This approach requires a shared port adapter interface processor (SIP) or ES module facing the core (also called facing the MPLS network).
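
The following sketch illustrates the first two modes; the interface names, peer address, and VC IDs are hypothetical:

! EoMPLS port mode: the entire port is cross-connected to the remote PE;
! no local switching or routing occurs on this port
interface GigabitEthernet1/1
 xconnect 10.100.1.2 101 encapsulation mpls
!
! EoMPLS VLAN mode: only dot1q VLAN 20 is extracted and cross-connected
interface GigabitEthernet1/2.20
 encapsulation dot1Q 20
 xconnect 10.100.1.2 102 encapsulation mpls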

PWs are the connection mechanisms between switching interfaces (VFIs) when used in a multipoint environment (VPLS). This approach allows xconnecting SVI in a multipoint mode by using one PW per destination VFI with MAC address learning over these PWs.

PWs are dynamically created using targeted LDP, based on either a neighbor statement configured in the VFI or automatic discovery through Multiprotocol BGP (MP-BGP) announcements. The autodiscovery option is beyond the scope of this book because it requires BGP to be enabled in the network.

In Cisco IOS Software Releases 12.2(33)SRB1 and 12.2(33)SXI, the targeted LDP IP address for the PW can be independent of the LDP router ID, which is required for scalability and load balancing.

PWs can be associated with PW classes that specify creation options, such as transport of the PWs via MPLS Traffic Engineering (TE) tunnels. This approach allows full control of load repartition on the core links and permits Fast Reroute (FRR) protection to accelerate convergence on link or node failures. Protection against core link failure is achieved by locally repairing the label switch paths (LSP) at the point of failure, allowing data to continue to flow while a new end-to-end LSP is established to replace the failed one. FRR locally repairs the protected LSPs by rerouting them over backup tunnels that bypass failed links or nodes.
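
A minimal PW class sketch follows, assuming an existing TE tunnel interface Tunnel1; the class name, VFI name, and peer address are hypothetical:

! Steer PWs of this class over a specific TE tunnel
pseudowire-class PW-TE
 encapsulation mpls
 preferred-path interface Tunnel1
!
! Reference the class from the VFI neighbor instead of plain MPLS encapsulation
l2 vfi VFI-GREEN manual
 vpn id 100
 neighbor 10.100.1.2 pw-class PW-TE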

VPLS to Scale STP Domain for Layer 2 Interconnection

EoMPLS requires that STP be enabled from site to site to provide a redundant path, so it is not a viable option for large-scale L2 extensions of bridge domains.

VPLS is a bridging technique that relies on PWs to interconnect virtual bridges in a multipoint topology. Because VPLS is natively built with an internal mechanism known as split horizon, the core network does not require STP to prevent L2 loops.

Figure 2-2 shows three remote sites connected via a multipath VPLS core network.

Figure 2-2 Multipath VPLS with STP isolation.

Redundancy with VPLS, as shown in Figure 2-3, is difficult. Within a single N-PE, split horizon prevents traffic from looping from one PW to another. However, with dual N-PEs per site, traffic can return through the link between the dual N-PEs or via the aggregation switches, creating a loop.

Figure 2-3 Redundant N-PEs within the data center.

The design goal is to maintain the link between aggregation switches in forwarding mode. At the same time, a goal is to block VPLS-associated traffic from traversing the inter-N-PE link within a data center, allowing only one N-PE to carry traffic to and from the VPLS cloud.

Even though a VPLS bridge is inherently protected against L2 loops, a loop-prevention protocol must still be used against local L2 loops in the access layer of the data centers where cluster nodes are connected. Therefore, for each solution described in this book, VPLS is deployed in conjunction with Embedded Event Manager (EEM) to ensure loop prevention in the core, based on the full mesh of PWs and redundant N-PEs in each location. Protection against edge node or edge link failure is provided by using EEM to customize the solution behavior based on network events as they occur.

An active/standby path-diversity protection mechanism is provided per VLAN to ensure that only one L2 connection to the VPLS cloud is active at a time. This mechanism is known as VPLS with EEM in the N-PE.

H-VPLS Considerations

VPLS requires the creation of one VFI for each bridge domain that is to be extended across an L3 core network. Some organizations might require many VLANs to be extended between geographically dispersed data centers. However, creating the hundreds of VFIs that would be required in this case is not practical.

One of the options for scaling VLAN transport via VPLS is to use additional 802.1Q encapsulation, known as QinQ. QinQ is the Cisco implementation of the IEEE 802.1ad standard and specifies how to double-tag a frame with an additional VLAN tag. A frame that enters an interface configured for QinQ encapsulation receives a core VLAN number. In QinQ encapsulation, the edge VLAN number is hidden, and frames are switched based on this core VLAN.
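
A minimal QinQ sketch follows; the interface and the core VLAN number (500) are hypothetical. All edge VLANs arriving on the port are carried inside the single core VLAN:

! Tunnel all incoming edge VLANs inside core VLAN 500
interface GigabitEthernet2/1
 switchport
 switchport access vlan 500
 switchport mode dot1q-tunnel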

Figure 2-4 illustrates H-VPLS encapsulation. In the original Ethernet frame, the payload is tagged with the 802.1Q edge VLAN number and carries the destination (DA) and source (SA) MAC addresses. To aggregate and hide the edge VLANs, QinQ inserts an additional 802.1Q core VLAN number, as shown in the QinQ frame. However, the DA of the incoming Ethernet frame is still used for forwarding. In addition, VPLS adds the two labels that LDP provides: one label points toward the destination N-PE (the core label), and the other identifies the PW and therefore points to the correct VFI.

Figure 2-4 H-VPLS encapsulation.

EEM

The Cisco IOS Embedded Event Manager (EEM) is a unique subsystem within Cisco IOS Software. EEM is a powerful and flexible tool to automate tasks and customize the behavior of Cisco IOS and the operation of the device. EEM consists of Event Detectors, the Event Manager, and an Event Manager Policy Engine.

You can use EEM to create and run programs or scripts directly on a router or switch. The scripts are called EEM policies and can be programmed with a simple command-line interface (CLI) or by using a scripting language called Tool Command Language (Tcl). Policies can be defined to take specific actions when the Cisco IOS Software recognizes certain events through the Event Detectors. The result is an extremely powerful set of tools to automate many network management tasks and direct the operation of Cisco IOS to increase availability, collect information, and notify external systems or personnel about critical events.
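
The following is a minimal CLI-based applet sketch of the kind this book uses to react to network events; the syslog pattern, interface names, and actions are hypothetical:

! Applet that reacts to a core-facing link failure by activating a backup interface
event manager applet BACKUP-PATH
 event syslog pattern "Interface GigabitEthernet1/1, changed state to down"
 action 1.0 cli command "enable"
 action 2.0 cli command "configure terminal"
 action 3.0 cli command "interface GigabitEthernet1/2"
 action 4.0 cli command "no shutdown"
 action 5.0 syslog msg "EEM: backup path activated on Gi1/2"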

EEM helps businesses harness the network intelligence intrinsic to Cisco IOS Software and gives them the capability to customize behavior based on network events as they happen, respond to real-time events, automate tasks, create custom commands, and take local automated action based on conditions detected by Cisco IOS Software.

EEM is a low-priority process within Cisco IOS. It is therefore important to consider this fact when using EEM on systems exposed to environments in which higher-priority processing might monopolize the router CPU. Care should be taken to protect the router CPU from being monopolized by events such as broadcast storms. Recommended practice dictates allocating more time to low-priority processes by using the IOS command process-max-time 50. In addition, control-plane policing (CoPP), storm control, and event dampening should be deployed to protect the CPU.
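
A minimal sketch of these protections follows; the interface and the storm-control threshold are hypothetical:

! Force processes to yield the CPU after 50 ms, giving low-priority
! processes such as EEM more frequent access to the scheduler
process-max-time 50
!
! Rate-limit broadcast traffic at the edge to dampen storms
interface GigabitEthernet1/1
 storm-control broadcast level 1.00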

Note

The Cisco document “Cisco IOS Embedded Event Manager Data Sheet” provides detailed information about EEM and is available at http://tinyurl.com/3hrdm7.

MPLS

Multiprotocol Label Switching (MPLS) combines the performance and capabilities of L2 (data link layer) switching with the proven scalability of L3 (network layer) routing. MPLS enables enterprises and service providers to build next-generation intelligent networks that deliver a wide variety of advanced, value-added services over a single infrastructure. MPLS also makes it possible for enterprises and service providers to meet the challenges of explosive growth in network utilization while providing the opportunity to differentiate services, without sacrificing the existing network infrastructure. The MPLS architecture is flexible and can be employed in any combination of L2 technologies. This economical solution can be integrated seamlessly over any existing infrastructure, such as IP, Frame Relay, ATM, or Ethernet. Subscribers with differing access links can be aggregated on an MPLS edge without changing their current environments because MPLS is independent of access technologies.

Integration of MPLS application components, including L3 VPNs, L2 VPNs, TE, QoS, Generalized MPLS (GMPLS), and IPv6, enables the development of highly efficient, scalable, and secure networks that guarantee SLAs.

MPLS delivers highly scalable, differentiated, end-to-end IP services with simple configuration, management, and provisioning for providers and subscribers. By incorporating MPLS into their network architectures, service providers can save money, increase revenue and productivity, provide differentiated services, and gain competitive advantages.

Label Switching Functions

In conventional L3 forwarding mechanisms, as a packet traverses the network, each router extracts all the information relevant to forwarding the packet from the L3 header. This information is then used as an index for a routing table lookup to determine the next hop for the packet.

In the most common case, the only relevant field in the header is the destination address field, but in some cases, other header fields might also be relevant. As a result, the header analysis must be performed independently at each router through which the packet passes. In addition, a complicated table lookup must also be performed at each router.

In label switching, the analysis of the L3 header is done only once. The L3 header is then mapped into a fixed-length, unstructured value called a label.

Many headers can map to the same label, as long as those headers always result in the same choice of next hop. In effect, a label represents a forwarding equivalence class (that is, a set of packets that, however different they might be, are indistinguishable by the forwarding function).

The initial choice of a label need not be based exclusively on the contents of the L3 packet header. For example, forwarding decisions at subsequent hops can also be based on routing policy.

After a label is assigned, a short label header is added in front of the L3 packet. This header is carried across the network as part of the packet. At subsequent hops through each MPLS router in the network, labels are swapped, and forwarding decisions are made by means of MPLS forwarding table lookup for the label carried in the packet header. Therefore, the packet header does not need to be reevaluated during packet transit through the network. Because the label is a fixed length and unstructured, the MPLS forwarding table lookup process is both straightforward and fast.

MPLS LDP

MPLS Label Distribution Protocol (LDP) allows the construction of highly scalable and flexible IP VPNs that support multiple levels of services.

LDP provides a standard methodology for hop-by-hop (dynamic label) distribution in an MPLS network by assigning labels to routes chosen by the underlying Interior Gateway Protocol (IGP). The resulting labeled paths, called label switch paths (LSP), forward labeled traffic across an MPLS backbone to particular destinations.

LDP provides the means for label switch routers (LSR) to request, distribute, and release label prefix binding information to peer routers in a network. LDP enables LSRs to discover potential peers and to establish LDP sessions with those peers for the purpose of exchanging label binding information.
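
As context for the LDP features that follow, here is a minimal sketch that enables MPLS and LDP on a core-facing interface; the IGP configuration is omitted, and the interface and addressing are hypothetical:

! CEF is a prerequisite for MPLS forwarding
ip cef
mpls label protocol ldp
!
interface GigabitEthernet1/1
 ip address 10.0.12.1 255.255.255.252
 mpls ip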

MPLS LDP Targeted Session

The mpls ldp neighbor targeted command is implemented to improve the label convergence time for directly connected LSRs. When the links between the neighboring LSRs are up, both the link and targeted hellos maintain the LDP session. When the link between neighboring LSRs goes down, the targeted hellos maintain the session, allowing the LSRs to retain the labels learned from each other. When the failed link comes back up, the LSRs can immediately reinstall the labels for forwarding use without having to reestablish their LDP session and exchange labels again.

Cisco recommends the use of the mpls ldp neighbor targeted command to set up a targeted session between directly connected MPLS LSRs when MPLS label forwarding convergence time is an issue.

The following commands set up a targeted LDP session with neighbors using the default label protocol:

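A minimal sketch follows; the peer address 10.100.1.2 is hypothetical:

! Maintain a targeted LDP session to the directly connected peer
mpls ldp neighbor 10.100.1.2 targeted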

Limit LDP Label Allocation

Normally, LDP advertises labels for the IP prefixes present in the routing table to all LDP peers. Because the core network is a mix of IP and MPLS architectures, you must ensure that the advertisement of label bindings is limited to only a set of LDP peers so that not all traffic is label switched. All other traffic should be routed using IP.

The no mpls ldp advertise-labels command prevents the distribution of any locally assigned labels to all LDP neighbors.

In the following example, a standard access control list (ACL) used on N-PE routers limits the advertisement of LDP bindings to the N-PE loopback prefixes that match access list 76. For this reason, it is a best practice to use a separate IP block for the N-PE loopback interfaces used for L2 VPN services:

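A minimal sketch follows; the loopback block permitted by ACL 76 is hypothetical:

! Stop advertising labels for all prefixes...
no mpls ldp advertise-labels
! ...then advertise labels only for prefixes permitted by ACL 76
mpls ldp advertise-labels for 76
!
! N-PE loopback block used for L2 VPN services
access-list 76 permit 10.76.76.0 0.0.0.255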

MPLS LDP-IGP Synchronization

The LDP-IGP synchronization feature synchronizes the IP path with MPLS label distribution to prevent traffic from being black-holed. The algorithm used to build the MPLS forwarding table depends on the IP routing protocol in use. It is important to ensure that LDP is fully converged and established before the IGP path is used for switching traffic; with synchronization enabled, the IP path is not inserted in the routing table until LDP has converged.

The command syntax necessary to configure the LDP-IGP synchronization feature is as follows:

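A minimal sketch follows, assuming OSPF as the IGP; the process ID is hypothetical:

! Hold back the IGP path on a link until LDP has converged on that link
router ospf 1
 mpls ldp sync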

Note

For detailed information about MPLS LDP-IGP synchronization, refer to the Cisco document “MPLS LDP-IGP Synchronization,” available at http://tinyurl.com/d5cux5.

MPLS LDP TCP “Pak Priority”

MPLS LDP uses TCP to establish sessions before exchanging network information. During heavy network traffic, LDP session keepalive messages can be dropped from the outgoing interface output queue. As a result, keepalives can time out, causing LDP sessions to go down. Use the mpls ldp tcp pak-priority command to set high priority for LDP messages sent by a router using TCP connections.

Note

Configuring the mpls ldp tcp pak-priority command does not affect previously established LDP sessions.

MPLS LDP Session Protection

MPLS LDP session protection maintains LDP bindings when a link fails. MPLS LDP sessions are protected through the use of LDP hello messages. When you enable MPLS LDP, the LSRs send messages to locate other LSRs with which they can create LDP sessions.

If the LSR is one hop from its neighbor, it directly connects to that neighbor. The LSR sends LDP hello messages as User Datagram Protocol (UDP) packets to all the routers on the subnet. The hello message is called an LDP link hello. A neighboring LSR responds to the hello message, and the two routers begin to establish an LDP session.

If the LSR is more than one hop from its neighbor, it does not directly connect to the neighbor. The LSR sends out a directed hello message as a UDP packet, but as a unicast message specifically addressed to that LSR. The hello message is called an LDP targeted hello. The nondirectly connected LSR responds to the hello message, and the two routers establish an LDP session.

MPLS LDP session protection uses LDP targeted hellos to protect LDP sessions (for example, between two directly connected routers that have LDP enabled and can reach each other through alternate IP routes in the network). An LDP session between two directly connected routers is maintained by an LDP link hello adjacency. When MPLS LDP session protection is enabled, an LDP targeted hello adjacency is also established for the LDP session. If the link between the two routers fails, the LDP link adjacency also fails. However, if the LDP peer can still be reached through IP, the LDP session remains up, because the LDP targeted hello adjacency still exists between the routers. When the directly connected link recovers, the session does not need to be reestablished, and LDP bindings for prefixes do not need to be relearned. Thus, MPLS LDP session protection improves LDP convergence following an outage by using LDP targeted discovery to retain previously learned label bindings. To configure MPLS LDP session protection, enter the following commands:

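A minimal sketch follows; the optional ACL form restricting protection to specific peers is shown as a comment:

! Protect all LDP sessions with targeted hellos
mpls ldp session protection
!
! Optionally, protect only the peers permitted by a standard ACL:
! mpls ldp session protection for 76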

Note

For more information about MPLS LDP session protection, refer to the Cisco document “MPLS LDP Session Protection,” available at http://tinyurl.com/cfmvrm.

Summary

As explained in this chapter, VPLS is an important part of a solution for extending L2 connectivity across geographically dispersed data centers. The MST-based, EEM-based, and generic routing encapsulation (GRE)-based solutions that this book describes incorporate this technology.
