11
Quality of Service Mechanisms in the Switch/Routers

11.1 Introduction

The QoS mechanisms discussed in this chapter are based on the Cisco Catalyst 6500 Series of switch/routers. The Catalyst 6500 family of switch/routers supports a wide range of QoS features, which makes this discussion representative of the main QoS features found in the typical switch/router on the market. This case study allows the reader to better appreciate the kinds of QoS features the typical switch/router would support.

The ever-growing demands for higher network performance and bandwidth, and the greater need for smarter network applications and services, are some of the factors driving the design of today's switches, routers, switch/routers, and other network devices. Users continue to demand a wide range of performance requirements and services that include high QoS, network and service availability, and security.

These demands also drive the need to build scalable and more reliable enterprise and service provider networks. QoS mechanisms, in general, represent a collection of capabilities that provide ways to identify, prioritize, and service different classes of traffic in a network. This identification and classification allows the network to prioritize and service traffic such that end-user requirements are satisfied as best as possible.

The Catalyst 6000/6500 Series supports a number of QoS tools that are used to provide preferential treatment to traffic as it passes through the switch/router. Some Cisco Catalyst 6500 switch/routers perform QoS processing centrally on the Supervisor Engine, while those with distributed processing architectures perform the processing directly on the line cards. The Catalyst 6500 employs some hardware support to perform QoS processing.

The various Catalyst 6500 switch/router components that are involved in packet forwarding and QoS processing are the Multilayer Switch Feature Card (MSFC), Policy Feature Card (PFC), and the network port Application-Specific Integrated Circuits (ASICs) on the line cards. The PFC is primarily responsible for hardware-based Layer 2 and 3 forwarding of packets, as well as supporting a number of important QoS functions.

The PFC supports the necessary ASICs needed to perform hardware-based Layer 2 and 3 forwarding, QoS classification and priority queuing, and security access control list (ACL) filtering. The PFC, being mainly a forwarding engine, requires a route processor (i.e., the MSFC) to populate a Layer 3 route/flow cache or full forwarding table used by its Layer 3 forwarding engine ASIC.

If no route processor is installed (as is possible in some earlier Catalyst 6500 switch/router configurations using Supervisor Engine 1A), the PFC can perform only Layer 3/4 QoS classification and security ACL filtering but not Layer 3 packet forwarding. The reason for this limitation is that the MSFC is required to provide the route processor functions needed by the PFC to perform Layer 3 forwarding of packets. The MSFC provides the necessary forwarding information maintained in the PFC's route/flow cache or forwarding table (or forwarding information base (FIB)), without which the PFC cannot perform Layer 3 forwarding of packets.

In some Cisco Catalyst 6500 platforms, the line card is the main component that performs QoS processing. In these platforms, the QoS features supported are primarily implemented in the network port ASICs to allow for high-speed processing of arriving traffic. The level of QoS processing a line card supports depends on the functionality built into the line card port ASIC.

Over the development cycle of the Catalyst 6500, and with advancements in hardware and software technology, a number of QoS tools have been developed and are available for the Catalyst 6500. For this reason, the QoS processing capabilities differ between the different generations of the Cisco Catalyst 6500 Series. This chapter presents an overview of the main QoS features available on Cisco Catalyst 6000 and 6500 switch/routers (see Chapters 7 and 9).

11.2 QoS Forwarding Operations within a Typical Layer 2 Switch

This section describes the basic operations performed on an Ethernet frame as it passes through an Ethernet (Layer 2) switch. The main components involved in the forwarding operations are illustrated in Figure 11.1. Most of the simpler Layer 2 switches have only a Content Addressable Memory (CAM) that holds the Layer 2 address table required for Layer 2 forwarding. Some of the higher end Layer 2 switches, on the other hand, support, in addition, Ternary CAMs (TCAMs) that maintain entries used for QoS and security processing (not for Layer 3 forwarding). Layer 3 devices support TCAMs that maintain the Layer 3 routes and addresses that are learned from the routing protocols.


Figure 11.1 Main forwarding components within the Layer 2 switch.

The CAM and TCAM are among the most important components used in the hardware processing and forwarding of packets, and switches, switch/routers, and routers leverage them for wire-speed Layer 2 and 3 forwarding. Typically, these architectures support the ability to perform multiple searches or lookups in different parts of the CAM and TCAM simultaneously. This ability to perform multiple lookups in parallel allows the device to forward packets at higher speeds.

When a packet enters a port on a Layer 2 switch, it is stored in one of the port's ingress (input) queues either right after the Layer 2 forwarding table lookup (in some architectures) or before the lookup (in other architectures). The input queues can be configured with QoS priorities, with each queue given a different service and packet discard priority profile.

A higher queue service priority ensures that time-sensitive traffic is not held behind nontime-sensitive traffic during network congestion periods. Scheduling algorithms such as strict priority, weighted fair queuing (WFQ), weighted round-robin (WRR), and deficit round-robin (DRR) can be used to service the ingress priority queues.
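The weighted round-robin servicing mentioned above can be sketched in a few lines. This is a rough illustration in Python with made-up queue contents and weights, not the scheduler implemented in any actual line card:

```python
from collections import deque

def wrr_schedule(queues, weights, rounds):
    """Serve queues in weighted round-robin order.

    queues  -- list of deques of packet IDs, highest priority first
    weights -- packets each queue may send per scheduling round
    rounds  -- number of scheduling rounds to run
    Returns the transmission order of packet IDs.
    """
    sent = []
    for _ in range(rounds):
        for q, w in zip(queues, weights):
            for _ in range(w):            # each queue gets 'w' slots per round
                if q:
                    sent.append(q.popleft())
    return sent

q_hi = deque(["v1", "v2", "v3"])          # e.g., time-sensitive voice traffic
q_lo = deque(["d1", "d2", "d3"])          # e.g., bulk data traffic
order = wrr_schedule([q_hi, q_lo], weights=[2, 1], rounds=3)
# The high-priority queue drains twice as fast, but the low-priority
# queue is never starved outright (unlike strict priority scheduling).
assert order == ["v1", "v2", "d1", "v3", "d2", "d3"]
```

Strict priority would instead always serve the high-priority queue first while it is nonempty, which is why it is usually reserved for a single, policed, time-critical class.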

The process of receiving, classifying, priority queuing, and servicing a packet at the input port also requires the switch to perform a number of other important tasks. These tasks include performing a forwarding table lookup to find the egress switch port(s) to which the packet is to be forwarded, and applying security policies to the packet. These tasks can be executed in a pipeline or in parallel by the forwarding engine using the modules shown in Figure 11.1. These modules are described in Table 11.1.

Table 11.1 Forwarding and QoS Modules in the Layer 2 Switch

Module Description
Layer 2 forwarding table The Layer 2 forwarding (address) table is typically implemented in a CAM. The packet's destination MAC address is read by the forwarding engine and used as a key (an index) into the CAM holding the MAC address table. If the destination MAC address is found, its corresponding egress port and, if applicable, VLAN ID are read from the CAM. If the MAC address is not found, then the packet is marked for broadcast through all other switch ports (a process called flooding). This involves forwarding the packet out every switch port in the VLAN the packet belongs to
QoS ACLs The switch can be configured with ACLs to classify incoming packets according to certain parameters in the packet. This allows a network manager to priority-queue, police, or shape the rate of traffic flows, and to write or remark QoS markings in outbound packets. The QoS ACLs are typically implemented in a TCAM to allow these QoS decisions to be made in a single table lookup
Security ACLs To implement security policies, the switch can be configured with security ACLs to identify arriving packets according to their MAC addresses, IP addresses, protocol types, and Layer 4 port numbers. The ACLs are typically implemented in a TCAM such that a security filtering decision on whether to forward or filter a packet can be made in a single table lookup
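The Layer 2 lookup-and-flood behavior described in the table can be modeled as follows. This is a simplified Python sketch with hypothetical MAC addresses, VLANs, and port numbers, not an actual switch implementation:

```python
def l2_forward(mac_table, vlan_ports, frame):
    """Return the set of egress ports for a frame on a Layer 2 switch.

    mac_table  -- {(vlan, dst_mac): egress_port}, modeling the CAM contents
    vlan_ports -- {vlan: set of member ports}
    frame      -- dict with 'vlan', 'dst_mac', 'in_port'
    """
    key = (frame["vlan"], frame["dst_mac"])
    if key in mac_table:                       # CAM hit: unicast forward
        return {mac_table[key]}
    # CAM miss: flood out every port in the VLAN except the arrival port
    return vlan_ports[frame["vlan"]] - {frame["in_port"]}

cam = {(10, "aa:bb:cc:00:00:01"): 3}
vlans = {10: {1, 2, 3, 4}}
hit = l2_forward(cam, vlans,
                 {"vlan": 10, "dst_mac": "aa:bb:cc:00:00:01", "in_port": 1})
miss = l2_forward(cam, vlans,
                  {"vlan": 10, "dst_mac": "aa:bb:cc:00:00:99", "in_port": 1})
assert hit == {3}                              # known address: one egress port
assert miss == {2, 3, 4}                       # unknown address: flooded
```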

As illustrated in Figure 11.1, after the forwarding table lookups in the CAM and TCAM have been performed, the packet is transferred into the appropriate priority queue at the outbound switch port. The egress priority queue is determined by either the QoS bit settings carried in the packet or an internal routing tag prepended to the packet. Similar to the ingress priority queues, the egress priority queues can also be serviced according to priority, which can depend on the time criticality of the queued traffic, using scheduling algorithms such as strict priority, WFQ, WRR, and DRR.

11.3 QoS Forwarding Operations within a Typical Multilayer Switch

As discussed in Chapter 3, the route/flow cache-based switch/router architectures employ a route processor and a forwarding engine where the route processor determines the destination (next hop node and outbound port) of the first packet in a traffic flow (typically, via software-based Layer 3 forwarding table lookup). The forwarding engine receives the resulting destination information of the first packet from the route processor and sets up a corresponding entry in its route/flow cache.

The forwarding engine then forwards subsequent packets in the same traffic flow based on the newly created flow entry in its cache. Even in architectures that are not route/flow cache based, the same flow caching technique can still be used to generate traffic flow information and statistics.

The topology-based switch/router architectures employ a forwarding engine (sometimes implemented as specialized forwarding hardware) and a full Layer 3 forwarding table for packet forwarding. A route processor runs Layer 3 routing protocols to construct a routing table that reflects the true view or topology of the entire network. The contents of the routing table are distilled to generate the Layer 3 forwarding table used by the forwarding engine.

The topology-based architecture allows for efficient forwarding table lookup, typically in hardware, and can support high packet forwarding rates. For a given destination IP address, the longest matching prefix in the forwarding table provides a corresponding next hop IP address and the switch/router port out of which the packet should be forwarded to get to its destination.
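The longest-prefix-match rule can be illustrated with a minimal sketch. This linear-scan version (in Python, with made-up prefixes and next hops) shows only the selection rule; real forwarding engines perform the equivalent lookup in TCAM hardware in a single operation:

```python
import ipaddress

def longest_prefix_match(fib, dst_ip):
    """Return the FIB entry whose prefix contains dst_ip with the
    longest prefix length.

    fib -- {prefix_string: (next_hop_ip, egress_port)}
    """
    addr = ipaddress.ip_address(dst_ip)
    best, best_len = None, -1
    for prefix, entry in fib.items():
        net = ipaddress.ip_network(prefix)
        if addr in net and net.prefixlen > best_len:
            best, best_len = entry, net.prefixlen
    return best

fib = {
    "0.0.0.0/0":   ("192.0.2.1", "ge-0/0"),   # default route
    "10.0.0.0/8":  ("192.0.2.2", "ge-0/1"),
    "10.1.0.0/16": ("192.0.2.3", "ge-0/2"),
}
# 10.1.5.9 matches both 10.0.0.0/8 and 10.1.0.0/16; the /16 wins
assert longest_prefix_match(fib, "10.1.5.9") == ("192.0.2.3", "ge-0/2")
```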

As the network topology changes over time, the routing table and forwarding table are updated and packet forwarding continues without noticeable performance penalty. Typically, a route updating process running on the route processor downloads the current routing table information to the forwarding table used by the forwarding engine.

To support high-speed forwarding rates, high-end, high-performance switch/routers forward packets using specialized forwarding engine ASICs. As illustrated in Figure 11.2, specific Layer 2 and Layer 3 components, including forwarding tables and ACLs, are maintained in hardware. Layer 2, Layer 3, ACL, and QoS policy tables are stored in high-speed memory so that forwarding decisions and filtering can be done at wire speed.


Figure 11.2 Main forwarding components within the multilayer switch.

The switch/router performs lookups in these tables to determine the forwarding instructions for a packet. Along with specifying which port to forward a packet out of, the instructions indicate whether a packet with a specific destination IP address (and other parameters) is supposed to be dropped according to an ACL filtering rule.

As illustrated in Figures 11.1 and 11.2, switches maintain these tables using specialized memory architectures, typically, CAMs and TCAMs. The Layer 2 table is typically maintained in a CAM to allow the Layer 2 forwarding engine to make Layer 2 forwarding decisions at high speeds. TCAMs are mostly used for maintaining Layer 3 forwarding tables and allow for lookups based on longest prefix matching. IP routing and forwarding tables are normally organized by IP address prefixes.

The TCAM can also be used to store QoS, ACL, and other information generally associated with upper-layer protocol information processing in a packet. Most switches, switch/routers, and routers support multiple TCAMs to allow for both inbound and outbound QoS, as well as security ACLs, to be processed in parallel when a Layer 2 or Layer 3 forwarding decision is being made.

The term VMR (Value, Mask, and Result) is often used to describe the format of entries maintained in a TCAM. The “Value” in VMR refers to the pattern stored in the TCAM that can have some of its bits masked (i.e., concealed) by the “Mask.” The masked “Value” is to be matched by fields extracted from a packet header fed into the TCAM. Examples of extracted packet fields include IP addresses, protocol ports, DSCP values, and so on.

The “Mask” refers to the bits used to conceal the associated “Value” pattern in the TCAM. The masked “Value” provides/determines the prefix that is to be matched by the extracted packet fields fed into the TCAM. The “Result” refers to the result or action pointed to when a search or lookup in the TCAM hits the masked “Value” pattern.

This “Result” returned could be a “permit” or “deny” action stored in the TCAM for QoS or security ACLs filtering. The “Result” values could also be for priority queuing in a QoS policy in the case of a TCAM used for traffic classification and priority queuing. The “Result” could also be a pointer to an entry in a Layer 2 adjacency table that contains the next hop port and MAC address rewrite information in the case of a TCAM used for IP packet forwarding.
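The VMR matching rule can be modeled in a few lines. The sketch below treats the extracted header fields as a single integer and compares only the bits the mask leaves visible; the entries and bit widths are purely illustrative:

```python
def tcam_lookup(entries, key):
    """Return the Result of the first VMR entry whose masked Value
    matches the key.

    entries -- ordered list of (value, mask, result) tuples; bits cleared
               in the mask are "don't care" bits, as in a hardware TCAM
    key     -- integer formed from the extracted packet header fields
    """
    for value, mask, result in entries:
        if key & mask == value & mask:     # compare only the unmasked bits
            return result
    return None                            # no entry hit

# Illustrative 8-bit "header": match any key whose top 4 bits are 0b1010
vmr = [
    (0b10100000, 0b11110000, "permit"),    # 0xA0/0xF0 -> permit
    (0b00000000, 0b00000000, "deny"),      # catch-all -> deny
]
assert tcam_lookup(vmr, 0b10101111) == "permit"   # low 4 bits ignored
assert tcam_lookup(vmr, 0b01100000) == "deny"     # falls through to catch-all
```

A hardware TCAM evaluates all entries in parallel and returns the highest-priority hit; the ordered list above models the same priority behavior sequentially.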

Figure 11.2 shows the main Layer 2 and 3 forwarding and QoS modules in a typical switch/router. Similar to a Layer 2 switch, packets arriving on a switch/router port are stored in the appropriate ingress priority queue right after the forwarding table lookup (or just before the lookup, depending on the forwarding architecture used). In the latter case, each packet is placed in an ingress queue and its header is examined to retrieve both the Layer 2 and Layer 3 destination addresses to be used for the forwarding table lookup.

The switch/router has to determine to which switch/router port(s) to forward the packet after the Layer 2 or 3 forwarding table lookup. The forwarding engine also has to determine the QoS and security handling instructions for the packet by performing lookups in the corresponding ACLs maintained in the TCAM (Figure 11.2). The modules involved in the forwarding of a packet are described in Table 11.2. The forwarding operations are performed in parallel in hardware in some switch and router architectures.

Table 11.2 Forwarding and QoS Modules in the Multilayer Switch

Module Description
Layer 2 forwarding table Typically, the Layer 2 forwarding table is maintained in a CAM. The destination MAC address in an arriving packet is extracted and used as an index to the CAM. If the packet requires Layer 3 forwarding, its destination MAC address is that of the receiving interface on the switch/router. In this case, the CAM lookup results are used only to decide that the packet should be Layer 3 forwarded. Packets addressed to the MAC address of any of the switch/router local interfaces should be Layer 3 forwarded
Layer 3 forwarding table The Layer 3 forwarding table is typically maintained in a TCAM. The forwarding engine examines the Layer 3 forwarding table using the destination IP address of the packet as an index. The longest matching prefix in the table points to the next hop Layer 3 address and outbound port (plus its MAC address) for the packet. The Layer 3 forwarding table also contains the Layer 2 (MAC) address of the receiving interface of the next hop node. Also contained along with the egress port is any VLAN ID for the outgoing packet
QoS ACLs QoS ACLs are also maintained in TCAMs. In many implementations, information required for packet classification, policing, and marking can all be performed simultaneously in a single lookup in the QoS TCAM
Security ACLs Inbound and outbound ACLs can also be maintained in a TCAM so that lookups to determine whether to forward or filter a packet can be performed in the TCAM in a single lookup

After the Layer 2 or 3 forwarding table lookup, the packet is transferred to the appropriate egress priority queue at the destination switch/router port. The next hop destination information obtained from the Layer 3 forwarding table also comes with a corresponding receiving interface MAC address information. The Layer 3 address (i.e., the IP address in the packet) used to retrieve the next hop and its Layer 2 address may also point to other information regarding tagging, writing, or rewriting certain QoS markings to the departing packet.

The original Layer 2 destination address in the packet (which is the address of the receiving interface of the switch/router) is replaced with the next hop's Layer 2 address. The forwarded packet's Layer 2 source address is changed to that of the switch/router's outbound port before it is transmitted onto the next hop. The Time-To-Live (TTL) value in the Layer 3 packet must be decremented by one.

The Layer 3 packet header checksum must be recalculated because the contents of the Layer 3 packet (the TTL value) have changed. In addition, the Layer 2 checksum must be recalculated because both Layer 2 and 3 addresses have changed. Essentially, the entire Ethernet frame header and trailer must be rewritten before the frame is placed into the egress priority queue. All the above Layer 2 and 3 forwarding operations in addition to the QoS and security ACL operations can be accomplished efficiently in hardware [AWEYA2001, AWEYA2000].
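The TTL decrement and header checksum recalculation can be sketched as follows. This is a minimal Python model of the IPv4 portion of the rewrite; the header contents are illustrative, and the Layer 2 address rewrite and Ethernet FCS recomputation are omitted:

```python
import struct

def ipv4_header_checksum(header):
    """Ones'-complement sum over the 16-bit words of an IPv4 header
    whose checksum field has already been zeroed."""
    total = sum(struct.unpack("!%dH" % (len(header) // 2), header))
    while total >> 16:                        # fold carries back into 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return (~total) & 0xFFFF

def rewrite_ttl_and_checksum(header):
    """Return a copy of a 20-byte IPv4 header with the TTL decremented by
    one and the checksum recomputed (byte 8 is TTL, bytes 10-11 checksum)."""
    hdr = bytearray(header)
    hdr[8] -= 1                               # TTL decremented by one
    hdr[10:12] = b"\x00\x00"                  # zero checksum before computing
    hdr[10:12] = struct.pack("!H", ipv4_header_checksum(bytes(hdr)))
    return bytes(hdr)

# A minimal illustrative header: version/IHL 0x45, TTL 64, protocol 6 (TCP)
hdr = bytearray(20)
hdr[0], hdr[8], hdr[9] = 0x45, 64, 6
new_hdr = rewrite_ttl_and_checksum(bytes(hdr))
assert new_hdr[8] == 63                       # TTL went from 64 to 63
```

A receiver that sums all 16-bit words of the rewritten header (checksum field included) and folds the carries obtains 0xFFFF, which is how the checksum is verified.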

11.4 QoS Features in the Catalyst 6500

The QoS features on the Cisco Catalyst 6500 are described below and also illustrated in Figure 11.2. Reference [CISCQOS05] provides a summary of the QoS features and describes where each feature is implemented in the switch/router, that is, whether the feature is applied on the ingress or egress side of the device. This reference also provides a summary of the QoS capabilities of each of the line cards in the Cisco Catalyst 6500 Family.

Over the years, a number of different PFC versions have been developed for the Cisco Catalyst 6500. Reference [CISCQOS05] provides a high-level overview of the major QoS capabilities of each PFC version (PFC1, PFC2, PFC3A, PFC3B, and PFC3BXL). PFC4 and related models are not discussed in Ref. [CISCQOS05]. The description of PFC4 and its related models is given in Refs [CISC2TQOS11, CISC2TQOS17].

11.4.1 Packet Classification

Classification is the process or action by which network devices identify specific network traffic or packets so that they can be given a level of service. There are a number of network services and applications where packet classification plays a major role, such as differentiated qualities of service, policy-based routing, network firewall functions, traffic billing, and so on.

Packet classification allows a network device to determine which flow an arriving packet belongs to so that it can determine whether to forward or filter the packet, which port or interface on the device to forward the packet to, the class of service (CoS) the packet should be given, how much the packet and its flow should be billed, and so on.

The classification function in network devices is performed by an entity normally referred to as a packet or flow classifier. The classifier stores a number of rules that describe how the classification of packets should take place and also requires that each flow of packets being processed satisfy, at a minimum, one of the stored rules.

The rules maintained by the classifier determine which flow a packet belongs to based on the classifier examining contents of the packet header plus other packet fields. Each rule essentially specifies a class (category, group, or rank) that a packet belongs to based on some criterion derived from the contents of the packet. Also associated with each classification rule is a specified action which is to be carried out when the rule is satisfied.

Classification in the switch/router may be performed based on a number of predefined or selected fields in the arriving packet. For example, a flow could be defined by looking at particular combinations of a packet's source IP address, destination IP address, transport protocol, and transport protocol port numbers. Furthermore, a flow could be simply defined by a specific destination IP address prefix and a range of transport protocol port values, or simply by looking at the port of arrival.
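A flow classifier of this kind can be sketched as an ordered rule list. The rules, field names, and class names below are hypothetical; they only illustrate matching on a 5-tuple with exact values, address prefixes, and port ranges:

```python
import ipaddress

def classify(rules, pkt):
    """Return the class of the first rule whose criteria all match.

    rules -- ordered list of (criteria, class_name); criteria maps a field
             name to an exact value, an IP prefix, or a (low, high) port range
    pkt   -- dict with 'src_ip', 'dst_ip', 'proto', 'src_port', 'dst_port'
    """
    def matches(field, want):
        have = pkt[field]
        if isinstance(want, tuple):                      # port range
            return want[0] <= have <= want[1]
        if field.endswith("_ip"):                        # prefix match
            return ipaddress.ip_address(have) in ipaddress.ip_network(want)
        return have == want                              # exact match

    for criteria, cls in rules:
        if all(matches(f, w) for f, w in criteria.items()):
            return cls
    return "best-effort"                                 # default class

rules = [
    ({"proto": "udp", "dst_port": (16384, 32767)}, "voice"),
    ({"dst_ip": "10.0.0.0/8"}, "internal"),
]
pkt = {"src_ip": "192.0.2.7", "dst_ip": "198.51.100.1",
       "proto": "udp", "src_port": 5004, "dst_port": 20000}
assert classify(rules, pkt) == "voice"
```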

Some examples of classification tools provided by the Cisco Catalyst 6500 are ACLs and per port trust setting on a device [CISCQOS05]. The process of classification in the Catalyst 6500 involves inspecting different fields in the Ethernet (Layer 2) header, along with fields in the IP header (Layer 3), and the Transmission Control Protocol/User Datagram Protocol (TCP/UDP) header (Layer 4) to determine the level of service that will be given to the packet as it transits the switch/router.

11.4.2 Queuing

Queuing in a network device provides a temporary relief mechanism during short-term congestion by allowing the device to temporarily hold data in memory when the arrival rate of data at a processing point in the device is greater than the departure rate. Network devices use queues to temporarily hold packets until the packets can be processed (forwarding table lookup, classification, packet rewrite, etc.) and forwarded.

Each queue is allocated some buffer memory, which provides the holding space for the data waiting to be processed. In the Catalyst 6500, the number of queues and the amount of buffering allocated to each queue are dependent on the hardware platform used as well as the line card type in use [CISCBQT07].

The Catalyst 6500 Series Ethernet line card modules implement some form of receive (at ingress) and transmit (egress) buffering per port. These buffers are used to store arriving packets as (Layer 2 and 3) forwarding decisions are made within the switch/router, or as packets are being temporarily held for transmission on a port. The buffering becomes more important when the aggregate arrival rate at an output port is greater than the transmission rate the port supports.

In the Catalyst 6500 architecture, due to the higher switch fabric speed relative to the input ports, the switch fabric itself almost never becomes the bottleneck to data flow to the output ports. Congestion can occur on the transmit (egress) side when more than one port sends data to a particular destination port. A majority of the packets entering the switch/router will not experience congestion under light to moderate traffic load conditions. For these reasons, the receive port buffers on the switch/router are configured to be relatively small compared to the transmit port buffers.

When the QoS features are not enabled on the switch/router, all packets have equal access to the port buffers, regardless of the traffic type or class they belong to. Furthermore, in the event of congestion (that is, when a port buffer overflows), all packets regardless of traffic type are equally subject to discard at that port buffer. Packets in the (single queue) buffer are transmitted in the order in which they arrive, and if the buffer is full, all subsequent arriving packets are discarded. This queuing discipline is known as First In, First Out (FIFO) queuing with tail-drop. FIFO queuing is normally used for traffic that requires best-effort service (i.e., service with no guarantees whatsoever).

When the QoS features are enabled on the switch/router, the port buffers are partitioned into a number of priority queues. Each queue is configured with one or more packet discard thresholds. The combination of packet classification, multiple priority queues (within a buffer), and packet discard thresholds associated with each queue allow the switch/router to make intelligent traffic management decisions when faced with congestion. Traffic sensitive to delay and delay variation, such as streaming voice and video packets, can be placed in a higher priority queue for transmission, while other less time-sensitive or less important traffic can be buffered in lower priority queues and are subject to discard when congestion occurs.

11.4.3 Congestion Avoidance

The primary goal of congestion avoidance is managing queue occupancy to avoid overflows and data loss. As a queue accepts data and starts to fill up during short-term congestion, a congestion avoidance mechanism can be used to ensure that the queue does not fill up completely. When the queue is full, subsequent packets arriving to it are simply discarded, irrespective of the priority class or settings/markings in the packets. Excessive data drops and delays caused by the congestion can affect the performance of some end-user applications.

To address these concerns, congestion avoidance mechanisms are normally used to minimize the potential queue overflows and data loss. Typically, a network device will employ queue occupancy limits or thresholds (lower than the maximum queue size) to signal when certain occupancy levels are crossed. When a queue threshold is crossed, the device can randomly discard lower priority packets while trying as much as possible to accept higher priority packets into the queue. The network device can use congestion avoidance mechanisms such as Random Early Detection (RED) and Weighted Random Early Detection (WRED), which allow for more intelligent packet discard and also perform better than the classical tail-drop mechanism.

When QoS is enabled on the Catalyst 6500 Series Ethernet line card modules, the multiple priority queues and drop thresholds on the switch/router ports are also enabled. Different priority queues and thresholds can be configured depending on the model of the line card. Each queue can also be configured with one or more packet discard thresholds. The two packet drop threshold types that can be configured are as follows.

11.4.3.1 Tail-Drop Thresholds

On switch/router ports that are configured with tail-drop thresholds, packets of a given CoS value are accepted into the queue until the drop threshold associated with that CoS value is crossed. When this happens, subsequent packets of that CoS value are discarded until the queue occupancy drops below the threshold.

Let us assume, for example, that traffic with CoS = 1 is assigned to Queue 1, which has a queue threshold equal to 60% of the maximum queue size of Queue 1. Then, packets with CoS = 1 will not be dropped until the queue length of Queue 1 is 60% full. When the 60% threshold is exceeded, all subsequent packets with CoS = 1 are dropped until the queue drops below the 60% limit.
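The tail-drop example above can be expressed directly in code (an illustrative Python sketch of the admission decision, not a Catalyst configuration):

```python
def tail_drop_admit(queue_len, max_len, thresholds, cos):
    """Admit a packet only while queue occupancy is below the tail-drop
    threshold configured for its CoS value.

    thresholds -- {cos: fraction of max_len at which that CoS is dropped}
    """
    return queue_len < thresholds[cos] * max_len

# CoS 1 dropped past 60% occupancy; CoS 5 allowed up to the full queue
thresholds = {1: 0.6, 5: 1.0}
assert tail_drop_admit(50, 100, thresholds, cos=1)        # 50% < 60%: admit
assert not tail_drop_admit(60, 100, thresholds, cos=1)    # threshold crossed
assert tail_drop_admit(60, 100, thresholds, cos=5)        # CoS 5 still admitted
```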

11.4.3.2 WRED Thresholds

On switch/router ports configured with WRED thresholds, packets of a given CoS value are accepted into the queue randomly to avoid congestion and queue overflow. The probability of a packet with a given CoS value being discarded or accepted into the queue depends on the thresholds and the weight assigned to traffic with that CoS value.

Let us assume, for example, that traffic with CoS = 2 is assigned to Queue 1 with two queue thresholds 40% (low) and 80% (high). Then packets with CoS = 2 will not be dropped at all unless the occupancy of Queue 1 exceeds 40% full. When the queue occupancy exceeds 40% and approaches 80%, packets with CoS = 2 can be dropped with an increasing probability rather than being accepted outright into the queue. When the queue occupancy exceeds 80%, all packets with CoS = 2 are dropped until the queue occupancy falls below 80% once again.

The switch/router drops packets at random when the queue size is between the 40% (low) and 80% (high) thresholds. The packets discarded are selected randomly and not on a per flow basis. WRED is more suitable for traffic generated by rate or window adaptive protocols, such as TCP. A TCP source is capable of adjusting its transmission rate to cope with random packet losses in the network by pausing transmission (backing off) and adjusting its transmission window size to avoid further losses on the transmission path.
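The drop behavior implied by the example can be sketched as a linear ramp between the two thresholds. This is illustrative only; actual WRED implementations typically apply the ramp to a time-averaged queue occupancy and use a configurable maximum drop probability:

```python
def wred_drop_probability(occupancy, low, high, max_p=1.0):
    """Drop probability for a CoS with WRED thresholds 'low' and 'high',
    all expressed as percentages of the queue size."""
    if occupancy < low:
        return 0.0                          # below low threshold: never drop
    if occupancy >= high:
        return 1.0                          # at/above high threshold: drop all
    # between the thresholds: probability rises linearly toward max_p
    return max_p * (occupancy - low) / (high - low)

# CoS 2 with 40%/80% thresholds, as in the example above
assert wred_drop_probability(30, 40, 80) == 0.0   # keep everything
assert wred_drop_probability(60, 40, 80) == 0.5   # halfway: drop half
assert wred_drop_probability(90, 40, 80) == 1.0   # past high: tail-drop
```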

Reference [CISCBQT07] describes the structure of the priority queues and thresholds that can be configured on a port in the Catalyst 6500. It describes the number of strict priority queues (when configured) and the number of standard queues along with their corresponding tail-drop or WRED thresholds. The different priority queues and threshold settings on the Catalyst 6500 Ethernet line card modules are also described in Ref. [CISCBQT07].

11.4.3.3 Queue Configuration Information

Reference [CISCBQT07] describes for each of the Catalyst 6500 Series Ethernet line card modules the following queue configuration information:

  • Total buffer size per port (total buffer size)
  • Overall receive buffer size per port (Rx buffer size)
  • Overall transmit buffer size per port (Tx buffer size)
  • Port receive queue and drop threshold structure
  • Port transmit queue and drop threshold structure
  • Default size of receive buffers per queue with QoS enabled (Rx queue sizes)
  • Default size of transmit buffers per queue with QoS enabled (Tx queue sizes)

11.4.4 Traffic Policing

Policing is the process of monitoring and enforcing the flow rate of packets at a particular point in a network. The policing mechanism (i.e., the policer) monitors the traffic and determines whether its rate has exceeded a predefined limit. The rate is measured over a certain time interval, which is typically a configurable parameter of the policer. A packet arriving at the policer is deemed out-of-profile when it arrives while the flow rate exceeds the predefined rate limit. When this happens, the arriving packet is either dropped or its CoS settings are marked down (to a lower priority class).

Traffic policing in the switch/router provides an effective way to limit the bandwidth consumption of traffic passing through a given port or group of ports (as in the case of ports that belong to a single VLAN). Basic policing can be implemented at a port or queue by defining the average rate and burst limit of data that the port or queue is allowed to accept. A policer and a policing policy can be configured that use an ACL to identify and screen out the traffic that should be presented to the policer.
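Such average-rate-plus-burst policing is commonly realized as a token bucket. The sketch below is an illustrative single-rate policer with made-up rates and packet sizes, not the Catalyst 6500's actual policer:

```python
class Policer:
    """Single-rate token-bucket policer: 'rate' tokens (bytes) per second
    refill a bucket of depth 'burst'; a packet is in-profile only if the
    bucket holds enough tokens for its size."""

    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.tokens, self.last = burst, 0.0    # bucket starts full

    def conforms(self, size, now):
        # Refill tokens for the time elapsed, capped at the burst depth
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if size <= self.tokens:
            self.tokens -= size
            return True                        # in-profile: forward as-is
        return False                           # out-of-profile: drop or mark down

p = Policer(rate=1000, burst=1500)             # 1000 B/s average, 1500 B burst
assert p.conforms(1500, now=0.0)               # initial burst allowed
assert not p.conforms(1500, now=0.5)           # only 500 tokens accrued
assert p.conforms(1500, now=2.0)               # 1.5 s later the bucket refilled
```

Dual-rate policers extend this idea with a second bucket so that traffic can be classified as conforming, exceeding, or violating, each with its own action.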

Multiple policers and policing policies can be configured to work in a switch/router simultaneously, allowing a network manager to set different traffic management policies for different traffic classes passing through the device. A network manager can also configure a policing policy to limit the rate of all traffic entering a particular switch/router port, or the overall rate into a given VLAN by limiting the rates of the individual flows within it.

11.4.5 Rewrite

As packets enter or exit a particular network with a given set of CoS offerings, border or edge network devices (switches, routers, or switch/routers) might be called upon to alter the original CoS settings of the packets. The network device may be configured with CoS rewrite rules that allow it to rewrite the CoS bit settings within the packet.

Each rewrite occasion requires the network device to read the forwarding (priority) class and loss priority setting within the packet, use this information to lookup the corresponding CoS value from a CoS table containing the rewrite rules, and then write this CoS value over the old value in the packet.

For an arriving packet at a port, an ingress classifier reads the CoS bit setting in the packet and maps that to a forwarding class and packet loss priority combination in a table maintained by the classifier. The device can also be configured to have rewrite rules that allow it to alter CoS bit settings in outgoing packets at the outbound interfaces of the device in order to meet the traffic management objectives of the receiving network. This allows the network devices in the receiving network to receive and correctly classify each packet into the appropriate priority queue with correct bandwidth and packet loss profile.
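The rewrite-table lookup described above can be sketched as follows. The forwarding class names, loss priorities, and CoS values are purely illustrative:

```python
def rewrite_cos(rewrite_table, fwd_class, loss_priority, packet):
    """Look up the egress CoS value for a (forwarding class, loss priority)
    pair and write it over the packet's old marking."""
    packet["cos"] = rewrite_table[(fwd_class, loss_priority)]
    return packet

# Hypothetical rewrite rules mapping internal classes to egress CoS values
rewrite_table = {
    ("expedited", "low"): 5,          # e.g., voice traffic marked CoS 5
    ("best-effort", "high"): 0,       # bulk traffic marked down to CoS 0
}
pkt = {"cos": 3}                      # original ingress marking
assert rewrite_cos(rewrite_table, "expedited", "low", pkt)["cos"] == 5
```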

In addition, an edge device may be required to rewrite a given CoS setting (IP Precedence, Differentiated Services Code Point (DSCP), IEEE 802.1Q/p, or Multiprotocol Label Switching Experimental (MPLS EXP) bit settings) at its ingress or egress interfaces to facilitate classification, priority queuing, and application of different drop profiles at core or backbone network devices. The Cisco Catalyst 6500 supports CoS rewrite mechanisms that can be used to change the priority value of a packet (increase or decrease it) based on CoS policies that may be configured by the network administrator.

11.4.6 Scheduling

As already discussed, a classifier reads markings or fields in arriving packets and interprets them to be able to assign the packets to different priority queues or process them according to predefined rules. Scheduling is a mechanism used to remove data from the queues and transfer it to other receiving entities. The scheduling mechanism also determines the transmission order of the queued packets based on the available shared output resources, such that some performance objectives are achieved. Scheduling may be done to satisfy the QoS requirements of each individual packet or of a flow of packets.

The priority queues represent storage locations where the classified packets are temporarily held while waiting to be scheduled and forwarded to the next processing entity. The packet scheduling mechanism often defines precisely the decision process and algorithm used to select which packets should be serviced next or dropped (based on the available system resources and priority markings in the packet). The packet forwarding process may also involve buffer management, which refers to any particular mechanism used to manage or regulate the occupancy of the available memory resources used for storing arriving packets.

When an input or output port is heavily loaded or congested, the FIFO queuing approach may no longer be an efficient way to satisfy the QoS requirements of each flow or user. When supporting different traffic classes, the FIFO approach does not make efficient use of the available system resources to satisfy the different requirements of the classes. Under congestion conditions, multiple packets with different QoS requirements (e.g., bandwidth, delay, delay variation, loss) compete for the finite common FIFO transmission resource.

For these reasons, network devices require classification, priority queuing, and appropriate packet scheduling mechanisms to properly handle the QoS requirements of the different traffic processed, and also the order of packet transmission while accounting for the different QoS requirements of individual packets, flows, or users. In a networking device, scheduling can be done at the ingress and/or egress side of the device.

11.4.6.1 Input Queue Scheduling

When a packet enters an ingress port, it can be assigned to one of a number of port-based priority queues (based on, for example, the CoS marking in the packet) prior to the forwarding table lookup. After the forwarding decision is made, the packet is scheduled and forwarded to the appropriate egress port. Typically, multiple priority queues are used at the ingress port to accommodate the different traffic types that require different treatment.

Time-sensitive traffic requires service levels where latencies in the device (and the network, in general) must be kept to a minimum. For instance, video and voice traffic require low latency, requiring the switch to give priority to this traffic over traffic from protocols such as File Transfer Protocol (FTP), web, email, Telnet, and so on.
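The CoS-based ingress queue assignment described above can be sketched as a simple lookup table. The mapping below is an illustrative assumption (following the general pattern of higher CoS values mapping to higher-priority queues, with CoS 5, typically VoIP, directed to a strict-priority queue); it is not the Catalyst 6500's actual default map.

```python
# Hypothetical ingress CoS-to-queue map: higher CoS -> higher-priority queue,
# CoS 5 (typically VoIP) -> strict-priority queue.
COS_TO_QUEUE = {0: 1, 1: 1, 2: 2, 3: 2, 4: 3, 5: "strict", 6: 3, 7: 3}

def classify(cos):
    """Return the ingress queue for a packet's CoS marking; unknown or
    invalid markings fall back to the lowest-priority queue."""
    return COS_TO_QUEUE.get(cos, 1)
```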

11.4.6.2 Output Queue Scheduling

After the forwarding table lookup and possibly the rewrite processes (see Figure 11.3), the switch/router places the processed packet in an appropriate outbound (egress) priority queue prior to transmission to the outside network. The switch/router may perform congestion management on the output queues to ensure that the queues do not overflow. This is typically accomplished using a mechanism such as RED or WRED, which drops packets randomly (i.e., probabilistically) to ensure that the queues do not overflow.


Figure 11.3 Cisco Catalyst 6500 QoS processing model.

WRED is a derivative of RED and is used in the Catalyst 6500 family and many other network devices. In WRED, packets are dropped randomly while recognizing the CoS values in the packets. The CoS settings in the packets are inspected to determine which packets will be dropped. When the queue occupancy reaches predefined thresholds, lower priority packets are dropped first to make room in the queue for higher priority packets.

The Cisco Catalyst 6500 supports a number of scheduling algorithms such as strict priority, shaped round-robin (SRR), WRR, and deficit weighted round-robin (DWRR) [CISCQOS05]. The switch/router supports ingress and egress port scheduling based on the CoS setting associated with arriving packets. In the default configuration of QoS in the switch/router, packets with higher CoS values are mapped to higher queue numbers. For example, traffic with CoS 5, which is typically associated with VoIP traffic, is mapped to the strict priority queue, if configured.
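Of the round-robin variants listed above, DWRR can be sketched as follows (a simplified software model, not the switch/router's hardware scheduler): each queue is granted a byte quantum per service round, and a queue may transmit only while its accumulated deficit counter covers the length of the packet at its head, so bandwidth is shared in proportion to the configured quanta even with variable-length packets.

```python
from collections import deque

class DWRRScheduler:
    """Minimal deficit weighted round-robin over multiple queues."""

    def __init__(self, quanta):
        self.quanta = quanta                        # queue id -> bytes per round
        self.queues = {q: deque() for q in quanta}  # queue id -> packet lengths
        self.deficit = {q: 0 for q in quanta}       # queue id -> byte credit

    def enqueue(self, q, packet_len):
        self.queues[q].append(packet_len)

    def next_round(self):
        """Serve each backlogged queue once; return the (queue, length)
        pairs transmitted this round."""
        sent = []
        for q, queue in self.queues.items():
            if not queue:
                self.deficit[q] = 0                 # idle queues carry no credit
                continue
            self.deficit[q] += self.quanta[q]       # grant this round's quantum
            while queue and queue[0] <= self.deficit[q]:
                pkt = queue.popleft()
                self.deficit[q] -= pkt
                sent.append((q, pkt))
        return sent
```

With quanta of 1500 and 500 bytes, queue 1 can send one 1000-byte packet per round while its unused credit carries over, letting it send its second packet in the next round.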
