Chapter 11. Queueing and Congestion

11.0. Introduction

Quality of Service (QoS) has been a part of the IP protocol since RFC 791 was released in 1981. However, it has not been extensively used until recently. The main reason for using QoS in an IP network is to protect sensitive traffic in congested links. In many cases, the best solution to the problem of congested links is simply to upgrade them. All you can do with a QoS system is affect which packets are forwarded and which ones are delayed or dropped when congestion is encountered. This is effective only when the congestion is intermittent. If a link is just consistently over-utilized, then QoS will at best offer a temporary stopgap measure until the link is upgraded or the network is redesigned.

There are several different traffic flow characteristics that you can try to control with a QoS system. Some applications require a certain minimum throughput to operate, while others require a minimum latency. Jitter, which is the difference in latency between consecutive packets, has to be carefully constrained for many real-time applications such as voice and video. Some applications do not tolerate dropped packets well. Others contain time-sensitive information that is better dropped than delayed.

There are essentially three steps to any traffic prioritization scheme. First, you have to know what your traffic patterns look like. This means you need to understand what traffic is mission critical, what can wait, and which traffic flows are sensitive to jitter or latency, or have minimum throughput requirements. Once you know this, the second step is to provide a way to identify the different types of traffic. Usually, in IP QoS you will use this information to tag the Type of Service (TOS) byte in the IP header. This byte contains a 6-bit field called the Differentiated Services Code Point (DSCP) in newer literature, and is separated into a 3-bit IP Precedence field and a TOS field (either 3 or 4 bits) in older literature. These fields are used for the same purpose, although there are differences in their precise meanings. We discuss these fields in more detail in Appendix B.

The third step is to configure the network devices to use this information to affect how the traffic is actually forwarded through the network. This is the step where you have the most freedom, because you can decide precisely what you want to do with different traffic types. However, there are two main philosophies here: TOS-based routing and DSCP per-hop behavior.

TOS-based routing basically means that the router selects different paths based on the contents of the TOS field in the IP header. However, the precise TOS behavior is left up to the network engineer, so the TOS values could affect other things such as queueing behavior. DSCP, on the other hand, generally looks at the same set of bits and uses them to decide how to handle the queueing when the links are congested. TOS-based routing is the older technique, and DSCP is newer.

You can easily implement TOS-based routing to select different network paths using Cisco’s Policy Based Routing (PBR). For example, some engineers use this technique on Frame Relay networks to funnel high priority traffic into a different PVC than lower priority traffic. And many standard IP protocols such as FTP and Telnet have well-defined default TOS settings.
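
As a rough sketch of this kind of TOS-based path selection (the access list number, route map name, next-hop address, and interface here are hypothetical), you could use PBR to steer all packets marked with critical IP Precedence toward a different next hop:

Router(config)#access-list 110 permit ip any any precedence critical
Router(config)#route-map tos-path permit 10
Router(config-route-map)#match ip address 110
Router(config-route-map)#set ip next-hop 192.168.5.2
Router(config-route-map)#exit
Router(config)#interface Ethernet0
Router(config-if)#ip policy route-map tos-path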

Most engineers prefer the DSCP approach because it is easier to implement and troubleshoot. If high priority application packets take a different path than low priority PING packets, as is possible with the TOS approach, the network can become extremely confusing to manage. DSCP is also less demanding of the router’s CPU and memory resources, and more consistent with the capabilities of modern routing protocols.

Note that any time you stop a packet to examine it in more detail, you introduce latency and potentially increase the CPU load on the router. The more fields you examine or change, the greater the impact. For this reason, we want to stress that the best network designs handle traffic prioritization by marking the packets as early as possible. Then other routers in the network only need to look at the DSCP field to handle the packet correctly. In general, you want to keep this marking function at the edges of the network where the traffic load is lowest, rather than in the core where the routers are too busy forwarding packets to examine and classify packets.

We discuss the IP Precedence, TOS, and DSCP classification schemes in more detail in Appendix B.

Queueing Algorithms

The simplest type of queue transmits packets in the same order that it receives them. This is called a First In First Out (FIFO) queue. And, although it naively appears to treat all traffic streams equally, it actually tends to favor resource-hungry, ill-behaved applications.

The problem is that if a single application sends a burst that fills a FIFO queue, the router will wind up transmitting most of the queued packets, but will have to drop incoming packets from other applications. If these other applications adapt to the decrease in available bandwidth by sending at a slower rate, the ill-behaved application will greedily take up the slack and could gradually choke off all of the other applications.

Because FIFO queueing allows some data flows to take more than their share of the available bandwidth, it is called unfair. Fair Queueing (FQ) and Weighted Fair Queueing (WFQ) are two of the simpler algorithms that have been developed to deal with this problem. Both of these algorithms sort incoming packets into a series of flows.

We discuss Cisco’s implementations of different queueing algorithms in Appendix B.

When talking about queueing, it is easy to get wrapped up in relative priorities of data streams. However, it is just as important to think about how your packets should be dropped when there is congestion. Cisco routers even allow you to implement a congestion avoidance system called Random Early Detection (RED), which also has a weighted variant, Weighted Random Early Detection (WRED). These algorithms allow the router to start dropping packets before there is a serious congestion problem. This forces well-behaved TCP applications to back off and send their data more slowly, thereby avoiding congestion problems before they start. RED and WRED are also discussed in Appendix B.

Fast Switching

One of the most important performance limitations on a router depends on how the packets are processed internally. The worst case is where the router’s CPU has to examine every packet to decide how to forward it. Packets that are handled in the CPU like this are said to use process switching. It is never possible to completely eliminate process switching in a router, because the router has to react to some types of packets, particularly those containing network control information. And, as we will discuss in a moment, process switching is often used to bootstrap other more efficient methods.

For many years, Cisco has included more efficient methods for packet processing in routers. These often involve offloading the routing decisions to special logic circuits, frequently associated with interface hardware. The actual details of how these circuits work are often not of much interest to the network engineer. The most important thing is to ensure that as many packets as possible use these more efficient methods.

Fast switching is one of Cisco’s earlier mechanisms for offloading routing from the CPU. In fast switching, the router uses process switching to forward the first packet to a particular destination. The CPU looks up the appropriate forwarding information in the routing table and then sends the packet accordingly. Then, when the router sees subsequent packets for the same destination, it is able to use the same forwarding information. Fast switching records this forwarding information in an internal cache, and uses it to bypass the laborious route lookup process for all but the first packet in a flow. It works best when there is a relatively long stream of packets to the same destination. And, of course, it is necessary to periodically verify that the same forwarding information is still valid. So fast switching requires the router to process switch some packets just to check that the cached path is still the best path.

To allow for reliable load balancing, the fast switching cache includes only /32 addresses. This means that there is no network or subnet level summarization in this cache. Whenever the fast switching algorithm receives a packet for a destination that is not in its cache, or that it can’t handle because of a special filtering feature that isn’t supported by fast switching, it must punt. This means that the router passes the packet to a more general routing algorithm, usually process switching.

Fast switching works only with active traffic flows. A new flow will have a destination that is not in the fast switching cache. Similarly, low-bandwidth applications that only send one packet at a time, with relatively long periods between packets, will not benefit from fast switching. In both of these cases, the router must punt, and process switch the packet. Another more serious example happens in busy Internet routers. These devices have to deal with so many flows that they are unable to cache them all.

Largely because of this last problem, Cisco developed a more sophisticated system called Cisco Express Forwarding (CEF) that improves on several of the shortcomings of fast switching. The main improvement is that instead of just caching active destinations, CEF caches the entire routing table. This increases the amount of memory required, but the forwarding information is stored in an efficient tree-based lookup structure known as the Forwarding Information Base (FIB).

The router keeps the cached table synchronized with the main routing table that is acquired through a dynamic routing protocol such as OSPF or BGP. This means that CEF needs to punt a packet only when it requires features that don’t work with CEF. For example, some policy-based routing rules do not work with CEF. So, when you use these, CEF must still punt and process switch these packets.

In addition to caching the entire routing table, CEF also maintains a table of information about all available next-hop devices. This allows the router to build the appropriate Layer 2 framing information for packets that need to be forwarded, without having to consult the system ARP table.

Because CEF rarely needs to punt a packet, even if it is the first packet of a new flow, it is able to operate much more efficiently than fast switching. And because it caches the entire routing table, it is even able to do packet-by-packet round-robin load sharing between equal cost paths. CEF shows its greatest advantage over fast switching in situations where there are many flows, each relatively short in duration. Another key advantage is that CEF has native support for QoS, while fast switching does not.

A distributed version of CEF (dCEF) is available on routers that support Versatile Interface Processor (VIP) cards, such as the 7500 series. This allows each VIP card to run CEF individually, further improving scalability.
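
On such a platform, you would turn on the distributed version with a single global command. This is only a sketch; platform and IOS support for dCEF varies:

Router(config)#ip cef distributed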

11.1. Fast Switching and CEF

Problem

You want to use the most efficient mechanism in the router to switch the packets.

Solution

As we discuss in Appendix B, one of the most important things you can do to improve router performance, and consequently network performance, is to ensure that you are using the best packet switching algorithm. All Cisco routers support fast switching, and it is enabled by default. However, some types of configurations require that it be disabled. The following example shows how to turn fast switching back on if it has been disabled:

Router#configure terminal
Enter configuration commands, one per line.  End with CNTL/Z.
Router(config)#interface FastEthernet0/0
Router(config-if)#ip route-cache
Router(config-if)#end
Router#

If you are using policies, including policies for class-based QoS, you also need to configure fast switching to handle them, using the ip route-cache policy command:

Router#configure terminal
Enter configuration commands, one per line.  End with CNTL/Z.
Router(config)#interface FastEthernet0/0
Router(config-if)#ip route-cache policy
Router(config-if)#end
Router#

CEF, on the other hand, is not enabled by default. Unlike fast switching, which is enabled separately for each interface, you have to enable CEF globally for the entire router, as well as on each interface:

Router#configure terminal
Enter configuration commands, one per line.  End with CNTL/Z.
Router(config)#ip cef
Router(config)#interface FastEthernet0/0
Router(config-if)#ip route-cache cef
Router(config-if)#end
Router#

Discussion

The ip route-cache command used to enable fast switching has a couple of useful options. The second example demonstrates one of these, the policy keyword, which allows fast switching of policy-based routing:

Router(config-if)#ip route-cache policy

Another useful option is the same-interface keyword, which instructs the router to allow fast switching of packets that come in and go back out through the same physical interface:

Router(config)#interface Serial0/0
Router(config-if)#ip route-cache same-interface

You should use this option when the router frequently needs to switch packets between different networks that all connect to the same port. This could be the case for Frame Relay networks, as well as for LANs that use subinterfaces or secondary IP addresses.

Cisco supplies three useful commands to look at CEF performance. The first is show cef interface:

Router#show cef interface FastEthernet0/1
FastEthernet0/1 is up (if_number 4)
  Corresponding hwidb fast_if_number 4
  Corresponding hwidb firstsw->if_number 4
  Internet address is 172.22.1.3/24
  ICMP redirects are always sent
  Per packet load-sharing is disabled
  IP unicast RPF check is disabled
  Inbound access list is 120
  Outbound access list is not set
  IP policy routing is disabled
  Hardware idb is FastEthernet0/1
  Fast switching type 1, interface type 18
  IP CEF switching enabled
  IP CEF Feature Fast switching turbo vector
  Input fast flags 0x0, Output fast flags 0x0
  ifindex 4(4)
  Slot 0 Slot unit 1 VC -1
  Transmit limit accumulator 0x0 (0x0)
  IP MTU 1500
Router#

The output of this command shows that CEF is enabled on the interface FastEthernet0/1, as well as information about inbound and outbound ACLs and policies. In this example, you can see that the interface has an access group configured to use access list number 120 to filter inbound traffic.

You can use the show cef drop and show cef not-cef-switched commands to see more detailed CEF forwarding statistics:

Router#show cef drop
CEF Drop Statistics
Slot  Encap_fail  Unresolved Unsupported    No_route      No_adj  ChkSum_Err
RP            71           0           0         105           0           0
Router#show cef not-cef-switched
CEF Packets passed on to next switching layer
Slot  No_adj No_encap Unsupp'ted Redirect  Receive  Options   Access     Frag
RP         0       0           0        0      572        0        0        0

These commands show you details of CEF’s operation on the router. The first command shows how many packets CEF has had to drop, and the reasons for the drops. The Slot column in the output of both commands refers to the VIP slot where the packets were received. In this case, the router didn’t have any VIP cards because it was a Cisco 2600. So all packets are received by the route processor, which is indicated by the RP in the leftmost column.

The Encap_fail column in the show cef drop output shows the number of packets that CEF has dropped because it could not build the outgoing encapsulation for them, typically because the adjacency information in the CEF table was incomplete. Unresolved indicates the number of packets dropped because CEF could not resolve the destination address prefix. If there had been any packets that could not be switched by CEF because of unsupported features, they would appear in the Unsupported column. The No_route column shows the number of packets dropped because CEF didn’t have a route to the destination. Similarly, No_adj shows the number of packets for which CEF did not have an entry in its adjacency table, so it had to send an ARP query. Finally, ChkSum_Err shows the number of times that CEF had to drop packets because they were corrupted.

The show cef not-cef-switched command has similar output. No_adj is the same here as it was in the show cef drop command, while Unsupp’ted is the same as the Unsupported column. The No_encap column counts the number of packets that could not be switched because they were encapsulated in another protocol. Redirect means that CEF has had to send these packets to another algorithm, usually process switching, to handle. And Receive lists the number of packets that were received from another internal switching algorithm. The remaining columns are rarely of interest in practice.

You can display the CEF version of the routing table with the show ip cef command:

Router#show ip cef
Prefix              Next Hop             Interface
0.0.0.0/0           172.25.1.1           FastEthernet0/0.1
0.0.0.0/32          receive
172.16.2.0/24       attached             FastEthernet0/1
                    attached             FastEthernet1/1
172.22.1.0/24       attached             FastEthernet0/1
172.22.1.0/32       receive
172.22.1.3/32       receive
172.22.1.4/32       172.22.1.4           FastEthernet0/1
<many lines deleted>
Router#

Notice in this output that there are actually two equal-cost routes to 172.16.2.0/24. CEF supports load balancing between these two paths.
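
By default, CEF load shares on a per-destination basis. If you want per-packet load sharing across these equal-cost paths instead, you can enable it on the outgoing interfaces. This is only a sketch, and the interface name is arbitrary:

Router(config)#interface FastEthernet0/1
Router(config-if)#ip load-sharing per-packet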

You can expand the detail on these entries with the show ip cef detail command:

Router#show ip cef detail
IP CEF with switching (Table Version 31), flags=0x0
  31 routes, 0 reresolve, 0 unresolved (0 old, 0 new), peak 1
  31 leaves, 21 nodes, 25560 bytes, 62 inserts, 31 invalidations
  0 load sharing elements, 0 bytes, 0 references
  universal per-destination load sharing algorithm, id 0697166A
  3(1) CEF resets, 0 revisions of existing leaves
  Resolution Timer: Exponential (currently 1s, peak 1s)
  0 in-place/0 aborted modifications
  refcounts:  5672 leaf, 5632 node

Adjacency Table has 5 adjacencies
0.0.0.0/0, version 27, cached adjacency 172.25.1.1
0 packets, 0 bytes
  via 172.25.1.1, FastEthernet0/0.1, 0 dependencies
    next hop 172.25.1.1, FastEthernet0/0.1
    valid cached adjacency
0.0.0.0/32, version 0, receive
172.16.2.0/24, version 21, attached, connected
0 packets, 0 bytes
  via FastEthernet0/0.2, 0 dependencies
    valid glean adjacency
172.16.2.0/32, version 10, receive
172.16.2.1/32, version 9, receive
172.16.2.255/32, version 11, receive
172.22.1.0/24, version 22, attached, connected
0 packets, 0 bytes
  via FastEthernet0/1, 0 dependencies
    valid glean adjacency
172.22.1.0/32, version 16, receive
<many lines deleted>
Router#

11.2. Setting the DSCP or TOS Field

Problem

You want the router to mark the DSCP or TOS field of an IP packet to affect its priority through the network.

Solution

The solution to this problem depends on the sort of traffic distinctions you want to make, as well as the version of IOS you are running in your routers.

There must be something that defines the different types of traffic that you wish to prioritize. In general, the simpler the distinctions are to make, the better. This is because all of the tests take router resources and introduce processing delays. The most common rules for distinguishing between traffic types use the packet’s input interface and simple IP header information such as TCP port numbers. The following examples show how to set an IP Precedence value of immediate (2) for all FTP control traffic that arrives through the serial0/0 interface, and an IP Precedence of priority (1) for all FTP data traffic. This distinction is possible because FTP control traffic uses TCP port 21, and FTP data uses port 20.

The new method for configuring this uses class maps. Cisco first introduced this feature in IOS Version 12.0(5)T. This method first defines a class-map that specifies how the router will identify this type of traffic. It then defines a policy-map that actually makes the changes to the packet’s TOS field:

Router#configure terminal
Enter configuration commands, one per line.  End with CNTL/Z.
Router(config)#access-list 101 permit tcp any eq ftp any
Router(config)#access-list 101 permit tcp any any eq ftp
Router(config)#access-list 102 permit tcp any eq ftp-data any
Router(config)#access-list 102 permit tcp any any eq ftp-data
Router(config)#class-map match-all ser00-ftpcontrol
Router(config-cmap)#description branch ftp control traffic
Router(config-cmap)#match input-interface serial0/0
Router(config-cmap)#match access-group 101
Router(config-cmap)#exit
Router(config)#class-map match-all ser00-ftpdata
Router(config-cmap)#description branch ftp data traffic
Router(config-cmap)#match input-interface serial0/0
Router(config-cmap)#match access-group 102
Router(config-cmap)#exit
Router(config)#policy-map serialftppolicy
Router(config-pmap)#description branch ftp traffic policy
Router(config-pmap)#class ser00-ftpcontrol
Router(config-pmap-c)#set ip precedence immediate
Router(config-pmap-c)#exit
Router(config-pmap)#class ser00-ftpdata
Router(config-pmap-c)#set ip precedence priority
Router(config-pmap-c)#exit
Router(config-pmap)#exit
Router(config)#interface serial0/0
Router(config-if)#ip route-cache policy
Router(config-if)#service-policy input serialftppolicy
Router(config-if)#end
Router#

For earlier IOS versions, where class maps were not available, you have to use policy-based routing to alter the TOS field in a packet. Applying this policy to the interface tells the router to use this policy to test all incoming packets on this interface and rewrite the ones that match the route map:

Router#configure terminal
Enter configuration commands, one per line.  End with CNTL/Z.
Router(config)#access-list 101 permit tcp any eq ftp any
Router(config)#access-list 101 permit tcp any any eq ftp
Router(config)#access-list 102 permit tcp any eq ftp-data any
Router(config)#access-list 102 permit tcp any any eq ftp-data
Router(config)#route-map serialftp-rtmap permit 10
Router(config-route-map)#match ip address 101
Router(config-route-map)#set ip precedence immediate
Router(config-route-map)#exit
Router(config)#route-map serialftp-rtmap permit 20
Router(config-route-map)#match ip address 102
Router(config-route-map)#set ip precedence priority
Router(config-route-map)#exit
Router(config)#interface serial0/0
Router(config-if)#ip policy route-map serialftp-rtmap
Router(config-if)#ip route-cache policy
Router(config-if)#end
Router#

Discussion

Before you can tag a packet for special treatment, you have to have an extremely clear idea of what types of traffic need special treatment, as well as precisely what sort of special treatment they will need. In the example, we have decided to give special priority to FTP traffic received on a specific serial interface. We show how to do this using both the old and new configuration techniques.

This may appear to be a somewhat artificial example. After all, why would you care about tagging inbound traffic that you have already received from a low-speed interface? Actually, one of the most important principles for implementing QoS in a network is that you should always tag the packet as early as possible, preferably at the edges of the network. Then, as it passes through the network, each router only needs to look at the tag, and doesn’t need to do any additional classification. In this case, we would ensure that the FTP traffic returning in the other direction is tagged by the first router that receives it. So the outbound traffic has already been tagged, and it is a waste of router resources to reclassify the outbound packets.

Many organizations actually take this idea of marking at the edges one step further, and remark every received packet. This helps to ensure that users aren’t requesting special QoS privileges that they aren’t allowed to have. However, you should be careful with this because it can sometimes disrupt legitimate markings. For example, a real-time application might use RSVP to reserve bandwidth through the network. It is important that the packets for this application have the appropriate Expedited Forwarding (EF) DSCP marking, or the network might not handle them properly. However, you also don’t want to let other non–real-time applications from this same source have the same EF priority level. So, if you are going to configure your routers to remark all incoming packets at the edges, make sure you understand which incoming markings are legitimate.
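
A minimal sketch of this kind of edge remarking follows. The class and policy names are hypothetical, access list 130 is assumed to describe the legitimate real-time traffic, and everything else has its DSCP reset to 0:

Router(config)#class-map match-all trusted-ef
Router(config-cmap)#match access-group 130
Router(config-cmap)#exit
Router(config)#policy-map edge-remark
Router(config-pmap)#class trusted-ef
Router(config-pmap-c)#set ip dscp ef
Router(config-pmap-c)#exit
Router(config-pmap)#class class-default
Router(config-pmap-c)#set ip dscp 0
Router(config-pmap-c)#exit
Router(config-pmap)#exit
Router(config)#interface FastEthernet0/0
Router(config-if)#service-policy input edge-remark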

Recipe 15.9 shows another interesting variation of this idea. In that case, the routers are running DLSw to bridge SNA traffic through an IP network. So the routers themselves actually create the IP packets. This creates an additional challenge because there is no incoming interface, so that recipe uses local policy-based routing. The fact that the router creates the packets also gives it an important advantage, because it doesn’t have to consider any DLSw packets that might just happen to pass through.

The advantages of the newer class map method aren’t obvious in this example, but one of the first big advantages appears if you want to use the more modern DSCP tagging scheme. Because the older policy-based routing method doesn’t directly support DSCP, you have to fake it by setting both the IP Precedence and the TOS separately as follows:

Router(config)#route-map serialftp-rtmap permit 10
Router(config-route-map)#match ip address 115
Router(config-route-map)#set ip precedence immediate
Router(config-route-map)#set ip tos max-throughput

In this case, the packet will wind up with an IP Precedence value of immediate, or 2 (010 in binary), and TOS of max-throughput or 4 (0100 in binary). Combining the bit patterns gives you 0100100, but, as we discuss in Appendix B, DSCP only uses the first 6 bits, 010010. If you look up this bit combination in Table B.3 in Appendix B, you will see that it corresponds to a value of AF21, which is Class 2 and lowest drop precedence.

Doing the same thing with the class map method is much more direct:

Router(config)#policy-map serialftppolicy
Router(config-pmap)#class serialftpclass
Router(config-pmap-c)#set ip dscp af21

Class maps will also be useful later in this chapter when we talk about class-based Weighted Fair Queueing and class-based traffic shaping.

It is important to note that throughout this entire example, we have only put a special value into the packet’s TOS or DSCP field. This, by itself, doesn’t affect how the packet is forwarded through the network. For the marking to have any effect, you must also ensure that the interface queues on each router that forwards these marked packets react appropriately to this information.

Finally, while this recipe shows two useful ways of marking packets, Recipe 11.12 shows still another method, using Committed Access Rate (CAR) features. CAR tends to be more efficient on higher speed interfaces.
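
To give a rough idea of what that looks like, a CAR marking rule is a single rate-limit statement on an interface. The access list, rate, and burst sizes below are purely illustrative; see Recipe 11.12 for the details:

Router(config)#interface FastEthernet0/0
Router(config-if)#rate-limit input access-group 101 8000000 16000 24000 conform-action set-prec-transmit 5 exceed-action set-prec-transmit 0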

11.3. Using Priority Queueing

Problem

You want to enable strict priority queues on an interface so that the router always handles high priority packets first.

Solution

To enable Priority Queueing on an interface, you must first define the priority list, and then apply it to the interface:

Router#configure terminal
Enter configuration commands, one per line.  End with CNTL/Z.
Router(config)#access-list 101 permit ip any any precedence 5 tos 12
Router(config)#access-list 102 permit ip any any precedence 4
Router(config)#access-list 103 permit ip any any precedence 3
Router(config)#priority-list 1 protocol ip high list 101
Router(config)#priority-list 1 protocol ip medium list 102
Router(config)#priority-list 1 protocol ip normal list 103
Router(config)#priority-list 1 default low
Router(config)#interface Ethernet0
Router(config-if)#priority-group 1
Router(config-if)#end
Router#

Discussion

As we discuss in Appendix B, priority queues strictly ensure that high priority packets are always handled before lower priority packets. We stress that using pure Priority Queueing like this is usually a bad idea because the higher priority traffic can take all of the available bandwidth and completely starve all other network traffic. You want to use this style of queueing only when you can be absolutely certain that the aggregate bandwidth of all high priority traffic will never consume the available link bandwidth. This could be the case, for example, if the high priority traffic is shaped before reaching this router, or for applications like Voice over IP (VoIP) that use a relatively constant amount of bandwidth, and don’t burst above this constant rate.

The priority-list command has a relatively flexible syntax for identifying what types of traffic will use which queues. However, we prefer the access list method shown in the example. This is because it gives the greatest range of possibilities for identifying traffic types.

In the example, we use access list 101 to decide which packets to send to the high priority queue:

Router(config)#access-list 101 permit ip any any precedence 5 tos 12

If you write out the bit patterns for an IP Precedence value of 5 and a TOS of 12, you get 101 and 1100. Combining these together and dropping the last bit gives 101110, which is identical to the EF DSCP value. This is typically the DSCP value that is used to mark packets for real-time applications.

Cisco introduced a dscp keyword to the access list command in IOS Version 12.1(5)T. This allows you to accomplish the same thing with a slightly simpler access list. This access list should also process faster because it only makes one comparison instead of two:

Router(config)#access-list 101 permit ip any any dscp ef

The access lists that define the other queues also select specific IP Precedence values. This is because we want to carefully limit the amount of processing that the router has to do. The less the access list has to look at, the better.

Note also that the router will process the priority list in the order that it was entered. In general you will want to keep queueing latency for high priority packets as low as possible. This is why we define the higher priority queues first.

In the example, we also specifically included a command to put any unmatched packets into the low priority queue:

Router(config)#priority-list 1 default low

If we had not included this command, the router would have used the normal priority queue for any unmatched packets by default.

You can look at Priority Queueing information on an interface with the show interface command:

Router#show interface Ethernet0
Ethernet0 is up, line protocol is up
  Hardware is Lance, address is 0000.0cf0.8460 (bia 0000.0cf0.8460)
  Internet address is 192.168.1.201/24
  MTU 1500 bytes, BW 10000 Kbit, DLY 1000 usec,
     reliability 255/255, txload 1/255, rxload 1/255
  Encapsulation ARPA, loopback not set, keepalive set (10 sec)
  ARP type: ARPA, ARP Timeout 04:00:00
  Last input 00:00:00, output 00:00:00, output hang never
  Last clearing of "show interface" counters never
  Input queue: 0/75/0 (size/max/drops); Total output drops: 0
  Queueing strategy: priority-list 1
  Output queue (queue priority: size/max/drops):
     high: 0/20/0, medium: 0/40/0, normal 0/60/0, low 0/80/0
  5 minute input rate 1000 bits/sec, 2 packets/sec
  5 minute output rate 2000 bits/sec, 2 packets/sec
     7390 packets input, 655552 bytes, 0 no buffer
     Received 6687 broadcasts, 0 runts, 0 giants, 0 throttles
  0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort
  0 input packets with dribble condition detected
  81097 packets output, 6240100 bytes, 0 underruns
  2 output errors, 0 collisions, 7 interface resets
  0 babbles, 0 late collision, 0 deferred
  2 lost carrier, 0 no carrier
  0 output buffer failures, 0 output buffers swapped out
Router#

In this case, you can see that the high priority queue has a maximum depth of 20 packets. The medium queue can hold 40 packets, normal holds 60, and the low priority queue can hold 80 packets. This increasing queue depth pattern is necessary to help deal with queue starvation problems. You can modify these default values as follows:

Router(config)#priority-list 1 queue-limit 10 15 25 35

This command sets the depths for all of the queues in increasing order. This particular example would set the high priority queue to hold a maximum of 10 packets, 15 for the medium queue, 25 for the normal queue, and 35 for the low priority queue.

Note that the router will automatically use the high priority queue for critical network control information such as routing updates and keepalives. If these packets are not sent in a timely fashion, it can disrupt how the network functions. If the router were to put this critical information into a lower priority queue, there would be a danger that higher priority application traffic could starve the lower priority queues, and disrupt routing or possibly even bring down parts of the network. CBWFQ and Cisco’s new Low Latency Queueing (LLQ) algorithm offer all of the advantages of Priority Queueing discussed here, and fewer of the disadvantages. This feature is discussed in Recipe 11.13. We recommend using LLQ instead of Priority Queueing if your router supports it. Cisco introduced LLQ in IOS level 12.0(6)T.
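
As a preview, LLQ simply adds a strict priority class to a CBWFQ policy map with the priority keyword. The following sketch assumes a class map called voice has already been defined, and the 128Kbps figure is arbitrary:

Router(config)#policy-map llqpolicy
Router(config-pmap)#class voice
Router(config-pmap-c)#priority 128
Router(config-pmap-c)#exit
Router(config-pmap)#class class-default
Router(config-pmap-c)#fair-queue
Router(config-pmap-c)#exit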

11.4. Using Custom Queueing

Problem

You want to configure custom queueing on an interface to give different traffic streams a share of the bandwidth according to their IP Precedence levels.

Solution

Implementing Custom Queueing on a router is a two-step procedure. First you must define the traffic types that will populate your queues. And then you apply the queueing method to an interface:

Router#configure terminal
Enter configuration commands, one per line.  End with CNTL/Z.
Router(config)#access-list 103 permit ip any any precedence 5
Router(config)#access-list 104 permit ip any any precedence 4
Router(config)#access-list 105 permit ip any any precedence 3
Router(config)#access-list 106 permit ip any any precedence 2
Router(config)#access-list 107 permit ip any any precedence 1
Router(config)#queue-list 1 protocol ip 3 list 103
Router(config)#queue-list 1 protocol ip 4 list 104
Router(config)#queue-list 1 protocol ip 5 list 105
Router(config)#queue-list 1 queue 5 byte-count 3000 limit 55
Router(config)#queue-list 1 protocol ip 6 list 106
Router(config)#queue-list 1 protocol ip 7 list 107
Router(config)#queue-list 1 default 8
Router(config)#interface HSSI0/0
Router(config-if)#custom-queue-list 1
Router(config-if)#end
Router#

Discussion

When you enable Custom Queueing, the router automatically creates 16 queues for application traffic plus one more for system requirements. You can look at the queues with a normal show interface command:

Router#show interface Ethernet0
Ethernet0 is up, line protocol is up
  Hardware is Lance, address is 0000.0cf0.8460 (bia 0000.0cf0.8460)
  Internet address is 192.168.1.201/24
  MTU 1500 bytes, BW 10000 Kbit, DLY 1000 usec,
     reliability 255/255, txload 2/255, rxload 1/255
  Encapsulation ARPA, loopback not set, keepalive set (10 sec)
  ARP type: ARPA, ARP Timeout 04:00:00
  Last input 00:00:00, output 00:00:00, output hang never
  Last clearing of "show interface" counters never
  Input queue: 2/75/0 (size/max/drops); Total output drops: 0
  Queueing strategy: custom-list 1
  Output queues: (queue #: size/max/drops)
     0: 0/20/0 1: 0/20/0 2: 0/20/0 3: 0/20/0 4: 0/20/0
     5: 0/55/3 6: 5/20/0 7: 0/20/0 8: 0/20/0 9: 0/20/0
     10: 0/20/0 11: 0/20/0 12: 0/20/0 13: 0/20/0 14: 0/20/0
     15: 0/20/0 16: 0/20/0
  5 minute input rate 5000 bits/sec, 12 packets/sec
  5 minute output rate 106000 bits/sec, 24 packets/sec
     132910 packets input, 14513345 bytes, 0 no buffer
     Received 109570 broadcasts, 0 runts, 0 giants, 0 throttles
     9 input errors, 0 CRC, 0 frame, 0 overrun, 9 ignored, 0 abort
     0 input packets with dribble condition detected
     1028116 packets output, 85603681 bytes, 0 underruns
     1 output errors, 42 collisions, 8 interface resets
     0 babbles, 0 late collision, 4 deferred
     1 lost carrier, 0 no carrier
     0 output buffer failures, 0 output buffers swapped out
Router#

In this output you can see that queue number 6 currently has five packets queued and waiting for delivery (6: 5/20/0), while queue number 5 has had to drop three packets due to congestion (5: 0/55/3).

The example assigns queue number 3 for all packets with the highest application IP Precedence value of 5. Similarly, packets with Precedence 4 use queue number 4, Precedence 3 use queue 5, Precedence 2 use queue 6, Precedence 1 use queue 7, and everything else uses queue number 8.

If you don’t explicitly assign a default queue for unclassified traffic, Custom Queueing sends it to queue number 1, so you should remember to set the default deliberately. The command in the example defines the default as queue number 8:

Router(config)#queue-list 1 default 8

Note that if there is another non-IP protocol such as IPX configured on this interface, it will also use the default queue. If you prefer to give this other protocol its own set of queues, you can define them using access lists for that protocol. The configuration is nearly identical to the IP example we have shown, except for the exact access list syntax, which naturally depends on the protocol.
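
For example, you could send all IPX traffic to its own queue without any access list at all. This is only a sketch, and the queue number is arbitrary:

Router(config)#queue-list 1 protocol ipx 10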

By default, the Custom Queueing scheduler visits all queues in order and takes an average of 1500 bytes from each, and each queue can hold up to 20 packets. In the example, we changed these default values for queue number 5:

Router(config)#queue-list 1 queue 5 byte-count 3000 limit 55

This tells the scheduler to take an average of 3000 bytes from this queue on each pass, and to store up to 55 packets in the queue. Increasing the number of bytes will effectively increase the share of the bandwidth that this queue receives. Increasing the queue depth decreases the probability of tail drops. But it also increases the amount of time that a packet could theoretically spend in the queue, which may increase latency and jitter.

In this example, all of the traffic types are selected by the IP Precedence value. It is also possible to select based on specific applications. You can do this either with an access list or, in some cases, using keywords in the queue-list command. For example, if you wanted to select all DLSw traffic and send it to queue number 9, you could create an access list:

Router(config)#access-list 117 permit tcp any eq 2065 any
Router(config)#access-list 117 permit tcp any any eq 2065
Router(config)#access-list 117 permit tcp any eq 2067 any
Router(config)#access-list 117 permit tcp any any eq 2067
Router(config)#queue-list 1 protocol ip 9 list 117

Or you could do it like this:

Router(config)#queue-list 1 protocol dlsw 9

This second method is clearly easier, but the number of protocol types that can be defined this way is unfortunately rather limited.

We have three important final notes on Custom Queueing that you should bear in mind. The first point is that if traffic from all of these streams is present, the router will share the bandwidth among them. In this example, we have used six different queues: one for each of the five application precedence levels plus a default. By default, each will receive a roughly equal share of the total bandwidth. So you may be surprised to find that, despite assigning different queues to the different traffic types, the important traffic still doesn’t get a large enough share of the bandwidth. You can affect this with the byte-count keyword, as we discussed earlier. Note that the queues are serviced by byte count rather than packet count. So suppose you have two queues, one of which supports an interactive session with many short packets, while the other contains a bulk transfer with a few large packets. If you configure the router to service these queues with the same byte-count, it will tend to forward a lot more of the small packets, but the net share of the bandwidth will be roughly equal on average.
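
As a rough illustration of the byte-count effect, giving one queue twice the byte count of another gives it roughly twice the share of the bandwidth when both queues are busy. The queue numbers and byte counts here are purely illustrative:

Router(config)#queue-list 1 queue 3 byte-count 3000
Router(config)#queue-list 1 queue 4 byte-count 1500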

Secondly, in Custom Queueing, the traffic within each queue competes directly with all other traffic in the same queue. So, for example, if one user sends a burst of application traffic that fills one of the queues, this will cause tail drops for other users whose traffic uses the same queue. This will cause a smaller version of the global problem of a FIFO queue that we discuss in Appendix B.

The third point is that the more queues you define, the smaller the share of the total bandwidth each queue receives. Further, having more queues increases the amount of processing the router has to do to segregate the traffic.

The second and third points compete with one another. The second one tends to point toward increasing the number of queues to limit the competition within each queue. But the third point should convince you that there is a point of diminishing returns where more queues will not help the situation. In practice, the third point tends to win out. It rarely turns out to be beneficial to have more than five or six Custom Queues unless some of those queues are only used very lightly.

Custom Queueing is an older QoS mechanism on Cisco routers. In most cases, you will likely find that a newer algorithm such as CBWFQ will be more flexible and give better results.

11.5. Using Custom Queues with Priority Queues

Problem

You want to combine Custom Queueing with Priority Queueing on an interface so the highest priority packets are always handled first, and lower priority traffic streams share bandwidth with one another.

Solution

You can split the queues so that some use Priority Queueing and the remainder use Custom Queueing:

Router#configure terminal
Enter configuration commands, one per line.  End with CNTL/Z.
Router(config)#access-list 101 permit ip any any precedence 7
Router(config)#access-list 102 permit ip any any precedence 6
Router(config)#access-list 103 permit ip any any precedence 5
Router(config)#access-list 104 permit ip any any precedence 4
Router(config)#access-list 105 permit ip any any precedence 3
Router(config)#access-list 106 permit ip any any precedence 2
Router(config)#access-list 107 permit ip any any precedence 1
Router(config)#queue-list 1 protocol ip 1 list 101
Router(config)#queue-list 1 protocol ip 2 list 102
Router(config)#queue-list 1 protocol ip 3 list 103
Router(config)#queue-list 1 protocol ip 4 list 104
Router(config)#queue-list 1 protocol ip 5 list 105
Router(config)#queue-list 1 protocol ip 6 list 106
Router(config)#queue-list 1 protocol ip 7 list 107
Router(config)#queue-list 1 lowest-custom 4
Router(config)#interface HSSI0/0
Router(config-if)#custom-queue-list 1
Router(config-if)#end
Router#

Discussion

This example is similar to Recipe 11.4, which looked at a pure Custom Queueing example. In this case, however, we have added the command:

Router(config)#queue-list 1 lowest-custom 4

This command allows you to mix Custom and Priority Queue types. Note that this command only works with queue-list number 1. It is not available for any other queue lists.

In this example, queue number 4 is the lowest numbered Custom Queue. So, in this example, queues 1, 2 and 3 are all Priority Queues. This means that the router will deliver all of the packets in queue number 1, then all of the packets in queue number 2, then all of the packets in queue number 3. And then, if these high priority queues are all empty, it will use custom queueing to deliver the packets in the lower priority queues.

The main advantage to this sort of configuration is that it gives absolute priority to real-time applications. This is important not because of the bandwidth, but because Priority Queueing the real-time applications minimizes their queueing latency. However, as with the pure Priority Queueing example in Recipe 11.3, you have to be extremely careful to prevent the high priority traffic from starving the other queues.

11.6. Using Weighted Fair Queueing

Problem

You want your routers to use the TOS/DSCP fields when forwarding packets.

Solution

The simplest way to make your routers use DSCP or TOS information is to just make sure that Weighted Fair Queueing (WFQ) is enabled:

Router#configure terminal
Enter configuration commands, one per line.  End with CNTL/Z.
Router(config)#interface Serial0/0
Router(config-if)#fair-queue
Router(config-if)#end
Router#

WFQ is enabled by default on all interfaces of E1 speeds (roughly 2Mbps) or less. You can enable WFQ on higher speed interfaces as well, but we don’t recommend it.

To configure more specific behavior, you can tell WFQ how to allocate its queues:

Router#configure terminal
Enter configuration commands, one per line.  End with CNTL/Z.
Router(config)#interface Serial0/0
Router(config-if)#fair-queue 64 512 10
Router(config-if)#end
Router#

Discussion

Before we discuss these examples in any detail, we should mention that WFQ works well even if you don’t use any TOS/DSCP marking. In this case, it simply gives the same default weighting to every flow, which is the same as conventional Fair Queueing (without the weights). As we discuss in Appendix B, Fair Queueing is much more effective than FIFO queueing. However, with TOS/DSCP marking, WFQ really shows its value, giving higher priority flows more of the bandwidth, and preventing high volume flows from starving low volume flows regardless of their TOS markings.

The first example just enables WFQ on the interface. In fact, this is the default for all interfaces that operate slower than 2.048Mbps (E1 speed), except for interfaces that use LAPB or SDLC. WFQ does not work with LAPB or SDLC.

The second example is a little bit more interesting, though, because it changes the default queues. The fair-queue command has three optional parameters. In the example, we specified:

Router(config-if)#fair-queue 64 512 10

The first number specifies the congestive discard threshold. This just means that if there are more than 64 packets in any given queue, the router will start to discard any new packets. The default threshold is 64.

The second number is the number of dynamic queues. The value must be a power of 2 between 16 and 4096 (i.e., one of the following values: 16, 32, 64, 128, 256, 512, 1024, 2048, or 4096). The default value is 256. If this interface must support a large number of flows, then it is a good idea to choose a larger value.

The last number (10 in this case) is the number of queues that the router will set aside for RSVP reservation requests. Please see Recipe 11.9 for more information about RSVP.

In most cases, the default parameters are good enough. However, there are two times in particular when it is useful to modify them. First, if the interface must support an extremely large number of flows, you will get better performance by using a larger number of queues. However, be careful doing this on faster interfaces because the router may start to have trouble processing the additional queues. In that case, you could actually get a performance improvement by decreasing the number of queues. In this case, each queue could wind up simultaneously handling a few distinct flows.

You will also need to change the default queue parameters if you are using RSVP to reserve queues. By default, this parameter is zero. If you are using RSVP with WFQ, you must allocate some reserved queuing space.
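
A minimal sketch of such a combination would allocate some reservable queues with the fair-queue command and enable RSVP on the same interface. The queue counts and RSVP bandwidth values here are arbitrary examples:

Router(config)#interface Serial0/0
Router(config-if)#fair-queue 64 256 36
Router(config-if)#ip rsvp bandwidth 1158 100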

See Also

Recipe 11.9

11.7. Using Class-Based Weighted Fair Queueing

Problem

You want to use Class-Based Weighted Fair Queueing on an interface.

Solution

There are three steps to configuring Class-Based Weighted Fair Queueing (CBWFQ) on a router. First, you have to create one or more class maps that describe the traffic types. Then you create a policy map that tells the router what to do with these traffic types. Finally you need to attach this policy map to one or more of the router’s interfaces:

Router#configure terminal
Enter configuration commands, one per line.  End with CNTL/Z.
Router(config)#class-map highprec
Router(config-cmap)#description Highest priority Prec=5
Router(config-cmap)#match ip precedence 5
Router(config-cmap)#exit
Router(config)#class-map medhiprec
Router(config-cmap)#description Medium-high priority Prec=4
Router(config-cmap)#match ip precedence 4
Router(config-cmap)#exit
Router(config)#class-map medloprec
Router(config-cmap)#description Medium-low priority Prec=2,3
Router(config-cmap)#match ip precedence 2 3
Router(config-cmap)#exit
Router(config)#policy-map cbwfqpolicy
Router(config-pmap)#class highprec
Router(config-pmap-c)#bandwidth percent 25
Router(config-pmap-c)#exit
Router(config-pmap)#class medhiprec
Router(config-pmap-c)#bandwidth percent 25
Router(config-pmap-c)#exit
Router(config-pmap)#class medloprec
Router(config-pmap-c)#bandwidth percent 25
Router(config-pmap-c)#exit
Router(config-pmap)#class class-default
Router(config-pmap-c)#fair-queue 512
Router(config-pmap-c)#queue-limit 96
Router(config-pmap-c)#exit
Router(config-pmap)#exit
Router(config)#interface serial0/1
Router(config-if)#service-policy output cbwfqpolicy
Router(config-if)#end
Router#

This feature is available in IOS levels 12.0(5)T and higher.

Discussion

CBWFQ need not be significantly different from regular WFQ. In the example we have defined all traffic with an IP Precedence value of critical (5) to have a special queue. We have also created a single queue for traffic with Precedence 4, and another one for traffic with Precedence values of 2 and 3. All other traffic, including traffic with Precedence 0 and 1 as well as all non-IP traffic, uses regular WFQ. To make this fact slightly more clear, we have modified the default WFQ parameters with the following commands:

Router(config)#policy-map cbwfqpolicy
Router(config-pmap)#class class-default
Router(config-pmap-c)#fair-queue 512
Router(config-pmap-c)#queue-limit 96

This simply modifies the default WFQ behavior for all traffic that doesn’t match one of the other defined classes. It sets the number of WFQ queues to 512, and sets the queue depth to a maximum of 96 packets. You could achieve the same effect using the fair-queue interface command from Recipe 11.6:

Router(config-if)#fair-queue 96 512 0

But that example doesn’t give you the ability to also have separate queues for special classes of traffic, as shown in this recipe. The final argument for this fair-queue interface command specifies the number of queues to set aside for RSVP. We are trying to duplicate the effect of the cbwfqpolicy policy map, which doesn’t include any RSVP queues, so we have set the last argument to zero here. Please refer to Recipe 11.6 for more information on this command.

You can create up to 64 class-based queues for use with CBWFQ. You can control the share of the bandwidth given to each queue with the bandwidth keyword, specifying either an absolute value in kilobits per second or a percentage of the total available bandwidth. The following example shows the syntax for using a percentage:

Router(config-pmap)#class highprec
Router(config-pmap-c)#bandwidth percent 25
Router(config-pmap-c)#exit

The bandwidth percent command is available in IOS levels 12.1(1) and higher. For earlier releases, you can only specify an absolute bandwidth:

Router(config-pmap-c)#bandwidth 5000

The argument for this version of the command is a value in Kbps between 8 and 2,000,000 (that is, 2Gbps), which should be sufficient for most interface types. Note that this is far above the E1 speed mentioned earlier as the effective upper limit for flow-based WFQ. Because CBWFQ generally uses fewer queues and doesn’t need to sort based on flow, you can use it for higher speed interfaces as well. However, you should let your average CPU utilization be your guide. If you do too many tests when classifying packets, you might find that the router can’t keep up with high packet rates.
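
A quick way to keep an eye on this is the standard processor utilization display, which reports five-second, one-minute, and five-minute averages:

Router#show processes cpu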

In both versions you have to keep two important factors in mind. First, although this is essentially a Layer 3 feature, you have to include any Layer 2 framing overhead when configuring the bandwidth. If a given queue supports a streaming multimedia application with a known bit rate, it is often a good idea to slightly overestimate the requirements to include this Layer 2 overhead. If the application doesn’t use the excess, CBWFQ allocates it to other queues.

The second important factor is that the total allocated bandwidth must not exceed a configurable maximum value. By default, this maximum is 75%. You can change it (for example, to 80%) by using the following interface level command:

Router(config-if)#max-reserved-bandwidth 80

You would apply this command to the interface that runs CBWFQ and needs a little extra reserved capacity. It is usually best to leave this at its default value, however. The router uses the remainder for unclassified traffic and network control packets. In this case, we have configured WFQ for the unclassified traffic. It is vital, however, to reserve enough bandwidth for critical network functions such as Layer 2 keepalive frames and routing protocols.

Creating the policy map alone doesn’t actually change the way the router behaves. To do that you have to attach this policy to an interface as follows:

Router(config)#interface serial0/1
Router(config-if)#service-policy output cbwfqpolicy

One of the classes defined in this example is a high priority class that we called “highprec”. This class map simply looks for traffic that is tagged with an IP Precedence value of 5. The policy map then tells the interface to give up to 25% of its bandwidth to this high priority traffic. If there is not enough high priority traffic to use this, the router will allocate the excess to the remaining traffic.

We will expand on this concept in Recipe 11.13.

11.8. Controlling Congestion with WRED

Problem

You want to control congestion on an interface before it becomes a problem.

Solution

The syntax for configuring WRED changed with the introduction of class-based QoS. The old method defined WRED across an entire interface:

Router#configure terminal
Enter configuration commands, one per line.  End with CNTL/Z.
Router(config)#interface HSSI0/0
Router(config-if)#random-detect
Router(config-if)#random-detect precedence 0 10 20 10
Router(config-if)#random-detect precedence 1 12 20 10
Router(config-if)#random-detect precedence 2 15 25 15
Router(config-if)#random-detect precedence 3 18 25 15
Router(config-if)#random-detect precedence 4 20 30 20
Router(config-if)#random-detect precedence 5 22 30 20
Router(config-if)#random-detect precedence 6 30 40 25
Router(config-if)#random-detect precedence 7 40 50 100
Router(config-if)#random-detect precedence rsvp 45 50 100
Router(config-if)#end
Router#

The new configuration method uses the same syntax as CBWFQ:

Router#configure terminal
Enter configuration commands, one per line.  End with CNTL/Z.
Router(config)#class-map Prec5
Router(config-cmap)#description Critical
Router(config-cmap)#match ip precedence 5
Router(config-cmap)#exit
Router(config)#policy-map cb_wred
Router(config-pmap)#class Prec5
Router(config-pmap-c)#random-detect dscp-based
Router(config-pmap-c)#exit
Router(config-pmap)#class class-default
Router(config-pmap-c)#fair-queue 512
Router(config-pmap-c)#queue-limit 96
Router(config-pmap-c)#random-detect dscp-based
Router(config-pmap-c)#exit
Router(config-pmap)#exit
Router(config)#interface HSSI0/1
Router(config-if)#service-policy output cb_wred
Router(config-if)#end
Router#

Discussion

For the older method, you can set up the drop probabilities according to IP Precedence values using the following command:

Router(config-if)#random-detect precedence 7 40 50 100

The first argument after the precedence keyword here is the IP Precedence value. The options are any integer between 0 and 7 or the keyword RSVP. After this are the minimum threshold, maximum threshold, and the so-called mark probability denominator. The minimum threshold is the number of packets that must be in the queue before the router starts to discard. The probability at the minimum threshold is essentially zero, but it rises linearly as the number of packets in the queue rises. The maximum probability occurs at the maximum threshold. You specify the actual value of the probability at this maximum using the mark probability denominator. In this case, we have set the value to 100, which means that at the maximum, we will discard 1 packet in 100. Therefore, halfway between the maximum and minimum thresholds, the router will drop 1 packet in 200.

As we discuss in Appendix B, the router doesn’t necessarily drop packets when the queue depth reaches the minimum threshold. Rather, it uses a moving average so that temporary bursts of data are not dropped. This configured minimum is the lower limit of this moving average, which is reached only when the congestion continues for a longer period of time.
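
If you need to tune how quickly this moving average responds to changes in queue depth, there is an optional interface command for it. The value shown here is only an example; the default behavior is usually appropriate:

Router(config-if)#random-detect exponential-weighting-constant 9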

If you do not change these values, the defaults take IP Precedence values into account. The default mark probability denominator is 10, so the router will discard at most 1 packet in 10. The default maximum threshold depends on the speed of the interface and the router’s capacity for buffering packets, but it is the same for all Precedence values. So, by default, the only difference between WRED’s treatment of different IP Precedence levels is the minimum threshold. The default minimum threshold for packets with an IP Precedence of 0 is 50% of the maximum threshold. This value rises linearly with Precedence, so the minimum thresholds for Precedence 7 and for packets with RSVP reserved-bandwidth allocations are almost the same as the maximum threshold.
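
If you find that this moving average responds too slowly (or too quickly) to changes in queue depth, you can adjust the exponential weighting constant that controls the averaging. The following is a minimal sketch using the old-style interface configuration; the interface name and the value of 5 are purely illustrative. The default constant is 9, and smaller values make the average track the instantaneous queue depth more closely:

Router#configure terminal
Enter configuration commands, one per line.  End with CNTL/Z.
Router(config)#interface HSSI0/0
Router(config-if)#random-detect exponential-weighting-constant 5
Router(config-if)#end
Router#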

In the new-style example, we have only created one class-based queue to show the principle. In practice, you would probably want to create more than this. All of the traffic that doesn’t have an IP Precedence value of 5 uses the default queue, where we have configured both WFQ and WRED.

This example uses DSCP-based random detection. WRED has a built-in ability to discriminate based on DSCP value, so that traffic streams with higher drop precedence values are more likely to drop packets. The default WRED settings when using DSCP-based random detection are shown in Table 11-1.

Table 11-1. Default parameters for DSCP-based WRED

DSCP value   Minimum threshold   Maximum threshold   Drop probability
             queue depth         queue depth         at maximum
AFx1         32                  40                  1/10
AFx2         28                  40                  1/10
AFx3         24                  40                  1/10

As Table 11-1 shows, the default DSCP-based thresholds depend only on the drop precedence (the second digit of the AF code point) and are the same for every class. So, for example, AF12, AF22, AF32 and AF42 all begin dropping packets in a sustained congestion situation when the queue depth reaches 28 packets. They reach their maximum drop probability when there are 40 packets in the queue. In all cases, the drop probability at the maximum threshold value is 1/10 (the mark probability), meaning that the router will randomly drop 1 packet in 10.

You can change these values in a policy map as follows:

Router(config-pmap)#class AF1x
Router(config-pmap-c)#bandwidth percent 20
Router(config-pmap-c)#random-detect dscp-based
Router(config-pmap-c)#random-detect dscp af13 10 20
Router(config-pmap-c)#random-detect dscp af12 20 50
Router(config-pmap-c)#random-detect dscp af11 50 100 50
Router(config-pmap-c)#exit

In each of the random-detect dscp commands, the first argument is the DSCP value, followed by the minimum threshold, the maximum threshold, and the denominator of the mark probability. In the case of the AF11 entry, the router will start dropping these packets when there are more than 50 packets in the queue and increase the probability until the number reaches 100. At that point, the probability of dropping a packet of this type will be 1 in 50.

Note that these thresholds apply to all traffic in the queue, not just traffic with this particular DSCP value. So there may be 20 AF11 packets, 10 AF12, and 20 more marked with the AF13 DSCP value. Since this adds up to 50 packets, the router will start to drop the AF11 packets. However, because the maximum thresholds for AF12 and AF13 packets are 50 and 20 respectively, the router will already be dropping packets of these types at the full rate (1 packet in 10 by default) before it starts to drop any AF11 packets.

This example assumes that you want to use DSCP values to control the WRED thresholds. This is not necessary, however. You can also use an unweighted version of the command as follows:

Router(config)#class-map AF11
Router(config-cmap)#match ip dscp af11
Router(config-cmap)#exit
Router(config)#policy-map example
Router(config-pmap)#class AF11
Router(config-pmap-c)#bandwidth percent 10
Router(config-pmap-c)#random-detect
Router(config-pmap-c)#exit

This is particularly useful when your class definitions already take DSCP values into account, as this class map does. Because every packet in this class already carries the same DSCP value (AF11), there is no need for WRED to look at the DSCP value again.

See Also

Recipe 11.7

11.9. Using RSVP

Problem

You want to configure RSVP on your network.

Solution

Basic RSVP configuration is relatively simple. All you need to do is define how much bandwidth can be reserved on the interface:

Router#configure terminal
Enter configuration commands, one per line.  End with CNTL/Z.
Router(config)#interface FastEthernet0/0
Router(config-if)#ip rsvp bandwidth 128 56
Router(config-if)#end
Router#

Some network administrators have to worry about unauthorized use of bandwidth reservation. You can control this by specifying an access list of allowed neighbor devices:

Router#configure terminal
Enter configuration commands, one per line.  End with CNTL/Z.
Router(config)#access-list 15 permit 192.168.1.0 0.0.0.255
Router(config)#interface FastEthernet0/0
Router(config-if)#ip rsvp bandwidth 128 56
Router(config-if)#ip rsvp neighbor 15
Router(config-if)#end
Router#

Discussion

Note that before you can configure RSVP on an interface, you must first configure the interface for WFQ, CBWFQ, or WRED. This step is not included in this example, to make it easier to focus on the RSVP configuration. For examples of WFQ, CBWFQ, and WRED, please refer to Recipe 11.6, Recipe 11.7, and Recipe 11.8 respectively.
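
As a minimal sketch of that prerequisite step, you could simply enable WFQ on the interface before adding the RSVP command. The interface name here is only illustrative, and on many low-speed serial interfaces WFQ is already the default:

Router#configure terminal
Enter configuration commands, one per line.  End with CNTL/Z.
Router(config)#interface Serial0/0
Router(config-if)#fair-queue
Router(config-if)#ip rsvp bandwidth 128 56
Router(config-if)#end
Router#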

The first example tells the router to pay attention to RSVP signaling, and defines how much bandwidth can be reserved in the following command:

Router(config-if)#ip rsvp bandwidth 128 56

The first numerical argument, 128, specifies that applications can reserve a maximum aggregate bandwidth of 128Kbps. The last argument, 56, means that the largest amount that a single application can request is 56Kbps.

When you use the ip rsvp neighbor command, as in the second example, it is important to remember that this router receives RSVP reservation requests from neighboring devices. If this is an access router, then the neighboring device on the local LAN port could be an end device. But for other routers and other interfaces, it is likely that the RSVP request will come from another router, not from the end device making the initial request. So, for router-to-router connections, it may not be useful to specify an access list because all RSVP requests, legitimate or not, will come from a neighboring router. The best place to control which devices are allowed to reserve bandwidth is on the access router.

There are several useful show commands to look at the RSVP configuration of your router as well as the dynamic reservation requests. The first of these is the show ip rsvp interface command, which shows information on the reservations that have been made by interface:

Router#show ip rsvp interface
interfac allocate i/f max  flow max per/255 UDP  IP   UDP_IP   UDP M/C
Et0      0M       1M       100K     0  /255 0    2    0        0
To0      50K      1M       100K     12 /255 0    1    0        0

This command shows that there are two interfaces that are currently supporting RSVP reservations, Ethernet0 and TokenRing0. The allocate column shows the amount of bandwidth that has been allocated to active RSVP requests on each interface. In all of these fields, the letter K stands for Kbps and the M for Mbps. The i/f max column shows the total amount that can be allocated on each of these interfaces, while the flow max shows the maximum that can be requested by any one flow. These are the parameters from the ip rsvp bandwidth interface configuration command.

The remaining columns show information about the actual allocated streams. The per/255 column shows the fraction of the total interface bandwidth that is used by each of these allocations. This is measured as a fraction of 255, as is common for expressing loads on Cisco interfaces. The UDP column shows the number of UDP-encapsulated sessions, IP counts the IP-encapsulated sessions, and UDP_IP shows the sessions that use both UDP and IP encapsulation. The UDP M/C column shows whether the interface is configured to allow UDP reservations.

You can look at individual reservations in detail with the following command:

Router#show ip rsvp installed
RSVP: Ethernet0 has no installed reservations
RSVP: TokenRing0
BPS    To              From            Protoc DPort  Sport  Weight Conversation
50K    192.168.5.5     192.168.1.10    TCP    888    999    4      520
Router#

This shows that the router is currently supporting a 50Kbps TCP session between the two IP addresses that are shown, with the source and destination port numbers, 999 and 888, respectively. The Weight column shows the weighting factor, and Conversation shows the conversation (or flow) number used by WFQ for this queue. If you don’t run WFQ on this interface, both of these values appear as 0.

There is considerable overlap between the output of the show ip rsvp installed command and the output of the same command with the reservation and sender keywords. However, there are some important additional pieces of information here:

Router#show ip rsvp reservation
To            From          Pro DPort Sport Next Hop      I/F   Fi Serv BPS Bytes
192.168.5.5   192.168.1.10  TCP 888   999   192.168.3.2   To0   FF LOAD 50K   50K
Router#show ip rsvp sender
To              From            Pro DPort Sport Prev Hop        I/F  BPS  Bytes
192.168.5.5     192.168.1.10    TCP 888   999   192.168.1.201   Et0   50K    50K

Router#

With the reservation keyword, you can see details about what type of reservation has been made. In this case, FF indicates that this is a Fixed Filter reservation, which means that it contains a single conversation between two end devices. However, RSVP also allows aggregation of flows. If this column says SE, which stands for Shared Explicit Filter, then it represents a shared reservation whose scope is limited to an explicit list of senders. The other option is WF (Wildcard Filter), which indicates a shared reservation that is open to any sender.

With the sender flag, you see the actual path information for the reservation. The Prev Hop and I/F columns here show the address and interface of the previous hop router. The BPS column shows the reserved bandwidth for this session in Kbps, and the Bytes column shows the maximum burst size in kilobytes.

The show ip rsvp neighbor command simply lists all of the IP addresses of active RSVP neighbors on all interfaces. This command is useful if you want to figure out what devices are making RSVP requests. As we mentioned earlier, since all RSVP requests are made hop-to-hop, it is quite likely that you will see a lot of routers in this list. However, on access routers, this command will help you to see whether the right end devices are making RSVP requests. If there are unauthorized devices in the list, you may want to consider using the ip rsvp neighbor interface configuration command to restrict which devices are allowed to make requests:

Router#show ip rsvp neighbor
Interfac Neighbor        Encapsulation
Et0      192.168.1.10    RSVP
Et0      192.168.1.201   RSVP
To0      192.168.3.2     RSVP

11.10. Using Generic Traffic Shaping

Problem

You want to perform traffic shaping on an interface.

Solution

Generic traffic shaping works on an entire interface to limit the rate that it sends data. This first version restricts all outbound traffic to 500,000 bits per second:

Router#configure terminal
Enter configuration commands, one per line.  End with CNTL/Z.
Router(config)#interface FastEthernet0/0
Router(config-if)#traffic-shape rate 500000
Router(config-if)#end
Router#

You can also specify traffic shaping for packets that match a particular access list. This will buffer only the matching traffic, and leave all other traffic to use the default queueing mechanism for the interface:

Router#configure terminal
Enter configuration commands, one per line.  End with CNTL/Z.
Router(config)#access-list 101 permit tcp any eq www any
Router(config)#access-list 101 permit tcp any any eq www
Router(config)#access-list 102 permit tcp any eq ftp any
Router(config)#access-list 102 permit tcp any any eq ftp
Router(config)#interface FastEthernet0/0
Router(config-if)#traffic-shape group 101 100000
Router(config-if)#traffic-shape group 102 200000
Router(config-if)#end
Router#

There is also a newer class-based method for configuring traffic shaping on an interface using CBWFQ. We discuss this technique in Recipe 11.13.

Discussion

The first example shows how to configure an interface to restrict the total amount of outbound information. This is extremely useful when there is a device downstream that will not cope well with hard bursts of traffic.

A common example is the method of delivering ATM WAN services through an Ethernet interface, frequently called LAN Extension. In this type of network, the Ethernet port on your router connects to the carrier’s switch, which bridges one or more remote Ethernet segments together using an ATM network. The problem with this is that the Ethernet interface is able to send data much faster than the ATM network is configured to accept it. So you run the risk of dropping large numbers of packets within the ATM network. Since the carrier networks usually don’t support customer Layer 3 QoS features, the entire ATM network acts just like a big FIFO queue with a tail drop problem. As we discuss in Appendix B, this is extremely inefficient.
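
For example, if the carrier has provisioned the ATM side of such a LAN extension service at 2Mbps, you might shape the Ethernet port down to that rate. This is only a sketch; the interface name and rate are illustrative and should match whatever the carrier has actually provisioned:

Router#configure terminal
Enter configuration commands, one per line.  End with CNTL/Z.
Router(config)#interface FastEthernet0/0
Router(config-if)#traffic-shape rate 2000000
Router(config-if)#end
Router#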

This is why it can be extremely useful to restrict the total amount of traffic leaving an interface. It can also be useful to restrict only certain applications, as we demonstrated in the second example. However, we discuss more efficient class-based methods for controlling the total amount of traffic of a particular type in Recipe 11.7. This older group traffic-shaping method should only be used on routers that do not support CBWFQ.

11.11. Using Frame-Relay Traffic Shaping

Problem

You want to separately control the amount of traffic sent along each of the PVCs in a Frame Relay network.

Solution

This first example shows how to configure Frame Relay traffic shaping using point-to-point Frame Relay subinterfaces:

Router#configure terminal
Enter configuration commands, one per line.  End with CNTL/Z.
Router(config)#interface HSSI0/0
Router(config-if)#encapsulation frame-relay
Router(config-if)#exit
Router(config)#interface HSSI0/0.1 point-to-point
Router(config-subif)#traffic-shape rate 150000
Router(config-subif)#frame-relay interface-dlci 31
Router(config-subif)#end
Router#

Most Frame Relay carrier networks allow considerable bursting above your contractual Committed Information Rate (CIR), meaning that much of the time you can actually use more capacity than you have purchased. So you might want to apply traffic shaping only when you encounter Frame Relay congestion problems, and then only to reduce the data rate until the congestion goes away:

Router#configure terminal
Enter configuration commands, one per line.  End with CNTL/Z.
Router(config)#interface HSSI0/0
Router(config-if)#encapsulation frame-relay
Router(config-if)#exit
Router(config)#interface HSSI0/0.1 point-to-point
Router(config-subif)#traffic-shape adaptive 10000
Router(config-subif)#frame-relay interface-dlci 31
Router(config-subif)#end
Router#

Discussion

These examples are different from the one that we showed in Recipe 11.10. In this recipe we don’t want to control the entire aggregate traffic flow, and we don’t care about the traffic based on application. Here we want to ensure that every Frame Relay PVC using this interface is shaped separately so that they don’t overrun the amount of bandwidth purchased from the WAN carrier. If you have 20 PVCs on an interface, it is fine to send the maximum per-PVC bandwidth to all of them simultaneously, but you will suffer from terrible performance problems if you try to send all of that bandwidth through a single PVC.

Usually you will purchase a particular amount of Frame Relay bandwidth, or CIR, from the WAN carrier for each PVC. So the first example shows how you can force the router to only send 150Kbps through the PVC with DLCI 31. It is important to remember that you can have different CIR values for different PVCs. You may need to have a different Frame Relay traffic shaping rate on every PVC.

The second example assumes that a lot of the time there will be very little congestion in the carrier’s network, so you should be able to safely use some of the excess capacity. The Frame Relay protocol includes the ability to tell devices when there is congestion in the network. There are two types of congestion notifications, which are just noted as flags in the header portion of regular user frames. If a router receives a frame with the Forward Explicit Congestion Notification (FECN) flag set, it knows that the frame encountered congestion on its way from the remote device to the router. If the router receives a frame with the Backward Explicit Congestion Notification (BECN) flag set, a frame encountered congestion on its way from this router to the remote device. Please refer to Chapter 10 for a more detailed discussion of these Frame Relay protocol features.

The traffic-shape adaptive command tells the router that when it sees frames with a BECN flag, it should reduce the sending rate on this PVC. By default, this command will back off the sending rate all the way to zero. So, in the example, we have specified a minimum rate of 10,000bps, which would correspond to the CIR for this PVC:

Router(config-subif)#traffic-shape adaptive 10000

In general, this adaptive traffic shaping method is preferred over the static method, because it will give you significantly better network performance when the carrier’s network is not congested. However, it is important to remember that the precise implementation of FECN and BECN markings is up to the carrier. Some carriers disable these features altogether, while others use them inconsistently. Since most customers ignore these markings, carriers often have very little incentive to ensure that they are accurate.

You should check with your carrier before implementing adaptive Frame Relay traffic shaping. We also recommend monitoring your FECN and BECN statistics for a reasonable period of time before relying on them, to verify that they are accurate.
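
One convenient way to collect these statistics is the show frame-relay pvc command, which reports the number of FECN- and BECN-flagged frames received on each PVC. The DLCI value here simply matches the earlier examples:

Router#show frame-relay pvc 31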

11.12. Using Committed Access Rate

Problem

You want to use Committed Access Rate (CAR) to control the flow of traffic through an interface.

Solution

CAR provides a useful method for policing the traffic rate through an interface. The main features of CAR are functionally similar to traffic shaping, but CAR also allows several extremely useful extensions. This first example shows the simplest application. We have configured CAR here to do basic rate limiting. The interface will transmit packets at an average rate of 500,000bps, allowing bursts of 4,500 bytes. If there is a burst longer than 9,000 bytes, the router will drop the excess packets:

Router#configure terminal
Enter configuration commands, one per line.  End with CNTL/Z.
Router(config)#interface HSSI0/0
Router(config-if)#rate-limit output 500000 4500 9000 conform-action transmit exceed-action drop
Router(config-if)#end
Router#

This next example defines three different traffic classifications using access lists, and separately limits the rates of these applications:

Router#configure terminal
Enter configuration commands, one per line.  End with CNTL/Z.
Router(config)#access-list 101 permit tcp any eq www any
Router(config)#access-list 101 permit tcp any any eq www
Router(config)#access-list 102 permit tcp any eq ftp any
Router(config)#access-list 102 permit tcp any any eq ftp
Router(config)#access-list 102 permit tcp any eq ftp-data any
Router(config)#access-list 102 permit tcp any any eq ftp-data
Router(config)#access-list 103 permit ip any any
Router(config)#interface HSSI0/0
Router(config-if)#rate-limit output access-group 101 50000 4500 9000 conform-action transmit exceed-action drop
Router(config-if)#rate-limit output access-group 102 50000 4500 9000 conform-action transmit exceed-action drop
Router(config-if)#rate-limit output access-group 103 400000 4500 9000 conform-action transmit exceed-action drop
Router(config-if)#end
Router#

CAR also includes a useful option to match DSCP in the rate-limit command without needing to resort to an access group. In the following example, the DSCP values with the highest drop precedence values are rate-limited. Note that, unlike several other Cisco commands, here you must specify the decimal value of the DSCP field. Please refer to Table B-3 in Appendix B for a list of these values:

Router#configure terminal
Enter configuration commands, one per line.  End with CNTL/Z.
Router(config)#interface HSSI0/0
Router(config-if)#rate-limit output dscp 14 50000 4500 9000 conform-action transmit exceed-action drop
Router(config-if)#rate-limit output dscp 22 50000 4500 9000 conform-action transmit exceed-action drop
Router(config-if)#rate-limit output dscp 30 50000 4500 9000 conform-action transmit exceed-action drop
Router(config-if)#end
Router#

Finally, CAR also allows you to define a new kind of access list called a rate-limiting access list:

Router#configure terminal
Enter configuration commands, one per line.  End with CNTL/Z.
Router(config)#access-list rate-limit 55 5
Router(config)#interface HSSI0/0
Router(config-if)#rate-limit output access-group rate-limit 55 50000 4500 9000 conform-action transmit exceed-action drop
Router(config-if)#end
Router#

Discussion

People are often confused about the difference between CAR and traffic shaping, because they appear to perform extremely similar functions. However, there is one very important difference. When a traffic shaping interface experiences a burst of data, it attempts to buffer the excess. But CAR just does whatever exceed-action you have specified:

Router(config-if)#rate-limit output 500000 4500 9000 conform-action transmit exceed-action drop

In this example, the exceed-action is to simply drop the packet. Meanwhile, the conform-action in each example is to simply transmit the packet. Any traffic that falls below the configured rate is said to conform. CAR includes several other possibilities besides simply transmitting or dropping the packet:

drop

CAR drops the packet.

transmit

CAR transmits the packet unchanged.

set-prec-transmit

CAR changes the IP Precedence of the packet, and then transmits it.

continue

CAR moves on to evaluate the next rate-limit command on this interface.

set-prec-continue

CAR changes the IP Precedence and then evaluates the next rate-limit command.

Cisco has added several additional options to IOS Versions 12.0(14)ST and higher:

set-dscp-continue

CAR changes the DSCP field and then evaluates the next rate-limit command.

set-dscp-transmit

CAR changes the DSCP field and then transmits the packet.

set-qos-continue

CAR sets the qos-group and then evaluates the next rate-limit command.

set-qos-transmit

CAR sets the qos-group and then transmits the packet.

There are two additional options that you can use with MPLS to alter the MPLS experimental field:

set-mpls-exp-continue

This sets the experimental field and then continues.

set-mpls-exp-transmit

This option sets the experimental field and transmits the packet.

The various continue options allow you to string together a series of CAR commands on an interface to do more sophisticated things:

Router#configure terminal
Enter configuration commands, one per line.  End with CNTL/Z.
Router(config)#access-list 101 permit tcp any eq www any
Router(config)#access-list 101 permit tcp any any eq www
Router(config)#access-list 103 permit ip any any
Router(config)#interface HSSI0/0
Router(config-if)#rate-limit output 50000 4500 4500 conform-action transmit exceed-action continue
Router(config-if)#rate-limit output access-group 101 100000 4500 9000 conform-action set-prec-transmit 3 exceed-action continue
Router(config-if)#rate-limit output access-group 103 100000 4500 9000 conform-action set-prec-transmit 0 exceed-action drop
Router(config-if)#end
Router#

In this example, the interface will transmit all packets when the rate is 50,000bps or less. As soon as the traffic exceeds this rate, however, the router starts to bump up the IP Precedence of all HTTP traffic to a value of 3, and all other traffic goes down to a precedence of 0. It will continue to transmit all of these packets until the average rate exceeds 100,000bps. You can use this sort of technique to carefully tune how your network behaves in congestion situations.
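
When you chain CAR statements together like this, it is helpful to watch how much traffic conforms to or exceeds each statement. The show interfaces rate-limit command reports these counters for every rate-limit statement on the interface (we omit the output here):

Router#show interfaces HSSI0/0 rate-limit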

You can also use CAR and the exceed-action set-prec-transmit command to lower the Precedence of high priority IP traffic when it exceeds its allocated portion of the bandwidth. Simply transmitting it with a lower Precedence is a useful intermediate step short of dropping high priority packets outright. However, with real-time traffic it is better to drop the excess than to buffer or demote it, because either of those options would introduce unwanted latency and jitter.
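
As a sketch of this idea, the following statement transmits HTTP traffic unchanged as long as it conforms to the configured rate, but marks the excess down to Precedence 0 instead of dropping it. It reuses access list 101 from the earlier example, and the rate and burst values are purely illustrative:

Router(config)#interface HSSI0/0
Router(config-if)#rate-limit output access-group 101 100000 4500 9000 conform-action transmit exceed-action set-prec-transmit 0
Router(config-if)#end
Router#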

The other useful thing you can do with CAR is to rate-limit inbound traffic:

Router(config-if)#rate-limit input 50000 4500 4500 conform-action transmit exceed-action drop

Of course, it’s never completely ideal to allow a remote device to send too many packets across the network, only to drop them as they are received. But it is sometimes useful when your network acts as a service provider to other networks. For example, you might have downstream customers who have subscribed to a subrate service. This would include things like selling access through an Ethernet port, but restricting the customer to some lower rate such as 100Kbps.
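
A minimal sketch of such a subrate Ethernet handoff might look like the following, where the interface faces the customer and the rate and burst values are purely illustrative:

Router#configure terminal
Enter configuration commands, one per line.  End with CNTL/Z.
Router(config)#interface FastEthernet0/1
Router(config-if)#rate-limit input 100000 8000 8000 conform-action transmit exceed-action drop
Router(config-if)#end
Router#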

Alternatively, you could use inbound rate-limit commands to ensure that your downstream customers are allowed to use your network for surfing the web, but only if the rate is kept below some threshold:

Router(config)#access-list 101 permit tcp any eq www any
Router(config)#access-list 101 permit tcp any any eq www
Router(config)#access-list 103 permit ip any any
Router(config)#interface HSSI0/0
Router(config-if)#rate-limit input 50000 4500 4500 conform-action transmit exceed-action continue
Router(config-if)#rate-limit input access-group 101 100000 4500 9000 conform-action drop exceed-action continue
Router(config-if)#rate-limit input access-group 103 100000 4500 9000 conform-action transmit exceed-action drop
Router(config-if)#end
Router#

You could even use CAR to simply rewrite the IP Precedence values of all packets received from a customer:

Router(config)#interface HSSI0/0
Router(config-if)#rate-limit input 100000 4500 9000 conform-action set-prec-transmit 0 exceed-action set-prec-transmit 0
Router(config-if)#end
Router#

This same technique is also helpful in combating Internet-based DOS attacks. For example, if your network is being inundated with PING flood or SYN ACK attacks, you might want to look specifically for these types of packets, and make sure that they are restricted to a low but reasonable rate. This way, the legitimate uses of these packets will not suffer, but you will reduce the service denial problem.
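
For example, you could restrict inbound ICMP echo traffic to a modest rate. This is only a sketch; the access list number, rate, and burst values are illustrative:

Router#configure terminal
Enter configuration commands, one per line.  End with CNTL/Z.
Router(config)#access-list 111 permit icmp any any echo
Router(config)#access-list 111 permit icmp any any echo-reply
Router(config)#interface HSSI0/0
Router(config-if)#rate-limit input access-group 111 64000 8000 8000 conform-action transmit exceed-action drop
Router(config-if)#end
Router#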

The last example in the solution section of this recipe needs a little bit of explanation because some of the properties can be confusing:

Router(config)#access-list rate-limit 55 5
Router(config)#interface HSSI0/0
Router(config-if)#rate-limit output access-group rate-limit 55 50000 4500 9000 conform-action transmit exceed-action drop

The access-list rate-limit command allows you to create a new and special variety of access list especially for use with CAR. There are three ranges of rate-limiting access list numbers. Lists numbered from 1 to 99 match IP Precedence values, lists numbered from 100 to 199 match MAC addresses, and lists numbered from 200 to 299 match MPLS experimental field values.

In the example above, access list number 55 simply matches all packets with an IP Precedence value of 5. You can also use a precedence bit mask to match several values at once. Each bit in the mask corresponds to one Precedence value, with the low-order bit representing Precedence 0. For example, to match Precedence values 0, 1, and 2, you would set the three low-order bits, 00000111 in binary, which you enter as the hexadecimal value 07:

Router(config)#access-list rate-limit 56 mask 07

The MPLS access lists work in a similar way, matching the value in the MPLS experimental field:

Router(config)#access-list rate-limit 255 6
Router(config)#access-list rate-limit 256 mask 42

The MAC address access lists work on standard Ethernet or Token Ring 48-bit MAC addresses:

Router(config)#access-list rate-limit 155 0000.0c07.ac01

You have to be careful about how you use these rate-limiting access lists, because it’s easy to get them confused with regular access lists. You can have a regular access list with the same number as a rate-limiting access list. The only difference is that you apply rate-limiting access lists with the rate-limit keyword on the rate-limit command as follows:

Router(config)#interface HSSI0/0
Router(config-if)#rate-limit output access-group rate-limit 55 50000 4500 9000 conform-action transmit exceed-action drop

11.13. Implementing Standards-Based Per-Hop Behavior

Problem

You want to configure your router to follow the RFC-defined per-hop behaviors defined for different DSCP values.

Solution

This recipe constructs an approximate implementation of both expedited forwarding (EF) and assured forwarding (AF), while still ensuring that network control packets do not suffer from delays due to application traffic. With the QoS enhancements provided in IOS Version 12.1(5)T and higher, there is a straightforward way to accomplish this using a combination of WRED, CBWFQ and Low Latency Queueing (LLQ):

Router#configure terminal
Enter configuration commands, one per line.  End with CNTL/Z.
Router(config)#class-map EF
Router(config-cmap)#description Real-time application traffic
Router(config-cmap)#match ip precedence 5
Router(config-cmap)#exit
Router(config)#class-map AF1x
Router(config-cmap)#description Priority Class 1
Router(config-cmap)#match ip precedence 1
Router(config-cmap)#exit
Router(config)#class-map AF2x
Router(config-cmap)#description Priority Class 2
Router(config-cmap)#match ip precedence 2
Router(config-cmap)#exit
Router(config)#class-map AF3x
Router(config-cmap)#description Priority Class 3
Router(config-cmap)#match ip precedence 3
Router(config-cmap)#exit
Router(config)#class-map AF4x
Router(config-cmap)#description Priority Class 4
Router(config-cmap)#match ip precedence 4
Router(config-cmap)#exit
Router(config)#policy-map cbwfq_pq
Router(config-pmap)#class EF
Router(config-pmap-c)#priority 58 800
Router(config-pmap-c)#exit
Router(config-pmap)#class AF1x
Router(config-pmap-c)#bandwidth percent 15
Router(config-pmap-c)#random-detect dscp-based
Router(config-pmap-c)#exit
Router(config-pmap)#class AF2x
Router(config-pmap-c)#bandwidth percent 15
Router(config-pmap-c)#random-detect dscp-based
Router(config-pmap-c)#exit
Router(config-pmap)#class AF3x
Router(config-pmap-c)#bandwidth percent 15
Router(config-pmap-c)#random-detect dscp-based
Router(config-pmap-c)#exit
Router(config-pmap)#class AF4x
Router(config-pmap-c)#bandwidth percent 15
Router(config-pmap-c)#random-detect dscp-based
Router(config-pmap-c)#exit
Router(config-pmap)#class class-default
Router(config-pmap-c)#fair-queue 512
Router(config-pmap-c)#queue-limit 96
Router(config-pmap-c)#exit
Router(config-pmap)#exit
Router(config)#interface HSSI0/1
Router(config-if)#service-policy output cbwfq_pq
Router(config-if)#end
Router#

If you are running older IOS versions, you can use Custom Queueing to create different levels of forwarding precedence, but you can’t combine them with WRED to enforce the standard drop precedence rules.

Discussion

We have repeatedly said throughout this chapter that Priority Queues are dangerous because they allow high priority traffic to starve all lower priority queues. However, strict prioritization does give excellent real-time behavior for the highest priority traffic because it ensures minimal queueing latency. This recipe uses Cisco’s LLQ, which avoids most of the problems of pure Priority Queueing.

This example creates an approximation of the differentiated services model defined in RFCs 2597 and 2598. All real-time and network control traffic uses LLQ to ensure that it is always delivered with minimal delay. All other IP traffic falls into one of the assured forwarding classes shown in Appendix B, with the exception of packets that do not have a DSCP tag value. Untagged traffic, including non-IP traffic, will use the default forwarding behavior.

Each column in Table B-3 represents a precedence class, which we have called AF1x, AF2x, AF3x, and AF4x. For each of these classes, we have reserved a share of the bandwidth and configured DSCP-based WRED:

Router(config-pmap)#class AF4x
Router(config-pmap-c)#bandwidth percent 15
Router(config-pmap-c)#random-detect dscp-based
Router(config-pmap-c)#exit

We showed how to modify the default WRED thresholds and drop probabilities in the discussion section of Recipe 11.8.

The example also defines a class called EF that matches all packets with an IP Precedence value of 5, which has a binary representation of 101. Note that, technically, the EF DSCP value looks like 101110 in binary. So the example allows packets to join this queue as long as the first three bits of the DSCP are correct. This is for backward compatibility and to ensure that we don’t leave out any high priority traffic. However, if you wanted to create a queue for the real EF DSCP value and a separate queue for packets with IP Precedence 5, you could do so like this:

Router(config)#class-map EF
Router(config-cmap)#description Real-time application traffic
Router(config-cmap)#match ip dscp ef
Router(config-cmap)#exit
Router(config)#class-map Prec5
Router(config-cmap)#description Critical application traffic
Router(config-cmap)#match ip precedence 5
Router(config-cmap)#exit
Router(config)#policy-map cbwfq_pq
Router(config-pmap)#class EF
Router(config-pmap-c)#priority 58 800
Router(config-pmap-c)#exit
Router(config-pmap)#class Prec5
Router(config-pmap-c)#bandwidth percent 15
Router(config-pmap-c)#exit

Note that you must define the classes in the policy map in this order because the router matches packets sequentially. If you specified the EF class map after the Prec5 map, you would find that all of your EF traffic would wind up in the other queue, which is not what you want. Note also that, as we discussed in Recipe 11.7, you have to be careful of the total reserved bandwidth in CBWFQ. Simply adding these lines to the recipe example would give a total of 75% allocated bandwidth. This is the default maximum value. If you want to exceed this value, Recipe 11.7 shows how to modify the maximum.
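
As a quick reminder, the interface command involved is max-reserved-bandwidth. The figure of 90 percent here is only an example; raise the limit deliberately, because whatever is left over is what remains available for routing updates and other overhead traffic:

Router(config)#interface HSSI0/1
Router(config-if)#max-reserved-bandwidth 90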

We have already discussed most of the commands shown in the class definitions in other recipes. However, the EF queue contains a special command:

Router(config-pmap-c)#priority 58 800

This defines a strict priority queue for this class with a sustained throughput of 58Kbps. This is based on the assumption that the EF application uses a standard stream of 56Kbps, and we have added a small amount on top of this to allow for Layer 2 overhead. The last argument in the priority command is a burst length in bytes. This allows the application to temporarily exceed the defined sustain rate, just long enough to send this many bytes. In this case we’re assuming that the real-time application uses small packets, so allowing it to send a burst of 800 bytes when it has already reached the configured sustain rate of 58Kbps should be more than sufficient.

Note that this does not imply that there is strict policing on this queue. If you also want to enforce a maximum rate of 65Kbps, for example, on this queue, you could also include a police statement, as follows:

Router(config-pmap)#class EF
Router(config-pmap-c)#priority 58 800
Router(config-pmap-c)#police 65000 1600
Router(config-pmap-c)#exit

In this command we have also been slightly more generous with the burst size, extending it to 1600 bytes. Enforcing an upper limit like this is a good idea on priority queues because it helps to prevent the highest priority traffic from starving the other queues. However, it is not necessary to enforce the upper limit this way just to avoid starving the lower queues. This is because the priority command stops giving strict priority to this queue when the bandwidth rises above the specified limit.

It should be clear from the ability to allocate both minimum and maximum bandwidth with the CBWFQ priority and police commands that LLQ is a much more sophisticated and flexible type of queueing than the simple Priority Queueing discussed in Recipe 11.3. So if your router is capable of supporting CBWFQ, we recommend using LLQ for any situation where you want Priority Queueing. Cisco introduced the LLQ feature in IOS level 12.0(6)T.

11.14. Viewing Queue Parameters

Problem

You want to see how queueing is configured on an interface.

Solution

Cisco provides several useful commands for looking at an interface’s queueing configuration and performance. The first of these is the show queue command:

Router#show queue FastEthernet0/0
  Input queue: 0/75/105/0 (size/max/drops/flushes); Total output drops: 0
  Queueing strategy: weighted fair
  Output queue: 0/1000/96/0 (size/max total/threshold/drops)
     Conversations  0/1/128 (active/max active/max total)
     Reserved Conversations 0/0 (allocated/max allocated)
     Available Bandwidth 75000 kilobits/sec

Router#

Use the show queueing command to look at the router’s queueing configuration in general:

Router#show queueing
Current fair queue configuration:

  Interface           Discard    Dynamic  Reserved  Link    Priority
                      threshold  queues   queues    queues  queues
  FastEthernet0/0     96         128      258       8       1
  Serial0/0           64         256      37        8       1
  Serial0/1           96         128      256       8       1

Current DLCI priority queue configuration:
Current priority queue configuration:

List   Queue  Args
1      high   protocol ip          tcp port 198
1      high   protocol pppoe-sessi
2      high   protocol ip          udp port 199
3      low    default
3      high   protocol ip          list 101
Current custom queue configuration:
Current random-detect configuration:
Router#

Discussion

The show queue and show queueing commands augment the show interface output, which also shows important queueing information:

Router#show interface FastEthernet0/0
FastEthernet0/0 is up, line protocol is up
  Hardware is AmdFE, address is 0001.9670.b780 (bia 0001.9670.b780)
  MTU 1500 bytes, BW 100000 Kbit, DLY 100 usec,
     reliability 255/255, txload 1/255, rxload 1/255
  Encapsulation ARPA, loopback not set
  Keepalive set (10 sec)
  Full-duplex, 100Mb/s, 100BaseTX/FX
  ARP type: ARPA, ARP Timeout 04:00:00
  Last input 00:00:00, output 00:00:00, output hang never
  Last clearing of "show interface" counters never
  Input queue: 0/75/105/0 (size/max/drops/flushes); Total output drops: 0
  Queueing strategy: weighted fair
  Output queue: 0/1000/96/0 (size/max total/threshold/drops)
     Conversations  0/1/128 (active/max active/max total)
     Reserved Conversations 0/0 (allocated/max allocated)
     Available Bandwidth 75000 kilobits/sec
  5 minute input rate 1000 bits/sec, 2 packets/sec
  5 minute output rate 2000 bits/sec, 2 packets/sec
     2495069 packets input, 181306312 bytes
     Received 2333309 broadcasts, 0 runts, 0 giants, 0 throttles
     0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
     0 watchdog
     0 input packets with dribble condition detected
     1927544 packets output, 197958017 bytes, 0 underruns
     0 output errors, 0 collisions, 21 interface resets
     0 babbles, 0 late collision, 0 deferred
     0 lost carrier, 0 no carrier
     0 output buffer failures, 0 output buffers swapped out
Router#

The show queue command is a good starting point when looking at queueing issues. It tells you what queueing algorithm is used, as well as information about any drops:

Router#show queue FastEthernet0/0
  Input queue: 0/75/105/0 (size/max/drops/flushes); Total output drops: 0
  Queueing strategy: weighted fair
  Output queue: 0/1000/96/0 (size/max total/threshold/drops)
     Conversations  0/1/128 (active/max active/max total)
     Reserved Conversations 0/0 (allocated/max allocated)
     Available Bandwidth 75000 kilobits/sec

In this case, you can see that the interface uses WFQ. This can be slightly deceptive because we actually configured this interface for CBWFQ. The Reserved Conversations line indicates that no RSVP reservation queues have been allocated for this interface. So, if you tried to use RSVP on this interface, it would not work right now.
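
To see the individual class queues, their offered traffic, and any drops when the interface is running CBWFQ, the show policy-map interface command is generally more informative (we omit the output here):

Router#show policy-map interface FastEthernet0/0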

The show queue command gives no output at all when you use Custom Queueing or Priority Queueing on an interface.
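
For those queueing methods, the custom and priority keywords of the show queueing command are more useful, because they display the configured queue lists:

Router#show queueing priority
Router#show queueing custom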

The first section of output from the show queueing command gives some useful summary information on fair queueing parameters:

Router#show queueing
Current fair queue configuration:

  Interface           Discard    Dynamic  Reserved  Link    Priority
                      threshold  queues   queues    queues  queues
  FastEthernet0/0     96         128      258       8       1
  Serial0/0           64         256      37        8       1
  Serial0/1           96         128      256       8       1

In this case you can immediately see and compare the queue sizes between different interfaces.
