Appendix B. IP Precedence, TOS, and DSCP Classifications

In Chapter 11, we discussed several important concepts related to traffic classification, queueing algorithms, and congestion handling systems. Unfortunately, some of these concepts are unfamiliar to many network engineers, so this appendix includes some more detail and background.

Every IP packet includes a TOS byte (in IPv6, the equivalent field is called the Traffic Class byte). This byte is broken up into fields that the network uses to help provide the appropriate QoS commitments. In the older model, originally defined in RFC 791 and updated by RFC 1349, the first three bits contain the IP Precedence value, and the next four bits contain the TOS value.

It is easy to get confused between the different uses of the term “TOS.” Sometimes it refers to the entire byte and sometimes to just the four bits that describe forwarding behavior. To help reduce the confusion, we call the four-bit field the TOS field, and the entire byte the TOS byte.

Table B-1 shows the standard IP Precedence values. It is important to note that normal application traffic is not permitted to use IP Precedence values 6 or 7, which are strictly reserved for keepalive packets, routing protocols, and other important network traffic. The network must always give these packets higher priority than any application packets because no application will work if the network loses its topology information.

Table B-1. Standard IP Precedence values

IP Precedence          Decimal value   Bit pattern
Routine                0               000
Priority               1               001
Immediate              2               010
Flash                  3               011
Flash Override         4               100
Critical               5               101
Internetwork control   6               110
Network control        7               111

Table B-2 shows the standard IP TOS values, as defined in RFC 1349. The idea was that an application could use these bits to request the appropriate forwarding behavior. Because the values are specified in different bits, the original standard (RFC 791) allowed applications to specify more than one option at a time. This turned out to be unmanageable in practice, because it wasn’t clear which bit should take precedence when two bits were set and each would select a different path. So RFC 1349 changed the standard to prohibit combinations of TOS bits.

Table B-2. Standard IP TOS values

IP TOS                  Decimal value   Bit pattern
Normal                  0               0000
Minimum monetary cost   1               0001
Maximum reliability     2               0010
Maximum throughput      4               0100
Minimum delay           8               1000

Note that there is some disagreement in the literature about the last bit, which sometimes signifies “minimum monetary cost” and sometimes is not used at all. Some references state that the TOS byte has one unused bit, and others say that there are two unused bits. In any case, this entire scheme is now considered obsolete, and has been replaced by the DSCP model. However, many common applications (including Telnet and FTP) still set TOS field values by default. So it is important that the network be able to handle these settings gracefully.

In the new DSCP formalism, defined in RFC 2474, the TOS byte is divided into a 6-bit DSCP field followed by 2 unused bits. As we discuss in the next section, the DSCP formalism was designed to give good backward compatibility with the older formalism. In particular, the first three bits of the DSCP field map perfectly onto the older IP Precedence definitions.
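
To make the two formalisms concrete, here is a minimal Python sketch (ours, purely illustrative) that interprets a single TOS byte both ways. The bit positions follow RFC 1349 for the Precedence/TOS reading and RFC 2474 for the DSCP reading:

    def decode_tos_byte(tos_byte: int) -> dict:
        """Interpret one TOS/Traffic Class byte under both formalisms."""
        return {
            "precedence": (tos_byte >> 5) & 0b111,    # old model: first 3 bits
            "tos_field": (tos_byte >> 1) & 0b1111,    # old model: next 4 bits
            "dscp": (tos_byte >> 2) & 0b111111,       # new model: first 6 bits
            "unused": tos_byte & 0b11,                # new model: last 2 bits
        }

    # EF traffic (DSCP 46, binary 101110) maps onto IP Precedence 5:
    print(decode_tos_byte(0b10111000))
    # {'precedence': 5, 'tos_field': 12, 'dscp': 46, 'unused': 0}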

The first three bits of the DSCP field identify the forwarding class. If the value in the first 3 bits is between 1 and 4, the packet uses Assured Forwarding (AF); a value of 0 indicates ordinary best-effort forwarding. If the value is 5, which corresponds to the highest allowed application IP Precedence value, then the packet uses Expedited Forwarding (EF). These names are slightly confusing because, in general, Assured Forwarding is merely expedient, while Expedited Forwarding is more likely to assure delivery.

Table B-3 shows the Assured Forwarding DSCP values. As we have already mentioned, the first 3 bits specify the forwarding class. A higher value in this subfield results in a higher forwarding precedence through the network. The next 2 bits specify Drop Precedence, and the final bit is always 0 for AF values. The higher the Drop Precedence, the more likely the packet is to be dropped if it encounters congestion.

Table B-3. Assured Forwarding DSCP values

Drop Precedence           Class 1            Class 2            Class 3            Class 4
Lowest Drop Precedence    001010 (10) AF11   010010 (18) AF21   011010 (26) AF31   100010 (34) AF41
Medium Drop Precedence    001100 (12) AF12   010100 (20) AF22   011100 (28) AF32   100100 (36) AF42
Highest Drop Precedence   001110 (14) AF13   010110 (22) AF23   011110 (30) AF33   100110 (38) AF43

For Expedited Forwarding there is only one value. It has a binary value of 101110, or 46 in decimal, and it is usually simply called EF. Note that this continues to follow the same pattern: the first 3 bits correspond to a decimal value of 5, which was the highest application IP Precedence value. You could think of the next two bits as specifying the highest Drop Precedence, but this isn’t really meaningful because there is only one EF value. However, there is still significant room for defining additional EF types if they become necessary in the future.
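
The pattern in Table B-3 is regular enough to decode mechanically. The following Python sketch, which is purely illustrative, recovers the conventional name (AFxy or EF) from a DSCP value:

    def dscp_name(dscp: int) -> str:
        """Recover the conventional name for an AF or EF DSCP value."""
        af_class = (dscp >> 3) & 0b111   # first 3 bits: forwarding class
        drop_prec = (dscp >> 1) & 0b11   # next 2 bits: drop precedence
        if 1 <= af_class <= 4 and 1 <= drop_prec <= 3 and dscp & 1 == 0:
            return "AF%d%d" % (af_class, drop_prec)
        if dscp == 0b101110:
            return "EF"
        return "unnamed (%d)" % dscp

    for value in (10, 20, 38, 46):
        print(value, "->", dscp_name(value))
    # 10 -> AF11, 20 -> AF22, 38 -> AF43, 46 -> EF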

The remaining two unused bits in the TOS byte have been the subject of some very interesting discussions lately. RFC 3168 defines a way to use them for Explicit Congestion Notification (ECN), similar in spirit to the Frame Relay FECN (Forward Explicit Congestion Notification) and BECN (Backward Explicit Congestion Notification) flags. This seems a natural place for this designation, since there is no congestion notification field anywhere else in the IPv4 or IPv6 headers. If packets carried this sort of information, routers could use adaptive processes to optimize forwarding behavior. If a link started to become congested, all upstream routers would automatically sense the problem and back off their sending rates before any application suffered from queue drops. This would be similar to the adaptive Frame Relay traffic shaping system that we discussed in Recipe 11.11. We look forward to seeing Cisco implement this feature in the future.

B.1. Combining TOS and IP Precedence to Mimic DSCP

You can also get the equivalent of DSCP, even on older routers that support only TOS and Precedence, by combining the TOS and Precedence values. All Assured Forwarding DSCP Class 1 values are equivalent to an IP Precedence value of 1, Priority. All Class 2 values correspond to IP Precedence 2, Immediate; Class 3 values to IP Precedence 3, Flash; and Class 4 corresponds to an IP Precedence value of 4, Flash Override. The higher IP Precedence values are not used for Assured Forwarding.

You can then select the appropriate Drop Precedence group from the TOS values. However, you have to be careful, since there are 4 TOS bits. Combining this with the 3 bits from IP Precedence gives you 7 bits to work with, while DSCP only uses the first 6. For example, looking at the bit values that give AF11 in Table B-3, you can see that the last three bits are 010. So the corresponding TOS value would be 0100, which is 4 in decimal, or maximum throughput.

In Table B-3, you can see that a TOS value of 4, maximum throughput, always gives the lowest AF Drop Precedence. Selecting a TOS value of 8, minimum delay, gives medium Drop Precedence in all classes. And you can get the highest Assured Forwarding Drop Precedence value by setting a TOS value of 12, which doesn’t have a standard name in the TOS terminology.
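
If you want to check these correspondences yourself, the arithmetic is simple. Here is an illustrative Python sketch of the combination just described: pack the 3 precedence bits and 4 TOS bits into 7 bits, then keep the first 6 as the DSCP value:

    def dscp_from_precedence_and_tos(precedence: int, tos: int) -> int:
        """Pack 3 precedence bits and 4 TOS bits, keep the first 6 as DSCP."""
        seven_bits = (precedence << 4) | tos
        return seven_bits >> 1

    # Precedence 1 (Priority) + TOS 4 (maximum throughput) -> 10, i.e. AF11
    print(dscp_from_precedence_and_tos(1, 4))
    # Precedence 3 (Flash) + TOS 8 (minimum delay) -> 28, i.e. AF32
    print(dscp_from_precedence_and_tos(3, 8))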

There is reasonably good interoperability between the AF DSCP values and the combination of IP Precedence and TOS, which is fortunate because a router generally cannot tell which convention was used to mark a packet. Only the three highest IP Precedence values are not represented, simply because the corresponding DSCP values are reserved for Expedited Forwarding and network control traffic rather than Assured Forwarding.

The biggest difference between the TOS and Assured Forwarding models is that, where the Assured Forwarding model is used to define a type of queueing, the TOS model is used to select a particular path. TOS was intended to work with a routing protocol, such as OSPF, to select the most appropriate path for a particular packet based on its TOS value. That is, when there are multiple paths available, a TOS-based OSPF (such as the version suggested in RFC 2676) would attempt to make a reasonable TOS assignment to each of them. If the router needed to forward a packet that was marked with a particular TOS value, it would attempt to find a route with the same TOS value. Note that Cisco never incorporated this type of functionality into its OSPF implementation, however. So, if TOS is going to have an effect on how packets are forwarded, you have to configure it manually by means of policy-based routing.

This was the historical intent for TOS, but in practice, most engineers found that it was easier to just use the TOS field to influence queueing behavior rather than path selection. So the IETF developed the more modern and useful DSCP formalism.

Assured Forwarding introduces the concept of Per-Hop Behavior (PHB). Each DSCP value has a corresponding well-defined PHB that the router uses not to select a path, but to define how it will forward the packet. The router will forward a packet marked AF13 along the same network path as a packet marked AF41 if they both have the same destination address. However, it will be more likely to drop the AF13 packet if there is congestion, and it will forward the AF41 packet first if there are several packets in the queue.

From this it should be clear why it is easier to implement AF than TOS-based routing on a network.

B.2. RSVP

The Resource Reservation Protocol (RSVP) is a signaling protocol that allows applications to request and reserve network resources, usually bandwidth. The core protocol is defined in RFC 2205. It is important to remember that RSVP is used only for requesting and managing network resources; RSVP does not carry user application data. Once the network has allocated the required resources, the application marks its packets for special treatment by setting the DSCP field to the EF value, 101110.

The process starts when an end device sends an RSVP PATH request into the network. The destination address of this request is the far end device that it wants to communicate with. The request packet includes information about the application’s source and destination addresses, protocol and port numbers, as well as its QoS requirements. It could specify a minimum required bandwidth, and perhaps also delay parameters. Each router along the path picks up this packet and figures out the best path to the destination.

Each router receiving an RSVP PATH request records the address of the router it came from, replaces the previous-hop address in the packet with its own, and forwards the packet to the next router along the path. When the PATH message reaches the far end device, that device sends an RSVP RESV message back along the same path, so the QoS parameters are actually reserved separately on each router-to-router hop. When a router receives one of these RESV packets, it knows that everything downstream of it, toward the receiver, is able to comply with the request. If it also has the resources to accommodate the requested QoS parameters, it sets aside the resources and passes the RESV packet along to the next upstream router. It may also send an RSVP CONFIRM message to acknowledge that the reservation will be honored. The routers periodically refresh the PATH and RESV state to ensure that the resources remain available.

If a router is not able to set aside the requested resources for whatever reason, it rejects the reservation. This may result in the entire path being rejected, but it can also just mean that the network will reserve the resources everywhere except on this one router-to-router link.
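
To illustrate the hop-by-hop nature of the process, here is a deliberately simplified Python sketch of per-hop admission control. The class names and numbers are our own invention for illustration, not the RFC 2205 message formats:

    class Hop:
        """One router-to-router link with a limited reservable bandwidth pool."""
        def __init__(self, reservable_kbps):
            self.reservable_kbps = reservable_kbps
            self.committed_kbps = 0

        def admit(self, request_kbps):
            """Reserve bandwidth for a flow if this hop can accommodate it."""
            if self.committed_kbps + request_kbps <= self.reservable_kbps:
                self.committed_kbps += request_kbps
                return True
            return False

    # Three hops along the path; a 64Kbps request (e.g., one voice call) must
    # be admitted separately on every hop, just as RESV processing works per link.
    path = [Hop(256), Hop(128), Hop(256)]
    print([hop.admit(64) for hop in path])   # [True, True, True] until a hop fills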

It would clearly be counterproductive if every device on the network could request as much bandwidth as it wanted, whenever it wanted. This would leave few network resources for routine applications. So usually when you configure a router for RSVP, you set aside only a relatively small fraction of the total bandwidth on a link for reservation. Further, you will often want to restrict which source addresses are permitted to make RSVP requests.

Because RSVP makes its reservation requests separately on each link, it can easily accommodate multicast flows. In this case, you have to be careful that the periodic updates happen quickly so that any new multicast group members won’t have to wait long before they start to receive data. Please refer to Chapter 23 for a more detailed discussion of multicast services.

RSVP is an extremely useful technique for reserving network resources for real-time applications such as Voice over IP (VoIP). However, because it forces the routers to keep detailed information on individual data flows, it doesn’t scale well in large networks. RSVP is most useful at the edges of a large network, where you can reserve bandwidth entering the core. However, you probably don’t want it running through the core of your network.

In large networks, it is common to use RSVP only at the edges of the network, with more conventional DSCP-based methods controlling QoS requirements in the core.

B.3. Queueing Algorithms

You can implement several different queueing algorithms on Cisco routers. The most common type is Weighted Fair Queueing (WFQ), which is enabled by default on low-speed interfaces. There is also a class-based version of WFQ called Class-Based Weighted Fair Queueing (CBWFQ). These algorithms have the advantage of being fast, reliable, and easy to implement. However, in some cases, you might want to consider some of the other queueing systems available on Cisco routers.

Priority Queueing lets you specify absolute prioritization in your network so that more important packets always precede less important ones. This can be useful, but it is often dangerous in practice.

The other important queueing algorithm on Cisco routers is Custom Queueing, which allows you direct control over many of the queueing parameters.

Weighted Fair Queueing

A flow is loosely defined as the stream of packets associated with a single session of a single application. The common IP implementations of Fair Queueing (FQ) and WFQ assume that two packets are part of the same flow if they have the same source and destination IP addresses, the same source and destination TCP or UDP port numbers, and the same IP protocol field value. The algorithms combine these five values into a hash number, and sort the queued packets by this hash number.
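
As a rough illustration of this classification step, the following Python sketch maps a packet’s 5-tuple to a flow bucket. The actual IOS hash function is internal and undocumented; this just shows the idea:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class FiveTuple:
        src_ip: str
        dst_ip: str
        protocol: int
        src_port: int
        dst_port: int

    def flow_bucket(pkt: FiveTuple, buckets: int = 256) -> int:
        """Map a packet's 5-tuple onto one of a fixed number of flow queues."""
        return hash(pkt) % buckets

    a = FiveTuple("10.1.1.1", "10.2.2.2", 6, 40000, 80)
    b = FiveTuple("10.1.1.1", "10.2.2.2", 6, 40000, 80)
    print(flow_bucket(a) == flow_bucket(b))   # True: same flow, same queue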

The router then assigns sequence numbers to the queued packets. In the FQ algorithm, this process of sequencing the packets is optimized so that each flow gets a roughly equal share of the available bandwidth. As it receives each packet, the router assigns a sequence number based on the length of this packet and the total number of bytes associated with this same flow that are already in the queue.

This has a similar effect to a flow-based Round Robin (RR) queueing algorithm, in which all of the flows are assigned to different queues. These queues are then processed a certain number of bytes at a time until enough bytes have accumulated for a given queue to send a whole packet. Although this is a useful way of picturing the algorithm mentally, it is important to remember that the Cisco implementations of FQ and WFQ do not actually work this way. They keep all of the packets in a single queue. So, if there is a serious congestion problem, you will still get global tail drops.

This distinction is largely irrelevant for FQ, but for WFQ it’s quite important. WFQ introduces another factor besides flow and packet size into the sequence numbers. The new factor is the weight (W), which is calculated from the IP Precedence (P) value. For IOS levels after 12.0(5)T, the formula is:

W = 32768/(P+1)

For all earlier IOS levels, the weight was smaller by a factor of eight:

W = 4096/(P+1)

Cisco increased the value to allow for finer control over weighting granularity.

The weight number for each packet is multiplied by the length of the packet when calculating sequence numbers. The result is that the router gives flows with higher IP Precedence values a larger share of the bandwidth than those with lower precedence. In fact, it is easy to calculate the relative scaling of the bandwidth shares of flows with different precedence values.
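
A small Python sketch can illustrate the effect, using the post-12.0(5)T weight formula. The bookkeeping in real WFQ is more involved, but the proportions come out the same:

    def weight(precedence: int) -> int:
        """WFQ weight for IOS levels after 12.0(5)T."""
        return 32768 // (precedence + 1)

    def next_sequence_number(last_seq: int, precedence: int, length: int) -> int:
        """Packets with lower products are sent first, so smaller weights
        (higher precedence) drain proportionally faster."""
        return last_seq + weight(precedence) * length

    # Two flows sending 500-byte packets: Routine (0) versus Flash Override (4).
    print(next_sequence_number(0, 0, 500))   # 16384000
    print(next_sequence_number(0, 4, 500))   # 3276500: scheduled much earlier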

Table B-4 shows, for example, that a flow with Flash Override Precedence will get five times the bandwidth of a flow with Routine Precedence. However, if all of the flows have the same precedence, WFQ behaves exactly the same as FQ.

Table B-4. Relative share of bandwidth in WFQ by IP Precedence

Precedence name        Value   Relative share of bandwidth
Routine                0       1
Priority               1       2
Immediate              2       3
Flash                  3       4
Flash Override         4       5
Critical               5       6
Internetwork control   6       7
Network control        7       8

These algorithms tend to do three things. First, they prevent individual flows from interfering with one another. Second, they tend to reduce queueing latency for applications with smaller packets. Third, they ensure that all of the packets from a given flow are delivered in the same order that they were sent.

In practice, of course, a router has limited memory resources, so there is a limit to how many flows it can handle. If the number of flows is too large or the volume of traffic is too high, the router will start to have trouble with the computation. So these algorithms tend to be best on low-speed interfaces. WFQ is enabled by default on all interfaces with bandwidth of E1 (roughly 2Mbps) or less. The only exceptions are interfaces that use SDLC or LAPB link layer protocols, which require FIFO queueing.

Cisco provides several mechanisms to improve the bandwidth scaling of queueing algorithms. The first is Distributed Weighted Fair Queueing (DWFQ), which is available only in routers that have Versatile Interface Processor (VIP) cards, such as the 7500 series routers or the older 7000 series with RSP7000 processors. DWFQ is essentially the same as WFQ, except that the router is able to distribute the queueing calculations to the various VIP modules. There is also another important difference: DWFQ uses a different sorting algorithm called Calendar Queueing, which uses much more memory but operates much faster. This tradeoff means that you can use DWFQ on a VIP2-50 card containing Port Adapter Modules (PAMs) with an aggregate line speed of up to OC-3. In fact, if the aggregate line speed is greater than DS-3 (45Mbps), we recommend using DWFQ only on a VIP2-50 or faster card. Cisco claims that DWFQ can operate at up to OC-3 speeds. However, if you need to support several interfaces that aggregate to OC-3 speeds on one VIP module, you may want to consider a different queueing strategy, particularly CBWFQ.

The next popular queueing strategy on Cisco routers, particularly for higher speed interfaces, is CBWFQ. CBWFQ is similar to WFQ, except that it doesn’t group traffic by flows. Instead, it groups by traffic classes. A class is simply some logical grouping of traffic. It could be based on IP Precedence values, source addresses, input interface, or a variety of other locally useful rules that you can specify on the router.

The principal advantage of CBWFQ is that it allows you to expand the functionality of WFQ to higher speeds by eliminating the need to keep track of a large number of flows. But there is another important advantage to CBWFQ. The most common and sensible way to use CBWFQ is to assign the classes according to precedence or DSCP values. You can then manually adjust the weighting factors for the different classes. As you can see in Table B-4, the standard WFQ weighting factors give traffic with an IP Precedence value of 1 twice as much bandwidth as Precedence 0 traffic. However, Precedence 7 traffic gets only about 14% more bandwidth than Precedence 6 traffic. For many applications, these arbitrary weighting factors are not appropriate. So the ability to adjust them can come in handy if you need to give your highest-priority traffic a larger share of the bandwidth.

Priority Queueing

Priority Queueing (PQ) is an older queueing algorithm that handles traffic with different precedence levels much more rigidly. The Cisco implementation of Priority Queueing uses four distinct queues called high priority, medium priority, normal priority, and low priority. The PQ algorithm maintains an extremely strict concept of priority: if there are any packets in a higher priority queue, they must be sent before any packets in the lower priority queues.

Some types of critical real-time applications that absolutely cannot wait for low priority traffic work well with PQ. However, there is an obvious problem with this strategy: if the volume of traffic in the higher priority queues is greater than the link capacity, then no traffic from the lower priority queues will be forwarded. PQ starves low priority applications in these cases.

So a pure PQ implementation requires that you have an extremely good understanding of your traffic patterns. The high priority traffic must represent a small fraction of the total, with the lowest priorities having the largest net volume. Further, you must have enough link capacity that the PQ algorithm is only used during peak bursts. If there is routine link congestion, PQ will give extremely poor overall performance.

However, Cisco has recently implemented a new hybrid queue type, called Low Latency Queueing (LLQ), which you can use with CBWFQ to get the best features of PQ while avoiding the queue starvation problem. The idea is simply to use CBWFQ for all of the traffic except for a small number of subqueues that are reserved strictly for relatively low-volume real-time applications. The router services the real-time queues using a strict priority scheme and the others using CBWFQ. So, if there is a packet in one of the real-time queues, the router will transmit it before looking in the other queues. However, when there is nothing in the priority queues, the router will use normal CBWFQ for everything else.

LLQ also includes the stipulation that if the volume of high priority traffic exceeds a specified rate, the router will stop giving it absolute priority. This guarantees that LLQ will never starve the low priority queues.
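
The following Python sketch caricatures the LLQ idea: a strict-priority queue held to a crude byte budget that stands in for the policer, with a fallback to the class queues. The real CBWFQ arbitration and token-bucket policing are considerably more sophisticated:

    from collections import deque

    class LLQScheduler:
        def __init__(self, priority_budget_bytes):
            self.priority_q = deque()
            self.class_qs = [deque() for _ in range(4)]
            # Crude stand-in for the policer: bytes the priority queue may
            # send per interval before losing its absolute priority.
            self.priority_budget_bytes = priority_budget_bytes

        def dequeue(self):
            # Serve the strict-priority queue first, within its policed budget.
            if self.priority_q and len(self.priority_q[0]) <= self.priority_budget_bytes:
                pkt = self.priority_q.popleft()
                self.priority_budget_bytes -= len(pkt)
                return pkt
            # Otherwise fall back to the class queues; plain round robin here
            # stands in for the real CBWFQ arbitration.
            for q in self.class_qs:
                if q:
                    return q.popleft()
            return None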

This model is best suited to applications like voice or video where the real-time data comes in a fairly continuous but low bandwidth stream of small packets, as opposed to more bursty applications such as file transfers.

Custom Queueing

Custom Queueing (CQ) is one of Cisco’s most popular queueing strategies. CQ was originally implemented to address the clear shortcomings of PQ. It lets you configure how many queues are to be used, what applications will use which queues, and how the queues will be serviced. Where PQ has only 4 queues, CQ allows you to use up to 16. And, perhaps most importantly, it includes a separate system queue so that user application data cannot starve critical network control traffic.

CQ is implemented as a round-robin queueing algorithm. The router takes a certain predetermined amount of data from each queue on each pass, which you can configure as a number of bytes. This allows you to specify approximately how much of the bandwidth each queue will receive. For example, if you have four queues, all set to the same number of bytes per pass, you can expect to send roughly equal amounts of data for all of these applications. Since the queues are used only when the network link is congested, this means that each of the four applications will receive roughly one quarter of the available bandwidth.

However, it is important to remember that the router will always take data one packet at a time. If you have a series of 1500-byte packets sitting in a particular queue and have configured the router to take 100 bytes from this queue on each pass, it will actually transmit 1 entire packet each time, and not 1 every 15 times. This is important because it can mean that your calculations of the relative amounts of bandwidth allocated to each queue might be different from what the router actually sends.
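
Here is a minimal Python sketch of that service rule. Note how a 100-byte allowance still releases a whole 1500-byte packet:

    from collections import deque

    def service_pass(queue: deque, byte_count: int) -> list:
        """Take whole packets from one queue until byte_count is met or exceeded."""
        sent, taken = [], 0
        while queue and taken < byte_count:
            pkt = queue.popleft()   # always one entire packet at a time
            sent.append(pkt)
            taken += len(pkt)
        return sent

    q = deque([b"x" * 1500, b"x" * 1500])
    print(len(service_pass(q, 100)))   # 1: a whole 1500-byte packet, not 100 bytes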

This difference tends to disappear as you increase the number of bytes taken each time the queues are serviced. But you don’t want to let the number get too large or you will cause unnecessary latency problems for your applications. For example, if the byte count for each of your four queues is 10,000 bytes, and all of the queues are full, the router will send 10,000 bytes from the first queue, then 10,000 bytes from the second queue, and so on. From the time it finishes servicing the first queue until the time that it returns to service it again, it will have sent 30,000 bytes. It takes roughly 160ms to send this much data through a T1 link, but the gap between the previous two packets in this queue was effectively zero. Variations in latency like this are called jitter, and they can cause serious problems for many real-time applications.

So, as with all of the other queueing algorithms we have discussed, CQ has some important advantages and disadvantages. Chapter 11 contains recipes that implement all of the queueing varieties we have discussed. You need to select the one that matches your network requirements best. None of them is perfect for all situations.

B.4. Dropping Packets and Congestion Avoidance

Imagine a queue that holds packets as they enter a network bottleneck. These packets carry data for many different applications to many different destinations. If the amount of traffic arriving is less than the available bandwidth in the bottleneck, then the queue just holds the packets long enough to transmit them downstream. Queues become much more important if there is not enough bandwidth in the bottleneck to carry all of the incoming traffic.

If the excess is a short burst, the queue will attempt to smooth the flow rate, delivering the first packets as they are received and delaying the later ones briefly before transmitting them. However, if the burst is longer, or more like a continuous stream, the queue will have to stop accepting new packets while it deals with the backlog. The queue simply discards the overflowing inbound packets. This is called a tail drop.

Some applications and protocols deal with dropped packets more gracefully than others. For example, if an application doesn’t have the ability to resend the lost information, then a dropped packet could be devastating. On the other hand, some real-time applications don’t want their packets delayed. For these applications, it is better to drop the data than to delay it.

From the network’s point of view, some protocols are better behaved than others. Applications that use TCP are able to adapt to dropping an occasional packet by backing off and sending data at a slower rate. However, many UDP-based protocols will simply send as many packets as they can stuff into the network. These applications will keep sending packets even if the network can’t deliver them.

Even if all applications were TCP-based, however, there would still be some applications that take more than their fair share of network resources. If the only way to tell them to back off and send data more slowly is to wait until the queue fills up and starts to tail drop new packets, then it is quite likely that the wrong traffic flows will be instructed to slow down. However, an even worse problem called global synchronization can occur in an all-TCP network with a lot of tail drops.

Global synchronization happens when several different TCP flows all suffer packet drops simultaneously. Because the applications all use the same TCP mechanisms to control their flow rate, they will all back off in unison. TCP then starts to automatically increase the data rate until it suffers from more packet drops. Since all of the applications use the same algorithm for this process, they will all increase in unison until the tail drops start again. This whole wavelike oscillation of traffic rates will repeat as long as there is congestion.

Random Early Detection (RED) and its cousin, Weighted Random Early Detection (WRED), are two mechanisms to help avoid this type of problem, while at the same time keeping one flow from dominating. These algorithms assume that all of the traffic is TCP-based. This is important because UDP applications get absolutely no benefit from RED or WRED.

RED and WRED try to prevent tail drops by preemptively dropping packets before the queue is full. If the link is not congested, then the queue is always more or less empty, so these algorithms don’t do anything. However, when the queue depth reaches a minimum threshold, RED and WRED start to drop packets at random. The idea is to take advantage of the fact that TCP applications will back off their sending rate if they drop a packet. By randomly thinning out the queue before it becomes completely full, RED and WRED keep the TCP applications from overwhelming the queue and causing tail drops.

The packets to be dropped are selected at random. This has a couple of important advantages. First, the busiest flow is likely to be the one with the most packets in the queue, and therefore the most likely to suffer packet drops and be forced to back off.

Second, by dropping packets at random, the algorithm effectively eliminates the global synchronization problems discussed earlier.

The probability of dropping a packet rises linearly with the queue depth, starting from a specified minimum threshold up to a maximum value. A simple example can help to explain how this works. Suppose the minimum threshold is set to 5 packets in the queue, and the maximum is set to 15 packets. If there are fewer than 5 packets in the queue, RED will not drop anything. When the queue depth reaches the maximum threshold, RED will drop one packet in 10. If there are 10 packets in the queue, then it is exactly halfway between the minimum and maximum thresholds and RED will drop half as many packets as it will at the maximum threshold: 1 packet in 20. Similarly, 7 packets in the queue represents 20% of the distance between the minimum and maximum thresholds, so the drop probability will be 20% of the maximum: 1 packet in 50.
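
The following Python sketch reproduces this calculation, using the same numbers as the example (minimum threshold 5, maximum threshold 15, and one packet in 10 dropped at the maximum):

    def red_drop_probability(queue_depth, min_th=5, max_th=15, max_p=0.1):
        if queue_depth < min_th:
            return 0.0    # below the minimum threshold: no random drops
        if queue_depth > max_th:
            return 1.0    # beyond the maximum threshold: back to tail drops
        return max_p * (queue_depth - min_th) / (max_th - min_th)

    for depth in (4, 7, 10, 15):
        print(depth, red_drop_probability(depth))
    # 4 -> 0.0, 7 -> 0.02 (1 in 50), 10 -> 0.05 (1 in 20), 15 -> 0.1 (1 in 10)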

If the queue fills up despite the random drops, then the router has no choice but to resort to tail drops, the same as if there were no sophisticated congestion avoidance. So RED and WRED have a particularly clever way of telling the difference between a momentary burst and longer-term heavy traffic volume, because they need to be much more aggressive with persistent congestion problems.

Instead of using a constant queue depth threshold value, these algorithms base the decision to drop packets on an exponential moving time averaged queue depth. If the queue fills because of a momentary burst of packets, RED will not start to drop packets immediately. However, if the queue continues to be busy for a longer period of time, the algorithm will be increasingly aggressive about dropping packets. This way the algorithm doesn’t disrupt short bursts, but it will have a strong effect on applications that routinely overuse the network resources.

The WRED algorithm is similar to RED, except that it selectively prefers to drop packets that have lower IP Precedence values. Cisco routers achieve this by simply having a lower minimum threshold for lower precedence traffic. So, as the congestion increases, the router will tend to drop packets with lower precedence values. This tends to protect important traffic at the expense of less important applications. However, it is also important to bear in mind that this works best when the amount of high precedence traffic is relatively small.
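
Here is an illustrative Python sketch of these two refinements: the moving average that distinguishes bursts from persistent congestion, and per-precedence minimum thresholds. The specific weight and threshold numbers are ours, not Cisco defaults:

    def update_average_depth(avg, current_depth, weight=1.0 / 512):
        """Exponential moving average: a short burst barely moves it, while
        persistent congestion pushes it steadily toward the real depth."""
        return avg + weight * (current_depth - avg)

    def min_threshold(precedence, base=10):
        """Lower precedence -> lower threshold -> dropped earlier under congestion."""
        return base + 2 * precedence

    avg = 0.0
    for _ in range(100):                  # sustained congestion at depth 20
        avg = update_average_depth(avg, 20)
    print(round(avg, 1))                  # ~3.6: the average is still catching up
    print(min_threshold(0), min_threshold(5))   # 10 vs. 20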

If there is a lot of high priority traffic in the queue, it will not tend to benefit much from the efficiency improvements typically offered by WRED. In this case, you will likely see only a slight improvement over the characteristics of ordinary tail drops. This is yet another reason for being careful in your traffic categorization and not being too generous with the high precedence values.

Flow-based WRED is an interesting variant on WRED. In this case, the router makes an effort to separate out the individual flows in the router and penalize only those that are using more than their share of the bandwidth. The router does this by maintaining a separate drop probability for each flow based on their individual moving averages. The heaviest flows with the lowest precedence values tend to have the most dropped packets. However, it is important to note that the queue is congested by all the traffic, not just the heaviest flows. So the lighter flows will also have a finite drop probability in this situation. But, the fact that the heavy flow will have more packets in the queue, combined with the higher drop probability for these heavier flows, means that you should expect them to contribute most of the dropped packets.
