What Information Is Distributed

So what's involved in information distribution? Information distribution is easily broken down into three pieces:

  • What information is distributed and how you configure it

  • When information is distributed and how you control when flooding takes place

  • How information is distributed (protocol-specific details)

Chapter 4 details exactly how this information is used to calculate a TE tunnel path.

As discussed in previous chapters, the idea behind MPLS Traffic Engineering is to allow routers to build paths using information other than the shortest IP path. But what information is distributed to allow the routers to make more intelligent path calculations?

MPLS Traffic Engineering works by using OSPF or IS-IS to distribute information about available resources in your network. Three major pieces of information are distributed:

  • Available bandwidth information per interface, broken out by priority to allow some tunnels to preempt others

  • Attribute flags per interface

  • Administrative weight per interface

Each of these three is advertised on a per-link basis. In other words, a router advertises available bandwidth, attribute flags, and administrative metric for all the links that are involved in MPLS Traffic Engineering.

Available Bandwidth Information

Perhaps the most appealing attribute of MPLS Traffic Engineering is the capability to reserve bandwidth across your network. You configure the amount of reservable bandwidth on a link using the following per-interface command:

router(config-if)#ip rsvp bandwidth [total-reservable-bandwidth (1-10000000)
  [per-flow-bandwidth]]

This command can take two parameters. The first is the amount of total reservable bandwidth on the interface, in kbps. The second is the maximum amount of bandwidth that can be reserved per flow on the interface. The per-flow maximum is irrelevant to MPLS Traffic Engineering and is ignored. However, the ip rsvp bandwidth command is used for more than just MPLS Traffic Engineering, and the per-flow-bandwidth parameter has relevance to non-MPLS-related RSVP.

NOTE

Currently, RSVP for MPLS TE and “classic RSVP” (for IP microflows) do not play nicely together. You cannot enable an RSVP pool and have it reserved by both RSVP for MPLS TE and classic RSVP. The heart of the issue is that if you configure ip rsvp bandwidth 100 100, both MPLS TE and classic RSVP (as used for Voice over IP, DLSW+, and so on) each think they have 100 kbps of bandwidth available. This behavior is not a feature. If you're going to use RSVP, use it for MPLS TE or IP signalling; don't use both in your network at the same time.


If you don't configure the ip rsvp bandwidth command, the default reservable bandwidth advertised for that interface is 0. A common cause of tunnels not coming up during testing or initial deployment of MPLS Traffic Engineering is configuring a TE tunnel to require a certain amount of bandwidth but forgetting to configure available bandwidth on any link. See Chapter 11 for more details on how to detect this.

You don't have to specify a value for the total-reservable-bandwidth value in the ip rsvp bandwidth command. If you don't specify a value, the default is 75 percent of the link bandwidth. The link bandwidth is determined by the interface type or the per-interface bandwidth command.
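As a quick sanity check, the default works out like this (a sketch in Python; the 75 percent figure is from the text, and the link speeds are illustrative):

```python
def default_reservable_kbps(link_bw_kbps):
    """Default RSVP reservable bandwidth when no value is configured:
    75 percent of the link bandwidth."""
    return int(link_bw_kbps * 0.75)

# An OC-3 advertises roughly 155,000 kbps of physical bandwidth, so the
# default reservable bandwidth is the 116250K seen in later examples.
print(default_reservable_kbps(155000))  # 116250
```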

The per-flow maximum defaults to being equal to the total-reservable-bandwidth parameter, but as previously mentioned, it's irrelevant anyway. So fuggetaboutit.

How much bandwidth do you allocate to the interface? That's a larger question than it might seem at first. It has to do with your oversubscription policies and how you enforce them; see Chapter 10, “MPLS TE Deployment Tips,” for more information.

You can double-check your configuration using the command show ip rsvp interface, as demonstrated in Example 3-4.

Example 3-4. Checking RSVP Bandwidth Status for an Interface
gsr12>show ip rsvp interface
interface    allocated  i/f max  flow max pct UDP  IP   UDP_IP   UDP M/C
PO0/0        0M         116250K  116250K  0   0    0    0        0
PO0/2        0M         116250K  116250K  0   0    0    0        0
PO4/2        233250K    466500K  466500K  50  0    1    0        0

The only columns that are relevant to MPLS TE are interface, allocated, i/f max, pct, and IP. pct is the percentage of reservable bandwidth that has actually been reserved across the link. IP is the number of IP tunnels (in this case, TE tunnels) reserved across that link.

In this example, a single tunnel across interface POS4/2 has reserved 233250 kbps, or 50 percent, of the total link bandwidth.

As MPLS Traffic Engineering tunnels reserve link bandwidth, the amount of allocated bandwidth changes, but the maximum available bandwidth does not. The amount of currently reservable bandwidth on an interface is the configured reservable interface bandwidth minus the currently allocated bandwidth. In addition to configuring the per-link available bandwidth, you also can configure the amount of bandwidth required by a tunnel. This sets one of the values the tunnel headend uses in its path calculation; see Chapter 4 for more details on how this calculation is performed.
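In other words (a small illustrative sketch, not IOS code; the numbers come from the PO4/2 line of Example 3-4):

```python
def currently_reservable_kbps(configured_max_kbps, allocated_kbps):
    """Currently reservable bandwidth = configured reservable maximum
    minus what is already allocated to tunnels."""
    return configured_max_kbps - allocated_kbps

# PO4/2 in Example 3-4: 466500K i/f max, 233250K allocated,
# so another 233250 kbps can still be reserved.
print(currently_reservable_kbps(466500, 233250))  # 233250
```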

Why do you need to configure both the per-interface and the tunnel bandwidth? It's simple. The per-interface configuration tells the network how much bandwidth is available on an interface, and the per-tunnel configuration at the headend tells the headend how much of the announced bandwidth to consume.

Example 3-5 shows the configuration on the tunnel headend.

Example 3-5. Configuring the Desired Bandwidth on the Tunnel Headend
interface Tunnel0
 ip unnumbered Loopback0
 tunnel mode mpls traffic-eng
 tunnel destination 192.168.1.8
 tunnel mpls traffic-eng path-option 10 dynamic
 tunnel mpls traffic-eng bandwidth kbps

Most commands that modify the behavior of a TE tunnel headend are configured on traffic engineering tunnels, as opposed to physical interfaces or in the global configuration. All the commands configured on a traffic engineering tunnel start with tunnel mpls traffic-eng. Keep this in mind as you learn more about how to configure tunnel interfaces.

You can verify the bandwidth setting on a tunnel interface using the show mpls traffic-eng tunnels tunnel-interface command. The output shown in Example 3-6 comes from a traffic engineering tunnel interface configured with tunnel mpls traffic-eng bandwidth 97.

Example 3-6. Verifying Bandwidth Settings on a Tunnel Interface
gsr1#show mpls traffic-eng tunnels Tunnel0

Name: gsr1_t0                             (Tunnel0) Destination: 192.168.1.8
  Status:
    Admin: up         Oper: up     Path: valid       Signalling: connected

    path option 10, type explicit bottom (Basis for Setup, path weight 40)

  Config Parameters:
    Bandwidth: 97       kbps (Global)  Priority: 7  7   Affinity: 0x0/0xFFFF
    AutoRoute:  enabled   LockDown: disabled  Loadshare: 97       bw-based
    auto-bw: disabled(0/218) 0  Bandwidth Requested: 97

  InLabel  :  -
  OutLabel : POS3/0, 35

The Bandwidth: 97 under Config Parameters indicates that the tunnel is configured to request 97 kbps of bandwidth.

The Loadshare and Bandwidth Requested values are also 97. Loadshare has to do with sharing traffic among multiple tunnels; see Chapter 5, “Forwarding Traffic Down Tunnels,” for more details. Bandwidth Requested is the amount of currently requested bandwidth. It is different from the Bandwidth value if you change the requested bandwidth and the new reservation has not yet come up.

Tunnel Priority

Some tunnels are more important than others. For example, you might have tunnels carrying VoIP traffic and tunnels carrying data traffic that are competing for the same resources. Or you might have a packing problem to solve (see Chapter 9, “Network Design with MPLS TE”). Or you might simply have some data tunnels that are more important than others. No matter what your motivation, you might be looking for a way to have some tunnels preempt others.

MPLS TE gives you a mechanism to do this. Each tunnel has a priority, and more-important tunnels take precedence over less-important tunnels. Less-important tunnels are pushed out of the way and are made to recalculate a path, and their resources are given to the more-important tunnel.

Priority Levels

A tunnel can have its priority set anywhere from 0 to 7. Confusingly, the higher the priority number, the lower the tunnel's importance! A tunnel of priority 7 can be preempted by a tunnel of any other priority, a tunnel of priority 6 can be preempted by any tunnel with a priority of 5 or lower, and so on down the line. A tunnel of priority 0 cannot be preempted.
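That numeric rule can be sketched in a couple of lines (an illustration of the rule as described, not IOS code):

```python
def can_preempt(new_priority, existing_priority):
    """A tunnel may preempt another only if its priority is numerically
    lower (that is, more important) than the existing tunnel's."""
    return new_priority < existing_priority

# A priority-7 tunnel can be preempted by any other priority (0-6)...
assert all(can_preempt(p, 7) for p in range(7))
# ...but a priority-0 tunnel cannot be preempted at all, because no
# priority is numerically lower than 0.
assert not any(can_preempt(p, 0) for p in range(8))
```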

To avoid confusion, the correct terminology to use is “better” and “worse” rather than “higher” and “lower.” A tunnel of priority 3 has a better priority than a tunnel of priority 5. It's also valid to use “more important,” meaning a numerically lower priority, and “less important,” which is a numerically higher priority.

If you use this terminology, you won't get stuck on the fact that a tunnel of priority 3 has a higher priority (meaning it takes precedence) but a lower priority value (meaning it's numerically lower) than a tunnel of priority 5.

Preemption Basics

The basic idea is that some tunnels are more important than others. The more-important tunnels are free to push other tunnels out of their way when they want to reserve bandwidth. This is called tunnel preemption.

But how does it all really work?

Figure 3-1 shows the topology used in the examples in this section.

Figure 3-1. One Tunnel on a Network


The link to pay attention to is the OC-3 between RouterC and RouterD. Currently, there is a 42-Mbps tunnel from RouterA to RouterD. This tunnel is configured with a default priority of 7.

For starters, consider IGP flooding. Both OSPF and IS-IS flood not only the available bandwidth on a link, but also the available bandwidth at each priority level. Example 3-7 shows the link information advertised for the RouterC to RouterD link.

Example 3-7. Link Information Advertised for the RouterC to RouterD Link with One Tunnel Reservation
RouterC#show mpls traffic-eng topology 192.168.1.5

IGP Id: 0168.0001.0005.00, MPLS TE Id:192.168.1.5 Router Node  (isis  level-2) id 4
      link[0]: Point-to-Point, Nbr IGP Id: 0168.0001.0006.00, nbr_node_id:5, gen:33
          frag_id 0, Intf Address:192.168.10.5, Nbr Intf Address:192.168.10.6
          TE metric:10, IGP metric:10, attribute_flags:0x0
          physical_bw: 155000 (kbps), max_reservable_bw_global: 116250 (kbps)
          max_reservable_bw_sub: 0 (kbps)

                                 Global Pool       Sub Pool
               Total Allocated   Reservable        Reservable
               BW (kbps)         BW (kbps)         BW (kbps)
               ---------------   -----------       ----------
        bw[0]:            0           116250                0
        bw[1]:            0           116250                0
        bw[2]:            0           116250                0
        bw[3]:            0           116250                0
        bw[4]:            0           116250                0
        bw[5]:            0           116250                0
        bw[6]:            0           116250                0
        bw[7]:        42000            74250                0

The bw[0] through bw[7] rows of this output show the available bandwidth at each priority level. Ignore the Sub Pool Reservable column; that's addressed in Chapter 6, “Quality of Service with MPLS TE.”

This output shows that 42 Mbps (42,000 kbps) has been reserved at priority level 7. This bandwidth is used by the tunnel from RouterA to RouterD. This means that other TE tunnels that run CSPF over this link and have their own priorities set to 7 see 74.25 Mbps available on this link (74,250 kbps). TE tunnels that have their priorities set to 6 or lower see 116.25 Mbps (116,250 kbps) reservable.

Figure 3-2 shows the same topology as in Figure 3-1, but with another TE tunnel, at priority 5 and reserving 74.25 Mbps, from RouterB to RouterD.

Figure 3-2. Two Tunnels of Different Priorities


Example 3-8 shows the IGP announcement for the link between RouterC and RouterD.

Example 3-8. Link Information Advertised with Two Tunnels
RouterC#show mpls traffic-eng topology 192.168.1.5
IGP Id: 0168.0001.0005.00, MPLS TE Id:192.168.1.5 Router Node  (isis  level-2)
  id 4
      link[0]: Point-to-Point, Nbr IGP Id: 0168.0001.0006.00, nbr_node_id:5,
  gen:55
          frag_id 0, Intf Address:192.168.10.5, Nbr Intf Address:192.168.10.6
          TE metric:10, IGP metric:10, attribute_flags:0x0
          physical_bw: 155000 (kbps), max_reservable_bw_global: 116250 (kbps)
          max_reservable_bw_sub: 0 (kbps)

                                 Global Pool       Sub Pool
               Total Allocated   Reservable        Reservable
               BW (kbps)         BW (kbps)         BW (kbps)
               ---------------   -----------       ----------
        bw[0]:            0           116250                0
        bw[1]:            0           116250                0
        bw[2]:            0           116250                0
        bw[3]:            0           116250                0
        bw[4]:            0           116250                0
        bw[5]:        74250            42000                0
        bw[6]:            0            42000                0
        bw[7]:        42000                0                0

Figure 3-2 shows a tunnel of priority 5 that has reserved 74.25 Mbps and a tunnel of priority 7 that has reserved 42 Mbps. The Global Pool Reservable column tells how much bandwidth is available to a given priority level. So if a CSPF is run for a tunnel of priority 7, this link is seen to have 0 bandwidth available. For a tunnel of priority 5 or 6, this link has 42 Mbps available, and for a tunnel of priority 4 or less, this link has 116.25 Mbps left.
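The per-priority accounting in Examples 3-7 and 3-8 can be reproduced with a short sketch (the function name and data layout are illustrative):

```python
def reservable_per_priority(max_reservable_kbps, reservations):
    """reservations: (priority, bandwidth_kbps) pairs for tunnels on the
    link. Bandwidth held at a given priority is unavailable to that
    priority and to every numerically higher (less-important) one."""
    return [max_reservable_kbps - sum(bw for prio, bw in reservations
                                      if prio <= p)
            for p in range(8)]

# Example 3-7: a single 42-Mbps tunnel at priority 7.
print(reservable_per_priority(116250, [(7, 42000)]))
# [116250, 116250, 116250, 116250, 116250, 116250, 116250, 74250]

# Example 3-8: add a 74.25-Mbps tunnel at priority 5.
print(reservable_per_priority(116250, [(5, 74250), (7, 42000)]))
# [116250, 116250, 116250, 116250, 116250, 42000, 42000, 0]
```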

If the tunnel of priority 5 decides to increase its bandwidth request to 75 Mbps, the 42-Mbps tunnel can't fit on this link any more without disrupting Tunnel0, the tunnel from RouterA to RouterD. What happens?

Tunnel0 (the 42-Mbps tunnel with a priority of 7) will be torn down. After Tunnel0 is torn down, RouterA periodically tries to find a new path for the tunnel to take.

In Figure 3-3, there is no new path, and the tunnel remains down; in a real network, there might be an alternative path from RouterA to RouterD that has 42 Mbps available, in which case the tunnel will come up. Chapter 4 covers the details of the path calculation that RouterA goes through—a process often called Constrained Shortest Path First (CSPF).

Figure 3-3. RouterA Searching for a New Tunnel Path


When setting up its 75-Mbps path with a priority of 5, RouterB does not take into account the possible existence of other tunnels at different priority levels; all it knows about is the bandwidth available at its own priority level. If there were a path through this network that RouterB could take without disrupting RouterA's reservation, but that path did not meet all of RouterB's constraints at the shortest distance, RouterB would not take it. Tunnel preemption is not polite; it is aggressive and rude in getting what it wants. This makes sense: more-important tunnels are allowed to take whatever resources they need.

That's how preemption works. See the section “Packing Problem” in Chapter 9 for an idea of when and where you'd use priority.

Setup and Holding Priority

The preceding section explained that each tunnel has a priority. However, that's not quite true. Each tunnel has two priorities—a Setup priority and a Hold priority. They're almost always treated as a single priority, but it's worth understanding how they work and why there are two priorities instead of one.

RFC 3209 defines both a Setup priority and a Holding priority. It models them after the Preemption and Defending priorities in RFC 2751. The idea is that when a tunnel is first set up, its Setup priority is considered when deciding whether to admit the tunnel. When another tunnel comes along and competes with this first tunnel for link bandwidth, the Setup priority of the new tunnel is compared with the Hold priority of the first tunnel.

Allowing you to set the Setup priority differently from the Hold priority might have some real-life applications. For example, you could have a tunnel with a more-important Hold priority (say, 0) and a less-important Setup priority (maybe 7). A configuration like this means that this tunnel can't push any other tunnels out of its way to get resources it wants, because the tunnel has a Setup priority of 7. But it also means that as soon as the tunnel has been set up, it cannot be preempted by any other tunnels, because it has a Hold priority of 0.

One thing you cannot do is have a Setup priority that's better than the Hold priority for that tunnel. Why? Think about this for a second. If two tunnels (call them Tunnel1 and Tunnel2) are competing for the same resources, and both have a Setup priority of 1 and a Hold priority of 7, here's what happens:

1.
Tunnel1 comes up first and holds bandwidth with a Hold priority of 7.

2.
Tunnel2 comes up second and uses its Setup priority of 1 to push Tunnel1 off the link they're fighting over. Tunnel2 then holds the link with a Hold priority of 7.

3.
Tunnel1 resignals and uses its Setup priority of 1 to push Tunnel2 off the link. Tunnel1 then holds the link with a Hold priority of 7.

4.
Tunnel2 resignals and uses its Setup priority of 1 to push Tunnel1 off the link. Tunnel2 then holds the link with a Hold priority of 7.

5.
Go back to Step 3, and repeat ad nauseam.
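A toy simulation (names and structure invented for illustration) shows why this contest never converges:

```python
def contend(t1, t2, rounds=4):
    """t1, t2: (name, setup_priority, hold_priority). The tunnel not
    currently holding the link keeps resignaling, and it wins whenever
    its Setup priority is numerically better (lower) than the holder's
    Hold priority."""
    holder, waiter = t1, t2
    history = []
    for _ in range(rounds):
        if waiter[1] < holder[2]:   # waiter's Setup beats holder's Hold
            holder, waiter = waiter, holder
        history.append(holder[0])
    return history

# Setup 1 / Hold 7 on both tunnels: the link changes hands every round.
print(contend(("Tunnel1", 1, 7), ("Tunnel2", 1, 7)))
# ['Tunnel2', 'Tunnel1', 'Tunnel2', 'Tunnel1']

# Setup no better than Hold (7/7): the first tunnel keeps the link.
print(contend(("Tunnel1", 7, 7), ("Tunnel2", 7, 7)))
# ['Tunnel1', 'Tunnel1', 'Tunnel1', 'Tunnel1']
```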

Any recent Cisco IOS Software version won't let you configure a Setup priority that is better (numerically lower) than the Hold priority for a given tunnel, so you can't make this happen in real life. It's still worth understanding why this restriction is in place.

Having said all that, in real life, having Setup and Hold priorities differ is rare. It's perfectly OK to do if you can think of a good reason to do it, but it's not that common.

Configuring Tunnel Priority

The configuration is simple. The command is tunnel mpls traffic-eng priority setup [holding]. You don't have to specify a holding priority; if you don't, it's set to the same as the setup priority. Example 3-9 shows a sample configuration.

Example 3-9. Configuring Tunnel Priority
interface Tunnel0
 ip unnumbered Loopback0
 no ip directed-broadcast
 tunnel destination 192.168.1.6
 tunnel mode mpls traffic-eng
 tunnel mpls traffic-eng priority 5 5
 tunnel mpls traffic-eng path-option 10 explicit name top

That's it. Priority is automatically handled at the midpoint, so there's nothing to configure there. The default priority is 7 (for both Setup and Hold). As Example 3-10 demonstrates, show mpls traffic-eng tunnels tells you what the configured priority is on the headend.

Example 3-10. Determining Where the Configured Priority Lies
gsr4#show mpls traffic-eng tunnels

Name: gsr4_t0                             (Tunnel0) Destination: 192.168.1.6
  Status:
    Admin: up         Oper: up     Path: valid       Signalling: connected

    path option 10, type explicit top (Basis for Setup, path weight 3)

  Config Parameters:
    Bandwidth: 75000    kbps (Global)  Priority: 5  5   Affinity: 0x0/0xFFFF
    Metric Type: TE (default)

The Priority entry under Config Parameters shows you the Setup and Hold priorities. In this case, both are 5.

Attribute Flags

Another property of MPLS Traffic Engineering that you can enable is attribute flags. An attribute flag is a 32-bit bitmap on a link that can indicate the existence of up to 32 separate properties on that link. The command on a link is simple:

router(config-if)#mpls traffic-eng attribute-flags attributes (0x0-0xFFFFFFFF)

attributes can be 0x0 to 0xFFFFFFFF. It represents a bitmap of 32 attributes (bits), where the value of an attribute is 0 or 1. 0x0 is the default, which means that all 32 attributes in the bitmap are 0.

You have the freedom to do anything you want with these bits. For example, you might decide that the second-least-significant bit in the attribute flag means “This link is routed over a satellite path and therefore is unsuitable for building low-delay paths across.” In that case, any link that is carried over satellite would be configured as follows:

router(config-if)#mpls traffic-eng attribute-flags 0x2

Suppose for a minute that you're building an MPLS Traffic Engineering network to carry sensitive information that by regulation cannot leave the country. But, if you've got a global network, it might end up that your best path within a specific country is to leave that country and come back. For example, think about United States geography. It is entirely possible that the best path for data from Seattle to Boston would go through Canada. But, if you've got sensitive data that isn't allowed to enter Canada, what do you do?

One way to solve this problem is to decide that a bit in the attribute flag string means “This link leaves the country.” Assume that you use bit 28 (the fourth bit from the right) for this purpose.

In that case, the link would be configured as follows:

router(config-if)#mpls traffic-eng attribute-flags 0x8

Now, suppose that you have a link that carries delay-sensitive traffic that cannot leave the country. Maybe you're a maker of high-tech hockey sticks based in Boston, and you have Voice over IP communications between your Boston office and your Seattle office. You don't want the VoIP traffic to transit any satellite links, because the voice quality would be unacceptable. But you also don't want that VoIP traffic to take a data path through Canada, because a jealous competitor of yours is known to be eavesdropping on your circuits as soon as they cross the U.S. border in order to steal your newest hockey stick design. In that case, any links that were satellite uplinks in Canada would be configured as follows:

router(config-if)#mpls traffic-eng attribute-flags 0xA

0xA is the logical ORing of 0x2 (“This link crosses a satellite”) and 0x8 (“This link leaves the U.S.”).
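The combination is just a logical OR of the bits; the bit names below are the meanings assumed in this example, not anything defined by IOS:

```python
SATELLITE = 0x2   # assumed meaning: "this link crosses a satellite"
LEAVES_US = 0x8   # assumed meaning: "this link leaves the U.S."

# A link with both properties advertises the OR of the two bits.
combined = SATELLITE | LEAVES_US
print(hex(combined))  # 0xa
```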

On the tunnel headend, the configuration is a little different. You configure both a desired bit string (the affinity) and a mask.

The short explanation for all this affinity/mask/attribute stuff is that if (AFFINITY & MASK) == (ATTRIBUTE & MASK), the link is considered a match.

If you are confused by the short explanation, here's how it works—in plain English.

Affinity and mask can be a bit tricky to understand. Look at the mask as a selection of “do-care” bits. In other words, if a bit is set to 1 in the mask, you care about the value in the affinity string.

Let's look at a quick example or two with affinities, masks, and link attributes. For the sake of brevity, pretend for a second that all the bit strings involved are only 8 bits long, rather than 32.

If a link has attribute flags of 0x1, in binary this is 0000 0001.

You might think that the affinity string alone is enough to specify whether you want to match this link. You could use the command tunnel mpls traffic-eng affinity 0x1 and match the link attribute flags, right?

Sure, but only if you wanted to match all 8 bits. What if your network had three types of links:

0x1 (0000 0001)

0x81 (1000 0001)

0x80 (1000 0000)

and you wanted to match all links whose rightmost 2 bits are 01?

To do that, you need to use a mask to specify which bits you want to match. As previously indicated, the mask is a collection of do-care bits; this means that if a bit is set to 1 in the mask, you do care that the bit set in the tunnel affinity string matches the link attribute flags. So, you would use the command tunnel mpls traffic-eng affinity 0x1 mask 0x3. A mask of 0x3 (0000 0011) means that you do care that the rightmost 2 bits of the link affinity string match the rightmost 2 bits of the tunnel affinity.

If you wanted to match all links with the leftmost bit set to (1000 0000), what affinity string and mask would you use?

tunnel mpls traffic-eng affinity 0x80 mask 0x80

What about if you wanted the leftmost 2 bits to be 10 and the rightmost 2 bits to be 01? What would you use then?

tunnel mpls traffic-eng affinity 0x81 mask 0xC3

This is because your affinity string is (1000 0001) and you care about the leftmost 2 bits (0xC) and the rightmost 2 bits (0x3). So, the mask is 0xC3.
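The do-care matching rule from these examples can be sketched in a few lines (an illustration of the rule, not IOS code):

```python
def link_matches(attribute_flags, affinity, mask):
    """A link matches when its attribute flags agree with the tunnel's
    affinity on every do-care (mask = 1) bit."""
    return (attribute_flags & mask) == (affinity & mask)

# affinity 0x1 mask 0x3: rightmost 2 bits must be 01.
assert link_matches(0x01, 0x1, 0x3)        # 0000 0001 matches
assert link_matches(0x81, 0x1, 0x3)        # 1000 0001 matches
assert not link_matches(0x80, 0x1, 0x3)    # 1000 0000 does not

# affinity 0x80 mask 0x80: leftmost bit must be 1.
assert link_matches(0x80, 0x80, 0x80)
assert link_matches(0x81, 0x80, 0x80)
assert not link_matches(0x01, 0x80, 0x80)

# affinity 0x81 mask 0xC3: leftmost 2 bits 10, rightmost 2 bits 01.
assert link_matches(0x81, 0x81, 0xC3)
assert not link_matches(0x80, 0x81, 0xC3)
```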

Example 3-11 shows a sample configuration for the tunnel headend, including a mask. The mask is optional; if it is omitted, the default mask is 0xFFFF. (Remember, in hex, leading zeroes are removed, so the default mask is really 0x0000FFFF.)

Example 3-11. Configuring Headend Affinity and Mask
interface Tunnel0
 ip unnumbered Loopback0
 tunnel mode mpls traffic-eng
 tunnel destination 192.168.1.8
 tunnel mpls traffic-eng path-option 10 dynamic
 tunnel mpls traffic-eng affinity string (0x0-0xFFFFFFFF) [mask (0x0-0xFFFFFFFF)]

The mask is optional, and it defaults to 0xFFFF. Consider the earlier example, in which bit 0x2 indicated a satellite link and 0x8 indicated a non-U.S. link. If you want to build a tunnel that does not cross a satellite link, you need to make sure that any link the tunnel crosses has the satellite link bit set to 0. So, you would configure this:

tunnel mpls traffic-eng affinity 0x0 mask 0x2

This says that the entire bit string for the link should be set to 0x0, but that we look only at bit 0x2. As long as bit 0x2 is set to 0x0, a link is acceptable for this tunnel. This means that this tunnel can cross links that have attribute flags of 0x0, 0xFFFFFFFD, or any other value where bit 0x2 is set to 0.

If you want only links that neither cross a satellite link (so bit 0x2 must be 0) nor leave the U.S. (so bit 0x8 must be 0), the configuration is

tunnel mpls traffic-eng affinity 0x0 mask 0xA

Finally, if you want a link that does not leave the U.S. and that does cross a satellite, the configuration is

tunnel mpls traffic-eng affinity 0x2 mask 0xA

This says that the 0x2 bit must be 1 and the 0x8 bit must be 0.

Tunnel affinities and link attributes can be confusing, but they become straightforward if you review them a few times. They let you easily exclude traffic from a given link without regard to link metric or other available resources.

Administrative Weight

One of the pieces of information that's flooded about a link is its cost, which the TE path calculation uses as part of its path determination process. Two costs are associated with a link—the TE cost and the IGP cost. This allows you to present the TE path calculation with a different set of link costs than the regular IGP SPF sees. The default TE cost on a link is the same as the IGP cost. To change only the TE cost without changing the IGP cost, use the following per-link command:

router(config-if)#mpls traffic-eng administrative-weight (0-4294967295)

Although the command is mpls traffic-eng administrative-weight, it is often referred to as a metric rather than a weight. Don't worry about it. Just remember that administrative-weight is the command you use to set the administrative weight or metric on an interface. This command has two uses:

  • To override the metric advertised by the IGP, but only in traffic engineering advertisements

  • To act as a delay-sensitive metric on a per-tunnel basis

What does it mean to override the metric advertised by the IGP, but only in traffic engineering advertisements? Consider that in either OSPF or IS-IS, when a link is advertised into the IGP, it has a link metric that goes along with it. The default link metric in IS-IS is 10, and it can be configured with the per-interface command isis metric. The default link metric in OSPF is 10^8 divided by the link bandwidth in bps, and it can be configured with the per-interface command ip ospf cost.
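For reference, the default OSPF cost calculation behind the link costs in Figure 3-4 can be sketched as follows (reference bandwidth 10^8 bps, with a floor of 1):

```python
def ospf_default_cost(link_bw_bps):
    """Default OSPF link cost: reference bandwidth (10^8 bps) divided by
    the link bandwidth, truncated, with a minimum cost of 1."""
    return max(1, 10**8 // link_bw_bps)

# The DS-3 (45 Mbps) and OC-3 (155 Mbps) costs used in Figure 3-4:
print(ospf_default_cost(45_000_000))   # 2
print(ospf_default_cost(155_000_000))  # 1
```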

If mpls traffic-eng administrative-weight is not configured on an interface, the cost advertised in the traffic engineering announcements is the same as the IGP cost for the link. However, there might be a situation in which you want to change the cost advertised for the link, but only for traffic engineering. This is predominantly useful in networks that have both IP and MPLS Traffic Engineering traffic. Consider Figure 3-4.

Figure 3-4. Sample Network Using mpls traffic-eng administrative weight


In OSPF, the IGP cost for the DS-3 link from B to C is 2, and the IGP cost for the OC-3 link is 1. Assume that B has a TE tunnel that terminates on D, but also has IP traffic destined for C. By default, both TE tunnels and IP traffic go over the OC-3 link whenever possible. However, if you want the DS-3 link to carry TE tunnel traffic in preference to the OC-3 link, one way to achieve this is to change the administrative metric on these links so that the DS-3 link is preferred for TE traffic.

You do this by setting the administrative weight on the OC-3 link to something higher than the metric on the DS-3 link. On the OC-3 link, if you configure this:

router(config-if)#mpls traffic-eng administrative-weight 3

the link cost is changed for TE only. Figure 3-4 shows what the IP routing sees on RouterA, but the MPLS Traffic Engineering on RouterA sees the information depicted in Figure 3-5.

Figure 3-5. Network with TE Admin Weight and IGP Cost Set Differently


Exactly how this path calculation works is considered in Chapter 4.

Another use of the administrative weight command is as a delay-sensitive metric on a per-tunnel basis. This is also covered in Chapter 4. The basic idea is that you configure the link administrative weight as a measure of the delay on the link and then use that number, rather than the IGP cost, to determine the path-of-least-delay to the destination. This brings awareness of delay to the network, which lets you differentiate between an OC-3 land line with a relatively small delay, and an OC-3 satellite link with a much higher delay, but with the same bandwidth.
