Fine-Tuning MPLS TE Parameters

So far, this chapter has covered network measurements. You have read about both traffic (bandwidth) measurements and delay measurements. Delay and bandwidth measurements help you place tunnels initially or move them around subsequently.

After your tunnels are operational, you might need to customize MPLS TE to fit your needs. Cisco IOS Software provides a lot of latitude for you to fine-tune MPLS TE to better suit your network. The default values of the various timers are fine in most cases, but knobs are provided for you to tune when they are not. The need for tuning tends to become more evident as your network grows and traffic increases.

This section doesn't cover all the possible configuration options, just the major ones you might need to deal with as your network and your demands change.

Headend Configuration Knobs

There are three main things you can tweak on a TE tunnel headend, as described in the following sections. These are things you might find you need to change as your network grows or your service requirements change.

mpls traffic-eng reoptimize timers frequency 0-604800 Command

What it does: Sets how often the LSPs on a headend are reoptimized. See the “Tunnel Reoptimization” section in Chapter 4, “Path Calculation and Setup,” for more details on this command.

Default setting: 3600 seconds (1 hour)

When to tune it: Lower this number if you want to increase the likelihood of an LSP's finding its way onto a shorter path shortly after a link has come up. A better way to solve this problem, though, might be to use the mpls traffic-eng reoptimize events link-up command, although you need to be careful about flapping links. Setting mpls traffic-eng reoptimize timers frequency to 0 disables periodic reoptimization. See the “Tunnel Reoptimization” section in Chapter 4 for more information.

Recommendation: Change it if you like, especially if you're not on a Cisco IOS Software release that supports mpls traffic-eng reoptimize events link-up. Just be sure to keep an eye on your CPU so that if you've got a headend with a large number of LSPs, it doesn't spend too many cycles on reoptimization.
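
For example, to reoptimize every 5 minutes instead of every hour, you'd enter the following in global configuration mode. The 300-second value is purely illustrative; pick a value that matches your network's churn and your headends' CPU budget:

    Router(config)# mpls traffic-eng reoptimize timers frequency 300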

mpls traffic-eng reoptimize events link-up Command

What it does: Periodic reoptimization runs CSPF for all LSPs whether or not anything in the network has changed. Enabling this knob additionally triggers reoptimization whenever a link in the TE topology database comes up.

Default setting: Disabled

When to tune it: Turn this on if you want to converge on a new link as quickly as possible.

Recommendation: This command is generally a good thing to enable, and you should definitely consider using it. Be aware, though, that if you have a link in your network that's constantly flapping, this command causes all tunnels to be reoptimized every time that link comes up, so be mindful of the impact it can have when you have a problem in your network.

Also, when you enable this command, you might think that you can now disable periodic reoptimization, because event-driven reoptimization will catch all linkup events. This is not quite true. You should still run periodic reoptimization, because conditions exist that might result in a more-optimal LSP path but that this command won't catch (such as a significant increase in available bandwidth on an interface). However, you can run less-frequent periodic reoptimization, because the majority of events that significantly change the network topology are related to links going up and down.
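
Putting that advice together, a sketch of an event-driven setup paired with a longer periodic timer might look like the following. The 3-hour (10,800-second) periodic value is only an illustration of "less frequent," not a specific recommendation:

    Router(config)# mpls traffic-eng reoptimize events link-up
    Router(config)# mpls traffic-eng reoptimize timers frequency 10800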

mpls traffic-eng topology holddown sigerr 0-300 Command

What it does: When certain PathErr messages are received because a link went down, that link is marked as unavailable in the TE-DB for some amount of time. This holddown prevents a headend from being told via RSVP that a link is down before it has received the IGP update advertising the same failure. If the link were not marked as unavailable in the TE-DB, a headend trying to find another path for the tunnel might, because the IGP update has not yet arrived, choose a path over the very link that went down in the first place. This timer covers the gap between the time it takes an RSVP error message to get back to the headend and the time the IGP takes to propagate the same information.

Default setting: 10 seconds

When to tune it: Increase this time if it takes your IGP longer than 10 seconds to converge; decrease it if your network has links that flap a lot and your IGP converges quickly when these links change state.

Recommendation: Make sure this number is no lower than the time it takes your IGP to converge. It's probably a good idea to set it to at least 2 * IGP convergence time, to give things a chance to settle down. Your IGP should converge in a small number of seconds; if you find that it takes more than 5 seconds (so that 2 * convergence time exceeds the 10-second default), you should think about increasing this value slightly.
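
As a worked example of the 2 * IGP convergence guideline: if you measure your IGP's convergence after a link failure at roughly 8 seconds (a value assumed here purely for illustration), a holddown of about 16 seconds follows:

    Router(config)# mpls traffic-eng topology holddown sigerr 16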

Midpoint Configuration Knobs

In addition to the things you can configure on the headend, you can also tweak a few things at a midpoint, should you so desire.

mpls traffic-eng link-management timers bandwidth-hold 1-300 Command

What it does: When a Path message comes through, the bandwidth it asks for is temporarily set aside until the corresponding Resv is seen. mpls traffic-eng link-management timers bandwidth-hold is Link Manager's timeout on the Path message. When this timeout expires, the bandwidth is no longer held and becomes available for other LSPs to reserve. If the Resv message for the first LSP comes back after the timeout expires, the LSP is still set up. However, if a second LSP comes along in the meantime and reserves bandwidth after the timeout, that second LSP might take bandwidth the first LSP wanted to use, causing the first LSP's setup to fail.

Default setting: 15 seconds

When to tune it: If you have a big enough network that it really takes 15 seconds or longer for a Path message's corresponding Resv to come back, you might want to increase this timer. 15 seconds is quite a long time, though; if you have 15 seconds of setup delay along a path, you likely have underlying problems. Either that, or you have an end-to-end path that's more than 830,000 miles long! Setting this timer lower than the default might help avoid backoff and resignalling if you have LSPs that are churning faster than every 15 seconds; this generally indicates a problem as well, because LSP lifetime should be measured in days, not seconds. The higher this timeout value is set, the more likely you are to have collisions over the same resources, but only if Resv messages are taking a long time to come back.

Recommendation: Leave this knob alone unless you have extenuating circumstances.
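
If you do have extenuating circumstances (say, Resv messages genuinely taking longer than 15 seconds to return), a sketch of raising the hold time might look like the following; the 30-second value is illustrative only. You can inspect held and reserved bandwidth with show mpls traffic-eng link-management bandwidth-allocation:

    Router(config)# mpls traffic-eng link-management timers bandwidth-hold 30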

mpls traffic-eng link-management timers periodic-flooding 0-3600 Command

What it does: Periodically, if a link's available bandwidth has changed but the change has not crossed a flooding threshold (and therefore has not yet been flooded), the updated information is flooded anyway, to bring the rest of the network in sync with the router advertising the link.

Default setting: 180 seconds (3 minutes)

When to tune it: If you have links that are always close to full and lots of relatively small LSPs (LSPs small enough that the appearance or disappearance of one does not cross a flooding threshold), tuning this number down might help.

This command also gives you a way to minimize IGP flooding. If you'd like to keep TE-related IGP flooding to an absolute minimum, you could set this timer to 0, disabling periodic flooding and letting flooding thresholds and link flaps take care of all the flooding. It's probably better not to disable it, though, but instead to stretch it out to something like the full 3600 seconds, so that TE information is still occasionally flooded to any router whose information has become stale.

Recommendation: This is another knob to leave alone, at least to start. If you find that your headends are often out of sync with the bandwidth available on your links, you might want to lower this timer so that it floods more often, but do so cautiously; lots of TE tunnel churn can lead to increased IGP flooding.
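
For example, stretching periodic flooding out to its maximum rather than disabling it, as suggested earlier, would look like this:

    Router(config)# mpls traffic-eng link-management timers periodic-flooding 3600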

mpls traffic-eng flooding thresholds {up | down} 0-100… Command

What it does: Controls how much available link bandwidth needs to change (in both the up and down directions) for information about that link's bandwidth to be reflooded throughout the network. (See Chapter 3 for a detailed explanation.)

Default setting: 15, 30, 45, 60, 75, 80, 85, 90, 95, 96, 97, 98, 99, and 100 percent, in both the up and down directions

When to tune it: The default thresholds are spaced more closely together as a link gets full, so flooding becomes more sensitive to changes on nearly full links. If you have lots of LSPs that reserve 90 percent or more of a link alongside LSPs that reserve less than 5 percent of a link, you might want to make the lower-end thresholds more granular, to something like 1, 2, 3, 4, 5, 7, 10, 12, and 15 percent, so that the arrival or departure of small LSPs is also flooded promptly.

Recommendation: Yet another knob that is good to know about but bad to fiddle with. Unless you're aware of the sizes and behaviors of your LSPs, you can easily do more harm (lots of unnecessary IGP flooding) than good with this command.
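
If you did decide to add lower-end granularity as described earlier, the thresholds are configured per interface, separately for each direction. The following is only a sketch, using a hypothetical POS interface and a shortened, illustrative threshold list (the full default list runs to 14 values in each direction):

    Router(config)# interface POS3/0
    Router(config-if)# mpls traffic-eng flooding thresholds up 1 5 10 15 30 45 60 75 90 95 98 100
    Router(config-if)# mpls traffic-eng flooding thresholds down 100 98 95 90 75 60 45 30 15 10 5 1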
