Deployment Possibilities

One thing that you haven't seen yet is an example of how MPLS TE can be applied to solve specific problems in a network.

This section covers just that—the application of MPLS TE to solve different types of problems. This section has three main pieces:

  • Applications of tactical TE (two examples)

  • Using MPLS TE only for protection

  • Using MPLS TE for unequal-cost one-hop load balancing

Why is there no example of strategic (full-mesh) TE? It's not that strategic TE isn't useful, and it's certainly not that strategic TE isn't being deployed in the real world. It's that as soon as you have strategic TE, there's not much to it besides maintenance tasks—periodically changing your tunnel layout in accordance with traffic demands. Although this process is not trivial, the choices you can make in this area were covered fairly well in Chapter 9. This section instead covers a few applications of MPLS TE to solve specific problems, rather than the full general solution that strategic TE gives you.

Applications of Tactical TE

Throughout this book, you have read about the possibility of using tactical TE to solve specific congestion problems. Here are two real-world examples the authors have dealt with in which tactical TE was effectively used to solve problems.

Although these examples are real-world, done on actual networks with actual routers and by actual customers, we don't disclose the customer names. Let's call them FooNet and BarNet. The topologies have been changed somewhat, but the basic application of MPLS TE, and its effectiveness as a solution, remain the same.

FooNet: Bandwidth on Order

FooNet is a large national U.S. ISP. The northeast portion of their backbone network looks like a ring of routers, as shown in Figure 10-10.

Figure 10-10. FooNet's Northeast Backbone


All links in this network are OC-12, and OC-48s are on order. Most of the links are running at about 30 percent utilization (a load of roughly 180 Mbps), except for the link from Boston to Washington. The Boston→Washington link is 100 percent full, and it constantly drops packets. It turns out that the offered load on the Boston→Washington link is a persistent 700 Mbps, and as you probably know, an OC-12 has a nominal capacity of about 600 Mbps. Of course, 700 Mbps is bigger than 600 Mbps. This results in roughly 14 percent packet loss for any traffic that tries to cross the link from Boston to Washington. Traffic demands aren't going away anytime soon, and the OC-48s that can eliminate this problem aren't due to arrive for months.

What to do? They can't send all Boston→Washington traffic via New York, because all links in the picture are the same speed. Doing that would only congest the Boston→New York→Newark→Pittsburgh→Washington path. Adjusting link metrics is tricky, too; it would affect not just the traffic in this small corner of their network, but also how traffic from other parts of the network enters the Northeast. It's certainly possible to solve this problem with link metric adjustment, but it's not easy.

The solution is MPLS TE. They simply build two TE tunnels from Boston to Washington—one via the directly connected link, and one the long way around the ring via New York, Newark, and Pittsburgh. Autoroute is enabled on both tunnels. The directly connected TE tunnel (called the short tunnel) and the other tunnel (called the long tunnel) are shown in Figure 10-11.

Figure 10-11. Applying MPLS TE to FooNet


NOTE

Why two TE tunnels? Recall from Chapter 5, “Forwarding Traffic Down Tunnels,” in the “Load Sharing” section, that a router does not load-share between an IGP path and a TE tunnel for the same destination. If all FooNet did were to build the long tunnel, all traffic to Washington would be sent down that tunnel, and that wouldn't solve the problem. So, two tunnels.


Because the short tunnel is a lower-latency path than the long tunnel, it's desirable to send as much traffic as possible down the short tunnel. The tunnels are set up in a 3:1 ratio—the short tunnel reserves 3 times as much bandwidth as the long tunnel. This has the net effect of taking the 700 Mbps of traffic from Boston to Washington and sending 25 percent of it (175 Mbps) down the long tunnel and 75 percent of it (525 Mbps) down the short tunnel (see Figure 10-12).

Figure 10-12. Unequal Bandwidth Distribution in FooNet


Although 525 Mbps is still a little full for an OC-12 (88 percent utilization—a link that, when full, has increased delays and periodic drops because of bursts), 88 percent full is a lot better than 117 percent full (700 Mbps over a 600-Mbps link). And the long path, which went from a load of 180 Mbps to 355 Mbps, is now just under 60 percent full, which is empty enough that traffic down that path still receives good service.

Why 3:1? Splitting the bandwidth 50:50 down each tunnel would send 350 Mbps of additional traffic across the long path, bringing its total utilization to 530 Mbps. This isn't good, because this would come dangerously close to congesting the entire Northeast network! Consider Table 10-8, which shows possible load-sharing ratios and the traffic pattern that results from them.

Table 10-8. Possible Load-Sharing Ratios and the Resulting Traffic Patterns
Ratio (Short:Long) | Additional Traffic on Long Path | Total Traffic on Long Path | Total Traffic on Short Path
1:1                | 350 Mbps                        | 530 Mbps                   | 350 Mbps
2:1                | 233 Mbps                        | 413 Mbps                   | 467 Mbps
3:1                | 175 Mbps                        | 355 Mbps                   | 525 Mbps
4:1                | 140 Mbps                        | 320 Mbps                   | 560 Mbps
7:1                | 88 Mbps                         | 268 Mbps                   | 612 Mbps

3:1 also has the advantage of fitting exactly into CEF's load-sharing limits. As Chapter 5 also covered, CEF distributes traffic across a limited number of hash buckets (at most 16), so the 2:1 and 4:1 ratios would really end up as something like 10:5 and 13:3, respectively. This isn't a big deal, but because either 3:1 or 4:1 seems to be acceptable in this situation, you might as well go with the one that fits best into the load-sharing algorithm.

Also, because this is a tactical TE situation, the actual bandwidth that is reserved on these tunnels doesn't matter. All that matters is the ratio of bandwidths between the two tunnels. So, for simplicity's sake, the long tunnel reserves 1 kbps of bandwidth, and the short tunnel reserves 3 kbps of bandwidth.
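As a sketch of what this might look like on the Boston router, here is an IOS-style configuration for the two tunnels. The addresses, router IDs, and path names are invented for illustration; only the autoroute setting and the 3:1 bandwidth ratio come from the scenario.

```
! Hypothetical Boston router configuration; addresses and names are invented.
ip explicit-path name SHORT-PATH enable
 next-address 10.0.1.2                   ! Washington, via the direct link
!
ip explicit-path name LONG-PATH enable
 next-address 10.0.2.2                   ! New York
 next-address 10.0.3.2                   ! Newark
 next-address 10.0.4.2                   ! Pittsburgh
 next-address 10.0.5.2                   ! Washington
!
interface Tunnel1
 description Short tunnel (direct Boston->Washington)
 ip unnumbered Loopback0
 tunnel destination 192.168.0.4          ! Washington's router ID
 tunnel mode mpls traffic-eng
 tunnel mpls traffic-eng autoroute announce
 tunnel mpls traffic-eng bandwidth 3     ! 3 kbps; only the 3:1 ratio matters
 tunnel mpls traffic-eng path-option 10 explicit name SHORT-PATH
!
interface Tunnel2
 description Long tunnel (via New York, Newark, Pittsburgh)
 ip unnumbered Loopback0
 tunnel destination 192.168.0.4
 tunnel mode mpls traffic-eng
 tunnel mpls traffic-eng autoroute announce
 tunnel mpls traffic-eng bandwidth 1     ! 1 kbps
 tunnel mpls traffic-eng path-option 10 explicit name LONG-PATH
```

With both tunnels up and announced via autoroute, CEF load-shares Washington-bound traffic across them in proportion to the reserved bandwidths.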

When the OC-48s come in, the tunnels are removed. There's no need for TE if you're trying to put 700 Mbps down a link that has a capacity of 2.5 Gbps.

BarNet: Same Problem, More Paths

You've seen how FooNet solved its problem. Nice, elegant, and simple. Now look at a more complex scenario—BarNet—and see how TE also solves that problem.

BarNet has the same fundamental problem as FooNet—links are being asked to carry more than they can hold. Figure 10-13 shows BarNet's network.

Figure 10-13. BarNet's International Links


The New York (NYC) POP has four routers, as does the Washington (DC) POP. Between routers in each POP are two Gigabit Ethernet segments. The DS-3s from NYC-1 to London and NYC-2 to London are both overloaded—60 Mbps of offered load is being put down each 45-Mbps link. That's an offered load of 133 percent of link capacity, which is definitely not a good thing. All other links are effectively empty, including the DS-3 from DC-2 to London. The goal is to move some of the traffic from the NYC DS-3s to the DC DS-3.

The first pass at a solution is to build a TE tunnel from NYC-1 to London across the short path and another across the long path, and a similar pair of TE tunnels from NYC-2 to London. Figure 10-14 shows this first-pass solution.

Figure 10-14. First Pass at Fixing BarNet's Problem


Bandwidth is reserved in a 1:1 ratio so that 30 Mbps of the 60 Mbps of London-bound traffic is sent across the link between NYC-2 and London and the remaining 30 Mbps is sent across the link between DC-2 and London. A 2:1 ratio would also work here, sending 40 Mbps across the NYC-2→London link and 20 Mbps across the DC-2→London link.

This solution works, but it's inefficient. Traffic can enter the NYC POP via NYC-4 (which is connected to other POPs and routers, although that's not shown in the figure), get forwarded via IP to NYC-2, and get encapsulated in a tunnel that goes the long way to London—via NYC-4. So packets end up crossing NYC-4 and the Gigabit Ethernet segments twice.

Is this a problem? No, not really. It adds some delay, but only minimal delay; coming into and out of a local subnet twice wouldn't even be noticed when compared to the delay of an international DS-3. Traceroutes would show NYC-4 twice, which might or might not be a consideration.

It turns out that BarNet is satisfied with running the tunnels shown in Figure 10-14. A more efficient solution would be to also run tunnels from NYC-3 and NYC-4 to London so that traffic won't have to hairpin through NYC-2. But given the relatively massive amount of bandwidth available in the POP and the minimal delay added by going into and out of the same router and across a LAN link, optimizing these tunnels further wouldn't have much of a noticeable impact on user traffic.

A lot can be done with the tactical methodology. It all depends on how much work you want to put into it. In the BarNet case especially, all sorts of little optimizations can be made, such as experimenting with where the TE tunnels terminate, which necessitates playing with the autoroute metric or perhaps using static routes. Those applications aren't covered here, because there are probably a dozen little things that can be done differently in this picture that might make things more efficient. None of them have the immediate payoff of the simple solution, though.

The point to take away from all this is that tactical MPLS TE can and does work. The basic tenet is simple: If you add some TE tunnels to the router that is the headend of a congested link, you gain a large amount of control over where you can put traffic that might have run across that link. Just remember to pay close attention to make sure you don't create a problem somewhere else. Also remember to remove these tunnels when they're no longer needed.

TE for Protection

Some networks have no need to send traffic across paths other than the IGP shortest path. For these networks, there might be little to no advantage to deploying MPLS TE in either a strategic or tactical design. A full mesh of TE tunnels doesn't really have a problem to solve, and similarly, there's so much bandwidth in the network that there's little chance of tactical TE's being of any significant use.

There's still something MPLS TE can buy you. You can use MPLS TE purely for protection. As you saw in Chapter 7, “Protection and Restoration,” MPLS TE's Fast Reroute (FRR) can be used in place of SONET's Automatic Protection Switching (APS). FRR has several advantages over APS. These are discussed in Chapter 7 and therefore aren't discussed here.

Using MPLS TE purely for protection might seem like a hack, or perhaps an improper use of TE. It's not. Just as any technology evolves over time, MPLS TE has evolved from a tool used purely to send traffic along paths other than the IGP shortest path into a tool that can be used to minimize packet loss much more quickly than the IGP can, without the tremendous expense (in both circuits and equipment) of APS.

The idea here is simple. Create one-hop TE tunnels (that is, tunnels between two directly connected routers) that reserve minimal bandwidth, and then protect the links these tunnels go over with FRR link protection.

Consider the network shown in Figure 10-15. It is a full mesh of OC-48 circuits between routers. There is a link from Router A to Router B, from Router A to Router C, and from Router B to Router C.

Figure 10-15. Simple Network Without Any Protection


This is a simplified SONET network; there's only one ADM. In real life, there would probably be more than one ADM in a SONET network, but the protection principles are the same.

Currently, this network has no protection capabilities. Protection is often a desirable thing, because it minimizes packet loss. This becomes more and more important the more SLAs you have and the more VoIP traffic you carry.

There are two ways to protect the traffic in this network. One is with APS, and the other is with FRR. First we'll take a quick look at APS so that you can contrast it with FRR. The next section assumes that you know how APS works. For more information on APS, see Appendix B.

SONET APS

Figure 10-16 shows a simplified APS network. It's the same network as in Figure 10-15, but with APS Working and Protect (W and P) circuits between all routers.

Figure 10-16. Simple APS Setup


Figure 10-16 shows a Working circuit and a Protect circuit between each pair of routers: Router A and Router B, Router A and Router C, and Router B and Router C. This is a total of six circuits (A↔B Working, A↔B Protect, A↔C Working, A↔C Protect, B↔C Working, B↔C Protect) consuming 12 router interfaces.

If any one of the Working circuits goes down, the router quickly detects this failure and switches over to the Protect circuit. However, if one of the routers in this picture fails, APS doesn't do any good, because both Working and Protect terminate on the same router. FRR link protection has the same problem, but FRR node protection can alleviate this problem (although not in this simple picture).

If you were to deploy APS as shown in Figure 10-16, you could just as easily use link protection to replace it.

Using Link Protection Instead of APS

APS definitely has advantages over no protection at all, but at a cost. Look at how you might replace APS with MPLS TE Fast Reroute to achieve the same or better protection.

Figure 10-15 is the basic unprotected version of this network. Figure 10-17 shows what this network looks like from a routing perspective, with the ADM removed.

Figure 10-17. Routing View of Figure 10-15


Adding MPLS TE for FRR to this network means adding a total of 12 LSPs—six protected LSPs and six protection LSPs. Two things to remember here are

  • Using MPLS TE purely for link protection means building one-hop LSPs (to the router on the other end of a directly connected link).

  • TE LSPs are unidirectional, so you need two LSPs for a given link—one from each router to the other one.

This means that there are six one-hop primary LSPs:

  • Router A to Router B

  • Router B to Router A

  • Router A to Router C

  • Router C to Router A

  • Router B to Router C

  • Router C to Router B

Figure 10-18 shows these LSPs.

Figure 10-18. Six Primary LSPs


Of course, these LSPs by themselves do no good. It's only when you protect them that they add any value to your network.

In addition to the six primary LSPs, this network needs six protection LSPs. The protection LSPs are all doing link protection, and as such, they must terminate on the router at the other end of the protected link. Figure 10-19 shows the necessary protection LSPs. These LSPs have the same sources and destinations as the primary LSPs, but take different paths from the primary LSPs to get to the tunnel tails.
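In IOS-style configuration, one protected link on Router A might look something like the following sketch. The interface names, addresses, and path names are hypothetical; the pieces that matter are the one-hop primary tunnel flagged for fast reroute, the protection tunnel that reaches Router B via Router C, and the backup-path command tying that tunnel to the protected interface.

```
! Hypothetical Router A configuration; all names and addresses are invented.
ip explicit-path name DIRECT-TO-B enable
 next-address 10.0.12.2                  ! Router B, across the direct link
!
ip explicit-path name VIA-C-TO-B enable
 next-address 10.0.13.3                  ! Router C
 next-address 10.0.23.2                  ! Router B
!
! One-hop primary LSP from A to B, requesting FRR protection
interface Tunnel1
 ip unnumbered Loopback0
 tunnel destination 192.168.0.2          ! Router B's router ID
 tunnel mode mpls traffic-eng
 tunnel mpls traffic-eng autoroute announce
 tunnel mpls traffic-eng bandwidth 1     ! minimal reservation
 tunnel mpls traffic-eng path-option 10 explicit name DIRECT-TO-B
 tunnel mpls traffic-eng fast-reroute
!
! Protection LSP from A to B that avoids the protected link (no autoroute)
interface Tunnel100
 ip unnumbered Loopback0
 tunnel destination 192.168.0.2
 tunnel mode mpls traffic-eng
 tunnel mpls traffic-eng path-option 10 explicit name VIA-C-TO-B
!
! Associate the backup tunnel with the link it protects
interface POS1/0
 mpls traffic-eng backup-path Tunnel100
```

Routers B and C would carry analogous configurations for their own links, giving the six primary and six protection LSPs described in the text.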

Figure 10-19. Six Protection LSPs


Put all together, there are a total of 12 LSPs in the network, as shown in Figure 10-20.

Figure 10-20. 12 LSPs


That's all you need to protect the directly connected links. Sure, the picture looks a little messy. But look back at Figure 10-15; each router has only two connections to the ADM instead of four. So if you do the extra work of dealing with the 12 LSPs (six protected, six protection), you have cut your necessary port requirements in half. This effectively doubles your port density. Are 12 LSPs too much to pay for this? The choice is up to you.

TE for Unequal-Cost One-Hop Load Balancing

As you've already read (in Chapter 5), MPLS TE gives you the ability to do unequal-cost load balancing between two or more TE tunnels. Much like using FRR to replace APS, you can use MPLS TE one-hop tunnels to achieve unequal-cost load balancing between directly connected routers.

Suppose you have a setup like the one shown in Figure 10-21.

Figure 10-21. Two Parallel Links of Different Bandwidths


If you want to send traffic from Router A to Router B across both links, how do you do it? With IP, it's difficult to load-balance properly between these two links. If you give both links the same cost, they'll both carry the same amount of traffic. This means that if you have 2 Gbps of traffic to go from Router A to Router B, you'll end up sending 1 Gbps over the OC-48 link (leaving roughly 1.5 Gbps of unused bandwidth) and 1 Gbps over the OC-12 link (which is about 400 Mbps more than the OC-12 link can handle).

There's a simple solution to this problem. All you have to do is build two TE tunnels from A to B—one across the OC-12 and one across the OC-48. Load-balance between the two in a 4:1 ratio, and of the 2 Gbps of offered traffic, you'll end up sending approximately 400 Mbps down the OC-12 and 1.6 Gbps down the OC-48, as shown in Figure 10-22.

Figure 10-22. Load Sharing Over Links of Disparate Bandwidth with One-Hop TE Tunnels


As you might recall from the FooNet case study or the “Load Sharing” section of Chapter 5, you won't get exactly 4:1, but more like 12:4 (so, 3:1). Even though the ratios aren't perfect, you still end up with a much neater solution than if you tried to solve the problem using IP.
