LAN to LAN Connections: Campus Networks

As large companies added more and more LANs, often one for each department, they discovered that to be really useful, some applications, like e-mail, need to reach beyond individual LANs. They also noticed that people and projects didn't always fall within the borders of a small network. Two choices presented themselves: either merge the LANs into one large system or find some way to link them. Merging didn't make sense, since a LAN's pipes weren't fat enough to handle hundreds of computers and dozens of servers. Neither Ethernet nor Token Ring, for example, does well with hundreds of nodes, nor can either function over anything other than short distances. Companies thus responded with strategies that linked the LANs. These are called campus or sometimes backbone networks. The difference between the two is that a backbone network is usually in one building, while a campus network connects multiple buildings (but not multiple locations—we're talking about a place where you can string your own wire). The terms are used here interchangeably, with one slight caveat—Fast Ethernet, particularly when set up in full duplex mode, is a viable option for shorter backbone networks, but its 200-meter distance limitation makes it unsuitable for campus connections. Of course, it should be noted that the three backbone technologies, FDDI, ATM, and Gigabit Ethernet, could all run directly in the LAN as well.

FDDI

FDDI stands for Fiber Distributed Data Interface. It is a 100 Mbps token-passing network. FDDI was originally specified only for fiber, but a newer version, CDDI, works on Category 5 UTP at the same 100 Mbps speed. With multimode fiber, FDDI can span 1.2 miles; with single mode it will operate over more than 35 miles. FDDI uses a dual counter-rotating ring strategy to provide protection against severed links. FDDI normally connects LANs on multiple floors of a building or across multiple buildings, with each LAN having a short cable run to where the ring passes its floor or building.

FDDI can be used directly for LANs, but has never taken off in that category. Part of the reason is the cost of the network interface cards (NICs) for the computers. One reason these are expensive is the use of fiber. Anything that requires conversion from electrical to optical signals, and vice versa, will be significantly more expensive than something relying on pure electronics. The other parts of the network are an issue as well. Companies have generally preferred Fast Ethernet, with its mass-produced hubs, switches, and cheap Category 5 UTP, to fiber networks, for which every element is notably more expensive. CDDI sidesteps many of the costs of working with fiber, but it came along too late to compete with Fast Ethernet.

Indeed, FDDI hasn't been as successful a backbone technology as expected when it appeared, and its market share is going down rather than up. It now must compete with two hot technologies—Gigabit Ethernet and Asynchronous Transfer Mode (ATM).

ATM

ATM has the distinction of being an all-everything technology. It can function at the equivalent of Layers 1 through 4 all on its own, or it can ride over another Layer 1 protocol (notably SONET) while using its own Layer 2 switching capability to carry Layer 3 and Layer 4 traffic (usually TCP/IP).

ATM is a cell-switched system. All traffic coming into ATM is broken up into 53-byte cells. No exceptions. Rigidity extends to the format of its very short protocol data unit: the first five bytes hold header information, the next 48 carry the data "payload." Always. The fixed cell length and structure have important advantages in switching. The switch hardware can be custom designed to handle the cells, with the result that, once the circuit is established, transferring cells from the incoming to the outgoing port can occur very, very fast—much faster than with any other approach. Most ATM links are permanent virtual circuits (PVCs) that are set up in advance by the network provider. But ATM also offers switched virtual circuits (SVCs), which are equivalent to dial-up connections. Some systems use ATM SVCs as backups in case a primary link goes down.
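The fixed 5-plus-48 layout is simple enough to sketch. The following Python fragment is an illustration only—it uses an invented header format rather than ATM's real one—but it shows a packet being carved into 53-byte cells:

```python
# Illustrative sketch (not a real ATM stack): segmenting data into fixed
# 53-byte cells -- 5 bytes of header, 48 bytes of payload, always.
HEADER_LEN = 5
PAYLOAD_LEN = 48
CELL_LEN = HEADER_LEN + PAYLOAD_LEN  # 53 bytes, no exceptions

def segment(data: bytes, path_id: int) -> list:
    """Chop data into 48-byte chunks, padding the last one, and prepend
    a toy 5-byte header carrying just the virtual-path ID."""
    cells = []
    for i in range(0, len(data), PAYLOAD_LEN):
        chunk = data[i:i + PAYLOAD_LEN].ljust(PAYLOAD_LEN, b"\x00")
        header = path_id.to_bytes(4, "big") + b"\x00"  # toy header, not ATM's real format
        cells.append(header + chunk)
    return cells

cells = segment(b"x" * 1500, path_id=42)   # a maximum-size IP packet
print(len(cells))                          # 1500 bytes / 48 per cell rounds up to 32 cells
print(all(len(c) == CELL_LEN for c in cells))
```

Note that the last cell is padded out to a full 48-byte payload; real ATM adaptation layers handle this padding, and the resulting waste, in a more structured way.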

The small size of the cells is both an advantage and a disadvantage. It hurts in that the process of breaking up packets to fit into the cells adds overhead (remember that IP packets range up to about 1,500 bytes). It also hurts in that the payload-to-header ratio is unusually low (another way of saying this is that overhead is high). Data people argued against the 53-byte cell size as ATM was being developed, but the advantage of the small cell comes in handling voice and video. These streaming connections can't suffer more than a small amount of delay. Because it requires another stage of analysis to determine each packet's length, a normal switch, such as that on an Ethernet LAN, introduces latency as it processes packets of varying size. Also, if there is contention for an output port, a long data packet can hold up another transmission, forcing it to sit in a buffer and wait. This problem is much worse in routers, which are already a lot slower than switches. Variable latency in switches and routers introduces "jitter" that degrades the quality of audio and video. Audio can become choppy, video can flicker, audio and video can get out of synch, and so on. The ATM approach, both because of its short cells and because of its more efficient Layer 2 focus, does a much better job of keeping traffic flowing smoothly through the switches.
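The overhead is easy to quantify. This back-of-envelope Python calculation shows what a maximum-size 1,500-byte IP packet costs on the wire once it is split into cells:

```python
# Rough illustration of the cell overhead: how many wire bytes a 1,500-byte
# IP packet consumes once it is carved into 53-byte cells.
import math

PAYLOAD_LEN, CELL_LEN = 48, 53
packet = 1500
cells = math.ceil(packet / PAYLOAD_LEN)          # 32 cells (last one padded)
bytes_on_wire = cells * CELL_LEN                 # 1,696 bytes
efficiency = packet / bytes_on_wire              # useful payload fraction
print(cells, bytes_on_wire, round(efficiency, 3))  # 32 1696 0.884
```

Even in this near-best case, almost 12 percent of the wire carries headers and padding rather than data; short packets fare considerably worse.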

Uses of ATM

In its Layer 1 clothing, ATM is specified for fiber optics and for UTP. The UTP specification, which came late and which is popular with only a few vendors (notably IBM), operates at 25 Mbps. The standard ATM rate is 155 Mbps (fiber only), scaling up to over 2 Gbps. It's not clear when ATM hits the wall, but it can certainly go faster than the 2 Gbps level. In terms of sheer bps, ATM is the fastest higher-level protocol around. It can also go the distance. Unlike Ethernet and Token Ring, whose broadcast technologies limit their range, ATM links can be as long as the fiber connection they run on (e.g., 80 km or so).

While ATM can work directly with the media at Layer 1, the heart of the protocol is at Layer 2. Like X.25 and Frame Relay (see sections later in this chapter), ATM either uses static connections (permanent virtual circuits) or employs a call setup phase to create virtual circuits through the network. Once the call is set up and the best path from sender to receiver has been determined, the path gets an ID number, which is registered in the databases of each switch. Cells then carry just the path ID; destination and source addresses are not needed. This simplified system helps make the switches function as fast as possible. Actually, ATM has several ID levels. A path can host many channels, so once a path from A to B has been set up, a number of processes at A can communicate with a number of processes at B without having to go through the entire call setup process again. Most ATM-based systems use permanent virtual circuits, but businesses often employ switched virtual circuits as a way of getting extra capacity on the fly—"bandwidth on demand."
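The speed of the path-ID scheme comes from how little work each switch does per cell. A toy Python model (the ports and ID numbers here are invented for illustration; real ATM switches do this in custom hardware) makes the point:

```python
# Toy model of why cell forwarding is fast: once the call is set up,
# each switch just looks up an ID in a table -- no addresses to parse.

class ToySwitch:
    def __init__(self):
        # (in_port, in_path_id) -> (out_port, out_path_id), filled in
        # during call setup; forwarding is then just a dictionary lookup.
        self.table = {}

    def connect(self, in_port, in_id, out_port, out_id):
        """Call setup: register one entry for the new virtual circuit."""
        self.table[(in_port, in_id)] = (out_port, out_id)

    def forward(self, in_port, cell_id):
        """Per-cell work: one lookup, and the path ID is rewritten hop by hop."""
        return self.table[(in_port, cell_id)]

sw = ToySwitch()
sw.connect(in_port=1, in_id=42, out_port=7, out_id=9)
print(sw.forward(1, 42))   # (7, 9)
```

During call setup each switch along the route adds one table entry; after that, every cell on the circuit is forwarded with a single lookup and an ID rewrite.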

Tech Talk

Virtual circuit: A defined path from one point in a network to another, created by software-based switching rather than the physical switching used by a regular circuit.


Paths and channels can be established in ways that determine how much bandwidth they will receive from the switches. Thus, if a channel has been guaranteed 155 Mbps, each switch on the path will keep information about bandwidth levels and will make sure that cells on that channel can move at the guaranteed rate. When combined with the fixed cell size, these guarantees ensure that video and voice can move through the network without encountering inappropriate latency. The guarantees are referred to by the generic phrase quality of service (QoS). ATM is a protocol known for its "promised" ability to provide QoS. "Promised" is in quotation marks because this function, which adds a great deal of complexity to the switches, has been difficult to implement. Quite a few of ATM's users don't actually attempt to use QoS functions because they fear doing so will bring the network down.

ATM in Today's Market

ATM hasn't been around all that long (planning took place throughout the 1980s, but products have been on the market only since about 1991, and really useful ones only since about 1994), and it was a while after that before the protocol was fully specified. This lack of complete and final specifications was a problem, since it forced vendors to solve things on their own, resulting in incompatibilities. In networking, caution is advised when mixing and matching hardware from different vendors. In the early days of ATM, this approach was particularly risky. The situation continues to improve, but network managers, whose jobs depend on every day being a boring one, tend to be skeptical of heterogeneity.

ATM has many technical strengths, and the idea that the same protocol will run from end to end through a complex network has aesthetic appeal. Even when they work, no one really likes the idea that there are systems out in the network doing nothing more than translating packet formats and addresses. In addition to its gee-whiz technology, ATM has corporate backers of unsurpassed financial and market prowess. The core of support for ATM comes from the telephone companies. It is they who developed the technology in the first place (the 53-byte cell size was their choice because it is close to the optimum for voice). Telephone companies are rapidly adding ATM to their networks; initially this was only for data, but Sprint and AT&T have begun to use it for voice as well. The long distance carriers who provide the Internet's backbone are major users of ATM, as are big computer companies like IBM. However, there is increasing interest in bypassing ATM in these long haul nets. As traffic increases, carriers are worried about the "cell tax," the overhead involved in fitting IP packets into ATM's little cells. ATM is also facing a challenge at the campus level. Its principal competitor is Gigabit Ethernet.

Gigabit Ethernet

The escalating demand for bandwidth has spawned an alternative to ATM, at least for campus-level connections. For all of ATM's advantages, it has some important negatives as well. The negatives are particularly vivid if you assume that Ethernet and Fast Ethernet will continue to be the primary LAN technologies for some time to come, and most people do make this assumption. The Ethernets are cheap, are simple to operate, and, when you pair switching hubs with Fast Ethernet, have all the bandwidth anyone can think of using at the moment. One negative in connecting Ethernet LANs with ATM is that the overhead involved in converting TCP/IP packets to ATM cells means that Ethernet at 100 Mbps and ATM at 155 Mbps have about the same effective speed. A second problem has to do with latency. A great deal of current network traffic consists of short bursts of IP packets requesting Web pages. ATM, with its call setup protocol, is less efficient at this sort of traffic than is a simpler protocol like Ethernet, which doesn't waste time establishing connections. Finally, at least for campus-level connections, ATM's ability to manage quality of service isn't really needed (yet).
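The claim that 155 Mbps ATM and 100 Mbps Fast Ethernet end up with similar effective speeds can be sanity-checked with rough arithmetic. The adaptation-layer overhead figure below is an assumed round number for illustration, not a measured one:

```python
# Back-of-envelope check: usable payload rate of a 155 Mbps ATM link
# once the cell structure and conversion overhead are counted.
line_rate = 155.52e6            # OC-3 line rate, bits per second
cell_efficiency = 48 / 53       # best case: every cell completely full
adaptation_overhead = 0.05      # assumed figure for segmentation trailers/padding
effective = line_rate * cell_efficiency * (1 - adaptation_overhead)
print(round(effective / 1e6))   # roughly 134 Mbps of usable payload
```

And that assumes every cell is full; with the short request packets typical of Web traffic, many cells travel mostly empty, and the effective rate drops further toward Fast Ethernet territory.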

Because of these factors, a number of companies began work on taking Ethernet to the gigabit level. While there were some technical problems, they have been surmounted fairly easily. The IEEE Gigabit Ethernet specification, 802.3z, requires fiber optic cable. It employs the same CSMA/CD methodology as regular Ethernet and combines it with a signaling approach pioneered by Fibre Channel. Adopting this already-tried linking protocol accelerated development. The maximum length of Gigabit Ethernet links is yet to be determined, but they will certainly reach four or so kilometers. As with Fast Ethernet, full duplex connections are easy to implement. Gigabit Ethernet network cards, links, and switches are substantially cheaper than their ATM counterparts for an equivalent level of throughput. Estimates are that once the technology matures, per port costs in the switches will be about a third of ATM's. Gigabit Ethernet developers also plan to implement some kind of QoS mechanism to compete with ATM's version, and there is talk of taking the standard to 10 Gbps.

Tech Talk

Port cost: Devices such as switches and routers come with physical ports that are used for external connections (the points where cables snap in or screw on). As a way of providing consistent comparison, the cost of the various technologies is often given as the cost "per port."


Moving Gigabit Ethernet to the LAN will be a challenge. Taking unshielded twisted-pair wire to a billion bits per second is pushing that medium to its limit. Only very carefully installed Category 5 cable is likely to do the job, and it's not clear that gigabit speed transceiver electronics will be all that much cheaper than those used for fiber. Some believe that, given the technical challenges, gigabit speeds over copper may not be competitive with fiber, even if re-cabling is factored in. The counter argument is that there is a lot of Category 5 already installed, and clever engineers will find a way to use it. No verdict at this writing.

Which is Best for Backbone and Campus Links?

How best to handle high-speed data connections, especially those needed for multimedia? Use cell switching, chant the Bellheads (ATM adherents). The advantages, they say, are obvious: ATM offers quality of service, ATM is scalable to multi-Gigabit levels, and ATM is a standard, one that can operate seamlessly from a LAN desktop across campus links, through the WAN and out to another desktop. In response, the Netheads (TCP/IP partisans) claim that ATM's QoS doesn't really work and that the best way to deal with video and voice is just to add capacity—more Gigabit Ethernet links. They also note that, again by adding more links, Gigabit Ethernet scales well and that Ethernet is more of a standard than ATM. In other words, say the Netheads, any network problem is easy to solve—you "just throw bandwidth at it."

What's the answer? Well, you may have noticed FDDI isn't on the list. So the question really is, will it be Gigabit Ethernet or ATM? At this point it looks like Fast Ethernet for the LAN, and Gigabit Ethernet for the campus, leaving ATM to duke it out with wavelength division multiplexing for the WAN. That will be the simplest and cheapest solution. However, if the ATM camp comes back with lower prices for the desktop side and more consistent standards for the network piece, we could have a real fight. Stepping back from this, however, you should recall that it isn't the technology by itself that sells equipment—it's how useful it is. ATM has technical elegance, but that isn't good for more than a few points of market share. From the demand perspective, ATM's future at the LAN and campus ends of the network depends on how fast businesses adopt streaming data applications that need really fat pipes. Examples that come to mind are integrated LAN and telephone systems, routine user-to-user video conferencing (in effect, the LAN as videophone service), and routine use of high resolution multimedia documents in a typical office (not just the limited number of places doing a specialized activity like education or training). These are the only kinds of existing applications that require ATM, and given their adoption so far, it looks good for the cheaper, simpler options. Aside from education, training, and entertainment, really high bandwidth applications are few, and the companies that find them compelling enough to actually pay for them are fewer still. That will probably change, but before it does, the technologies could change as well. Technologies that are solutions to problems not yet developed often don't win, since something better is likely to come along in the interim.

Figure 12.3. Campus or backbone network. A network that connects LANs is usually called a backbone in just one building or a campus network if there are multiple buildings. The key element of both is a high speed interconnect that both links LANs and provides access to resources shared by all—in this illustration, a server and Internet access.

