Bandwidth Requirements of Network Segments

In the past, client/server networks were dimensioned to follow the 80/20 rule: 80% of the total network traffic was expected to be exchanged within the local segment, and 20% was expected to cross bridges to systems on other segments. The proliferation of SAP R/3, the consolidation of systems in the data center, and the Internet have made this rule obsolete. Today, the pattern of information flow through most local area networks can be viewed as a tree structure:

  • The end-user connections at the offices and factory floor form the leaves—each sending and receiving a relatively small part of the overall information juice at a time.

  • The workgroup connections of a building floor form the branches—concentrating the specific communication demand of a department.

  • The campus backbone connection, where all data communication of an enterprise is accumulated into broad and steady streams, forms the trunk.

  • The server connections in the data center act as the roots—where the information flow is again distributed.

The structure of bandwidth demand at the different levels of the network infrastructure also follows a tree structure. As mentioned previously, dedicated 10-Mbps bandwidth to a single user's PC is usually sufficient. Individual data streams are accumulated at the workgroup level, where higher network bandwidth is needed; for workgroups of 12 to 24 users, 100 Mbps is sufficient in most cases. This is also the level where the network management tools and methods introduced later for prioritization and broadcast reduction have to be implemented. At the building level, the data streams from the workgroup level are accumulated a second time and again need higher bandwidth, provided by multiple 100-Mbps or 1000-Mbps links. In the end, all data streams of a campus reach the data center, where they are distributed to the server systems executing the applications essential to run the business. These servers are connected to the data center's network backbone with multiple high-speed links.
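
This bandwidth hierarchy can be illustrated with a rough back-of-the-envelope calculation. The following sketch accumulates per-user traffic up the tree; all figures are illustrative assumptions rather than measured values.

```python
# Rough sketch of the bandwidth hierarchy described above.
# All traffic figures are illustrative assumptions, not measurements.

USERS_PER_WORKGROUP = 24        # leaves feeding one workgroup switch
WORKGROUPS_PER_BUILDING = 10    # branches feeding one building uplink
BUILDINGS_PER_CAMPUS = 5        # buildings feeding the campus backbone

AVG_LOAD_PER_USER_MBPS = 0.5    # assumed average load of a single user PC

def accumulated_load(streams: int, load_per_stream_mbps: float) -> float:
    """Sum of the individual data streams at an aggregation point."""
    return streams * load_per_stream_mbps

workgroup = accumulated_load(USERS_PER_WORKGROUP, AVG_LOAD_PER_USER_MBPS)
building = accumulated_load(WORKGROUPS_PER_BUILDING, workgroup)
campus = accumulated_load(BUILDINGS_PER_CAMPUS, building)

print(f"Workgroup uplink load: {workgroup:7.1f} Mbps (100-Mbps link)")
print(f"Building uplink load:  {building:7.1f} Mbps (multiple 100/1000-Mbps links)")
print(f"Campus backbone load:  {campus:7.1f} Mbps (data center server links)")
```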

These new communication patterns require a rethinking of local network concepts and architectures. In the past, different technologies were used to meet the bandwidth demands of the various levels, causing disruptions in the data path, high latency, and complex, difficult-to-manage structures. A high-performance network infrastructure for SAP with a reasonable TCO requires a network that provides a highly scalable bandwidth hierarchy to each part of the information tree using one single technology throughout. The available LAN technologies are discussed with respect to this demand in the following sections.

Network Technologies—Lord of the Rings versus the Ether

The fundamental difference between local area network technologies is the way the physical media (wires or fiber) is accessed, typically known as the media access control (MAC) method. In the past, all local network technologies were based on sharing a common transport medium among a user community. Network communication is similar to CB radio (walkie-talkie) communication: transmission from more than one source at the same time causes collisions, resulting in a breakdown of all communication. Therefore, any network based on shared media needs algorithms to govern access to it.

The basic technologies developed for local area networks can be divided into two types: deterministic, like Token Ring or FDDI, and statistical, like Ethernet with its high-speed variants Fast and Gigabit Ethernet. The Institute of Electrical and Electronics Engineers (IEEE) standardized these algorithms in the IEEE 802 series (the corresponding ISO standard is 8802). ATM is an exception because it is based on telephone switching technology; there is no shared medium and therefore no media access control is necessary (telephone systems use multiplexing technologies for media sharing).

Token Ring (IEEE 802.5)

Token Ring technology uses a token-passing method to control access to the media. A token is a special bit pattern that acts like a ticket, enabling its owner to send a message across the network. With only one token for each network, there is no way for two computers to transmit messages at the same time, eliminating network transmission collisions. The benefit of such a deterministic approach is that transmission times are predictable. Token Ring is sufficient to support SAP end-user connectivity for a medium number of users. However, for the SAP backend network, a 16-Mbps Token Ring is likely to cause a performance bottleneck.
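
The predictability of token passing can be shown with a simple worst-case calculation. The sketch below is a toy model; the station count, maximum frame size, and ring latency are assumptions chosen only to illustrate that the waiting time is bounded and calculable.

```python
# Toy model: worst-case token wait time on a shared token ring.
# All figures are assumptions for illustration only.

RING_SPEED_MBPS = 16        # equals bits per microsecond
STATIONS = 50
MAX_FRAME_BYTES = 4500      # assumed maximum frame size
RING_LATENCY_US = 100       # assumed total propagation and station delays

# Time one station may hold the token: transmitting one maximum-size frame.
hold_time_us = MAX_FRAME_BYTES * 8 / RING_SPEED_MBPS

# Worst case: every other station transmits a full frame before the token returns.
worst_case_wait_us = (STATIONS - 1) * hold_time_us + RING_LATENCY_US

print(f"Token hold time per station:   {hold_time_us:.0f} microseconds")
print(f"Worst-case wait for the token: {worst_case_wait_us / 1000:.1f} milliseconds")
```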

TIP

Migration Strategies for Existing Token Ring Environments

Organizations running Token Ring face the question of a migration to Ethernet when expanding their network infrastructure. As mentioned before, existing Token Ring cabling can be reused for Ethernet in most cases, but all Token Ring equipment has to be replaced because Token Ring and Ethernet are incompatible with each other. The easiest migration strategy is a big-bang approach.

In some cases, however, investment and time restrictions make a gentler, phased migration necessary. For this concept, a Fast Ethernet overlay network is added to the existing Token Ring infrastructure (see Figure 10-1). This way, new PCs equipped with an Ethernet port on the motherboard can be deployed side by side with older Token Ring-equipped PCs. The application and utility servers have separate network interface cards for connections to both networks, and a translational router provides the interconnectivity. With such a strategy, the network segments can be migrated over time.


Figure 10-1. Token Ring Infrastructure with Fast Ethernet Overlay Network


Ethernet (IEEE 802.3)

The Ethernet media access method is simple: transmit into the common shared ether (called a collision domain or segment), and resolve the problems caused by collisions rather than prevent them. To quote Bob Metcalfe, the father of Ethernet: "Ethernet works in practice, but not in theory."

Collision domains are made up of hubs, wires, and the computers connected to them; a collision domain ends at a bridge, switch, or router port. The collision likelihood rises with smaller packet sizes and with rising network load until the network finally stalls. With today's switching technology, however, this effect can be removed easily by reducing the collision domain to a few users. Micro-segmentation reduces the collision domain to 5 to 12 users. In a fully switched network with a single user per LAN segment (each user has a dedicated switch port), no collisions happen at all.
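
The benefit of micro-segmentation can be approximated with a simple probabilistic toy model. The per-station transmission probability below is an arbitrary assumption, and real CSMA/CD behavior (carrier sensing, backoff) is more complex, but the trend is the same: fewer stations per segment means fewer collisions.

```python
# Toy model: probability that a transmission attempt collides on a shared
# Ethernet segment, as a function of the number of stations sharing it.
# Assumes each station independently transmits in a given time slot with
# probability p (a simplification of real CSMA/CD behavior).

P_TRANSMIT = 0.05   # assumed per-station transmission probability per slot

def collision_probability(stations: int, p: float = P_TRANSMIT) -> float:
    """Probability that at least one other station transmits in the same slot."""
    return 1 - (1 - p) ** (stations - 1)

# Shared hub -> micro-segmentation -> dedicated switch port per user
for n in (48, 24, 12, 5, 1):
    print(f"{n:2d} stations per segment: collision probability {collision_probability(n):6.1%}")
```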

TIP

Reuse Existing Hubs for Micro-Segmentation

In the past, organizations deployed stackable and large modular hubs to provide shared bandwidth to hundreds of users. Under certain circumstances, this equipment can be reused in a micro-segmentation approach. Stackables can be de-stacked by removing the stack cables; in large modular hub installations, the modules can be isolated from the chassis backplane to form separate micro-segments. To enable connectivity, one port of each hub should be connected to a switch. The drawbacks of this investment-saving solution are the loss of manageability and the fact that each segment is still shared media.


After years of heated debate, the battle between Ethernet and Token Ring is over, and Ethernet is king of the local area network. Why is Ethernet so dominant? In the beginning, Ethernet was restricted to bus topologies with coaxial cabling (10Base2 and 10Base5). In 1985, Hewlett-Packard developed 10BaseT Ethernet for star topologies using common twisted-pair telephone cabling. The ability to use existing cabling infrastructures for high-bandwidth data transmission marked the breakthrough for Ethernet as the low-cost front-end network technology of choice. The development of switching technology, in conjunction with Fast and Gigabit Ethernet scalability, created a smooth migration path to a homogeneous bandwidth hierarchy, making Ethernet the superior choice. As with public transport, people don't like to use the deterministic but expensive train when they can take the cheaper car, even at the risk of a traffic jam.

FDDI (ISO 9314)

The classical solution for failure-tolerant network infrastructures used the Fiber Distributed Data Interface (FDDI). FDDI deploys redundant fiber links providing 100 Mbps shared among all connected computers; however, half of the cabling and ports are standby only. As a shared technology with no migration path to higher bandwidth or switching technology, FDDI is more and more being replaced by Fast and Gigabit Ethernet.

TIP

Reuse Existing FDDI Cabling for Ethernet Backbones

Most FDDI rings deploy physical star topologies. By replacing the FDDI bridges and concentrators with workgroup and backbone switches, the existing fiber cables can be salvaged easily. Migrating the logical ring (bandwidth shared by the whole facility) to hierarchical star links (dedicated bandwidth for each connection) multiplies the available bandwidth. With Ethernet trunking, the fibers of the backup ring become productive, doubling the bandwidth while still providing the same level of redundancy.


ATM (CCITT I.361)

For many years, asynchronous transfer mode (ATM) was advertised by the network equipment vendors as the technology of the future. ATM has its roots in telephone technology and is therefore a Consultative Committee for International Telephone and Telegraph (CCITT) standard. Asynchronous refers to the lack of the special synchronization signal used in common telephone technology; synchronization is derived from the ATM cell stream itself. Therefore, empty cells have to be transmitted when there is no traffic on the connection.

One of the most praised features of ATM is its quality-of-service (QoS) implementation, which prioritizes time-critical voice and video over data traffic. The QoS algorithm allocates network bandwidth to individual applications according to their known bandwidth requirements. However, in cases of network congestion, there is simply no connection for this type of traffic, just as in the standard telephone network. Common LAN technologies merely degrade in performance in such cases, but connectivity is at least maintained.

ATM was developed to integrate voice, data, and video services, as well as to provide local and wide area connectivity. However, the differing requirements of voice, video, and data services forced several compromises, which cause severe drawbacks. For delay-sensitive traffic, such as voice, very small data packets are needed. Therefore, a fixed cell size of 48 bytes of payload plus a 5-byte header was chosen (see Figure 10-2). The much larger data frames must be fragmented and reassembled at the connection points between ATM and data-oriented networks. This causes a relatively poor payload-to-overhead ratio, which reduces the real network bandwidth available. Under practical conditions, a 155-Mbps ATM link provides a real payload of approximately 80 Mbps.

Figure 10-2. ATM (CCITT I.361) Integrates Data, Voice, and Video Signals
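
The "cell tax" caused by the fixed 53-byte cell can be quantified with a short calculation. The sketch below shows the pure header overhead; the extra-overhead factor that brings the result close to the 80-Mbps figure quoted above (SONET framing, adaptation-layer overhead, padding of partially filled cells) is a rough assumption for illustration only.

```python
# ATM "cell tax": how much of a 155-Mbps link is left for user payload.
# The combined AAL/SONET/padding overhead factor is an assumption.

LINK_RATE_MBPS = 155.52
CELL_BYTES = 53          # 48 bytes of payload plus a 5-byte header
PAYLOAD_BYTES = 48

# Pure cell-header efficiency
header_efficiency = PAYLOAD_BYTES / CELL_BYTES
print(f"Cell header efficiency:      {header_efficiency:.1%}")
print(f"Payload after cell tax only: {LINK_RATE_MBPS * header_efficiency:.1f} Mbps")

# Additional losses: SONET framing, AAL trailer, padding of the last cell
# of each fragmented data frame. Assumed combined factor for illustration.
ASSUMED_EXTRA_OVERHEAD = 0.4
practical_payload = LINK_RATE_MBPS * header_efficiency * (1 - ASSUMED_EXTRA_OVERHEAD)
print(f"Rough practical payload:     {practical_payload:.1f} Mbps")
```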


The attempt to provide affordable ATM end-to-end connectivity cannot be considered a success. Therefore, a mixed environment has to be deployed, with end-user access provided by Ethernet connections. However, the common broadcast-based network features must then be emulated by the ATM backbone. Combined with the need for MAC-to-ATM address resolution caused by the incompatible address schemes, this results in additional complexity and latency.

Compared with Ethernet, costs for ATM equipment are higher and the setup is more complex. ATM may be the technology of choice when the joint transport of video, voice, and data traffic over a single network is necessary. In most other cases, however, the implementation of ATM should be carefully evaluated beforehand.

Fast, Giga, and More Ethernet

For a while now, the constant increase in bandwidth of Ethernet technologies has been taken for granted. From standard Ethernet at 10 Mbps came Fast Ethernet at 100 Mbps, and soon thereafter Gigabit Ethernet at 1000 Mbps. Optical wide area networks provide a bandwidth between 2.5 and 10 Gbps per channel using wavelength division multiplexing (WDM). Multi-terabit technologies are already being tested in the labs and made ready for the mass market. What seems like a natural evolution toward faster network speeds has, due to the physics of high-frequency electronics, some consequences that cannot be ignored.

The data communication signals are exposed to various effects with a negative impact on the lowest physical network layer. These include, among others, the jitter (delay variation) of optical and electrical impulses on the way from the sender to the receiver. Special error coding methods are used to ensure that the bytes received are identical to those originally sent. Because these types of errors increase in proportion to the frequency on the wire, the achievable bandwidth or network throughput is limited. Achieving more bandwidth is not as easy as simply increasing the frequency of the network components.

Fast and Gigabit Ethernet were developed so quickly because they didn't employ any fundamentally new technologies. They simply combined existing and proven error coding methods for high-frequency transmission with the Ethernet algorithm. Fast Ethernet is based on transmission technology from FDDI; for Gigabit Ethernet, Fibre Channel coding algorithms have been adopted. However, the required carrier frequency initially exceeded the limits of common Category 5 copper cabling, so Gigabit Ethernet was at first only offered for fiber optic cabling. Since then, the problem has been solved by using all four wire pairs of a Category 5 cable in parallel.
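
The borrowed coding schemes carry their own overhead on the wire. As an illustration, the codes usually cited for these standards (4B/5B inherited from FDDI for Fast Ethernet, 8B/10B from Fibre Channel for Gigabit Ethernet; neither is named explicitly above) imply the following signalling rates:

```python
# Signalling rate implied by the block codes commonly associated with
# Fast and Gigabit Ethernet (4B/5B and 8B/10B respectively).

def signalling_rate_mbaud(data_rate_mbps: float, data_bits: int, code_bits: int) -> float:
    """Line rate required when every data_bits are expanded to code_bits on the wire."""
    return data_rate_mbps * code_bits / data_bits

print(f"Fast Ethernet    (100 Mbps, 4B/5B):   {signalling_rate_mbaud(100, 4, 5):.0f} MBaud")
print(f"Gigabit Ethernet (1000 Mbps, 8B/10B): {signalling_rate_mbaud(1000, 8, 10):.0f} MBaud")
```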

One essential point about these technologies is that frame size and structure did not change. Because of this, only minimal latency is added whenever the network speed changes (say, from 10 to 100 Mbps or back) at a switch.

Autonegotiation

So-called Nway autonegotiation is a feature that automatically integrates older, low-speed equipment into high-speed networks. Autonegotiating switches or network interface cards detect the maximum bandwidth supported by the device at the other end of a link. In a second step, support for full duplex transmission is also tested. According to the result, the device switches into the fastest mode supported on this link. Autonegotiation considers only the active components; the quality of the cabling between the devices is not measured. Nway autonegotiation is an IEEE 802.3 standard (ANSI/IEEE Standard 802.3 MAC Parameters, Physical Layer, Medium Attachment Units, and Repeater for 100 Mbps). You may find terms like auto-sensing, auto-detection, or speed-sensing on network equipment vendors' data sheets. These features serve the same purpose as autonegotiation, but the implementations are incompatible in most cases: autonegotiation is based on microcode implemented at the chip-set level, whereas auto-sensing uses software drivers. To make things even more confusing, some vendors produce autonegotiation as well as auto-sensing NICs.
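
Conceptually, autonegotiation resolves to the highest capability advertised by both link partners. The following sketch models only this idea; the real protocol exchanges link code words at the chip level, and the priority list here is a simplified assumption.

```python
# Simplified model of how autonegotiation picks an operating mode:
# both partners advertise their capabilities, and the highest common
# entry in a fixed priority list wins. This illustrates the idea only;
# it is not the actual IEEE 802.3 link code word exchange.

PRIORITY = [                      # highest priority first (simplified list)
    "1000BaseT full duplex",
    "1000BaseT half duplex",
    "100BaseTX full duplex",
    "100BaseTX half duplex",
    "10BaseT full duplex",
    "10BaseT half duplex",
]

def negotiate(local: set, partner: set) -> str:
    """Return the best mode advertised by both sides."""
    for mode in PRIORITY:
        if mode in local and mode in partner:
            return mode
    return "no common mode"

switch_port = {"100BaseTX full duplex", "100BaseTX half duplex", "10BaseT half duplex"}
old_nic = {"10BaseT half duplex"}

print(negotiate(switch_port, old_nic))   # -> 10BaseT half duplex
```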

TIP

Disable Autonegotiation

There is an additional issue when two auto-sensing devices are connected to each other. In these cases, negotiation may end up at 10 Mbps and half duplex because of misunderstandings between the two 100-Mbps, full-duplex auto-sensing devices. To ensure that full speed is achieved, it is recommended to manually configure at least one device to full duplex and the nominal bandwidth (100 or 1000 Mbps), that is, with auto-sensing disabled. We found examples where a transfer rate of only 30 Kbps was reached on a Fast Ethernet connection; after setting the NIC of the server to full-duplex Fast Ethernet and disabling autonegotiation, the transfer rate rose to 7 Mbps.


There is no auto-sensing available for fiber links. The IEEE standards 10Base-FL for 10 Mbps and 100Base-FX for 100 Mbps over fiber cabling use different wavelengths. A new standard, 100Base-SX, proposed by the Telecommunications Industry Association (TIA), is intended to use the same wavelength (850 nanometers) as 10Base-FL, providing transmission up to 300 meters with low-cost light-emitting diodes (LEDs). Under these conditions, auto-sensing for fiber may become a viable option.

Full Switched—Full Duplex?

In a fully switched connection, the send and the receive paths can be used for transmission at the same time. Vendors of switching hardware claim that this feature doubles the bandwidth of the connection. Realistically, this benefit only applies to links between switches and to multiprocessor systems; in a common PC, the single CPU cannot process incoming and outgoing messages at the same time.


Is High Speed Shrinking the Network?

There is an issue with the CSMA/CD access protocol when network bandwidth is increased by an order of magnitude: it is the collision detection algorithm, rather than the physics of the media, that restricts the diameter of a collision domain. Every shift of the decimal point toward higher bandwidth shifts the decimal point of the maximum network size in the opposite direction. The maximum size of a collision domain is determined by the requirement that the nodes furthest apart, sending messages to each other simultaneously, can still detect the collision within the transmission time of a frame. Because this was especially a problem with small packets over longer distances, DIX Ethernet (Digital/Intel/Xerox), the granddaddy of Ethernet, introduced a minimum packet size of 64 bytes. With this, a time slot of 51.2 microseconds was standardized as the maximum round-trip delay between both ends of the network. In practice, network sizes between 2.5 and 4 km are possible, depending on the configuration.

Raising the bit rate without changing the packet size shrinks this time slot. Because the signal propagation velocity is a fixed physical constant, the distance the packets are allowed to travel has to be restricted in order to detect collisions in time. When 100Base-T was defined, the repeater network diameter shrank from 2 km to 200 meters, with a maximum of two repeaters, due to the timing necessary to detect collisions. At 1000 Mbps, the maximum network diameter that would still allow reliable collision detection with 64-byte packets is about 10 meters! The negative effect of shrinking collision domains was the reason for deploying a token-passing algorithm in the first high-speed solution: FDDI.
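
The shrinking collision domain follows directly from this time slot. The sketch below recomputes a rough upper bound for the three Ethernet speeds; the propagation speed is an assumed typical value for copper, and the standards additionally budget repeater and transceiver delays, so the real limits quoted above are considerably smaller.

```python
# Rough upper bound on the collision domain diameter from the CSMA/CD slot time.
# Propagation speed is an assumed typical value; the standards also budget
# repeater and transceiver delays, so real limits are considerably smaller.

MIN_FRAME_BITS = 64 * 8            # 512-bit minimum frame
PROPAGATION_M_PER_US = 200.0       # assumed ~2/3 of the speed of light, in meters per microsecond

for rate_mbps in (10, 100, 1000):
    slot_time_us = MIN_FRAME_BITS / rate_mbps            # 51.2 us at 10 Mbps
    round_trip_m = slot_time_us * PROPAGATION_M_PER_US   # signal must travel out and back
    diameter_m = round_trip_m / 2
    print(f"{rate_mbps:5d} Mbps: slot time {slot_time_us:6.2f} us, "
          f"theoretical maximum diameter ~{diameter_m:6.0f} m")
```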

The problem of shrinking collision domains is addressed by modern switching technology. In point-to-point architectures, neither carrier sensing (CS) nor collision detection (CD) is necessary, because collisions cannot happen. Therefore, replacing hubs with switches makes much more sense than replacing 10BaseT hubs with 100BaseT hubs. In a fully switched environment, the distance limitation imposed by the collision detection time slot is eliminated, and distances are restricted only by the physical limitations of the media or cabling. Using all four pairs of a Category 5 UTP cable, even Gigabit Ethernet connections of up to 100 meters are supported.

Link Aggregation

Several vendors developed proprietary trunking technologies that group together multiple 100-Mbps or 1-Gbps point-to-point links. To the server operating system, the aggregated network interfaces look like one logical interface with a single MAC and IP address. Smart software drivers delivered with the network interface cards (NICs) coordinate the distribution of load across the individual interfaces. Link aggregation, or trunking, enables load balancing across parallel links, providing higher performance and redundant parallel paths. In case of a link failure, traffic is automatically redirected to the remaining links within the trunk. Cisco's Fast EtherChannel is supported by most NIC vendors, and the IEEE 802 group is working on a vendor-neutral specification, 802.3ad, under the name of link aggregation.
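
Load distribution across a trunk typically works by hashing some property of each conversation, so that all frames of a flow stay on the same physical link and arrive in order. The following sketch illustrates the idea only; the hash input and link count are arbitrary choices, not a specific vendor's algorithm.

```python
# Minimal sketch of flow-based load distribution across an aggregated link.
# Hashing on the address pair keeps all frames of a conversation on one
# physical link; the actual hash function is vendor specific.

from zlib import crc32

LINKS = 4   # assumed number of physical links in the trunk

def choose_link(src_mac: str, dst_mac: str, active_links: int = LINKS) -> int:
    """Map a conversation (address pair) to one of the active links."""
    return crc32(f"{src_mac}-{dst_mac}".encode()) % active_links

flows = [
    ("00:a0:24:01:02:03", "00:10:5a:aa:bb:cc"),
    ("00:a0:24:01:02:04", "00:10:5a:aa:bb:cc"),
    ("00:a0:24:01:02:05", "00:10:5a:aa:bb:dd"),
]
for src, dst in flows:
    print(f"{src} -> {dst}: link {choose_link(src, dst)}")

# After a link failure, the driver redistributes flows over the remaining links.
print(f"Fallback mapping: link {choose_link(*flows[0], active_links=LINKS - 1)}")
```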

Link aggregation is a feature that needs to be configured explicitly; otherwise, the spanning tree algorithm puts parallel links into cold standby mode to avoid broadcast storms. Only parallel point-to-point links between two devices are supported (see Figure 10-3).

Figure 10-3. Link Aggregation

