Chapter 26. Conclusions and Predictions

It’s hard to make predictions, especially about the future.

— Yogi Berra

This completes our picture of the current state of 802.11 networks. In this chapter, we’ll get out a crystal ball and look at where things are heading. First, we’ll look at standards that are currently in the works and close to completion. Then we’ll take a somewhat longer-term look and try to draw conclusions about where wireless networks are heading.

Standards Work

Publication of the 802.11 standard was only the beginning of wireless LAN standardization efforts. Several compromises were made to get the standard out the door, and a great deal of work was deferred for later. The 802.11 working group conducts its business publicly, and anybody can view its web site at http://grouper.ieee.org/groups/802/11/ to get an update on the progress of any of these revisions to 802.11. As standards development progresses, many task groups post detailed reports, including the results of votes on different proposals.

Revisions to the standard are handled by Task Groups. Task Groups are lettered, and any revisions inherit the letter corresponding to the Task Group. For example, the OFDM PHY was standardized by Task Group A (TGa), and their revision was called 802.11a.

In the time since the publication of the first edition of this book, several standards revisions have been approved. 802.11g put the number 54 on boxes throughout the world. 802.11h made the underlying technology of 802.11a suitable for use in Europe, and convinced the U.S. government to open up additional spectrum in a worldwide harmonized band. For now, 802.11i has put security concerns largely to rest, and replaced them with demands for AES-based encryption.

New Standards

Several new standards are worthy of note. 802.11 continues to be a fertile ground for the development of new technology. As a sign of its maturity, it is getting close to rolling over to double letters for its task groups!

Task group E: quality of service extensions

Compared to their wired cousins, wireless networks have limited capacity. Task group E is developing standards to provide quality of service (QoS) by operating multiple queues and reserving the medium. To further provide service quality, 802.11e will define a new coordination function, the hybrid coordination function (HCF), with new means of accessing the network. It will also define the block acknowledgment protocol to reduce the fraction of network operations devoted to overhead.
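The multiple-queue idea behind 802.11e can be sketched in miniature: one queue per access category, with higher-priority categories always served first. This is an illustration of the priority concept only, not of the 802.11e contention mechanics; the category names and the scheduler below are hypothetical.

```python
# Illustrative access categories in priority order (highest first).
# These labels mirror the QoS idea, not the 802.11e defaults.
ACCESS_CATEGORIES = ["voice", "video", "best_effort", "background"]

class QosScheduler:
    """Keep one queue per access category; always serve the
    highest-priority non-empty queue first."""
    def __init__(self):
        self.queues = {ac: [] for ac in ACCESS_CATEGORIES}

    def enqueue(self, frame, ac):
        self.queues[ac].append(frame)

    def dequeue(self):
        for ac in ACCESS_CATEGORIES:   # voice wins over video, and so on
            if self.queues[ac]:
                return self.queues[ac].pop(0)
        return None                    # all queues empty

s = QosScheduler()
s.enqueue("web page", "best_effort")
s.enqueue("voice sample", "voice")
print(s.dequeue())   # the voice sample is served first
```

In the real protocol, priority is enforced statistically through per-category contention parameters rather than by a central scheduler, but the effect on queue service order is the same.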

802.11e is taking a great deal of time to produce. (Originally, it was dedicated to both QoS and security, before security was split off into Task Group I, which has now completed its work.) Rather than hold implementations for the final standard, the industry has selected a subset of the current drafts for interim standardization as Wi-Fi Multi-Media (WMM; see http://www.wi-fi.org/OpenSection/wmm.asp). WMM is to 802.11e as WPA was to 802.11i. Both are snapshots of a standard in process.

Task group K: radio resources

Mobile telephone networks make extensive measurements of the radio network to optimize their use of radio capacity. Many 802.11 products make some efforts to monitor radio quality, but there is no standard way of doing so. Task group K is developing a standard for use with 802.11 that will enable access points to collect radio statistics and make intelligent operating decisions based on them. New measurement types are defined to allow 802.11 stations to collect information on noise distribution, the number of hidden stations, and the load on any particular operating channel.

Task group N: high-throughput (100+ Mbps) MIMO PHY

Four complete proposals were initially received by TGn. However, two of the proposals have been withdrawn, leaving only the two described in Chapter 15. In standards-committee votes, TGnSync has been drawing slightly more support than WWiSE. The two proposals are quite dissimilar, so expect a fair amount of horse-trading to create the final standard.

Products based on a TGn proposal cannot yet be called “draft 802.11n” since there is no official working draft standard at this time, and it is doubtful that a single proposal will have been selected by the time this book is in print. Products cannot label themselves “pre-N” without risking revocation of Wi-Fi certification. As a result, many products based on one proposal or the other are self-labeled as MIMO.

More distant standards

Task group P is developing extensions to 802.11 for use in automobiles, called Wireless Access in Vehicular Environments (WAVE). Cars move at much higher speeds, necessitating handoff improvements. WAVE also includes peer-to-peer networking capabilities to build a mesh between cars. Unlike many other forms of 802.11, it would use licensed spectrum. It is designed initially as a standard method of toll collection and download of safety information, although some observers think that it may eventually replace cellular communications.

Task group R is developing roaming protocols. 802.11i preauthentication is limited in that it does not reduce the computational workload of roaming. TGr is defining protocols that will enhance roaming by moving key material around the network. In the January 2005 meeting, several proposals were eliminated from further consideration, which is an important step in moving towards the final standard.

Task group S is developing mesh networking standards for use in multi-hop environments. Standards development is in a very early stage.

Task group U will modify 802.11 so that it will work with other network technologies. Its goal is similar in scope to the 802.21 working group. TGu modifies 802.11 as necessary to work with other network technologies such as third-generation cellular networks, while 802.21 works on a framework independent of any network technology.

Related standards

802.1X was originally designed for use on wired networks. Its use on wireless networks has been subject to a number of ad hoc standards that are essentially implementation agreements, and the integration of wired access control on wireless networks was messy. 802.1X-2004 specified a new version of EAPOL, and clarified the operation of the two state machines. It has not yet been widely implemented, but it will almost assuredly come to the market soon.

As more users adopt many types of wireless technology, each with its own niche of range and distance, inter-network handoff between complementary networks has moved to the fore. For example, many mobile professionals use 802.11 networks while sitting in hot spots and high-speed cellular data while in the car. Transferring a session between two disparate network types is the focus of the IEEE 802.21 working group.

Current Trends in Wireless Networking

What does the picture look like over the longer term? 802.11 has already killed off other wireless efforts aimed at the home market (such as HomeRF), and has consolidated its lock on short-range data access. As always, the larger issues for the long term are in areas such as mobility and security, both of which present problems that are not easily solved. Security, however, is moving away from a closed model of pure defense to a model that embraces the flexibility of wireless networks and uses them to provide services quickly.

Security

Security has always been the major issue associated with wireless LANs, although recent protocol work has eliminated many of the complaints specific to wireless networks. Mutual cryptographic authentication of the network and the user can now be performed using 802.1X and EAP. 802.11i has given wireless LANs the strong, trusted encryption that network managers were waiting for. WPA has made networks secure enough for practical use, and the industry has made substantial commitments to designing protocols capable of meeting stringent security standards.

Rather than isolate wireless networks, making them less functional, the new approach to network design is to integrate them into the overall network. Wireless networks improve productivity by decoupling user access from location. Early wireless networks gave away a great deal of the increased productivity by imposing access control between the wireless and wired networks, forcing users to learn new procedures for accessing data. Stronger security protocols enable the users to view the network as a single entity. Rather than attaching through cumbersome remote access procedures, users get the same network view as with direct LAN attachment.

Changing the wireless LAN security model has depended a great deal on providing user authentication. With a strong link between the user account and any network activity comes accountability, and accountability may discourage many forms of network misuse. Authentication also shifts the emphasis of wireless LAN security away from a general perimeter wall toward something more like other forms of LAN security.

Authentication protocols

Today, authentication means RADIUS. Wireless LANs have given RADIUS a new lease on life, but the fact is that it was designed for the vacuum tube era of the Internet. Most users no longer use modems for access, but the protocol designed to provide service to modem users underpins the latest LAN medium. RADIUS is ill-suited to the complexity being pushed upon it for LAN access. Anybody who has tried to use most RADIUS servers on the market can attest to the complexity foisted upon administrators through layers of obscure configuration files.

Authentication protocols will need to evolve to cope with the commoditization of IP transport. Wireless networks have driven most organizations to offer some form of network access to guests. As IP transport becomes cheaper and easier to provide, users will expect more access in more places. Authentication protocols need to adapt to more open networks by enabling access in any location. Specialized service providers such as iPass already perform this task; the research universities that have built Internet2 are working on a similar project.[118]

Building so-called “federated networks” requires authentication systems with extensive proxy capability, so that networks can authenticate guests from arbitrary organizations. I would not be surprised if an Internet authentication system evolved along parallel lines to DNS to enable an organization to find a server to authenticate visitors without having to preconfigure trust relationships.
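As a rough illustration of the proxying such a system depends on, the sketch below routes an authentication request by the realm portion of the user identifier. The realm table and server names are hypothetical; a DNS-like authentication hierarchy would replace the static dictionary with a query against a global namespace.

```python
# Hypothetical realm table. In a DNS-like system, this lookup would
# be a query against a global hierarchy rather than a local dictionary.
REALM_SERVERS = {
    "example.edu": "radius.example.edu",
    "partner.org": "radius.partner.org",
}

def route_authentication(user_id, default_proxy="radius.upstream.net"):
    """Split user@realm and pick the server that can authenticate
    the visitor, proxying upward when the realm is unknown locally."""
    if "@" not in user_id:
        return None          # no realm: a local user, authenticate here
    _, realm = user_id.rsplit("@", 1)
    return REALM_SERVERS.get(realm, default_proxy)

print(route_authentication("alice@example.edu"))  # radius.example.edu
print(route_authentication("bob@unknown.net"))    # falls back to the proxy
```

The fallback proxy stands in for a chain of preconfigured trust relationships, which is exactly the part a DNS-style discovery mechanism would make unnecessary.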

Admission control

As networks become more open to visitors and other guests, network security takes on a whole new meaning. Ensuring that machines owned by an organization are secure is a difficult task, but one with known solutions. By standardizing on a platform and carefully monitoring for security patches, employing firewalls throughout a network, and keeping antivirus and intrusion detection signatures up to date, it is possible to ensure a reasonable level of security. However, the “keep everything up to date” model breaks down if the infection vector is an external machine. While a corporation can provide access to security software for all the machines it owns, guest machines are on their own. Several new software solutions are extending the concept of network authorization to include the state of a client machine. Machines are only allowed access to the network if they can be verified “clean.” Admission control is a way of extending authorization from just the rights of a user to include the state of the user’s computing platform as well.

Rogue device control

Controlling the radio spectrum by protecting against unauthorized wireless LAN deployment has been a major thread in the security story of wireless LANs. While there has been a great deal of engineering work, the theory is essentially the same as it always has been. Detect APs that are not part of your network by monitoring the radio spectrum, and take appropriate steps to shut them down if necessary. In recent years, this work has been extended to use radios that are part of your network, or in some cases, data gathered from client devices.

Once unknown devices have been identified, they must be analyzed to determine a threat level. Wireless networks that are attached to some other backbone are not a threat at all, and should not be the target of aggressive attacks. If office space is shared with other organizations, it is certainly possible that signals will bleed through the walls of adjacent offices. In addition to being poor neighbor relations, launching attacks on somebody else’s network may qualify as an offense under computer crime laws. To provide network security, some level of threat assessment is required. Unlicensed wireless devices not attached to your network are orthogonal to your administration and security management, and must be left alone. Ad-hoc networks built by visitors or others passing by should also remain undisturbed. Arguably, properly secured access points attached to your network should be left alone as well.

As in other areas of networking, consolidation is the order of the day. Rogue detection and assessment capabilities are increasingly built into wireless infrastructure. Although many companies offer additional equipment that can perform monitoring and security services, the baseline set of functionality required is integrated into most deployments.

Deployment and Management

Wireless LANs have followed the pattern of two previous innovations. Both the personal computer and the local area network started as under-the-radar affairs out of sight of the central IT staff. Eventually, however, they became centrally managed services providing a great deal of information. Wireless LANs have moved out of the under-the-radar phase and are quickly becoming standard connectivity.

The major deployment challenges now come from the effort to move beyond a simple coverage model. Early wireless networks were designed only to cover an area. With increasing usage, however, simple coverage is no longer enough. Ensuring higher capacity, especially in environments where users are accustomed to high-bandwidth wired networks, is at the forefront of protocol development and deployment.

Planning a network

Traditional 802.11 network planning is an arduous process consisting of walking around and taking a vast number of manual measurements. As with many other technological innovations, as the underlying mechanisms are better understood, tools are developed to improve the planning process. These tools are a way of “outsourcing” radio expertise to product developers, since most network administrators will never be radio experts.

One class of tools uses floor plans and architectural knowledge to calculate the number and location for access points, while another approach uses extensive dynamic radio calibration to adapt a network to its environment. Many products use both approaches. In any case, the era in which an expensive, time-consuming site survey was required is fast drawing to a close. Site surveys are too labor-intensive and too expensive.

Just as architects needed to learn how to plan for buildings with extensive network wiring, they will learn to incorporate 802.11 into the design process. As always, it helps to set requirements as early in the process as possible. With a basic idea of requirements and walls, preliminary AP locations can be calculated with the help of a modeling tool. By feeding back this information to architects and interior designers, the building’s layout can help increase network performance while reducing hassle down the line. If AP locations are decided based purely on aesthetic criteria, or ease of installation, it is likely that network coverage will suffer.

Planning is not the end of the process, either. Computerized models are not perfect, and some changes to the preliminary design should be expected. It is not uncommon to go through multiple test and optimization cycles in building a wireless LAN. Thankfully, however, the dropping cost of access points has lessened the need to use the absolute minimum number of APs possible.

In tandem with planning physical architecture, the logical network architecture must be thought through. As wireless networks have become more common, seamless mobility is expected throughout a facility. The desire for seamless mobility is often independent of the size of the facility, which can lead to interesting challenges in extremely large buildings.

Backhaul

One of the long-standing jokes at the Interop Labs is that the wireless LAN initiatives require a great deal of Ethernet cable to connect all the access points and electrical cords to power them up. As a result, the wireless networking group uses as much wire as other technology initiatives. At this point, wireless networks depend on back-end wiring to supply both network connection and power.

Generally speaking, network connections are easier to supply than electrical power, for a variety of reasons. For simplicity, access points use one wire, and it provides both power and network. However, in some situations, the converse may be true. In large auditoriums, for example, there may be electrical wiring in the ceiling for lighting, but no network connection. Rather than force the installation of network cable for the wireless LAN, several companies are exploring using the wireless network as a backhaul. Dual-radio access points can use one radio to provide service, one radio for uplink, and depend solely on power cable for energy. Mesh backhaul technology is likely to be valuable in a variety of challenging network wiring circumstances. In situations where meeting the 100-meter Ethernet cable length limit is a pipe dream, it may be the only solution.

Mini-“regulators” and arbitrators

Disagreements over spectrum can easily erupt in 802.11, especially with the lack of capacity in the 2.4 GHz band used by 802.11b and 802.11g. Unlicensed spectrum means that anybody can use it, and there are no senior rights to the radio waves. In late 2002, an access point was installed by T-Mobile in the same area as an existing AP operated by the Portland-based Personal Telco project as part of a community network effort. There was no technical solution to the interference caused by both devices operating on the same channel, although both sides attempted to assert nonexistent claims.

Although the dispute in Portland was the best-publicized early dispute over spectrum, further problems are almost sure to arise. Many organizations attempt to control radio spectrum in some fashion. Buildings with many small offices may try to manage a single wireless LAN with multiple virtual wireless networks. Several airports have long tried to maintain a single physical wireless network while renting capacity out to the traveling public, airlines, and shops. Many such agreements were written in a way that ceded rights over the electromagnetic spectrum, and the FCC has declared all such agreements void, with a few minor exceptions.[119]

Automatic radio tuning technologies are only part of the solution. With so few channels, resolving interference to the optimum degree may be impossible. (The enthusiasm for building long-haul 802.11 networks may make the problem worse, as more 802.11 signals are sent through crowded areas.) There may be an opportunity here for technically competent individuals to assist with negotiating settlements between users. Some high-tech neighborhoods already have a small version of this problem, as adjacent houses use the same channel. Volunteers have formed neighborhood frequency allocation committees to adjust the channels used by adjacent houses to improve neighborhood performance.[120] With no legal authority, these arbitrators or “regulators” have no power to force changes, but rely instead on technical authority.

Guest access

Most importantly, though, different access controls are needed for the future. Many early wireless LANs were used just to extend corporate LANs throughout the office. Existing authentication concepts were designed for a known, static user group, such as a group of employees. Providing access to employees is a big task, but it just begins to scratch the surface of what wireless LANs can do. Just as in cellular telephony, the promise of wireless networks is installation in hard-to-reach spots and places where users are on the move, as well as the ability to connect them at an arbitrary destination.

Designing an 802.11 network for a public place such as an airport or train station requires dealing with the question of who is allowed to use the network, and what privileges they have. Network services must be authenticated, and users must be protected from each other. Providing robust services to several disjoint user groups while isolating them from each other requires some thought about network architecture.

Higher education is pushing the envelope on guest access. Research groups often span multiple institutions, and scholars often travel between multiple locations. Rather than require an account at each institution, there is a project to build a “federation” that will allow accounts from any institution in the federation to use networks at other members of the federation. Challenges involved in building the federation range from the purely technical to the nuts-and-bolts intersection of technology and process, to purely policy-related matters. On a technical level, some form of trusted authentication link must be built among federation members.[121] Federations are stuck using RADIUS, with its limitations.

Once a user account is authenticated against a home institution, however, there may be some need to isolate that user from the interior of the visited network. To provide accountability, information that identifies the user needs to be passed from the home institution to the visiting institution. Operations staff at the visited network may want more than just a name, too. If a visitor is hit by a worm or virus, it may be vital to isolate the visitor’s computer, contact the visitor, and manually disinfect it. However, if the visitor has come several thousand miles, it may be impossible to contact the visitor’s home institution for contact information. Automatic disclosure of, say, a mobile telephone number with the authentication process would assist operations staff. Automatically supplying selected pieces of information without violating privacy, however, is a major technical challenge with current authentication protocols.[122]
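A selective attribute-release policy of this kind can be sketched as a simple filter: the home institution decides which identity attributes each visited network may see. All names and attributes below are hypothetical, and real systems such as Shibboleth implement far richer policy languages than this.

```python
# Hypothetical identity attributes held by the home institution.
USER_ATTRIBUTES = {
    "name": "A. Visitor",
    "mobile_phone": "+1-555-0100",
    "employee_id": "4471",
    "home_address": "(withheld)",
}

# Release policy: only operationally useful contact details are
# disclosed to the visited network; everything else stays home.
RELEASE_POLICY = {"visited-network": {"name", "mobile_phone"}}

def release_attributes(requester):
    """Return only the attributes the policy allows this requester
    to see; unknown requesters receive nothing."""
    allowed = RELEASE_POLICY.get(requester, set())
    return {k: v for k, v in USER_ATTRIBUTES.items() if k in allowed}

print(release_attributes("visited-network"))
```

The operational contact number reaches the visited network's staff, while the employee ID and home address never leave the home institution.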

Applications

The first application for wireless networks was freedom—freedom from wires and freedom from worrying about how location affects network services. Users move, but network jacks do not. First-generation applications adopted the Ethernet metaphor and left many applications unchanged. As application developers gained experience, applications began to buffer data and expect that network connections could come and go.

Wireless networks may also have a large part to play in the push for utility computing. As applications reside “out there” on the network, more universal network access methods are required. With improved authentication methods and wireless LAN interfaces on most computing devices, the network becomes the interface to the application. Better authentication systems may also help drive towards single-sign-on capabilities for applications.

Location

Early applications on wireless networks were simply applications from wired networks. Newer applications are likely to make much more out of the location awareness of wireless networks. Conferences often take over huge buildings, and considerable investment is made in signs and guides to keep attendees on the move to wherever they wish to go. Such events are already working on using the wireless network to provide location awareness to enable customized walking directions and “what’s near me” navigation applications. IBM Research developed an office system that tracks people and provides location updates at cubicles.[123] (With luck, future location-based innovations will feel less like Big Brother.)

Voice

After many years of predictions, voice over IP has finally arrived. Several service providers are now able to offer voice quality that is better than cellular to the home over DSL. After years of naysaying, consumers are rushing to adopt technology that provides a less-than-stellar call quality, but has additional flexibility.

802.11 is a strong contender for the next-generation cordless phone protocol. Right now, consumers who wish to use both cordless telephones and 802.11 networks need to purchase carefully and ensure that telephones operate in a different frequency band (sometimes by going to garage sales or eBay to hunt down older 900 MHz cordless phones!), or hope that the cordless phone coexists with 802.11. By using VoIP, a cordless phone can share the same network, alleviating worries. Further consumer electronics development using 802.11 will drive down the cost of chips and create a virtuous cycle.

One major shortcoming of 802.11 VoIP devices I expect to see addressed very quickly is the lack of any real authentication capabilities on the handsets. Up until this point, most handsets have not been capable of anything other than MAC filtering. 802.1X authentication with a cryptographic EAP method is a practical minimum security level. If such a phone used the Session Initiation Protocol (SIP), it could be used in an 802.1X-enabled hot spot. Rather than add expensive licensed capacity, mobile telephone carriers could add capacity in dense areas by offloading telephone calls on to a cheaper 802.11 network. Handing calls between two disparate infrastructures is a decidedly nontrivial task, however.

Datacasting

Wireless networks are inherently a broadcast medium. Just as in Ethernet, all frames in an 802.11 network are distributed to all stations and frame filtering rules are applied. The difference between Ethernet and 802.11 is that there is no easy radio analog to switching. 802.11 frames still travel in many directions and cannot be beamed with laser-like focus to a particular receiver. By using multicast frames, it is possible to build applications that provide data broadcasting to multiple receivers. With appropriate reliability protocols, it may be possible to build small-scale broadcasting capabilities into wireless LANs. In a twist on some television stations using spare digital TV bandwidth to send data, wireless LANs might become a short-range video distribution mechanism to many receivers. Several consumer electronics companies have joined the 802.11 standards process, so video distribution over 802.11 is not totally farfetched.
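In IP terms, small-scale datacasting maps naturally onto UDP multicast. The sketch below, using Python's standard socket module, configures a sender and a receiver for an administratively scoped multicast group; the group address and port are arbitrary choices for illustration, and any reliability protocol would sit above this layer.

```python
import socket
import struct

MULTICAST_GROUP = "239.255.0.1"   # administratively scoped; chosen for this sketch
PORT = 5000

def make_sender(ttl=1):
    """UDP socket configured for multicast transmission; TTL 1
    keeps datagrams on the local network."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL,
                    struct.pack("b", ttl))
    return sock

def make_receiver(group=MULTICAST_GROUP, port=PORT):
    """Join the group so the kernel delivers the 'datacast' frames."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", port))
    mreq = struct.pack("4sl", socket.inet_aton(group), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    return sock

sender = make_sender()
# sender.sendto(b"program data", (MULTICAST_GROUP, PORT)) would reach
# every station that has joined the group with a single transmission.
```

On a wireless LAN this economy is especially attractive: the frame is broadcast over the air exactly once regardless of how many receivers have joined.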

Protocol Architecture

One common theme in 802.11 is that too much is under the control of the client. In 802.11, all stations are created equal. If nineteen users are associated to an access point, the AP is responsible for 5% of the protocol. Arguably, the network infrastructure should be responsible for at least half of the protocol. Recent task groups are moving in that direction, with more centralized control to improve roaming, handoff, and service quality.

Federations and mobility

Federations and mobility are tied in together in terms of protocol architecture because they are both related to network sharing, or the portability of devices between different administrative networks. To borrow a term from mobile telephony, these concepts are driving the separation of the data plane (which moves user data) from the control plane (which provides authentication and authorization, and sets up the network for the data plane).

When European telephony experts wrote the standards on which modern second-generation cellular networks are built, it was explicitly recognized that no single telecommunications carrier had the financial resources to build a pan-European cellular network. Wireless telephony had previously been held back by a plethora of incompatible standards that offered patchwork coverage throughout parts of Europe. Experts realized that the value of carrying a mobile telephone was proportional to the area in which it could be used. As a result, the GSM standards that were eventually adopted emphasized roaming functions that would enable a subscriber to use several networks while being billed by one network company.

As usage of 802.11 has grown, authentication and cross-network roaming standards have become much more important. When the incredible cost of third-generation licenses pushed a number of cellular carriers to the brink of bankruptcy, they responded by sharing the data-carrying infrastructure to share the cost and risk of an expensive third-generation network build-out. Many 802.11 hot spot providers did the same, grouping into loose federations that allow users to access other providers’ networks.

As such arrangements become more common, the utility of federations will extend beyond service providers to include companies with joint ventures and research organizations. Protocols to authenticate and manage visiting users will need to be developed because RADIUS does not scale or provide the features necessary.

As 802.11 becomes much more common, its advantages have become clear to organizations that are familiar with other network technologies. Attracted to cheap, unlicensed spectrum, many telephone companies have established their own service provider arms to offer 802.11 services. Incumbent telephone companies use 802.11 networks to sell WAN services. Mobile telephone operators are likely to start using 802.11 as cheap network expansion. The latter is already here, with several 802.11/3G hybrid phones announced. Combining 802.11 with a wider-area technology is quite natural, since it offers cheap abundant capacity in concentrated spots, while leaving the long-range network to a better matched technology.

Enabling quicker mobility while reducing overhead is the focus of several standards groups. Although 802.11i preauthentication can dramatically cut the time required to move from one AP to the next, it does not reduce the workload on the network to do so. A full 802.1X authentication is still required to establish the pairwise master key. As a result, there is still the same workload on the RADIUS server, and preauthentication across the WAN in a federated environment requires responsiveness across potentially large geographic (and network) distances. Finding a way to move keys around the network rather than rederive them on each AP handoff will be very important in Internet-scale mobility. These protocols are being developed in Task Group R.

Future protocols

Wireless networks have a great deal of flexibility. With that flexibility comes a great deal of network administration overhead. Automatic discovery of network capabilities must extend past the low-level wireless parameters so that networks are able to announce how authentication is performed.

Early on, access points were defined by the capabilities of the underlying hardware. Newer radios have much more software functionality, and are embedded in access points that use even more software. As access points continue down the road of becoming almost completely defined by software, the market will further split between the high end and the low end. The low end will be little more than barely-modified reference designs manufactured in large quantities, and the high end will be a reference design that runs highly customized software.

In preparation for the AP becoming a platform for execution of 802.11 code, the IETF has chartered a working group to develop a protocol for the Control and Provisioning of Wireless Access Points (CAPWAP).[124] CAPWAP has produced a problem statement in RFC 3990, and is in the process of defining architectures and objectives for networks with lightweight access points. The protocol for AP control is expected in January 2006 as of this writing. As access points commoditize and tunneling protocols converge on a standard, it is likely that one of the Linux-based APs will receive a firmware update that enables the use of a standard tunneling protocol. In the meantime, I would not be surprised to see an effort made to develop new firmware for open-source APs that enables them to work with newer controllers.

The End

At this point, there is no way to prevent the spread of Wi-Fi. In the years since the first edition of this book, wireless networking has gone from an interesting toy to a must-have technology. Companies use it to improve productivity and attract employees, just as universities use it to attract students. With the dropping cost of chips and network cards, any laptop owner who wants connectivity can get it.

Network wires will remain for the tasks they are best suited for. Fixed computing resources that do not move will stay wired up, and high capacity networks must be constrained to operate along cables. Wireless networking, however, seems poised to continue its march towards the standard method of network connection, replacing “Where’s the network jack?” with “Do you have Wi-Fi?” as the question to ask about network access.



[118] See the Internet2 web page at http://security.internet2.edu/fwna/.

[119] See the FCC document DA-04-1844 of June 2004, available for download at http://hraunfoss.fcc.gov/edocs_public/attachmatch/DA-04-1844A1.pdf. The order “reaffirm[s] that ... the FCC has exclusive authority to resolve matters involving radio frequency interference [RFI] when unlicensed devices are being used, regardless of venue ... [and that] ... the rights that consumers have under our rules to install and operate customer antennas ... apply to the operation of unlicensed equipment, such as Wi-Fi access points...”

[120] See the Associated Press story at http://community.bouldernews.com/business/02bwire.html.

[121] The EduRoam project (http://www.eduroam.org) is one such effort. Others are underway in different locations.

[122] The Internet2 consortium is developing a software system called Shibboleth (http://shibboleth.internet2.edu) precisely because no commercial solution was sufficient.

[123] The cubicle, codeveloped with office furniture maker Steelcase, is described at http://www.research.ibm.com/bluespace/.

[124] The working group’s home page is http://www.ietf.org/html.charters/capwap-charter.html.
