Chapter 6. QoS for 802.11 Wireless LANs—802.11e

The IEEE 802.11 working group chartered the 802.11e task group with the responsibility of enhancing the 802.11 MAC to include bidirectional quality of service (QoS) to support latency-sensitive applications such as voice and video. Also, new breeds of consumer electronics are looking to 802.11 as a wire replacement. Products such as cable and satellite receivers might one day send high-definition TV (HDTV) signals via 802.11 to TVs. DVD players and digital-video recorders might do the same. Imagine this scenario in a home environment where 802.11 is used for home networking as well as for wire replacement from the DVD player and satellite receiver to the TV. These new applications for 802.11 require an effective QoS mechanism to ensure that their latency-sensitive audio/visual data has priority over data such as Internet e-mail and web browsing. It would be irritating to have the movie you are watching interrupted because of e-mail or web traffic!

This chapter describes how the IEEE 802.11 working group is addressing the requirement for QoS by reviewing the challenges of providing effective QoS in 802.11 networks, examining the QoS mechanisms in the text of the 802.11e draft standard, and discussing admission control.

Challenges for QoS in 802.11 Networks

802.11 networks work well for low-bandwidth, latency-insensitive data applications. Barcode scanners, personal digital assistants (PDAs), or laptops accessing file, web, or e-mail services can do so without the physical constraint of network cables or a significant loss of performance. But as enterprises start to embrace wireless LAN (WLAN) deployments, and as vertical-market deployments such as healthcare and retail mature, support for Voice over IP (VoIP) and video over the wireless medium becomes mandatory.

If you think about it, it makes a lot of sense. Using VoIP over wireless can reduce the usage of cell phones in the work environment (where the company pays an airtime fee). This reduced use of cell phones gives network administrators a tangible dollar value to develop a return on investment (ROI) for a WLAN deployment.

QoS is a relatively mature technology for wired networks and is generally available on routers, switches, and end devices such as wired IP phones. For 802.11 WLANs, the opposite is true: QoS is an emerging technology that is hotly debated within the IEEE and the WLAN industry as a whole. The key challenges for a QoS mechanism in 802.11 networks include the following:

  • A half-duplex medium—802.11 is a shared, half-duplex medium, whereas most wired Ethernet deployments that leverage QoS are full duplex.

  • Same-channel BSS overlap (also referred to as cochannel overlap)—In cases where two adjacent 802.11 BSSs are on the same channel, interference and performance degradation can occur.

  • Hidden node—Nodes in range of the AP yet out of range of one another cannot hear each other's transmissions, so their frames collide and cause extensive contention in the BSS.

The following sections detail each of these challenges to 802.11 QoS.

QoS Impact of a Half-Duplex Medium

Chapter 2, “802.11 Wireless LANs,” described the basic access mechanisms for 802.11 as defined in 1997: the distributed coordination function (DCF) and PCF. Both mechanisms allow only one station to transmit on the medium at a given time, whether it's the access point (AP) or a client station. Wired Ethernet, and in particular 802.3x full-duplex operation, creates a point-to-point link between Ethernet stations, allowing simultaneous transmission and reception of data frames. This setup allows the Ethernet medium to theoretically operate at two times its normal bandwidth. (A Fast Ethernet link can handle a transmit of 100 Mbps and a receive of 100 Mbps simultaneously, for a total of 200 Mbps.) Stated another way, a station that needs to transmit does not contend with the station on the other side of the link, which might also need to transmit.

Contrast that scenario to one with 802.11 networks. Not only does the AP contend for the medium as the clients do, but the clients also contend for the medium among themselves. PCF operation did introduce the notion of polled access, where the AP can act as the point coordinator and poll each client to see whether it has traffic to send. Although this setup is reasonable in BSSs with very low client counts, it was found to degrade overall throughput more than the normal contention-based access of DCF. With no mechanism to coordinate client transmissions and prioritize one client over another, vendors must overcome a major challenge to support latency-sensitive applications such as VoIP.

Cochannel Overlap

Cochannel overlap is a common occurrence in 2.4 GHz WLAN deployments with more than three APs. Because of the restriction of three nonoverlapping channels, some APs end up adjacent to APs on the same channel. What does this mean for the clients in those BSSs? Figure 6-1 shows a client in a cochannel overlap area. If both APs begin to transmit at the same time, the frames collide and both stations must back off and retransmit.


Figure 6-1. Cochannel Overlap

You can encounter another scenario, known as a broadcast black hole. When a BSS has a power-save station, all broadcasts and multicasts are sent after a DTIM beacon. In most cases, all the APs in an extended service set (ESS) have the same beacon interval and same DTIM interval. If the internal timers are close together on cochannel adjacent APs, both APs can send the broadcast or multicast traffic simultaneously, causing the frames to collide in the overlap area and the client in the overlap area to miss the frames. Unlike unicast frames, broadcast and multicast frames are not acknowledged and therefore are not retransmitted. Cochannel overlap can subvert QoS mechanisms by increasing contention in 802.11 networks, and coupled with the black hole situation, it can cause a client to miss potentially critical traffic.

Hidden Node Impact on QoS

The hidden-node problem described in Chapter 2 poses an issue for providing QoS in 802.11 as well. Using request to send/clear to send (RTS/CTS) messages to reserve the medium addresses the hidden node problem, but again, RTS/CTS is typically employed after the detection of a collision and after the appropriate backoff. The increased latency can and often does impact latency-sensitive applications. Devices using RTS/CTS for each frame also incur a performance penalty, with a large amount of overhead traffic for each data frame.

QoS Mechanism Overview

The 802.11e task group has debated many issues, including those discussed in the previous section, and has devised two proposed solutions for the future 802.11 MAC. Bear in mind that the proposed specifications are not yet ratified, and changes might occur after this book is printed. The two current proposed solutions are

  • Hybrid coordination function (HCF) with contention operation—more commonly known as Enhanced DCF (EDCF)

  • HCF with polled access operation

HCF in Contention Mode—The EDCF Access Mechanism

The draft 802.11e specification attempts to provide classification for up to eight classes of data. EDCF and HCF polled access leverage these eight classes, known as traffic classes (TC), which map to the eight classes defined in the 802.1D standard, as shown in Table 6-1. Traffic from QoS-enabled clients is categorized into four broader categories known as access categories (AC). ACs 0 to 3 map to the 802.1D priority classes.

Table 6-1. 802.11e TC-to-AC Mapping

802.1D Value/TC    Common Usage         AC and Transmit Queue
1                  Low priority         0
2                  Low priority         0
0                  Best effort          0
3                  Signaling/control    1
4                  Video probe          2
5                  Video                2
6                  Voice                3
7                  Network control      3
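The mapping in Table 6-1 boils down to a simple lookup from 802.1D priority to transmit queue. The following minimal Python sketch illustrates it; the names TC_TO_AC and ac_for_frame are hypothetical and not part of the draft.

# Hypothetical sketch of Table 6-1: 802.1D user priority (TC) -> 802.11e access category (AC)
TC_TO_AC = {
    1: 0,  # Low priority
    2: 0,  # Low priority
    0: 0,  # Best effort
    3: 1,  # Signaling/control
    4: 2,  # Video probe
    5: 2,  # Video
    6: 3,  # Voice
    7: 3,  # Network control
}

def ac_for_frame(dot1d_priority):
    """Return the transmit queue (AC 0-3) for a frame tagged with an 802.1D priority."""
    return TC_TO_AC.get(dot1d_priority, 0)  # unknown or untagged traffic falls back to best effort

print(ac_for_frame(6))  # a voice frame lands in AC 3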

Any system that supports QoS needs three key components for the system to work:

  • A mechanism to classify the traffic

  • A mechanism to mark the traffic with the appropriate QoS value

  • A mechanism to differentiate and prioritize the traffic, based on the QoS value

The mechanism for classifying and marking data frames is outside the scope of the draft 802.11e document, but it is safe to assume that the application (such as a voice application on an 802.11 handset) can at least mark the IP precedence bits or differentiated services code point (DSCP) values. It is also safe to assume that a client device will map those Layer 3 values to the 802.11e traffic classes. With the traffic classified and marked, 802.11e provides the mechanism to differentiate and prioritize the traffic for transmission.
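As a rough illustration of that client-side mapping, the following sketch derives an 802.1D traffic class from the IP precedence bits (the top three bits of the DSCP field). This is an assumption about how a vendor might implement the mapping; the draft leaves classification out of scope, and the function name is invented here.

def dscp_to_tc(dscp):
    """Map a 6-bit DSCP value to an 802.1D traffic class by using the IP precedence bits.
    A common but vendor-specific convention, not something the 802.11e draft mandates."""
    return (dscp >> 3) & 0x7

# Example: Expedited Forwarding (DSCP 46) maps to precedence 5, the video/voice range of Table 6-1
print(dscp_to_tc(46))  # 5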

Channel Access for Differentiated Traffic

After the traffic is classified and placed in the appropriate queue, the next step is to transmit the frames. The challenge is how to provide priority to frames among client devices that are not directly communicating. EDCF addresses this challenge by introducing some new concepts and functionality:

  • Transmit opportunity (TXOP)—A TXOP is an interval, defined by a starting time and a maximum duration, during which a station can transmit frames. Unlike basic medium access for DCF described in Chapter 2, where each frame and accompanying acknowledgment contends for the medium, a TXOP can facilitate multiple frames and acknowledgments as long as they fit within the duration of the TXOP (see Table 6-2 and the sketch that follows this list).

  • Arbitration interframe space (AIFS)—The AIFS is similar to the IFSs discussed in Chapter 2, but its size varies by AC. Higher-priority ACs are assigned a shorter AIFS and lower-priority ACs a longer AIFS. The shorter the AIFS, the higher the chances of accessing the channel first.
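To make the TXOP concept concrete, here is a small sketch that counts how many queued frames fit in one TXOP. The airtime figures and the helper name are invented for illustration, and each per-frame estimate is assumed to already include its acknowledgment and SIFS gaps.

def frames_in_txop(frame_airtimes_us, txop_limit_us):
    """Return how many queued frames fit in a single TXOP.

    frame_airtimes_us: per-frame airtime estimates in microseconds (frame + ACK + SIFS gaps).
    txop_limit_us: the TXOP duration limit; 0 means only a single frame is allowed.
    """
    if txop_limit_us == 0:
        return 1 if frame_airtimes_us else 0
    used = 0
    count = 0
    for airtime in frame_airtimes_us:
        if used + airtime > txop_limit_us:
            break
        used += airtime
        count += 1
    return count

# Example: a 1.5 ms TXOP (802.11a/g, AC 1 in Table 6-2) with 400-microsecond voice-sized exchanges
print(frames_in_txop([400, 400, 400, 400, 400], 1500))  # 3 frames fit; the rest wait for the next TXOP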

Some existing concepts are used in new ways. In Chapter 2, the contention window (CW) values CWmin and CWmax are the same for every DCF station, and the values change only during backoff and channel-access retries. In EDCF, different ACs can have different CW values to enhance the chance for higher-priority traffic to access the medium first.

Table 6-2 illustrates the default parameters for the CW values, AIFS, and TXOP for each AC.

Table 6-2. Access Category Medium Access Parameters

AC   CWmin                   CWmax                   AIFS   TXOP Limit (802.11b)     TXOP Limit (802.11a/g)
0    Standard 802.11 CWmin   Standard 802.11 CWmax   2      0                        0
1    Standard 802.11 CWmin   Standard 802.11 CWmax   1      3.0 milliseconds (ms)    1.5 ms
2    ((CWmin + 1)/2) – 1     Standard 802.11 CWmin   1      6.0 ms                   3.0 ms
3    ((CWmin + 1)/4) – 1     ((CWmin + 1)/2) – 1     1      3.0 ms                   1.5 ms

Some points worth mentioning about Table 6-2 follow:

  • AC(0) is classified as best effort traffic, so the parameters nearly match standard DCF values with the exception of the AIFS, which has a value of DIFS + 1 slot time. Also, note that a TXOP duration limit of 0 allows for only a single frame to be transmitted.

  • AC(1), with slightly higher priority, has the same channel-access parameters as an 802.11 DCF station, with the exception of a TXOP duration that allows for multiple frames to be transmitted and acknowledged.

  • AC(2) has a smaller contention window than the lower-priority ACs and a longer TXOP. To illustrate the impact of the smaller contention window, consider the following:

    • The default initial CWmin value is typically 7 slot times. A DCF station randomly selects a backoff value between 0 and CWmin (in this case, 7) and uses that as the counter value to decrement. With AC(2), the CWmin of 7 changes to 3, so the station only has to select a backoff value ranging from 0 to 3, a much shorter window. The CWmax value is also different: it now equals the standard CWmin of 7, so a station that keeps backing off reaches CWmax, and therefore increments its retry counter, much sooner (see the sketch that follows this list).

  • AC(3) has the shortest contention window of the ACs but a shorter TXOP duration limit than AC(2). AC(3) frames are typically network control or voice frames, which are small and don't require much “air” time to transmit successfully.
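The contention-window arithmetic in Table 6-2 can be expressed directly. The following sketch assumes the standard CWmin of 7 slot times used in the example above; the standard CWmax of 255 is chosen only for illustration, and the function names are hypothetical.

import random

def per_ac_cw(std_cwmin, std_cwmax):
    """Derive per-AC (CWmin, CWmax) pairs from the Table 6-2 formulas."""
    return {
        0: (std_cwmin, std_cwmax),
        1: (std_cwmin, std_cwmax),
        2: ((std_cwmin + 1) // 2 - 1, std_cwmin),
        3: ((std_cwmin + 1) // 4 - 1, (std_cwmin + 1) // 2 - 1),
    }

def initial_backoff(ac, std_cwmin=7, std_cwmax=255):
    """Pick a random backoff (in slot times) from the AC's initial contention window."""
    cwmin, _ = per_ac_cw(std_cwmin, std_cwmax)[ac]
    return random.randint(0, cwmin)

print(per_ac_cw(7, 255))
# AC 2: CWmin = (7 + 1)/2 - 1 = 3 and CWmax = 7, so its backoff window is 0-3 instead of 0-7
# AC 3: CWmin = (7 + 1)/4 - 1 = 1, the shortest window of all the ACs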

Each of the ACs exists within a QoS-enabled station or AP, so it is possible for two or more ACs to collide internally: a lower-priority AC can still randomly select a short backoff and expire at the same time as a higher-priority AC. In this case, the frame from the higher-priority AC takes precedence, and the lower-priority AC is forced to back off and increase its CW.
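A simplified model of this internal (virtual) collision rule follows; the data structures are invented for illustration, and real implementations track considerably more state.

def resolve_internal_collision(expired_acs, cw_state):
    """Resolve a virtual collision between ACs whose backoff counters expired in the same slot.

    The highest-priority AC wins the transmit opportunity; each losing AC behaves as if it
    collided on the air, doubling its contention window (up to CWmax) before backing off again.
    cw_state maps AC -> [current_cw, cwmax].
    """
    winner = max(expired_acs)  # a higher AC number means higher priority
    for ac in expired_acs:
        if ac != winner:
            current_cw, cwmax = cw_state[ac]
            cw_state[ac][0] = min(2 * current_cw + 1, cwmax)
    return winner

state = {0: [7, 255], 3: [1, 3]}
print(resolve_internal_collision({0, 3}, state))  # AC 3 wins
print(state)                                      # AC 0's contention window grows to 15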

Admission Control with EDCF

The purpose of QoS is to protect high-priority application traffic from low-priority application traffic. For example, QoS protects VoIP frames from Post Office Protocol 3 (POP3) frames. In cases where network resources are limited, such as 802.11 WLANs, it might be necessary to protect high-priority application traffic from other high-priority application traffic. It might sound odd, but consider this example. Suppose a BSS can accommodate a maximum of six simultaneous VoIP calls. Any data traffic that attempts to use the medium is prioritized below the VoIP traffic so that the call participants have a jitter-free, useful VoIP call experience.

Now a seventh call is initiated in the BSS. The BSS can accommodate only six calls, yet the prioritization mechanism alone would allow the call to initiate because it meets the requirements to be classified as high-priority traffic. If it is allowed to initiate, it negatively impacts the existing six VoIP calls, leaving all seven calls performing poorly.

Admission control addresses this issue. In the same way that QoS protects high-priority traffic from low-priority traffic, admission control protects high-priority traffic from high-priority traffic. Admission control monitors the available resources of a network and intelligently allows or disallows new application sessions.

EDCF uses an admission control scheme known as distributed admission control (DAC). At a high level, DAC functions by monitoring and measuring the percentage of utilization of the medium for each AC. The unused percentage of the medium is referred to as the available budget for the AC. This available budget is advertised to stations in the QoS parameter set information element (IE) in the AP beacons. When the budget starts to approach 0, stations refrain from initiating new application streams, and existing stations are not able to increase or extend the TXOPs they are already using. This process protects the existing application streams from being impacted by new streams.
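A station-side sketch of the DAC decision follows. It assumes, purely for illustration, that the advertised budget is expressed as a remaining fraction of medium time per AC; the actual encoding in the QoS parameter set IE is more involved.

def may_start_stream(advertised_budget, required_share, exhaustion_threshold=0.05):
    """Decide whether a station should start a new application stream under DAC.

    advertised_budget: remaining medium-time fraction for this AC, read from the
    QoS parameter set IE in the latest beacon (encoding simplified for this sketch).
    required_share: medium-time fraction the new stream is expected to consume.
    """
    if advertised_budget <= exhaustion_threshold:
        # Budget is approaching 0: do not start new streams and do not extend existing TXOPs
        return False
    return required_share <= advertised_budget

print(may_start_stream(0.18, 0.10))  # True: enough budget remains for the new stream
print(may_start_stream(0.03, 0.10))  # False: the budget is essentially exhausted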

HCF in Controlled Access Mode

HCF operation is similar to the operation of PCF described in Chapter 2. The AP contains a logical entity known as the hybrid coordinator (HC) that keeps track of HCF client stations and schedules the polling intervals. Polled access as implemented in HCF allows a station to request a TXOP, instead of just determining that one is available, as with EDCF. HCF operation, combined with HCF admission control, allows the HC to intelligently determine what resources are available on the wireless medium and accept or reject application traffic streams. HCF can operate in two modes, one coexisting with EDCF and the other using a contention-free period (CFP), similar to PCF.

Contention-Free HCF Operation

Contention-free HCF operation proceeds as follows:

  1. The AP beacon is sent, including the contention-free (CF) parameter set IE that specifies the start time and duration of a CFP.

  2. The HC offers a TXOP to HCF-capable stations by sending QoS CF-Polls to them.

  3. The stations must reply within a SIFS interval with data frames or with a QoS null frame, indicating that the station either has no traffic or cannot fit the frame it wants to send into the time allotted by the TXOP.

  4. The CFP ends when the HC sends a CF-End frame, or the CFP duration expires.

Figure 6-2 illustrates this operation.


Figure 6-2. Contention-Free HCF Operation
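The four steps above can be condensed into a toy polling loop. This is a high-level simulation sketch with invented station objects; it models the sequencing of the CFP, not the actual frame formats or timing.

import time

class FakeStation:
    """Stand-in for an HCF-capable station; respond_to_poll() models the reply to a QoS CF-Poll."""
    def __init__(self, name, queued_frames):
        self.name = name
        self.queued_frames = queued_frames

    def respond_to_poll(self):
        # A real station answers within a SIFS with data frames, or with a QoS null frame
        # if it has no traffic or its pending frame does not fit the offered TXOP.
        return self.queued_frames if self.queued_frames else "QoS-Null"

def contention_free_period(stations, cfp_duration_s):
    """Toy model of a CFP: the HC polls each station once, then the CFP ends (CF-End)."""
    end_time = time.monotonic() + cfp_duration_s
    replies = {}
    for station in stations:
        if time.monotonic() >= end_time:
            break  # CFP duration expired before every station could be polled
        replies[station.name] = station.respond_to_poll()
    return replies  # returning here models the HC sending the CF-End frame

print(contention_free_period([FakeStation("A", ["voice-1"]), FakeStation("B", [])], 0.01))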

Interoperation of EDCF and HCF

Unlike PCF operation, HCF polled access can occur during the contention period and coexist with EDCF operation as well as DCF operation. Polled TXOPs are “delivered” to the HCF-pollable stations and facilitate the transmission or reception of QoS data frames. The HC gains access to the medium ahead of EDCF stations because it has to wait only a PIFS interval before transmitting. Figure 6-3 illustrates the coexistence.


Figure 6-3. Contention Period HCF Operation (Coexistence with EDCF)
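To put numbers on the PIFS-versus-DIFS advantage, the following worked example uses the common 802.11b timing values (SIFS of 10 microseconds, slot time of 20 microseconds). The AIFS model here simply adds extra slot times on top of DIFS to match the default parameters in Table 6-2; that interpretation, and the helper name, are assumptions for illustration.

SIFS_US = 10  # 802.11b SIFS
SLOT_US = 20  # 802.11b slot time

PIFS_US = SIFS_US + SLOT_US      # 30 microseconds: what the HC waits
DIFS_US = SIFS_US + 2 * SLOT_US  # 50 microseconds: minimum wait for DCF/EDCF stations

def aifs_us(aifs_value):
    """Model the Table 6-2 AIFS column as DIFS plus (value - 1) extra slot times,
    so AC 0 (value 2) waits DIFS + 1 slot and the other ACs (value 1) wait DIFS."""
    return DIFS_US + (aifs_value - 1) * SLOT_US

print(PIFS_US, DIFS_US, aifs_us(2), aifs_us(1))  # 30 50 70 50 -- the HC always gets there first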

Admission Control with HCF

What truly differentiates HCF-controlled access operation from EDCF is HCF's admission-control mechanism. EDCF's use of DAC relies on the stations to interpret and respect the transmit budget advertised in the QoS parameter set IE. HCF requires that the station request particular reservation parameters for the application traffic stream, such as a VoIP call, from the HC. The HC can evaluate and determine whether there is enough budget available on the wireless medium to facilitate the requested traffic stream. The HC can then accept, reject, or even offer an alternative set of parameters to the station. As you can see, this mechanism is far more robust and effective than DAC. This robustness does not come without a penalty, however. The HC must keep a strict schedule of traffic streams, and because the HC's scheduler is not standardized but left to each vendor, some implementations of HCF can be far less efficient than others.

HCF admission control centers on the traffic specification IE, also known as the TSPEC. The TSPEC allows the client station to specify parameters such as

  • Frame/stream 802.1D priority

  • Frame size

  • Frame rate (e.g., packets per second)

  • Data rate (e.g., bits per second)

  • Delay

Figure 6-4 depicts a TSPEC information element as defined in 802.11e draft 4.0.


Figure 6-4. TSPEC IE Format

This data is sufficient for the HC to determine whether the wireless medium can sustain the newly requested stream without degrading any of the existing streams. The TSPEC also indicates to the HC how often the station expects to be polled. The station must generate a unique TSPEC for each traffic stream it wants to transmit or receive with priority and for each direction of the stream (that is, a bidirectional VoIP call requires two traffic streams).
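The TSPEC parameters listed above can be pictured as a small per-stream, per-direction record. The structure below is a heavily simplified stand-in; the real IE shown in Figure 6-4 carries more fields and a specific binary layout, and the example numbers are only illustrative of a G.711-style voice stream.

from dataclasses import dataclass

@dataclass
class SimpleTspec:
    """Simplified stand-in for a TSPEC: one per traffic stream, per direction."""
    priority: int            # 802.1D priority of the stream
    direction: str           # "uplink" or "downlink"
    nominal_frame_bytes: int
    frames_per_second: int
    mean_data_rate_bps: int
    delay_bound_ms: int

# A bidirectional VoIP call requires two TSPECs, one for each direction of the stream
uplink = SimpleTspec(6, "uplink", 200, 50, 80_000, 30)
downlink = SimpleTspec(6, "downlink", 200, 50, 80_000, 30)
print(uplink)
print(downlink)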

The HC can do one of three actions after receiving the TSPEC:

  • Accept the TSPEC and grant the new traffic stream into the wireless medium

  • Suggest an alternative set of TSPEC parameters to the client station

  • Reject the TSPEC

To illustrate a scenario where a station sends a TSPEC that is accepted, assume a VoIP call is to be placed on an AP that has three existing calls in place and some sporadic data traffic. The sporadic data traffic is classified as “best effort” traffic, whereas the VoIP traffic is classified as “high priority.”

The VoIP traffic is protected from the data traffic via HCF polling order and frequency. The traffic is also protected from EDCF traffic because it is handled by the HC, which need only wait a PIFS interval before accessing the medium. EDCF stations must wait at least a DIFS interval and, in some cases, a DIFS plus one slot time (assuming the use of the default parameter set in Table 6-2).

The process for the new station to join the BSS and begin transmitting its traffic stream is as follows (and is illustrated in Figure 6-5):

  1. The station must authenticate and associate to the BSS.

  2. The station sends an admission request using the management action (MA) request for QoS, containing its requested TSPEC for the VoIP call.

    Note

    A TSPEC is required for each direction, both from the client to the HC and from the HC to the client. The client must request both TSPECs.

  3. The HC accepts the TSPEC and responds with a MA response for QoS to the station.

  4. The HC sends a TXOP via a QoS data CF-Poll frame.

  5. The station responds with a QoS data frame or burst of frames, depending on the duration of the TXOP.


Figure 6-5. HCF Admission Control Message Overview
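The HC's three possible responses can be sketched as a single decision function. The medium-time bookkeeping, threshold, and function name below are invented for illustration; the draft does not standardize how an HC estimates its available budget.

def hc_admission_decision(requested_share, admitted_share, schedulable_capacity=0.9):
    """Return (verdict, counter_offer) for a TSPEC admission request.

    requested_share: medium-time fraction implied by the requested TSPEC.
    admitted_share: medium time already committed to existing traffic streams.
    schedulable_capacity: fraction of the medium the HC is willing to schedule.
    """
    headroom = schedulable_capacity - admitted_share
    if requested_share <= headroom:
        return "accept", None
    if headroom > 0:
        # Suggest an alternative TSPEC (for example, a lower data rate) that still fits
        return "suggest-alternative", headroom
    return "reject", None

print(hc_admission_decision(0.10, 0.70))  # accepted as requested
print(hc_admission_decision(0.30, 0.70))  # alternative suggested, sized to the remaining headroom
print(hc_admission_decision(0.10, 0.95))  # rejected outright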

In some cases, the HC might not be able to accommodate a new TSPEC without impacting existing traffic streams. The HC has the option to suggest an alternative TSPEC to the client or reject the TSPEC altogether. In the former scenario, the following process occurs (and is illustrated in Figure 6-5):

  1. The station joins the BSS via authentication and association.

  2. The station sends an admission request using the MA request for QoS with its desired TSPEC.

  3. The HC sends a MA response containing the alternative TSPEC to the client station.

  4. If the alternative TSPEC is acceptable to the client, the process continues as with Step 3 from the previous list.

  5. If the alternative TSPEC is not acceptable to the client, the client sends an MA to delete the TSPEC.

When the HC cannot accommodate the traffic stream, it sends a MA response rejecting the TSPEC, and the client station may then try again using a modified TSPEC.

Traffic streams can be removed in two ways:

  • The TSPEC timeout elapses.

  • The station or AP explicitly deletes the TSPEC.

With a TSPEC timeout, the HC sends the client station an MA for QoS to delete the TSPEC after the defined timeout period for the stream elapses. The timeout is triggered when the client station, after being polled, responds with nothing but QoS null frames over several polls within the window defined by the timeout value in the TSPEC. In the case where a QoS station or the HC wants to tear down a stream, it transmits an MA frame to delete the TSPEC to the HC or the client station, respectively.
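The timeout rule can be sketched as a simple check over the most recent polls. The representation of the poll history and the window parameter are assumptions made for illustration only.

def tspec_timed_out(poll_responses, timeout_window):
    """Return True if the stream should be deleted for inactivity.

    poll_responses: most-recent-first responses to QoS CF-Polls, each either "data" or "qos-null".
    timeout_window: number of consecutive polls that must all be QoS null frames to trigger the timeout.
    """
    recent = poll_responses[:timeout_window]
    return len(recent) == timeout_window and all(reply == "qos-null" for reply in recent)

history = ["qos-null", "qos-null", "qos-null", "data", "data"]
if tspec_timed_out(history, timeout_window=3):
    print("HC sends the MA frame deleting the TSPEC to the client station")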

Summary: The Challenges Facing EDCF and HCF

At the time of this writing, two major obstacles perplex the IEEE with respect to 802.11e: an effective yet simple admission control mechanism for EDCF and the operational performance of HCF. These issues are hotly debated among the various vendors in the working group as they endeavor to solve real application issues.

DAC still presents performance problems because it does not strictly enforce admission control: stations may potentially transmit and negatively impact existing traffic streams. Resolution seems to center on adopting parameterized admission control for EDCF as well (the use of TSPECs to admit or deny EDCF traffic). HCF has its own share of issues. Proponents extol the virtues of polled access as the panacea for effectively using the medium and also providing the ability to nearly guarantee service. Detractors believe that practical implementations of HCF will fail, as early PCF implementations did, because of the cochannel overlap issues that plague the 2.4 GHz band. The effectiveness of HCF diminishes quickly with cochannel overlap.

Although the working group has not finalized the 802.11e standard, it continues to strive toward a practical and effective set of tools to extend and expand the implementations of 802.11 WLANs.
