Chapter 7. NDC Datacom: Wireless and Integration

The advent of mobile nodes gives rise to the promise of pervasive computing—the general availability of processing capability and network-connection potential. But the integration of wireless, mobile devices with an Internet grounded in copper and sheathed in glass cannot occur until we address significant software challenges.

Do you own a cell phone? How about an 802.11-enabled PDA? My Google search found well over a million page references matching “free cell phone.” Most mobile phone providers now offer some version of the “free cell phone” deal in exchange for a commitment to their relatively low-cost monthly service plans. In the late 1980s, cellular telephony signed up its first million subscribers worldwide, with unit prices in the hundreds and even thousands of dollars—and the analog minutes were pricey.

Mobile phones are tiny, cheap, and prevalent today. What was once dear is now a cheap commodity, thanks to digital physical layer protocols like time division multiple access (TDMA), which first appeared as the basis for Europe's Global System for Mobile Communications (GSM) in the late 1980s. An aspirational economic symbol by the late 1980s, the cellular phone required nearly 40 years of development and early commercialization to reach the million-subscriber mark, roughly 15 years ago. Since then, mobile processing capacity has steadily increased even as costs and power requirements have decreased—the result of a fitscape-driven virtuous cycle, typical of ephemeralization, enabled by software.

Novel approaches to mobile datacom based on Shannon's information theory have also given rise to physical layer protocols that dramatically increase transmission capacity for wireless devices, enabling true mobile nodes from a network perspective. Indeed, today many developers write code for mobile telephones, using protocols and frameworks designed specifically to enable such development, a practice that would have been impossible for the vast majority just a few years ago. But we are only beginning to address the difficult problems inherent in a network that now features an additional dimension of change. Integration of mobile nodes into the still-evolving fabric of the Internet is not without challenges.

Analog Era

Fundamentally, a cell phone is a radio telephone. The Italian inventor Guglielmo Marconi is credited with having discovered “radio” with the advent of wireless telegraphy in 1896. Of course, invention is almost always the consequence of many previous discoveries and inquiries; in Marconi's case, he stood on the shoulders of giants like Faraday, Maxwell, Hertz, and Tesla. But his contribution opened the door to wireless transmission of analog signals, which enabled the heyday of world radio in the early 20th century. So why didn't we have cellular telephones many years ago? The answer lies in the ephemeralizing nature of software, digital computers, and the tiny, powerful microprocessors that drive them. Microprocessors race through complicated operations with high reliability. Before the microprocessor, cell phones were simply not possible.

The barrier for early radio telephones was limited bandwidth; available frequencies (that is, channels) were often consumed by well-to-do individuals with radio telephones in their automobiles. Users often initiated a number of calls, starting very early in the morning, to guarantee channel availability when needed later in the day.

To be useful, a radio telephone has to work throughout an entire metropolitan area, supporting a large population of users. The scarcity of channels, exacerbated by frequency-reservation usage patterns as described above, could be solved by creating many smaller regions (which we now call cells) independently using the same frequencies. (That is, two callers in two different cells could use the same frequency without conflict.) Since a smaller area would likely host a smaller number of users, the available frequencies could be stretched. But what about movement from one cell to another? In principle, this problem could be solved by providing automatic switchovers from one cell to another, but such automatic switches turned out to be very complicated to implement. Then the digital era brought inexpensive microprocessors capable of performing the complex job of managing the switchover without losing the mobile call, and the modern cell phone was born.

Digital Era

At first, speech conveyed through modern cell phones was still analog, with digital systems responsible only for the switchover. This type of mobile phone is still known as an analog phone. A true all-digital cell phone represents speech, too, as a series of bits, processed by its embedded digital intelligence. In addition, digitizing speech improves sound quality, eliminates line static, and increases battery life by nearly an order of magnitude.

The migration of speech into the digital realm is more complicated than simply making the switchover process digital. The true all-digital cell phone had to wait for Moore's law to make microprocessors smaller and more powerful, thereby making computing itself mobile. The first true all-digital system was the TDMA-based European GSM system introduced in the late 1980s. TDMA, which uses time slicing to multiplex transmissions over shared frequencies, helped solve the multiple-user problems faced by early radio phones. TDMA all-digital systems became available in the United States in the early 1990s.
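Time slicing can be sketched in a few lines of Python, assuming an invented slot length and a fixed round-robin assignment; real TDMA framing, such as GSM's eight-slot frames, is far more elaborate.

    # Three users share one frequency by taking turns in fixed time slots.
    users = ["A", "B", "C"]
    SLOT_MS = 5  # hypothetical slot duration in milliseconds

    def slot_owner(t_ms):
        """Return the user who owns the shared channel at time t_ms."""
        return users[(t_ms // SLOT_MS) % len(users)]

    schedule = [(t, slot_owner(t)) for t in range(0, 30, SLOT_MS)]
    # [(0, 'A'), (5, 'B'), (10, 'C'), (15, 'A'), (20, 'B'), (25, 'C')]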

CDMA (code division multiple access) all-digital systems,[1] which slice frequencies by unique embedded codes to increase channel potential even more than TDMA, emerged commercially around 1995. Figure 7.1 illustrates the five signal processing steps in generating a transmission-ready CDMA signal.

  1. Analog-to-digital conversion

  2. Voice compression, or vocoding

  3. Encoding and interleaving

  4. Signal channelization

  5. Digital-to-analog radio frequency (RF) conversion

Figure 7.1. CDMA signal processing

The use of codes is a key part of this process. Encoding implies processing, which requires software. Clearly, a significant amount of processing must occur on both ends of the transmission pipeline for this scheme to work. Following its 1995 introduction, CDMA rapidly became one of the world's fastest-growing wireless technologies. In 1999, the International Telecommunication Union selected CDMA as the industry standard for new “third generation” (3G) wireless systems. Many wireless carriers are now building or upgrading to 3G CDMA networks to provide more capacity for voice traffic, along with high-speed datacom capabilities and services. Over 100 million consumers worldwide rely on CDMA for voice services, and a growing number use it for Internet data from a mobile phone.
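To make the channelization step concrete, here is a minimal Python sketch, assuming idealized 4-chip Walsh codes and a noiseless channel; the function names are invented for illustration, and real systems layer far longer codes on top of the vocoding, encoding, and RF steps listed above.

    # Two users share the same spectrum at the same time; each receiver
    # recovers its own bits by correlating against its own Walsh code.
    import numpy as np

    # Rows of a 4x4 Hadamard matrix: mutually orthogonal spreading codes.
    code_a = np.array([ 1,  1,  1,  1])
    code_b = np.array([ 1, -1,  1, -1])

    def spread(bits, code):
        """Replace each data bit (+1/-1) with a full run of code chips."""
        return np.concatenate([b * code for b in bits])

    def despread(signal, code):
        """Correlate the combined signal against one code to recover bits."""
        return np.sign(signal.reshape(-1, len(code)) @ code)

    bits_a = np.array([ 1, -1,  1])
    bits_b = np.array([-1, -1,  1])

    channel = spread(bits_a, code_a) + spread(bits_b, code_b)  # superimposed

    assert (despread(channel, code_a) == bits_a).all()
    assert (despread(channel, code_b) == bits_b).all()

Orthogonality is what makes the superposition separable; with more codes, more users share the same band, which is the gain in channel potential described above.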

The 1990s witnessed the transformation of mobile radio telephones from aspirational toys of the wealthy to all-digital commodity items that are now freely given away, a rapid transition fueled by Moore's law, the ephemeralizing nature of software, the network metaphor, and protocol breakthroughs like TDMA and CDMA. The fitscape in which mobile telephones play has given rise to something even more interesting, however: the basis for a wireless, mobile, application-driven network which, when married to the still-evolving but grounded Internet, yields enormous potential for a profound paradigm shift in NDC. Before proceeding to that discussion, a few words regarding other wireless protocols are in order.

Wi-Fi and Wi-Fi5 (802.11b and 802.11a)

In technical terms, IEEE 802.11 is a family of specifications created by the Institute of Electrical and Electronics Engineers (IEEE) for wireless, Ethernet-based local area networks; 802.11b operates in the 2.4 gigahertz band. The rest of us think of Wi-Fi as a means whereby our computers and other smart devices connect to each other and to the Internet, at very high speed, without wiring or significant costs. As the adoption history of many digital technologies (including the cell phone) teaches, the cost factor is significant if Wi-Fi is to spread.

For the record, there are several flavors of specification within the IEEE 802.11 family. The 802.11a specification, or Wi-Fi5, covers wireless communication over the 5 GHz band. Plain Wi-Fi, or 802.11b, covers wireless communication over the 2.4 GHz band. All other derivatives, from “c” to “i,” target performance, functionality, or security improvements to “a” and “b.”

Fundamentally, 802.11a and 802.11b each define a different physical layer. Wi-Fi radios transmit at 2.4 GHz and send data at up to 11 Mbps using direct-sequence spread spectrum modulation. Wi-Fi5 radios transmit at 5 GHz and send data at up to 54 Mbps using orthogonal frequency division multiplexing (OFDM). Conceptually similar to CDMA, OFDM is a method of digital modulation in which a signal is split into several narrowband channels at different frequencies. The difference lies in the way the signals are modulated and demodulated: priority is given to minimizing interference and crosstalk among the channels and symbols that constitute the data stream, with less importance placed on perfecting individual channels.
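The idea can be sketched in a few lines of Python, assuming 64 QPSK-loaded subcarriers and an ideal channel; the real 802.11a physical layer adds pilot tones, convolutional coding, and training sequences.

    # OFDM in miniature: an inverse FFT places symbols on orthogonal
    # narrowband subcarriers; a forward FFT recovers them.
    import numpy as np

    rng = np.random.default_rng(7)
    qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j])
    symbols = rng.choice(qpsk, 64)          # one symbol per subcarrier

    tx = np.fft.ifft(symbols)               # one OFDM symbol, time domain
    frame = np.concatenate([tx[-16:], tx])  # cyclic prefix guards against echoes

    rx = np.fft.fft(frame[16:])             # strip prefix, demodulate
    assert np.allclose(rx, symbols)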

Why use Wi-Fi instead of Wi-Fi5? Consider the differences; a toy decision helper follows the list:

  • Wi-Fi (802.11b)—Choose 802.11b when range requirements are significant. For larger facilities, such as a shopping mall or warehouse, 802.11b provides the least costly solution because fewer access points are needed. Many vendors already provide low-cost Wi-Fi solutions for devices, so cost may be a factor; the density of the digital population is also a factor. If relatively few end users need to roam throughout the entire facility, 802.11b will likely meet performance requirements, unless those requirements are very stringent.

  • Wi-Fi5 (802.11a)—Choose 802.11a if there is a need to support higher-end applications, possibly including video, voice, and transmission of large files, for which 802.11b probably won't suffice. Note too that significant RF interference may be present around the 2.4 GHz band: the growing adoption of 2.4 GHz phones and Bluetooth devices could crowd the radio spectrum within the facility in question and significantly decrease the performance of a Wi-Fi wireless LAN. Because 802.11a operates around the 5 GHz band, it avoids this interference. If end users are densely populated (as in computer labs, airports, and convention centers), competition for the same access point can be alleviated by 802.11a's greater total throughput.
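These rules of thumb can be caricatured in a few lines of Python; the predicates and return strings are invented for illustration, and a real site survey weighs many more factors.

    def pick_wlan(long_range_needed, heavy_media_apps, band_2_4ghz_crowded, dense_users):
        """Toy encoding of the 802.11b-versus-802.11a guidance above."""
        if heavy_media_apps or band_2_4ghz_crowded or dense_users:
            return "802.11a (Wi-Fi5): 54 Mbps at 5 GHz, more access points required"
        if long_range_needed:
            return "802.11b (Wi-Fi): 11 Mbps at 2.4 GHz, fewer access points"
        return "802.11b (Wi-Fi): lowest-cost default"

    # A warehouse with a handful of roaming barcode scanners:
    print(pick_wlan(True, False, False, False))  # chooses 802.11b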

Bluetooth

Bluetooth is a royalty-free specification. Unlike many other wireless standards, the Bluetooth specification includes both link-layer and application-layer definitions (layers 2 and 7 in OSI terms) for developers, and it supports data, voice, and content-centric applications. Radios that comply with the Bluetooth specification also operate in the unlicensed 2.4 GHz radio spectrum.

Bluetooth devices use a spread-spectrum, full-duplex, frequency-hopping signal that hops at up to 1,600 hops per second among 79 frequencies at 1 MHz intervals to ensure some degree of interference immunity. Up to seven simultaneous connections can be established and maintained with Bluetooth; for a personal area network (PAN) solution, multiple-device support is mandatory, and PAN is indeed the intent of Bluetooth. Hence, Bluetooth and 802.11b should complement each other nicely, even though they compete for frequencies in the 2.4 GHz range. From a market perspective, a marriage of these two pioneering technologies should suffice. Note that other solutions may be required if the success of a Wi-Fi/Bluetooth combination starts to crowd available frequencies within the geographical constraints of 802.11b.
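A sketch of pseudo-random channel hopping in this style, using Python's PRNG as a stand-in for the real hop sequence, which Bluetooth actually derives from the master device's clock and address:

    # Pseudo-random channel hopping: synchronized radios that share the
    # seed make the same picks, so they meet on the same frequency.
    import random

    CHANNELS = [2402 + k for k in range(79)]  # 79 channels at 1 MHz spacing (MHz)
    SLOT_US = 625                             # 1,600 hops/sec => 625 us per hop

    rng = random.Random(42)                   # stand-in for the clock+address seed

    def next_hop():
        return rng.choice(CHANNELS)

    hops = [next_hop() for _ in range(8)]     # eight successive hop frequencies (MHz)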

Optics, Wireless, and Network Integration

Moore's law has received most of the attention, and most of the credit, for the evolution of the modern global techno-economic fitscape. The productivity gains in knowledge-based economies that we have witnessed over nearly a decade are often correlated with the utilization of IT. But capital expenditures go far beyond the cost of personal computers; communication among people is just as key to harvesting the potential of Web dynamics, so Gilder's law must be given credit as well. Just as important as increased processing and storage capabilities at lower cost is increased bandwidth access at lower cost. Gilder's law describes the multiplication of synapses to the nodal neurons of this global shared brain.

George Gilder, prolific author and technology-age pundit, observed that the total bandwidth of communication systems triples every twelve months. This law became a component of the zeitgeist with the publication of his book Telecosm in 2000,[2] coincident with the beginning of the steep decline the NASDAQ would suffer over the next 18 months. While the epiphany that shook the dotcom world also impacted the global telecommunications industry, the growth of global communications capacity remains an important factor in predicting the shape of any future Internet.

Fiber Optics

Fiber optics is a big part of the global telecom capacity picture. With a theoretical limit of some 30 Tbps per strand,[3] the potential of fiber far outweighs even the most promising wireless option when it comes to capacity and robustness. But I don't yet have a direct fiber link to the Internet, and not many people do.

The problem is one of “dark fiber”—or actually one of money, as is most often the case in any technology adventure. It's far easier and relatively less expensive to lay the millions of miles of fiber-optic cable around the world than it is to light it; the equipment costs for encoding, switching, and boosting fiber-optic signals run three to four times the cost of the fiber deployment itself. As a result, millions of miles of fiber-optic cable lie dark, awaiting the day when light will grace their cores, even as more miles of fiber-optic cable are buried every day.

Since capital investments are driven by corporate profits, which are driven by consumption and enabled by capital markets, the economic malaise of the early 21st century may seem to have stalled the adoption of fiber optics, especially insofar as commodity connections are concerned. But the world is not a static domain.

In North Carolina, for example, efforts to connect rural areas have resulted in a three-year plan that would guarantee home access to at least 128 Kbps and business access to at least 256 Kbps, based on a fiber-optic backbone extended to rural areas. In California, the City of Palo Alto went even further, with a fiber-to-the-home (FTTH) initiative in a trial phase[4] from November 2000 to August 2002. Imagine having access to the throughput potential available from a single ultra-high-speed optical fiber that connects your home, and all the information appliances you choose within it, to the Internet. In Palo Alto, the efficacy of that class of connection, at what is effectively a commodity level, is still being actively explored.

FTTH is a commodity access technology that uses an optical network architecture optimized for simple, economical delivery of telephony, packet data, and video to the home through a single bidirectional fiber-optic strand. The Marconi[5] multimedia FTTH system was chosen for the trial. The services provided on the system are POTS (Plain Old Telephone Service), high-speed Internet access (data), and broadcast video. All three services are combined and distributed from the City of Palo Alto's Utility Control Center and transmitted to residential customers over a fusion-spliced fiber-optic network. The resulting Outside Cable Plant contains no active components and as such is referred to as a passive optical network (PON).

Data throughput is always a function of the weakest link in an end-to-end communication path. In the case of FTTH, that path includes the inside house wiring, PC equipment, upstream gateway(s), ISP, and Internet bandwidth. In-house load testing in the Palo Alto trial has shown consistent payload throughput ranging from 3.5 Mbps to bursts of 7 Mbps. Compared to home fractional T1 products (ours costs roughly $450 per month, delivering bursts of up to 1.1 Mbps), a fiber-optic option today would be very appealing, especially if the price were competitive with Internet access delivered by cable television.
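The back-of-the-envelope arithmetic makes the appeal plain. Assuming a hypothetical cable-competitive FTTH price of $50 per month (the other figures come from the trial and our T1 bill):

    t1_dollars, t1_mbps = 450.0, 1.1       # fractional T1: monthly cost, burst rate
    ftth_dollars, ftth_mbps = 50.0, 3.5    # assumed price, sustained trial rate

    print(f"T1:   ${t1_dollars / t1_mbps:.0f} per Mbps per month")    # ~$409
    print(f"FTTH: ${ftth_dollars / ftth_mbps:.0f} per Mbps per month")  # ~$14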

Will all home communications needs be met by a 3.5 Mbps reliable connection? Probably not. If multiple-channel, high-definition video, Internet traffic, audio, and voice communication are to flow concurrently, with complete reliability, arguably much more throughput to the home will be needed. But fiber optics at least holds promise for that ultra-high-speed future.

In the context of wireless and network integration, fiber optics presents a viable option and an important consideration for the NDC developer. At the same time, it also presents a bit of a problem: what class of consumer do I design my application to address? What bandwidth assumptions can I make? Are bandwidth considerations even germane to application development? Or should bandwidth, along with security, homogeneity, transport cost, and so on, be lumped together in a box labeled “fallacies” and placed mindfully in the closet, hopefully to be ignored by yet another generation of distributed computing software developers?

Wireless

A wide range of bandwidth options is the least of our worries. Deep within the design premise of the Internet is the notion of geographically fixed nodes (another fallacy?). Even as wireless physical access spaces emerge, the premise holds; wireless nodes roam a physical space constrained by the limits of the devices, frequencies, and protocols utilized, but always cognizant of and dependent on a receiver that acts as the Internet gateway, which in turn is geographically static. The promises of wireless appear at the edges, but the fixed core of the net leaves us with a novel set of design constraints and potential for innovations. The challenge is one of blending something that is fluid with something that is fixed.

Consider the protocols cited earlier: Wi-Fi, Wi-Fi5, and Bluetooth. We might imagine mobile nodes that implement these very specifications, serving users who are either in proximity to or ready to install the necessary LAN/MAN equipment for wireless application deployment. My Bluetooth PAN devices, like my augmented reality goggles, soul-catcher storage unit, digital identification and security apparatus, and physiostasis monitor, share the seven available Bluetooth connections, transmitting and receiving through my wearable Wi-Fi-enabled system, which acts as my PAN gateway, as shown in Figure 7.2. The Wi-Fi-enabled building, served by one or more receivers, shares duty with the Wi-Fi5-connected campus units.

Figure 7.2. Wi-Fi and Bluetooth

The wireless LAN (WLAN) horizon for one building may overlap with another's, posing interesting switching requirements that the wired Internet need not consider. The same is true if seamless migration between Wi-Fi and Wi-Fi5 fields is part of our imaginary installation. If node n2 decides to leave the WLAN in question and wander around the campus, problems arise, as shown in Figure 7.3.

Figure 7.3. Wi-Fi, Wi-Fi5, and Bluetooth

As a node wanders through myriad fields, each of which represents a different entry point into an otherwise fixed network, the need for protocol hand-over should be obvious. On the other hand, it is not out of the question to consider engineering around the absence of a deep datacom infrastructure that is mobile-node aware. Clever edge implementations, which mask information field shifts from the user, may be viable. This is a choice that will eventually present itself.
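One such edge implementation might look like the following Python sketch: the node binds to the strongest visible gateway and re-registers only when a candidate clearly beats the current link, masking field shifts from the user. All names and the hysteresis margin are hypothetical.

    # The node tracks visible gateways and re-binds only when a candidate
    # wins by a clear margin, hiding hand-over from the user and avoiding
    # "flapping" at field boundaries.
    from dataclasses import dataclass

    @dataclass
    class Gateway:
        name: str
        protocol: str      # "802.11b", "802.11a", "Bluetooth", ...
        signal_dbm: float  # received signal strength

    def select_gateway(visible, current=None, hysteresis_db=6.0):
        """Pick the strongest gateway; abandon the current one only if a
        candidate wins by at least hysteresis_db."""
        best = max(visible, key=lambda g: g.signal_dbm)
        if current and best.name != current.name:
            if best.signal_dbm < current.signal_dbm + hysteresis_db:
                return current
        return best

    campus = [Gateway("bldg-7", "802.11b", -70.0), Gateway("quad", "802.11a", -66.0)]
    link = select_gateway(campus)                # binds to "quad" (-66 dBm)
    link = select_gateway(campus, current=link)  # stays: no 6 dB improvement offered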

Another wireless choice is a clever edge implementation that already masks information field shifts from the user to some extent, and that may soon be ready as a PAN gateway cum wearable digi-nexus. It's called a 3G mobile phone, and it may yet emerge as the hands-down winner in the Darwinian contest du jour. A Bluetooth chip in a CDMA phone could accomplish much the same thing as a PDA with Wi-Fi/Wi-Fi5 and would likely find a commodity outlet channel a lot faster, given the uptake of mobile phones worldwide. Would protocol hand-over be solved? Not quite yet. But from an application deployment perspective, mobile phones are an increasingly interesting choice.

NDC developers enjoy a litany of issues and choices in this nascent Network Age. But as is the case with any reasonably complex fitscape, we haven't even begun to explore the possibilities.

The π-Calculus and Protocol Hand-Over

Messages ride a medium, regardless of context. Whether the medium be RF, coherent light, IR, microwave, copper, or optical fiber, a message is always transmitted through a medium of some kind. Back when nodes on a network were tied down and geographically static, strategies for navigating the network could begin with those assumptions. But the inflection point born of the meeting of Moore's law and the wireless component of Gilder's law challenges those assumptions fundamentally and may represent a phase shift in NDC and the transformative software that drives it.

Recall the discussion of the Spanning Tree Protocol (STP) in Chapter 4; some form of STP discovers and defines networks today. When a mobile node joins such a network, assumptions built into the Internet's datacom infrastructure, both software and hardware, are challenged; difficult problems arise, requiring a fundamental examination of previous assumptions. How do we effectively manage a network made up of nodes that are not only intermittent but that also potentially manifest a constantly morphing network topology? The drawbacks of an STP in such an environment should be obvious. The answer may lie in the π-calculus, first discussed in Chapter 3.
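To see the strain, consider a toy breadth-first spanning tree over an adjacency map, a simplified stand-in for STP proper with a hypothetical topology: every join, departure, or move invalidates the tree, which must then be rebuilt.

    # A toy BFS "spanning tree". The point: any membership or linkage change
    # invalidates the whole tree, which a fixed-node protocol can tolerate
    # but a constantly morphing mobile topology cannot.
    from collections import deque

    def spanning_tree(adj, root):
        """Return {node: parent} for a BFS spanning tree rooted at root."""
        parent, frontier = {root: None}, deque([root])
        while frontier:
            u = frontier.popleft()
            for v in adj.get(u, ()):
                if v not in parent:
                    parent[v] = u
                    frontier.append(v)
        return parent

    adj = {"gw": ["n1", "n2"], "n1": ["gw", "n2"], "n2": ["gw", "n1"]}
    tree = spanning_tree(adj, "gw")   # {"gw": None, "n1": "gw", "n2": "gw"}

    adj["gw"].remove("n2"); adj["n2"] = ["n1"]   # n2 drifts out of gw's range...
    tree = spanning_tree(adj, "gw")   # ...and the entire tree must be rebuilt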

Pioneered by a small group of theoretical computer scientists in the late 1980s,[6] the π-calculus is a theoretical tool designed to describe communicating systems in which it is possible to rigorously express processes that have changing structure. Where the component agents of a system are arbitrarily linked, communication between neighbors may carry information that changes that linkage. The calculus provides a means “. . . for analyzing properties of concurrent communicating processes, which may grow and shrink and move about.”[7]

The calculus gains simplicity over previous approaches by removing distinctions between variables and constants; communication links are identified by names, and computation is represented only as the communication of names across links.
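In standard π-calculus notation, the core reduction rule captures this: a name y, sent along link x, becomes usable by the receiver. The second reduction is an invented example in which a mobile node M passes its gateway link g to a peer N, which can then transmit a datum d on a link it did not previously possess.

    % Name passing: y, communicated over x, is substituted for z in Q.
    \[
      \bar{x}\langle y\rangle.P \;\mid\; x(z).Q
      \;\longrightarrow\;
      P \;\mid\; Q\{y/z\}
    \]
    % Link mobility: M hands its gateway link g to N over x.
    \[
      \bar{x}\langle g\rangle.M' \;\mid\; x(z).\bar{z}\langle d\rangle.N'
      \;\longrightarrow\;
      M' \;\mid\; \bar{g}\langle d\rangle.N'
    \]

Because linkage itself is just data, a changing topology becomes ordinary computation in the calculus, which is precisely what a spanning-tree view of the network lacks.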

From a process perspective, what does it mean to be “mobile”? Strictly speaking, computer science may harbor many interpretations of the term and many correspondingly contrasting solutions. A process may migrate from one node to the next, physically moving around the network; a process may migrate in a virtual space of linked processes; a link may move in a virtual space of linked processes. These are only three of the many possible definitions of a mobile process.

The calculus describes mobile processes from the perspective of their links; the location of a process in a virtual space of processes is determined by its links to other processes. Those within the communication horizon of the process in question are a function of the links that process may have; accordingly, the movement of a process can be represented by the movement of the links.

A complete description of the π-calculus, its nomenclature, and its rules is beyond the scope of this book. For this discussion, it is enough to say that the introduction of wireless nodes into a network in which Metcalfe's law is also to prevail—that is, in which possible node-to-node connections are to be maximized—requires a review of existing discovery and routing assumptions. The calculus offers hope in that regard.

Summary

The wireless revolution represents a phase-shifting inflection point in NDC development. The baked-in topological assumptions of the Internet are fundamentally challenged by the advent of radio-frequency mobile phones, as well as by the WLAN, WMAN, and PAN technologies being gingerly explored today by a generation of early adopters. Driven by rural connection needs and urban mobile possibilities, the convergence of ultra-high bandwidth via fiber optics with mobile broadband via Wi-Fi et al. requires a rethinking of basic Internet premises. The resulting challenges and choices for NDC developers are legion.

Notes

1. CDMA was pioneered commercially by Qualcomm, Inc., of San Diego, CA: see www.qualcomm.com.

2. George Gilder, Telecosm (New York: Free Press, September 2000).

3. David D. Nolte, Mind at Light Speed (New York: Free Press, 2001). According to Nolte, an “all-optical network,” which would provide optical switching as well as transmission, could theoretically maximize bandwidth to somewhere near 30 Tbps.

4. www.cpau.com/fth/

5. www.marconi.com

6. Robin Milner, Joachim Parrow, and David Walker, “A Calculus of Mobile Processes, Part I,” www.lfcs.informatics.ed.ac.uk/reports/89/ECS-LFCS-89-85/.

7. Robin Milner, Communicating and Mobile Systems: The π-Calculus (Cambridge, UK: Cambridge University Press, 1999), p. 3.
