2

Some History and Background

Ever since early human beings devised a spoken language and learned to communicate verbally with each other, there has been a search for ways to extend the distance over which these communications can travel. A variety of methods have been developed over the years to do just this.

Shouting, of course, was probably the first thing that these people learned. Then, perhaps, prehistoric people found that they could get their thoughts and ideas further afield by a pattern of sounds resulting from beating on hollow logs. Then, someone discovered that by stretching an animal skin over the ends of these hollow logs the sound would carry even further. With time, the drummers learned that they could express more complex ideas by using different rhythms on these drums.

For some time the ancient Greeks used a method of standing vases in a grid of shelves and partitions to convey information, as shown in Figure 2-1. Arranging the vases in different patterns and/or using different numbers of vases meant different things. Incidentally, the word “telephone” has Greek roots and means “far speaking.” The Greek historian Herodotus reported the use of the reflection of the sun on a polished shield as a signaling method in the fifth century BC, and this same technique was used to transmit information from one place to another all the way up to the latter part of the nineteenth century. It was refined by the use of a small, hinged, movable mirror mounted in a frame and operated by hand to move the reflection of the sun to and from the observer. This was known as a heliograph and was widely used by the U.S. Army until the adoption of the telegraph and, later, the wireless when it became available.


Figure 2-1 An ancient Greek signaling system.

For thousands of years, fires lit on hilltops have been used as a visual signaling method to pass on information. The entrance of the Greek soldiers into Troy, by means of the famous wooden horse in the 13th century BC, was signaled to the remainder of the Greek army by a fire. The British spread the news of the Duke of Wellington’s victory at the famous battle of Waterloo in 1815 up through the British Isles by signal fires lit on the tops of strategically located hills. And while it wasn’t a very large fire, the lantern light in the Old North Church tower was a very important signal to Paul Revere. The American Indian tribes, particularly those living on the western prairies, devised a usable system of smoke signals that were commonly seen by the pioneers heading west. This signaling system has been a traditional part of any Hollywood movie of the Old West.

Semaphore signals of various types, generally made up of two or more movable arms mounted on a pole, came into use in the 17th century. A semaphore was installed in Liverpool, England, a major seaport at the time, and was used to signal the arrival of ships in the harbor to the merchants in the town. One type, invented by a Frenchman in 1794, is illustrated in Figure 2-2. A more elaborate type, used in Germany, is shown in Figure 2-3, and samples of the code that was used are shown in Figure 2-4.


Figure 2-2 The semaphore signal.


Figure 2-3 A German semaphore of the 1790s.


Figure 2-4 Examples of the code used by the German semaphore shown in Figure 2-3.

In Russia, by 1838, Czar Nicholas I had organized a system in which there were 220 semaphore towers located five to six miles apart, with some 1,300 operators. This system linked St. Petersburg with Moscow and Warsaw. By 1844 there were more than 500 towers located in France, linking Paris with Calais, Strasbourg, and Brest.

The semaphore had the advantage of making it possible to manipulate the moveable signaling arms, and thereby send a message, from some distance away by ropes or wires attached to the arms—there was no need for the operator to be right at the semaphore site. However, the operator still had to be able to see the other semaphores if the signal there was part of a signaling chain from one location to another some distance away.

Semaphore signals do not seem to have been widely used in the United States, and where they were installed, it was generally in ports, where they were confined to announcing the arrival of incoming ships. The first on record was a system of sixteen signals, set up in 1801, running from Martha’s Vineyard to Boston, again to signal the pending arrival of ships. There was also a series of semaphores stretching from the head of the Delaware Bay to Philadelphia for the same purpose. Others are reported to have been used in Portland, Maine, and Charleston, South Carolina.

A combination of pigeons and semaphores was in use for transmitting messages in New York about this same time, the early 1800s. When the harbor pilot went aboard incoming ships offshore at Sandy Hook, the first piece of America seen by these ships, pigeons were taken aboard too. Information regarding the ship’s cargo was put in capsules and attached to the birds. They were then released and flew to their home loft at Sandy Hook. The information was then transmitted to New York by a chain of semaphore signals.

The railroads used semaphores for many years to signal the train personnel as to the condition of the track ahead and pass on other information. Before radio communication became reliable and generally accepted, the navy signaler who would wave flags in his hands, using a code similar to that in Figure 2-4, was the sole means of communication in daylight between the ships of the navies of the world. At night, when the flags could not be seen, a blinking searchlight, using the Morse code in flashes of light of varying length, was used.

Until the transcontinental telegraph line was completed in 1861 and wire was strung throughout the country, messages were sent from town to town by coach or by messengers on horseback over poor roads or sometimes just trails. News of the adoption of the Declaration of Independence in Philadelphia on July 4, 1776, was sent out by express riders who took five days to get to New York and eleven days to get to Worcester, Massachusetts. They did not reach Savannah, Georgia until August 8th.

The news of the opening of the Erie Canal in 1825 reached New York a little faster. Artillery cannons were spaced five to six miles apart to pass on the signal. The news reached New York from Buffalo in some eighty minutes.

In the period from 1840 to 1860, when the big movement of people westward to California began, mail was sent by ship from ports on the east coast of the United States to a port on the Atlantic side of Panama, where it was then carried across the isthmus packed on the backs of mules to another port on the Pacific side. Here, the mail was loaded onto another ship and then sailed up the coast to San Francisco. A semaphore was built on Point Lobos to send the news of a ship’s arrival to Telegraph Hill and then on to San Francisco. When the arrival of a mail ship was signaled, people used to rush to the post office and line up for their letters and for the latest news from the east.

The first overland mail route, traveled by horse-drawn coach in 1850, went from St. Louis or Memphis to Los Angeles and San Francisco. The record time for the trip was twenty-four days, eighteen hours. The Pony Express, which began in 1860 and carried mail and telegrams in saddlebags, used relays of horses and riders stationed ten to twenty-five miles apart to ride from St. Joseph, Missouri, to Sacramento, California. There the mail was transferred to boats that took it down the Sacramento River to San Francisco. This 2,000-mile trip took about ten days. The arrival of the telegraph system, of course, put an end to all of this.

Until the discovery of electricity in the late sixteenth century, all of the methods of extending the range of communication that were tried were either verbal or visual. When it was found that a charge of static electricity, usually generated by rubbing a glass or amber ball, would travel over wires and produce an effect on some object, such as a feather placed at the other end of the wire, some effort was made to use this to transmit information. One suggestion, made in 1753, was to use a separate wire for each letter of the alphabet. At the receiving end, a piece of paper with a letter of the alphabet on it was placed very close to the end of each wire. At the sending end of each wire a glass globe was fastened. When one of the globes was rubbed, generating a static electricity charge that then traveled along the wire, the piece of paper at the far end of that particular wire moved, indicating which letter was intended to be read.

In the eighteenth and nineteenth centuries a number of inventors and scientists in the United States and Europe did a great deal of work on the development of ways to use electricity for communication. In May 1844 this work led to Samuel Morse’s demonstration of a usable telegraph system over a wire strung between Washington, D.C., and Baltimore, Maryland, a distance of about thirty-five miles. He sent the words, “What hath God wrought,” using the coding system he had devised, now known as Morse code. This code was made up of a combination of dots and dashes for each letter in the alphabet and each number. The letters that occurred most frequently in everyday use were given the shortest combinations of dots and dashes. The dots and dashes were created by turning the power on for a short period to signify a dot, and leaving it on for a longer period to signify a dash. The time period to signify a dash, ideally, was three times longer than that for the dot. Since the letter “E” is the most used letter in the alphabet, it was assigned a single dot. The letter “T”, the next most used, was given a single dash. The letter “A” was made from a dot and a dash, the letter “B” from a dash and three dots, “C” was dash, dot, dash, dot, and so on through the alphabet, the numbers, and the punctuation marks.
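To make the encoding concrete, here is a minimal sketch in Python (not part of the original text) of how such a scheme might be expressed. The code table is deliberately abbreviated and the dot duration is an assumed, illustrative value; only the ideas that frequent letters get the shortest patterns and that a dash lasts three times as long as a dot come from the description above.

```python
# Minimal, illustrative Morse-style encoder. The table is only a fragment,
# and DOT_MS is an assumed timing unit, not a historical figure.
MORSE = {
    "E": ".", "T": "-", "A": ".-", "B": "-...", "C": "-.-.",
    "S": "...", "O": "---",
}

DOT_MS = 100             # assumed duration of one dot, in milliseconds
DASH_MS = 3 * DOT_MS     # a dash is ideally three times as long as a dot

def encode(word: str) -> str:
    """Return the dot/dash pattern for a word, letters separated by spaces."""
    return " ".join(MORSE[ch] for ch in word.upper() if ch in MORSE)

def on_air_time_ms(pattern: str) -> int:
    """Rough keying time: dot and dash durations plus a dot-length gap between symbols."""
    symbols = pattern.replace(" ", "")
    on_time = sum(DOT_MS if s == "." else DASH_MS for s in symbols)
    gaps = max(len(symbols) - 1, 0) * DOT_MS
    return on_time + gaps

if __name__ == "__main__":
    pattern = encode("SOS")
    print(pattern, on_air_time_ms(pattern), "ms")   # ... --- ... 2300 ms
```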

Before long, the nation was crisscrossed east, west, north, and south by a network of telegraph poles and wires using this new communications system. The first transcontinental telegraph line was put into operation in 1861 except for the sixty miles between Salt Lake City, Utah and Carson City, Nevada. This gap in the link was covered by Pony Express riders until the line could be completed. Because there were no manufacturing facilities in California at this time, supplies for the telegraph line had to be shipped by boat around Cape Horn to San Francisco. From there, the supplies were hauled by horse-drawn wagon to the needed location.

Most telegraph lines paralleled the expanding railroad systems. Supplies could be shipped by train or by horses and carts over trails and crude roads left by the railroad construction crews. Along with the usual telegraph traffic sent over these lines, a great deal of information regarding the progress of the next train was passed on. In those early days many a youth spent hours at the local railroad station watching the telegrapher work magic with the key and listening to the clicks, sent from a distant station, made by the telegraph sounder. The clicks were traditionally amplified by an old Prince Albert tobacco tin jammed behind the sounder.

The quickest transmission of messages from the European continent to the United States and Canada depended, at this time, on the steamship traffic arriving at Halifax, Nova Scotia, Boston, or New York. Since boats reached Halifax a day or so before touching a U.S. port and because there was no line to Halifax at this time, it was important to news organizations and the government to get their messages from Halifax to the nearest telegraph station in Boston. Some used messengers on horseback to ride to Boston. One enterprising individual set up a service using carrier pigeons, which were somewhat faster.

It was obvious that a transatlantic telegraph cable was necessary, but the skills and the technology were not yet up to the task. Some cables had been laid underwater across rivers, but the failure rate was high due to the breakdown of the insulation, mainly burlap and tar, that was used. This problem was not solved until gutta-percha, a natural latex derived from the sap of trees grown in Southeast Asia, was discovered and used as an insulating medium.

The first successful attempt to lay a cable across the Atlantic Ocean was made in 1858, from Valentia Bay on the southwest coast of Ireland to Trinity Bay in Newfoundland, a distance of some 1,600 miles. It lasted three months before failing, and in 1866 another cable was put down that proved able to last under the conditions imposed. There are now over 400,000 miles of undersea cable laid in the oceans around the world.

While the use of the telegraph was increasing and lines were being laid or strung to connect the countries of the world, it quickly became evident that having a third party, the telegraph operator, directly involved in the message-handling process was not the ultimate answer. In addition to the lack of privacy, the necessity of translating the dots and dashes into words that were understandable to the average person was a handicap. A method of direct communication by voice, on a one-to-one basis, over these wires and cables became the aim of everyone working in this field. In March 1876, Alexander Graham Bell, during one of a long series of experiments he had been making to achieve this goal, spilled some acid on his leg and, without thinking, used an experimental telephone setup to summon his assistant from the next room with the immortal words, “Mr. Watson, come here, I want you.” Mr. Watson heard the call for help transmitted over the wires and responded.

The value of this new invention was quickly realized, and in 1878 the first telephone switchboard, with thirty-eight subscribers and using ex-telegraph boys as operators, was installed in New Haven, Connecticut. By 1880 there were 138 exchanges throughout the country. A typical early telephone exchange is shown in Figure 2-5.


Figure 2-5 An early telephone switchboard in Kansas City, 1904. (Reproduced with permission of the AT&T Corporate Archive)

About this same time Guglielmo Marconi, an Italian engineer who was living and working in England at the time, began to put together a number of recent technical discoveries that had been made by other workers in this field. His aim, like that of so many others, was to eliminate all of the wires needed in the then-present telegraph systems. By the year 1895 he had managed to transmit signals through the air for a distance of about two miles, without any connecting wires. His experiments were carried out on Salisbury Plain in England. This was the first wireless signal on record. Figure 2-6 shows an early painting illustrating Marconi’s demonstration of his system to the English military personnel.


Figure 2-6 Guglielmo Marconi demonstrating his wireless communication system to the British army in 1895. (Courtesy GEC-Marconi)

By 1897, wireless communication had improved to the point where messages, using Samuel Morse’s code of dots and dashes, were being exchanged between shore stations on the English coast and ships up to ten miles out at sea. In 1899, Marconi transmitted signals across the English Channel to France, a distance of some twenty miles. The next great barrier facing Marconi was the 3,000-odd miles of the Atlantic Ocean. In 1901, using a string of kites to keep his receiving antenna up in the air, Marconi successfully received at St. John’s, Newfoundland, the letter “S,” sent in Morse code from Cornwall, England.

As soon as it had been demonstrated that this wireless method of communication with dots and dashes was feasible, a number of people began working on ways to transmit the human voice through the “ether.” After all, it had been done over wires with the telephone. In 1902, Reginald Fessenden developed a system of modulating a continuous radio wave, or carrier, with the varying sound waves that make up the human voice. He then transmitted this signal over the air to a wireless receiver. In the receiver the signal was demodulated; the sound waves were extracted from the signal and made intelligible to the listener.
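As a rough illustration of that idea (a sketch, not Fessenden’s actual apparatus), the short Python program below amplitude-modulates a steady carrier wave with an audio tone and then recovers the audio by envelope detection. The sample rate, frequencies, and the crude moving-average filter are all assumed values chosen only to make the example self-contained.

```python
# Illustrative amplitude modulation and envelope detection.
# All numeric values are assumptions for the sake of the example.
import numpy as np

fs = 50_000                                    # samples per second (assumed)
t = np.arange(0, 0.02, 1 / fs)                 # 20 ms of signal

audio = np.sin(2 * np.pi * 440 * t)            # stand-in for the voice: a 440 Hz tone
carrier = np.cos(2 * np.pi * 10_000 * t)       # continuous 10 kHz carrier wave

# Modulation: the audio varies the amplitude (the envelope) of the carrier.
transmitted = (1 + 0.5 * audio) * carrier

# Demodulation: rectify, then low-pass filter to extract the envelope.
rectified = np.abs(transmitted)
kernel = np.ones(50) / 50                      # crude moving-average low-pass filter
envelope = np.convolve(rectified, kernel, mode="same")

recovered = envelope - envelope.mean()         # remove the steady (DC) part
print("correlation with the original audio:",
      round(float(np.corrcoef(recovered, audio)[0, 1]), 3))
```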

This is a very simplified version of the way that it all began. From this point on, the story of wireless communication is a mixture of technical progress, political action, and financial manipulation. It has been well documented by others from all points of view.

The electromagnetic spectrum where this all takes place, shown in Figure 1-1, is a finite resource. There is only just so much of it; it cannot be expanded to meet our ever-increasing communications needs. The technology of the early days of radio allowed only the use of the longer wavelengths, those above 200 meters, or below about 1.5 MHz in frequency. The higher frequencies above 1.5 MHz were considered useless for communication purposes at that time and were assigned to the many experimenters working in this area and to the growing group of “radio amateurs.” But the commercial pressure for increased spectrum space, and the experiments carried out by industry and the amateurs, resulted in the development of new and improved components and equipment that would operate at the higher frequencies. Operating techniques were devised that would make each wireless signal take up less space in the spectrum. This is still the subject of much continuing work as our communication needs grow.

Any location on this electromagnetic spectrum is usually referred to by the frequency of that particular electromagnetic wave, the number of times per second that it goes through its cycle, as you will have gathered. This frequency used to be referred to as so many cycles per second. Household current in the United States is sixty cycles, and you can still hear or read the term “kilocycle,” or 1,000 cycles, when referring to the location, for example, of a radio station on the broadcast dial or spectrum. Now the term “hertz” is used in place of “cycle” in most references to frequency. This is in honor of the German scientist Heinrich Hertz, who did so much work in the early days of radio communication. To save writing out the long string of zeros that would be needed at the higher frequencies, metric prefixes are generally used: kilohertz (kHz) for 1,000 cycles, or hertz; megahertz (MHz) for 1,000,000 hertz; and now gigahertz (GHz) for a billion cycles per second. Although some of the higher frequencies are now being used for wireless communication, with the personal communications systems (PCS) operating in the region of 1900 MHz, we cannot just keep going up and up in the spectrum. The wavelengths get shorter and shorter and the wireless signal does not travel as far. It becomes liable to attenuation by such things as foliage, raindrops, and even heavy fog.
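Since frequency and wavelength are two ways of describing the same wave, one can be converted to the other: the wavelength in meters is the speed of light divided by the frequency in hertz. The short Python sketch below (an illustration added here, not part of the original text) applies that conversion to frequencies mentioned in this chapter and shows why signals near 1900 MHz have wavelengths of only a few inches.

```python
# Convert a few of the frequencies mentioned in the chapter to wavelengths.
SPEED_OF_LIGHT = 3.0e8   # meters per second (approximate)

def wavelength_m(frequency_hz: float) -> float:
    """Free-space wavelength in meters for a given frequency in hertz."""
    return SPEED_OF_LIGHT / frequency_hz

for label, hz in [("60 Hz household current", 60.0),
                  ("1.5 MHz, the top of early radio", 1.5e6),
                  ("1900 MHz, the PCS band", 1.9e9)]:
    print(f"{label}: about {wavelength_m(hz):,.2f} m")
```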

Amid the research that was going on in all areas of this new method of wireless communication, there were undoubtedly tests and experiments being carried out in the area of mobile communications by the industry prior to 1920. This is the year that the first recorded mobile radiotelephone service was put into use by the Detroit Police Department. The frequencies assigned for this use were just above the present AM radio broadcasting band, then referred to as 1,600 kilocycles but now, more properly, 1,600 kHz. Some of the department’s scout cars were able to keep in touch with their headquarters with what would now be considered bulky and crude receivers and transmitters. Figures 2-7 and 2-8 show some early experimental walkie-talkie radios and cellular telephones.


Figure 2-7 Early experimental walkie-talkies. (Photograph courtesy Al Gross)


Figure 2-8 First experimental cellular telephone with the OKI Model 1150. (Courtesy OKI Telecom)

The first commercial mobile radiotelephone service made available to the general public was introduced by the American Telephone and Telegraph Company (AT&T) in St. Louis in 1946. This advance in mobile communications was probably helped along by the developments and improvements in radio communications that were made during World War II. A major contributor to these advances was the British invention of radar, which was the result of research and development in the higher radio frequencies. Experiments in this area of the electromagnetic spectrum showed that these frequencies could be made usable for communication purposes, and they became invaluable in the postwar development of what has become known as microwave communication and in the use of mobile radiotelephones.

The system that was installed by AT&T in St. Louis was assigned channels, or bands of frequencies, in the 150-MHz band by the Federal Communications Commission (FCC). Only three channels were used because of the technical limitations of the system at the time. All the calls to and from the mobile subscribers were routed through a special operator. The mobile units used a simplex system, also referred to as “push-to-talk.” That is, the speaker could not hear or be interrupted by the other party on the line while talking. When the speaker had finished and was ready to listen to the response, he or she had to release the push-to-talk switch or button before hearing the other party on the line or, in this case, on the air.

These early mobile radiotelephone systems all used one high-powered, base radio station per city or service area with the antenna placed on as high a site as was available so as to blanket as large an area as possible with the wireless signal. This meant that the limited number of channels available could not be reused in neighboring cities or towns because the coverage of the signals would overlap and interfere with each other.

Improvements were gradually made in the mobile service as experience was gained and advances made in the manufacture of electronic components and improved methods of using them. The demand for telephone communications, while on the move, grew with the increasing use of automobiles for business and pleasure. In 1964, AT&T introduced the Improved Mobile Telephone Service (IMTS) in which the subscriber in the car could dial the wanted number directly on the telephone handset in the vehicle—there was no operator intervention in the process as was previously necessary. The selection of an available, vacant communication channel was done automatically at the base station and the waiting time for a clear channel was reduced considerably.

The major problem, a lack of available radio frequencies or channel capacity to accommodate the growing number of people wanting mobile service, remained unsolved. In 1970, the FCC came out with the results of a long study that had been carried out on the existing mobile radiotelephone service. This study reviewed in some detail the possibilities of improving and expanding the service. The FCC then asked the communications industry to review the results of this study and to submit proposals for a new mobile system using higher, heretofore unused frequencies in the 800 MHz band that would be made available.

As a result of experimental work that had been going on for some years at AT&T Bell Labs, and in response to the FCC request, AT&T filed a proposal for the development and installation of the first cellular radiotelephone system in 1971. For the next several years, the FCC study and the resulting AT&T proposal were looked at carefully, and in 1974 the FCC allocated 40 MHz of the 800-MHz band for the initial development of a commercial cellular radiotelephone system. In 1975, Illinois Bell, then an AT&T subsidiary, filed an application with the FCC proposing to build and test the first commercial cellular system in the Chicago area. This system was to be based on the Bell Labs work. In 1977, the FCC gave approval for the installation of this system.

In 1978, the Chicago cellular telephone system, christened Advanced Mobile Phone Service (AMPS—a term that is still used) began operation using a test sample of subscribers from all segments of the business community in that area. This first system was to test the concept of the cellular system as a commercial venture. At the same time, there were some further “laboratory” tests carried out on transmitting and receiving equipment and operating techniques by AT&T personnel in Newark, New Jersey. No customers were involved in these tests.

The original AMPS system in Chicago consisted of ten cells covering about 21,000 square miles of the metropolitan area. In this system the new techniques, hardware, and software were used and tested as a single concept. A central computer and switching system were used to perform all of the tasks necessary to integrate the mobile system into the local telephone network. Custom circuitry and new microprocessors using the recently developed large-scale integrated circuits (LSI), in which the ability to perform many different functions was built into a single chip, were used throughout the system. All this technology was brought together for the first time to prove that the cellular concept was feasible.

AT&T, of course, planned to take the AMPS system nationwide after it had proven so successful in the Chicago tests. But the breakup of the AT&T telephone monopoly in January 1984, generally referred to in the industry as “the divestiture,” meant that the seven newly organized regional telephone companies resulting from the divestiture would have to set up individual cellular systems in their own assigned areas.

When discussing cellular telephone systems these newly independent entities, the “Baby Bells,” were known as the “wireline” companies for obvious reasons. Their cellular service was designated as the “B” system. In addition, the FCC ruled that they would have to compete with one other system that would be licensed by the FCC to operate in the same regional markets. These competing organizations were known as “nonwireline” companies or the “A” systems, and the FCC set aside half of the available wireless channels for assignment to them.

The mergers and sales in the telecommunications industry that took place in the 80s and 90s changed all this, and, as a result, the nonwireline cellular system in one area might very well be owned and operated by a Baby Bell, a wireline company from another area across the country.

Before the telephone corporation divestiture in 1984, AT&T had planned to install the next cellular system in the expanding Washington-Baltimore market area and to incorporate into this system improvements in equipment and operating procedures that were based on the experience gained with operating the AMPS system in Chicago. The proposed AT&T Washington-Baltimore system was taken over by the new regional telephone company, Bell Atlantic, and began operations in the middle of 1984 as Bell Atlantic Mobile Systems. Meanwhile, a nonwireline company made up of a consortium of communications and media organizations in the area was granted the second competing cellular license and began operations.

The rapid growth in the popularity of mobile communications in the 1990s led to the need for additional operating channels to take care of more users. To meet this need for more frequencies, the FCC auctioned off an area of much higher frequency spectrum space, in what was called the 2-GHz band. This auctioning process was a new governmental money-raising technique for assigning frequencies, which had heretofore been assigned or given away on the basis of a lottery. The new method resulted in millions of dollars being turned over to the U.S. Treasury.

One problem with these new frequency assignments was that there were already some occupants operating in this band. Typical users were public service organizations such as fire and police departments, public utilities, the railroads, and others. The FCC ruled that these organizations would have to shift their activities to different, higher frequencies and gave them some five years to vacate the existing assignments. The cost of the move, which would require the installation of different wireless equipment, was to be borne by the cellular companies that would then occupy the band. Committees have been set up by the various trade associations for these operators to try to make this move as painless and as economical as possible.

The cellular system designed for this new 1900-MHz band has been named the Personal Communications Service, or just PCS. The operating technology for the first system put into operation in the United States was a variation on a system developed and used in Europe and on some 100 systems in 70 other countries. In Europe the system is known as GSM, the Global System for Mobile Communications. It was developed as a second-generation, digital, Europe-wide system that would enable a cellular user to operate anywhere on the continent. Users in Europe had objected long and loud to the fact that their mobile phones on the existing systems would not work once they crossed a national border. Each country, it seems, adopted a different system for its initial ventures into cellular telephone communications. These systems were developed by telecommunications companies and government departments for use in their own countries. Attitudes changed, however, with the increase in international cooperation brought about by the growing importance of the European Common Market. The experience gained in developing the GSM system in Europe had proven that the technology, the hardware, and the necessary operating software would work. These were readily available for use in the United States. Operating systems other than GSM have been proposed by equipment manufacturers and potential PCS carriers in the United States, and it remains to be seen whether any one system will be adopted as a standard.

The first PCS system in the United States came on-line late in 1995. It covered the Washington/Baltimore area with some 300 cell sites. As described in the previous paragraph, the system was designed to operate in the 1900-MHz band using the GSM technology that had proven to be effective in Europe and in other countries. Fully digital, the service offered smaller and lighter handsets than the existing cellular models. The PCS handsets are equipped to act as an answering machine and pager, while the carrier offers all of the other services that users have come to expect from telephone service, such as call waiting and call forwarding.

In addition to the competition that is emerging between the original cellular carriers and the new PCS systems, there is competition between methods of digitizing the wireless signal passing between your mobile handset and the cell site. If you investigate cellular or PCS technology any deeper than the daily newspaper or the more popular consumer magazines, you will run across discussions of such systems as Time Division Multiple Access (TDMA), Code Division Multiple Access (CDMA), and the Global System for Mobile Communications (GSM), which has already been mentioned; these are all different techniques for handling the second generation of cellular communication, digital communication. The differences in these systems are technical in nature and do not really concern the consumer, who is generally interested only in the quality and consistency of the communication that can be obtained from the carrier being used.

While the initial efforts were going on in the 1970s and 80s to improve the mobile radiotelephone systems used in automobiles and trucks, some work was also being done to improve the availability of public communication in other forms of transportation. There were flurries of publicity from the railroads about installing telephone booths in some trains, but the problems encountered with the original mobile radiotelephones in automobiles were compounded on the trains. The trains moved in and out of the urban areas and, therefore, quickly in and out of range of the cellular cell sites. Calls had to be completed within a relatively short time before the train moved out of the coverage area of the local carrier and a frustrating “drop-out” occurred.

A different problem existed when attempts were made to use a cellular telephone system in commercial aircraft. The cell site on the ground tended to be within radio range of the cellular transceiver in the aircraft for a much longer period of time since the airplane was high in the air and free from the factors that limited the range of a ground-based vehicle. Therefore, the transmissions from the aircraft tended to cause interference with nearby system cell sites whose area of coverage adjoined the original cell site that was being used.

The need for this type of instant communication in other than ground vehicles did not seem sufficient at that time to warrant a great effort on the part of the telecommunications or airline industries. After a few explanatory articles in the travel and hobby magazines, the equipment was removed and the service discontinued. But now with the general acceptance of the cellular telephone system, with improvements in the technology, and with a traveling public educated to have a telephone always at hand, new, more efficient services are being offered. These efforts are covered in more detail in Chapter 4.

Paging systems began to develop in the early 1930s. In fact, some of the early wireless telephone communications systems could almost be called pagers by today’s standards, since only one-way communication was possible. The first paging units would emit an audible signal, a “beep,” and the recipient would know to go to a telephone and call the paging operator to receive the message. Pagers were used at first by public safety organizations such as police and fire departments; hospitals were probably the first civilian organizations to adopt the system, using it to alert doctors that they were needed. There was no longer any need for a loudspeaker intercom system. But you can still hear a variation of this intercom system today at places like airports: “Will Mr. Smith please return to the international airlines ticket desk?”

However, this one-way communication tended to be slow; the original equipment carried by the user was bulky, with a relatively short battery life, and it used up a lot of the scarce radio spectrum. Today there are four types of pagers on the market in the United States: tone-only, tone-voice, numeric, and alphanumeric. Tone-only pagers emit a beep or a ring alerting the user to call a prearranged telephone number for a message. Tone-voice pagers combine a vocal message with the tone. Numeric pagers display a telephone number on the screen for the user to call; this is the most popular type and accounts for the majority of the market. Alphanumeric pagers allow the user to receive, access, and manage information by means of a small display screen and a keyboard. There are some models that are silent and alert the user by a slight vibration. This version has proven to be useful for the hearing-impaired.

Pagers are now almost the size of a credit card. They have been incorporated into wristwatches, and the paging circuitry will soon be built into portable or laptop computers and into the new personal digital assistants (PDAs) that are becoming so popular. This means that the recipient of a page will be able to respond using the computer or PDA that is connected to the pager. This has become known as two-way paging, as compared to simple acknowledgment paging, where the recipient merely responds to the effect that the message was received. Computer on-line services such as America Online and CompuServe have begun work toward offering a paging capability. At the beginning of 1995 it was estimated that there were some 20 million pagers of various types in use in the United States and that the number of users was growing at a rate of fifteen to twenty percent a year.
