© Alex Wulff 2019
A. Wulff, Beginning Radio Communications, https://doi.org/10.1007/978-1-4842-5302-1_5

5. Communications and Modulation

Alex Wulff, Cambridge, MA, USA

By now, you’ve read the terms “AM” and “FM” numerous times throughout this text. AM and FM are two common examples of how information is encoded and transmitted using radio waves. As you’ll discover, most information cannot be transmitted directly via radio waves. One needs to change the information to make it more suitable for transmission over the air. In this chapter, we’ll dive further into common encoding techniques and some of the details behind how modulation and communications systems work.

“Figure 1”

The diagram in Figure 5-1 is so significant that many in the communications industry simply refer to it as Figure 1. Its creator, Claude E. Shannon, is regarded as the father of information theory and modern communications. Shannon provided a mathematical demonstration of how information can be modulated, sent over a channel, and recovered.
../images/483418_1_En_5_Chapter/483418_1_En_5_Fig1_HTML.png
Figure 5-1

A recreation of Figure 1 from Claude Shannon’s A Mathematical Theory of Communication

The figure summarizes how Shannon viewed a communications system and how one can view most modern communications systems today. Almost any communications medium, including the electromagnetic spectrum, can be represented as a channel. Channels have certain characteristics, such as their ability to propagate information and the noise they add to a message. To communicate over the channel, one utilizes a transmitter. The transmitter’s job is to take information and encode it in a certain way that makes it favorable to the channel or the goals of the person/system attempting to transmit the information. A receiver then receives the message over the channel, applies the inverse encoding scheme, and outputs the information stream to its destination.

This separation of the receiver from the destination (in addition to the transmitter from the source) is an extremely powerful concept in modern communications. It enables a single transmitter to accept information from a potentially infinite number of sources. Without this separation, your mobile phone would need a separate transmitter for text, images, audio, and so on. Figure 5-2 demonstrates different ways to represent this information: as a continuous or discrete signal. We’ll represent discrete data points in this text as circles with a stem connecting them to the X-axis.
../images/483418_1_En_5_Chapter/483418_1_En_5_Fig2_HTML.jpg
Figure 5-2

Examples of a discrete signal with a continuous envelope drawn around it (left) and a continuous signal (right)

Engineers represent information in the form of a signal. For our purposes, you can think of a signal as simply values indexed by something. In most cases, this index is time. These values can be continuous, meaning there’s no real gap between one value and the next. Despite this lack of a “gap” between values, one can still point at a continuous signal at a particular time and identify its value. For example, one could determine the instantaneous volume of a speaker playing some music.

The other class of signals is discrete, meaning that sequential values are separate and distinct. Discrete signals are only defined at particular instants in time (or whatever the index is), and their distinct nature makes them very easy to store and manipulate in modern computing devices. In essence, a discrete signal is a list of values. Continuous signals, by contrast, are impossible to store in a digital computer, which deals only in discrete elements of data. As such, a continuous signal must be sampled before it can be used in modern systems. This sampling is shown in Figure 5-3.
../images/483418_1_En_5_Chapter/483418_1_En_5_Fig3_HTML.jpg
Figure 5-3

A continuous signal (blue) with its sampled result (red). Notice how some high-frequency information is lost during the sampling process

To sample a continuous signal, one must decide the rate at which the signal is sampled. A higher sampling rate requires more space to store the resulting discrete signal, as there are more samples for a given span of time, but the resulting discrete signal more closely resembles the continuous one. A lower sampling rate requires less memory, but has the potential to "miss" information from the original continuous signal. See Figure 5-4 for examples of different sampling rates. This naturally leads to a very important question: is there a point at which faster sampling becomes redundant? That is, does there exist a sampling rate at which you can perfectly reconstruct the original signal from its samples? Much to the delight of engineers everywhere, the answer is yes.
../images/483418_1_En_5_Chapter/483418_1_En_5_Fig4_HTML.jpg
Figure 5-4

How many samples is too many samples?

This result is known as the Shannon-Nyquist Sampling Theorem. Shannon mathematically proved that there is no information loss when a continuous signal is sampled at a rate of at least twice the maximum frequency contained in the signal. A common task that the sampling theorem is well equipped to help with is the sampling and reconstruction of audio. How fast should one sample audio such that the sampled signal is indistinguishable from the original? Human ears can't really hear frequencies above 20 kHz, so a sampling rate of 40 kHz or greater is sufficient to capture audio. To be clear, information is still lost at a 40 kHz sampling rate: tones above 20 kHz cannot be accurately reproduced. In the case of audio, this is acceptable because humans cannot hear these tones anyway.
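A short sketch in Python makes the theorem concrete. The frequencies and the 40 kHz rate below are illustrative choices; the point is what happens to a tone above half the sampling rate, which becomes indistinguishable from a lower-frequency "alias":

```python
import numpy as np

fs = 40_000              # sampling rate (Hz): twice the 20 kHz audio limit
n = np.arange(40)        # sample indices
t = n / fs               # sample instants

# A 5 kHz tone sits below fs/2, so its samples represent it faithfully.
tone = np.sin(2 * np.pi * 5_000 * t)

# A 25 kHz tone sits above fs/2 (20 kHz). Sampled at 40 kHz, it produces
# exactly the same samples as an inverted 15 kHz tone: an "alias."
above = np.sin(2 * np.pi * 25_000 * t)
alias = -np.sin(2 * np.pi * 15_000 * t)
print(np.allclose(above, alias))  # → True
```

Once the 25 kHz tone is sampled, its samples carry no trace of the original frequency, which is why content above half the sampling rate must be filtered out before sampling rather than fixed afterward.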

Analog vs. Digital Information

Throughout this discussion of signals, I have avoided mentioning analog and digital representations of data. Clear parallels exist between continuous and analog signals, and between discrete and digital signals, so it's natural to reach for those terms while reading the preceding section. This is not to say, however, that continuous means analog and discrete means digital. Continuous and discrete are different classes of signals, whereas analog and digital are different ways to represent and encode data.

“Digital” simply means encoding information in a predefined number of symbols. The most common set of symbols is binary—all information, when stored using binary, is represented with ones and zeros. Analog, on the other hand, is a means of representing information in which the value of a signal is continuous in nature. Analog signals can still be discrete, but the value at each discrete moment in time has an infinite number of possibilities. Similarly, digital signals can be continuous or discrete, but the value of the signal at each moment in time only has a set number of possibilities.

Continuous Analog Signals

The very first information processing was done completely in the realm of continuous analog signals. Long before digital computers existed, engineers utilized analog circuits to process information. This included everything from complex calculations to musical effects for electric guitars. Analog computers were built in this era, but they were slow, large, and expensive due to the lack of generality of various parts of the computer. One analog computer often isn't well suited to a multitude of different tasks (unlike digital computers today). Practically every signal in the natural world is continuous and analog, such as audio, temperatures over time, heartbeats, and more. See Figure 5-5 for an example of a continuous analog signal.
../images/483418_1_En_5_Chapter/483418_1_En_5_Fig5_HTML.jpg
Figure 5-5

A continuous analog signal

Discrete Analog Signals

The most common use of discrete analog signals is in digital sampling. Before a continuous analog signal can be converted to binary for use on a computer, it must first be sampled, creating a discrete analog signal. The approximate value of these samples is then converted into binary, forming a discrete digital signal. Information must be lost in this analog-to-binary conversion, as the magnitude of the analog signal can take on an infinite number of values. See Figure 5-6 for an example of a discrete analog signal.
../images/483418_1_En_5_Chapter/483418_1_En_5_Fig6_HTML.jpg
Figure 5-6

A discrete analog signal. Notice how the discrete samples take on many different values

These analog samples are often sorted into "bins," with the number of bits for a particular sample determining the number of bins. With one bit you can store two possible values for a given sample, with two bits you can store four possible values, with three bits you can store eight possible values, and so on. This binning process and the resulting reconstruction are shown in Figure 5-7.
../images/483418_1_En_5_Chapter/483418_1_En_5_Fig7_HTML.jpg
Figure 5-7

Increasing levels of sampling resolution, or "bins." More bins yield more precise sampling of the original signal
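The binning process can be sketched in a few lines of Python. The signal, the bit depths, and the [-1, 1] value range here are illustrative assumptions, and `quantize` is a hypothetical helper rather than a standard library function:

```python
import numpy as np

def quantize(samples, bits):
    """Snap analog samples in [-1, 1] onto 2**bits evenly spaced levels."""
    levels = 2 ** bits
    idx = np.round((samples + 1) / 2 * (levels - 1))  # nearest bin index
    return idx / (levels - 1) * 2 - 1                 # back to [-1, 1]

x = np.sin(2 * np.pi * np.arange(8) / 8)  # eight analog samples of a sine
coarse = quantize(x, 1)   # 2 bins: every sample becomes -1 or +1
fine = quantize(x, 8)     # 256 bins: worst-case error shrinks to ~1/255
```

With one bit, the worst-case error is as large as the signal itself; each added bit roughly halves it, which is why higher bit depths reconstruct the original more faithfully.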

Continuous Digital Signals

A great example of a continuous digital signal is Morse code. Morse code was transmitted on telegraph lines by completing a circuit and allowing the flow of current across a long wire. Morse code is digital, as it encodes information using “on” or “off” states, but the actual signal transmitting the states is continuous. Any binary signal transmitted over radio waves is another example of a continuous digital signal. See Figure 5-8 for an example of a continuous digital signal.
../images/483418_1_En_5_Chapter/483418_1_En_5_Fig8_HTML.jpg
Figure 5-8

A continuous digital signal. This particular signal is binary, as it takes on only two values

Discrete Digital Signals

Discrete digital signals power the modern world. Information stored in a computer is discrete and digital; distinct cells in a computer’s memory have either a zero or a one stored in them, representing some kind of information. Computers then process this data and store the result as a discrete digital signal. See Figure 5-9 for an example of a discrete digital signal.
../images/483418_1_En_5_Chapter/483418_1_En_5_Fig9_HTML.jpg
Figure 5-9

A discrete digital signal that is also binary

Modulation

You now know the basics of how information is represented in many systems. But this is just half the story of a radio communications system: you now have your data, but how exactly should you send it using radio waves?

By now, we've discussed FM and AM numerous times without actually motivating why they are necessary in the first place. The answer is simply that audio signals, as well as most other signals of interest, do not have favorable characteristics for transmission over a radio channel. As mentioned before, the maximum frequency audible to human ears is around 20 kHz. If one were to try to feed this audio into an antenna directly, not much would happen.

A good explanation for this is that a resonant antenna in the range of 20 kHz would have to be massive: the wavelength at 20 kHz is 300,000,000 m/s ÷ 20,000 Hz = 15 km, so even a quarter-wave antenna would need to be nearly 4 km long to radiate power effectively. Additionally, transmitting audio at a different frequency allows multiple signals to be transmitted at the same time. This is easily seen with radio stations: they all transmit at different frequencies! If all audio signals were transmitted at their original frequencies, it wouldn't be possible to transmit more than one signal at a time. Lastly, lower-frequency signals must be transmitted at very high powers in order to propagate effectively.
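The arithmetic above is easy to reproduce. This quick sketch (using the exact speed of light) compares the antenna problem at audio frequencies with an FM broadcast carrier:

```python
C = 299_792_458  # speed of light in m/s

def wavelength_m(freq_hz):
    """Wavelength in meters for a given frequency in hertz."""
    return C / freq_hz

# At 20 kHz the wavelength is roughly 15 km; even a quarter-wave
# antenna would be almost 4 km long.
print(wavelength_m(20_000) / 4)   # → 3747.40... m

# At a typical FM broadcast frequency near 100 MHz, a quarter-wave
# antenna shrinks to well under a meter.
print(wavelength_m(100e6) / 4)    # → 0.7494... m
```

This is the core reason modulation exists: moving the information up to a carrier frequency shrinks the required antenna to a practical size.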

The solution to these problems is to modulate the original signal on top of some carrier frequency. This aptly named carrier frequency carries the information in question at a much higher frequency to its destination. Modulation, as the word is used in the context of communications, generally means encoding information on top of a carrier wave of a different frequency. The next problem that arises is how one can effectively modulate a signal on top of a carrier wave. This can be done by varying one of three properties of a radio wave: amplitude (the strength of the signal), frequency (the rate at which the signal changes), or phase (the relative position of a wave in reference to a standard position). A radio wave can be completely characterized using these three properties.

Amplitude Modulation (AM)

Amplitude modulation is perhaps the easiest modulation technique to understand. For analog signals, the amplitude of the carrier wave, or its envelope, carries the information of the original signal. AM circuits are extremely easy to implement, so amplitude modulation was used to transmit the first messages over radio waves. Figure 5-10 shows how an amplitude-modulated signal appears.
../images/483418_1_En_5_Chapter/483418_1_En_5_Fig10_HTML.jpg
Figure 5-10

Amplitude modulation: a signal (top) and a carrier (middle) are combined to create an amplitude-modulated signal (bottom)

The digital version of amplitude modulation is called amplitude-shift keying (ASK). This generally involves switching a carrier wave between differing, discrete amplitude states. Amplitude-modulated signals are very susceptible to noise. The presence of noise generally affects the amplitude of a radio wave at the receiver, and since the signal is encoded in the carrier’s amplitude, this can obscure the original message.
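A rough numerical sketch of analog AM follows. All frequencies here are illustrative choices, and the averaging-window envelope detector is a deliberately crude stand-in for a real receiver:

```python
import numpy as np

fs = 100_000                      # simulation sampling rate (Hz)
t = np.arange(0, 0.01, 1 / fs)    # 10 ms of time
carrier = np.sin(2 * np.pi * 10_000 * t)          # 10 kHz carrier
message = 0.5 * np.sin(2 * np.pi * 500 * t)       # 500 Hz "audio"

# AM: the carrier's envelope follows the message.
am = (1 + message) * carrier

# A crude envelope detector: rectify, then average over one carrier cycle.
window = fs // 10_000
envelope = np.convolve(np.abs(am), np.ones(window) / window, mode="same")
```

The recovered envelope tracks 1 + message up to a constant scale factor, which is exactly the property an AM receiver exploits. It also shows why noise is so damaging here: anything that perturbs the received amplitude lands directly in the recovered audio.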

Frequency Modulation (FM)

Frequency modulation of analog signals involves changing the frequency of the carrier wave based on the amplitude of the signal. FM is much less prone to noise than AM because random noise does not meaningfully affect the frequency of the wave. Circuits that implement FM are more difficult to produce and understand than those for AM; as such, FM did not come into use until decades after AM. See Figure 5-11 for an example of frequency modulation.
../images/483418_1_En_5_Chapter/483418_1_En_5_Fig11_HTML.jpg
Figure 5-11

A frequency-modulated signal: the original signal is on the top, the carrier is in the middle, and the result is on the bottom. As the signal's amplitude increases, so does the frequency of the carrier

The digital counterpart to analog frequency modulation is frequency-shift keying, in which the carrier is modulated between a set number of frequencies. Each frequency can encode a different piece of information.
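Frequency-shift keying is simple enough to sketch end to end. Every number below (tone frequencies, sampling rate, bit duration) is an illustrative assumption, and the zero-crossing detector is a deliberately naive decoder:

```python
import numpy as np

def fsk_modulate(bits, f0=1_000, f1=2_000, fs=20_000, bit_time=0.01):
    """Emit one tone per bit: f0 for a 0, f1 for a 1.

    Phase is carried across bit boundaries so the wave has no jumps.
    """
    samples_per_bit = int(fs * bit_time)
    n = np.arange(samples_per_bit)
    phase, chunks = 0.0, []
    for b in bits:
        f = f1 if b else f0
        chunks.append(np.sin(phase + 2 * np.pi * f * n / fs))
        phase += 2 * np.pi * f * samples_per_bit / fs
    return np.concatenate(chunks)

def fsk_demodulate(wave, f0=1_000, f1=2_000, fs=20_000, bit_time=0.01):
    """Naive decoder: estimate each bit's frequency from zero crossings."""
    samples_per_bit = int(fs * bit_time)
    threshold = (f0 + f1) / 2
    bits = []
    for i in range(0, len(wave), samples_per_bit):
        signs = np.sign(wave[i:i + samples_per_bit])
        signs = signs[signs != 0]               # drop exact-zero samples
        crossings = np.count_nonzero(np.diff(signs))
        freq = crossings / 2 / bit_time         # ~2 crossings per cycle
        bits.append(1 if freq > threshold else 0)
    return bits

print(fsk_demodulate(fsk_modulate([1, 0, 1, 1])))  # → [1, 0, 1, 1]
```

Counting zero crossings works here because the two tones are well separated; real FSK receivers use filters or correlators instead, but the principle of mapping each frequency to a different piece of information is the same.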

Phase Modulation

The phase of a wave represents its offset to a reference phase. This concept is best illustrated with an image, such as the one in Figure 5-12 showing different phase offsets for a sine wave.
../images/483418_1_En_5_Chapter/483418_1_En_5_Fig12_HTML.jpg
Figure 5-12

A diagram showing the phase difference between two waves. The red signal is said to be “leading” the blue signal

Phase is an extremely important concept in all of electrical engineering. Although the phase of a wave is easy to understand, it can have numerous effects on communications systems. Synchronizing the phase of the modulator and demodulator is a difficult task, but doing so can make modulation and demodulation techniques much simpler.

Phase is also another tool that communications engineers use to encode information. By modulating the phase of a carrier, one can encode information in a similar manner to modulation of frequency and amplitude. Phase modulation is almost always used to encode digital information rather than analog; this is done in the form of phase-shift keying (PSK), which is shown in Figure 5-13.
../images/483418_1_En_5_Chapter/483418_1_En_5_Fig13_HTML.jpg
Figure 5-13

Binary phase modulation: the phase of the carrier is modulated 180 degrees to indicate a binary 1 or a binary 0. Changes in color of the signal indicate a change in the bit
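Binary phase-shift keying can be sketched the same way. The parameters below are illustrative, and the demodulator assumes the receiver already has a phase-synchronized copy of the carrier (the synchronization problem mentioned above):

```python
import numpy as np

fs, f_c, symbol_time = 50_000, 5_000, 0.002   # illustrative parameters

def bpsk_modulate(bits):
    """Flip the carrier's phase by 180 degrees to send a 0 instead of a 1."""
    n = np.arange(int(fs * symbol_time))
    symbol = np.cos(2 * np.pi * f_c * n / fs)
    return np.concatenate([symbol if b else -symbol for b in bits])

def bpsk_demodulate(wave):
    """Correlate each symbol against a phase-synchronized reference carrier."""
    n = np.arange(int(fs * symbol_time))
    ref = np.cos(2 * np.pi * f_c * n / fs)
    return [1 if np.dot(wave[i:i + len(ref)], ref) > 0 else 0
            for i in range(0, len(wave), len(ref))]

print(bpsk_demodulate(bpsk_modulate([1, 0, 0, 1])))  # → [1, 0, 0, 1]
```

The correlation is strongly positive when the received phase matches the reference and strongly negative when it's flipped 180 degrees, so the sign of the dot product alone recovers each bit.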

Frequency Division

A fascinating—and underappreciated—result of utilizing waves of different frequencies to encode information is their ability to be transmitted at the same time over the same channel and then subsequently recovered. This important principle is what enables all modern forms of radio communication. Transmitting information on different carrier frequencies at the same time is known as frequency multiplexing. The alternative to frequency multiplexing is time multiplexing, or transmitting information on the same carrier frequency at different times. Time multiplexing is shown in Figure 5-14.
../images/483418_1_En_5_Chapter/483418_1_En_5_Fig14_HTML.jpg
Figure 5-14

An example of time multiplexing, where two separate transmitters each get allotted a certain amount of time to transmit before switching. Time is represented on the horizontal axis in this figure

It’s very easy to see how frequency multiplexing can be more efficient. Imagine 1000 people in a room attempting to carry out a conversation with 1000 other people. It would be next to impossible to hear your conversation over the conversations of others, so everyone would be forced to take turns speaking. Now imagine that each pair of people is in a separate room—no turns have to be taken, and everyone can speak at once. In this scenario, the separate rooms represent separate carrier frequencies.

2G cellular networks used a time multiplexing scheme for data and voice services. This vastly limited data rates but was easier to implement with the hardware of the time. This technology, known as GSM, is still in use throughout many portions of the world. 4G LTE data services use a frequency multiplexing scheme with many channels allocated for each phone that vastly increases data transfer speeds.

The word channel in the topic of frequency division refers to a specific slice of frequencies allotted to a transmitter. This can be confusing, as the word channel is also used to describe a general communications medium (which is how I have mostly used it to this point). The meaning of channel is generally clear from context. As you might expect, channel width varies with application. AM radio was originally designed to transmit voice signals across large swaths of area; as such, AM channels are only 10 kHz wide. This greatly limits the quality of the audio, making AM a poor option for transmitting music. FM channels are 200 kHz wide, which enables FM radio stations to transmit high-quality stereo audio. In fact, analog FM radio will oftentimes sound better than many digital radio streams from satellite radio or from the Internet.

Quadrature Encoding

It’s even possible to transmit two waves of the same frequency at the same time, provided that certain conditions are met. This is called quadrature encoding. This allows the data rate of a particular channel to increase even further—in fact, quadrature encoding is utilized in many of the newest high-data-transfer-rate communications schemes.

The two waves can be summed, transmitted at the same time, and then recovered as long as they're 90 degrees out of phase. Sine and cosine waves are separated by 90 degrees, so it's common to refer to the two carriers as the sine and cosine portions of the data transmission scheme. One carrier is referred to as the "in-phase" (I) portion, and the carrier shifted 90 degrees from it as the "quadrature" (Q) portion. This is where quadrature encoding/modulation gets its name.
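A small numerical sketch (illustrative frequencies and values) shows why the two components don't interfere: multiplying the combined signal by each carrier and averaging over whole cycles recovers each value independently, because sine and cosine of the same frequency average to zero against each other.

```python
import numpy as np

fs, f_c = 100_000, 5_000          # illustrative sampling rate and carrier
t = np.arange(1_000) / fs         # 10 ms: a whole number of carrier cycles

i_data, q_data = 0.7, -0.3        # two independent values sent at once
combined = (i_data * np.cos(2 * np.pi * f_c * t)
            + q_data * np.sin(2 * np.pi * f_c * t))

# Multiply by each carrier and average: the cross terms vanish over whole
# cycles, so each value comes back untouched by the other.
i_rec = 2 * np.mean(combined * np.cos(2 * np.pi * f_c * t))
q_rec = 2 * np.mean(combined * np.sin(2 * np.pi * f_c * t))
print(round(i_rec, 6), round(q_rec, 6))  # → 0.7 -0.3
```

This orthogonality is the entire trick: two streams share one frequency, yet neither leaks into the other at the receiver.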

The most commonly implemented form of quadrature encoding is Quadrature Amplitude Modulation, or QAM. QAM is so efficient at transferring data that it is utilized in 4G LTE and 5G technologies to transfer information. In QAM, the intensity of the in-phase and quadrature carriers is independently modulated to encode a series of bits. At the receiver, the two carriers are separated, and their levels are decoded. Different combinations of levels of each carrier can represent a string of bits. If more levels are used, more bits can be encoded in a single state of the sine and cosine waves. Figure 5-15 is your first glimpse into QAM: the constellation diagram.
../images/483418_1_En_5_Chapter/483418_1_En_5_Fig15_HTML.jpg
Figure 5-15

A constellation diagram depicting different levels of the I and Q carriers mapped to different binary values

QAM is easy to understand through the use of a constellation diagram. The constellation diagram shows the relative level of the I and Q carriers and how they map to different sequences of bits. The preceding constellation diagram shows a 4-QAM communications setup. It is given this name because there are four possible symbols; to achieve this, two levels of voltage must be used for each carrier. This is also made clear in the constellation plot, as it shows possible levels of +/- V for the received intensity of the I and Q portions (where V is some arbitrary voltage level). The symbol rate is the rate at which these changes in voltages of the carriers occur. To transmit more information using QAM, one can increase the number of voltage levels used to convey information. This allows for more possible combinations of the levels of the two carriers, consequently transmitting more bits per symbol. A higher-order QAM scheme is shown in Figure 5-16.
../images/483418_1_En_5_Chapter/483418_1_En_5_Fig16_HTML.jpg
Figure 5-16

A 16-QAM constellation diagram. Each carrier now has four possible different levels as opposed to the two that 4-QAM requires
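The 4-QAM mapping can be sketched as a lookup table. The particular bit-pair-to-quadrant assignment below is an illustrative choice (real standards fix a specific, usually Gray-coded, mapping):

```python
# Gray-coded mapping from bit pairs to (I, Q) voltage levels, with +1/-1
# standing in for +V/-V.
SYMBOLS = {
    (0, 0): (-1, -1), (0, 1): (-1, +1),
    (1, 1): (+1, +1), (1, 0): (+1, -1),
}
DECODE = {v: k for k, v in SYMBOLS.items()}

def qam4_encode(bits):
    """Group bits into pairs and map each pair to a constellation point."""
    return [SYMBOLS[pair] for pair in zip(bits[0::2], bits[1::2])]

def qam4_decode(points):
    """Nearest-neighbor decision: the sign of each axis picks the quadrant."""
    out = []
    for i, q in points:
        out.extend(DECODE[(1 if i > 0 else -1, 1 if q > 0 else -1)])
    return out

print(qam4_decode(qam4_encode([1, 0, 0, 1, 1, 1])))  # → [1, 0, 0, 1, 1, 1]
```

Because the decoder only checks the sign of each axis, it tolerates noisy received points as long as they stay in the right quadrant; 16-QAM would need four levels per axis and correspondingly finer decisions.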

Some 5G standards even call for the use of 256-QAM, which can allow data transfer rates in excess of 1 gigabit per second with a sufficiently fast symbol rate. One might ask, "Then why don't communications engineers keep extending QAM even further?" The answer to this question is noise. There exists a finite maximum signal intensity, so to keep adding intensity levels to QAM, engineers must subdivide the maximum range into more and more discrete levels. Noise can then make one level appear like another, introducing errors into the communication scheme. Modern implementations of quadrature encoding apply sophisticated means to correct for errors, but this only works to a certain extent. This idea is shown in Figure 5-17. Red dots in the figure represent received QAM signals; as noise increases, it becomes harder to classify them. The problem is compounded by adding more levels of QAM: if instead of one symbol per quadrant there were two, it would be even harder to identify where a particular signal is supposed to fall.
../images/483418_1_En_5_Chapter/483418_1_En_5_Fig17_HTML.jpg
Figure 5-17

A 4-QAM constellation diagram depicting increasing levels of noise in the channel
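The effect of channel noise on a constellation can be simulated directly. This sketch (Gaussian noise is an assumption, with ideal 4-QAM points at plus or minus 1) counts how often a noisy received point falls into the wrong quadrant:

```python
import random

random.seed(1)  # fixed seed so the sketch is repeatable

def transmit(symbol, noise_sigma):
    """Add Gaussian channel noise to an ideal 4-QAM point (i, q)."""
    i, q = symbol
    return (i + random.gauss(0, noise_sigma),
            q + random.gauss(0, noise_sigma))

def decide(point):
    """Nearest-neighbor decision: for 4-QAM, the sign of each axis."""
    return (1 if point[0] > 0 else -1, 1 if point[1] > 0 else -1)

def symbol_error_rate(noise_sigma, trials=10_000):
    ideal = (1, 1)
    errors = sum(decide(transmit(ideal, noise_sigma)) != ideal
                 for _ in range(trials))
    return errors / trials

print(symbol_error_rate(0.2))  # low noise: errors are vanishingly rare
print(symbol_error_rate(1.0))  # heavy noise: many points cross quadrants
```

Adding more constellation points packs the ideal symbols closer together, so the same noise causes many more decision errors; this is the trade-off that caps how far QAM can be extended.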

Summary

Hopefully this chapter shed a little more light on how information is transmitted using radio waves and general communications channels. Information can be continuous or discrete, and represented in a digital or analog fashion. Modulation is then used to encode this information on a carrier wave. A carrier wave makes such data more favorable for transmission across a channel. One can modulate three things about an electromagnetic wave: its amplitude, frequency, and phase. With these three components, you can completely characterize a radio wave, and each can be used to modulate data.

You actually now have a somewhat complete, albeit high-level, picture of what a radio communications setup looks like. Understanding the interplay between the realm of information/data and the physical devices and principles that transmit this information is very important; if there’s something you don’t understand, be sure to go back and read it again and/or do more of your own research! In the next chapter, we’ll utilize a device called a microcontroller to send and receive packets of information using radio devices. This will allow you to put some of the things learned in this chapter into practice.
