CHAPTER 3

Bandwidth, Encoding, and Transport

In this chapter, you will learn about

• Bandwidth and what it means in the world of networked AV systems

• The difference between baseband and broadband

• Encoding analog signals to digital and digital signals for transmission over a network

• Different methods of data transmission


You now know that an IP or Ethernet network is designed to allow many devices to communicate simultaneously. You also know how the OSI model determines the way data moves over a network, and specifically how the Physical and Data Link layers are important elements of networked AV systems.

This chapter gets further into the nitty-gritty, explaining how physical phenomena, such as light and sound, are captured and sent as digital signals. You will explore how an analog wave is captured and transmitted as a series of 1s and 0s; how binary code is carried across physical media; and how all digital signals—including AV signals—travel across a packet-switched network.

Why? Because it’s important to know what happens to AV data as it crosses a network, not just for the understanding itself but also for the vocabulary it imparts. When collaborating with IT professionals on systems design and implementation, being able to “talk the talk” in terms of data transmission heads off any misunderstandings and leads to mutual respect.

Bandwidth

When describing data transmission on a network, bandwidth is one of the first characteristics you should think about. In fact, as an AV professional, the network’s bandwidth is one of the attributes you care most about. Bandwidth is the available or consumed data-carrying capacity of a communication path, expressed in bits per second. It is also called throughput or bit rate. It has an enormous impact on the network’s ability to transport AV signals. If you don’t have sufficient bandwidth, your signal quality will plummet—or your signal simply won’t arrive at its destination at all.

AV professionals are accustomed to thinking of bandwidth as analog signal bandwidth, measured in hertz. However, whenever the term “bandwidth” is used in the context of networking, it refers to data throughput.

 


NOTE For the purposes of networked AV systems, bandwidth refers to the capacity of network connections (e.g., the switch has a bandwidth of 100 Gbps) and/or the throughput requirements of data or devices (e.g., the videoconference requires 4 Mbps of bandwidth per endpoint).

When you’re thinking about using an IT network to transport AV data, you’re not really concerned about bandwidth capacity. Your real concern is bandwidth availability. You need to make sure the network has enough unused bandwidth to handle your AV signals.

How much bandwidth is enough? That’s not a simple question to answer. Internet traffic is “bursty.” Even if, on average, you’re using only half your rated capacity, the pipe can still get very full at peak traffic times. There has to be enough bandwidth to absorb peak traffic. In the past, InfoComm taught students to specify no more than 70 percent of a network’s rated bandwidth for use. Studies and industry experts say that’s generous; in reality, you should specify no more than 50 percent of a network’s rated capacity for average use.

Of course, no two networks are the same. Your AV devices may be on a separate LAN, in which case you don’t have to worry about bursty Internet traffic crowding the network pipe. This allows you to comfortably plan to use a larger percentage of the available bandwidth.

If your network has Quality of Service (QoS) implemented, you may also get more leeway. QoS helps the network intelligently decide which data to prioritize and which to discard if bandwidth runs out. Conferencing data in a real-time interactive DiffServ class should be prioritized above Internet traffic in a high-throughput data DiffServ class. Work with a network engineer or IT manager to find out how much bandwidth you can actually plan to use for AV. For more detail on QoS and DiffServ, see Chapter 5.

Baseband

Data transmission can also be described in terms of whether the frequency range of the connection carries one signal or many signals—baseband or broadband.

In the simplest terms, baseband is a frequency range before it is cut into smaller chunks (modulated) and after it is glued together (demodulated). Baseband uses an entire frequency range from near zero hertz to a high-end cut-off frequency. In other words, baseband is an unmodulated, raw electromagnetic wave.

Ethernet is transmitted as a baseband signal. Ethernet baseband uses pulses of direct current. Direct current requires exclusive use of the wire and therefore results in only one signal (one channel) being carried on the wire. Baseband has three states: one, zero, and idle. This makes it ideal for transmitting 1s and 0s across copper and fiber.

Baseband communication is limited by attenuation (signal loss). You can overcome this distance restriction by using a repeater to refresh the signal in transit.

Bandwidth Bottlenecks

Different parts of a network will have different bandwidth available. When you’re trying to determine how much bandwidth is available for your AV system, remember that bandwidth availability should be measured in terms of the smallest pipe that your data must travel through. The average doesn’t matter. If most of the network has a capacity of 1 Gbps but one network segment has a capacity of 100 Mbps, then the bandwidth available to your system is 100 Mbps. In order to accurately assess available bandwidth, you have to identify bandwidth bottlenecks.
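The bottleneck rule translates directly into code. This minimal sketch, using hypothetical segment capacities, finds the smallest pipe along a data path:

```python
# Bandwidth available along a path equals its slowest segment's capacity.
# Segment capacities in Mbps (hypothetical example values).
path_segments = {
    "core switch": 10_000,
    "access switch uplink": 1_000,
    "legacy edge switch": 100,
}

# The bottleneck is the minimum capacity, not the average.
bottleneck = min(path_segments.values())
print(f"Available path bandwidth: {bottleneck} Mbps")  # prints 100 Mbps
```

No matter how fast the rest of the network is, the 100 Mbps segment caps what your AV stream can use.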


If your data must leave the LAN and travel across a WAN, there will always be a bandwidth bottleneck. For LANs, the rated capacity can exceed 100 Gbps—it’s virtually unlimited. You can expect a WAN to have one-tenth of that capacity for downloads, and less for uploads or symmetrical transfers such as conferencing. As much as possible, when using an IT network to transport AV signals, you should aim to keep high-bandwidth traffic within the LAN and minimize the number of individual high-bandwidth streams that must be sent over the WAN.

Broadband

Broadband is baseband cut into pieces, as shown in Figure 3-1. In broadband, the frequency is split into discrete subfrequency ranges so that each subfrequency range can carry a different signal. Then the frequencies are rejoined for transmission. By using subfrequencies, more information can be passed along because each subfrequency carries its own data. Broadband requires the use of a modulator and demodulator (modem) or similar technology to split the frequency.


Figure 3-1 Broadband can carry different signals across the same wire.

Broadband has few limitations but uses more expensive equipment. Long-distance WAN communications, such as DSL and cable, are examples of broadband communication.

Encoding

A packet-switched network can accommodate different types of data traffic—control signals, text, live video or audio, interactive games, and so on. However, all signals sent across a network have one thing in common: they are all digital. That means all the light and sound that make up an AV experience must be reduced to the binary language of 1s and 0s that computers speak.

Of course, you can’t send actual digits over a physical medium such as copper wire or fiber-optic cable. In order to be transmitted from one networked device to another, digital signals must be encoded as electromagnetic signals. Those signals take different forms, depending on the physical medium. For example:

• Copper wire carries electrical voltage.

• Fiber-optic cable carries light pulses.

• Wi-Fi carries radio frequency signals.

What we have here, when it comes to AV, is a two-step process. In the analog-to-digital encoding process, an analog signal (sound, video) is translated into 1s and 0s so that computers can interpret it. In the data-transmission encoding process, 1s and 0s are translated into waves that can travel through physical matter.

Analog-to-Digital Encoding

When you transport AV signals across a network, you’re trying to reproduce a physical, analog reality using digital code. Keep in mind, a lot of the traffic you send across a network—computer-generated images, text, and other data—starts out as digital information. It doesn’t need to be converted before you can send it. But AV signals start life as physical phenomena, waves of light or sound. As AV professionals know, these physical waves are captured by transducers, such as cameras or microphones, and translated into electrical waves. It’s like representing an apple (a sound wave) with a detailed scale model of an apple (an electrical wave). As long as the model doesn’t get damaged in transit, you’ll be able to reconstruct an accurate representation of the original at the other end.

Before a wave can be sent across a network—or any other digital medium—it must be encoded as binary code. That’s like representing an apple (the sound wave) with a detailed textual description of that apple (binary code). The representation on the other end of the network will be as accurate as the level of detail in the description.

In order to get a highly accurate representation, you have to include a lot of detail in the description. In a way, encoding a digital signal is like creating a “connect-the-dots” puzzle. If you have too few dots (shown next, top), you can’t tell what picture the puzzle is supposed to represent. Not including enough information in the encoded signal causes playback quality to suffer. If you have too many dots (pictured, bottom), you don’t even need to draw lines to re-create the picture. Including too much information in the encoded signal makes for huge files that consume a lot of bandwidth. If you encode at the right level of detail, you can get acceptable quality without clogging the network pipes.

[Figure: connect-the-dots puzzles with too few dots (top) and too many dots (bottom)]

The level of detail in an encoded digital signal depends on two factors: the sampling rate and the bit depth. The sampling rate is how many times per second you sample the analog signal to make the digital copy. Think of a sample as a slice of the wave, which can be thick or thin. If the slices are too thick (if you’re not sampling frequently enough), you’ll miss a lot of detail. The thinner your slices are, the more detail you can see. On the other hand, you’ll have more data to keep track of, so your file will be bigger.

So how much sampling is enough?

According to the Nyquist-Shannon sampling theorem, an analog signal can be reconstructed if it is sampled at more than twice its highest frequency component. Because the range of human hearing extends to 20 kHz, the sampling rate for digital audio should be greater than 40 kHz. For example, if you sample a 6 kHz tone at 10 kHz, the sampling process creates noise at 4 kHz and 16 kHz (10 minus 6 and 10 plus 6). Both of those frequencies are within the range of human hearing, so you would hear that noise when the digital file was played back.

However, if you sample a 6 kHz tone at 48 kHz, the noise occurs at 42 kHz and 54 kHz—both well outside the range of human hearing. But is twice the frequency enough? Not always. In many cases, sampling at four times the frequency is necessary to capture all the nuances of amplitude and phase (see Figure 3-2).
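The sum-and-difference products described above amount to simple arithmetic. This sketch checks the chapter’s two examples; it models only the arithmetic, not the actual signal processing:

```python
def alias_products(tone_hz, sample_rate_hz):
    """Sum and difference products created when sampling a pure tone."""
    return sample_rate_hz - tone_hz, sample_rate_hz + tone_hz

AUDIBLE_MAX_HZ = 20_000  # upper limit of human hearing

for fs in (10_000, 48_000):
    low, high = alias_products(6_000, fs)
    audible = [f for f in (low, high) if f <= AUDIBLE_MAX_HZ]
    print(f"Sampling at {fs} Hz: products at {low} and {high} Hz; audible: {audible}")
```

At a 10 kHz sampling rate, both products (4 kHz and 16 kHz) land in the audible range; at 48 kHz, neither does.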


Figure 3-2 How sampling at a higher rate increases quality.

Within each sample, you need to be able to say where the signal is—basically, how high is the voltage? Bit depth is the number of states you have in which to describe each sample. Bit depth is also called the quantization rate, as in, “How can you quantify the state of your signal?”

Bit depth is like a ruler (see Figure 3-3). If a 10-centimeter ruler has only centimeter marks, you have only 10 states in which to describe the height of an object. If the object’s height actually falls somewhere between those marks, your measurement won’t be very accurate. However, if the same ruler is marked in millimeters, your measurement will be more accurate. Again, though, more bit depth means bigger files and more bandwidth. At some point, you end up gathering more data than you actually need for a quality representation.


Figure 3-3 Bit depth is like a ruler: the more granular, the more accurate the result.

It makes sense that more samples result in a bigger file, but why is this the case with greater bit depth? After all, you’re still just talking about one state per sample. Each bit is a single digit of binary code. So if your bit depth is one bit, you have two states (0 and 1) in which to describe your signal. If your bit depth is two, you have four states (00, 01, 10, or 11), each made up of two digits. If your bit depth is three, there are eight possible states (000, 001, 010, 011, 100, 101, 110, 111). Now it takes three bits to express each state. Each time you add a bit to the bit depth, the number of possible states doubles. At the same time, though, the number of digits it takes to express each state goes up by one, making the data stream even bigger.
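The doubling is easy to verify in a few lines:

```python
# Number of quantization states available at each bit depth.
for bits in range(1, 9):
    print(f"{bits}-bit samples: {2 ** bits} states")

# Familiar AV examples: 16-bit audio and 8-bit-per-channel video.
print(2 ** 16)  # 65536 states
print(2 ** 8)   # 256 states
```

Going from 8-bit to 16-bit samples doesn’t just double the number of states; it multiplies them 256-fold, while each sample itself takes twice as many digits to express.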

How big? That depends on what you’re streaming. For audio, the sampling rate in hertz times the bit depth times the total number of channels to be encoded equals the bandwidth required in bits per second (bps). That is, sampling rate × bit depth × channels = bandwidth. So if you’re encoding a stereo audio signal at a sampling rate of 48 kHz and a bit depth of 16, you’re going to end up with a stream that requires 1,536,000 bps (or about 1.5 Mbps) of bandwidth (48,000 × 16 × 2 = 1,536,000). That’s big, but video is a lot bigger.
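The audio formula translates directly into code. A minimal sketch using the chapter’s 48 kHz, 16-bit stereo example:

```python
def audio_bandwidth_bps(sample_rate_hz, bit_depth, channels):
    """Uncompressed audio bandwidth: sampling rate x bit depth x channels."""
    return sample_rate_hz * bit_depth * channels

stream = audio_bandwidth_bps(48_000, 16, 2)
print(f"{stream:,} bps (~{stream / 1e6:.1f} Mbps)")  # 1,536,000 bps (~1.5 Mbps)
```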

A video’s sampling rate is the number of frames per second. For video, though, you’re not just worried about one wave. You need to know the state of every single pixel in the image. So for encoded video signals, the number of horizontal pixels times the number of vertical pixels times the bit depth times frames per second (fps) equals the bandwidth in bps. That means that if you’ve got a 720-by-480-pixel video image, encoded at 8 bits and 30 fps, you end up with an 82.9 Mbps signal (720 × 480 × 8 × 30 = 82,944,000). And that’s only for black-and-white video. Color images have a red, green, and blue channel. If you sample all three channels equally (known as 4:4:4 sampling), the bandwidth required for color video is three times the bandwidth required for black and white. So a color image with the same resolution as the black-and-white example would require 248.9 Mbps of bandwidth (82,944,000 × 3 = 248,832,000).

Computer-generated images also have another layer—the transparency layer or alpha channel. That makes them four times as big as black-and-white images (82,944,000 × 4 = 331,776,000). Do you really need that much detail to send a quality digital video stream? No. Our eyes don’t see color as well as they see black and white. Broadcast television samples color at only half the rate of luminance. This is called 4:2:2 sampling. There’s no noticeable drop in quality, but the bandwidth required is 82.9 Mbps less than what’s required for 4:4:4 (82,944,000 × 2 = 165,888,000). Some digital video encoding formats use 4:1:1 sampling, which samples color information at one-quarter the rate of luminance. This method saves 41,472,000 bits per second over 4:2:2, and the image quality is still very good. The resulting stream is half the size of a 4:4:4 stream (82,944,000 × 1.5 = 124,416,000).
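The video calculations above can be collected into a single function, with a channel factor of 1 for monochrome, 3 for 4:4:4 color, 2 for 4:2:2, and 1.5 for 4:1:1. This sketch reproduces the chapter’s 720-by-480, 8-bit, 30 fps figures:

```python
def video_bandwidth_bps(h_px, v_px, bit_depth, fps, channel_factor=1):
    """Uncompressed video bandwidth in bits per second.

    channel_factor: 1 = monochrome, 3 = 4:4:4 color,
    2 = 4:2:2, 1.5 = 4:1:1.
    """
    return int(h_px * v_px * bit_depth * fps * channel_factor)

base = (720, 480, 8, 30)
print(video_bandwidth_bps(*base))       # 82,944,000  (monochrome)
print(video_bandwidth_bps(*base, 3))    # 248,832,000 (4:4:4)
print(video_bandwidth_bps(*base, 2))    # 165,888,000 (4:2:2)
print(video_bandwidth_bps(*base, 1.5))  # 124,416,000 (4:1:1)
```

The 4:2:2 and 4:1:1 streams are the same resolution and frame rate as 4:4:4; the savings come entirely from sampling color less often.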

Still, 124.4 Mbps is a big stream—bigger than you can send over most wide area networks. In order to shrink video down to a size that can be sent over an IP network, the signal must be compressed. There are many methods of compressing and decompressing digital signals, a process that basically discards unneeded or frequently repeated data to make streams smaller. Some result in higher-quality AV than others. You will learn more about compression in Chapter 10.

At the end of the day, however, AV professionals care about signal quality. Preserving the quality of an analog signal as it’s encoded to digital takes a lot of data, and using network resources responsibly often means striking a balance between quality and bandwidth.

Encoding Digital Signals for Transmission

Now you have a digital representation of your analog AV signal. You want to get it from point A on the network to point B. But you can’t transmit 1s and 0s directly across a physical medium such as copper, fiber, or RF. A digital signal has to be encoded into a signal that matches the capability of the transmission media.

Digital signals have only two states: 1 or 0. In other words, on or off. Data communication equipment, such as a modem, transforms those bits into a two-stage digital signal, as depicted in Figure 3-4. The two states can be represented in a number of different ways: the absence or presence of current; positive or negative voltage with respect to ground; the voltage difference between two wires; or the absence or presence of light.


Figure 3-4 Networking equipment transforms binary code into a two-stage digital signal.

The physical representation of the signal aside, there are various ways to encode data, which can be represented by very different digital waveforms. The form the data takes depends on how far and how fast it needs to travel and the encoding method used. In general, encoding for transmission falls into one of two categories: two-level encoding (on/positive or off/negative) and three-level encoding (on/positive, idle/null, or off/negative). Let’s look at some examples. Figure 3-5 illustrates the main types of encoding, along with the transmission methods each supports.


Figure 3-5 Various ways of encoding data for digital transmission.

NRZ encoding Non-return-to-zero (NRZ) is a two-level encoding scheme that transforms 0s into a negative and 1s into a positive. This is the simplest form of encoding. The encoded NRZ signal looks like the graph of the original binary signal. The signal never reaches a null state, so the receiver can always easily tell if there’s a signal present or not. NRZ is mostly used for magnetic recording, but it can be used for signal transmission as well.

NRZI encoding Non-return-to-zero inverted (NRZI) sounds similar to NRZ, but it works differently. Fast Ethernet (100 Mbps) is encoded with NRZI, which requires a clocking mechanism for synchronization. Each bit is encoded at the half-wave of the clocking signal. In NRZI encoding, if the bit is equal to 1, the signal changes states, either from negative to positive or positive to negative. If the bit is equal to 0, the signal remains in the same state. NRZI can represent the same data as NRZ, with fewer transitions. That means it can be transmitted faster.

Phase encoding Ethernet (10 Mbps) is encoded using Manchester coding, commonly known as phase encoding. Like NRZI, phase encoding uses a clocking signal and encodes on the half-wave. Phase encoding originally expressed a 0 bit as a low-to-high transition and a 1 bit as a high-to-low transition. The Ethernet standard reversed this; under IEEE 802.3, 1 is low-to-high and 0 is high-to-low. Note that there are a lot of transitions in this encoding method. Phase encoding performs a logical operation called exclusive OR, or XOR: it treats two of the same bits in a row as a 0 and two different bits in a row as a 1.

Delay encoding Delay encoding is similar to phase encoding, but transitions occur only when the bit value is 1. Fewer transitions mean higher possible data rates. Delay encoding is mostly used to encode data for RF transmission.

Bipolar encoding Bipolar encoding is a three-level encoding method. It is used mostly for long-distance WAN communication. In bipolar encoding, 1 is indicated by the presence of a signal, either positive or negative, while 0 is indicated by the absence of a signal.
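The differences between these schemes are easiest to see in code. This minimal sketch models line levels as +1 and -1 and illustrates the rules described above; it is a teaching aid, not a model of real transceiver hardware:

```python
def nrz(bits):
    """NRZ: 1 -> positive level, 0 -> negative level."""
    return [1 if b else -1 for b in bits]

def nrzi(bits, level=-1):
    """NRZI: a 1 toggles the line state; a 0 holds it."""
    out = []
    for b in bits:
        if b:
            level = -level  # state change signals a 1
        out.append(level)
    return out

def manchester(bits):
    """IEEE 802.3 Manchester: 1 = low-to-high, 0 = high-to-low (two half-bit levels)."""
    out = []
    for b in bits:
        out += [-1, 1] if b else [1, -1]
    return out

data = [1, 0, 1, 1, 0]
print("NRZ:       ", nrz(data))         # [1, -1, 1, 1, -1]
print("NRZI:      ", nrzi(data))        # [1, 1, -1, 1, 1]
print("Manchester:", manchester(data))  # [-1, 1, 1, -1, -1, 1, -1, 1, 1, -1]
```

Notice that NRZI produces fewer transitions than Manchester for the same data, while Manchester guarantees a transition in every bit period, which is what lets the receiver recover the clock.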

Data Transmission

Regardless of how data is encoded, there are many different aspects of data transmission. The communications industry uses bandwidth to measure throughput; baseband or broadband to define signal modulation; and the flow of data to and from the source to characterize the direction of communication. There are three basic modes of operation for this data flow in network communication: simplex, half-duplex, and full duplex. We explore these modes here.

Simplex Communication

As defined by the American National Standards Institute (ANSI), simplex is a form of data transmission whereby communication is available in only one direction. Data is sent from one node to others, but the other nodes cannot respond. The International Telecommunication Union (ITU) uses the term “half-duplex” to refer to this type of data transmission.

Simplex communications are like broadcasts from a radio station. Transmissions are broadcast to receivers, but the receivers can’t respond. Actually, all networks use broadcast. A broadcast message is a message sent by one node to all the other nodes on the same network. Any node on a network can send a broadcast message by addressing the message to a special broadcast address. These messages are a form of simplex communication.

Multicast messages are another form of simplex communication. In multicast communication, instead of sending the message to every node on the network, the sending node sends the message to a predefined group of “listeners.” Like broadcast messages, multicast messages are sent to a special multicast address. Only the nodes that subscribe to that address receive that communication.
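The special addresses used for broadcast and multicast can be illustrated with Python’s standard ipaddress module. The subnet and multicast group below are hypothetical examples:

```python
import ipaddress

# A broadcast message is addressed to the subnet's broadcast address,
# which every node on that network receives.
lan = ipaddress.ip_network("192.168.10.0/24")
print(lan.broadcast_address)  # 192.168.10.255

# IPv4 multicast groups live in the 224.0.0.0/4 range; only nodes that
# subscribe to a given group address receive its traffic.
print(ipaddress.ip_address("239.1.1.1").is_multicast)      # True
print(ipaddress.ip_address("192.168.10.20").is_multicast)  # False
```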

Duplex Communication

Half-duplex is a form of data transmission in which only one network node at a time sends data. (Note that the ITU uses this term to refer to simplex communications.) Half-duplex is essentially bidirectional simplex communication. Half-duplex works like a walkie-talkie with a push-to-talk button. You can send information from either device, but only one device at a time can transmit. The original Ethernet standard, IEEE 802.3, used a half-duplex communication scheme. Because the network media was shared, only one node at a time could transmit data. This was before the concept of switched networks.

Full-duplex communication is a form of bidirectional data transmission in which multiple messages may travel on the same medium simultaneously. Modern Ethernet networks operate in full-duplex mode by default. Modern LAN technology can still support half-duplex communication if it needs to, as in situations where legacy components that aren’t capable of full-duplex transmission must be integrated into a network.

Chapter Review

It is important to understand how data is transported across a network from one device to another, if for no other reason than AV professionals need to speak the language of IT professionals when they collaborate on networked systems. That said, knowing how AV signals are encoded, and how bandwidth (or lack thereof) may affect the performance of a networked AV system, can be crucial to a project’s success.

Now that you’ve completed this chapter, you should be able to

• Describe how AV signals are encoded and transmitted as binary data

• Define network bandwidth and identify potential bandwidth bottlenecks

• Compare and contrast baseband and broadband network communications

• Compare and contrast simplex, duplex, and half-duplex network communication

Review Questions

1. In ______ communication, data is sent to nodes, but the nodes cannot respond.

A. wide area network

B. full-duplex

C. half-duplex

D. simplex

2. WAN communications are transmitted as a _____ signal.

A. baseband

B. non-return-to-zero inverted (NRZI)

C. broadband

D. non-return-to-zero (NRZ)

3. 4:2:2 sampling reduces video bandwidth by _____.

A. sampling blue and red at half the rate of green

B. sampling color at half the rate of luminance

C. sampling luminance at half the rate of color

D. sampling every other frame at half the bit depth

4. _____ is an unmodulated, raw electromagnetic signal.

A. Broadband

B. Half-duplex

C. Baseband

D. Duplex

5. Modern Ethernet networks operate in _____ mode by default.

A. simplex

B. half-duplex

C. full-duplex

D. broadband

6. Sampling rate is the _____.

A. number of times per second an analog signal is sampled in order to encode it as digital data

B. synchronization signal used to encode digital data for fast Ethernet transmission

C. number of bits available to describe the state of each sample as an analog signal is encoded as digital data

D. number of stages a digital wave has available when it is encoded for transmission

7. In general, you should specify no more than _____ of a shared network connection’s bandwidth availability for average use by AV systems.

A. 10 percent

B. 25 percent

C. 50 percent

D. 70 percent

8. Ethernet is transmitted as a ____ signal.

A. baseband

B. bipolar

C. non-return-to-zero (NRZ)

D. broadband

9. The total bandwidth of an uncompressed audio signal is equal to _____.

A. (sampling rate + bit depth) × number of channels

B. (sampling rate / bit depth) × number of channels

C. sampling rate × bit depth × number of channels

D. (sampling rate × bit depth) / number of channels

Answers

1. D. In simplex communication, data is sent to nodes, but the nodes cannot respond.

2. C. WAN communications are transmitted as a broadband signal.

3. B. 4:2:2 sampling reduces video bandwidth by sampling color at half the rate of luminance.

4. C. Baseband is an unmodulated, raw electromagnetic signal.

5. C. Modern Ethernet networks operate in full-duplex mode by default.

6. A. Sampling rate is the number of times per second an analog signal is sampled in order to encode it as digital data.

7. C. In general, you should specify no more than 50 percent of a shared network connection’s bandwidth availability for average use by AV systems.

8. A. Ethernet is transmitted as a baseband signal.

9. C. The total bandwidth of an uncompressed audio signal is equal to sampling rate × bit depth × number of channels.
