CHAPTER 12

Digital Signals

In this chapter, you will learn

•  How to determine signal strength based on the eye pattern of a digitally encoded signal

•  How to calculate the required bandwidth in bits per second of an uncompressed digital audio or video signal

•  How to differentiate between compression and encoding

•  How to describe common digital video compression methods


In modern AV systems, audio and video are increasingly transported as digital signals. As an AV professional, you need to be familiar with the physical characteristics of digital signals, including how far you can transport them and what happens when the waveform collapses. You also need to be aware of digital media bandwidth requirements and the burden they can place on a shared network. Finally, you need to be conversant in common encoding and compression techniques so that you can specify AV systems with the right capabilities.



The Analog Sunset

December 31, 2013, marked the end of the analog era. That was the day, under the Advanced Access Content System (AACS) license agreement, when manufacturers had to stop making licensed devices that pass protected content to analog outputs. Newly released protected content could even instruct older devices to disable their analog outputs.

The AACS license agreement was adopted by content owners and manufacturers to protect movies and other consumer media on Blu-ray Discs and certain other sources from illegal copying. But its ramifications are felt in professional AV. Much of today’s video equipment features High-Definition Multimedia Interface (HDMI) and DisplayPort connectivity for transporting digital signals.

In addition, many PC and component manufacturers, including AMD, Dell, Intel Corp., Lenovo, Samsung, and LG, agreed to eliminate Video Graphics Array (VGA) connections by 2015. In other words, the standard analog connections that AV professionals have made for decades are slowly becoming a thing of the past. Designers still need to accommodate legacy signals, but it’s more important that they embrace the digital present and future.

Consider the following:

•  Players manufactured after 2010 limited analog video outputs to standard-definition (SD) interlaced signals (480i, 576i, S-Video, composite).

•  The end of analog computer video interfaces was announced in 2010.

•  Finding a new device with an HD-15 (VGA) connection is now uncommon.

•  No player manufactured after 2013 includes analog outputs.

•  Intel planned to end support of VGA and low-voltage differential signaling in 2015 in its PC client processors and chipsets.

Regardless of whether a signal is analog or digital, one of the key specifications to factor into an AV design is bandwidth. Analog signals are quantified using frequency, usually in megahertz in the context of video. With digital signals, bandwidth is quantified in terms of bits per second (bps), megabits per second (Mbps), or (most commonly these days) gigabits per second (Gbps).

Here is a formulaic representation of analog and digital bandwidth for the same 1920 × 1080, 60 Hz signal, for your reference:

1920 × 1080 × 60 / 2 (On/Off pixel pattern) × 3 (3rd harmonic) = 186,624,000 Hz or 186.624 MHz

versus the following:

1920 × 1080 × 60 × 3 (red, green, blue) × 8 (8-bit color) = 2,985,984,000 bps or 3 Gbps (3G) uncompressed
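
For reference, here is a minimal Python sketch of the same two calculations. The figures and formula structure come straight from the examples above; only the variable names are invented.

```python
# Bandwidth of a 1920 x 1080, 60 Hz signal, analog vs. digital
h_px, v_px, refresh = 1920, 1080, 60

# Analog: divide by 2 for the worst-case on/off pixel pattern
# (one full cycle per pixel pair), multiply by 3 for the 3rd harmonic
analog_hz = h_px * v_px * refresh / 2 * 3
print(f"Analog:  {analog_hz / 1e6:.3f} MHz")     # 186.624 MHz

# Digital: 3 color channels (red, green, blue) at 8 bits each
digital_bps = h_px * v_px * refresh * 3 * 8
print(f"Digital: {digital_bps / 1e9:.3f} Gbps")  # 2.986 Gbps, i.e., "3G"
```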

Digital Signals

Digital information is like a standard light switch. A common light switch has only two positions—on and off. In the world of signals, a digital signal is either on or off. These two states are numerically represented with a 1 (on) or a 0 (off).

Digital formats are capable of carrying a lot more than just one signal type at a time. Some digital connections can carry video, audio, control, and network data simultaneously. Oscilloscopes visually represent signals that vary with time, with the vertical axis depicting voltage and the horizontal axis showing time.

Some oscilloscopes can take a digital signal, trace the 1s and 0s that make it up, and display important information in a visual pattern. Because of its oval, eye-like shape, this pattern is called an eye pattern (see Figure 12-1).

Figure 12-1    An eye pattern

You can use an eye pattern to view many aspects of a digital signal. For example, you can view a signal’s amplitude. If the amplitude of the signal is weak or if the receiver has poor sensitivity, the signal trace will encroach on the hexagonal mask indicated in Figure 12-1. When this happens, the signal will become unstable. You may see green sparkles or other color anomalies, and the audio quality may suffer as well. If the eye pattern degrades too much, you may lose your signal. When that happens, you may need to plan for repeaters along the signal path (Figure 12-2).

Figure 12-2    Repeaters help boost digital signals.

Digital Audio Bandwidth

An unencoded audio signal’s bandwidth requirement is directly proportional to the signal’s sampling rate (measured in hertz), bit depth (measured in bits), and number of channels. See Figure 12-3. The formula for the required data throughput of a single unencoded audio stream is as follows:

Figure 12-3    Bit depth is like a ruler; the more granular, the more accurate the result.

Sampling Rate × Bit Depth × Number of Channels = Bit Rate (bps)
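
As a quick illustration, here is a minimal Python sketch of the formula. The CD parameters in the example (44.1 kHz, 16-bit, two channels) are standard values, not drawn from this chapter.

```python
def audio_bit_rate(sampling_rate_hz: int, bit_depth: int, channels: int) -> int:
    """Bit rate of an unencoded (PCM) audio stream, in bits per second."""
    return sampling_rate_hz * bit_depth * channels

# CD-quality stereo: 44.1 kHz sampling, 16-bit depth, 2 channels
print(audio_bit_rate(44_100, 16, 2))  # 1,411,200 bps (~1.4 Mbps)
```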

This unencoded data can be streamed over a local area network (LAN), saved to a file (for archiving or editing), or compressed with an encoder to reduce its size. Note that compressing a smaller file takes less processing than compressing a larger one, easing the central processing unit (CPU) load for other purposes. Note also that a 7.1 surround audio file will be significantly larger than a mono voice file. As always, the question is, “How good is good enough?” Is reducing bandwidth worth the quality trade-offs?

Common audio sampling rates include the following:

•  Telephone    8 kHz

•  Audio CD    44.1 kHz

•  DVD audio    48 kHz

•  Blu-ray Disc    96 or 192 kHz

•  Super Audio CD (SACD)    2.8 MHz

NOTE    For more in-depth information about bit depth and sampling, you can review the CTS Certified Technology Specialist Exam Guide (McGraw-Hill Education and InfoComm International, 2013) or Networked AV Systems (McGraw-Hill Education and InfoComm International, 2014).

Digital Video Bandwidth

Audio and video streams are both digital representations of analog phenomena: sound and light. So, why does video require so much more bandwidth than audio?

It’s because digitally representing video requires much more data. For video, each pixel has a bit depth, that is, a number of possible states. The moving image has a sampling rate, expressed as the number of frames per second. Multiply bit depth and frame rate by the total number of pixels in the image to find the uncompressed bandwidth.

The formula for black-and-white, uncompressed digital video signal bandwidth is as follows:

Horizontal pixels × Vertical pixels × Bit depth × Frames per second = Bit rate (bps)

The number of pixels is determined by the video format. For example, each frame of 720p digital video is 1280 pixels wide by 720 pixels high. That’s 921,600 total pixels in each frame. Tables 12-1, 12-2, and 12-3 list some common video pixel resolutions for your reference.

Table 12-1    High-Definition Format

Table 12-2    Common Intermediate Format (PAL)

Table 12-3    Standard Intermediate Format, National Television System Committee (NTSC)
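
To make the formula concrete, here is a short Python sketch using the 720p example above; the 8-bit depth and 30 frames per second are illustrative assumptions.

```python
def bw_video_bit_rate(h_px: int, v_px: int, bit_depth: int, fps: int) -> int:
    """Bit rate of an uncompressed single-channel (black-and-white) video stream."""
    return h_px * v_px * bit_depth * fps

# 720p (1280 x 720 = 921,600 pixels per frame), 8-bit, 30 fps
print(bw_video_bit_rate(1280, 720, 8, 30))  # 221,184,000 bps (~221 Mbps)
```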

Of course, when was the last time you streamed black-and-white video? Color video is not a single signal. It’s three: red, green, and blue. That means a color video stream requires even more data. The amount of bandwidth you need to stream color video depends on how that video is sampled. Here’s how to calculate the bandwidth (bps) for color video, depending on the sampling method used.

4:4:4 Sampling

4:4:4 sampling uses all three color signals in equal proportion. A 4:4:4 color video signal therefore requires three times the bandwidth of its black-and-white counterpart.

You may have seen the color depth of digital video expressed as 24 or 32 bits. This is just a bit depth of 8 multiplied by the number of channels. The formula for three-channel, uncompressed digital video signal bandwidth is as follows:

Horizontal pixels × Vertical pixels × Bit depth × Frames per second × 3 = Bit rate (bps)
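
Here is a minimal sketch of the three-channel calculation, using 1080p at 30 frames per second and 8 bits per channel as illustrative parameters:

```python
def color_video_bit_rate(h_px: int, v_px: int, bit_depth: int, fps: int,
                         channels: int = 3) -> int:
    """Uncompressed multichannel video bit rate; channels=3 models 4:4:4 RGB."""
    return h_px * v_px * bit_depth * fps * channels

# 1080p, 8-bit, 30 fps, 4:4:4
print(f"{color_video_bit_rate(1920, 1080, 8, 30) / 1e9:.2f} Gbps")  # 1.49 Gbps
```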

4:4:4:4 Sampling

Computers have a transparency layer called an alpha channel. This channel is the same size as the red, green, or blue signal. When you set your computer’s color depth, you see either 24 bit or 32 bit: 24 bit provides 8 bits each for quantizing red, green, and blue. Add 8 bits for the alpha channel, and you have 32 bits (and hence 4:4:4:4).

The formula for four-channel uncompressed digital video signal bandwidth is as follows:

Horizontal pixels × Vertical pixels × Bit depth × Frames per second × 4 = Bit rate (bps)
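
The same sketch extends to four channels. Passing channels=4 to the color_video_bit_rate() helper sketched in the previous section adds the alpha channel:

```python
# 1080p, 8-bit, 30 fps, 4:4:4:4 (red, green, blue, plus alpha)
print(f"{color_video_bit_rate(1920, 1080, 8, 30, channels=4) / 1e9:.2f} Gbps")  # 1.99 Gbps
```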

4:2:2 Sampling

Because human eyes can’t detect color information as well as black-and-white detail, you can get away with limiting the color information included in the digital sample. 4:2:2 sampling includes all the luminance information but only half the horizontal resolution of the red and blue color-difference channels. The drop in quality isn’t noticeable; 4:2:2 sampling is used for broadcast TV. Keep in mind, you can’t use this sampling method for computer graphics because computers don’t have component output.

The formula for 4:2:2 digital video signal bandwidth is as follows:

Luminance + Cr + Cb = Bit rate (bps)

where

•  Luminance = Frames per second × Bit depth × Horizontal pixels × Vertical pixels

•  Cr (red) = Frames per second × Bit depth × 1/2 Horizontal pixels × Vertical pixels

•  Cb (blue) = Frames per second × Bit depth × 1/2 Horizontal pixels × Vertical pixels
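
Here is a minimal sketch of the 4:2:2 math, assuming 1080p at 60 frames per second and 8 bits per channel; the helper also generalizes to other chroma fractions.

```python
def subsampled_bit_rate(h_px: int, v_px: int, bit_depth: int, fps: int,
                        chroma_frac: float) -> float:
    """Bit rate with full-resolution luminance plus two chroma channels
    (Cr and Cb), each sampled at chroma_frac of the horizontal resolution."""
    luminance = fps * bit_depth * h_px * v_px
    chroma = fps * bit_depth * (chroma_frac * h_px) * v_px
    return luminance + 2 * chroma  # luminance + Cr + Cb

# 4:2:2 (half-width chroma) at 1080p, 8-bit, 60 fps
print(f"{subsampled_bit_rate(1920, 1080, 8, 60, 0.5) / 1e9:.2f} Gbps")  # 1.99 Gbps
```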

4:1:1 Sampling

DV (Digital Video) and some Moving Picture Experts Group (MPEG) video formats (see Chapter 17, “Streaming Design”) use 4:1:1 sampling, limiting the color information even further. The full luminance signal is still present, but only one-fourth of the color information is sampled.

The formula for 4:1:1 digital video signal bandwidth is as follows:

Luminance + Cr + Cb = Bit rate (bps)

where

•  Luminance = Frames per second × Bit depth × Horizontal pixels × Vertical pixels

•  Cr (red) = Frames per second × Bit depth × 1/4 Horizontal pixels × Vertical pixels

•  Cb (blue) = Frames per second × Bit depth × 1/4 Horizontal pixels × Vertical pixels
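
Reusing the subsampled_bit_rate() helper sketched in the 4:2:2 section, the 4:1:1 case simply drops the chroma fraction to one-fourth:

```python
# 4:1:1 (quarter-width chroma) at 1080p, 8-bit, 30 fps
rate = subsampled_bit_rate(1920, 1080, 8, 30, 0.25)
print(f"{rate / 1e6:.0f} Mbps")  # ~746 Mbps
```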

NOTE    4:2:0 sampling is also a common choice. It is mathematically the same as 4:1:1 sampling in terms of required bandwidth, with basically the same amount of color information arranged in a different pixel pattern. Both 4:2:0 and 4:1:1 sampling suffer from blocky color patterns and poor color fidelity.

Bandwidth: Determining Total Program Size

When determining the total required bandwidth of an AV stream, it’s important to remember that reducing or increasing the video or audio information changes the total data transmission rate. Yes, video requires a lot of bandwidth, but sometimes people forget the audio. Audio can require a lot more bandwidth than you might think, especially for surround-sound applications. All those channels add up. Remember also to account for overhead, namely, the addressing and control information that must accompany AV packets.

When it comes to AV program bandwidth, here is the formula:

Video bit rate + Audio bit rate + Overhead = Total required bandwidth (bps)
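
Here is a hedged sketch of the total. The video and audio figures follow the formulas in this chapter, but the 20 percent overhead allowance is an assumption for illustration only; actual overhead depends on the transport protocol.

```python
# Video: 4:1:1, 1080p, 8-bit, 30 fps (calculated in the 4:1:1 section)
video_bps = 746_496_000
# Audio: 7.1 surround, i.e., 8 channels of 48 kHz, 24-bit PCM
audio_bps = 48_000 * 24 * 8
# Overhead: assumed 20% of the payload for addressing and control
overhead_bps = 0.20 * (video_bps + audio_bps)

total_bps = video_bps + audio_bps + overhead_bps
print(f"{total_bps / 1e6:.0f} Mbps")  # ~907 Mbps
```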

Once AV program bandwidth is calculated, you may find that you must choose a lower sampling rate to fit within the bandwidth limitations of your overall system. You don’t necessarily have to think of choosing a lower sampling rate as sacrificing quality, though, especially when it comes to video.

Digital techniques have become far more sophisticated in recent years. Modern digital codecs can analyze image content and decide which details to prioritize. With a color-subsampled image, the program decoding the picture estimates the missing color values, interpolating from the surrounding intact samples. That means a 4:2:2 image can look almost as good as a 4:4:4 image.

Content Compression and Encoding

A single stream of uncompressed 4:1:1 video, with audio and overhead, can consume most of the rated capacity of a gigabit LAN and easily exceed that of anything slower. AV data must therefore be compressed or encoded before it can travel across an enterprise network.

Compression is the process of reducing the data size of video and audio information before sending it over a network. The information is then decompressed for playback when it reaches its destination. Compression may be “lossless” or “lossy,” which is a more important distinction for AV signals than it is for, say, e-mail messages.

With lossless compression, the AV data is the same after it’s been compressed and decompressed as it was originally. All the data remains intact. (In IT, lossless compression is important for maintaining the integrity of data such as financial records, text, and other non-AV information that would otherwise lose its meaning if any of it were altered.)

In AV, lossless compression schemes are important when the application demands perfect fidelity of AV data throughout its transmission—for example, in video security or command-and-control systems. Apple Lossless and Dolby TrueHD are examples of lossless audio compression; JPEG 2000 and H.264 offer lossless video compression.

Lossy compression techniques compress and discard “unneeded” data while still returning an acceptable-quality stream after decoding. The compression program examines a video or audio segment for detail that is less important or less noticeable. It then drops the data describing that detail from the stream. When the stream is re-created during decoding and playback, people are unlikely to notice the dropped information. For example, a video compression scheme might focus its work more on moving images in a stream than on static background images.

Lossy compression is common for AV applications, such as streaming media and IP telephony. Advanced Audio Coding (AAC) and MPEG-1 or MPEG-2 audio layer 3 (MP3) are examples of lossy audio compression; MPEG-2 and VC-1 offer lossy video compression. Some lossy schemes also implement lossless techniques to further decrease file sizes.

Codecs

The software or hardware that actually performs the lossless or lossy compression, as well as the decompression, is the codec, short for “coder/decoder.” Codecs come in an almost bewildering variety: one-way and two-way, hardware- and software-based, compressed and uncompressed, specialized videoconferencing, and so on. Deciding what type of codec to use for a design requires considering several factors, including the following:

•  IT policies. The playback software that users currently have (or are allowed to have) determines the codecs their playback devices use, which in turn determines the codec you need to encode streams.

•  Licensing fees associated with the codec.

•  The resolution and frame rate of the source material.

•  The desired resolution and frame rate of the stream.

•  The bandwidth required to achieve your desired streaming quality.

When it comes to matching the bandwidth you need to a codec’s capabilities, you may not be able to find technical specifications to help you. Some testing may be in order.

Digital Audio Compression: MP3

MP3 is one of the more common audio encoding formats. It was defined as part of a group of audio and video coding standards designed by the Moving Picture Experts Group. MP3 uses a lossy compression algorithm that reduces or discards the auditory details that most people can’t hear, drastically reducing overall file sizes.

When capturing and encoding MP3 audio, you can choose your bit rate and sampling rate. The MPEG-1 format includes bit rates ranging from 32 to 320 kbps, with available sampling rates of 32 kHz, 44.1 kHz, and 48 kHz. MPEG-2 ranges from 8 to 160 kbps, with available sampling rates of 16 kHz, 22.05 kHz, and 24 kHz.

MP3 encoding may be accomplished using constant bit rate (CBR) or variable bit rate (VBR) methods. You can predefine CBR encoding to match your bandwidth, especially where bandwidth is limited. VBR encoding uses a lower bit rate to encode portions of the audio that have less detail, such as silence, and uses a higher bit rate to encode more complex passages. CBR encoding results in more predictable file sizes, which helps in planning for bandwidth requirements. However, VBR can result in better perceived quality.
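
Because a constant bit rate makes stream size a simple product of rate and time, CBR planning is straightforward arithmetic. In this sketch, the 128 kbps rate and four-minute duration are illustrative assumptions.

```python
# Predicted file size for a CBR MP3 stream
bit_rate_bps = 128_000   # 128 kbps CBR
duration_s = 4 * 60      # a four-minute track

size_bytes = bit_rate_bps * duration_s / 8  # 8 bits per byte
print(f"{size_bytes / 1e6:.2f} MB")  # 3.84 MB
```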

Digital AV Compression

Compressing a data stream that contains both audio and video is considerably more complex than compressing audio alone. Because video carries far more information than audio, the resulting files and streams are much larger. There are many AV compression formats, but they generally fall into two categories.

The first is intraframe compression, which compresses and sends each frame individually (Figure 12-4). Because every frame is complete, intraframe compression is preferable for editing video. Motion JPEG uses intraframe compression. Not surprisingly, the resulting data files are large.

Figure 12-4    Intraframe compression

Interframe compression, on the other hand, detects how much information has changed between frames and sends a new frame only if there are significant differences (Figure 12-5). Interframe compression requires less bandwidth to stream and results in smaller files. MPEG video algorithms use interframe compression; so do most videoconferencing codecs.

Figure 12-5    Interframe compression

NOTE    You may have heard it’s a bad idea for your clients to wear patterned clothes during a videoconference. It’s true! An interframe codec, common in videoconferencing systems, sends a new frame every time it sees the pattern shift, resulting in a far higher bandwidth stream.

When it comes to encoding and decoding a digital signal, video frames are formed into logical groups, known as groups of pictures (GoPs), shown in Figure 12-6. A GoP is a set of video frames, which, when played in succession and in line with other GoPs, creates the video stream. In a GoP, there are three picture types.

Figure 12-6    A group of pictures

•  I-frames (intraframes), which are complete reference pictures. Every GoP starts with an I-frame.

•  P-frames (predictive frames), which contain information about how the P-frame is different from the preceding I- or P-frame.

•  B-frames (bidirectional frames), which contain information about how the B-frame is different from the preceding and following I- or P-frames.

Encoding software chooses which frames in a GoP to actually transmit in a video stream. Interframe encoding works from one I-frame to the next, compressing several frames at a time.
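
As a rough illustration, here is a sketch of one common GoP layout; the exact pattern and GoP length vary by encoder and settings.

```python
# One possible 9-frame GoP: an I-frame followed by pairs of B-frames
# and periodic P-frames. B-frames reference neighbors in both directions.
gop = ["I", "B", "B", "P", "B", "B", "P", "B", "B"]

for frame_type in ("I", "P", "B"):
    print(frame_type, gop.count(frame_type))  # I 1, P 2, B 6
```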

You will learn more about compression and its role in AV systems design in Chapter 17, where we discuss streaming applications.

Chapter Review

At this point in the evolution of AV, your design is likely to transmit mostly digital signals. In that case, you need to plan your design around the proper bandwidth requirements, based on the client’s needs, and understand the basics of compression and encoding.

Review Questions

The following questions are based on the content covered in this chapter and are intended to help reinforce the knowledge you have assimilated. These questions are not extracted from the CTS-D exam nor are they necessarily CTS-D practice exam questions. For an official CTS-D practice exam, download the Total Tester as described in Appendix D.

  1. Your customer wants to stream CD-quality music at 24 bits in stereo to his lobby area. How much bandwidth will this application require?

A. 754 bps

B. 1.5 Kbps

C. 2.1 Mbps

D. 1 Gbps

  2. You need to stream 10 channels of 96 kHz audio. You have 25 Mbps of bandwidth available. What is the highest bit depth you can use?

A. 24 bit

B. 26 bit

C. 32 bit

D. 48 bit

  3. You currently have the bandwidth capacity to stream 30 channels of 48 kHz, 24-bit audio. How many channels could you stream if you upgraded to 96 kHz, 24-bit audio?

A. 15 channels

B. 25 channels

C. 30 channels

D. 60 channels

  4. What is the required bandwidth for 4:4:4, progressive digital video signal (1920 × 1080, 8 bits at 30 Hz)?

A. 1.49 Gbps

B. 2.60 Gbps

C. 3.20 Gbps

D. 460.4 Mbps

  5. What is the required bandwidth for a 4:4:4 computer image (1280 × 1024, 8 bits at 80 Hz)?

A. 2 Gbps

B. 2.5 Gbps

C. 3 Gbps

D. 4 Gbps

  6. What is the required bandwidth for 4:2:2 progressive digital video (1920 × 1080, 8 bits at 60 Hz)?

A. 100 Mbps

B. 1.99 Gbps

C. 3.24 Gbps

D. 1 Gbps

  7. What is the required bandwidth for 4:1:1 progressive digital video signal (1920 × 1080, 8 bits at 30 Hz)?

A. 746 Mbps

B. 1.49 Gbps

C. 3.30 Gbps

D. 1.30 Gbps

  8. _________ compression is common for networked AV applications, such as streaming media and IP telephony.

A. Lossless

B. Lossy

C. Intraframe

D. Apple QuickTime

Answers

  1. C. 44,100 × 24 × 2 = 2.1 Mbps

  2. B. 96,000 × X × 10 = 25 Mbps; X = 26 bit

  3. A. 48,000 × 24 × 30 = 34.6 Mbps = 96,000 × 24 × X; X = 15

  4. A. 1920 × 1080 × 8 × 30 × 3 = 1.49 Gbps

  5. B. 1280 × 1024 × 8 × 80 × 3 = 2.5 Gbps

  6. B. Luminance + Cr + Cb = Bit rate (bps); (60 × 8 × 1920 × 1080) + [60 × 8 × (.5 × 1920) × 1080] + [60 × 8 × (.5 × 1920) × 1080] = 1.99 Gbps

  7. A. Luminance + Cr + Cb = Bit rate (bps); (30 × 8 × 1920 × 1080) + [30 × 8 × (.25 × 1920) × 1080] + [30 × 8 × (.25 × 1920) × 1080] = 746 Mbps

  8. B. Lossy compression is common for networked AV applications, such as streaming media and IP telephony.
