CHAPTER 17

Streaming Design

In this chapter, you will learn about

•  Conducting a needs analysis for streaming AV systems

•  Conducting a network analysis for streaming AV systems

•  Designing an AV streaming system that takes into account bandwidth, latency, and other requirements

•  Quality of service (QoS) and its effect on streaming AV systems

•  The network protocols used for streaming

•  The difference between unicast and multicast and the basics of implementing multicast distribution on an enterprise network


The network environment is as important to a streaming application as a physical environment is to a traditional AV system. When it comes to designing for a streaming AV system, you need to analyze the network as carefully as you would a physical room by exploring its potential, discovering its limitations, and recommending changes to improve system performance.

Streaming media comprises Internet Protocol television (IPTV), live video and audio streaming, and video on demand. It is also the foundation for other networked AV systems, such as digital signage and conferencing.

To design a successful streaming application, you must think carefully about the client’s needs and how they impact service targets. For example, how much bandwidth will your AV streams require? How much latency and packet loss can users tolerate? Some answers depend on the network itself; others depend on how content is encoded to travel over that network. As you begin to discover the number and quality of streams your client needs, you may set off some bandwidth alarm bells. Video and audio streams require significantly more bandwidth than other AV signals, such as control. If you understand the concepts underlying the compression, encoding, and distribution of digital media, you’re on your way to designing a variety of streaming solutions.



Streaming Needs Analysis

When conducting a needs analysis for a streaming solution, you need to discover not only what the client wants, but also what the current networking environment can deliver. Typical questions related to traditional audiovisual applications—regarding room size, viewer positioning, and so on—may be impossible to answer because the client may not know with any certainty where end users will be when they access streaming content. Instead of asking about the room where an AV system would reside, you need to delve into issues relating to streaming quality, bandwidth, latency, and delivery requirements. You need to think in terms of tasks, audience, end point devices, and content. And you need to explore these issues in the context of an enterprise network, not a particular venue.

Ultimately, what you learn in the needs analysis will help inform the service-level agreement (SLA) for your streaming AV system—and possibly other, more encompassing SLAs. As you collect information about the client’s needs, keep asking yourself, “How is this going to impact network design?” because, eventually, you may have an IT department to answer to.

Streaming Tasks

In the design of any AV system (networked or non-networked), form follows function. The needs analysis begins with discovering the purpose of the system. Assume you’ve already established that your client needs to stream AV. Why? What tasks will streaming AV be used for?

Decisions regarding bandwidth allocation, latency targets, and port and protocol requirements should be driven by a business case that is established in the needs analysis. How do the tasks that the prospective streaming system will perform contribute to the organization’s profitability, productivity, or competitiveness? If you can make a valid business case for why your streaming system requires a high class of service, open User Datagram Protocol (UDP) ports, or 30 Mbps of reserved bandwidth, you should get it. But you need to understand the following:

•  “The picture will look bad if we don’t reserve the bandwidth” is not a business case.

•  “The picture will look bad if we don’t reserve the bandwidth, and if the picture looks bad, the intelligence analysts won’t be able to identify the bunker locations” is an excellent business case.

If there’s no business case for high-priority, low-latency, high-resolution streaming video, for example, then the user doesn’t need it, and it probably shouldn’t be approved. In that case, be prepared to relinquish synchronous HD streaming video if the client doesn’t really require it. Just be sure you document the lower expectations as your system’s service target.

Streaming Task Questions

What tasks will the streaming system be used for? This is the most basic question of a needs analysis. Answering it in detail should reveal the following:

•  The importance of the system to the organization’s mission

•  The scope of use in terms of number and variety of users

•  The frequency of use

If the task is a high priority, the streamed content will require priority delivery on the network, which will affect the bandwidth allocated and the class of service assigned to it. The nature of the task will also impact latency requirements. As you delve further into the tasks that the streaming system must support, you may ask questions such as, “Do users need to respond or react immediately to the streamed content?” If so, latency requirements should be very low—such as 30 milliseconds.

Furthermore, find out the answer to this question: “Will any delay in content delivery undermine its usefulness?” This question is also aimed at determining how much latency is acceptable in the system, outside of immediate-response situations. And the answers will help address follow-on questions: “Can we use TCP transport, or is UDP necessary?” and “How should the data be prioritized in a QoS architecture?”

Audience

The user is the star of the streaming needs analysis (or any needs analysis, for that matter). In fact, who the audience is specifically may be a major factor in a business case for a high-resolution, low-latency streaming system. We’re talking about the organization’s hierarchy here. If the main users are fairly high in the hierarchy—management, executives—they may demand a high-quality system to match their status.

If, on the other hand, the system will be used broadly throughout the organization, you may need to consider the bandwidth implications of such widespread use. Would a multicast work? (We will cover multicasting in detail later in this chapter.) Should different groups of users be assigned different service classes? You’ll understand better when you identify the audience.

Where the audience is located, however, may be your primary challenge in offering a streaming system that meets the client’s need. Location is a far more complex issue for streaming applications than for traditional audiovisual systems. Asking about location should reveal whether all users (the audience) are on the same local area network (LAN) or wide area network (WAN) and whether some users will access streaming content over the open Internet. Why does location impact design? For several reasons, including the following:

•  Bandwidth over a WAN is limited.

•  QoS and multicast are impossible over the open Internet.

•  LAN-only solutions have a far greater array of data link and quality options.

End Points

Related to audience are the end points people will use to access streaming content. Your client may need to stream data to a handful of overflow rooms, hundreds of desktop PCs, or thousands of smartphones. The number and type of end points have a direct impact on service requirements.

How many different end points does the client need to stream to? With unicast transmissions, the bandwidth cost will rise with each additional user. If the user needs to stream to a large or unknown number of users within the enterprise network, you need to consider using multicast transmission or reflecting servers to reduce bandwidth. What kind of end points will people be using? Different end points support different resolutions, for example. If end points will be especially heterogeneous, you may want to implement a variable bit-rate intelligent streaming solution. If most users will be using mobile devices, content must be optimized for delivery over Wi-Fi or 3G/4G cellular networks.
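The bandwidth arithmetic behind the unicast-versus-multicast decision is worth making explicit. The following sketch uses hypothetical figures (a 4 Mbps stream and 200 viewers are illustrative, not from any particular design):

```python
def unicast_bandwidth_mbps(stream_mbps: float, viewers: int) -> float:
    """Unicast sends a separate copy of the stream to every viewer,
    so bandwidth cost scales linearly with the audience."""
    return stream_mbps * viewers

def multicast_bandwidth_mbps(stream_mbps: float, viewers: int) -> float:
    """Multicast sends one copy per network segment, regardless of
    how many subscribed viewers sit behind it."""
    return stream_mbps  # one copy traverses each link

# A 4 Mbps stream to 200 desktop PCs:
print(unicast_bandwidth_mbps(4.0, 200))   # 800.0 Mbps of aggregate traffic
print(multicast_bandwidth_mbps(4.0, 200)) # 4.0 Mbps
```

The 200-fold difference is why a large or unknown audience within the enterprise usually points toward multicast or reflecting servers.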

Content Sources

The type of content the client will stream has a direct impact on the network, which in turn affects the streaming system design. Like other parts of the needs analysis, content considerations factor into bandwidth usage, latency targets, and class of service. The following are some basic questions to ask:

•  Do you need to stream audio, video, or both? Video requires far more bandwidth than audio. Codec options will also differ based on type of media.

•  How will content be generated? Video content generated by a computer graphics card may require more bandwidth than content captured by a camera. You’ll also have different codec options for different sources.

•  Will streaming images be full-motion or static? Still images, or relatively static images, require a lower frame rate than full-motion video.

•  Will the client be streaming copyrighted material or material that’s protected by digital rights management (DRM)?

You will also need to consider how and where content will enter the network. This is often referred to as the place where content is ingested (or uploaded) and can be a computer, a server, a storage system, or a purpose-built network streaming “engine.” You should determine how many ingestion points there will be and ensure each has the bandwidth required for expedient uploading. In addition, how many concurrent streams, or “channels,” must be uploaded? The answer should help establish how many video servers will be needed.

Moreover, can you exercise any control over the format, bandwidth, and so on of the content that will be ingested for streaming? If your clients are ingesting several different formats, they may need a transcoder, which automatically translates streams into formats that are compatible with the desired end points or are bandwidth-friendly toward certain connections.

And when it comes to the streaming content itself, you may want to make recommendations on frame rates, resolutions, and other streaming media characteristics based on the network design you anticipate. For video, how much motion fidelity do users need to accomplish their tasks? Lower acceptable fidelity will allow you to use a lower frame rate. Higher fidelity will result in increased latency or higher bandwidth.

What level of image quality or resolution is required for the task? Again, the focus is on requirements, not desires. Use the answers to these questions to help establish a business case: At what point will limiting the video resolution detract from the system’s ability to increase the client’s productivity, profitability, or competitiveness? How low can the video resolution be before the image is no longer usable?

Using Copyrighted Content

Your clients may want to stream content that they didn’t create, such as clips from a media library or music from a satellite service. Unfortunately, such digital distribution can violate copyright laws.

Make sure clients are aware of the potential licensing issues related to the content they want to stream. You may need to negotiate a bulk license with a content service provider, such as a cable or satellite television provider or a satellite music service.

If you fail to obtain the proper licenses to stream content, you aren’t just risking legal repercussions; you’re risking the system’s ability to function at all. Publishers and copyright owners use DRM technology to control access to and usage of digital data or hardware. DRM protocols within devices determine what content is even allowed to enter a piece of equipment. Copy protection such as the Content Scrambling System (CSS), used in DVD players, is a subset of DRM. Actual legal enforcement of DRM policies varies by country.

High-bandwidth Digital Content Protection (HDCP) is a form of DRM developed by Intel to control digital audio and video content as it travels across Digital Visual Interface (DVI) or High-Definition Multimedia Interface (HDMI) connections. It prevents the transmission or interception of nonencrypted HD content.

HDCP support is essential for the playback of protected high-definition (HD) content. Without the proper HDCP license, material will not play.

It can be difficult—though possible—to stream to multiple DVI or HDMI outputs. All the equipment used to distribute the content must be licensed. When in doubt, always ask whether a device is HDCP-compliant.


NOTE    IPTV and streaming rely largely on the same technology, so for the purposes of this book, the two will be handled as one topic. IPTV is a system that delivers television services over a packet-switched network, such as a LAN or the Internet. Streaming is traditionally the transfer of audio and video files that are played at the same time they’re temporarily downloaded to a user’s computer or other device.

Streaming Needs Analysis Questions

Use the following questions to gather information from clients about streaming applications. Consider how the users’ needs with respect to each item could impact the system’s design, cost, or network.

Tasks

•  What tasks will the system be used to perform?

•  Do users need to respond or react immediately to the streamed content?

•  Will any delay in content delivery undermine the usefulness of the content?

Audience

•  Who is the intended audience?

•  Where is the intended audience (onsite, within the LAN; offsite, within the company WAN; offsite, outside the company WAN; and so on)?

•  What are the audience access control requirements?

End Points

•  How many different end points (devices) do you need to stream to?

•  What kind of end points will your end users be using to view content (desktops, mobile devices, large displays, and so on)?

Content

•  What kind of content do you need to stream (for example, full-motion video and audio, full-motion video only, audio only, still images only, still images and audio, and so on)?

•  How will content be generated?

•  Will you be streaming Voice over IP (VoIP)?

•  How will content be ingested into the network?

•  For motion video, how much motion fidelity is required?

•  What level of quality and/or image resolution do you require (standard definition, high definition, best possible, adequate, and so on)?

•  How many concurrent upload streams, or “channels,” do you require?

•  How many concurrent download streams, or “channels,” do you require?

•  What are the accessibility requirements, if any?

Storage

•  Will content be available on demand?

•  How long will content need to be stored?

•  How quickly does content need to be propagated from storage?

•  What are the backup or redundancy requirements?

Streaming Design and the Network Environment

You’ve talked to users to find out what they need from a streaming media solution, so now it’s time to visit the client’s IT department. The network environment is as important to a streaming application as the physical environment is to a traditional AV system. You need to analyze the network as carefully as you would a physical room by exploring its potential, discovering its limitations, and recommending changes to improve system performance. Let’s get started.

Topology

In the needs analysis stage, you determined whether streaming users would be accessing content inside or outside a LAN. Remember that a LAN in this case is a single, openly routable location. Until you’ve had to traverse a firewall, you’re still on a LAN.

If your streaming system will remain within a single LAN, you’re lucky. The system’s routing will be considerably less complex, multicasting will be far easier to implement, and bandwidth availability is unlikely to be a problem. If you’re streaming content over a WAN, however, you have several additional factors to consider, including the following:

•  The physical location of streaming servers

•  Bandwidth availability on every network segment

•  The possible need for hybrid unicast/multicast implementation

•  The addressing scheme of the organization

The addressing scheme will determine what information you need to gather about your ports and protocols (see Chapter 9). It will also impact how access control and system control will be managed. Can the system identify authorized users via the Domain Name System (DNS)? Do you need to reserve a set of Internet Protocol (IP) addresses for the streaming servers on the Dynamic Host Configuration Protocol (DHCP) server?

Bandwidth: Matching Content to the Network

Bandwidth availability represents the single largest constraint on most networked AV systems. It will drive your choice of transmission method and codec for streaming applications. We will cover these considerations later in this chapter.

Remember that the network is only as fast as its slowest link. For design purposes, treat the network’s rated capacity as that of its bottleneck—the lowest-throughput link the data will have to traverse. Whatever the rated capacity of the network is, you can’t use all of it. That would be like mounting a display with bolts that can hold exactly the display’s weight; the first time someone bumps into the screen, it will tear right off the wall. How much of the network can you use?

Realistically, only 70 percent of rated network capacity is available. The remaining 30 percent should be reserved to accommodate peak data usage times and avoid packet collision. Some industry experts recommend reserving as much as 50 percent of the network for this purpose. Ask the network manager what network capacity is available for each client.

In a converged network, of the 70 percent considered “available,” only a portion should be used for streaming media—about 30 percent of the available 70 percent. Otherwise, you won’t have enough bandwidth left for other network applications.

Depending on the importance of the streaming application, you may want to reserve this bandwidth with Resource Reservation Protocol (RSVP). RSVP is a Transport layer protocol used to reserve network resources for specific applications. The reservation is initiated by the host receiving the data and must be renewed periodically. RSVP is the signaling mechanism behind Integrated Services (IntServ) and can be deployed alongside differentiated services (DiffServ). At the least, DiffServ QoS will be required for WAN streaming. We will discuss DiffServ later in this chapter.

Your goal, then, is to design a streaming application that consumes no more than 30 percent of the available network capacity. You may also need to implement some form of bandwidth throttling to prevent the streaming application from overwhelming the network during peak usage, setting the limit at that 30 percent mark. Traffic shaping will introduce latency into the stream, causing it to buffer but preserving its quality. Traffic policing drops packets over the limit, which reduces video quality but avoids additional latency. What’s more important to your client: timeliness or quality?
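The stacked percentages above can be reduced to a simple calculation. The sketch below encodes the chapter’s rule of thumb (70 percent of rated capacity realistically available, 30 percent of that for streaming); the 1 Gbps figure is just an example:

```python
def streaming_budget_mbps(rated_capacity_mbps: float,
                          usable_fraction: float = 0.70,
                          streaming_fraction: float = 0.30) -> float:
    """Bandwidth a streaming design should plan to consume.

    Only ~70% of rated capacity is realistically available (the rest
    absorbs peak usage), and on a converged network only ~30% of that
    should go to streaming media.
    """
    return rated_capacity_mbps * usable_fraction * streaming_fraction

print(streaming_budget_mbps(1000))  # 210.0 Mbps, i.e., 21% of a 1 Gbps network
```

This is also the natural place to set a traffic-shaping or policing threshold for the streaming application.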


NOTE    Based on estimates that only 70 percent of rated network capacity is considered available and only 30 percent of the available capacity is available for streaming, consider 21 percent of a network’s rated capacity available for streaming. See Figure 17-1.


Figure 17-1    The bandwidth available for streaming may be a small part of a network’s capacity.

Image Quality vs. Available Bandwidth

Working with the client and the client’s IT group, you must determine whether the amount of bandwidth allocated for streaming will be driven by current network availability or the end user’s quality expectations. It may be anathema to AV professionals, who pride themselves on high-quality experiences, but the client may be willing to sacrifice quality in favor of network efficiency. In that case, it’s your job to ensure that the client knows what they’re giving up and that the SLA reflects an acceptable balance between efficiency and quality.

In general, streaming bandwidth will be driven by network availability when the client is extremely cost-conscious or the streaming service is being added to an existing network that’s difficult to change. It will be driven by the user’s need for quality when high image resolution and frame rate are required for the tasks the streaming service supports, or when those tasks are mission critical. The need for quality also seems to get a boost when your client audience comes from the C-suite.

What can you do when you discover that streaming requirements exceed available network resources? If quality takes precedence, either add bandwidth, optimize existing bandwidth, assign streaming a high QoS class, or reserve bandwidth using RSVP. We will cover QoS and RSVP in more depth later in this chapter.

If availability takes precedence, you may need to reduce image resolution, frame rate, or bit rate; cut the number of channels by implementing multicast streaming or cache servers; or implement intelligent streaming.


TIP    When discussing bandwidth capacity with a network administrator, make sure you’re on the same page. Is the admin telling you how much bandwidth the whole network is rated for? If so, you can use only about 21 percent of that for streaming. Is the admin telling you the average available bandwidth? If so, you can use about 30 percent of that for streaming. Is the admin telling you the availability during peak usage? If so, that figure will help you determine QoS requirements.

Streaming and Quality of Service

Quality of service is a term used to refer to any method of managing data traffic to provide the best possible user experience. Typically, QoS refers to some combination of bandwidth allocation and data prioritization.

Many different network components have built-in QoS features. For example, videoconferencing codecs sometimes have built-in QoS features that allow various devices on a call to negotiate bandwidth requirements. Network managers may also use software to set QoS rules for particular users or domain names. AV designers need to be more concerned with QoS policies that are configured directly into network switches and routers.

The bandwidth required for streaming video makes QoS a virtual requirement for streaming across a WAN. If you’re adding a streaming application to an existing network, you need to find out the following:

•  Whether QoS has been implemented across the entire WAN    To provide any benefit, QoS must be implemented on every segment of the network across which the stream will travel.

•  What differentiated-service classes have already been defined for network-based QoS (NQoS)    Every organization prioritizes data differently. You may think that AV data should always be assigned a high service class because it requires so much bandwidth and so little latency. However, a financial institution may value the timely delivery of stock quotes above streaming video. Where will the streaming application fit within the enterprise’s overall data priorities?

•  Whether RSVP can be implemented to reserve bandwidth for the streaming application    RSVP must be implemented on every network segment to function, which is a labor-intensive process. Start by finding out whether RSVP is currently implemented on the network—whether or not you should use it is a separate question. Does the importance of the streaming application really merit reserving 21 percent of the network for its traffic at all times?

•  What policy-based QoS rules are already in place, if any?    Assigning QoS policies to particular user groups can be helpful for networked AV applications. You may want to place streaming traffic to and from the content ingest points in a higher class of service than on the rest of the network, for instance. If remote users are accessing the system via IPSec, you may have to use policy-based QoS. The ports, protocols, and applications of IPSec traffic are hidden from the router.

•  Whether traffic shaping can be used to manage bandwidth    Again, traffic shaping policies will have to be implemented on every router on the network. However, if your client doesn’t mind the additional latency, traffic shaping can be an effective way to prevent the network from being overwhelmed by streaming traffic. Of course, if traffic policing has been implemented, you need to know this too. Find out where bandwidth thresholds have or will be set for streaming traffic and set expectations for dropped or delayed packets accordingly.



Latency

If your application will include live streaming, the amount of latency inherent to the network can be nearly as big a concern as bandwidth availability. What latency is present, and what causes it?

Find out whether your client has an internal speed-test server. If it does, work with the network manager to determine the inherent latency of the network. You can use a WAN speed test to verify network upload and download bit rates, as well as latency inherent between IP addresses.

If your client does not have an internal speed test server, free WAN speed test tools are available from many sites, including www.speedtest.net, www.speakeasy.net, and www.dslreports.com.
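Beyond web-based speed tests, you can get a rough latency reading yourself. The sketch below times a TCP handshake to a target host; it measures connection setup time rather than one-way media latency, but it is a quick sanity check against an SLA target. The host name in the comment is hypothetical:

```python
import socket
import time

def tcp_connect_rtt_ms(host: str, port: int, timeout: float = 2.0) -> float:
    """Rough round-trip latency: time a TCP handshake to the target.

    This is not a substitute for a proper speed test, but it gives a
    quick first estimate of the latency between two points.
    """
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # connection established; we only wanted the handshake time
    return (time.perf_counter() - start) * 1000.0

# Example (hypothetical host): tcp_connect_rtt_ms("speedtest.example.net", 80)
```

Run it several times and at different hours; a single sample says little about latency during peak usage.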

Varying degrees of latency are acceptable, depending on your client’s needs. Here are some examples:

•  Videoconferencing    200 milliseconds

•  High-fidelity audio    50 milliseconds

•  Streaming desktop video    1 second

Eliminating latency entirely from streaming applications is impossible. The trick is to document in your SLA how much latency your client can tolerate for the application. Keep in mind, if your client has a transport security requirement, more latency will be introduced into the network. Encryption and decryption take time.

Network Policies and Restrictions

Remember, what the network is capable of doing and what the network is permitted to do are two separate issues. You must determine not only whether bandwidth optimization and latency mitigation strategies, such as differentiated service, policy-based QoS, multicasting, and UDP transport, are possible but whether they are permitted. Many of these strategies are extremely labor-intensive to implement, and some, such as multicast and UDP transport, may represent significant security risks. For a review of UDP, see Chapter 16.

You should also always investigate what software and hardware the enterprise already has. You can bet that staying within an organization’s existing device families will ease the process of getting approval to connect devices to the network. For example, the software that users currently have (or are allowed to have) will determine what codecs are available on the end-user playback device, which in turn determines the encoder your system should use.

Cheat Sheet: Streaming Network Analysis Questions

Use the following questions to gather information from your client’s IT department about supporting streaming media applications. Consider how network policies with respect to each item could affect the streaming system’s design, cost, and network impact. Some of the technical issues in these questions are covered in greater depth later in this chapter.

Topology

•  Will content be streamed within a LAN, within a WAN, or over the open Internet?

•  If content will be streamed over a WAN, what is the network topology?

•  What is the addressing scheme (DNS, DHCP, static)?

Bandwidth Availability

•  What is the network’s total available end-to-end upload bandwidth?

•  What is the network’s total available end-to-end download bandwidth?

•  What is the network’s typical available worst-case upload bandwidth?

•  What is the network’s typical available worst-case download bandwidth?

•  If content will be streamed to the Internet, what upload and download bandwidth is provided by the Internet service provider (ISP)?

•  If traffic will be streamed over a WAN, has QoS been implemented? If so, what queue level will AV traffic occupy?

•  If traffic will be streamed over a WAN, has traffic shaping been implemented?

•  If traffic will be streamed within a LAN, has Internet Group Management Protocol (IGMP) been implemented? If so, what version?

•  Is a hybrid solution required to preserve bandwidth? If so, is Protocol Independent Multicast (PIM) available for transport?

•  Do you need to relay multicast streams across domains?

Latency

•  How much latency is inherent to the network?

•  Will there be transport-level security requirements?

Network Policies

•  Are multicast and UDP transport permitted?

•  Are there restrictions on protocols allowed on the network?

•  Are there restrictions on what software may be installed on the hosts?

•  How is access control currently managed on the network?

•  What video player software is currently installed on the hosts? What codecs does it include?

Designing the Streaming System

Most live video with low latency requirements will be delivered via UDP transport. That’s because if a stream is time sensitive, it’s generally preferable to drop a few packets than to wait while lost packets are detected and retransmitted.

Multicast transmissions are always delivered via UDP. UDP packets carry little information beyond the payload itself, however, because they’re designed to deliver data as efficiently as possible.
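A minimal sender illustrates why multicast and UDP go together: there is no handshake and no per-receiver state, just datagrams addressed to a group. The group address and port below are hypothetical, and the TTL of 1 keeps the example confined to the local network:

```python
import ipaddress
import socket
import struct

GROUP, PORT = "239.192.0.1", 5004  # hypothetical organization-local group

# Multicast groups live in 224.0.0.0/4; sanity-check the address.
assert ipaddress.ip_address(GROUP).is_multicast

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)  # UDP: no handshake
# Keep the stream from leaking beyond the local network: TTL = 1
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, struct.pack("b", 1))

# One sendto() reaches every subscribed receiver; the sender's cost does
# not grow with audience size. Uncomment on a network that routes multicast:
# sock.sendto(b"\x00" * 188, (GROUP, PORT))  # e.g., one MPEG-TS packet
```

Receivers would join the same group via IGMP (the `IP_ADD_MEMBERSHIP` socket option) rather than connecting to the sender.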

Real-time Transport Protocol (RTP) is a Transport layer protocol commonly used with UDP to provide a better AV streaming experience. (RTP also works with Transmission Control Protocol [TCP], but the latter is more concerned with reliability than speed of transmission.) In addition to a data payload, RTP packets include other information, such as sequence and timing. RTP helps prevent jitter and detects when video packets arrive out of sequence. It also supports one-to-many (multicast) video delivery.

RTP is often deployed in conjunction with the Real-time Transport Control Protocol (RTCP), a Session layer protocol for monitoring quality of service for AV streams. RTCP periodically reports on packet loss, latency, and other delivery statistics so that a streaming application can improve its performance, perhaps by lowering the bit rate of the stream or using a different codec. RTCP does not carry any multimedia data or provide any encryption or authorization methods. In most cases, RTP data is sent over an even-numbered UDP port, while RTCP is sent over the next higher odd-numbered port.
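The RTP header fields and the even/odd port convention described above can be sketched concretely. This is a simplified illustration of the RFC 3550 fixed header, not a full implementation; the payload type 96 is the usual start of the dynamic range:

```python
import struct

def rtp_header(seq: int, timestamp: int, ssrc: int,
               payload_type: int = 96) -> bytes:
    """Pack a minimal 12-byte RTP fixed header (RFC 3550), version 2.

    The sequence number lets receivers detect loss and reordering;
    the timestamp supports jitter buffering and AV sync.
    """
    vpxcc = 2 << 6               # version=2, no padding/extension, zero CSRCs
    m_pt = payload_type & 0x7F   # marker bit clear
    return struct.pack("!BBHII", vpxcc, m_pt, seq, timestamp, ssrc)

def rtcp_port(rtp_port: int) -> int:
    """By convention, RTCP uses the odd port just above RTP's even port."""
    return rtp_port + 1

hdr = rtp_header(seq=1, timestamp=90000, ssrc=0x1234)
assert len(hdr) == 12
print(rtcp_port(5004))  # 5005
```

A real receiver would read the sequence and timestamp fields back out of each packet to reorder late arrivals and schedule playout.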

Other Streaming Protocols

When setting up a streaming system, in addition to transport protocols, you should also consider streaming protocols. Streaming-specific protocols serve different functions and are based largely on the types of end points clients will use to view content.

Real Time Streaming Protocol (RTSP) comes in handy if all end points are desktop computer clients. This is because RTSP supports RTCP, while MPEG transport stream (MPEG-TS), for example, does not. RTSP is a control protocol that communicates with a streaming server to allow users to play, pause, or otherwise control a stream. For its part, RTCP sends back reception statistics, allowing the streaming device to dynamically adjust the stream to improve performance.

If any of the end points are set-top boxes or similar devices (maybe a streaming box behind a screen in a restaurant), you’ll probably use MPEG-TS. Delivering audio and video separately to set-top-style boxes can sometimes cause a perceived lag between the two. MPEG-TS prevents this by multiplexing audio and video, with shared timing information, into a single stream. MPEG-TS is defined as part of MPEG-2, but it is the transport stream used to deliver MPEG-4 audio and video as well.

Session Description Protocol (SDP) is a standardized method of describing media streamed over the Internet. The information carried by SDP generally includes session name, purpose, timing, information about the media being transported (though not the media itself), and contact information for session attendees. In short, SDP is used to kick off a streaming session—ensuring all invited devices are in contact and understand what’s coming next. The information contained in SDP can also be used by other protocols, such as RTP and RTSP, to initiate and maintain a streaming session.
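To make SDP concrete, here is a minimal, hypothetical session description for a one-way multicast video channel (every field value below is illustrative, not from any real deployment):

```
v=0
o=avserver 2890844526 2890842807 IN IP4 10.0.1.20
s=All-Hands Broadcast
c=IN IP4 239.192.0.1/1
t=0 0
m=video 5004 RTP/AVP 96
a=rtpmap:96 H264/90000
```

Reading it line by line: the originator (`o=`) and session name (`s=`), the connection address (`c=`, here a multicast group with a TTL of 1), the timing (`t=0 0`, an unbounded session), and the media line (`m=`), which tells receivers to expect RTP video on port 5004 with dynamic payload type 96, mapped by the `a=rtpmap` attribute to H.264 with a 90 kHz clock.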

High-Quality Streaming Video

As you probably know, the Moving Picture Experts Group defines the compression formats commonly used for streaming high-quality video: MPEG-1, MPEG-2, and MPEG-4. There was once a planned MPEG-3 format, but its work was folded into MPEG-2 and it's no longer used; it shouldn't be confused with the MP3 audio compression format.

All three major MPEG standards stream over UDP. Today, MPEG-2 and MPEG-4 are the most prominent for networked AV systems, though MPEG-1 is also ubiquitous—MP3 audio is part of the MPEG-1 standard. For our purposes, we’ll explore further the flavors of MPEG you’re most likely to use for a streaming design.

MPEG-2

MPEG-2, also known as H.222/H.262, is the most common digital AV compression format. It's an international standard, defined in ISO/IEC 13818. MPEG-2 is used to encode AV data for everything from DVD players and digital cable to satellite TV and more. Notably, MPEG-2 allows text and other data, such as program guides for TV viewers, to be added to the video stream.

There are various ways to achieve different quality levels and file sizes using MPEG-2. In general, however, MPEG-2 streams are too large to travel on the public Internet. MPEG-2 streams have a minimum total bit rate of 300 Kbps. Depending on the frame rate and aspect ratio of the video, as well as the bit rate of the accompanying audio, the total bit rate of an MPEG-2 stream can exceed 10 Mbps.
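A quick capacity check makes the point about MPEG-2 stream sizes. The sketch below uses the high-end figure from the text (roughly 10 Mbps per stream); the link capacities and the 70 percent headroom fraction are hypothetical planning assumptions, not rules.

```python
# Back-of-the-envelope check: does an MPEG-2 stream fit a given link?

def fits_link(stream_kbps: float, link_kbps: float, headroom: float = 0.7) -> bool:
    """True if the stream fits within a fraction (headroom) of link capacity."""
    return stream_kbps <= link_kbps * headroom

mpeg2_kbps = 10_000                     # high-end MPEG-2 stream from the text
print(fits_link(mpeg2_kbps, 100_000))   # 100 Mbps LAN: True
print(fits_link(mpeg2_kbps, 10_000))    # 10 Mbps link: False
```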

MPEG-4

MPEG-4 is designed to be a flexible, scalable compression format. It is defined in the standard ISO/IEC 14496. Unlike MPEG-2, MPEG-4 compresses audio, video, and other data as separate streams. For applications where audio detail is important, such as videoconferencing, this is a major advantage. MPEG-4 is capable of lower data rates and smaller file sizes than MPEG-2, while also supporting high-quality transmission. It’s commonly used for streaming video, especially over the Internet.

The MPEG-4 standard is still developing. It is broken down into parts, which solution providers implement or not, depending on their products. It's safe to say there are few complete implementations of MPEG-4 on the market, and it isn't always clear what parts of MPEG-4 an MPEG-4 solution includes. Therefore, it's important to understand the major components of MPEG-4.

MPEG-4 Levels and Profiles

Within the MPEG-4 specification are levels and profiles. These let manufacturers concentrate on applications without getting bogged down in every aspect of the format. Profiles are quality groupings within a compression scheme. Levels are specific image sizes and frame rates of a profile. This breakdown allows manufacturers to use only the part of the MPEG-4 standard they need while still being in compliance. Any two devices implementing the same MPEG-4 profiles and levels should be able to interoperate.

MPEG-4 Part 10 is the most commonly implemented part of the MPEG-4 standard for recording, streaming, or compressing high-definition audio and video. It is also known as H.264 Advanced Video Coding (AVC). AVC can transport the same quality (resolution, frame rate, bit depth, and so on) as MPEG-2 at far lower bit rates, typically about half the bit rate of MPEG-2.
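The bit-rate savings claim above can be put into numbers. The input figure below is an illustrative example, and the 50 percent savings factor is the rough rule of thumb from the text, not a measured value.

```python
# Rough illustration of the H.264/AVC bit-rate savings over MPEG-2:
# comparable quality at roughly half the bit rate.

def estimate_avc_kbps(mpeg2_kbps: float, savings: float = 0.5) -> float:
    return mpeg2_kbps * savings

print(estimate_avc_kbps(8000))  # an 8 Mbps MPEG-2 stream -> ~4000 Kbps in AVC
```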

AVC profiles include but are not limited to the following:

•  Baseline profile (BP), used for videoconferencing and mobile applications

•  Main profile (MP), used for standard-definition digital television

•  Extended profile (XP), used for streaming video with high compression capability

•  High profile (HiP), used for high-definition broadcast and recording, such as digital television and Blu-ray Disc recording

AVC also includes intraframe compression profiles for files that might need to be edited—High 10 Intra Profile (Hi10P), High 4:2:2 Intra Profile (Hi422P), and High 4:4:4 Intra Profile (Hi444P).

MPEG-H and H.265

MPEG-H is the latest group of standards under development. It includes H.265, also known as High Efficiency Video Coding (HEVC), which is the successor to H.264. With H.265, AV designers can double the data compression ratio of a stream compared to H.264/MPEG-4 AVC without sacrificing video quality. On the flip side, they can offer much better video quality at the same bit rate. H.265 is said to be able to support 8K ultra high-resolution video at up to 8192×4320.

Because networked AV applications can have voracious bandwidth appetites, H.265 is a significant advance. The most recent version of HEVC/H.265 was approved as an ITU-T standard in 2015. Companies are slowly introducing H.265 products, but the industry is still just on the cusp of H.265 adoption.

Unicast and Multicast

As someone who’s probably watched a little TV in your time, you know what broadcasting means. To broadcast is to transmit data to anyone who can receive it at the same time. This doesn’t happen much in networking. When it comes to streaming, we talk in terms of unicast or multicast.

Unicast streaming establishes one-to-one connections between the streaming server that sends the AV data and the client devices that receive it. Each client has a direct relationship with the server. The client sends a request to the server, and the server sends the client a stream in response. Because the server sends a separate stream to each client, each additional client consumes more of the available bandwidth. Streaming media to three clients at 100 Kbps actually uses 300 Kbps of bandwidth. Unicast streams may use either UDP or TCP transport, although with TCP transport, you can assume there will always be some measure of buffering, with client devices waiting for parts of the stream to arrive before playing them.
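The unicast bandwidth arithmetic above scales linearly, which is exactly why large unicast deployments get expensive. A minimal sketch:

```python
# Unicast sends a separate copy of the stream to each client, so
# total bandwidth = per-stream bit rate x number of clients.

def unicast_bandwidth_kbps(stream_kbps: float, clients: int) -> float:
    return stream_kbps * clients

print(unicast_bandwidth_kbps(100, 3))     # 300.0 Kbps, the example from the text
print(unicast_bandwidth_kbps(100, 1000))  # 100000.0 Kbps -- why unicast scales poorly
```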

An encoder can typically produce only five or six unicast streams, depending on the streaming device’s available resources. If your client needs more than a handful of unicast streams, you’ll need a streaming or caching server to replicate and manage them.

That said, unicast is easier to implement than multicast, and it’s cheaper for small applications. Consider using unicast for point-to-point streaming, and even point-to-point plus recording. If you’re unicasting to thousands of users, however, you’ll have to invest in streaming devices that sit at the network’s edge for handling the extra streams, which can be expensive.

Multicast streaming is a one-to-many transmission model. One server sends out a single stream that multiple clients can access. Multicast streams require UDP transport. They can be sent only over LANs or private networks, not over the open Internet. So-called Class D IP addresses are set aside for multicast transmissions (see Table 11-1). In multicast streaming, the following happens:

1. A server sends the stream to a designated Class D IP address, called the host address.

2. Clients subscribe to the host address.

3. Routers send the stream to all clients subscribing to the host address.

Subscribing to an active multicast host address is like tuning into a radio station: All the users receive the same transmission, and none of them can control the playback. There is no direct connection between the server and the clients. Because the server is sending only one stream, the transmission should theoretically take up the same amount of bandwidth no matter how many clients subscribe. Sounds efficient, right?
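The "subscribe" step above is what a client OS does when an application joins a multicast group: it hands the kernel an `ip_mreq` structure (group address plus local interface), which triggers an IGMP join. The sketch below only packs that structure; the group address 239.1.2.3 is a hypothetical administratively scoped address.

```python
# Packing the ip_mreq structure a client uses to join a multicast group.
import socket
import struct

def make_mreq(group_ip: str, iface_ip: str = "0.0.0.0") -> bytes:
    """Pack an ip_mreq structure for IP_ADD_MEMBERSHIP (4 + 4 bytes)."""
    return struct.pack("4s4s",
                       socket.inet_aton(group_ip),
                       socket.inet_aton(iface_ip))

mreq = make_mreq("239.1.2.3")
print(len(mreq))  # 8 bytes: group address followed by interface address

# A receiver would then call:
#   sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
# which causes the host to signal its routers that it wants the stream.
```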

However, not all networks are capable of passing multicast streams. For multicast to work, every router in the network must be configured to understand multicast protocols and Class D addressing. If the router doesn’t recognize the Class D IP address as a multicast host address, the clients will have no way to access the stream. As a result, multicasting can be very labor-intensive to implement. Only small portions of the Internet are multicast-enabled, so it’s usually not possible to multicast over the open Internet. If you want the client’s IT department to allow multicast, you’d better make a good case. Figure 17-2 illustrates the main differences between unicast and multicast streaming.

Images

Figure 17-2    Differences between unicast and multicast

Unicast vs. Multicast

How do you choose whether to stream using unicast or multicast? The decision should be based on your streaming needs and network capabilities. In many cases, you won't have the option to use multicast streaming, particularly if the network you're working with isn't multicast-enabled. Managed switches must be capable of multicasting. Internet Group Management Protocol (IGMP) should be implemented on the router, and if you want to send multicast streams over a wide area network, you have to set up a protocol-independent multicast (PIM) relay, which forwards only the multicast streams that are in use. (We'll cover IGMP, PIM, and more later in this chapter.) All the routers in the network have to support this functionality.

If the network isn’t ready for multicasting, use unicast streaming. Moreover, as long as the projected number of clients and the bit rate of the streams won’t exceed the network’s bandwidth capacity, stick with unicast because it’s much easier to implement. And if you’re implementing video on demand, you want to use unicast so users can control playback.

On the other hand, when you need to stream to a large number of client devices on an enterprise network, multicast can help limit your bandwidth usage. For example, streaming a live lecture to all the student desktops on a campus might benefit from multicasting. Multicasting is typically used for IPTV systems, with each channel sent to multiple hosts.

Compromise between unicast and multicast is possible. For example, using a caching or reflecting server, you can convert a multicast stream to a unicast stream to pass it over a network segment that cannot accept multicast transmissions. This hybrid is the most common approach for WAN implementations.
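The decision guidance above can be distilled into a simple helper. The ordering of the checks follows the text; the threshold comparison is an illustrative judgment call, not a formal rule.

```python
# A decision sketch for choosing between unicast and multicast delivery.

def choose_distribution(multicast_enabled: bool, on_demand: bool,
                        clients: int, stream_kbps: float,
                        capacity_kbps: float) -> str:
    if on_demand:
        return "unicast"    # users need individual playback control
    if not multicast_enabled:
        return "unicast"    # the network can't pass multicast at all
    if clients * stream_kbps > capacity_kbps:
        return "multicast"  # separate unicast copies would exceed capacity
    return "unicast"        # fits comfortably, and unicast is simpler

print(choose_distribution(True, False, 500, 4000, 1_000_000))  # multicast
print(choose_distribution(True, True, 500, 4000, 1_000_000))   # unicast
```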



Implementing Multicast

To review, if you need to send a large number of streams or you need to stream to many users over a LAN, you’ll probably use multicast. It is a great preserver of bandwidth, but it takes an enormous amount of configuration to implement. For starters, every router on the network must be configured to understand what are called group-management protocol requests. These requests allow a host to inform its neighboring routers that it wants to start or stop receiving multicast transmissions. Without group-management protocols, which differ depending on the Internet Protocol version a network uses, multicast traffic is broadcast to every client device on a network segment, impeding other network traffic and overtaxing devices.

Internet Group Management Protocol (IGMP) is the IPv4 group-management protocol. It's gone through a few revisions. IGMPv1 allowed individual clients to subscribe to a multicast channel. IGMPv2 added the ability to unsubscribe from a multicast channel, and IGMPv3 added support for filtering multicast traffic by source. IGMP is a communications protocol used by hosts and their adjacent routers to allow hosts to inform the router of their desire to receive, continue receiving, or stop receiving a multicast.

Multicast Listener Discovery (MLD) is the IPv6 group-management protocol. IPv6 natively supports multicasting, which means any IPv6 router will support MLD. MLDv1 performs roughly the same functions as IGMPv2, and MLDv2 supports roughly the same functions as IGMPv3.

IGMPv3 and MLDv2 also support source-specific multicast (SSM). SSM allows clients to specify the sources from which they will accept multicast content. This has the dual benefit of reducing demands on the network while also improving network security. Any device that has the host address can try to send traffic to the multicast group, but only content from a specified source will be forwarded to the group. This is in contrast to any-source multicast (ASM), which sends all multicast traffic sent to the host address to all subscribed clients.
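The SSM-versus-ASM distinction can be modeled in miniature: with SSM, only traffic from sources the client explicitly requested is forwarded to the group. The addresses below are hypothetical.

```python
# Toy model of source-specific multicast filtering.

def ssm_forward(packets, allowed_sources):
    """Keep only packets whose source is in the client's requested set."""
    return [p for p in packets if p["src"] in allowed_sources]

packets = [{"src": "10.0.0.5", "data": b"lecture"},
           {"src": "203.0.113.9", "data": b"unwanted"}]
print(ssm_forward(packets, {"10.0.0.5"}))  # only the packet from 10.0.0.5 survives
```

Under ASM, by contrast, both packets would reach every subscribed client, which is why SSM reduces load and improves security.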

Protocol-Independent Multicast

With the amount of configuration required to implement multicast streaming, many IT managers are hesitant to implement it beyond a single LAN. Multicast routing beyond the LAN is made possible by protocol-independent multicast (PIM).

PIM allows multicast routing over LANs, WANs, or even, theoretically, the open Internet. Rather than routing information on their own, PIM protocols use the routing information supplied by whatever routing protocol the network is already using, which is why it’s protocol independent. PIM is generally divided into two categories: dense mode and sparse mode.

Dense mode (PIM-DM) sends multicast traffic to every router on the network and then prunes any routers that aren't actually using the stream. By default, dense mode re-floods the network every three minutes. PIM-DM is easier to implement than sparse mode but scales poorly. It's suitable only for applications where the majority of users will join each multicast group.

Sparse mode sends multicast traffic only to those routers that explicitly request it. This is the fastest and most scalable multicast implementation for WANs. We’ll examine it further.

PIM Sparse Mode

PIM sparse mode (PIM-SM) sends multicast feeds only to routers that specifically request the feed using a PIM join message. The multicast source sends its stream to an adjacent router. That router must be configured to send the multicast traffic onward to a specialized multicast router called a rendezvous point (RP). There can be only one RP per multicast group, although there can be several multicast groups per RP.

Here’s how it works:

  1. The destination host sends a join message toward the multicast source or toward the rendezvous point.

  2. The join message is forwarded until it reaches a router receiving a copy of the multicast stream—all the way back to the source if necessary.

  3. The router sends a copy of the stream back to the host.

  4. When the host is ready to leave the multicast stream, it sends a prune message to its adjacent router.

  5. The router receiving the prune message checks to see whether it still needs to send the multicast stream to any other hosts. If it doesn’t, it sends its own prune message on to the next router.

Packets are sent only to the network segments that need them. If only certain users will access the multicast stream or if many multicast streams will be broadcast at once, sparse mode is the most efficient use of network resources. IPTV, for example, is typically implemented using PIM-SM.
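Steps 1 through 3 above can be simulated with a toy router chain: the join message walks upstream until it reaches a router that already carries the stream, and every router along the way begins forwarding. The three-router topology here is hypothetical.

```python
# Toy simulation of a PIM-SM join walking upstream toward the source.

def send_join(upstream, has_stream, start):
    """Walk from `start` toward the source; return routers newly forwarding."""
    path = []
    router = start
    while not has_stream[router]:
        path.append(router)
        has_stream[router] = True  # this router will now carry the stream
        router = upstream[router]  # forward the join one hop upstream
    return path

# edge -> dist -> core; only the core router has the stream so far
upstream = {"edge": "dist", "dist": "core", "core": None}
has_stream = {"edge": False, "dist": False, "core": True}

print(send_join(upstream, has_stream, "edge"))  # ['edge', 'dist']
```

Prune messages work the same way in reverse: a router stops forwarding only once no downstream host still needs the stream.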

However, PIM-SM also requires the most configuration (see Figure 17-3). All routers on the network must be aware of the multicast rendezvous points. Routers should also be configured with access lists so that only designated users will have access to the multicast group. RPs themselves must be configured with a traffic threshold. Once the threshold is exceeded, join messages will bypass the RP and go directly to the source. This threshold is usually set to zero automatically, which means the RP will direct all join requests to the source by default. It can also be lifted entirely. The decision depends on how much multicast traffic each network segment can bear.

Images

Figure 17-3    Setting up a multicast network using PIM sparse mode

Images

NOTE    PIM "sparse-dense" mode is not a separate type of PIM but a setting on Cisco routers that allows an interface to operate in sparse mode for some multicast groups and dense mode for others, supporting networks that use both modes for different applications.

Multicast Addressing

You can’t use just any IP address as a multicast host address. The Internet Assigned Numbers Authority (IANA), the same organization responsible for assigning port numbers, has divided the Class D IP address range into a series of address blocks allocated to different types of multicast messaging. See Table 17-1.

Images

Table 17-1    IANA Multicast Address Assignments

Each block in the Class D address assignment table serves a different function.

•  The local network control block, or link-local block, is used for network protocols that will remain within the same subnet. Control or announcement messages that must be sent to all the routers in a subnet are addressed to destinations in this range.

•  The internetwork control block is used to send multicast control messages beyond the local network segment. For example, in PIM sparse mode, rendezvous point information may be communicated across broadcast domains using an address in this range.

•  Ad hoc addresses are assigned by IANA to multicast control protocols that don’t fit clearly into the first two blocks. However, much of the ad hoc block was assigned prior to the existence of usage guidelines, so many addresses in this range are simply reserved for commercial uses.

•  The SDP/SAP block is reserved for applications that send session announcements.

•  The source-specific multicast block provides multicast host addresses for SSM applications.

•  The GLOP block (see the following note) provides multicast host addresses for commercial content providers.

•  Some address ranges remain unassigned for experimental or future uses.

•  The administratively scoped block is reserved for private multicast domains. These addresses will never be assigned by IANA to any specific multicast technology or content provider. You’re free to use these as multicast host addresses within the enterprise network, but they can’t be used to send multicast content over the Internet.

Any multicast host address used by a private enterprise (other than a commercial content provider, such as an ISP or television network) will come from either the SSM block or the administratively scoped block.
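Python's standard `ipaddress` module can check which block a candidate host address falls in. The block boundaries below follow the IANA assignments discussed above (239.0.0.0/8 administratively scoped, 232.0.0.0/8 SSM); the example addresses are hypothetical.

```python
# Classifying multicast addresses against the IANA block assignments.
import ipaddress

ADMIN_SCOPED = ipaddress.ip_network("239.0.0.0/8")  # private multicast domains
SSM_BLOCK = ipaddress.ip_network("232.0.0.0/8")     # source-specific multicast

def classify(addr: str) -> str:
    ip = ipaddress.ip_address(addr)
    if not ip.is_multicast:                 # outside the Class D range entirely
        return "not multicast"
    if ip in ADMIN_SCOPED:
        return "administratively scoped"
    if ip in SSM_BLOCK:
        return "source-specific"
    return "other multicast block"

print(classify("239.1.2.3"))    # administratively scoped
print(classify("232.10.0.1"))   # source-specific
print(classify("192.168.1.1"))  # not multicast
```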

Images

NOTE    GLOP is not an acronym. It turns out the original authors of this request for comment (RFC) needed to refer to this mechanism by something other than, “that address allocation method where you put your autonomous system in the middle two octets.” Lacking anything better to call it, one of the authors simply began to refer to this as “GLOP” addressing, and the name stuck.

Streaming Reflectors

A streaming video reflector (sometimes called a relay) subscribes to a video stream and re-transmits it to another address. This re-transmission can be any combination of multicast or unicast inputs and outputs. In the case of forwarding multicast across a VPN, a pair of reflectors can be used. In such a situation, a streaming source outputs a multicast stream. The reflector service subscribes to the stream and de-encapsulates layers 3 and 4 (IP and UDP headers). It then re-encapsulates the data with new TCP and IP headers and forwards the packet. The receiving end receives the unicast stream and performs the reverse process, forwarding a multicast stream.
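The core of the relay idea is receiving a datagram on one socket and re-sending its payload toward a new destination. The loopback-only sketch below demonstrates just that hop; a real reflector would also join a multicast group and might convert UDP to TCP for the WAN leg, and the ports here are assigned by the OS rather than taken from any real deployment.

```python
# Loopback sketch of one reflector hop: receive, then re-address and forward.
import socket

def relay_once(in_sock, out_sock, dest):
    """Receive one datagram and forward its payload to a new destination."""
    data, _ = in_sock.recvfrom(2048)  # old IP/UDP addressing is dropped on receive
    out_sock.sendto(data, dest)       # payload re-encapsulated toward `dest`

in_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
in_sock.bind(("127.0.0.1", 0))
in_sock.settimeout(2)
out_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 0))
recv_sock.settimeout(2)

# A "source" sends toward the reflector's input port...
src = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
src.sendto(b"stream payload", in_sock.getsockname())

# ...and the reflector forwards the payload to the final receiver.
relay_once(in_sock, out_sock, recv_sock.getsockname())
print(recv_sock.recv(2048))  # b'stream payload'
```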

When implementing multicast reflecting, you need to consider the following:

•  Administration    The reflector is typically configured by the same administrator who is responsible for the streaming service. Beyond the initial configuration at installation, no network configuration is required to change the reflector service. For occasional use, such as company announcements, a reflector at the source site can be configured and left active with the receive-side reflectors for the duration of the event.

•  Bandwidth    Because the reflected streams are individually configured, there is no risk of inadvertently forwarding multicast streams. The receive-side reflector requests the stream based on local administration. While the receive-side reflector service is enabled, the bandwidth is used even if no one is subscribed to the multicast stream. There is a small packet overhead increase with the conversion from UDP to TCP.

•  Scalability    A single reflector at the source location can reflect to multiple receive-site reflectors, but the bandwidth is unicast, so each receive-side reflector gets a separate stream.

•  Configuration    A separate PIM rendezvous point will need to be configured at the receiving site. Multicast addresses do not need to map between the sites. Each site can have its own addressing scheme. For IPTV applications, a separate channel guide will have to be implemented if the multicast addresses do not map between the sites.

Chapter Review

Streaming is an increasingly important component of AV systems design. Once you’ve performed the requisite analyses for a streaming system, you need to be able to identify the impact of bandwidth restrictions and network policies and the impact of inherent network latency on the streaming application. You should also calculate the required bandwidth for uncompressed digital AV streams and factor in the appropriate transport and distribution protocols for the client’s streaming needs.

Always keep in mind that streaming AV consumes network resources—perhaps more resources than a network manager anticipated. Document everything, from the needs analysis to network restrictions to the technical specifications of your proposed solution. If you can make the case that your streaming system meets the customer’s needs and you’ve created a design that respects the requirements of the larger enterprise network, you will be successful.

Review Questions

The following questions are based on the content covered in this chapter and are intended to help reinforce the knowledge you have assimilated. These questions are not extracted from the CTS-D exam nor are they necessarily CTS-D practice exam questions. For an official CTS-D practice exam, download the Total Tester as described in Appendix D.

  1.  If your streaming AV system does not support High-bandwidth Digital Content Protection (HDCP), it’s likely the system _______________.

A. Won’t be able to play high-definition video content

B. Won’t be able to play standard-definition video content

C. Won’t be able to play live video streams

D. Won’t be able to play protected audio streams

  2.  In a converged network, where 70 percent of the bandwidth capacity is available and 30 percent of the available capacity is available for streaming, what percentage of the network’s rated capacity is available for streaming?

A. 21 percent

B. 30 percent

C. 50 percent

D. 70 percent

  3.  What might be considered acceptable latency for streaming desktop video?

A. 50 microseconds

B. 30 milliseconds

C. 200 milliseconds

D. 1 second

  4.  Which of the following protocols might you use in a streaming AV system? (Select all that apply.)

A. User Datagram Protocol

B. Real-time Transport Protocol

C. Real-time Transport Control Protocol

D. MPEG

  5.  Which of the following statements apply to unicast streaming?

A. It consumes more bandwidth the more people who need to view the stream.

B. It uses group-management protocols to control streaming channels.

C. It requires special routers called rendezvous points.

D. It uses special IP addresses assigned by the Internet Assigned Numbers Authority.

  6.  If your multicast streaming AV system runs on an IPv6-based network, the group-management protocol to use is called _______________.

A. Internet Group Management Protocol

B. Multicast Listener Discovery

C. Source-Specific Multicast

D. Protocol Independent Multicast

Answers

  1. A. If your streaming AV system does not support High-bandwidth Digital Content Protection (HDCP), it’s likely the system won’t be able to play high-definition video content.

  2. A. In a converged network, where 70 percent of the bandwidth capacity is available and 30 percent of the available capacity is available for streaming, 21 percent of the network’s rated capacity is available for streaming.

  3. D. In general, streaming desktop video can tolerate one second of latency.

  4. A, B, C. When designing a streaming AV system, you might use User Datagram, Real-time Transport, and Real-time Transport Control Protocols.

  5. A. Unicast consumes more bandwidth the more people who need to view the stream. The three other statements apply to multicast.

  6. B. If your multicast streaming AV system runs on an IPv6-based network, the group-management protocol to use is called Multicast Listener Discovery.
