10.4 3D Video Broadcasting

Services that broadcast 3D video content are naturally expected to maintain backwards compatibility with legacy video services that do not provide 3D rendering. This constraint has influenced the design of most of the 3D video broadcasting systems available today.

Broadly, broadcasting 3D video involves capturing the 3D video information, encoding it into a (usually compressed) representation, transporting it over a network, and receiving it at the client. All of these components have been described throughout this book, so this section focuses on how they are applied in different system configurations for 3D broadcasting.

The first element of the broadcasting system to consider is the video encoder. Here, approaches rely heavily on modified MPEG-2 and H.264 codecs. One approach [28] is to encode the left view using standard MPEG-2 encoding. This part of the video bit stream is treated as a base layer and can be decoded by non-3D video receivers. To provide a 3D video service, the right view is encoded as an enhancement layer and transmitted as a second component of the video stream. In the ATTEST (Advanced Three-Dimensional Television Systems Technologies) project, 3D video was encoded following a video-plus-depth representation using the H.264/AVC codec. Interoperability with legacy systems not capable of rendering 3D video is maintained, in this case, through the availability of a 2D video description within the transmitted stream. This approach has the advantage that the additional information needed to provide the 3D service, in the form of a depth description, adds only a relatively small overhead. Indeed, as reported in [28], the depth information can be compressed at rates of 200–300 kbps, roughly 10% of a typical 3 Mbps broadcast-quality 2D video stream.
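As a quick sanity check on these figures, the following sketch (plain Python; the bitrates are simply the numbers quoted above from [28]) computes the overhead that the depth component adds to the 2D base stream.

# Back-of-the-envelope check of the video-plus-depth overhead reported in [28].
# All figures are taken from the text; nothing here is normative.

video_bitrate_bps = 3_000_000           # typical broadcast-quality 2D video (3 Mbps)
depth_bitrates_bps = (200_000, 300_000) # reported range for the compressed depth map

for depth in depth_bitrates_bps:
    overhead = depth / video_bitrate_bps
    total = video_bitrate_bps + depth
    print(f"depth {depth/1e3:.0f} kbps -> overhead {overhead:.1%}, "
          f"total stream {total/1e6:.2f} Mbps")

Running this prints overheads of 6.7% and 10.0%, consistent with the "roughly 10%" figure above.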

Transmission of broadcast 3D video can be done using a number of transport methods that are also used, with varying degrees of modification, for legacy 2D video. One such case is transmission over the infrastructure for broadcasting 2D digital television. Used in Europe, Australia, and much of Asia and Africa, the DVB (Digital Video Broadcasting) standard is the most widely used standard for broadcasting 2D digital TV in the world. Depending on the application, there are different variants of the DVB standard [29]. DVB-T, DVB-S, and DVB-C are used for terrestrial, satellite, and cable digital television, respectively. The three variants share the use of an outer Reed–Solomon (204,188) FEC code and an inner punctured convolutional code. After the FEC blocks, and following an interleaving operation, DVB-T transmits using OFDM, where each subcarrier can carry QPSK, 16-QAM, or 64-QAM-modulated symbols. Unlike DVB-T, DVB-S transmits using QPSK or BPSK, while DVB-C uses 16-QAM, 32-QAM, 64-QAM, 128-QAM, or 256-QAM. In addition to these three variants, there is also the DVB-H standard for broadcasting digital video to handheld devices [30].
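To make the coding overhead concrete, the sketch below (Python; illustrative values only) combines the RS(204,188) outer code rate with the standard punctured inner rates of 1/2, 2/3, 3/4, 5/6, and 7/8 to estimate the useful payload bits carried per modulated symbol for the DVB-T constellations. Actual net bitrates also depend on OFDM parameters such as FFT size and guard interval, which are not modelled here.

# Combined channel-coding overhead in DVB, assuming the outer RS(204,188)
# code and the standard punctured convolutional inner rates.

RS_RATE = 188 / 204                      # outer Reed-Solomon code rate
INNER_RATES = [1/2, 2/3, 3/4, 5/6, 7/8]  # punctured convolutional rates
BITS_PER_SYMBOL = {"QPSK": 2, "16-QAM": 4, "64-QAM": 6}

for name, bits in BITS_PER_SYMBOL.items():
    for r in INNER_RATES:
        useful_bits = bits * r * RS_RATE  # payload bits per modulated symbol
        print(f"{name:7s} inner rate {r:.3f}: {useful_bits:.2f} useful bits/symbol")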

DVB-H adapts the DVB standard for the transmission of digital video over the much more error-prone environment found in mobile communications. This is done by keeping the final stage of data transmission as in DVB-T but, at the transmitter side, feeding the output of the video encoder and of the higher network layers (i.e., transport and network layers) into an MPE-FEC (multiprotocol encapsulation – forward error correction) module before it goes through the DVB-T transmission stages. The function of the MPE-FEC module is to add extra error protection for transmission over the more challenging wireless channel. A study of this system was presented in [31], where the extra error correction in the MPE-FEC module is implemented with a Reed–Solomon code. The results show that MPE-FEC adds the needed extra level of protection against errors, but for severely degraded channels it is also necessary to apply unequal error protection techniques.
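The sketch below illustrates how an MPE-FEC frame might be assembled, assuming the Reed–Solomon (255,191) code commonly cited for DVB-H: IP datagrams fill a 191-column application data table, and the code is applied across each row to produce 64 parity columns. It uses the third-party reedsolo Python package purely for illustration; the frame height and the all-zero fill pattern are placeholder assumptions, not figures from [31].

# Sketch of an MPE-FEC frame, assuming RS(255,191) row-wise coding.
from reedsolo import RSCodec

ROWS = 256         # frame height (the standard defines several allowed heights)
DATA_COLS = 191    # application data table columns (IP datagrams + padding)
PARITY_COLS = 64   # RS data table columns

rs = RSCodec(PARITY_COLS)  # RS(255,191) over GF(2^8)

# Fill the application data table with dummy "IP datagram" bytes.
adt = [bytearray(DATA_COLS) for _ in range(ROWS)]

# Encode row by row: each 191-byte row yields 64 parity bytes.
rsdt = [bytes(rs.encode(row))[DATA_COLS:] for row in adt]

assert all(len(p) == PARITY_COLS for p in rsdt)
print(f"frame: {ROWS}x{DATA_COLS} data + {ROWS}x{PARITY_COLS} parity, "
      f"parity overhead {PARITY_COLS / DATA_COLS:.1%}")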

It is also commonplace to include in the 3D video transmission chain the processing steps for delivery over IP networks. With IP-encapsulated 3D video, the service can be delivered using server unicast to a single client, server multicast to several clients, peer-to-peer (P2P) unicasting, or P2P multicasting. For transmission of 3D video over IP networks, the protocol stack can be organized as RTP/UDP/IP. This is the most widely used configuration today because UDP suits applications with timing constraints on the transmitted video, but it suffers from the fact that UDP provides no congestion control functionality. Transmitting video over protocols that do provide congestion control is especially useful given the large volumes of traffic expected from video services. Consequently, a new protocol, the "datagram congestion control protocol" (DCCP), was published in 2006 [32] as an alternative to UDP that provides congestion control functionality. One of the congestion control methods provided by DCCP calculates a transmit rate using the TCP throughput equation (which depends on an estimated round trip time for the connection). This method yields a smooth transmit rate that changes slowly over time and is also fair to the TCP traffic streams with which it must coexist on the Internet. DCCP also provides a congestion control mechanism that behaves similarly to the Additive Increase, Multiplicative Decrease (AIMD) method used in TCP.
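To make the rate calculation concrete, the sketch below implements the TCP throughput equation from RFC 3448, on which DCCP's TFRC-based congestion control (CCID 3) is built. The packet size, round-trip time, and loss event rate in the example are illustrative values, not figures from the text.

# TCP throughput equation per RFC 3448: s = packet size (bytes),
# rtt = round-trip time (s), p = loss event rate, t_RTO ~= 4*rtt,
# b = packets acknowledged per ACK (1 here). This sketches the rate
# calculation only, not the full protocol machinery.
from math import sqrt

def tfrc_rate(s: int, rtt: float, p: float, b: int = 1) -> float:
    """Allowed transmit rate in bytes per second."""
    t_rto = 4 * rtt
    denom = (rtt * sqrt(2 * b * p / 3)
             + t_rto * (3 * sqrt(3 * b * p / 8)) * p * (1 + 32 * p * p))
    return s / denom

# Example: 1460-byte packets, 100 ms RTT, 1% loss event rate.
print(f"{tfrc_rate(1460, 0.100, 0.01):,.0f} bytes/s")

With these example values the equation allows roughly 160 kB/s (about 1.3 Mbps), and the rate falls smoothly as the loss event rate rises, which is the behavior that makes this method TCP-friendly.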
