Digital compression

Digital compression is now an essential characteristic of satellite transmissions and, as we have already seen, increasingly of terrestrial microwave as well. Compression is a highly complex subject, and a full explanation is far beyond the scope of this book, but we do need some concept of the process in order to understand its fundamental advantages (and disadvantages!).

So why would we want to compress a signal? The answer is that in an ideal world, we would not. If we had unlimited bandwidth available to send our information, there would be no need for compression. Unfortunately, bandwidth is a valuable resource, and great efforts are made to use it efficiently.

Digital compression is essentially about squeezing large-bandwidth signals into the narrower bandwidth (frequency spectrum) available. Satellite communication revolves around the issues of power and bandwidth, and by digitizing and compressing the signal we can reduce the demand for both power and bandwidth – and hence reduce cost.

Hence there is a need to strip out the redundant and insignificant information in the signal, thus producing a compressed signal.

The process of compression can be likened to the use of concentrated orange juice.

Squeezing the juice

Consider a carton of concentrated orange juice that you buy in a supermarket. Before you picked it up off the shelf, the process began with squeezing oranges to produce orange juice, of which the major constituent is water.

The water is removed to produce a concentrate that can be put into a small carton and transported to the shop for sale. The customer buys the orange juice concentrate, takes it home, and reconstitutes the orange juice by adding back in water.

Effectively, the orange juice is compressed (just like the signal), so that a comparatively large volume can be transported in a small carton (or bandwidth), and then the juice is reconstituted by adding the equivalent volume of water back in that was removed in the first place (decompressed).

What is the advantage of this to the consumer? Because a relatively large amount of orange juice has been reduced to a smaller volume in the carton, the costs of transportation are much lower, the orange juice can be kept on the shelf for longer, and does not need as much care and attention in storage and transportation as fresh juice.

Because of these factors, the cost of the concentrated orange juice is much less than the freshly squeezed juice, and is therefore much more affordable.

But there is a compromise, as I hear you say ‘Ah, but it does not taste the same as freshly squeezed juice’ – and that inevitably is also the result of the compression processes we will be looking at.

Just as the reconstituted juice is not quite the same as when it began life after being squeezed from the fruit, at the end of the compression and decompression process the signal is never quite the same as the original – something is lost along the way, just as with the orange juice.

However, it costs less in the end, and is a reasonable facsimile of the original for most people. But there should never be any doubt that a signal that has been compressed, transported and decompressed (reconstituted) will never be the same as the original.

So, compression is necessary to reduce the amount of bandwidth used on the satellite, and hence reduce the cost of space segment. The process of digital compression is increasingly efficient and cost-effective, as it uses technology developed for the computer industry.

Pixels

In digital video compression, each frame of the TV picture can be considered as being made up of a number of picture elements (‘pixels’): a pixel is the smallest element of a picture that can be seen by the human eye and measured (quantified).

Each pixel is measured – sampled – and this sample is an instantaneous measurement of the picture at a particular point in time. As we learnt earlier in the description of the TV picture, a TV frame is made up of a number of lines (625 or 525), and a picture can be divided horizontally and vertically into a number of pixels – the rows of pixels in the horizontal direction corresponding to the lines of the TV picture.

The number of pixels defines the resolution or sharpness of the picture. This is the same as the way a photograph in a newspaper is made up of individual dots of ink – the more dots per millimetre, the greater the resolution or detail of the picture.
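
As a simple illustration (the frame sizes here are assumptions chosen for the example, not figures from the text), the short sketch below counts the pixels in a full-size raster and in one of half the width and height:

    # Pixel count for two assumed frame sizes: halving both dimensions
    # leaves only a quarter of the pixels, and hence a quarter of the detail.
    for width, height in ((720, 576), (360, 288)):
        print(f"{width} x {height} frame: {width * height:,} pixels")

Just as with the dots of ink in a newspaper photograph, the frame with more pixels can carry more detail.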

Principles of compression – digital sampling

We covered digitization of a signal earlier, but just to recap, a digital signal conveys information as a series of ‘on’ and ‘off’ states, which are represented as the numbers ‘1’ and ‘0’ in binary arithmetic.

In a stream of 1s and 0s, each 1 and each 0 is termed a bit, and a signal is transmitted at a certain speed or bit-rate, measured in bits per second (bps). The analogue signal is converted to a digital signal by a process called sampling: the signal is instantaneously measured, and the value of the signal at that instant is converted to a binary number. The signal is typically sampled thousands of times per second for audio and millions of times per second for video.

The more accurate the binary number obtained from each sample, the better the quality of the signal that can be transmitted, and the more accurately the original signal can be reproduced. The greater the number of bits per sample, the more precise is the instantaneous value of the signal obtained. These bits are transmitted at a given data rate, usually expressed in millions of bits per second (Mbps) because of the huge number of bits to be transmitted.
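
To make the idea of sampling and bits per sample concrete, here is a minimal Python sketch (not tied to any broadcast standard) that samples a sine wave and rounds each sample to the nearest level an n-bit number can represent; the more bits, the smaller the worst-case error:

    import math

    def sample_and_quantize(freq_hz, sample_rate_hz, bits, duration_s=0.001):
        """Sample a 1 V peak sine wave and quantize each sample to 'bits' bits."""
        levels = 2 ** bits                       # distinct values available per sample
        results = []
        for n in range(int(sample_rate_hz * duration_s)):
            t = n / sample_rate_hz               # instant at which the signal is measured
            value = math.sin(2 * math.pi * freq_hz * t)     # analogue value, -1..+1
            code = round((value + 1) / 2 * (levels - 1))    # the binary number transmitted
            quantized = code / (levels - 1) * 2 - 1         # value recovered at the far end
            results.append((value, quantized))
        return results

    # A 1 kHz tone sampled 48 000 times per second, at increasing accuracy:
    for bits in (4, 8, 10):
        error = max(abs(v - q) for v, q in sample_and_quantize(1000, 48000, bits))
        print(f"{bits:2d} bits per sample -> worst-case error {error:.4f}")

With 10 bits per sample each measurement can take one of 1024 values, so the reconstructed value sits much closer to the original than with 4 bits (16 values).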

In both DSNG and DENG, digital transmissions are typically compressed down to a rate of 8 million bits per second (8 Mbps) or lower. For higher quality, the bit-rate may be as high as 18 Mbps, but this is usually only for very high quality event or sports coverage.

The original standard for digital video was published as CCIR Recommendation 601 (now ITU-R BT.601). In general, digital video at broadcast quality is loosely referred to as ‘601’ video, and an uncompressed digital video signal has a data rate of 270 Mbps. Signals at this data rate are referred to as serial digital interface (SDI) signals, as the digits are transported as a ‘serial’ stream of data (as opposed to a parallel stream).

The significance of this is that, particularly in DSNG trucks, the baseband video is processed and transported between equipment within the truck as an SDI signal. Additionally, some ENG camcorders can produce an SDI output on the rear of the camera, which can make subsequent handling of the signal much easier. Audio, on the other hand, is generally left in analogue form up to the input of the digital compression encoder.

An SDI signal is over thirty-three times the data rate of the typical 8 Mbps DSNG/DENG signal, and this data rate of 270 Mbps, when modulated, would require over 250 MHz of bandwidth.

It is not feasible to allocate this amount of bandwidth on commercial satellite systems or terrestrial microwave bands, hence the need for compression.

Video is typically sampled at 10-bit accuracy (i.e. 10 bits are used to describe each sample), with a sample for each luminance pixel value and also for each of the two colour-difference pixel values. Thus we have separate streams of samples for luminance and for each of the colour-difference (chrominance) signals.
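
As a rough check of the figures quoted above, the arithmetic below assumes the standard Rec. 601 sampling frequencies of 13.5 MHz for luminance and 6.75 MHz for each of the two colour-difference signals (values that are part of the 601 specification, though not stated in the text):

    luma_rate   = 13.5e6        # luminance (Y) samples per second
    chroma_rate = 6.75e6        # samples per second for each colour-difference signal
    bits_per_sample = 10        # 10-bit accuracy, as described above

    total_bps = (luma_rate + 2 * chroma_rate) * bits_per_sample
    print(f"Uncompressed 601/SDI rate: {total_bps / 1e6:.0f} Mbps")     # 270 Mbps
    print(f"Compression factor vs 8 Mbps: {total_bps / 8e6:.1f}x")      # about 33.8x

which reproduces both the 270 Mbps SDI rate and the ‘over thirty-three times’ figure mentioned above.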

Redundancy

Once a signal has been digitized, it is a bit stream at a given overall data rate. Within that data stream there will inevitably be sets of data that are repeated, and these repeated sections represent redundancy in the signal (like the water in the orange juice).

The aim of any compression system is to remove as much redundancy in the information signal as possible to minimize the bandwidth or data rate required.

Compression relies on the limitations of human psychovisual and psychoacoustic characteristics – the characteristics of human sensory perception in sight and hearing.

Digital compression is ‘perceptive’ compression, since the results are pictures and sounds that ‘trick’ the brain into thinking that the material looks and sounds like the original. Looking at our orange juice analogy, we know that the taste of the reconstituted juice is ‘perceptively’ close to the original – but not identical.

With the orange juice, we removed the redundant component – water – to achieve the compression. That is the essence of video and audio compression – to remove the redundancy in the signal.

So how do we identify the redundant parts of the signal? In any frame of a TV picture there are areas of the image that have the same values of brightness and colour. So, instead of sending a repeating string of numbers representing the individual brightness and colour values in these areas, a single string of numbers can be sent that represents the brightness and colour of those parts of the image that are essentially the same.
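
A minimal sketch of this idea, assuming nothing about the far more sophisticated methods real broadcast encoders use, is simple run-length coding: a repeating string of identical brightness values is replaced by the value plus a count.

    def run_length_encode(values):
        """Replace runs of identical values with (value, run length) pairs."""
        encoded = []
        run_value, run_length = values[0], 1
        for v in values[1:]:
            if v == run_value:
                run_length += 1                 # same value as before: extend the run
            else:
                encoded.append((run_value, run_length))
                run_value, run_length = v, 1
        encoded.append((run_value, run_length))
        return encoded

    # One line of pixels crossing a flat grey area, a flat white area, then grey again:
    line = [128] * 20 + [235] * 12 + [128] * 8
    print(run_length_encode(line))              # [(128, 20), (235, 12), (128, 8)]
    print(f"{len(line)} values reduced to {len(run_length_encode(line))} pairs")

Forty individual values become three pairs – the repetition (the redundancy) has been removed, yet the line can be perfectly restored at the far end.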

In audio, there are parts of the signal that can be removed without the brain noticing the difference. The standards used in digital newsgathering signal compression integrate both the audio and the video into a single combined (multiplexed) signal.
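
As a final illustration, the sketch below shows the bare idea of multiplexing – interleaving labelled chunks of compressed video and audio into one stream so that the receiver can separate them again. It makes no assumptions about the real transport-stream formats used in broadcasting, which are considerably more elaborate.

    def multiplex(video, audio, video_per_audio=2):
        """Interleave the two lists, taking several video chunks per audio chunk."""
        stream, v, a = [], 0, 0
        while v < len(video) or a < len(audio):
            for _ in range(video_per_audio):
                if v < len(video):
                    stream.append(video[v]); v += 1
            if a < len(audio):
                stream.append(audio[a]); a += 1
        return stream

    video_chunks = [("video", i) for i in range(6)]
    audio_chunks = [("audio", i) for i in range(3)]
    print(multiplex(video_chunks, audio_chunks))
    # The decoder simply routes 'video' chunks to the video decoder and
    # 'audio' chunks to the audio decoder.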
