Chapter 11:
HDTV, 24p, and the Future

Writing about the future is dangerous. Time is the test of any prediction, and even the most well-educated guesses can seem ridiculous in the years to come, particularly when it comes to technology. One of my favorite books is a 1950s science work about outer space. The book quotes the scientists of the day, including Arthur C. Clarke, and claims that one day “soon” we will have outposts on asteroids and study Venusian swamp creatures. Computer mogul Bill Gates is famously, if perhaps apocryphally, quoted as asking why anyone would ever need more than 640K of memory.

With those thoughts in mind, this chapter is not so much a prediction of the future as it is a report of what is here today and might become widely accepted.

The Standard(s)

HDTV (High Definition Television) comes in a variety of flavors. In fact, there are 18 standards. You’ve probably encountered some of the buzzwords of the HDTV age: 480i, 720p, 1080p, up-convert, down-convert, 16:9 and so forth. So we’ll begin by defining some of these new and somewhat confusing terms.

Three terms define the format of an HDTV standard. The first is a number giving the vertical resolution of the picture, that is, its number of scan lines. The second is a letter, i, p, or sF, which tells how the frame is stored or displayed. The third defines the frame rate of the images. An i frame is interlaced: it is segmented into fields, as in the standard definition television (SDTV) signals used today. SDTV signals scan every other line of information to form a field (as discussed in Chapter 3), then return to the top of the frame and scan the lines skipped in the first field to form a second field. A p frame is progressive, that is, non-interlaced. The sF frame is a segmented frame, not to be confused with a field-based frame; segmented frames are discussed later in this chapter.

A progressive frame contains all of its picture information in a single temporal frame; because it is non-interlaced, it carries the p suffix. So 1080/24p denotes a frame 1080 scan lines high, stored as a single progressive frame, displayed at a frame rate of 24 fps.

The progressive frame is much like a film frame: it holds all of the elements of an entire picture at once. Interlaced fields, by contrast, appear on the screen in succession. When a standard television signal is recorded, the first field is recorded, then the second. Field-based frames are also vertically filtered to prevent small-area flicker, or “twitter,” between the fields. Each field represents a separate moment in time, a motion phase whose content can differ from the field before it. For example, in a picture of a child kicking a ball, the position of the child’s leg in the first field differs from its position in the second. That is the nature of field-based imaging. A progressive image, however, holds still for the entire duration of the frame. There is no interfield motion or jitter, because the frame is recorded all at once, and when a progressive frame is displayed, all of its lines of resolution are shown at every point in time. In this respect, a progressive frame is clearly superior in quality to an interlaced frame.
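The field/frame distinction can be sketched in a few lines of Python. This is a toy model for illustration, not any real video API: an interlaced recording splits each frame into two fields sampled at different moments, while a progressive frame keeps every line from a single instant.

```python
def split_into_fields(frame):
    """Split a frame (a list of scan lines) into two interlaced fields:
    field 1 holds the odd-numbered lines, field 2 the even-numbered ones."""
    field1 = frame[0::2]  # lines 1, 3, 5, ...
    field2 = frame[1::2]  # lines 2, 4, 6, ...
    return field1, field2

def weave_fields(field1, field2):
    """Reassemble two fields into a full frame. If the fields were captured
    at different moments, a moving subject shows comb artifacts here."""
    frame = []
    for a, b in zip(field1, field2):
        frame.extend([a, b])
    return frame

# A 6-line toy frame: a progressive capture weaves back losslessly,
# because both "fields" came from the same instant in time.
frame = ["line%d" % n for n in range(6)]
f1, f2 = split_into_fields(frame)
assert weave_fields(f1, f2) == frame
```

With a moving subject, the two fields would no longer weave back cleanly, which is exactly the interfield motion described above.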

There are advantages to using a progressive frame when it comes to interformat delivery of SDTV signals, too. Consider the common task of transferring an NTSC picture to PAL. If you’re from a PAL country, you’re probably frowning about now. There is nothing uglier than stretching a 525-line frame into 625 lines and eliminating 5 frames per second; the result is a jumpy, blurred, and inferior picture.

But if we transferred progressive frames, things would be different. PAL is field based, just like NTSC, but instead of two fields of roughly 263 lines each, it has two fields of roughly 313 lines, a big difference, for sure. If we transferred full 525-line progressive frames to the field-based PAL format, the resolution artifacts would largely disappear, because all 525 lines of the source would be present at every moment in the progressive frame.

The best example of a progressive frame would be your computer’s monitor. If you compare the definition of your computer monitor with a conventional NTSC video signal, you’ll note that graphics are softer on NTSC. Images that tend to shimmer on NTSC don’t have the same artifacts on a computer desktop. And a frozen frame in NTSC will contain motion artifacts. A screenshot of your computer monitor contains no such issues.

So the advent of HDTV has created more alternatives than ever for broadcasters. But in that process, it has created a nightmare for those wishing to create material for a mass audience. Which format should be used?

Bandwidth and Format

The progressive frame presents new bandwidth issues for broadcasters. Instead of delivering 60 fields of information, 30 full frames must be shown. Broadcasting full frames demands more bandwidth, and the higher resolution of HDTV demands more still. In the case of 24p, that need is offset somewhat by the lower frame rate of 24 fps.

For broadcasters, HDTV points to only one thing: compression. Television stations and broadcast companies do not have enough channel bandwidth to deliver an uncompressed HDTV signal. To solve this, it is presumed that some compression, possibly a great deal of it, will be necessary. Estimates run as high as 65:1 and beyond, which seems to defeat the intent of HDTV in the first place.
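The arithmetic behind those compression estimates is easy to sketch. The figures below are illustrative assumptions (8-bit 4:2:2 sampling and a roughly 19.4 Mbit/s broadcast channel, the commonly cited ATSC payload), not any particular broadcaster's actual numbers:

```python
def uncompressed_mbps(width, height, fps, bits_per_pixel):
    """Raw video bit rate in megabits per second."""
    return width * height * fps * bits_per_pixel / 1e6

# 1080-line HD at 30 full frames/s, 8-bit 4:2:2 sampling (16 bits/pixel).
hd = uncompressed_mbps(1920, 1080, 30, 16)

# A single ATSC broadcast channel carries roughly 19.4 Mbit/s of payload.
atsc_channel = 19.4
ratio = hd / atsc_channel
print(f"raw HD: {hd:.0f} Mbit/s -> compression needed: {ratio:.0f}:1")
```

Deeper sampling (10-bit, 4:4:4) or higher frame rates push the required ratio still higher, into the range of the estimates quoted above.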

For editors, the story is different. As long as machines can create an uncompressed HDTV picture, that will be the demanded standard. But in final delivery, what you see may not always be what you get. The choice among HDTV formats is going to cause considerable headaches for editors. Thus far, the four major U.S. networks have adopted four different HDTV standards (one network is going to use two different standards, and two networks have actually agreed on the same standard, hooray!). As a result, the editing room is going to be a Tower of Babel, where no two videotapes speak the same language.

24p

In the midst of this brouhaha is the need for a universal standard that could be used for up-converting and down-converting the massive variety of HDTV signals, according to the needs of broadcasters, filmmakers, and producers. Enter 24p, a standard that could not only resolve many of the multiple-format issues but also solve some old problems that have plagued the video world for years. Twenty-four-frame progressive video has a number of advantages, including:

  • A 1:1 frame correspondence with film, without 2:3 pulldown or the 0.1 percent audio slowdown.
  • 24 fps nondrop frame time code.
  • The ability to down-convert to SDTV signals such as NTSC, PAL, PAL-M, SECAM, and NTSC-J.
  • High definition 1080p frames, which can be down-converted to 1080i, 720p, 480p, and 480i frames.
  • A 16:9 frame, which can be panned and scanned, letterboxed, or shown full screen, depending upon the capabilities of television monitors, broadcast bandwidths, and the individual needs of the originating broadcast source.
  • Less storage space, since delivering 24 frames per second requires less data than the NTSC standard 30 fps.

1:1 Frame Correspondence with Film

Because of its 24 fps frame rate, 24p video has the potential to eliminate all of the pulldown issues associated with transferring film to video, issues that have plagued American television for decades. Combining high definition with a universal standard, 24p can be used as an editing source and a master output source to a variety of formats.

Most prime time television created in the United States is filmed, then transferred to standard NTSC video and edited on NLEs. As a result, a matchback of those edits must be made for overseas distribution. The conversion of an NTSC video signal to PAL is so horrid that most distributors won’t accept it. With a 24p universal standard, digital cuts from NLEs could be delivered in most any world standard, including PAL, with no down-conversion artifacts. The result would save extensive labor costs on the conforming and retransfer of the film to PAL.
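The pulldown that 24p eliminates can be sketched as a toy Python function; this is the standard 2:3 cadence used in film-to-NTSC transfer:

```python
def pulldown_23(film_frames):
    """Map 24 fps film frames onto 60i fields using the 2:3 cadence:
    successive film frames occupy 2, 3, 2, 3, ... video fields."""
    fields = []
    for i, frame in enumerate(film_frames):
        repeat = 2 if i % 2 == 0 else 3
        fields.extend([frame] * repeat)
    return fields

# Four film frames (A, B, C, D) become ten fields, i.e., five interlaced
# video frames: 24 fps turns into 30 fps.
fields = pulldown_23(list("ABCD"))
assert fields == list("AABBBCCDDD")
assert len(fields) == 10
```

Two of those five video frames mix fields from different film frames, which is exactly why matching video edits back to film frames is so messy, and why a 1:1 24p correspondence is so attractive.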

24 fps Nondrop Frame Time Code

24p uses a 24 fps time code that does not necessitate dropping of frames to establish accurate timing. As a result, the usual issues associated with frame code modes would go away. In fact, 24fps EDLs would allow a worldwide standard of production in which one could shoot in one country and edit in another. EDLs could be used with original transferred or 24p camera masters to recreate content in any country at any time.
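Because 24p time code ticks at exactly 24 frames per second, the frame-count arithmetic is trivial and exact. A minimal sketch (hypothetical helper functions, not any vendor's API):

```python
FPS = 24  # 24p: exactly 24 frames per second, no drop-frame needed

def frames_to_timecode(total):
    """Convert a frame count to HH:MM:SS:FF at a true 24 fps."""
    ff = total % FPS
    ss = (total // FPS) % 60
    mm = (total // (FPS * 60)) % 60
    hh = total // (FPS * 3600)
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"

def timecode_to_frames(tc):
    hh, mm, ss, ff = (int(x) for x in tc.split(":"))
    return ((hh * 3600) + (mm * 60) + ss) * FPS + ff

# One hour of 24p is exactly 86,400 frames: wall-clock accurate with no
# dropped frame numbers, unlike 29.97 fps NTSC time code.
assert timecode_to_frames("01:00:00:00") == 86400
assert frames_to_timecode(86400) == "01:00:00:00"
```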

This particular concept may seem ordinary, but it isn’t. Finding an NTSC cutting facility in Europe can take a lot of research. If 24p becomes the accepted universal standard of recording video and transferring film, the issues of region and format become invisible.

Universal Mastering

Imagine delivering a 480p made-for-television movie for Fox and then later having to recreate it for syndication on late night ABC 1080i television. How could it possibly be done? Using older methods, it would require retransferring the original film and delivery of a new master. But if the original was mastered on 24p, a quick dub of the 24p 1080p master with a down-conversion to 1080i could solve the problem.

This issue becomes even more grotesque in the independent television market. What if you have to deliver reruns of M*A*S*H at 480p, 1080i, 720p, and 1080p? Telecine and online conforming would take days at a minimum. All of these issues could potentially go away with the use of a single HDTV 24p standard.

24sF

Although 24p has many advantages, there are some technical drawbacks. Broadcasters and post production companies have sunk fortunes into SDTV equipment and would bear the brunt of the extra expense that 24p introduces. A variation of 24p has been proposed that might save millions of dollars in the conversion costs of videotape equipment. That format is known as 24sF, twenty-four segmented frames per second. 24sF would deliver 24 progressive frames, but they would be stored in a segmented format: the information for each frame would be recorded as a pair of divided parts. By doing this, equipment that normally runs at 60 Hz or 50 Hz (NTSC and PAL, respectively) could easily be adapted to 24sF, which, with two segments for each of its twenty-four frames, runs at 48 Hz.

It should be noted that a segmented frame is not the same as a field. A segmented frame is a recording of a progressive frame broken into two parts. Therefore, there is no vertical filtering, as with fields, nor is there any motion artifact representing the passage of time, which is characteristic of field based recording. 24sF would reproduce 24p accurately, but would store it in segments.

But like every proposed standard, 24sF has some drawbacks. If the 24sF signal were displayed on an interlaced monitor, the viewer would see significant aliasing, a stair-step artifact which is most apparent on diagonal or curved lines. In order to prevent this, the 24sF picture must be reconverted into a 24p frame before being displayed. In addition, any effects or titling would have the same conversion requirements, because the title or effect would show aliasing artifacts without it.

Monitoring and Flicker

Twenty-four-frame video has many of the same problems as twenty-four-frame film. To correct flicker with 24 fps film, a double-bladed shutter is used in projection. The shutter displays each film frame twice, reducing flicker artifacts and smoothing the motion. So while film is played back at 24 fps, you see 48 images per second, or 24 pairs of duplicate frames. This poses a challenge for 24p as well: to prevent flicker artifacts, the frames must be repeated. Compounding the problem, the phosphors in a monitor decay almost instantly, which amplifies the flicker. Brightness and contrast restrictions on the monitors could reduce the flicker somewhat, but not enough to smooth it out to the human eye. In its native format, a 24p picture would have 24 iterations (24 Hz), each of a different frame. To smooth out the process, each frame can be shown two or three times (48 Hz or 72 Hz) to eliminate the flicker altogether. But a high definition monitor running at 72 Hz could prove challenging for design engineers.

The 24sF format would solve some of the flicker issues, as it runs at 48 Hz, but there would still be the problem of aliasing artifacts on interlace screens.

Color

As discussed in Chapter 10, the range of color in YUV color space is limited, by some estimates, to as little as 2.75 million colors. YUV uses a color standard approved by the International Telecommunication Union, or ITU. That specification, ITU-R 601, defines the range of color that is acceptable for YUV video. The serial digital signal of 24p and other high definition formats uses a newer ITU standard, known as ITU-R Rec. 709. Like 601, it is designed for optimal display, but the two differ, so conventional SDTV and HDTV signals do not share the same colorimetry. Any cross-format transfer of film or video must therefore be monitored on two different monitors for optimal results on each, which could mean more monitors in telecine and editing rooms.

DataCine

Another possible intermediary for film-to-tape as well as film-to-film work is the DataCine, developed by Philips. The DataCine can scan film at resolutions up to 2K and store the frames on a disk array, using an average of 11 MB per frame. The advantages to this form of mastering are many.

One of the first films to use the DataCine was Pleasantville. The images combined black and white and color, using a color correction system by DaVinci Systems on a Philips Spectre Virtual DataCine. The result was a brilliant effect of selective colorization combined with black and white, which was central to the film. All of the colorization effects were created within the digital domain.

Film images can be transferred from OCN using the Philips Spirit DataCine scanner. From there, the information is downloaded to the disk array using a high performance parallel interface, or HIPPI. The data amount is huge, up to 350 MB per second. The rate of capture varies, but averages about 6 fps. From there, the information can be controlled by what Philips terms a “virtual DataCine.” This means that the data files can be played and controlled at telecine speed, 24 fps. The advantages of the virtual DataCine are many, including:

  • It frees up the scanner for transfer of other films.
  • The digital data on the virtual telecine can be manipulated like a regular telecine session, using Philips’ Spectre Virtual DataCine.
  • The files can be downloaded or converted into any format, including film, HDTV, or SDTV.
  • Color correction and secondary color correction can be implemented for video or film.
  • Opticals normally created at the lab can be created on the DataCine and rendered to the disk array.
  • No need for dupe lists! The digital information is random access and frames of data can be repeated wherever necessary.
  • No worries about neg cutting and the potential destruction of the OCN. The negative is never cut, only transferred to the disk array.
  • Direct implementation of EDLs or cut lists to the virtual telecine for direct transfer to film or video. The entire film could be cut on an NLE and the EDL could be sent direct to the DataCine house for conforming digitally.
  • All timing or grading of the film is performed digitally using color correction systems.
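A rough back-of-the-envelope based on the figures quoted above (11 MB per 2K frame, roughly 6 fps scanning, 24 fps virtual playback) shows why a disk array and a HIPPI-class interface are required; the 100-minute feature is a hypothetical example:

```python
MB_PER_FRAME = 11          # average 2K frame size quoted for the DataCine
SCAN_FPS = 6               # typical scanning rate
PLAY_FPS = 24              # "virtual DataCine" telecine-speed playback

scan_rate = MB_PER_FRAME * SCAN_FPS    # MB/s while scanning
play_rate = MB_PER_FRAME * PLAY_FPS    # MB/s for real-time 24 fps playback

# Storage for a hypothetical 100-minute feature at 24 fps.
frames = 100 * 60 * PLAY_FPS
storage_tb = frames * MB_PER_FRAME / 1e6

print(scan_rate, play_rate, round(storage_tb, 2))  # 66 264 1.58
```

Real-time playback at 264 MB/s sits comfortably under the 350 MB/s HIPPI figure, and a feature fits in under 2 TB of disk, large for the era but attainable.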

Many filmmakers are attracted to the advantages of the DataCine, but there are some minor limitations. The 2K frame size isn’t quite the equivalent of film resolution. Philips readily acknowledges this, but points out that file sizes beyond 2K reach a point of diminishing returns. And of course, the DataCine is not an interformat solution for video-based media. But it has changed the methods of many film producers: other than the processing of OCN, a film can conceivably be created in its entirety digitally, without considerable loss of resolution or color, and still avoid the lab.

Further developments in transfer machines have been made since Philips introduced the concept of the DataCine. Innovation ITK has created the Millennium Machine, which can capture 4K-sized frames and is, in their words, “technology proof.” Further innovations have been made in color systems and data storage. The future clearly rests with film-resolution images being manipulated by high tech color systems at ever faster rates and transferred to several media for near-instant distribution. The age of digital cinema has truly arrived.

Issues of Cutting and Placement

With every great idea comes a multitude of problems. If you’re going to deliver a film to multiple standards, including HDTV and SDTV (Standard Definition Television), how do you cut it?

Somewhere in every great editor’s brain is the spark of genius that tells them when to cut. In his book, On Film Editing, the late Edward Dmytryk even defines some of the rules of cutting. Ask most editors how they know when to cut, and they’ll shrug their shoulders. Most of the rules seem innate.

But they’re not. Editors tend to follow lessons learned from viewing too many films and thousands of hours of television. For example, when a subject leaves a frame and reenters in another room in the next scene, we tend to cut when the subject’s eyes hit the edge of the frame, and cut to the next scene when the subject’s eyes reenter it.

But what happens when you’re cutting the same scene for delivery to both wide and center-panned formats? Do you cut when the subject leaves the SDTV frame or wait until it leaves the HDTV frame? Cutting when the subject clears the SDTV frame seems abrupt to HDTV viewers, who can still see the subject. Waiting for the subject to clear the HDTV frame leaves SDTV viewers with a stale image and an unintended passage of time. As the subject reenters, the same problems occur in reverse: cutting for HDTV increases the passage of time and enters the new scene with a stale frame for SDTV viewers, while cutting for SDTV creates an almost comical effect of sped-up motion for HDTV viewers.

Another dilemma is the placement of graphics. Do you place graphics within the title safe zone of HDTV or SDTV? Common sense dictates creating two separate masters to avoid graphics flowing off the screen or odd placement. The ultimate answer would be to cut for HDTV and letterbox for SDTV, but not all producers are going to go for this solution. Thus editors face even more challenging questions about how they will respond creatively to the problems of multiple formats.

For those who choose not to letterbox 16:9 pictures to SDTV, another solution is pan and scan. An area fitting the lower resolution format can be chosen to be panned and scanned to create a final picture for that format in the proper aspect ratio. Panning and scanning fills the screen, eliminating the drawbacks of a letterbox. The down side is that you will have to choose what elements of picture are eliminated when panning and scanning.
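The crop itself is simple geometry. A sketch (illustrative code, with a hypothetical `pan` parameter an operator would set per shot):

```python
def pan_scan_window(width, height, pan=0.5, target=4/3):
    """Compute the crop rectangle that extracts a `target` aspect-ratio
    window (default 4:3) from a wider frame. `pan` slides the window
    from far left (0.0) to far right (1.0)."""
    crop_w = round(height * target)       # full height, narrower width
    x = round((width - crop_w) * pan)     # horizontal pan position
    return x, 0, crop_w, height

# Center-cut 4:3 from a 1920x1080 (16:9) frame: only 1440 of the 1920
# columns survive, so a quarter of the picture width is discarded.
x, y, w, h = pan_scan_window(1920, 1080)
print(x, y, w, h)  # 240 0 1440 1080
```

The discarded quarter of the frame is precisely the editorial choice the pan-and-scan operator must make shot by shot.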

HDTV NLEs

The inevitable question about HDTV and progressive frame editing is how soon it will begin. The answer is that it has already started, but the transition phase is taking time. Several NLEs are coming on line with HD formats. Avid led the charge in 1999 with its Symphony Universal. But 24p is a fairly unsexy concept and it took a while to take hold.

The Symphony Universal has some limitations. As of this writing, it cannot accept an HD 24p signal, but can recreate a progressive frame. Thus the original 24p master must be downconverted to a standard NTSC videotape before digitization. Through an Avid patented process, the picture is converted into a 24p SDTV format. The final output uses progressive frames and can be made into SDTV PAL or NTSC video.


Figure 11.1 The Avid Symphony Universal with 24p capability.

Avid is moving ahead with newer products, such as the Avid DS HD. This particular product supports 1080p, 1080i, 720p, and SDTV PAL as well as SDTV NTSC signals. You can choose between 24, 25, 29.97, and 30 fps. The Avid DS HD is list compatible with other Avid systems and can convert EDLs and accept OMFI files from other Avids as well.

Pinnacle Systems is also heading into the HD market with its TargaCine video capture card. The TargaCine works with Final Cut Pro and is able, with an optional HDTV tether, to capture 1080/24p, 1080/60i, and 720p formats within a serial digital interface.

Sony has been involved with HDTV since its inception. They provided prototypes in the early 1980s and have been steadily working with development of solutions for broadcasters, video producers, and filmmakers. Currently Sony has a line of HDTV cameras and videotape recorders. As the technology becomes more commonplace, there is little doubt that it will be passed on to the consumer and prosumer markets at lower cost.


Figure 11.2 A reason for pan and scan. Subject exits through a door and enters another room. If the picture were converted to SDTV from center screen, the editor would want to cut when the subject’s eyes left the frame and when they entered the frame in the next shot on the right. If this occurred, the subject would appear to jump from room to room on the wider screen HDTV master. If we waited for the subject to clear the HDTV frame, the SDTV picture would exit and enter the shots “stale” with a lot of empty space between cuts, making continuous action impossible.

24 fps EDLs

Whether you choose Final Cut Pro or Avid as your HD solution, the EDLs these systems create must support 24 fps time code to use a 24p source properly. The good news is that 24 fps lists are supported by both Avid’s EDL Manager version 10 and FilmLogic version 3; previous versions of these applications do not support 24 fps time code. For filmmakers, each frame of video corresponds to a frame of film: no pulldowns, conversions, sound speed adjustments, or reverse pulldowns are necessary. Cut lists are exact, and matchback lists are unnecessary, because both film and video sources originate in a tidy 24-frame format.
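The 1:1 correspondence makes cut-list math exact. Since 4-perf 35mm film runs 16 frames per foot, a 24 fps frame count converts directly to the feet-and-frames counts used in cut lists (a sketch, not any EDL application's actual output):

```python
FRAMES_PER_FOOT = 16  # 4-perf 35mm film: 16 frames per foot

def frames_to_feet(frame_count):
    """Express a 24 fps frame count as 35mm feet+frames for a cut list."""
    return frame_count // FRAMES_PER_FOOT, frame_count % FRAMES_PER_FOOT

# One minute of 24p (1440 frames) is exactly 90 feet of 35mm film.
# No pulldown bookkeeping, because video and film frames match 1:1.
assert frames_to_feet(24 * 60) == (90, 0)
```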

HDDV Cameras?

As 24p increases in popularity, it is assumed that many of the problems associated with creating DV-originated films will go away as well. But there are currently no cameras inexpensive enough to make such a method practical. It would be cheaper, in fact, to originate an HDTV project on film.

There are a growing number of HD camcorders on the market. Sony, Panasonic and others are working toward a lower-cost solution. Currently, the lowest priced HD camcorders run in the $45,000+ range, but it is likely that there will be an introduction of a lower-cost, less pristine HD solution for low budget filmmakers in the near future. Still, it’s doubtful that any $5,000 HD wonders will come around anytime soon to replace conventional DV.

Electronic Cinema

As technology evolves, there will no doubt be a means of making 24p HDTV video much more cost effective. When this happens, digital projection is ready. Digital projection may be, in fact, the single most powerful argument for video-based films. It has a lot more capability than any current standards, and might even help to reduce some of the confusion associated with HDTV.

In 1999, George Lucas used digital projection on a limited basis for the release of Star Wars: Episode I - The Phantom Menace. It was displayed via a JVC Hughes ILA-12K projector in some theatres. The frames, all digitally projected from hard drives, were not film or standard video but CGI, computer-generated imagery. The resolution was extraordinary, partially due to the output of the CGI, but also due to the projection power of the ILA. The JVC Hughes projector can display 17,000 lumens with resolution greater than 2000 lines and a contrast ratio exceeding 1500:1. Compared to SDTV and even 35mm film, it is clear that digital electronic cinema projection is here to stay.

Table 11.1 Resolution/Contrast Comparison of SDTV, Digital Projectors, and 35mm Film

Format               Vertical Pixels    Contrast Ratio
SDTV                 480                150:1
Digital Projection   1536               1500:1
35mm Film            ~3072              1000:1

Well on the warpath toward achieving film resolution, JVC and other companies are pressing the technological envelope to develop even higher resolution for their projectors. JVC recently developed a single panel projection system capable of delivering twice the horizontal pixels of the HDTV spec.

The proposed next step for JVC has been tentatively called Q-HDTV. Q-HDTV would have four times the pixels of the HDTV ITU spec (1920 x 1080), delivering a whopping 3840 x 2160 image at a 16:9 aspect ratio. The future of digital cinema could potentially exceed the resolution of 35mm film. And based upon the spec for JVC’s single-plane Digital ILA device, it is certainly possible.
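The pixel math checks out: doubling both dimensions of the 1920 x 1080 HDTV raster quadruples the pixel count while preserving the 16:9 shape.

```python
hdtv_w, hdtv_h = 1920, 1080   # ITU HDTV raster
q_w, q_h = 3840, 2160         # proposed Q-HDTV raster

# Four times the pixels of HDTV, same 16:9 aspect ratio.
assert q_w * q_h == 4 * (hdtv_w * hdtv_h)
assert q_w / q_h == hdtv_w / hdtv_h == 16 / 9
print(q_w * q_h)  # 8294400 pixels, vs. 2073600 for HDTV
```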

Although the numbers indicate a marked difference in vertical resolution, it could be said that the normal projection of film, with gate jitter and similar artifacts, narrows the gap between Q-HDTV and 35mm significantly. In the case of D-ILA projection, JVC claims that the engineering of their systems allows for zero pixel artifacts.

Table 11.2 Future Shock: Proposed Q-HDTV Projection vs. Video & Film

Format    Vertical Pixels    Contrast Ratio
SDTV      480                150:1
HDTV      1080               >150:1
Q-HDTV    2160               1500:1
35mm      ~3072              1000:1

The development of such powerful projection systems creates more questions than it answers. How will production equipment manufacturers respond? Will there be a new standard for video that exceeds current HDTV formats? No one can be sure. But for those creating animation or CGI, the possibilities for digital projection already seem endless. Without the need of film recorders, CGI films can be rendered, transferred, and projected almost immediately.

Cost will certainly play a major role in the acceptance of digital projectors. Currently, the very best systems on the market run into the hundreds of thousands of dollars. This is not a very pretty scenario for those operating a local Bijou. But as development increases, costs go down.

Another factor is size. Many manufacturers are increasing the size of their projectors to increase resolution. A lighter and more compact system would be more desirable for smaller digital cinemas and for industrial and corporate use. Many digital projectors are already on the market for much less money, with a lot less firepower. Prices tend to hover in the low five figures, but will no doubt drop over time.

Summary

In the original preface to his 1953 book, Fahrenheit 451, Ray Bradbury envisioned a day when teenagers would tune out the world by attaching radios to their heads, when crazed drivers would maim and kill on our roads for thrills, and when people would be addicted, even mesmerized, by the picture wall screens in their homes.

The future of electronic film is closer than most people think. With HDTV, digital broadcast and high quality digital projection systems coming on line, there might be a niche in the market again for Mom and Pop local Bijous with limited seating and plenty of atmosphere. One wonders whether we’ll return to the cozier settings of the theater from years gone by.

But then again, if prices continue to plummet on technological advances, the theater might be in our own homes, as Bradbury predicted. And the biggest fear of filmmakers during the 1950s may come true: we might all be sitting around in our homes looking at wall screens.
