CHAPTER image

Postproduction and Distribution

After planning a project in preproduction and acquiring all the aural and visual elements during the shooting or production phase, it is time to edit everything together in postproduction. In this computer-based, nonlinear editing (NLE) age, many editors begin their work during the production phase, selecting and sequencing shots at the end of each day’s shoot. Digital editing makes this possible because editors do not have to wait until all the film footage or analog videotape has been shot. Instead, they can complete a digital sequence, save it on a computer, work on another sequence, and then another, ultimately retrieving all the saved sequences and making some tweaks for the final edit.

Three digital NLE systems have captured most of the market for video postproduction: Avid, Final Cut, and Adobe Premiere. Each of these comes in different versions to meet different budgets: Avid Media Composer and Pinnacle Studio; Final Cut Pro and Final Cut Express; and Premiere Pro and Premiere Elements. To be sure, there are many other editors, such as Windows Movie Maker, which comes with Microsoft Windows; iMovie, which comes with Apple computers; and Sony’s products, including Vegas Pro and Vegas Movie Studio. These are just some of the available video editing programs; by the time this book goes to print, there might be further buyouts and mergers, as well as new products.

Although each editing program has its unique characteristics, all the programs share a common logic and use similar functions. In the first section of this chapter, “Editing Process,” we examine the common ideas and steps for nonlinear editing, regardless of the hardware and software. It is beyond the scope of this section to train you on any one editing system; instead, this section is intended to be a companion as you learn the editing software available to you with that system’s manual and tutorial, and maybe an instructor. The next section, “Distribution,” explores the various means to get your project seen. Following that is a section on “Technical Concepts,” which familiarizes you with the basics of how the recording, editing, playback, and storage technology works. The chapter closes with “Creative Editing Concepts,” a section in which we explore the aesthetic or theoretic principles of editing: the constructs of audiovisual storytelling. These concepts apply to all video editing, regardless of the hardware or software.

EDITING PROCESS

The objective of film and video editing is to tell a story effectively. That was the goal in the first era of motion image editing: cutting and splicing film. That continued to be the intent in the second era: analog tape-to-tape editing. In today’s third era of digital NLE, effective storytelling remains the aim. To achieve a successful story in digital postproduction, a series of 10 logical steps is recommended:

1.  Log

2.  EDL

3.  Capture

4.  Import

5.  Trim

6.  Sequence

7.  Layers and Effects

8.  Mix and Balance

9.  Render

10.  Output

Log

During production (the acquisition or shooting phase), you can begin to watch and log your footage immediately after you shoot it. Logging continues after you’ve finished shooting, until you have all your audio and visual elements logged. A log is a list of what you have acquired, with brief notes describing each item. You can create your log with simple pencil and paper (see Figure 7.1), or use any number of software programs to assist you (see Figure 7.2).

For your log, make a list of each shot or audio clip in order on each tape, disc, or drive. As you review your video and audio elements, note the timecode number if the element is on tape or the filename if it is on a disc or solid-state card or drive. Briefly describe the shot, including the shot size (LS house, MS Bob, CU Carol, etc.), to help you make editing decisions later. Also, note any problems with the shot (out of focus, bad lighting, shaky, poor audio, etc.), or if the shot is good, and where you might include the shot in your final piece.
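If you keep the log on a computer rather than on paper, even a few lines of code can hold the same information. Here is a minimal Python sketch, not from the text, of a log that records each clip’s timecode or filename, a short description, and quality notes; the field names and entries are hypothetical.

```python
# A minimal sketch (not from the text) of a shot log kept as simple records:
# a timecode or filename, the shot description, and quality notes.
# Field names and entries are hypothetical.
shot_log = [
    {"id": "00:01:12", "shot": "LS house", "note": "good, possible opener"},
    {"id": "00:02:45", "shot": "MS Bob",   "note": "shaky head, usable after :48"},
    {"id": "00:04:10", "shot": "CU Carol", "note": "out of focus, do not use"},
]

# When it is time to build the EDL, list only the usable shots.
for entry in shot_log:
    if "do not use" not in entry["note"]:
        print(entry["id"], "-", entry["shot"], "-", entry["note"])
```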

Edit Decision List

Having logged your video footage and sound elements, you can make decisions about how the project might edit together. You can create an initial edit decision list (EDL), using pencil and paper to write down the shots and sounds in the order you think you might edit them (see Figure 7.3). Or you can use a software program to help you generate your EDL (see Figure 7.4).

For the EDL, you select the shots from the log that you want and write them in the sequence they are to appear in the master edit. In addition to noting the length of each shot, you can keep a running sum of the overall length, or total running time (TRT), to get an idea of how long the project will be. You can also note special considerations, such as transitions you plan to use, music clips you plan to lay under, and so on.
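Keeping the running TRT is simple addition. The following Python sketch, with hypothetical shot names and durations, shows one way to total the lengths as you list the shots in order:

```python
# A minimal sketch (not from the text): summing shot lengths to keep a
# running total running time (TRT). Names and durations are hypothetical.
edl = [
    ("001 LS house", 6.0),
    ("002 MS Bob",   4.5),
    ("003 CU Carol", 3.0),
]

trt = 0.0
for name, seconds in edl:
    trt += seconds
    print(f"{name:<13} {seconds:>4.1f} s   running TRT: {trt:>5.1f} s")
```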

It is very valuable to create an initial EDL because that focuses your thinking and helps you make some preliminary decisions before you sit down at the computer. The acronym EDL refers not only to this initial plan of editing, but also to the final list of edits after you have finished. This final EDL is valuable if you first edit an initial, rough cut of your project, and then use the EDL to make the final cut. A rough cut is a preliminary version of your project, without fancy transitions, sound sweetening, and other items that will be added later to give the project its final polish.

Sometimes preliminary editing is done in an offline edit session. This term goes back to the days of videotape editing but is useful in computer-based editing as well. An offline editor typically uses low-resolution copies of the original footage to edit a rough cut, making the critical editing decisions (sequence of shots, trimming heads and tails, length, pace, etc.) without the added time and cost of rendering high-resolution files, transitions, audio mixes, and the like. The offline editor also generates an EDL to give to the online editor. The online edit session costs more than the offline session because of the advanced equipment and editing talent. Here, the editor uses the edit decisions from the offline session, but edits the high-resolution files for the final cut and includes transitions, effects, titles, full sound mix, and any other elements needed for the finished piece.

image

FIGURE 7.1
A sample paper-and-pencil video logging sheet.

For many low- and no-budget ENG and EFP projects, including student productions, the offline and online sessions are combined, with the editor making the initial edits from full-resolution files, adding effects, mixing audio, and the like as he or she works through the footage, using the preliminary EDL as a guide. Of course, many changes are usually made along the way as the editor tries different and better ways to cut shots together. In the computer world, editors can save projects at the end of their sessions and open them another day, changing them over and over until the editors, directors, producers, clients, and others are happy with the final project, or until the deadline hits.

image

FIGURE 7.2
A screenshot from Ice House Productions’ TCLogger software. (Courtesy of Ice House Productions)

Capture

With all the aural and visual elements assembled, and with a log and EDL to get started, the actual editing session (whether offline or online or combined) begins by capturing the footage onto a computer drive. If the footage was acquired on videotape, such as Mini-DV or DigiBeta, a VTR or camcorder must be connected to the computer, usually by FireWire (IEEE 1394), Universal Serial Bus (USB), or Serial Digital Interface (SDI) connections. If the source material is on a disc, such as Mini-DVD or Mini-CD, the disc must be placed in the disc drive. If the original files are on a solid-state medium, such as a memory card, Flash device, or computer drive, that medium needs to be connected to the computer.

After launching the editing software program, one of the menu items—often under the File menu—is capture. Capturing is the process of transferring the footage from its acquisition medium to the computer drive from which you will edit. A capture window typically prompts you to name each file and to select a drive and folder to store the captured clips. Capture all the elements you plan to use in editing your project. Note that by logging your footage in advance, you can save time and drive space by not capturing the shots that are no good and that you do not plan to use. Additionally, by creating an EDL in advance, you can name each clip title beginning with a number in the sequence of your planned shots (e.g., 001, 002, 003, etc.).

Import

Once the footage has been captured onto your computer drive, you need to import it into the software program for editing. Some programs automatically import the clips when they are captured; others do not. Look for the Import command—usually under the File menu—which prompts you to go to the folder where you stored the captured clips. Click on the clips you wish to edit and import them. Most software interfaces have a clip storage area (bin, browser, or whatever your software calls it), which is a window that contains the icons and/or names of the clips you import. (See Figure 7.5.) Note that importing does not actually move a clip to a new place on the computer drive; it merely creates an icon in the software interface’s clip storage area that “points” to the actual clip. If you move that clip to a different folder or location on your drive, or if you rename it or delete it, the software will not be able to find the clip. When that happens, you will be prompted that the clip cannot be located.

Trim

Once you have imported the clips, you need to trim each one. Trimming refers to cutting the head—front or beginning of a clip—and tail—back or end of a clip—so only the frames you want are seen or heard in the final project. Most editing software gives you at least two ways to trim. One is to drag a clip from the bin to the play monitor (source, viewer, whatever). Usually at the bottom of the monitor window are buttons to click for the in-point and out-point. (See Figure 7.5.) The in-point is the frame on which you want the clip to begin playing in the final edit. The out-point is the last frame you want. Play the clip and click on the buttons for the in- and out-points on the frames you desire.

image

FIGURE 7.3
A sample paper-and-pencil EDL sheet.

For example, imagine a clip that begins with some shaky footage as the camera operator attempts to pan and zoom onto a subject. Eventually, the shakiness stops and the subject is seen in a steady shot with good focus, lighting, framing, composition, and audio quality. After the desired soundbite, the subject ad libs some irrelevant comments. You only want the part of this clip with the steady shot and the usable soundbite, so you mark the in-point for a frame after the shakiness and before the subject speaks the soundbite. Likewise, you mark the out-point after the soundbite but before the irrelevant ad lib.

image

FIGURE 7.4
A sample EDL from Avid. (Courtesy of Avid)

The second way to trim a clip is to drag it to the timeline. (See Figure 7.5.) Once the clip is in the timeline, you can move the cursor to the head of the clip, and the pointer changes to another icon for trimming. Click and drag forward, or right, to lose the unwanted frames (the shaky footage in our example). To trim the tail, place the pointer at the end until it changes to the trim icon, then drag back, or left, to lose the unwanted footage at the end (the ad lib in our example).

Sequence

The timeline is the window you use to arrange the clips in the order, or sequence, you wish. If you trimmed a clip in the source monitor, you can simply click on that image in the monitor and drag it down to the timeline. If you already dragged the clip to the timeline and trimmed it there, it’s ready to go. You can watch your edits as you build them on the timeline by looking at the playback (record, canvas, whatever). (See Figure 7.5.)

image

FIGURE 7.5
A comparison of screenshots from Apple’s Final Cut Pro 7 (top) and Avid’s Media Composer 5 (bottom). The screens are arranged in a similar fashion, though both can be rearranged by the editor. Here, the source monitor (“viewer” in FCP) is top left; the record monitor (“canvas” in FCP) is top right; the clip bin (“browser” in FCP) is bottom left; and the timeline is bottom center and left. (Courtesy of Apple and Avid)

Arranging clips is as easy as clicking and dragging on their icons in the timeline and dropping them where you want them. This works the same for both video and audio clips. Usually, you will experiment with different arrangements of shots, continuing to trim them a bit here and there until the sequence of shots plays back with a clear storyline and smooth flow. An important feature in most software programs is the Undo function, usually under the Edit menu. When you trim or move a clip and decide you don’t like what you’ve done, you can select Undo and voilà, you are back to the way things were before you made the trim or move!

Layers and Effects

On the timeline, you can add more video and audio tracks. The new video tracks stack on top of the existing tracks, and the new audio tracks appear under the existing tracks. These multiple tracks allow you to layer new video and audio into your project.

Typical uses for multitrack layers of video elements are matte shots, which are images that key or superimpose one image into another. Another use is for titles that are keyed into other shots, such as lower-thirds, which identify a subject’s name on the lower third of the screen. Typical uses for multiple layers of audio elements are to add sound effects and music tracks to dialogue tracks.

There are numerous visual effects you might want to add. These can include transitions between shots, rather than straight cuts, such as fades, dissolves, or wipes. Effects can also include filters that change brightness, contrast, and color. Other effects blur and distort video clips, or alter the original footage in all kinds of ways, such as giving it a sepia tone. Most software programs have an Effects menu under the Window or similar command.

Caveat editor: Just because your software allows you to add effects doesn’t mean that you should. If your story is strong, your shots are good, and your audio is clean, you can communicate your message effectively—and even better—without effects. Use them cautiously, only when they truly contribute to the message. Consider the wisdom of Huell Howser, the producer and talent of a popular travel show in California, who states that whenever he sees lots of fancy effects he assumes the producer is covering for the story that’s missing.

Mix and Balance

With the audio elements trimmed and arranged, you might want to make some changes to the aural clips. You can do basic audio editing, and also add some audio effects, on the audio portion of the timeline. For more advanced audio editing and processing, you can import the audio clips into an audio editing program, such as Adobe’s Audition, Apple’s Soundtrack Pro, or Avid’s Pro Tools. (See Figure 7.6.) When you finish editing and processing the audio, you can paste it back into the timeline of your video editing program, syncing it back up to the images.

Whether you edit audio in your video software program or in an audio program, the objectives are the same. You want to balance the various audio clips so that the most important element (usually the dialogue) can be heard easily (foreground), with the other elements (usually sound effects and music) at a lower level (background). This is the same figure–ground principle used in composing visual images (discussed in Chapter 4 on framing and composition). With audio, it is the primary sound and not the picture element that is the “figure” and receives foreground attention, while the other contextual audio elements are in the background. To adjust the audio levels, most software provides nodes (rubber bands, whatever) with which you can pull the audio level up or down. Typically, you place the cursor over the line in the middle of an audio clip, changing the cursor to a pointing finger, allowing you to click to create a node. Drag up for increased volume and down for decreased volume.

You might want to add some audio effects, such as fades and cross-fades, allowing the audio to come in or go out gradually rather than harshly. Other effects allow you to add echo or reverberation. High-pass and low-pass filters allow you to reduce frequencies at the low and high ends, perhaps to make a voice sound hollower and less resonant, as if it were coming through a telephone. You can also clean up audio that has been recorded improperly or poorly, up to a point (it’s always better to record properly, of course, for both audio and video—postproduction cannot fix everything). Notch filters let you reduce a certain range, or notch, of frequencies, such as a 60 Hz electrical hum that might have been recorded inadvertently with the dialogue, or reduce room tone if the microphone was too far away from the subject. Parametric and other equalizers allow you to manipulate frequencies even more, such as boosting or lowering bass or treble frequencies in a musical recording.
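To make the notch-filter idea concrete, here is a minimal sketch using NumPy and SciPy (an assumption; your audio software does the same job internally). It synthesizes a tone contaminated with 60 Hz hum and then removes the hum; the tone, sample rate, and Q value are illustrative, not values from the text.

```python
# A minimal sketch: removing 60 Hz electrical hum with a notch filter.
# The synthesized "dialogue" tone, the 48 kHz rate, and the Q value are
# illustrative assumptions; real software applies the same idea to real clips.
import numpy as np
from scipy.signal import iirnotch, filtfilt

fs = 48_000                                   # 48 kHz sampling rate
t = np.arange(fs * 2) / fs                    # two seconds of sample times

dialogue = 0.5 * np.sin(2 * np.pi * 220 * t)  # stand-in voice tone
hum = 0.2 * np.sin(2 * np.pi * 60 * t)        # unwanted 60 Hz hum
recorded = dialogue + hum

b, a = iirnotch(w0=60, Q=30, fs=fs)           # narrow notch centered on 60 Hz
cleaned = filtfilt(b, a, recorded)            # zero-phase filtering

print(f"hum level before: {np.std(recorded - dialogue):.3f}")
print(f"hum level after:  {np.std(cleaned - dialogue):.3f}")
```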

image

FIGURE 7.6
A comparison of screenshots from Apple’s Soundtrack Pro 3 (top) and Avid’s Pro Tools 8 (bottom). The screens are arranged a bit differently, though both can be rearranged by the editor. Here, the Soundtrack Pro screen shows multiple audio layers at the center, with the layer chosen for editing in an expanded waveform view at the bottom. The Pro Tools screen shows multiple layers in the waveform view at the center, with other views elsewhere, such as musical bars at the bottom. (Courtesy of Apple and Avid)

Whatever you need or want to do to the audio, the final audio mix should be balanced optimally to tell the story without confusing the audience.

Render

While the simplest audio and video effects, such as fades and dissolves, can usually be seen in real time on the timeline, other more complex effects must be rendered before they can be seen or heard. To render all or part of the timeline with those effects, you must create a new file. Unless a new file is created, the software can only play back the footage from the original file, which does not have the effects. Many editors render short effects as they work so they can double-check how they look or sound. For example, most computers can render a one-second wipe pattern quickly. For longer effects, such as adding a sepia tone to a one-hour western to give it an old-world look, the editor often waits until the end of the day, saves everything in case the computer crashes, and then selects the Render or Export or similar command just before leaving, allowing the computer to render this large effect through the night.

However often you render or do not render during editing, when the final project is complete on the timeline, you will want to render the final piece. Most software can play back the production from the timeline and even print it to videotape from there. However, if the playback magnet or laser has to skip from one sector or drive to another sector or drive, that skip can appear as a glitch on the video. It is best to render, or export, the entire project into a single file. That way, when printing to video, mastering to CD, DVD, or Blu-ray, or copying to another drive or solid-state medium, the playback magnet or laser only needs to read one file continuously.

Output

Having rendered the final, master project, you need to save it as a master file. Just like a traditional movie has a master film version stored safely somewhere (e.g., in an underground salt mine in Hutchinson, Kansas, where the climate is optimal for film storage), so too you should save a master version of your finished video in the best format you can. For example, you can save it as a full-resolution QuickTime file or other high-quality format. Then copy that file to use for additional compressing and file conversion to other formats, saving the original in the event you need to copy it again for additional format conversion or compression in the future.

With the edited master saved in the best-resolution format your software offers, you need to output it for storage and eventual distribution. If you have a large computer drive, either internal or external (e.g., a terabyte FireWire drive), you can save your project there. You can also print the copy to tape if your computer is connected to a tape deck. Your software program should have a print-to-tape function. You can burn your project to DVD, assuming you have an internal or external DVD burner and the appropriate software. If your project is true high-definition, and if you have the newer Blu-ray technology, you can burn a Blu-ray disc. If your project is no more than 650 MB, you can burn it to a data CD. Depending on the project’s file size, you can also save it to a solid-state medium with sufficient storage capacity, such as an SD card or USB Flash thumb drive or memory stick of 1, 2, 4, 8, 16, 32, or more GB. (See Figure 7.7.) With a full-resolution, master version of your project safely stored, and with a copy made for additional work, you are ready to compress and/or convert to other formats for distribution.

image

FIGURE 7.7
A sample of different storage media for digital files: external hard drive, disc (CD, DVD, or Blu-ray), USB Flash drive, DV, secure digital (SD), and SD micro cards. (Courtesy of GNU Free Documentation License and public domain)

DISTRIBUTION

Once your final project is safely saved and output to a storage medium, it is time to let people see it. A number of distribution options are available. In general, there are two types of distribution: physical and online media.

Physical media include tapes, discs, and portable microchip or solid-state devices, such as USB drives and SD cards. (See Figure 7.7.) Tapes may still be analog, but most likely will be digital, such as DigiBeta or DV. Discs include compact discs (CDs) with video files stored as QuickTime, AVI, Windows Media, or other file formats, as well as DVDs and Blu-ray discs. Solid-state microchip media (e.g., USB drives and SD cards) can also store project files in any digital format. These physical media are useful for a variety of film-video festivals and competitions, many of which accept tape or disc submissions. A good clearinghouse for these competitions is the web site http://www.withoutabox.com, where you will find links to most film and video competitions (including screenplay competitions).

image

FIGURE 7.8
A screenshot of a web site promoting the sale of a DVD about magicians. (Courtesy of Shelley Jenkins)

Online media include streaming video projects as well as films and videos that can be downloaded and viewed. A number of festivals and competitions accept online submissions, particularly those specializing in short films and videos. You can find these also at withoutabox.com. Additionally, you can post your project to a web site for viewing. This might be another web site, such as the ever-popular YouTube, or you might create your own web page to promote and offer your production for streaming, downloading, or selling a physical disc. (See Figure 7.8.) Whatever venue you select to distribute your project online, you will need to pay attention to the file format, or codec (compression–decompression scheme). Some web sites require that you submit in a particular codec, while others accept a variety of codecs (though they might transcode videos into their codec of choice).

YouTube, Vimeo, and many other video streaming sites use the popular Flash codec from Adobe because almost every computer already has a Flash player on it and no additional media player download is required. QuickTime (.mov file extension) is also popular, and all Macintosh computers ship with the QuickTime player, but Windows users need to download the QuickTime player (it’s free at http://www.quicktime.com) to view the videos. The Windows Media Video format (.wmv file extension) plays on all Windows computers that ship with the Windows Media Player, but Mac users need to download a QuickTime plug-in, such as Flip-4-Mac (it’s free at http://www.flip4mac.com), to view Windows media within the QuickTime player. Files in the Audio-Video-Interleaved format (.avi file extension) play in most video players, including Flash, QuickTime, and Windows, but they generally have larger file sizes than the same videos in either of those other formats. Increasing in use is HTML5, the newest, fifth version of the Hypertext Markup Language, which plays motion video natively without first linking to another player.

In addition to selecting the codec, you must also decide how much to compress the video file. More compression means more loss of image quality, but faster buffering or downloading time for the audience. Less compression means better image quality, but slower buffering or downloading.
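The trade-off is easy to put in rough numbers. The sketch below, with hypothetical file sizes and an assumed 5 Mbit/s connection, shows how much compression buys in transfer time:

```python
# A minimal sketch of the compression trade-off: smaller (more compressed)
# files transfer faster. File sizes and the 5 Mbit/s link are assumptions.
def download_seconds(file_size_mb: float, link_mbps: float) -> float:
    """Rough transfer time: megabytes converted to megabits, divided by link speed."""
    return (file_size_mb * 8) / link_mbps

for size_mb in (200, 50, 10):   # lightly, moderately, heavily compressed versions
    print(f"{size_mb:>3} MB file: about {download_seconds(size_mb, 5):.0f} seconds at 5 Mbit/s")
```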

How do you let people know when your video is online and available for viewing or downloading? Video festivals and competitions, as well as streaming video sites such as YouTube, offer some promotion. Festivals have e-mail distribution lists to get the word out about their screenings. YouTube and other sites feature most-viewed videos each day. Keyword searching allows your family and friends to look for your video. You can also set up your own channel on YouTube. If you create your own web site to promote and make your video available, either by streaming, downloading, or selling a physical disc, you can promote that web site. Advertisers tell us the best publicity is still word of mouth. So whatever online venue you choose, e-mail and text as many people as you can to drive up visits to that web site, and ask them to pass the word along to their families and friends, and their families and friends, and so on. With luck, your video will be the next viral masterpiece!

TECHNICAL CONCEPTS

When working with video, you can gain a better understanding of the editing and distribution process by familiarizing yourself with some key terms and concepts that explain video recording, playback, and storage. A solid grasp of the fundamental technical aspects of video should provide a good context for understanding what happens during editing. This technical knowledge, in turn, can help you with editing decisions.

Scanning

Chapter 8 on video explains in detail how a camera converts light coming into the lens into an analogous electrical signal for recording. If it is a digital camera, it then samples that analogous waveform into a binary series of 1s and 0s. That signal is played back in the same way it was recorded: through a process of scanning. Three different scanning processes make up the global television and video market in standard definition: the National Television System(s) Committee (NTSC) standard, used in the United States, Japan, and other countries; Phase Alternation Line (PAL), used in Germany, Australia, and other countries; and Séquentiel Couleur à Mémoire (SECAM), used in France, Russia, and other countries. High-definition standards are also in use around the world.

NTSC video is created by a scanning beam that traces out 525 lines on the face of a TV monitor in an aspect ratio that is four units wide by three units high, or 4:3. It performs the scans in two passes, called interlaced scanning. (See Figure 7.9.) It first scans the 262.5 odd-numbered lines and then the 262.5 even-numbered lines. This entire scanning process occurs 30 times per second (actually 29.97 frames per second [fps] for historic engineering reasons that have to do with image stability when TV migrated from black-and-white to color). Thanks to our persistence of vision, this is fast enough to allow our eyes and brains to believe that we are seeing a solid, constant image.
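A quick calculation, offered here only as a sketch, multiplies out those figures to show the field rate and line rate they imply:

```python
# A minimal sketch multiplying out the NTSC figures above: 525 lines per
# frame, two interlaced fields per frame, at the exact 30000/1001 frame rate.
lines_per_frame = 525
frames_per_second = 30_000 / 1_001            # the "29.97" rate

fields_per_second = 2 * frames_per_second     # ~59.94 interlaced fields
lines_per_second = lines_per_frame * frames_per_second

print(f"fields per second: {fields_per_second:.2f}")
print(f"lines per second:  {lines_per_second:,.0f}")   # ~15,734, the NTSC line rate
```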

In contrast, true high-definition television (HDTV) scans 1080 lines progressively in a screen aspect ratio that is 16 units wide by 9 units high, or 16:9. Progressive scanning means each line is scanned in turn, rather than in an interlaced fashion. In addition to this 1080p scanning, some HD equipment scans 1080 lines in an interlaced pattern, or 1080i, and some scan 720 lines progressively, or 720p. Current television sets are able to “read” the type of signal coming into the set and up-convert, if needed, to play on a 1080p screen.

Fields, Frames, and Segments

In standard-definition NTSC, the complete scanning of either the odd or the even lines forms a half-picture known as a field. Two fields, when combined or interlaced, form a frame or complete picture. In some tape formats (e.g., older one-inch type C), a frame is encoded onto tape in one continuous line and is called a nonsegmented format. Other formats (e.g., mini-DV), which break each frame into segments recorded on separate tracks, are called segmented formats. (See Figure 7.10.) The nonsegmented format allows for some special effects, like noise-free slow motion or freeze-frame, without using a time-base corrector; segmented formats require time-base correction.

image

FIGURE 7.9
For interlaced scanning, the odd-numbered fields, represented here by solid lines, are scanned first, followed by the even-numbered fields, represented by dashed lines.

image

FIGURE 7.10
Segmented track pattern on a cross-section of mini-DV tape.

Tracking and Skew

When video is recorded on any tape format, it is recorded at a particular speed with the information placed onto the tape at particular locations and angles. The precise way in which the image is laid down on the tape is called tracking. Because each videocassette recorder (VCR) might have slightly different tracking from the next, the tape might play back differently in different VCRs, or “decks.” Many VCRs have a control that allows the playback machine to track very closely to the way the original recorder placed the information onto the tape. Older videotape machines required you to set the tracking control (measured by a small meter) on your source and playback machines to optimize tracking. Most current video decks have automatic track finding (ATF), an electronic system that lets the machine adjust itself for optimal playback.

Tracking refers to the precise angle of each scan across the tape; skew refers to the tension of the tape around the video drum. Like tracking, skew can vary from machine to machine. As with tracking, older VCRs had manual skew controls, while today’s machines adjust the skew automatically.

Digital Recording

Analog and digital are two ways of encoding information onto a storage medium. Conceptually, the difference between the two may be understood by considering analog to be a continuous process while digital is a discrete process. Analog videotape recording occurs when the recorder receives electrical signals that are converted from, and analogous to, the light and audio waves that the camera and microphone pick up when recording. The recorder continuously makes a copy of those signals.

Digital encoding breaks down the electrical signals into small “pieces” or bits and assigns a numeric description of the signal using 1s and 0s. This process is called sampling: taking a sample of each point along the analogous waveform and assigning that point a digital, or binary, code. (See Figure 7.11.) The higher the sampling rate—that is, the more samples per second—the better: More points along the wave are used so they reproduce the original wave more accurately. The more bits used to code each point, the better: More 1s and 0s give each point a more discrete, or unique, code. For example, many DV formats can record audio with a sampling rate of 48,000 Hertz (Hz), or 48,000 times per second, meaning that each second of the audio waveform is broken down into 48,000 unique pieces. Additionally, these formats can record at 16 bits, meaning that a string of sixteen 1s and 0s is assigned to each of those 48,000 points per second. This is a very high-quality sampling and recording rate—even slightly better than is necessary for human hearing.
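Those sampling figures translate directly into data. The following sketch works out the uncompressed data rate for 48,000 Hz, 16-bit audio; the two-channel (stereo) figure is an assumption added for the example:

```python
# A minimal sketch of the data rate implied by 48,000 Hz / 16-bit sampling.
# The two-channel (stereo) figure is an assumption.
sample_rate = 48_000          # samples per second
bit_depth = 16                # bits per sample
channels = 2

bytes_per_second = sample_rate * (bit_depth // 8) * channels
print(f"{bytes_per_second:,} bytes per second (~{bytes_per_second / 1_000_000:.2f} MB/s)")
print(f"one minute of audio: about {bytes_per_second * 60 / 1_000_000:.0f} MB")
```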

The device that samples the original analog waveform and converts it to a binary digital signal is a coder–decoder (codec). Different digital video formats have different codecs, and the devices are built into the cameras. For example, the mini-DV codec compresses the digital signal by a ratio of 5:1. To convert video that has been recorded in an analog format (e.g., VHS), standalone analog–digital (A/D) converter boxes are available that can be connected to the analog videotape recorder (VTR) for input into a digital source, such as a computer, to convert the video to the software program’s codec.

image

FIGURE 7.11
Sampling involves the selection of many points each second along a waveform and assigning each point a binary code of 1s and 0s; here, 16-bit sampling assigns a string of sixteen 1s and 0s to each point.

Compression

Digital video requires many megabytes (MB) or gigabytes (GB) of storage space. Uncompressed, one frame of standard-definition NTSC video needs about 1 MB. That means one second (30 frames) needs about 30 MB. Even a multigigabyte hard drive can fill up fast at those rates. For this reason, most codecs also compress the signal: They remove redundant information to reduce the file size. For example, if a newscaster sits in front of a still backdrop, that backdrop does not have to be recorded separately for each frame. Instead, the backdrop can be digitized once and the signal can be coded to include that same backdrop in each frame for however many seconds the clip lasts. Then only the newscaster’s movements must be recorded for each individual frame, reducing the amount of digital information required for each frame while maintaining the image.
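Using the rough figure of 1 MB per frame given above, a short calculation makes the storage problem obvious; the 500 GB drive size is a hypothetical example:

```python
# A minimal sketch of the storage arithmetic above: about 1 MB per
# uncompressed SD frame, 30 frames per second. The drive size is hypothetical.
mb_per_frame = 1
frames_per_second = 30
drive_gb = 500

mb_per_second = mb_per_frame * frames_per_second
minutes_on_drive = (drive_gb * 1_000) / mb_per_second / 60
print(f"{mb_per_second} MB per second uncompressed")
print(f"a {drive_gb} GB drive holds only about {minutes_on_drive:.0f} minutes of uncompressed SD video")
```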

Compression may be either lossless or lossy. Lossless compression algorithms reduce the file size without any loss of detail. In the previous example, if neither the backdrop nor the newscaster is recorded with any loss of resolution and at a full frame rate (30 fps), the compression is lossless. However, it is sometimes desirable to compress a file even more, particularly for delivery over the Internet to people who do not have very high-speed connections. To accomplish this, lossy compression is necessary: removing additional information to reduce the file size considerably. In our example, the newscaster’s segment could be reduced from, say, a full DV 720 × 480 pixels to just 320 × 240 pixels. The number of bits recorded per pixel could be cut, reducing some color and brightness information. The frame rate could be cut in half to 15 fps, discarding every other frame so that the remaining frames each hold on the screen for two frame counts. The audio could be cut down by resampling from 48,000 Hz to 22,000 Hz, and the bit depth could be cut in half from 16 bits to 8. All this compression seriously reduces the file size, but on playback the viewers see a smaller, jerkier image with tinnier sound because both video and audio fidelity have been lost.
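How much do those lossy steps actually save? A rough sketch of just the frame-size and frame-rate cuts from the example shows the scale of the reduction; the audio and per-pixel cuts would shrink the file further.

```python
# A minimal sketch of the savings from two of the lossy steps above:
# a smaller frame (720x480 -> 320x240) and half the frame rate (30 -> 15 fps).
full_pixels = 720 * 480
small_pixels = 320 * 240
full_fps, small_fps = 30, 15

reduction = (full_pixels * full_fps) / (small_pixels * small_fps)
print(f"frame-size and frame-rate cuts alone reduce the video data by about {reduction:.0f}x")
```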

It should be noted that audio information does not require nearly the storage space that video information requires. You can download the audio file of a favorite three-minute song in a second, but downloading the three-minute music video of that song takes longer. Because audio requires less file space than video, compressing audio does not result in as much savings as compressing video. For this reason, some content creators choose to leave the audio uncompressed for full fidelity sound even when they compress video. That way, the viewers can at least hear high-quality audio even if they have to watch small, jerky video.

A number of different compression systems are available. New systems are constantly being tested to improve video quality while decreasing the space required after compression. Some still cameras that also record short video clips use a lossy compression system developed by the Joint Photographic Experts Group (JPEG) called Motion-JPEG. Common DVDs use a lossy scheme created by the Moving Picture Experts Group (MPEG) called MPEG-2, which reduces video to about 20 percent of its original file size. That same group also developed the newer compression algorithm called MPEG-4, which is supported by all major Internet media players. For high-definition video, a number of major video camera manufacturers developed “advanced video coding high definition” (AVCHD), a widely used format based on the popular MPEG-4 H.264 compression algorithm. Today, some video content creators are encoding directly for HTML5 playback.

For video editing, most professional software offers a variable compression rate that allows for different levels of quality. A high-compression mode (smaller file size) is used for doing offline edits, which increases the speed of the process and reduces the amount of storage needed, but greatly lowers the quality. A low-compression mode (larger file size) is used for online editing where the edited master file is high resolution, suitable for broadcast or high-quality storage, such as DVD, professional digital tape, or high-definition Blu-ray.

Digital Videotape

A number of digital videotape formats are on the market, including DigiBeta, DVCam, DVCPro, and digital HDTV. While each has its unique characteristics, such as the width of the tape, the angle and speed with which the video head passes across the tape, the placement of the video fields on the tape, and the method for recording audio, all the formats share some common characteristics. Here we’ll use one popular DV format as an example to explore a bit more detail: mini-DV.

image

FIGURE 7.12
This DV player/recorder has time-based stability and is used to feed digital nonlinear editing systems. (Courtesy of JVC)

Mini-DV is a low-cost format used by many consumers, as well as prosumers and even some professionals. While mini-DV and other tape formats seem to be giving way to tapeless recording (see next section), mini-DV is still in wide use. The audio and video recording is excellent for standard definition and, as with all digital formats, tape dubbing has no generation loss. This format features a very small videotape cassette (the tape is just ¼-inch wide, or 6 mm), thus allowing for very small camcorders. Tapes consist of a plastic ribbon covered with a metal oxide that is magnetized to record the digital signal. VCRs that play and record mini-DV range from inexpensive camcorders to sophisticated units with many features. (See Figure 7.12.)

Mini-DV resolution is 720 × 480 pixels. Scanning is interlaced. Audio may be recorded at different sampling levels, the best being 48,000 Hz at 16 bits. The mini-DV codec uses a 5:1 compression ratio, allowing just under five minutes of full-resolution video per 1 GB. Mini-DV contains the typical information tracks of all videotape formats—audio, video, and control track—but also has a subcode and an Insert and Track Information (ITI) track. The subcode track contains information about timecode, date, time, and track numbers. The ITI area contains information that allows video insert editing. Audio information is stored in two ways: One way, pulse-code modulation (PCM), offers CD quality, while the other, linear, allows audio dubs after the original video is shot. (See Figure 7.13.) A higher-definition version of DV is available, HDV, which provides progressive scanning rather than interlaced scanning and the wider HD aspect ratio of 16:9 rather than the NTSC aspect ratio of 4:3.
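The specifications above can be used to check the “just under five minutes per 1 GB” figure. The sketch below assumes 12 bits per pixel (4:1:1 color sampling) and 16-bit stereo audio, and it ignores the subcode, ITI, and error-correction overhead, so it lands slightly high of the real-world number:

```python
# A minimal sketch estimating mini-DV storage from the specs above.
# Assumes 12 bits per pixel (4:1:1 color sampling) and 16-bit stereo audio;
# tape overhead (subcode, ITI, error correction) is ignored.
pixels_per_frame = 720 * 480
bits_per_pixel = 12
fps = 30
compression = 5                                  # mini-DV's 5:1 codec

video_bytes = pixels_per_frame * bits_per_pixel / 8 * fps / compression
audio_bytes = 48_000 * 2 * 2                     # 48 kHz, 16-bit, two channels

total = video_bytes + audio_bytes
minutes_per_gb = 1_000_000_000 / total / 60
print(f"about {total / 1_000_000:.1f} MB per second, roughly {minutes_per_gb:.1f} minutes per GB")
```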

Tapeless Recording

While tape-based recording is still in wide use, tapeless recording systems are increasing daily. The first tapeless cameras used mostly optical disc recording, which burns the digital signal onto a disc with a laser light. For example, some photo cameras would record short, compressed video clips on mini-CDs. Some full-resolution camcorders would record on mini-DVDs. Today, though, solid-state recording devices are mostly used. For example, some cameras record on memory cards or sticks. (See Figure 7.7.) Some can be connected directly to hard drives. The advantage to solid-state recording is that there are no moving parts that can wear down or break: No magnets pass across moving tape, and no lasers burn spinning disc surfaces. Because of the promise of longer-lasting hardware, many manufacturers began debuting solid-state recording devices for cameras in the early 2000s. As storage devices increase in capacity while decreasing in size and cost (a 16 GB USB Flash drive or SD card costs less than $20 at discount stores as of this writing), solid-state recording will continue to replace magnetic tape and optical disc recording.

image

FIGURE 7.13
Track pattern on a cross-section of mini-DV tape.

Nonlinear Editing (NLE)

Whatever the format for acquiring footage—tape, disc, or solid state—once it is in the computer, each shot is a separate file. Just as a word processor stores documents as separate files, an NLE system stores these digital shots and sounds so they can be retrieved easily and in any order, unlike a linear tape, which has to be shuttled back and forth to arrive at different shots. This ability to access shots randomly is called random access.

Nonlinear, random-access editing may be done in one of two different working environments: standalone and work group. The standalone station is just that—an editor that is self-contained and can be used to edit all images into the finished product. Once all information and files are available at the station, a final cut can be put together by the operator. Standalone stations do not require network connections with other equipment, and therefore there is no need for network compatibility. A standalone system includes everything the editor needs: CPU, software, monitors, keyboard, mouse, and other peripherals for input and output, including tape decks, camera connections, and so on.

Work group editing requires workstations that are interconnected by some type of local-area network (LAN). Video and audio workstations, graphics stations, digitizing stations, logging stations, character generation stations, and special effects and animation stations are all connected to allow multiple editors to access a centrally stored data bank with raw video and audio that can be transferred to and from editors.

Whether standalone or work group, the raw audio and video files can be edited using either destructive or nondestructive editing. In destructive editing, once a file is edited, it is saved in place of the original file. This is useful when storage capacity is limited and there is too little room to save both the original file and the edited version. However, if the editor changes his or her mind and wishes to retrieve the original file to make a different edit, it is too late. In nondestructive editing, the original file is maintained. The edited file is given a different name and saved apart from the original file. This is the preferred method because the original file can always be retrieved for editing anew. However, this method requires more storage capacity to keep both the original file and the edited file.
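The difference between the two approaches comes down to what gets saved where. This small sketch, which uses plain text files as stand-ins for media clips and hypothetical filenames, shows the nondestructive pattern of writing the edit to a new file and leaving the original alone:

```python
# A minimal sketch of nondestructive editing: the edit is saved under a new
# name and the original file is left untouched. Plain text files stand in
# for media clips; all names are hypothetical.
from pathlib import Path

raw = Path("raw")
edits = Path("edits")
raw.mkdir(exist_ok=True)
edits.mkdir(exist_ok=True)

original = raw / "interview_take1.txt"
original.write_text("full, untrimmed clip")

# Nondestructive: write the trimmed version somewhere else...
(edits / "interview_take1_trimmed.txt").write_text("trimmed clip")

# ...so the original remains available for a different cut later.
print(original.read_text())
```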

CREATIVE EDITING CONCEPTS

In addition to understanding the process and technical aspects of editing and distribution, it is vital to know and apply creative editing considerations—the conceptual process of telling a story in images and sounds. You may think of each shot and each audio piece as a sentence or phrase. Just as we apply proper grammar when using language, a grammar of sorts exists in the assemblage of shots and sounds on the way to forming stories.

Everything in an edited piece should have a purpose and a relationship. Every shot is there for a reason; every sound is there for a reason. Every shot should be related to the shot before it and after it; every sound related to the video over it. Everything should work together to tell the story so that the product is greater than the sum of its parts.

Each shot must be in its position to meet an objective—to carry out a function. The story line must be advanced by the constant progression of video images (still or moving) and audio sounds (dialogue, music, sound effects). Editing is not just the butting together of shots; it is the creation of a story with a beginning, a middle, and an end, all working to communicate an idea or show an event.

Sequencing the Shots

Ideally, every grouping of shots, or sequence, should have an overall statement or idea. The viewer should come away with more than just the experience of seeing a collection of pictures—a slideshow. There should be an understanding of the point just made. The idea might be as simple as a blood shortage at the Red Cross, but a random collection of shots on this subject adds nothing to a viewer’s perception of the shortage. To the contrary, a well-thought-out ordering of the proper shots can convey much added information and understanding for the viewer.

Instead of random shots of the interior of the blood center, a careful selection can show the viewer what the script is conveying. A good four-shot segment on the blood shortage might consist of: (1) an opening shot of a nearly empty room of donors giving blood; (2) a medium shot of a nurse assisting a donor; (3) a close-up of a blood bag being filled at the end of a tube; and (4) a closing shot of a technician stacking filled bags in a large but empty cooler. This series shows how few people are giving blood and demonstrates that very little blood is on hand to give to hospitals.

BASIC SEQUENCE

A basic sequence is made up of a wide shot, a medium shot, a tight shot, and a cutaway. This is the minimum sequence, but the idea usually stays intact within many variations. This basic sequence translates into the following:

1.  Establish what the viewer is seeing: wide shot.

2.  Develop that idea by giving more detailed information: medium shot.

3.  Emphasize the details: tight shot.

4.  Add any related information, if necessary, to break the thought and prepare for the next sequence, perhaps using a cutaway.

Preparing for the next sequence, in most cases, simply means allowing for an unnoticed bridge of time in the telling of a story.

SAMPLE SCRIPT AND PHOTOBOARD

Consider the following script to open a soccer story, along with a photoboard (like a storyboard with photos) to illustrate the edits. (See Figure 7.14.)

(Fade in ELS soccer stadium with crowd natsound.)

What draws people to soccer? Is it a large stadium? How about the national anthem? Could it be a roaring crowd? Maybe it’s the sheer action: running fast, kicking hard, making the goal. When it’s over, one team leaves the winner, and one team leaves the loser, every time.

(Take shot of player with SOT.)

There can be two story lines here: the written story as it appears, and a visual story that can add even more information to what is being said. Read the script over to determine:

•  The amount of on-air time you have to cover.

•  The specific subjects that must be shown.

•  The picture information that can be added to enhance the story.

image

FIGURE 7.14
Photoboard of a possible opening sequence for a soccer story.

Allowing time between question marks for a little natural sound, or “natsound,” the script could be about 25 to 30 seconds long. Considering that a shot needs to last 3 or 4 seconds for the viewers to process (with exceptions—shorter for fast sequences, longer for slow sequences), this means the script can be covered with about 8 to 10 shots. Let’s consider 8 possible shots; more can be inserted later for quicker action.
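That estimate is just division; the sketch below runs the same arithmetic for the 25- and 30-second cases:

```python
# A minimal sketch of the timing arithmetic above: a 25- to 30-second script
# covered by shots held about 3 seconds each.
for script_seconds in (25, 30):
    shots = script_seconds / 3
    print(f"a {script_seconds}-second script at ~3 seconds per shot: about {shots:.0f} shots")
```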

Each story for a TV newscast should start off with about 3 or 4 seconds of pad: video with natsound. Because the news is live, timing errors can be made on roll cues, which can cut off, or “clip,” the first second or two before the roll-in goes out over the air. It is better to lose some natural sound than part of the reporter’s audio track, which is essential to understanding the story.

In our example, a good opening is an extreme wide shot of a soccer stadium with natsound of a crowd. After a few seconds of pad, the natsound is brought down to bed level (background) and the reporter’s voice begins the opening script. Shots two and three illustrate the reporter’s speculation about the national anthem and a roaring crowd. Shots four and five show the action of the game. Shot six is the climactic close-up of a goal. Shots seven and eight bring closure to the sequence, contrasting the winners and losers.

Note a few other important elements in this sequence:

•  It has a beginning that sets up the story—a question about soccer’s draw, followed by a middle—some reasons for the game’s popularity, and an end—winners and losers.

•  The basic shot sequence is used, beginning with a (very) wide shot to establish the story, then moving to medium shots, followed by a close-up for the greatest detail at the most dramatic moment, and ending with two slightly looser shots.

•  The closing two shots set up the contrast between winners and losers, showing the reveling winners in a group with their faces to the camera, while a member of the losing team stands alone with his back to the camera.

•  The sequence comes full circle, beginning and ending with stadium shots, starting with the anticipation of a full crowd in an extremely long shot and ending with a tight shot of a lone player in a defocused, empty stadium.

•  The guidelines for framing and composition are used. For example:

image  The rule of thirds and figure–ground principles are applied throughout.

image  Shot two uses selective focus and leading lines to draw attention to the last players in the line.

image  Shot two also shows the players looking screen right, using normal left-to-right screen direction, appropriate for the start of a game before the action begins.

image  Shots five and six reverse the screen direction, using right-to-left movement, reinforcing the drama of a competitive sport.

image  Shots five and six also maintain consistent screen direction; that is, the goalie jumps screen left and the ball enters the goal screen left.

image  Shots five and six also use diagonal lines to reinforce action.

image  All the shots flow smoothly from one to the next, maintaining continuity with no jump cuts. In most cases, the cuts reveal new subjects. In one case—the goal—two shots (five and six) illustrate the same action. Here, the match-action cut (defined in the next section) would occur on the motion of the goalie jumping toward the ball and the ball hitting the net.

Think about these shots, cut on the words as illustrated in the photoboard, and see how they fit with the script. We have interpreted the script visually. We have captured the feel of the story. We have also added to the words by showing more than the words alone can reveal, such as the excitement of the goal and the contrasting images between winners and losers.

This combination of sequence and pacing is just one possible way to cover this script. As you work with the shots in the edit room, you might find that some will look better when allowed to run longer and others when used very briefly. No two editors will cut the story the same way. In effect, there is no single right way to do it. The only common denominator is that it should tell a compelling story that flows smoothly and can be understood by the viewers.

MATCH-ACTION CUTTING

Within sequences of this type, there is a method called match-action cutting that can really make a sequence come alive. If the video was shot with this in mind, or if you as the editor are clever enough to see it in the raw material given you, match-action editing can help give dynamics to a story. The idea is to make it appear as though more than one camera is recording a scene and it is being edited live, as if you are switching between two cameras the way a director does in a studio.

To perform match-action editing in ENG or EFP, the photographer must separate the action into the different parts and then shoot each part separately. One example is the goal in our sample sequence earlier. (See Figure 7.14.) The cut between shots five and six is a match-action cut. In shot five, the goalie leaps to block the ball. On the motion of him in the air, a cut occurs to show a close-up of the ball hitting the net. Assuming the videographer covered the game with just one camera, the goalie’s leap and the ball going into the net are actually two different shots at two different times, but by editing them together with a cut “on the action” the editor creates the illusion that this is one goal.

Another good example is a factory assembly line. A sheet of metal is taken from a stack, put into a drill press, drilled, removed, and put on a new stack. Each part of the process is broken down into different shots, each from a different angle and with at least some variation in focal length.

The shots are edited together so that the viewer follows the sheet of metal through the drilling process but from many vantage points instead of just one. The worker removes a sheet from the pile in a wide shot; on his action of swinging it into place on the press, you cut to a medium shot taken from the side of the press to see the sheet slide into position (it wouldn’t be the same sheet, but they all look alike), and so on. As in the soccer goal example, the actual cut should be made on motion or movement, just after the movement begins. The human eye is programmed to track motion, so when a cut is made just after some motion begins, and that same motion continues in the next shot, our eyes follow the motion and we don’t even see the edit.

For smoothly matched action, the edits must be precise. Each edit must be very accurate with respect to the action so all the movement appears continuous. The position of the subject in the last frame of the first shot must be the same (or appear to be the same) as the position in the first frame of the second shot. If an edit is not “spot on,” it will look like a jump cut. The assembly line example is an easy one because the same thing takes place over and over; there is repeated action. It is harder to get the shots necessary for match-action editing when you have no control over the situation and things are not following a set pattern. For the soccer example, matching action is harder than for the assembly line; still, sports have regular patterns of play, making it possible for a thoughtful and quick sports shooter to shoot for match-action cuts.

Matching or repeated action exists in most things you shoot—look for it. If you are shooting in an office and one of the subjects answers the phone, talks, and then hangs up, perhaps another call will need to be answered. For the second call, choose a different angle and/or focal length. You could, say, match-cut a tight shot of the phone ringing to a wider shot just as the person picks up the receiver. A good editor sees the sequence and cuts it together to put life and interest in an otherwise dull office sequence. This can be done without staging the events; simply look for repeated action on the part of the subjects and anticipate where best to place the camera.

Even in standard interviews, the establishing two-shot can have the interviewee in the exact same position saying something similar to the beginning of the soundbite. The edit from the two-shot to the talking head shot can be made into a match-action edit. It looks sharp, but it has to be done correctly. Watch movies to see how they use matched action and then look for examples of it in TV news.

Maintaining Continuity

Even in news shooting, like movie making, the visual story is often done in bits and pieces to be assembled later. The continuity of the finished product determines how well the viewer is able to follow the story. There are a number of aspects to maintaining good continuity when it comes to choosing camera angles and shot choices in the editing process.

180-DEGREE RULE

The main element of continuity is the 180-degree rule. A simple example of this is an interview for TV news or any two-person conversation in production or theatrical settings. In the theater, the audience stays on one side of the subjects. When you are shooting, the camera replaces the audience and therefore should always stay on one side of the action. Draw a line between the two people involved in the interview or conversation. All camera angles should be taken from one side of that line or the other. You choose which side of the line to shoot from, but you must stay on only one side.

This line is sometimes called the line of interest, the vector line, or the action axis. The direction in which a person is looking determines the line, as with two people in an interview: they look at each other, creating one line between them. All your camera angles should look up or down one side of that line.

For ENG and EFP videography, a line should be established in most shooting situations: meetings, speeches, concerts, protests, marches, sports, or simply any place where there is movement. If the subject does not determine the line, draw one where you will have the best background or lighting conditions and stick with it. Your wide shot not only establishes what you are looking at but also the relationships among the objects in the picture. These relationships must be maintained. People walking left should continue walking left in any shots that show them. The line rule keeps the relationships constant throughout your sequence of shots, no matter how many shots you use. (See Figure 7.15.)

For example, a speaker delivering a speech shot from the left side of the room (as you face the speaker) will be facing screen-right. Through the rest of the piece the speaker will always face right. A line of interest is drawn between the speaker and audience. The audience will always be facing screen-left. If you shoot all your shots with this in mind, any combination of shots can be edited together, and the audience will always appear to be facing the speaker and vice versa. The viewer is never at a loss to identify the relationships among the subjects.
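Although the 180-degree rule is an aesthetic guideline, the geometry behind it is simple enough to check numerically. The short Python sketch below is purely illustrative (the floor-plan coordinates and function name are hypothetical, not from any editing package): it uses the sign of a 2D cross product to test whether two camera positions sit on the same side of the action axis and can therefore be intercut without reversing screen direction.

```python
# A hypothetical helper for checking the 180-degree rule on a floor plan.
# The line of interest runs from point A to point B (x, y coordinates);
# cameras on the same side return the same sign, so their shots can be
# intercut without reversing screen direction.

def side_of_line(a, b, camera):
    """Return +1, -1, or 0 for which side of line A->B the camera sits on."""
    ax, ay = a
    bx, by = b
    cx, cy = camera
    cross = (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)  # 2D cross product
    return (cross > 0) - (cross < 0)

speaker, audience = (0.0, 0.0), (10.0, 0.0)   # the action axis
cam_1 = (4.0, -3.0)   # left side of the room
cam_2 = (6.0, -5.0)   # also left side: safe to intercut with cam_1
cam_3 = (5.0, 4.0)    # opposite side: this angle crosses the line

for name, cam in [("cam_1", cam_1), ("cam_2", cam_2), ("cam_3", cam_3)]:
    print(name, side_of_line(speaker, audience, cam))
# cam_1 and cam_2 print the same sign; cam_3 prints the opposite sign.
```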

CROSSING-THE-LINE EDITING

These points work well when the shots are done correctly and in a controlled situation. What if the shots were not done correctly, or the situation was uncontrolled and no line was ever established? The editor still must maintain continuity for a good, understandable flow of shots. By letting the line float but always keeping it in mind, the editor can move the camera angles anywhere if he or she does it carefully step by step.

The key to continuity is movement or direction, both actual and implied: the actual movement of a basketball team on the court or the implied direction of a person sitting on a park bench. As long as it has movement or direction, any shot you start with defines your first continuity line—your line of interest. A good rule of thumb to get to the other side of that line in editing is to “turn” on one of these types of shots:

•  A shot straight down the line of interest

•  A shot that literally moves across the line, taking the viewers along from one side to the other

•  A close-up shot

•  A wide shot that cuts to another wide shot with a different line

In the example of the speaker and the meeting, a shot straight down the line of interest would come from the center back of the room and have no movement or direction. This type of shot destroys the line started from the left side of the room and gives you freedom to reestablish a different one. You could also dolly a camera from the left to the right at the back of the room, visually taking the viewers across the line. You might also turn using a close-up: Shoot a tight shot of the speaker facing left, and then cut to a wide shot from the other side of the room so that the speaker is now facing right. You have crossed the line but not confused the viewer, because the speaker (the reference point) is in both shots. The dramatic change in focal length will mask any jump in the speaker's position or posture. If you want to turn using two wide shots, you can shoot one from the left rear of the room and cut to a wide shot from the middle of the right side of the room. It is possible to turn without confusing the viewer by using wide shots, because all the elements of the scene are present in both shots, even though their positions are very different.

image

FIGURE 7.15
Camera placement and several sample shots for covering a typical speech. Here, the 180-degree line, or vector line, or action axis, is between the speaker and the audience.

For this crossing-the-line editing to work, the line cannot be crossed very often or the continuity will be lost anyway. As always, when you sit down to edit, look at all the shots available to you, not only for content but also for continuity. You should be able to separate shots into sequences by continuity, grouping shots with common lines of interest and identifying turn shots to cross those lines if necessary.

CONTINUITY WITHIN SEQUENCES

Continuity can be changed at the end of a sequence, but not in the middle of one. Each visual sequence, like a written paragraph, must stick to one subject. Within a sequence, every subject that has movement or direction must maintain that direction. To allow the viewer to understand that subject fully, each shot in the sequence must flow easily to the next shot. Each aspect of continuity must be maintained within the sequence.

Movement If the subject’s direction or movement is to the right at the beginning, it should always be to the right throughout the sequence. Watch a good action movie and look for the direction of the subjects (cars, people, backgrounds). Look for the 180-degree rule and study how it is used. The continuity is usually very good in action movies. Also watch how they use turn shots to change the line.

Details Continuity also refers to other elements in the picture besides movement. Not only must directional and spatial relationships be maintained, but also the details within the sequences. An obvious example is the clothing a subject is wearing. If the subject has on a green shirt in one shot and a blue shirt in the next, but there is no implied change in time or place, then there is an obvious break in continuity. This also applies to the details of position and tone. You can’t cut from a medium shot of the mayor slumped in his chair on the phone to a wide shot of him leaning forward drinking coffee, or from a tight shot of the councilwoman’s angry glare on the podium to a medium shot of her laughing. They just don’t fit together.

Background Objects in the background cannot move from one shot to the next because there will be a disruption in the sense of reality. For example, furniture in a room must stay in the same arrangement. Continuity means that background elements must remain the same within the framework of the story line. For ENG and EFP, many elements are not controllable, but you still must avoid very obvious breaks in continuity. In a story about a family moving out of their house, you would not show a scene with the father packing the last box in an empty room and then cut to the mother packing a box with that same room half full.

Lighting The lighting within a sequence must also remain the same. Shots taken on a cloudy day cannot be intercut with shots taken in full sunlight. A dusk-to-night outdoor concert should not show the group playing at full darkness cut with shots of the audience in sunset lighting. The time difference is too great and noticeable to even the least discriminating viewer. TV viewers are all professionals at TV watching—they have been doing it almost all their lives.

Sound Continuity of sound, or aural continuity, is as important as visual continuity. When an exterior shot shows a busy street, we expect to hear traffic sounds. When a person is in a cave, we expect to hear an echo. When two people are talking in a room, we expect them to sound as if they’re in the same room. If the sound does not match the image, or if the sound quality changes noticeably mid-conversation, the audience will be distracted. This is the aural equivalent of a visual jump cut.

To maintain aural continuity, two elements are key. First is the microphone. Use the best-quality microphone you can, with the appropriate pickup pattern, for the recording situation, and place the microphone as close to the sound source as you can without distortion. In this way, you will record clean, top-quality sound from all audio sources, and that high quality will help maintain aural continuity in editing.

Second is background sound, or ambient audio. Always record 30 to 60 seconds of just the background, with no one speaking, for every scene. This background ambience is called "natural sound" (natsound) for exteriors (outside) and "room tone" for interiors (inside). At the location you are shooting, either before or after the scene and when no one is talking, use the same good-quality microphone you selected for the shoot and simply press the record button to record natsound or room tone for one-half to one minute. In postproduction, when the dialogue is edited together from the different camera angles, if the corresponding audio has an aural "jump" in continuity, the editor can lay down a "bed" (background level) of natsound or room tone to smooth over the edit. With the background sound edited smoothly and subtly under the dialogue, the ear is fooled into thinking the conversation is continuous and uninterrupted; that is, the aural continuity is restored.
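As a concrete illustration of laying a bed, the following minimal sketch uses the pydub audio library; the file names and the 18 dB background level are assumptions for the example. A continuous stretch of room tone is dropped well below the dialogue and run across the edit point between two takes.

```python
# A minimal sketch, assuming the pydub library and hypothetical file names,
# of laying a quiet bed of room tone under a dialogue edit.
from pydub import AudioSegment

take_a = AudioSegment.from_file("dialogue_angle_a.wav")
take_b = AudioSegment.from_file("dialogue_angle_b.wav")
room_tone = AudioSegment.from_file("room_tone.wav")   # 30-60 seconds recorded on location

dialogue = take_a + take_b                 # the straight cut between the two takes
bed = room_tone[:len(dialogue)] - 18       # trim to length; drop 18 dB into the background

smoothed = dialogue.overlay(bed)           # the tone runs continuously under the edit point
smoothed.export("dialogue_smoothed.wav", format="wav")
```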

Establishing a Story Line

ENG and EFP productions tell some sort of story. As a videographer or editor, it is your job to make that story come alive and make it understandable within the confines of the script and the time limit. Many news pieces and commercials have no real visual story line, just a sequence of shots that show a particular subject. But whenever any action occurs or time obviously passes during a shoot, it can be put into story form. The script determines most, but not all, story lines in EFP. In ENG, the subjects themselves determine the story lines. Whenever you shoot or edit for either news or nonnews, your goal should be to establish a good story line.

In the scriptwriting chapter, we cover the elements of a basic story: a beginning, a middle, and an end. In visual terms, the beginning is like a wide shot; it establishes the primary setting, characters, and relationships. The middle is like a medium shot; it pulls the viewers into the story with more details that move the characters and events forward. The end is like a close-up; it drives home the main idea with the most intense or dramatic moments. Each of these three parts of a story can be conceived as consisting of one or more paragraphs, or segments. By visualizing each segment, you can shoot and edit a thorough, interesting, and satisfying story.

VISUALIZING PARAGRAPHS

No set length or number of shots makes up any segment. The beginning may be just one shot or many. The total length of the story usually determines how long each segment will be. A 90-second story probably will not have a 30-second opening sequence. Once you have established a story line in your head or on paper, break it down into its beginning, middle, and end. Take each part and look for the visual paragraphs, or sequences, that make up that part. By organizing yourself before you shoot and edit, these visual paragraphs should come together in a flowing, descriptive story.

In many TV scripts, it is impossible to establish much in the way of a visual story line. Many pieces end up being laundry lists of shots or wallpaper jobs. The script has no real visual interpretation, except for the very literal. A story about banks that are in financial trouble may be made up of exterior shots of the banks named in the story. The news photographer and editor have little creative input on the story line. If the writer and photographer can work together as much as possible, some of these situations can be avoided or worked out.

The point is to strive for good TV—that mesh of good audio and good pictures that communicates the maximum information to the viewer. In following the script, strive for the best sequencing and story line. You have a good chance of communicating something if you can visually hold the viewer’s interest. Sometimes pretty pictures are the best solution to the story line problem if you cannot obtain sequencing within the confines of the script. In this case, each shot should be able to stand alone as a complete idea or picture.

SHOOTING WITHOUT A SCRIPT

The biggest difference between ENG and EFP is the order in which the product is assembled. For EFP, you are shooting to a script, and it is easy to get what you need to cover that script. You go out knowing which pictures to get. For ENG, you are shooting for a script that has not been written yet. It is hard to second-guess how the final story will be structured, what parts will be included or left out, and what specifics will be written about. You must shoot to maximize the editor’s latitude when the piece is edited. At the same time, you cannot provide too much material, because there will not be enough time to go through it all within the usual TV news deadlines.

Sometimes you must shoot for two or three different story lines because the outcome or direction is unclear as the story develops before you. At a certain location, the story might be the crowd at the beach, the heat, the traffic, the troublemakers, or people being turned away because the park is full. All these elements, or only a few, can be included in one story, or you may concentrate on just one. The final script determines the type and amount of material that should be shot, but the final script does not materialize until after the shooting is over. How do you cover all the possibilities and come up with good sequences and story lines but not overshoot?

The writer-producer is often not present when you shoot the video. However, if you follow the basic guidelines regarding what kind of shots to get, and keep in mind what it takes to edit a story, you should have the material for any good basic piece. If you look at each situation as a mini-story (beginning, middle, end) and shoot each situation as though it will be sequenced together (wide shot, medium shot, tight shot, cutaway), then you have covered all the bases.

By getting the minimum number of essential shots, you have covered the story and given the editor the basis for cutting to almost any script. Get the basic four-shot sequences first, just in case that is all you get. Extra shots or artistic shots can be taken only after the basics are recorded and time permits. If the editor is in a hurry, there must be places on the videotape or clips among the video files where the basic shots can be found without much searching through shots that, although good, may be of lesser interest or importance to a basic story line.

Pacing

The last element in the relationship among shots in editing is the pacing, or timing, of the shots. The timing of each shot helps determine the mood of the piece. As a general rule, a shot less than two seconds long will not be consciously perceived by the viewer unless it is a very graphic or aesthetically simple shot. A shot longer than seven seconds with no movement is usually longer than the viewer’s attention span. The average length of most shots in EFP and ENG is about four seconds. A zoom, pan, tilt, or action in the picture can allow a shot to run almost any length, depending on the mood you are trying to capture.
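As a rough first pass, these guidelines can even be checked mechanically. The sketch below uses a hypothetical shot list (the names and durations are made up) and simply flags static shots that fall outside the two-to-seven-second range; the next paragraph explains why such numbers are only a starting point.

```python
# A rough first-pass check of the pacing guidelines above, using a
# hypothetical shot list (the names and durations are made up).
shots = [
    {"name": "WS flooded street", "seconds": 5.0, "has_motion": False},
    {"name": "CU storm drain",    "seconds": 1.5, "has_motion": False},
    {"name": "pan along levee",   "seconds": 9.0, "has_motion": True},
]

for shot in shots:
    if shot["has_motion"]:
        continue  # moving shots can run longer or shorter, depending on mood
    name, seconds = shot["name"], shot["seconds"]
    if seconds < 2.0:
        print(name, "- may be too short to register consciously")
    elif seconds > 7.0:
        print(name, "- static shot may outlast the viewer's attention")
```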

Ultimately, it is not a fixed number of seconds that is important. What counts is that the shot is on long enough for the viewers to “get” the information and not so long that it causes them to lose interest. So just when should you cut from one shot to another? The overarching answer is: when the visual statement is complete. That is, once the viewers see the expression on the face in the close-up, or note the object in the subject’s hands in the medium shot, or recognize in the wide shot that the scene is set on a flooded street, it is time to cut to the next shot.

EDITING FOR DYNAMICS

If all the shots are static with no camera moves, the pace of the edits will generally be quicker than if there are some camera moves or action shots. If you are cutting several static shots together, try not to make the edits on a predictable beat. Vary the time between edits to give the piece some dynamics of its own. Let wide shots stay up longer than tight shots. It is easy to see what is in a tight shot, but a wide shot usually contains more information that takes longer to perceive.

Zooms and pans must be allowed to run their course. Cutting in the middle of camera movement is most often uncomfortable to the viewer. By their nature, these types of shots should be going somewhere, and cutting out early makes them unfulfilling to the viewer. Anticipation is created with no real payoff.

If movement is needed, but the entire shot is too long, it is better to start in the middle of the movement than to end in the middle. Let the shot finish. It is usually easier to see where the shot was coming from than not to know where it is going. It sometimes works just to use the middle of the movement, no start or finish, as long as you can tell what it is you are looking at, such as a long pan of rows of books. Seasoned videographers sometimes shoot camera moves three times: one with a slow pan or tilt or zoom, one with that same movement at a medium speed, and a third at a fast pace. This gives the editor a choice to select the camera move at the speed that best fits the pace of that moment in the story. Additionally, trained shooters hold the start of the shot static for about 10 seconds, and then hold the end of the shot after the camera move for another 10 seconds. In this way, the beginning and ending images are recorded long enough to use as static shots, in the event the editor chooses not to use the camera move itself.

A camera move shot has a certain mood to it that may not fit with the rest of the piece. Camera moves are most often used to add dynamics to what editors and producers see as dull pieces. There is a fine line between adding editorial interest and false excitement. Camera moves can add complexity that you may not want in your piece, distracting the viewer from the subject at hand. That is why most new videographers are asked not to use zooms and pans until all other basics have been mastered.

AVOIDING PREDICTABILITY

While staying within the sequencing, story line, and continuity guidelines, try to vary the pace of the shots enough to avoid any predictability. The worst case is when the viewer can tell when the next edit is about to occur. The viewer should always be expecting more information (until the end of the piece) but should never be able to guess how or when it will come. As long as this anticipation is satisfied, and the viewer cannot predict the next edit, the edit pace is correct. A fast-moving story requires faster edits. A slow-moving story requires more time between edits. A good action piece can have quite a few short shots if they advance or enhance the action. In a fast-paced sequence, the shots may be shorter than three seconds, even as short as 20 frames, but they must still be aesthetically clean so that there is not too much information and the viewers can perceive what is in the frame. This usually means using many close-ups and extreme close-ups.

EDITING TO MUSIC

Cutting to music is a good example of following a preset pace. Most of the time, it does not look good to cut on the simple beat of the music because it is too predictable. You will have a better flowing piece if you cut on the back beat, but even then not on every beat. Use the edits to emphasize or punctuate the music, so that the cuts are not simply a tapping foot, blindly following the lead of the music.

Picking out one instrument to follow with the edits can give the edit pace a nice tie-in: The images will flow with the music but never be predictable. Sometimes, switching from one instrument to another for different parts of the song can add to the interest of the pacing. With the current abundance of rock videos, there are many examples of good editing to music. Take a close look. If you turn down the sound and watch the edits, you can get a feel for the dynamics of the editing without the music. Learning to feel the pace without audio clues is a good way to learn any style of editing.

VARYING THE EDITING SPEED

By changing the pacing of edits, you can change the whole mood of the piece. Switching from long-running shots to quick edits can heighten tension, action, excitement, or anticipation. Slowing down the pace can give a more relaxed feeling, an easier flow, or an emotional touch with the feeling of relaxation, serenity, or even sadness. Sit back and watch how your piece plays after you complete each segment. Do not just watch how the shots fit together, but watch how the piece feels as it moves along. Is it too fast or too slow? Does it convey the wrong mood? Does it flow as one unit, or is it simply a slide show?

Ask another editor to take a look at your piece. Sometimes you can be too close to your own work to give it an objective critique. Bad pacing can make a piece drag on forever or seem as choppy as rough seas. Good pacing can make a piece fly by while conveying much information, or touch the hearts of the viewers through its warm flow of images.

COMPREHENSION

One of the biggest and most common mistakes made in all forms of video production is failing to perceive the finished product as a viewer would. The mistakes are often most noticeable in news stories. In the drive to make pieces exciting and dynamic for the viewers, the editors make use of every trick to keep the flow of images coming at a blinding pace. Fast cuts, zooms, and special effects abound. Music videos seem to be the standard by which stories are cut.

The problem with the music video style is basic: comprehension—and the lack of it. Music videos are cut the way they are so teenagers can see the same video dozens of times and still get something new out of it each time. The satisfaction gained from a single viewing is extremely low for that reason: The producers want you to see it over and over. The typical news story is just the opposite. By far, the majority of the audience sees a story one time and one time only. If there are any distractions at all while viewing the already short presentation, comprehension is thrown off. If there is no time allowed for absorption, what chance does the viewer have for understanding?

It is important that you, as the videographer-editor, make sure the edit pacing is right for the comprehension of the story as well as the dynamics of the story. Sometimes the rapid assault of images effectively conveys the emotion and content for which you are striving. But if there is more to communicate than that, make sure there is breathing space for the audience to take it in.

Adding Postproduction Value

Up to this point, we have been addressing the most used type of edit, the simple cut: an instantaneous transition in which the full frame of one shot replaces the full frame of the previous shot. However, editing software allows you to add other types of transitions and effects to your editing. The most common effect is the dissolve or mix, in which the two images are blended momentarily during the transition. Other effects are also available, such as wipes and squeezes, in which geometric patterns are used to replace one image with another or one image changes size or dimension, perhaps appearing to zoom into or out of the previous image.

In most video editing programs, you simply place two shots together on the timeline and then select the effect you want from some type of “Effects” menu. In some programs, you might place one shot on the “A” video track and the second shot on the “B” track. You then overlap the two shots by the number of frames you want the effect to last (e.g., 30 frames for a one-second effect), and drag and drop the type of effect over the overlapping edit.
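Under the hood, a dissolve is nothing more than a weighted blend of the outgoing and incoming frames across the overlap. The following minimal Python/NumPy sketch is illustrative only (the frame lists and the 30 fps assumption are hypothetical): it mixes the last 30 frames of shot A with the first 30 frames of shot B for a one-second effect.

```python
# A minimal sketch of a one-second (30-frame) dissolve. frames_a and frames_b
# are hypothetical lists of decoded frames (NumPy arrays, height x width x 3).
import numpy as np

def dissolve(frames_a, frames_b, overlap=30):
    out = list(frames_a[:-overlap])               # shot A up to the start of the effect
    for i in range(overlap):
        alpha = (i + 1) / overlap                 # blend weight rises toward 1 across the overlap
        a = frames_a[len(frames_a) - overlap + i].astype(np.float32)
        b = frames_b[i].astype(np.float32)
        out.append(((1.0 - alpha) * a + alpha * b).astype(np.uint8))
    out.extend(frames_b[overlap:])                # the rest of shot B
    return out
```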

The use of an A track and a B track has its roots in the early days of TV, when most news stories were cut on A and B film reels (or rolls). The A-reel had all the pictures with the sound: talking heads, soundbites, stand-ups, and so forth. The B-reel had all the cover footage to be used over the reporter's voice track. The piece would then be assembled live on the air. It required the technical director to switch between the two cameras pointed at the film machines at the proper times so that there was always a picture on the air. This led to the term B-roll, referring to all the shots without dialogue that illustrate what a speaker is discussing—the close-ups, inserts, cutaways, and so on that visualize a story. This technique was appropriately called A/B-roll editing. (See Figure 7.16.) That same concept is found today in editing software that uses both A and B tracks on the timeline.

THE DISSOLVE

Special effects can allow an editor to explore a whole new area of pacing and mood creation. The dissolve or mix can be a boon or bust to the finished piece. The dissolve is an excellent way of showing the lapse in time from one shot to the next. To go from the countryside in daylight to the city at night with a straight cut would be rather abrupt, but a dissolve can make the transition smooth and even artistic. (See Figure 7.17.)

image

FIGURE 7.16
An editor operates an older A/B-roll editing system. Multiple VCRs and monitors are mounted in the racks. The edit controller, video switcher, effects generator, and audio mixer are mounted in the console. Today, all this equipment is replaced with a computer and editing software. (Photo by John Lebya)

In many pieces, this way of showing the passage of time can aid in the telling of the story, because fewer shots are needed to make the transition. You can take a subject from one location to another with a simple dissolve instead of transition shots. In a long piece, it is a good idea to use both transition shots and dissolves for variety.

When a piece calls for a slow-edit pace, a dissolve adds to the relaxed feeling and to the flow from one shot to the next. In going from static shot to static shot, such as shots of photographs from an old family photo album, the dissolve takes the hard edge off the edit and gives a desirable, fluid transition. For the artistic piece—the story on fall colors or the day in the life of a nursing home—the dissolve can add to the beauty of the shots or give a feeling of sensitive compassion.

The basic rules of editing should still apply, however. You do not dissolve between two shots that are very similar in composition. You still try to give variety to the shot selection and follow basic sequencing patterns. For a solo dancer on a stage, dissolves are desirable, but each shot should be as different as possible from the next. If the dancer is framed screen right in a wide shot, the next shot could be a medium shot with the dancer in the left part of the picture. In other words, do not overlap similar images.

Let the mood and pacing of the piece determine how long a dissolve should last. A duration of 30 to 90 frames (one to three seconds) seems to look best for most uses. The slower the pace, the slower the dissolve. You must keep in mind, however, that making all edits into dissolves can make the piece boring and predictable. Try to have a good practical or artistic reason for each dissolve and any other effect you use.

THE WIPE

Wipes come in a great variety; the standard left-to-right straight edge is the most common. (See Figure 7.18.) With digital effects, wipes can be as wild as you can imagine and the effects just as varied. Most of the literal graphic ones, like spinning stars or heart shapes, have little place in ENG, but have some application in EFP. The straight-line wipe is used in live newscasts to go from one story to the next without having to cut back to the set for the transition.

You very seldom see a wipe used in a produced news story. When it is used, however, its use is similar to that in the newscast itself: to go from one thing to something totally separate. If several pages of written information are to be put on the screen, a wipe is used to go from one page to the next, such as in election-night tallies. Digital wipes, such as page or cube wipes, are very popular for this type of transition. The different types of wipes are often used in entertainment programs and commercials, to give the production variety and a jazzy look.

image

FIGURE 7.17
A dissolve is used here to signify the transition of time from day to night. (A) Daytime shot of villagers outdoors. (B) Dissolve is half finished (superimposition), blending daytime and nighttime shots. (C) Dissolve is complete, ending on nighttime shot.

COMPOSITING

Many software programs allow you to composite, or matte, images—to blend parts of two or more pictures to create new pictures. (See Figure 7.19.) You probably know that when you watch the weather announcer on your local news, that person is really standing in front of a screen (probably green or blue) and the imagery behind the person is placed there electronically. This is one common use of compositing, called a chroma key, in which a color (blue or green) is keyed out and replaced with something else—in this case the weather images. Other types of keys include luminance keys, in which brightness levels determine where other images are inserted, and matte keys, in which a separate source defines the area to be replaced and can be created in any of a number of ways, including selecting individual pixels.
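A chroma key can be sketched in a few lines of code. The example below assumes the OpenCV and NumPy libraries and hypothetical image files: pixels that fall inside a rough green range are treated as the screen and replaced with the background plate.

```python
# A minimal chroma-key sketch (assuming OpenCV and hypothetical image files):
# pixels within a green range are keyed out and replaced with the background,
# as with a weathercaster standing in front of a green screen.
import cv2
import numpy as np

foreground = cv2.imread("talent_on_green.png")      # subject shot against green
background = cv2.imread("weather_map.png")          # image to key in behind them
background = cv2.resize(background, (foreground.shape[1], foreground.shape[0]))

hsv = cv2.cvtColor(foreground, cv2.COLOR_BGR2HSV)
lower_green = np.array([40, 60, 60])                # rough green range; tune per shot
upper_green = np.array([80, 255, 255])
key_mask = cv2.inRange(hsv, lower_green, upper_green)   # 255 where the screen shows through

composite = np.where(key_mask[:, :, None] == 255, background, foreground)
cv2.imwrite("composite.png", composite)
```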

Compositing is more likely to be used in EFP than in ENG. Because of journalists’ ethical obligations of truth and accuracy, it is not usually acceptable to remove a person from one image electronically and insert that person into another image. Only in rare cases where a person really was in some situation or location, and the original photos or video sequences were lost or destroyed, could an argument be made for compositing that person into an image of that same situation or location. Again, the obligation is to represent the truth of the story, so caution must be exercised not to distort any facts with fancy matting tricks.

In EFP, however, with the understanding that scripts are created to tell the clients’ stories rather than to report the news, it is more common to see composite images. They can enhance a script with the viewers’ understanding that the illusion of certain places and times is important to the story, even if the actual places and times cannot be photographed. For example, a corporate video might call for the company CEO to appear in a faraway location that the budget cannot afford. In this case, the CEO can be shot against a solid-color backdrop, and stock footage of the exotic location can be keyed in later, creating the desired illusion and saving a costly trip.

Editing Sound

Traditionally, sound editing has not been as complicated for TV as it has been for the movies. The poor quality of most older TV speakers and the conditions under which most people watch TV reduced the need for good sound. Today, however, sound editing is every bit as crucial as picture editing. Anyone can download high-fidelity MP3 sound files from the Internet, so why should TV audio be of any less quality? Additionally, today’s high-definition big-screen TVs demand big sound from the speakers. With HDTV, viewers expect HD sound quality. High-fidelity stereo mixes and 5.1 surround sound are the norm. Of course, these high-quality soundtracks might be mixed down to simple mono for playback on low-end systems, such as standard VHS tape machines, just as the high-quality images might be compressed for Internet or cell phone distribution. However, many clients want the original project to be mastered with the highest possible sound and picture quality so that high-quality copies can be dubbed for distribution in addition to any lower-quality versions.

image

FIGURE 7.18
A wipe is used here to signify the transition from one location to another. (A) Golden Gate Bridge. (B) Wipe is half finished (split screen), combining the bridge and lighthouse shots. (C) Wipe is complete, ending on lighthouse.

ACCURATE REPRESENTATION OF THE EVENT

The two sources of audio in ENG are the audio of the talent (news anchor or recorded reporter) and the sound accompanying the pictures. Because most TV news is based on journalistic standards, the addition of any other audio is frowned on. The addition of music is the only exception, although in some cases it is not desirable. Adding sound can be misleading, deceptive, and sometimes downright dishonest.

The most you can do in ENG is move the sound around from one shot to another, but the sound must accurately represent what you would hear if you were there. An example is a shot of a mine with a whistle blowing; the next shot is of miners filing out to go home. The sound of the whistle may not have been recorded at the same time as that shot of the mine, but it did blow while the crew was recording, and it did signal the end of a shift. The sound was used correctly.

An example of sound used incorrectly is a shot of people at an accident scene and the news photographer running up to the injured on the ground while a siren is heard. The siren in this case was taken from a story shot last week and used to add a feeling of breaking news to the piece. The photographer actually arrived late. In this case, the siren should not have been used at all. It made the story into something it was not. If you did not get the sound at the location, you should not manufacture sound to make it appear as though it came from the location. If it makes the pictures seem different from what they really were, then the sound should not be added. Sound needs to depict what happened accurately.

ADDING SOUND FOR EFFECT

If you record an explosion from a mile away, it can take the sound of the explosion several seconds to reach the camera. Do you move the sound? For EFP the answer is simple, because any sound is fair game if it enhances the idea you are trying to get across. You would have to get very far out of line to violate the “truth-in-advertising” law.
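The delay is easy to estimate. Assuming sound travels at roughly 343 meters per second and a mile is about 1,609 meters, a quick calculation shows why the offset is so noticeable:

```python
# A back-of-the-envelope check of that delay: sound travels at roughly
# 343 meters per second, and a mile is about 1,609 meters (30 fps assumed).
SPEED_OF_SOUND_M_PER_S = 343.0
distance_m = 1609.0
fps = 30

delay_s = distance_m / SPEED_OF_SOUND_M_PER_S
print(f"sound arrives {delay_s:.1f} seconds late ({round(delay_s * fps)} frames at {fps} fps)")
# roughly 4.7 seconds, or about 141 frames, between the flash and the bang
```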

For ENG the question is harder to answer. Years of Hollywood conditioning have made audiences expect to hear the sound at the same time they see the explosion. In real life, however, the sound and picture do not match. What do you do?

You can assume that sound and picture are in sync at the point of origin (the explosion site). The audio can be synced back up in editing if the shot contains the explosion as the only audio source. If, however, there are people in the foreground reacting to the explosion as it happens, their audio, and therefore the audio from the explosion, cannot be moved. Moving the explosion’s audio would distort the people’s reaction to it.

There is, nevertheless, room for creativity when it comes to audio in ENG. You can add sound where it is obvious to the viewer that the sound is added for effect. Shots of an abandoned schoolhouse with the sounds of a school bell and children playing can give a powerful emotional touch to the scene. It is obvious that no children have been there in decades, but the audio implies the rich history of the once thriving school.

Imagine a reporter doing a stand-up in front of a roaring water pump with a mic that does not pick up the sound of the pump because of its placement. You see the pump, but you do not hear it. By adding the background sound of the pump in editing, the shot seems to come together better. All the pieces fit and work together for the overall effect. These are just a few examples of adding sound to enhance ENG work, but you must use sound carefully. It is a fine line that separates enhancement from deception.

image

FIGURE 7.19
Compositing, or matting, is used here to blend two images. (A) Woman with megaphone. (B) Soccer team. (C) Composite image—the white background behind the woman has been replaced with the soccer team. Note that the soccer team picture has been cropped to place the team farther to the right, allowing room on the left for the woman with the megaphone. Also, the image of the woman has been resized (slightly smaller) to fit better into the composition, while still covering the light pole in the background so it does not appear to grow out of her head. Together, the composited images create the illusion that the woman could be a cheerleader for the team.

AVOID ABRUPT EDITS

In general, avoid abrupt starting and stopping when editing audio. Abrupt starts and stops are to audio what jump cuts are to video: They distract the audience. Even when audio must come in very quickly, a fast fade-in is better than a full-volume take. A cut made in the middle of a bell's ring, for example, does not sound right; either a quick fade-up or the natural starting point of the sound would be preferred. Cutting off audio is the same: Find a natural end for the sound or fade it out quickly. Background audio can come and go with the edit points as long as the audio is truly in the background. Every picture has a sound, unless it is a graphic or a freeze-frame. There is background sound for just about everything.
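A minimal sketch of this practice, again assuming the pydub library and a hypothetical clip, eases a natural-sound element in and out rather than cutting it at full volume:

```python
# A minimal sketch, assuming the pydub library and a hypothetical clip,
# of easing natural sound in and out instead of cutting it at full volume.
from pydub import AudioSegment

natsound = AudioSegment.from_file("school_bell.wav")

# Durations are in milliseconds: long enough to avoid an abrupt start or
# stop, short enough that the sound still feels immediate.
eased = natsound.fade_in(150).fade_out(250)
eased.export("school_bell_eased.wav", format="wav")
```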

NATURAL SOUND

A good news package opens with a picture that begins to tell the story or captures the viewer’s attention. A reporter stand-up opening is often boring and gives the viewer little to look forward to. It looks like more news anchor and not more news. With an opening shot, there should be some good natsound.

Use with Opening Video A story on flooding might open with a shot of water flowing over a dam. The roar of the water is heard for a few seconds before the reporter’s voice comes in. This breaks the constant flow of talking and can spark someone’s interest to look at the TV instead of only listening to it.

Not only must the pictures be good, but the sound must also be good enough to make someone want to watch the pictures. Good use of natural sound can draw the viewer into the story and give the pictures that “you-are-there” feeling. This means you should open a story not only with the best picture you can but also with the best sound.

Use as a Transition You can use the natural sound of the pictures to break up paragraphs in the track, get into or out of soundbites or talking heads, and bridge a gap from one part of a story to another. To move from talking about people buying new homes to discussing the number of new homes being built, you could make the transition on a shot such as an electric saw (with the sound up full) cutting a board in front of new construction. After a couple of seconds of the saw, the reporter continues the story, now talking about all the new construction. Time limits can make this type of editing difficult, but if the story is well thought out, and the reporter and videographer work together on producing it, the end product will show the effort and have a greater impact on the viewer.

ROOM TONE

One thing professional editors always ask camera crews to get while recording interviews on location is 30 to 60 seconds of room tone: the ambient sound of the location without any of the subjects talking. In high-quality editing, where two soundbites are to be edited together, there might be a difference in the background noise from one bite to the next. To disguise that difference, some of the ambient sounds of that location can be laid in under the edit point to bridge from one shot to the next. This makes the audio edit sound more seamless.

image

FIGURE 7.20
A diagram of the L cut; here, the audio for segment B precedes the video by a few seconds, giving segment B the appearance of an L lying on its back.

THE L CUT

A popular form of creating an audio transition is to start the audio of the next shot (usually the beginning of a new sequence) under the current shot. This is called the L cut. On the editing timeline, it actually looks like a "lazy L" because the audio on the lower audio track begins ahead of the video on the upper video track. (See Figure 7.20.) For example, we see the building planner looking over the drawing for the new housing development as the reporter's track about the project comes to the end of a paragraph. While the planner is still on screen, we hear the sound of a buzz saw ripping through wood for about a second before the picture of that saw pops up and starts the next section of the story about construction. The audio pulls the viewer into the next sequence and softens the transition from one location to the other.
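The audio side of an L cut can be sketched as follows, assuming the pydub library and hypothetical clips from the example above: segment B's buzz-saw sound is faded in under the last second of segment A, so that by the time the picture cuts, B's audio is already established.

```python
# A minimal sketch, assuming the pydub library and hypothetical clips, of the
# audio side of an L cut: segment B's sound leads the picture cut by one second.
from pydub import AudioSegment

audio_a = AudioSegment.from_file("planner_scene.wav")    # audio under segment A
audio_b = AudioSegment.from_file("buzz_saw_scene.wav")   # audio for segment B
LEAD_MS = 1000                                           # B's audio leads the cut by ~1 second

video_cut_ms = len(audio_a)                  # the picture cuts where segment A ends
program = audio_a + audio_b[LEAD_MS:]        # after the cut, B's audio is already 1 second in
# Fade the first second of B's audio in under the tail of segment A:
program = program.overlay(audio_b[:LEAD_MS].fade_in(300),
                          position=video_cut_ms - LEAD_MS)
program.export("l_cut_audio.wav", format="wav")
```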

MULTIPLE-SOURCE AUDIO MIXING

Every editing software program offers multiple channels of audio with which to work. (See Figure 7.6.) One track is usually designated for the reporter’s audio or dialogue and a second track for all natural sound and soundbites. If music is appropriate and desired, it goes on a third track. Additional sounds might require additional tracks.

Most editing programs offer enough audio tracks for ENG and EFP. Low-budget projects have many fewer audio tracks than professional movies, which can reach 100 or more channels. However, in the rare case that an ENG or EFP project requires more audio layers than are available in the software, the editor can use the methods of laydowns and laybacks.

In our previous example, let’s assume only two tracks are available in the editing program. The editor would decide which two audio sources were most important for determining the pacing and shot selection for the story. The story would be edited with just those sources all the way to its conclusion. For this discussion, assume it is a reporter’s voice track and the natural sound of the pictures that are laid down first. Wanting to add a music track under the entire piece, the editor would first take both of the audio tracks of the finished piece and do a mixdown or laydown by exporting the project as a new file with those two audio channels mixed. This mixed audio would then be laid back onto just one channel, leaving the second channel free to mix in the music.
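A minimal sketch of that laydown/layback sequence, assuming the pydub library and hypothetical file names, might look like this:

```python
# A minimal sketch, assuming the pydub library and hypothetical file names,
# of the laydown/layback workaround when only two audio channels are available.
from pydub import AudioSegment

voice = AudioSegment.from_file("reporter_track.wav")
natsound = AudioSegment.from_file("natsound_mix.wav")
music = AudioSegment.from_file("music_bed.wav")

# Laydown: mix the two existing channels into a single track and export it.
laydown = voice.overlay(natsound - 10)        # natsound held 10 dB under the voice
laydown.export("laydown_channel_1.wav", format="wav")

# Layback: the mixdown now occupies channel 1, freeing channel 2 for the music.
channel_1 = AudioSegment.from_file("laydown_channel_1.wav")
final_mix = channel_1.overlay((music - 16)[:len(channel_1)])
final_mix.export("final_mix.wav", format="wav")
```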

image

FIGURE 7.21
The top diagram shows checkerboard editing. Here, the editor first lays down the reporter with sync dialogue at the front; then lays down the music while leaving that section of video and sound effects blank to edit visuals and natsound to the music later; then lays down the interviewee with sync soundbite. In contrast, the bottom diagram shows section-by-section editing. Here, the editor lays down each piece as he or she works: first the reporter with sync dialogue; then music with the accompanying visuals and natsound edited right away; then the interviewee with sync soundbite. Note that the editor chooses to fade the music in at the end of the reporter’s stand-up (L cut, see Figure 7.20); then bring it up full for the montage of B-roll shots; then fade it out under the interviewee’s soundbite (reverse L cut).

EDITING METHODS

Many problems can arise in audio editing. An image you choose to accompany a soundbite turns out to be too short or too long. An audio clip of natsound does not match a picture. The reporter’s voice is difficult to time with music fading to background. The best way to avoid problems is to plan the editing well in advance. Two methods, or approaches, to editing are useful to consider: checkerboard and section by section. (See Figure 7.21.)

Checkerboard One method is to edit only the principal images and sounds first for the whole piece. For ENG, this might be the reporter’s talking head and some images that have natsound you plan to use “up full.” After editing these primary shots and full-volume sounds, the video and audio tracks might have some blank spaces where you plan to fill in additional images and sounds to finish the story and give it its final polish. This pattern of a shot followed by a blank space on the video track (which is black by default) followed by another shot followed by more black is sometimes called checkerboard editing. The idea is to edit only the major story images and sounds, and then go back and fill in the rest, including graphics.

The advantages of this approach are that you can:

•  Make the most critical editing decisions first, followed later by the quicker decisions about filler elements.

•  See what video and hear what audio stand on their own and identify what you will need to replace or enhance later.

•  Time your piece before it is actually completed.

The disadvantage is that if you are facing a severe time crunch, you might still have blank moments of video or audio, or have incomplete graphic work, such as missing lower-thirds, when the project is due.

Section by Section A second approach to editing is to edit one complete section at a time. With this method, you include all the visual and aural elements as you go, finishing one part of the whole story before continuing to the next part. You leave no blank spaces, but complete the polished edit piece by piece.

The advantages of this method are that you can:

•  Fine-tune each part of the story as you go, giving you the freedom to change any parts of the story as you come to them.

•  Get a stronger sense of the overall visual style of the piece, including type fonts and colors for graphics and other elements.

•  Skip over entire parts of the story and go straight to the end in the event of a severe time crunch, giving you at least some form of finished project to meet an air deadline.

The disadvantage is that it is too easy to get hung up on one small edit, such as the exact position of a graphic element or the exact number of frames for an audio fade-in, resulting in a loss of editing momentum and of the ability to see the "big picture" of the whole story.

In reality, most editors use a combination of both checkerboard and section-by-section editing. Depending on the time crunch, the amount of A-roll footage versus B-roll footage, the editor’s own working style, and other variables, an editor might start with a checkerboard, then come to a segment that he or she completes in its entirety, then go back to a checkerboard for another segment, and so on.

Music Editing When you use music, it must be laid down first if any of the video is to be edited to it. If nothing is to be in sync with the music, then it is best left until last so that it is easier to mix with the other audio. Again, planning is the key.

If all or part of the story is to be edited to music, start by laying in the music where it is the primary sound. Next, lay in all the other audio that is to be up full (reporter’s track, soundbites, and natsound with pictures) in the proper place. Finally, insert the rest of the shots, editing them to the music and any natsound, if appropriate or needed. Keep the music level up full when the music is the primary, or foreground, sound element. Fade the music underneath the other sound elements when they are foreground and the music is the secondary, or background, element (figure–ground principle). If planned properly, this method lets you edit to the music without affecting the placement of the rest of the audio so that the finished piece has all the elements timed perfectly.
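The figure-ground levels described here can be sketched in code as well. The example below (pydub again, with hypothetical timings and levels) keeps the music up full under a B-roll montage and then drops it to a background bed under a soundbite:

```python
# A minimal sketch, assuming the pydub library and hypothetical timings, of the
# figure-ground principle: music up full under a montage, then dropped to a
# background level under a soundbite.
from pydub import AudioSegment

music = AudioSegment.from_file("music_bed.wav")
soundbite = AudioSegment.from_file("interview_bite.wav")

MONTAGE_MS = 8000                         # music is the foreground element for 8 seconds
music_up_full = music[:MONTAGE_MS]
music_under_bite = music[MONTAGE_MS:MONTAGE_MS + len(soundbite)] - 14  # background level

# A real mix would ramp the level down rather than step it; a short crossfade
# between the two sections approximates that here.
ducked_music = music_up_full.append(music_under_bite, crossfade=300)
mix = ducked_music.overlay(soundbite, position=MONTAGE_MS - 300)
mix.export("montage_and_bite.wav", format="wav")
```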

SUMMARY

Editing can be every bit the creative challenge that shooting is. The best shooters are the ones who learn how to edit. Just as a shot can sometimes be improved by moving the camera just a few inches, an edit can sometimes be improved by changing the timing by just a few frames. The goal of the editor is to take the material at hand and make an understandable presentation: an interesting and comprehensible story. While all editors share that goal, the different methods and varieties of solutions are as expansive as the number of editors themselves.

Whatever their individual styles, all editors share an understanding of both the technical and creative editing basics. In today’s NLE world, the technical basics include logging, creating an EDL, capturing audio and video clips, importing them into the software, trimming the heads and tails, sequencing the elements, layering additional audio and video tracks, adding effects (if appropriate), mixing the audio, rendering the final cut, outputting to a storage medium, and distributing the project by any of various means (e.g., physical media, online). Understanding the technical concepts of how audio and video are recorded and played back can also enhance an editor’s decision making. Some important technologies include scanning, fields and frames, tracking and skew (for tape-based recording), digital sampling, compression, tape-based and tapeless recording, and nonlinear editing.

The creative basics include how to sequence shots, maintain continuity, establish a story line, pace the edits, add postproduction effects (if appropriate), and edit sound. The guidelines for these aesthetic concerns have been honed by a century of editing, beginning with the first filmmakers. By understanding these creative issues, as well as the technical basics, and by following the guidelines put forth in this chapter, you can get started editing your project—the story you want to tell.
