Key Terms

Action Safe

Alpha Channel

Audio Sweetening

Audio Track

Chroma Key

Clip

Color Correction

Compositing

Cut

Cutaway

Dissolve

Effect

Fade

Freeze Frame

IN and OUT Points

Jump Cut

Keying

Log and Capture

Matte

Mixing

Nonlinear Editing (NLE)

Playhead

Render

Rough Cut

Scratch Disk

Scrubbing

Slug

Sound Effects (SFX)

Split Edit

Three-Point Editing

Timecode

Timeline

Title

Title Safe

Transition

Video Track

Voiceover (VO)

As we get newer and better compression algorithms for video and still-frame images, we’ll approach the next major leap in desktop computing: video on the desktop.

—John C. Dvorak, technology journalist and radio broadcaster (1995)

Chapter Highlights

This chapter examines:

  • The use of nonlinear editing (NLE) in video and audio production
  • The visual interface components in an NLE workspace
  • Strategies for project organization and asset management
  • Working with project media files and clips in a nondestructive editing environment
  • General concepts and principles related to the aesthetics of editing

From Linear to Nonlinear Editing

Video editing is the art of arranging static and time-based media assets into a linear form of presentation for the purpose of telling a story or communicating a message. The goal of editing is to produce a thoughtful narrative with a clear beginning, middle, and end. Audio editing is similar but with a focus on sound-based media elements only. Many of the basic editing techniques used in video postproduction are applicable to audio, particularly if you are using professional audio editing software such as Avid Pro Tools or Adobe Audition.

Motion picture editing began as a simple manual process. In the early days of film, an editor would review and edit footage using a hand-operated device (see Figure 14.1). Editing was performed by physically cutting film into sections, rearranging the pieces into a new linear order, and then splicing them together again using clear tape or another type of adhesive—hence the term linear editing. This method was also used in audio tape editing. You’ll find that many of the terms used in digital audio and video editing today have their origins in this era, particularly the razor blade tool—a reference to the traditional cutting instrument—and the term bins—a virtual desktop alternative to the physical bins, such as repurposed coffee cans, that film editors once used to hold segments or clips of film.

Figure 14.1 Left: In the early days, motion picture editing required manually viewing, cutting, and splicing pieces of film together on an editing table to make a movie. Right: Today, film and video editing is done virtually using an NLE program such as Adobe Premiere Pro, Apple Final Cut Pro, or Avid Media Composer. There’s no more mess and no more film left behind on the cutting room floor.

While film editing began as a physical process, video editing started out as an electronic process. Unlike film, videotape cannot be held up to the light to reveal a series of visible still frames. Video images are processed electronically, which means that viewing and editing have to be done using electromechanical players, recorders, and monitors. In the beginning, video editing was nothing more than systematic duplication. Material was copied segment by segment from one videotape to another—a process known as tape-to-tape or machine-to-machine editing.

Machine-to-machine editing worked pretty well for many years. The simplest machine-to-machine editing system involved five pieces of equipment: 1) a playback deck (VCR) attached to 2) a source monitor for viewing; 3) a record deck attached to 4) a program monitor for viewing the recording; and 5) an edit controller. The edit controller was connected to both decks, allowing the editor to control them remotely. To perform an edit, the editor would begin by “electronically” marking the beginning and end of a selected shot on the source deck. This involved shuttling the deck (moving or scrubbing the tape backward or forward), pausing it at a specific spot on the tape, and pressing physical buttons to mark the respective IN and OUT points. The editor would perform the same procedure to set an IN point on the record deck. Pushing one last button allowed the editor to preview or perform the edit. During the pre-roll phase, both machines would back up to a point three to five seconds before the designated IN points and then roll forward in unison. Once the IN point was reached on the record VCR, recording would begin and continue until the OUT point triggered the end of the edit.

This type of machine-to-machine editing configuration was sometimes called a cuts-only system because the only type of edit you could perform was a cut. If you wanted to create anything other than a straight cut, you had to use an A/B Roll editing system. The A/B Roll workstation had two source decks instead of one. The A-Deck was loaded with primary footage (interviews, scripted dialog, etc.) while the B-Deck contained secondary material (cover shots, establishing shots, cutaways, etc.) that would appear while the main person or character was talking on screen. By the way, this is where the familiar, yet sometimes confusing, term B-Roll originated.

The computerization of media has made machine-to-machine editing a thing of the past. The edit controller is no longer a physical device sitting on a tabletop, but rather software powered by a computer. Welcome to the age of nonlinear editing (NLE), where programs such as Adobe Premiere Pro, Avid Media Composer, and Apple Final Cut Pro are used by professional editors to craft their stories. For consumers, the marketplace offers a number of simple-to-learn NLEs such as Pinnacle Studio, Apple iMovie, and Windows Movie Maker. Professional NLE titles are designed for filmmakers and producers who require advanced tools to support more complex postproduction needs and workflows. Audio editing software comes in a similar range, from consumer-level software such as GarageBand and Audacity to professional-grade software such as Adobe Audition, Apple Logic Pro, or Avid Pro Tools.

Building Blocks of an Edited Sequence

Most professionally produced television shows or video programs contain basic components that, when edited together, tell a story, communicate a message, or stimulate an emotion. Raw footage for a project is acquired during the production phase. The amount of raw footage you have depends on the type of project you are working on and your skill in keeping the number of “bad takes” to a minimum. For a 1-hour documentary, you may have 10 hours of source material (interviews, B-Roll, etc.) to work with. We refer to this as a 10 to 1 shooting ratio (10:1) because the amount of source footage is 10 times greater than the length of the finished product. For a 60-second news package, an editor will often have much less footage to work with. Although the shooting ratio may remain the same at 10:1, given the shorter program length, the amount of actual footage drops to 10 minutes.
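
Shooting-ratio arithmetic is simple enough to check in a few lines of code. Here is a minimal Python sketch (the function name and layout are ours, purely for illustration) that reproduces the two examples above:

```python
def source_footage_minutes(program_length_min: float, ratio: float = 10.0) -> float:
    """Estimate the raw footage implied by a shooting ratio.

    The ratio is expressed as source:program, e.g. 10.0 for 10:1.
    """
    return program_length_min * ratio

# The two examples from the text, both shot at 10:1:
print(source_footage_minutes(60))  # 1-hour documentary -> 600 minutes (10 hours)
print(source_footage_minutes(1))   # 60-second news package -> 10 minutes
```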

The media assets used to build an edited sequence typically fall into four main categories: 1) pre-scripted elements; 2) unscripted or post-scripted elements; 3) music and sound effects; and 4) graphics.

Pre-Scripted Elements

In a pre-scripted movie or television program, a written script is completed prior to the start of production. Feature films, television newscasts, dramas, sitcoms, commercials, and educational videos are traditionally pre-scripted—leaving it up to the writers to determine the on-camera action, narration, or dialog ahead of time during the pre-production phase of the project. In such cases, the script serves as a master blueprint for directing the on-camera performances of professional actors or talent. In postproduction, the same script will assist the editor in arranging shots and scenes into the predetermined linear presentation.

Figure 14.2 Television news anchors are able to maintain eye contact with the viewer by reading their scripts via a teleprompter mounted to the front of the camera.

In narrative filmmaking, actors are expected to memorize their lines and rehearse them before the actual performance. Still, on-camera performances are rarely perfect, and directors will often ask talent to perform multiple takes of a scene before they are satisfied with the results. In some situations, such as in a game show, corporate video, or television newscast, scripted lines or dialog are performed using a teleprompter (see Figure 14.2). A teleprompter projects words onto an angled glass panel that’s mounted in front of a television camera lens. It allows the talent to read their script while looking directly into the camera. Teleprompters are used often in studio productions (news, sports, talk shows, etc.) but can also be used in remote field production. While a teleprompter can eliminate or cut down on rehearsal time and speed up production, its use doesn’t guarantee a perfect take. Reading from a teleprompter takes practice, and some people are naturally better at it than others.

Unscripted or Post-Scripted Elements

Nonfiction stories such as broadcast news packages and documentary films rely heavily on unscripted action (or candid footage) and post-scripted elements such as voiceovers or standups. Candid shots are unscripted, intended to capture the normal actions, speech, and sounds of real subjects and objects in their natural setting. Ideally, it means that nothing shot by the camera is intentionally scripted, staged, or performed. The camera is merely there to document reality. Whereas in scripted programs, shots are rather rigidly preplanned and constructed, here the camera is used to capture a story—as reflected through the actions, behaviors, thoughts, ideas, feelings, and expressions of everyday people, however ordinary or extraordinary they may be. Likewise, the written copy for voiceovers and standups is typically post-scripted—written after production has ceased and postproduction has begun in order to most accurately tell the story that unfolds. Four of the most common media assets used in editing non-fiction programs are: 1) sound bites, 2) B-Roll, 3) natural sound, and 4) voiceover.

Sound Bites

Whereas interviews are shot or recorded during the production phase of a project, sound bites are “constructed” by editors during postproduction by excerpting the most compelling moments of an interview. A sound bite can be as short as a single word or run several sentences long. The phrase talking head describes an on-camera interview segment that goes on too long, potentially causing the viewer or listener to lose interest and disengage from the message. The average length of a sound bite on network news programs is less than 10 seconds. As a rule of thumb, sound bites should be kept under 20 seconds. Usually, the shorter, the better!

To avoid having a long sound bite of, say, 30 seconds, an editor can divide it in half, producing two shorter bites that are roughly 15 seconds each. The editor can then insert a stand-up, natural sound pop (or snippet), music, voiceover, or other sound bite in between the two halves as a narrative bridge, transition, or cutaway. With this technique, the content of the 30-second bite still gets communicated but in a more dynamic way by intercutting it with other elements.

Sound bites from different people can be combined through editing into a sequence to form a continuous thought or interwoven narrative. The rapid intercutting of shorter sound bites in combination with other program elements is a great way to keep an audience engaged in the program. A skillful editor can even join noncontiguous sound bites in such a way as to make the person speaking sound as if they are saying something entirely different. At such times, the editor is a potentially powerful gatekeeper and shaper of ideas and, as such, needs to follow the ethical standards and practices of the profession and the organization he or she works for. Nowhere is this more important than in journalism, where reporters are expected to uphold the virtues of truth, fairness, and objectivity. An editor at Comedy Central likely has more latitude and creative freedom in editing (under the cover of parody and satire) than a news editor working at CNN.

B-Roll

B-Roll serves an important secondary role. It refers to video footage that’s used to visually support the spoken word narrative. Visual storytellers and scriptwriters are often encouraged to “write to the pictures,” which means making sure there is thoughtful harmony and synergy between what the viewer hears and what he or she sees on screen. For example, when a voiceover is added to a sequence, B-Roll is placed on top to visually illustrate the narrative content of the VO. Likewise, in an interview segment, B-Roll can be used, in part or in whole, to replace the shot of the person speaking, effectively converting an SOT (sound on tape, an on-camera sound bite) into a VO. B-Roll includes specialty shots such as cut-ins and cutaways (discussed later), as well as graphics, animations, and other footage designed to visually enhance the audio portion of the program. For example, as a reporter is speaking off-camera about a local bank robbery, the viewer sees a sequence of B-Roll shots depicting the scene of the crime. A sequence like this might include material obtained from surveillance cameras along with other shots filmed during the aftermath of the event. There’s a popular saying among shooters and editors that “you can never get enough B-Roll.” The more B-Roll an editor has to work with, the more options he or she will have for improving a story’s visual pace and structure.

Natural Sound

Natural sound (also known as nat sound or ambient sound) is the synchronized audio portion of a video recording that is acquired when shooting B-Roll. For example, while shooting B-Roll of a mountain waterfall, be sure to record the sound of the water cascading over the rocks. This is natural sound, and it comes in handy when editing! For example, in a scene where a hiker speaks on camera during a 15-second sound bite, the editor inserts B-Roll of the waterfall for the last 5 seconds. The waterfall shot will have a greater impact on the viewer if nat sound is incorporated in the mix. When mixing the two sources, the audio levels for the hiker’s SOT should be set to full (100%) and the audio levels for natural sound much lower (less than 50%). This will ensure natural sound stays in the background (where it normally belongs) and that it doesn’t compete or interfere with the main audio source.
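
The level relationship described above can be expressed numerically. Here is a minimal Python/NumPy sketch of a two-source mix, assuming both sources are mono sample arrays of equal length (the 1.0 and 0.4 gains mirror the 100 percent / under-50 percent rule of thumb; the exact background level is a judgment call):

```python
import numpy as np

def mix_sot_with_nat(sot: np.ndarray, nat: np.ndarray,
                     sot_gain: float = 1.0, nat_gain: float = 0.4) -> np.ndarray:
    """Sum two equal-length sample arrays with independent gains.

    sot_gain=1.0 keeps the speaker at full level; nat_gain below 0.5
    keeps natural sound in the background where it normally belongs.
    """
    out = sot_gain * sot + nat_gain * nat
    return np.clip(out, -1.0, 1.0)  # guard against clipping after the sum
```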

Voiceover (VO)

A voiceover (or VO) is a narrative device used to audibly support or describe the visual portion of a video program or segment. Recorded off-camera, a VO is the sound of the hidden announcer, storyteller, reporter, narrator, or host, who guides the audience through a program—filling in details while adding color and continuity to the linear presentation of visual information. Voiceovers are often used in conjunction with sound bites. In news reporting, the VO-SOT (pronounced VOH-soht) is a technique in which a television anchor reads a scripted story off-camera while related images and sounds appear on screen. VO-SOT-VO describes a fundamental editing technique in which voiceovers and sound bites alternate to form the narrative structure of a linear story.

Great Ideas

The Stand-Up

In television, a reporter stand-up or on-camera host segment can be used as an alternative to a voiceover or in combination with it. For example, a reporter may choose to use a stand-up at the beginning and end of a story and voiceovers throughout the rest of the package. The best stand-ups add meaningful visual value to the story by providing something other than a good look at the reporter or host. A stand-up can be used to perform a visual demonstration, stress a key point, or take the viewer on a brief walking tour.

Figure 14.3 A TV reporter delivers an on-camera stand-up.

Music and Sound Effects

Music and sound effects are usually inserted into the timeline beneath the primary audio tracks containing sound bites (SOTs), VOs, and nat sound—giving the editor independent control over each track when mixing levels or adding audio processing effects such as compression and EQ. They should be imported into the NLE project bin in an uncompressed file format such as WAV or AIFF. If you import a compressed audio file, particularly a lossy file such as MP3, you will end up recompressing the audio when you export it at the end of the project, thus compromising its fidelity. Professionals usually avoid working with MP3 audio assets in video editing. MP3 is a consumer distribution format that is heavily compressed and technically inferior to WAV or AIFF.
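
When a piece of music exists only as an MP3, a common workaround is to decode it to WAV once, before importing, so the file is not put through a second lossy pass at export. Decoding cannot restore quality already lost, of course. A sketch using Python to drive the ffmpeg command-line tool (assumes ffmpeg is installed; the file names are illustrative):

```python
import subprocess

def decode_to_wav(mp3_path: str, wav_path: str) -> None:
    """Decode a lossy MP3 to uncompressed PCM WAV before import.

    ffmpeg infers the WAV container from the output file extension.
    """
    subprocess.run(["ffmpeg", "-i", mp3_path, wav_path], check=True)

decode_to_wav("theme_music.mp3", "theme_music.wav")
```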

Graphics

A lower-third is a television graphic used to identify the name of the person appearing or speaking on screen (see Figure 14.4, top). Sometimes the person’s title, or, as in this case, email address, is included as well. As the name suggests, a lower-third is usually positioned in the lower-third area of the frame. Lower-thirds are sometimes referred to as supers because they are typically superimposed over a background video source or graphic.

Figure 14.4 Top: A lower-third is superimposed over the background video to identify both newscasters. Bottom: Full-screen graphics, such as this one, fill the entire video frame.

A lower-third is placed on screen through a process called keying. Keying replaces the transparent background region of a title graphic (defined by its alpha channel) with video from another source. This technique is similar to the superimposition of a weather map behind the weathercaster during the evening news. The weathercaster usually stands in front of a green or blue wall. Using a technique called chroma keying, the colored wall is replaced with video of a weather graphic. For best results, the weathercaster cannot wear clothing containing the same color as the wall. Can you imagine what that would look like? Like the green or blue wall in a TV studio, the alpha channel serves as a mask, allowing a title to be seamlessly merged with the video beneath it.
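
The masking role of the alpha channel corresponds to the standard “over” compositing operation. A minimal NumPy sketch, assuming the title and background are floating-point RGB images in the 0-1 range and the title carries a per-pixel alpha channel of the same width and height:

```python
import numpy as np

def key_title_over(title_rgb: np.ndarray, alpha: np.ndarray,
                   background_rgb: np.ndarray) -> np.ndarray:
    """Composite a title over background video using its alpha channel.

    Where alpha is 1 the title pixel wins; where alpha is 0 the
    background shows through, the same masking role played by the
    green or blue wall in chroma keying.
    """
    a = alpha[..., np.newaxis]  # broadcast the mask across R, G, B
    return a * title_rgb + (1.0 - a) * background_rgb
```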

Most NLE software programs make it relatively easy to create and insert a simple title graphic. However, when you want something more complex, you will want to turn to a graphic editing program such as Adobe Photoshop (see Figure 14.5) or to a motion graphics program such as Adobe After Effects. You can then use the NLE to import the graphic into the project as a standalone media asset. NLE programs recognize most common graphic file formats including JPEG, GIF, TIFF, and PNG. In fact, professional editing software usually allows you to import native Photoshop files (PSDs) directly into the project. When the NLE opens a PSD file, it converts each layer to a corresponding video track, allowing the editor to have independent control of each layered element in the timeline.

Television graphics come in all shapes, sizes, and formats. For example, an over-the-shoulder graphic appears at head level just to the left or right of a television anchor or host. Crawls scroll from right to left across the bottom of the screen with the latest stock information or school closings while credit rolls scroll vertically, identifying members of the cast and crew at the end of a television show or movie. A full-screen graphic fills the entire screen while a bug (a tiny network or station logo) takes up very little real estate in the corner of the screen (see Figure 14.4).

Figure 14.5 Adobe Photoshop includes several film and video presets. For an HD television program choose the one shown here. Film and video presets include visual guides to show the action and title safe areas.

Source: Adobe product screenshot reprinted with permission from Adobe Systems Incorporated.

Figure 14.6 Adobe Premiere Pro includes this title generator for composing on-screen graphics. Notice the title and action safe guidelines displayed in the design window to aid in the placement of text and graphics.

Source: Adobe product screenshot reprinted with permission from Adobe Systems Incorporated.

Designing Graphics for Television

When designing television graphics, keep in mind that viewers may be watching the video on a low-resolution screen or even in a small window on a computer monitor or digital device. Viewers have a short time in which to process visual information that flashes on the screen for only seconds at a time. Follow these tips when planning your graphics:

  • Avoid clutter. Remember, less is more. Don’t pack more information into the visual screen space than is absolutely necessary. Avoid long sentences and paragraphs. Instead, use short phrases or bulleted text whenever possible.
  • Use thick sans serif fonts. Select high-density fonts with a thick stroke. Script fonts and other light stroke fonts do not translate well to video, where much of their detail is lost, making them hard to read and frustrating for the viewer.
  • Use large font sizes. Use font size to denote visual hierarchy, but keep the sizes relatively large. It’s better to use a readable font and break up information across several screens than to cram too much information into a single title screen or graphic.
  • Strive for good contrast. If the background is dark, the foreground text should be light. Separate the text from the background by using drop shadows, stroke, light, and texture. Try using a high-contrast background graphic or colored box behind text instead of superimposing text directly over video and running the risk of making it hard to read.
  • Use video-friendly colors. White and yellow are popular colors for screen text because they are easy to read against a dark background. Black also works quite well when placed over a white or light-colored background.
  • Mind margins and white space. Stay within the title safe area of the frame—the area where titles will display reliably—and don’t forget to leave white space. Most NLEs will show you the title safe area of the screen, along with lines marking the action safe area. Older television sets didn’t actually show the entire picture; a mask usually covered part of the picture tube, and some pixels along the outer edges were lost during broadcast transmission. While newer flat panel displays don’t have this problem, it’s still a good idea to pay attention to the positioning of titles and action within the frame, as doing so helps ensure there is sufficient white space (a quick safe-area calculator follows this list).
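
Safe-area margins are easy to compute. A minimal sketch using the traditional figures of 90 percent of the frame for action safe and 80 percent for title safe (conventions vary by broadcaster, so treat these percentages as defaults rather than a standard):

```python
def safe_area(width: int, height: int, fraction: float) -> tuple:
    """Return (x, y, w, h) of a centered safe rectangle."""
    w, h = int(width * fraction), int(height * fraction)
    return (width - w) // 2, (height - h) // 2, w, h

# Traditional defaults for a 1920 x 1080 HD frame:
print(safe_area(1920, 1080, 0.90))  # action safe -> (96, 54, 1728, 972)
print(safe_area(1920, 1080, 0.80))  # title safe  -> (192, 108, 1536, 864)
```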

Continuity Editing

Combining shots in an edited sequence should be done in such a way that the linear presentation of visual and aural elements in a timeline is perceived as natural and coherent to the viewer. Bad edits can be disruptive to the narrative flow of a story, causing viewers to disconnect, get confused, lose their place, or stop watching altogether. There’s a saying that “the best edits are the ones you never see.” In other words, good edits are so seamless and logical that the viewer doesn’t even notice when they occur. And this is what editors often hope to achieve. Continuity editing is a term used to describe a wide range of practices and techniques that editors use to achieve smooth and natural transitions from shot to shot over time. When edits are performed well, the narrative continuity of the story is enhanced—leading to a more satisfying and enriching experience for the viewer (see Figure 14.7).

Cut-Ins

The cut-in is a popular editing technique used in continuity editing. A cut-in directs the viewer’s attention to a related object or alternative viewpoint within the current scene. For example, in a scene showing a woman stepping into an elevator, the sequence might begin with a wide shot of the elevator as the woman enters through the opening doors. She turns, glances down, and reaches out her hand, at which point we cut to a close-up shot of her finger pushing the button for the 10th floor. The cut-in of her finger may be framed from her point of view as if we are looking at it through her eyes or through the eyes of someone standing nearby. A cut-in is often shot as a close-up that guides the viewer’s attention to a related element within the frame. Be careful when you are cutting your scene together. Avoid cutting between two shots that look almost the same. In doing so you run the risk of creating a jump cut—where the position of the main subject shifts suddenly on screen for no apparent reason. You’ll see this error sometimes in news, when the video editor joins two non-adjacent sound bites together in the timeline. This is one of the reasons cut-ins (and cutaways) are so important. Editors often use them to cover or hide a jump cut by positioning them over a transition point on the video track above.

Cutting on Action

Cutting on action is a technique editors use to match continuous action in a scene as it unfolds across two sequential shots. In our previous example, the woman on the elevator performs a simple action by reaching out her hand to push a button on the control panel. Cutting on action means the editor will cut from the wide-angle view to the close-up (or cut-in shot) while the hand is in motion. Typically, the best transition points occur at the apex of movement, when the action is most intense, and before the subject or object begins to decelerate or stop. The editor must ensure the last frame of the action in the first shot matches the first frame of the continuing action in the second shot. For example, if the right hand is used to extend the finger in the first shot, the right hand should be shown completing the action in the second shot. If the actor uses a different hand in the second shot, a continuity error will occur, drawing attention to the edit and disrupting the viewer’s immersion.

Figure 14.7 The film Chariots of Fire won the Oscar for Best Picture in 1981. The movie included a one-minute scene, pictured here with still images, of two athletes competing in the Great Court Run. Each runner’s goal was to complete one lap around the main college square before the courtyard clock finished striking 12 (roughly 43 seconds). The edited scene comprised more than 30 shots from multiple camera angles. Through continuity editing, the director was able to portray the scene in real time with continuous action, an amazing feat considering the scene was likely shot using only one camera. Notice how many cut-ins are interspersed with shots of the runners.1

Great Ideas

Multi-Camera Editing (or Multi-Cam)

Multi-cam is a production and editing technique whereby shots of an event or scene are recorded simultaneously from multiple camera sources and then assembled in editing to resemble a live switched event or to produce a visually fast-paced sequence. Each iso (isolated) camera records a unique view of the action at a different angle and distance from the subject. After the shoot, footage from each camera is imported into the editing software, where the video and audio in each clip are then synchronized to a common master clip or sync source. In a professional multi-cam production, cameras are often synchronized to a timecode generator that produces a common reference signal that each camera records during the shoot. This is the most foolproof method for synchronizing multi-cam clips in postproduction. Unfortunately, many less expensive cameras are not designed to accept an external timecode signal. In that case, multi-cam producers must ensure that each camera records live audio of the event as it occurs. In a scene where there is no dialog or dominant sound source, you can use a clapboard to mark the beginning of a recording with a visual reference and an audible sound stamp. Professional NLE software—such as Adobe Premiere Pro, Avid Media Composer, and Apple Final Cut Pro—that supports multi-cam editing also includes tools for automated audio-based syncing of iso footage obtained from multiple cameras (see Figure 14.8). Once all the clips have been synchronized, you can use your NLE’s multi-cam editing mode to cut or transition from source to source (or clip to clip) in real time as though you were switching multiple cameras during a live event. The transition points between shots can then be tweaked and refined manually as desired.
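
Audio-based syncing generally works by cross-correlating each iso track against a reference and sliding the clip by the offset at the correlation peak. The NumPy sketch below illustrates the core idea only; shipping tools are far more robust and use FFT-based correlation for speed. Both inputs are assumed to be mono arrays at the same sample rate:

```python
import numpy as np

def sync_offset(reference: np.ndarray, iso: np.ndarray) -> int:
    """Return how many samples the iso track lags the reference.

    A positive result means the same sound occurs later in the iso
    recording; sliding that clip earlier by the offset brings it into
    sync. Note: np.correlate is O(n^2), fine for a sketch but slow
    for long recordings.
    """
    corr = np.correlate(iso, reference, mode="full")
    return int(np.argmax(corr)) - (len(reference) - 1)
```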

Figure 14.8 Multi-cam editing in Adobe Premiere Pro. Six cameras were used to record this live band performance. A switched program feed was recorded on-site along with six ISO recordings (one for each camera). In Adobe Premiere Pro, the six ISO camera recordings were imported into a multi-cam editing session and synchronized to the original switched recording of the event.

Source: Adobe product screenshot reprinted with permission from Adobe Systems Incorporated.

Project Organization and Asset Management

NLE Project Folder

At the end of production you will likely end up with a fair bit of footage and other project assets, and you need to know how to effectively organize and manage them. It helps to understand how your NLE software stores and saves project files. As a general rule, at the beginning of every new project, it’s a good idea to create a master project folder to serve as the container for all the project files and assets associated with it. The project folder should contain the native project file generated by your NLE software and numerous topical subfolders for storing media content such as video, music, sound effects, still images, titles, and so forth. In addition, NLE applications will often create additional subfolders for storing video and audio capture files, render files, and backup copies of the master project file. Again, each project you are currently working on should have its own designated project folder (see Figure 14.9).

Figure 14.9 Left: Upon starting a new project in Adobe Premiere Pro, the editor is prompted to identify the scratch disk locations for the captured video and audio files, preview files, and backup files. All the media assets and project files associated with a video project should be stored in a master project root folder. This way, you will never lose track of files associated with your project. Right: Based on the information you provide in the scratch disk window, the program will create physical folders and files on your drive as shown.

Source: Adobe product screenshot reprinted with permission from Adobe Systems Incorporated.

Adobe Premiere Pro and the legacy version of Apple Final Cut Pro use the term scratch disk as the name of the interface component for setting up the name and location of the project folder and subfolders. Hence, the phrase “setting your scratch disks” is synonymous with linking to or creating the project folder at the beginning of an editing session. In Avid Media Composer, the editor uses the “media creation settings” window to set up the name and location of the project folder. Similarly, Avid Pro Tools refers to each project as a “session” and requires the user to create a “session folder” at the start of each new project. Whatever the name or approach, be sure you understand how to properly designate the name and location of your project folder. This step is critically important and, when done well, will save you a lot of time and potential frustration during the editing process. Also, consider using an external hard drive as your scratch disk instead of the internal drive that runs your OS and NLE software. Using a dedicated external drive for your video project assets can help speed up performance and allow you to move your project easily from one computer to another. Once you’ve created the project folder, be sure not to move it or change its name. This goes for all the content within the project folder as well. Doing so can potentially confuse the NLE program, causing it to lose track of the media files and assets it needs to properly display and run your edited sequence.
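
Whatever your NLE calls this step, the folder discipline described above can be scripted so every project starts with the same structure. A minimal Python sketch (the subfolder names and drive path are illustrative, not any application’s required layout):

```python
from pathlib import Path

SUBFOLDERS = ["Footage", "Audio", "Music", "SFX", "Stills",
              "Titles", "Exports", "Project Files"]

def make_project_folder(root: str, project_name: str) -> Path:
    """Create a master project folder and its topical subfolders."""
    project = Path(root) / project_name
    for name in SUBFOLDERS:
        (project / name).mkdir(parents=True, exist_ok=True)
    return project

# An external scratch drive, per the advice above (path is hypothetical):
make_project_folder("/Volumes/EditDrive", "waterfall_documentary")
```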

Figure 14.10 The Adobe Premiere Pro interface is divided into four main regions. 1) The Source Monitor is used to preview media clips and to trim video and audio clips prior to inserting them in the timeline. 2) The Program Monitor displays video from the timeline during playback. 3) The Project Panel displays bins and image thumbnails for each asset associated with the project (video, audio, titles, sequences, etc.). Double-clicking on a clip loads it into the source monitor. 4) The Timeline is where you assemble your project—insert and order your clips; add titles, effects, and transitions; mix audio levels; render frames; and so on.

Source: Adobe product screenshot reprinted with permission from Adobe Systems Incorporated.

NLE Project File

The project file is a proprietary data file used for keeping track of every detail associated with an NLE project. Once an editing project is created, it can be revisited in later sessions by double-clicking on the project file icon. The project file can be thought of as a set of instructions for playing media clips back in real time as an edited sequence.

Media Files and Media Clips

Media files are the raw project assets that are created or acquired prior to the start of an editing session. These include the actual video and audio files captured from tape or through direct transfer from a video camera, memory card, or other digital source. Media files are typically the largest file assets associated with an editing project. For example, digital video (DV) requires roughly 1 GB of storage for every five minutes of recorded footage (about 200 MB per minute). High-definition video files consume an even larger amount of digital real estate, which is why high-capacity hard drives are so often used in editing.

Figure 14.11 Top: Red is generally a bad color to see on a media clip or sequence. Here, in Adobe Premiere Pro, it signifies that a media file has gone offline. In other words, the media file associated with a media clip in the timeline has either been deleted from the project’s hard drive or moved to a new location—thus breaking the link that was established when the media was first imported. Bottom: The Link Media panel in Premiere Pro is used to locate and re-link the missing asset.

Source: Adobe product screenshot reprinted with permission from Adobe Systems Incorporated.

Tech Talk

Intermediate Formats

A nonlinear editing system can experience problems and slowdowns when decoding highly compressed interframe video formats such as HDV, AVCHD, or H.264 on the fly. Depending on the speed of your system and NLE software, it is possible to edit interframe streams natively. However, to reduce the likelihood of playback artifacts or rendering slowdowns, editors will often convert interframe video to an intermediate format prior to editing. For example, in legacy versions of Final Cut Pro (version 7 and earlier), HDV footage is automatically upconverted to ProRes 4:2:2 during the log and capture process. ProRes is Apple’s intermediate codec for editing full-bandwidth (1920 × 1080) HD video. It is comparable to DNxHD, the intermediate codec used in Avid Media Composer. Like Avid, Adobe Premiere Pro includes DNxHD as an intermediate format. The intermediate codecs mentioned here use intraframe compression, which is easier to decode but produces a transcoded file that will be considerably larger than the original source footage. By transcoding to an intraframe format for editing, playback and rendering artifacts are kept to a minimum and the editing process will typically go more smoothly.

When a media file is imported into an active NLE session, a clip is created and added to the media browser or project bin. A clip is a relatively small data file that “points” to a larger underlying media file that’s stored on the hard drive. It functions as an alias or virtual representation of the real thing, and for this reason, it is sometimes called a pointer file. A clip can be placed in the timeline, trimmed, cut, and deleted without changing or destroying the actual media file it is linked with. A single clip can be added to the timeline sequence multiple times without adding significantly to the physical size of the project file. Each instance of a clip is a standalone digital asset that keeps track of the pointer data for a particular event in the edited sequence. You can delete a media clip from the bin without deleting the media file. While deleting it makes the clip icon disappear from view in the project window or browser, the actual media file is safe and sound in its original location on the hard drive. A media file can be reconnected with a media clip at any time through re-linking or by re-importing it into the project.
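
The pointer relationship between a clip and its media file can be modeled in a few lines. In the sketch below (class and field names are invented for illustration), trimming creates new pointer data while the media file on disk is never touched, which is precisely what makes NLE editing nondestructive:

```python
from dataclasses import dataclass

@dataclass
class Clip:
    """A lightweight pointer into a much larger media file on disk."""
    media_path: str  # where the actual media file lives
    in_frame: int    # first frame of the trimmed selection
    out_frame: int   # last frame of the trimmed selection

    def trimmed(self, new_in: int, new_out: int) -> "Clip":
        """Return a re-trimmed instance; the media file is untouched."""
        return Clip(self.media_path, new_in, new_out)

interview = Clip("Footage/interview_01.mov", in_frame=0, out_frame=4500)
bite = interview.trimmed(120, 480)  # a 12-second sound bite at 30 fps
# Both clips point at the same unmodified file on the hard drive.
```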

Capture and Render Files

Capture files are time-based media files that are created when footage is ingested into the computer through a connected camera or videotape recorder. The video and audio tracks recorded on tape exist as linear streams of digital information that are read and rendered in real time as the tape is played back. Capturing involves converting these time-based streams (or video and audio tracks) into a file-based format that can be read and processed by the NLE program. Once captured, a media file can be easily moved and copied, allowing it to be used in other projects or saved in a digital archive for retrieval at a later time.

Logging and Batch Capturing

With the logging and batch capturing method, an editor will preview footage prior to capturing. He or she works through the footage one scene at a time, stopping at the beginning and end of each good take to set an IN and OUT point. In the logging fields, the editor can designate the tape or reel number, clip name, scene and take number, shot description, camera angle, and other notes or comments as desired. This information will be permanently attached as metadata to the captured media file. After viewing and logging the source footage, the editor will select the batch capture option. At this point, the NLE will proceed to capture each of the scenes specified in the editor’s logging notes. The bad takes are skipped and the total capture time is cut down significantly, especially for big projects with hours and hours of footage. Batch capturing allows the editor to automate the capturing workflow, saving time and hard drive space, while tagging clips with descriptive information that will stay attached to the footage as long as it remains in its current digital form.

Figure 14.12 This is the log and capture window in Adobe Premiere Pro. User-provided metadata can be attached to each imported clip using the data fields on the right.

Source: Adobe product screenshot reprinted with permission from Adobe Systems Incorporated.

Rendering

From time to time during every editing session, the software will need to create new video files in response to the placement of a transition, effect, filter, title, or multilayer composite. This process is called rendering (see Figure 14.13). For example, when a one-second dissolve is attached to two adjacent clips in the timeline, a one-second video clip must be rendered to represent the visual effect.

Figure 14.13 Top: In Adobe Premiere Pro, a color-coded render bar appears near the top of the timeline to indicate whether or not the underlying clips have been rendered. When the bar is yellow or red, it means the frames associated with a particular clip, transition, or effect have not yet been rendered. Simple effects can be previewed in real time without rendering while more complex ones will need to be rendered first in order to view them smoothly or even at all. Bottom: The render bar turns green once rendering has been successfully completed.

Source: Adobe product screenshot reprinted with permission from Adobe Systems Incorporated.

You can render clips as you go or wait until the end of a project and render everything at once. The more complex the effects and transitions, the longer rendering will take. Depending on the speed of your computer, the NLE can often play back unrendered clips in real time. However, the display quality may be poor because the program has to process the transition or effect on the fly. It’s a good idea to stop occasionally throughout a session to render all the unrendered clips in the sequence.

Flashback

NLE Pioneers

Avid Technology is one of the oldest developers of nonlinear editing software. Its flagship program, Media Composer, was released in 1989 and continues to be one of the most recognized NLE applications used in the motion picture and television industries. Adobe’s NLE solution is Premiere Pro, a descendant of Adobe Premiere, which was first released in 1991. Adobe’s chief competitor, Macromedia, was instrumental in developing a rival NLE called KeyGrip, a product designed to work natively with Apple QuickTime. KeyGrip was renamed Final Cut and eventually sold to Apple, which released it commercially as Final Cut Pro in 1999. While there are many professional NLE solutions to choose from, Avid Media Composer, Adobe Premiere Pro, and Apple Final Cut Pro have stood the test of time and remain three of the most recognized names in nonlinear video editing.

Components of an NLE Interface

Project Panel

The NLE project panel goes by different names. Look for a window or panel group that lets you browse the project media files stored on a local or attached hard drive, the bins containing your media clips, and tabs or sections granting access to clip and sequence information, effects, and global project or session properties. Think of the project panel as command headquarters—the main portal and information vault for everything related to your project. In many respects, it serves a purpose very similar to the OS file browser on your computer—the Finder in Mac OS X or File Explorer on a Windows PC—but from within the application. Through it you can locate and import files into your project, open them in the source monitor, drag them to the timeline, search for and re-link missing clips, and keep track of where everything associated with your project is stored.

Figure 14.14 The Media Browser in Adobe Premiere Pro allows you to locate, view, import, and re-link media files stored on a physical hard drive from within the application.

Source: Adobe product screenshot reprinted with permission from Adobe Systems Incorporated.

Timeline

The timeline displays your edited sequence and the position of audio and video clips that are arranged and kept in order on linear regions called tracks. When a video clip is added to the timeline, the video portion of the clip is placed on a video track while the audio portion is inserted beneath it on an audio track. Audio tracks are normally grouped in linked stereo pairs. By default, the first track (or left channel) is sent to the left speaker and the second track (or right channel) is output to the right. Some NLEs combine the left and right audio channels into a single stereo track. When a synchronized video clip (such as an SOT) is placed on the timeline, it usually occupies three adjacent tracks (one video track and two audio tracks). If a single microphone was used in the recording, there will be only one channel of associated audio and thus only one corresponding audio track. A mono track can be easily converted to a dual mono pair by the NLE, sending it to both the left and right speakers.

When editing a video clip containing synchronized audio, you will often want to apply the same action simultaneously to both the video and audio portions of a clip. For example, when a clip is split in two using the splice or razor blade tool, the edit will typically affect all three tracks at once. Likewise, when you reposition a video clip within the timeline, the linked video and audio segments will travel together in tandem, allowing them to remain in sync. Problems can occur when video and audio become unsynchronized. Even a slight shift of only a few frames can affect lip-synching in an SOT, causing you and the audience to squirm in momentary discomfort.

Sometimes an editor will choose to unlink a clip in order to control the video and audio separately. For example, you may want to perform a split edit by assigning different IN and OUT points to the video and audio clips. This technique is commonly used when editing sound bites. With a split edit, the audience might see a person talking for one to two seconds before they actually hear him or her. Another use of this technique would be to show B-Roll during the first five seconds of a sound bite before cutting to a synchronized headshot of the person speaking. Split edits are performed routinely in editing and are extremely useful. Just remember to re-link the clips when you’re finished working with them to prevent accidentally unsynchronizing them later.

In addition to synchronized clips, editors deal with a variety of standalone media assets. Digital images, for example, contain no corresponding audio. They are also static, meaning they have no duration (length), as compared to time-based media assets. Static clips like images, titles, backgrounds, and slugs (a solid black video frame) are automatically converted to time-based video clips when imported or created by the NLE software to allow them to be extended within the timeline to any length the editor desires.

Video Compositing and Audio Mixing

Compositing is the process of combining two or more video tracks together to form a new image or visual effect. Video tracks are used in much the same way as layers in Photoshop to segregate clips into discrete editable regions, thereby allowing the editor to maintain individual control over the settings for each asset and its position in the timeline. In the timeline, tracks are viewed from the top down, meaning that a clip placed on video track 2 (V2) will partially or completely obscure the view of a clip placed beneath it on video track 1 (V1). Let’s look at a few examples.

Example 1: A simple two-track composite: The editor begins by placing an SOT on V1, the first video track in the timeline. Next, he inserts a lower-third title graphic above the SOT on V2 (see Figure 14.15). At this point, the program monitor will display a composite image showing the title superimposed over the top of the person speaking. Because each element in the composite image resides on its own track, it can be edited independently without affecting other clips in the timeline.

Figure 14.15 Example 1: Adobe Premiere Pro is used to create a two-track composite image comprised of a video clip (V1) and lower-third title graphic (V2).

Source: Adobe product screenshot reprinted with permission from Adobe Systems Incorporated.

Figure 14.16 Example 2: The timeline shown on the left includes five video tracks containing media clips that are aligned to produce the composite graphic on the right. Each track contains one of the five elements used in the composite.

Source: Adobe product screenshot reprinted with permission from Adobe Systems Incorporated.

Example 2: A five-track composite: In this example, a composite image is formed by vertically aligning five clips on adjacent tracks in an edited sequence (see Figure 14.16). First, a gradient background is inserted on V1. Next, a close-up shot of the host is inserted on V2 and then scaled down and positioned on the right side of the frame. This step is repeated for the wide shot, which is placed on V3. Finally, two text graphics are positioned on tracks V4 and V5 respectively. The composite image shows all five elements co-residing within the frame. Incidentally, the five elements do not have to appear or disappear at the same points in the timeline. The editor can choose a different IN and OUT point for each element. The possibilities are endless.

Mixing is the audio version of compositing. With mixing, multiple audio clips can be combined in an unlimited number of ways to form a complex aural experience. For example, in a motion picture film, an action sequence often contains multiple tracks of dialog, background sounds, music, and sound effects (SFX). Again, let’s consider a common example.

Example 3: A four-track stereo mix: Here, we begin a sequence by placing an SOT on the timeline. The audio portion of the SOT resides on the stereo pair A1 and A2 (see Figure 14.17). Music is then added to A3 and A4, beneath the tracks of the person speaking. In this example, the primary, or foreground, audio is being produced by the SOT. The editor wants it to stand out above the music and sets the SOT tracks to full (100%). Music is a background element and as such should not compete for the audience’s attention. The editor adjusts the music tracks to a lower level that’s more appropriate for a background source. As additional sources are added to the sequence, the editor will mix them accordingly until he or she achieves the intended balance and effect called for.

Figure 14.17 Example 3: The audio track mixer interface in Adobe Premiere Pro is used for mixing audio from the SOT on A1 and A2 with music on A3 and A4. The editor needs to set the level of each stereo track pair independently to achieve an aesthetically pleasing mix. Since the SOT is the main element, tracks A1 and A2 are set to normal levels (full). Notice how the music is set much lower to keep it from competing with the main spoken word audio.

Source: Adobe product screenshot reprinted with permission from Adobe Systems Incorporated.

Source and Program Monitors

The NLE interface contains two virtual television monitors. The source monitor (sometimes called the preview monitor) sits on the left and is used for reviewing project assets and trimming (shortening) and adjusting them before inserting them as clips on the timeline. Double-clicking on a clip in a bin or on the timeline typically opens it in the source monitor, where edits and changes can be performed. The source monitor can only hold one clip at a time. When you are done working on one, simply double-click on another clip to swap them out. The program monitor is positioned to the right and is linked to the timeline. As you play back or scan through an edited sequence in the timeline, the program monitor displays its contents. Likewise, as you scrub the program monitor playhead back and forth, the companion playhead in the timeline moves left and right in perfect unison. The source and program monitors contain a variety of identical virtual controls for viewing, playing, and marking clips.

Transport Controls

The transport controls act in much the same way as the buttons on a VCR or DVD player, allowing you to scan the contents of a time-based media clip. The editor has access to familiar functions like play, pause, stop, rewind, and fast-forward. The transport controls can be activated through keyboard input as well. Experienced editors prefer using keyboard shortcuts because they’re generally much quicker to execute than virtual controls. For example, most NLEs allow you to press the keys J, K, and L on your keyboard to scrub through a clip in reverse (J), to pause (K), and to scrub forward (L). Jog and shuttle are terms used to describe the speed at which you scrub (move) through a clip. Jog moves the playhead a few frames at a time or in slow-motion while shuttle advances it rapidly in either direction.

Playhead

The playhead defines your position in a time-based clip. You can scrub slowly or rapidly through a clip by dragging the playhead horizontally in either direction. Audio scrubbing allows you to hear the audio portion of a clip as you move the playhead back and forth and is a useful technique for locating sound artifacts or verbal stutters you want to eliminate or isolate in editing (such as um, uh, ah, etc.). The source monitor playhead allows you to scrub through the currently loaded clip. The program monitor playhead allows you to scrub through the entire length of the timeline. Because the program monitor is linked directly to the timeline, moving either playhead causes the other one to move in tandem.

Timecode Fields

Every frame of video is addressable via timecode. Whenever you mark an IN point or OUT point, the location is stored as a numerical series of eight digits denoting hours, minutes, seconds, and frames. A timecode display of 01:18:54:27 indicates a frame location of 1 hour, 18 minutes, 54 seconds, and 27 frames. This format is used to denote the duration of marked clips, transitions, and other program assets. For example, a one-second dissolve would appear as 00:00:01:00 in the duration field.
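
Because every timecode is just a frame count in disguise, duration math reduces to integer arithmetic. The Python sketch below makes this concrete; it assumes a constant 30 fps with non-drop-frame timecode, whereas real NLEs also handle drop-frame and other frame rates for you.

```python
# Timecode arithmetic sketch, assuming 30 fps non-drop-frame timecode.
FPS = 30

def tc_to_frames(tc: str) -> int:
    """Convert an HH:MM:SS:FF string into a total frame count."""
    hh, mm, ss, ff = (int(part) for part in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * FPS + ff

def frames_to_tc(total: int) -> str:
    """Convert a total frame count back into HH:MM:SS:FF."""
    ff = total % FPS
    ss = (total // FPS) % 60
    mm = (total // (FPS * 60)) % 60
    hh = total // (FPS * 3600)
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"

print(tc_to_frames("01:18:54:27"))   # 142047 frames into the program
print(tc_to_frames("00:00:01:00"))   # a one-second dissolve spans 30 frames
print(frames_to_tc(142047))          # round-trips back to 01:18:54:27
```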

Great Ideas

NLE Keyboard Shortcuts

Keyboard shortcuts like the ones listed here are relatively common across NLE programs and help speed up the editing workflow.

Keyboard Shortcut Action
Space bar Starts or stops the playhead (Play/Pause)
I key Marks the IN point
O key Marks the OUT point
J key Plays backward—pressing multiple times speeds up scrubbing
K key Stops playing
L key Plays forward—pressing multiple times speeds up scrubbing
Home key Moves playhead to the beginning of the sequence
End key Moves playhead to the end of the sequence
Left/Right arrow Nudges the playhead one frame to the left or right
Up/Down arrow Moves the playhead forward or backward to the first or last frame of an adjacent clip

Image Frame

The largest area of a monitor control interface is reserved for the picture data. What you can do in this region is largely dependent on the features available to you inside your NLE software. Typically, you will be able to transform the size and aspect ratio of the image through scaling, cropping, and wireframe controls. A wireframe acts like a bounding box that allows you to freely transform video within the frame—whether it’s rotating the image, changing the playback speed, adding a visual effect or filter, or otherwise altering a clip’s movement or appearance.

When an audio asset is loaded into preview, the visual interface looks different. Instead of a video image you’ll see an audio waveform. A waveform is a visual representation of the amplitude signature of the audio clip across time. The height of the waveform indicates the volume level. By looking at the waveform, you can often tell whether audio was recorded at an appropriate level (see Figure 14.18). Undermodulated audio (audio that is too soft) will have a short waveform height or perhaps none at all—simply a flat line. Overmodulated audio (audio that is too loud), which may be distorted, will have an intense waveform pattern that extends from the bottom of the track to the top (or from the “floor” to the “ceiling”). A waveform’s fluctuations can provide the editor with a visual representation of timing and rhythm, which comes in handy when setting IN and OUT points. In a music clip, the waveform can reveal the beat structure and tempo of a selection, providing visual cues for determining transition points. In addition to displaying waveforms on the timeline, audio and video editing software includes a VU (volume unit) meter that lets you monitor the loudness of a source objectively in decibel units.

Figure 14.18 When audio is loaded into the source monitor in Adobe Premiere Pro, it appears on screen as a virtual waveform. Top: Visually, this waveform suggests that the audio levels are properly set. Bottom: This waveform looks relatively flat, suggesting the recording was severely undermodulated.


Source: Adobe product screenshot reprinted with permission from Adobe Systems Incorporated.
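
If you are curious how a meter turns raw samples into those decibel readings, the sketch below applies the standard peak-level formula, 20 × log10(amplitude), to normalized samples. The sample values and warning thresholds are hypothetical, chosen only to illustrate the undermodulated case.

```python
import math

# Peak-level sketch; sample values and thresholds are hypothetical.
samples = [0.02, -0.015, 0.03, -0.01]   # normalized samples, -1.0..1.0

peak = max(abs(s) for s in samples)
peak_dbfs = 20 * math.log10(peak) if peak else float("-inf")

if peak_dbfs < -30:
    print(f"{peak_dbfs:.1f} dBFS: likely undermodulated (near-flat waveform)")
elif peak_dbfs > -1:
    print(f"{peak_dbfs:.1f} dBFS: peaks near the ceiling; possible distortion")
else:
    print(f"{peak_dbfs:.1f} dBFS: within a workable range")
```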

Adding Clips to the Timeline

Video editing can be compared to the process of building a car on an assembly line. Before production can begin, all the individual automobile parts must be designed, constructed, and set out in proper order on an assembly line. The order of assembly is critically important, as a car is built from the inside out one part at a time. The chassis comes first and serves as the skeletal framework that all other parts connect to in one way or another. Tools are used to secure parts to the chassis. In editing, you can think of clips as car parts and the timeline as the chassis. Clips are constructed in the source monitor where they are trimmed and marked before inserting them into the timeline from end to end. Tools are used in editing to add clips to the timeline, move and trim clips, and modify their behavior. Program length is determined by the combined duration of all of the clips in the timeline from left to right.

There is no single prescribed method or workflow for constructing an edited sequence, as each genre and story has its own unique structure and content. However, the editing process usually starts with establishing a simple ordered sequence of clips from beginning to end and then going through it again and again in a series of passes to advance it from a simple rough cut to a polished final master. Each pass is additive, building upon previous work performed during an earlier stage in the editing process. No matter what process you adopt, try to be systematic, orderly, and deliberate in how you edit and the workflows you employ. As an example, consider the following process involving six editing stages or passes.

First Pass—Construct the Primary Audio Narrative

The primary audio portion of a program is often constructed first by inserting the spoken word or performance-based segments of a story such as dialog, sound bites (SOTs), narration, and voiceovers into the timeline in sequential order from beginning to end. Think of audio as the wheels of the car, the part on which everything else rides. The tracks in the timeline are even structured in this way, with video tracks located in the top half of the timeline and audio tracks below. Audio serves as a cue to help inform editing decisions—for example, where to position the beginning or end of a media clip. It’s generally a bad idea to cut from one shot to another in the middle of a spoken word or sentence. Taking a cue from an SOT or music track, a better approach would be to time the cut to occur at a natural break in the conversation or in conjunction with a significant shift or transition in the tempo or rhythm of the audio soundtrack—such as during a pause or breath between spoken words or just before or after a sound effect, music stinger, or particular beat in a musical score.

Second Pass—Insert B-Roll and Natural Sound

With the primary audio portion of the narrative in place, you are ready to jump back to the head of the timeline to begin inserting B-Roll and natural sound. One purpose of B-Roll is to fill in holes in the video track where only audio currently exists—for example, above a voiceover or narration clip. Another use of B-Roll is to cover the video portion of a talking head (or SOT), in whole or in part, and to hide any visual jump cuts that resulted from cutting together non-adjacent segments of an audio interview.

Third Pass—Insert Titles and Graphics

Once all your video clips are in position in the timeline, you can begin inserting lower-thirds and graphics. If a title or graphic is to be superimposed over an underlying video clip, be sure to place it on the track immediately above the corresponding clip. If your project includes several lower-thirds—each identifying a different subject—then consider using the first title as a template for constructing the others. Doing so will ensure that the duration of your titles and their corresponding font and style attributes are consistent throughout the program. Begin by editing the first title and adding it to the timeline. Next, use the copy and paste function to duplicate the title clip. Open this clip in the title editor to change the text identifiers to match the second subject—repeating this step for as many titles as you have in the project. This method is much faster than creating each individual title from scratch when you have a series of identically formatted graphics.

A graphic should appear on screen long enough for the viewer to read it. For lower-thirds, five to seven seconds generally does the trick. Also, it’s good practice to establish the subject on screen for at least one second before superimposing the title. Likewise, take down the super before cutting to another shot. Finally, consider fading lower-thirds in and out rather than using a hard cut, as the gradual transition of a fade is far less abrupt.

Fourth Pass—Add Sound Effects and Music

Not every project requires music and sound effects, but when you want to include them, it is usually better to hold off doing so until the basic structure of your story is sufficiently developed. An exception to this, of course, is a music video or a musical segment that is part of a longer sequence. In such cases, a recorded song serves as the primary audio and needs to be placed on the timeline first, before inserting video. This way the beat structure of the song can be used for timing the duration and placement of video clips and transitions. When using SFX and music as secondary or background audio sources, mix them with the primary audio at an appropriate level so they do not compete with or distract from the main subject or message.

Fifth Pass—Add Transitions and Effects

Once all your clips have been inserted into the timeline and you are satisfied with their placement, you can begin the process of adding transitions and effects. Trimming and repositioning clips after transitions have been attached can be difficult and time-consuming, so it’s best to add them toward the back end of postproduction. Likewise, adding effects to clips prematurely—before you have to—can slow down the editing process. Effects usually require rendering; as they accumulate across a growing timeline, renders take longer, and you will find yourself pausing from editing more often to wait for them.

Sixth Pass—Finishing Touches

Before closing out an editing project, there will be plenty of nitty-gritty details to attend to. You will spend considerable time making minute adjustments to the placement of clips and the timing of transitions, titles, effects, and so forth as you work to refine and polish the overall presentation. You may also need to perform color correction to fix, enhance, or alter the color properties of video clips and their consistency across the length of your program. With audio, you will likely have to tweak levels and add sound processing filters and plug-ins to improve individual audio clips or the overall mix of the composite soundtrack. Finally, you will need to perform a final comprehensive rendering of every clip in the timeline before exporting the project to a distribution format.

Great Ideas

Track Management

Table 14.1 shows an example of a track management scheme you could adopt for assigning video and audio tracks to specific types of content. It’s good practice to dedicate tracks to specific content types and to apply the scheme consistently throughout the project from beginning to end. Doing so keeps your timeline logically ordered and uncluttered, making it easier for you and others to navigate as the project sequence grows. Failing to manage the placement of clips and the assignment of tracks will result in a timeline that is visually cluttered, chaotic, and poorly constructed. A short code sketch after the table shows one way to encode such a scheme.

Table 14.1 Assigning Tracks in the Project Timeline

Video Track 3 (V3) Titles/Graphics
Video Track 2 (V2) B-Roll
Video Track 1 (V1) SOT Video
Audio Track 1 (A1) SOT Audio and Voiceovers (L)
Audio Track 2 (A2) SOT Audio and Voiceovers (R)
Audio Track 3 (A3) Natural Sound (L)
Audio Track 4 (A4) Natural Sound (R)
Audio Track 5 (A5) Sound Effects (L)
Audio Track 6 (A6) Sound Effects (R)
Audio Track 7 (A7) Music (L)
Audio Track 8 (A8) Music (R)
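
The following Python sketch encodes Table 14.1 as a simple data structure and uses a hypothetical helper (not a feature of any particular NLE) to flag clips dropped on the wrong track.

```python
# Track plan from Table 14.1; the checker below is a hypothetical helper.
TRACK_PLAN = {
    "V3": "Titles/Graphics",
    "V2": "B-Roll",
    "V1": "SOT Video",
    "A1": "SOT Audio and Voiceovers (L)",
    "A2": "SOT Audio and Voiceovers (R)",
    "A3": "Natural Sound (L)",
    "A4": "Natural Sound (R)",
    "A5": "Sound Effects (L)",
    "A6": "Sound Effects (R)",
    "A7": "Music (L)",
    "A8": "Music (R)",
}

def check_placement(track: str, content: str) -> bool:
    """Warn when a clip lands on a track reserved for another content type."""
    expected = TRACK_PLAN.get(track, "")
    if not expected.startswith(content):
        print(f"Warning: {content!r} placed on {track}, reserved for {expected!r}")
        return False
    return True

check_placement("A7", "Music")    # fine, returns True
check_placement("V2", "Titles")   # prints a warning, returns False
```
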
Figure 14.19 This close-up view of the source monitor in Adobe Premiere Pro illustrates the process of trimming. The original video clip is 00:02:56:00 in length. The editor isolates a 20-second sound bite by marking an IN point at the beginning of the sound bite and an OUT point at the end.


Source: Adobe product screenshot reprinted with permission from Adobe Systems Incorporated.

Figure 14.20 When the edit is performed, only the 20-second portion of the clip that’s been marked will be inserted in the timeline. Trimming clips prior to putting them in the timeline is highly recommended. While you can also trim directly in the timeline, it is best to do so only to fine-tune your initial edit decisions.


Source: Adobe product screenshot reprinted with permission from Adobe Systems Incorporated.

Three-Point Editing

Let’s assume all the constituent parts for our video (our media clips) have been created and imported into the project window and have been properly logged, labeled, and organized. Where do we go from here? While it’s possible to start dragging clips willy-nilly from the project bin into the timeline, doing so is a bad habit that promotes an unproductive workflow. The original media clips you acquired are raw and uncut; they should be marked and trimmed before they are inserted in the timeline, not afterward. Instead of dragging blindly, work through your footage methodically, opening your clips one at a time in the source monitor of your NLE. Here, you can preview them and make thoughtful decisions about what portion of each clip to include in the timeline. To illustrate, let’s walk through the process of editing a basic sequence consisting of three clips: a voiceover, a sound bite, and a second voiceover. To do this, we’ll use the standard technique of three-point editing. (Note: You can view this editing demonstration on the companion website for this textbook.)

Step 1: Set an IN Point and OUT Point in the Source Monitor

I begin by opening the voiceover clip in the source monitor. The clip is more than five minutes long and contains numerous voiceover segments along with several takes of each one. In the end, I will not use most of the content in this clip. I only need to extract a few choice sound bites for my project. I scrub the playhead to the first voiceover (VO #1) and set an IN point by pressing the I key on my keyboard. Next, I move the playhead to the end of VO #1 and set an OUT point by pressing the O key on my keyboard. The marked clip has a duration of 00:00:20:10 and is ready to be inserted into the timeline (see Figure 14.21, top).

Figure 14.21 Top (Step 1): An IN and OUT point are set to mark the beginning and end of the 20-second clip in the source monitor. Bottom (Steps 2 and 3): An IN point is set on the timeline and then the edit is performed.


Source: Adobe product screenshot reprinted with permission from Adobe Systems Incorporated.

Step 2: Set an IN Point in the Timeline

Next, I change the focus of the NLE to the timeline by clicking on it. Since this is my first edit, setting the IN point is relatively easy. I advance the playhead one second into the timeline and press I on my keyboard to mark the IN point (see Figure 14.21, bottom). With three points marked, the NLE has enough information to automatically calculate the timeline OUT point. For this reason, I do not have to specify it. In fact, when using this method of editing, a fourth point is never required and should never be entered. Hence the term three-point editing.
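
The arithmetic behind this is simple: the marked source duration must equal the marked timeline duration. The Python sketch below (a hypothetical helper working in frame counts, assuming 30 fps) derives whichever point is missing.

```python
# Three-point editing sketch: given any three points, derive the fourth.
# All values are frame counts (30 fps assumed); the helper is hypothetical.
def complete_edit(src_in=None, src_out=None, tl_in=None, tl_out=None):
    p = dict(src_in=src_in, src_out=src_out, tl_in=tl_in, tl_out=tl_out)
    missing = [k for k, v in p.items() if v is None]
    assert len(missing) == 1, "exactly three points must be marked"
    if p["tl_out"] is None:
        p["tl_out"] = p["tl_in"] + (p["src_out"] - p["src_in"])
    elif p["src_out"] is None:
        p["src_out"] = p["src_in"] + (p["tl_out"] - p["tl_in"])
    elif p["src_in"] is None:
        p["src_in"] = p["src_out"] - (p["tl_out"] - p["tl_in"])
    else:
        p["tl_in"] = p["tl_out"] - (p["src_out"] - p["src_in"])
    return p

# Step 1 marked a 00:00:20:10 clip (610 frames at 30 fps) in the source
# monitor; Step 2 marked a timeline IN one second (30 frames) in:
print(complete_edit(src_in=0, src_out=610, tl_in=30))
# -> tl_out comes back as 640; no fourth point is ever entered by hand
```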

Step 3: Perform the Edit

With three points marked, I’m ready to perform the edit. Presto! Just like that, a new 20-second clip is added to the timeline at precisely the right spot (see Figure 14.21, bottom). The original full-length clip remains in the project bin for future use and will remain visible in the source monitor until I load a different project asset. Keep in mind that the newly created clip is merely an alias, a visual representation of a pointer file that can be changed at any time without affecting the corresponding media file stored on my hard drive. I can work with the clip directly in the timeline—repositioning, trimming, or extending it as required. I can also load it back into the source monitor by double-clicking on it. As your project evolves, the number of clips and aliases you have access to will steadily increase. You need to keep track of which clip is loaded into the source monitor at any one time. It’s easy to get confused and make mistakes by inadvertently editing the wrong project asset or timeline clip.

Step 4: Repeat Steps 1 through 3 for the SOT and VO #2

The next step in this example is to repeat steps 1 through 3 for the remaining clips in the sequence (see Figure 14.22). On the timeline, the IN point for each subsequent edit will be the OUT point of the previous one. This will ensure that each clip is butted tightly against the other with no empty frames in between.

Step 5: Add B-Roll over VO Segments

Once the three clips have been added to my sequence, I return to the beginning of the timeline and start inserting B-Roll over each voiceover segment (see Figure 14.23). To keep from overwriting the voiceover clips on A1 and A2, natural sound is placed beneath these tracks on A3 and A4.

Figure 14.22 Step 4: Repeat the first three steps until you complete the first pass for a simple VOSOT-VO sequence.


Source: Adobe product screenshot reprinted with permission from Adobe Systems Incorporated.

Figure 14.23 Step 5: The completed sequence with the inclusion of B-Roll and natural sound.


Source: Adobe product screenshot reprinted with permission from Adobe Systems Incorporated.

When editing B-Roll, I slightly modify my three-point editing technique. Since my goal now is to fill precise gaps or segments in the timeline sequence, I want to set IN and OUT points on the timeline first. Next, in the source monitor, I mark only an IN point to designate the starting frame of each B-Roll clip. The clip will end when it reaches the OUT point as designated on the timeline.
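
In frame-count terms, this variant simply rearranges the same three-point equation. A minimal sketch with hypothetical values:

```python
# B-Roll fill sketch: timeline IN and OUT bracket the gap to cover,
# and the source monitor supplies only an IN point (values hypothetical).
tl_in, tl_out = 900, 1050   # the gap in the sequence, in frames
src_in = 120                # first usable frame of the B-Roll clip

src_out = src_in + (tl_out - tl_in)
print(src_out)              # 270: the NLE stops the B-Roll clip here for you
```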

Transitions

A transition is a visual effect applied to the timeline at the beginning or end of a video or audio clip. Transitions are used to enhance the flow and rhythm of a project and to guide the viewer’s senses through changes in the narrative structure of a story as it unfolds (see Figure 14.24).

Cut

The most common transition is called a cut. A cut is an instantaneous transition from the end of one shot to the beginning of another shot. Because of this, a cut has no duration per se. In fact, you can think of a cut as the invisible transition. Cuts are popular with editors because of their simplicity. When properly timed, the audience will hardly know a cut has occurred. As we’ve already discussed, however, cutting in the middle of a word or sentence, or on an awkward beat in the underscore, can disrupt continuity and be self-defeating. Most often, the editor’s goal is to make the presence of a cut invisible to the audience. Remember, the best edits are often the ones you don’t notice. Incidentally, the first beat of a bar, called the downbeat, is the strongest point in a melody line and a great location for the placement of a cut or other transition.

Figure 14.24 These icons represent many of the standard video transitions included in Adobe Premiere Pro CC.


Fade

The absence of picture data on a television screen is referred to as black. A slug is a media clip consisting entirely of black frames. A slug (of any length) can be inserted into the timeline as a clip separator—as a way of signaling a key transition or scene change—in much the same way that a lighting blackout is used during a theatrical production to herald the beginning or end of a scene or act. Programs usually begin and end in black with a fade providing the transition. A fade up is a gradual transition from black to a fully opaque television image, while a fade down is a gradual transition in the opposite direction. Video and audio fades often occur simultaneously. For example, in an end-of-program fade to black, the video image should reach 100% black at precisely the same time the audio signal is fully muted. For this reason, it is standard practice to use the same duration for both the video and audio portions of a fade.

Dissolve

A dissolve is a gradual transition from one shot to another that is created by overlapping the fade down of one clip with the fade up of the next adjacent clip in a sequence. Dissolves are less abrupt than cuts and are often used to signify a change in time, a change of location, or a change in tempo. Dissolves can be used to slow the pace of a program when a gentler timing structure or rhythm is called for. Short dissolves of less than 10 frames can be used in place of a cut to soften a transition without necessarily affecting an otherwise upbeat tempo. The longer the duration of a dissolve, the more dramatic and pronounced the effect.
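
Under the hood, a dissolve is a weighted average of the two overlapping images. The Python sketch below blends a few grayscale pixel values (hypothetical numbers) to show the ramp; a fade to black is simply the special case where the incoming frame is all zeros.

```python
# Dissolve sketch: per-pixel blend of outgoing frame A and incoming frame B.
def dissolve_frame(frame_a, frame_b, t):
    """Blend two frames; t runs 0.0 -> 1.0 across the transition's duration."""
    return [a * (1 - t) + b * t for a, b in zip(frame_a, frame_b)]

a = [200, 180, 160]   # outgoing frame's grayscale pixels (hypothetical)
b = [20, 40, 60]      # incoming frame's pixels

# Halfway through a 20-frame dissolve (frame 10), the images mix evenly:
print(dissolve_frame(a, b, 10 / 20))   # [110.0, 110.0, 110.0]
```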

Wipe

Most other transitions fall into the category of wipes or 3D effects. A wipe uses a linear movement or sweeping pattern to transition from one image to another. For example, a vertical wipe moves from the top of the frame down, or from the bottom of the frame up, as it overwrites the current image with new pixels. A horizontal wipe does the same thing by sweeping left to right or right to left. A circle wipe moves in a radial direction from the center of the image outward and vice versa. Wipes come in a virtually unlimited assortment of shapes and patterns. There are checkerboard wipes, clock wipes, and slide wipes, which push the entire frame of video off-screen. A spin wipe rotates the video frame as it moves out of view, while an explosion wipe shatters the picture into pieces before sending them off in every direction. If your selection of wipes is limited, chances are that a third-party developer has produced wipes for your NLE that can be downloaded free or purchased for a fee.
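
Wipes replace the weighted blend of a dissolve with a moving boundary. Here is a minimal sketch of one row of a left-to-right horizontal wipe (hypothetical pixel values):

```python
# Horizontal wipe sketch: the incoming image is revealed column by column.
def wipe_row(row_a, row_b, t):
    """t runs 0.0 -> 1.0; pixels left of the boundary show the incoming clip."""
    boundary = int(len(row_a) * t)
    return row_b[:boundary] + row_a[boundary:]

a = ["A"] * 8   # one row of the outgoing image
b = ["B"] * 8   # one row of the incoming image
print(wipe_row(a, b, 0.5))   # ['B', 'B', 'B', 'B', 'A', 'A', 'A', 'A']
```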

Like wipes, 3D transitions come in all shapes and sizes. Using powerful algorithms, video can be squished, stretched, warped, morphed, and distorted in any number of ways to achieve creative and unusual visual effects. Page peel transitions curl the edge of the video frame inward to simulate turning pages in a book. Cube spin transitions turn the video frame into a rotating object with multiple sides displaying different images or picture data. The list goes on and on.

Wipes and 3D transitions are visually very cool. They are fun to play with and can add legitimate value to a visual experience for the viewer. However, you must use them with caution. Anything you add into the video timeline should be motivated by the story and the impact you wish to make on the viewer. Simply throwing things into an edited sequence because you can, or because they look cool, is not necessarily going to make the end product look any more professional or be any more effective. In fact, it may backfire, producing a negative effect. Until you have more experience, stick with cuts and dissolves—and whatever you do, be consistent in terms of when and where you apply visual transitions. Do not use a pinwheel wipe in the first transition, a page peel in the second, and exploding video in the third. This sort of sporadic and random use of transitions will only confuse the viewer or overwhelm his or her visual senses. As we’ve said before, less is more!

Transition Properties

Once a transition has been added to the timeline, it becomes a customizable asset with properties that can be easily changed. Transition values are saved in a data file that’s stored in the render folder. While most transitions will play back in real time, at some point before exporting your project, you will need to fully render them. Once rendered, they will perform more smoothly and appear sharper when playing them in the timeline.

The most important property you need to pay attention to is the transition duration. Duration is the amount of time a transition takes to perform from beginning to end, as indicated in seconds and frames. The default transition in most programs is one second (or 30 frames). When adding a dissolve between two clips, I often use a 20-frame duration, which is roughly two-thirds of a second. This duration serves me well most of the time. For something particularly dramatic, I might use a longer increment of one to three seconds. Only rarely will I use anything longer. You need to remember that transitions take time to execute and that this can affect the runtime of your project, consuming precious seconds and minutes over the course of a long program. For example, in a 60-second commercial spot with 10 default transitions, one-sixth of the presentation will be taken up by transitions. The simple act of changing the default duration from one second to 15 frames will give you five additional seconds to work with. Think about it: 5 seconds in a 60-second spot is a lot of time.
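
You can verify this arithmetic quickly; the sketch below assumes 30 fps:

```python
# Runtime cost of transitions in a 60-second spot (30 fps assumed).
FPS, SPOT_SECONDS, TRANSITIONS = 30, 60, 10

default_seconds = TRANSITIONS * 30 / FPS   # ten 1-second (30-frame) transitions
print(default_seconds / SPOT_SECONDS)      # 0.1666... -> one-sixth of the spot

shorter_seconds = TRANSITIONS * 15 / FPS   # the same transitions at 15 frames
print(default_seconds - shorter_seconds)   # 5.0 seconds recovered
```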

In addition to the duration property, you can adjust a number of other variables. For example, you can change the direction of an edge wipe, causing it to start from the left, right, top, or bottom of the frame. The properties you have access to will depend on the transition you’re working with. Each transition is unique and therefore has different parameters and values you can adjust or manipulate to change the way it performs.

When applying a transition, make sure the adjoining clips you’re attaching it to are connected. If there’s a gap between the two clips that you can’t see—perhaps because you’re zoomed out too far—then the transition may not attach. Zooming in to the timeline for a close-up view of the transition point is really helpful. Once attached, transitions become modular visual objects you can modify.

Chapter Summary

Not surprisingly, you’ll find that the NLE interface varies from program to program. What doesn’t change, however, are good editing practices. For example, the technique of using a cutaway or B-Roll to mask a jump cut won’t become irrelevant simply because you switch from Avid Media Composer to Adobe Premiere Pro—nor will it eradicate the need to carefully monitor your audio levels in post. For the most part, you will find that the core concepts and aesthetics of editing are the very things that influenced the design of whatever NLE you’re using—along with the hodgepodge of tools and features it includes. Most professional programs include a timeline for arranging clips linearly from beginning to end, a media browser or library for organizing your project assets into bins, a source monitor for marking and trimming clips, and a program monitor for reviewing and keeping track of your progress. While you need to become familiar with the operational features and functions of the NLE you’re using, applying good practices and principles to your project—such as three-point editing—is far more important.

