Tweak, cut, blend, and steal whatever you have to so that the audience can understand every word. They should be convinced it was all recorded in one take.
Ric Viers, sound designer
Author of The Sound Effects Bible
This chapter is about noise: how to remove it (at best); how to manage it (as a middle ground); and how to find a way to cohabitate with it (a dissatisfying but grown-up possibility). Since “noise” is an unmanageably broad topic, I've divided this chapter into two types of noise, and therefore two means of attack:
When it comes to noise reduction, the roles, rules, and relationship between dialogue editor and mixer are tricky. Communication is vital. There may be turf wars over who does what (only a foolish dialogue editor would take on this fight). There should be discussions concerning what to do, which tools to use, and how to use them. And there must be understandings about how you build and organize tracks that will yield the best results with the least pain.
All mixers are different. Each has his own noise reduction plans, habits, and peeves. Respect all of these; otherwise you're doing nothing to help your tracks or your vision. Come up with a noise reduction strategy before you get started. Know what's yours to do and what isn't. This is the most important advice you will get in this chapter.
The world is a noisy place, and even the most resolute location mixer can’t do much about airplanes, traffic, and other exterior sounds other than to beg for additional takes. Once these infiltrators land in your lap, it’s your job to find them, assess their worth, and then determine their fate.
Logically, you'd choose to remove all unwanted noise in order to make room for the dialogue and to present a smoother scene, with no artifacts of the filmmaking process. Give the actors the space they need—free of pesky noises—and the film opens up. But noise control requires more than just a vigilant ear. There are borderline cases, scenes where noise is a nuisance but doesn't interfere with the dialogue. A scene shot next to a busy street, for example, can be justifiably noisy. The fact that you have to labor to hear a few words might help the viewer to sympathize with the character, who also must strain to understand and be understood. If you like this tension, sell it to the sound designer. Remember, though, that a noisy dialogue track saddles you with a noisy scene. The scene will never be quieter than the dialogue premix, and you won't be able to isolate the dialogue.
Even the slightest amount of inappropriate background noise can kill a quiet, intimate scene. A scene of a couple sitting in their living room in the middle of the night, discussing their troubled relationship, loses its fragility and edge if we hear traffic, airplanes, the neighbor's TV, or a crewmember walking around the set. This scene must have dead quiet dialogue. The supervising sound editor may choose to color the scene with quiet spot effects, an interesting and mood-evoking background, or music. But the dialogue editor must be able to deliver a track with no disturbances to create a world of two people in a very quiet room—alone with all their problems.
Before you can fix the noises in your dialogue, you have to find them. In most cases, eliminating noises isn’t difficult; noticing them is the hard part. Spotting them sounds easy, but it’s a skill that separates experienced dialogue editors from novices. Even if you pride yourself on your superhuman listening skills and canine hearing, you have to learn how to listen for the annoying ticks, pops, and screeches that besiege every dialogue track. They’re in there, coyly taunting you to find them.
First, you must discover as many noises as you have time, patience, and budget for. Then, before blindly eliminating every one, you have to ask yourself, “What is this noise? Does it help or hurt the story?” Period. It's actually as simple as that; you just have to stay awake and aware.
I hate when the dialogue editor keeps only the ugly HF, thinking sound is only “words”; when he processes “hard” corrections and tries to clean everything, making for dead tracks.
(“Noise is not dirty. Silences are beautiful.” – John Cage)
Cécile Chagnaud, film editor/sound editor/sound designer
Or, Mon trésor; Nizwa, sur les traces d'H. de Monfreid
Every step in the process of filmmaking is sorcery. It's all about getting the viewer to believe that this story really happened as shown in the final print. As a film professional, you like to think that you're immune to this seduction. You know it's not real. Yet when you screen the offline picture cut or watch a scene over and over, the most obnoxious noises might slip right past you. Just like the average movie fan, you're sucked in.
It's pretty embarrassing to screen your seemingly perfect tracks that you lovingly massaged for weeks, only to have the director comment “What about those dolly sounds?” Sure enough, a shot with flagrant dolly noises. You fell for the story, you heard the scene too many times, you overlooked the dolly. Ouch! It's your job to hear and correct these sounds, so you must find ways not to fall victim to the story's siren song.
Question every noise you hear. Don't fall into the trap of “Well, it was part of the location recording, so it must be legitimate.” Obviously, if the gaffer dropped a wrench during a take, or the producer sneezed, you'd replace the damaged word. However, when an actor's footstep falls on a delicate phrase, you might be reluctant to make the repair, thinking it's a “natural” sound.
Remember, there's nothing natural in the movies. To see whether it's a good decision to lose the errant footstep, fix it. Either you'll miss it, finding the rhythm of the scene suddenly damaged and unnatural, or you'll see a new clarity and focus. If removing the footstep results in a rhythmic hole but greatly improves articulation, tell the supervising sound editor before the Foley is recorded, so that the necessary footstep will be in place but controllable. Better yet, find another, quieter production footstep to replace the offender.
The most rewarding part of careful listening is that once you've heard a noise or had it pointed out to you, you'll never again not hear it. The 1915 illustration “My Wife and My Mother-in-law” is a classic example of not seeing the obvious until it's pointed out1 (see Figure 14.1). At first glance, you see either a maiden or a hag, but not both. Even if someone tells you what to look for, you're stuck. Finally, you have your breakthrough moment, and what was hidden from you becomes clear. Henceforth, you'll always be able to find each of these women, like it or not.
Similarly, you can listen to a track many times and never hear the truth. But once your brain wakes up to it, you'll hear that click in the middle of a word and wonder how you missed it the first few times around. Ignoring the meaning of the dialogue and focusing on the sounds is a useful tool when searching for unwanted noises. Listening at a reduced monitor level can help you hear beyond the words.
The following sections describe the origins of the most common noises you'll encounter in your tracks. Use this list to help you learn to locate these unexpected interlopers.
Transient noises, the not-so-silent warriors in the conspiracy to screw up your tracks, have several sources, some from the set, some from beyond.
A shooting set—especially a location set—is composed of lots of people and lots of equipment, usually surrounded by lots of other things that make noise. Even under the best of circumstances, these things aren't quiet. For example:
Most of the noises you encounter don't come from the set itself, but from further away. We'll deal with them later.
Dolly noise is easy to spot since a dolly makes no noise when it isn’t moving. It’s simple: When the camera moves, listen for weird sounds—for example:
Unnecessary footsteps are easy to hear but hard to notice. When you first listen to a scene involving two characters walking on a gravel driveway, all seems normal. You hear dialogue and some footsteps. But something inside tells you to study this shot more closely and check for problems. Ask yourself how many pairs of feet you hear. If it's more than two (which is likely), you have a problem. Picture how the shot was made and you'll understand where all the noise comes from. How many people were involved? Let's see: two actors, one camera operator, one assistant camera operator, one boom operator, one location mixer (maybe), one cable runner (probably), one continuity person, one director. That's a lot of feet. But because you expect to hear some feet in the moving shot, you initially overlook the problem.
As with dolly noise, be on the lookout for a moving camera—in this case handheld; that's where so many noise problems breed. Find out how the footsteps interfere with the scene by replacing a section of dialogue from alternate takes or wild sound (discussed in later sections), noting any improvement. It's likely that the scene will be more intimate and have a greater impact after you remove the rest of the crew's feet from the track.
Fortunately, a good location mixer will spot the trouble in the field and provide you with workable wild lines, and perhaps even wild footsteps, to fill in the gaps. Otherwise, you'll have to loop the shot.
Remember, there are a lot of people on a set, all of whom move, breathe, and make noise, so be on the lookout for these noises:
Sadly, many unwelcome noises can and will get into the tracks as by-products of the recording process itself. The boom operator, location mixer, and cable puller are all very busy capturing manageable dialogue, and sometimes bad things happen.
With so much attention paid to getting the best sound from actors' voices, it's no surprise that you're occasionally faced with all sorts of sounds coming from an actor—sounds that you'd just as soon not hear. We all make noises that aren't directly part of speech. Someone you're talking to may be producing an array of snorts, clicks, pops, and gurgles, yet you'll rarely notice.
Comparatively normal human noises often sneak under the radar when we're in the heat of a conversation because our brains simply dismiss them as noninformation, of no consequence. Yet when you record this conversation, unseemly body sounds stand shoulder-to-shoulder with the dialogue.
What sorts of vocal and body noises should you be on the lookout for?
Dentures, plates, bridges, and the like can make surprisingly loud noises. They're easy to spot because they almost always coincide with speech. Unfortunately, denture clicks usually get louder in the dialogue premix, where dialogue tends to pick up some brightness. At that point, they become impossible to ignore.
Most people with fake teeth aren't thrilled about advertising the fact, so a serious round of click removal is usually welcome. Also, relentless dental noise is almost certain to get in the way of story and character development. There's a chance that the character's persona calls for such unenviable dental problems. If that's the case, the supervising sound editor may elect to remove most dental details from the dialogue and have the effects or Foley department provide something more stylized, personalized, and controllable.
People make lots of nonverbal sounds with their mouths. Sometimes these sounds have meaning that would be difficult to express in words: a sigh or a long, dramatic breath can say worlds; a contemplative wetting of the lips can imply a moment of reflection before words of wisdom; a clicking of the teeth or tongue may suggest thought or nervousness. An actor's clicking, chomping, snorting, and sighing may be just what the scene calls for, or it may be just more commotion that comes between the scene and the audience.
Your job is to spot each nonverbal sound and decide if it conveys the mood and information you want for the scene or if you need to thin it out, replace it, or eliminate it altogether. Things to think about when listening for smacks:
The sounds a character makes between sentences or words can be as important as the information contained in the text. Get it right and you'll greatly increase the drama and emotion of the scene.
Interestingly, I take out mouth clicks in the production track, but I find putting mouth clicks in when I'm cutting ADR sometimes helps “sell” the ADR as real.
Jenny Ward, ADR editor
King Kong
We've seen how clothes rustling against a body mic can be a nuisance. Many other common clothing noises are just as bad and require a sharp mind and a keen ear.
You’ll inevitably encounter places in the track where the actor sounds a plosive consonant, such as a P or B, and the rush of air overloads the microphone capsule. There’s no point getting into why this happens; your job is to fix it. Usually, you’ll have to replace that section of the contaminated word, but there are some filtering tricks that may work.
An actor unexpectedly yells, the recorder input level is set too high, or a limiter is used too aggressively. You’re left with distortion, an ugly flaw that’s very hard to fix. We’ll get to this later.
Once you've trained yourself to be alert for the countless rattles, pops, clicks, and snorts squatting in your tracks, the next step is to decide what to do with them. There are two basic editorial tools for removing unwanted noises: room tone fill and replacement. Noises falling between words or action can almost always be removed by filling with appropriate room tone, whereas noises falling on top of words or actions, or even just before or after dialogue, require searching through alternate material to find appropriate replacements.
Let’s look at these two techniques, remembering that there are many ways of fixing noises and as many opinions as there are editors. With time you’ll settle into your own way of working, synthesizing all of the techniques and creating your own private stockpile. Generally, exhaust your editorial options before moving on to electronic solutions. There are some great plugins that will help sort out transient noises, but you’re usually best off trying an editorial fix before opening up your plugin arsenal.
Small electrical clicks, individual clothing-mic collisions, lip smacks, and the like, are easily removed with room tone, but only between words, rarely within them (see Figure 14.2).
Here's what you do to remove these tiny noises:
Figure 14.2 Small clicks typical of radio microphone trouble.
Figure 14.4 Good room tone can usually be found nearby. Always leave a bit of usable room tone outside the selection.
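If you’re curious what this amounts to under the hood, here is a minimal sketch in Python with NumPy. It isn’t any workstation’s actual feature, just the idea of patching a noise with nearby tone and crossfading the seams; the function name, the mono-array assumption, and the 10 ms fade are all illustrative.

```python
import numpy as np

def fill_with_room_tone(audio, start, end, tone_start, fade=480):
    """Replace audio[start:end] with room tone copied from elsewhere in
    the same take, crossfading at both seams. fade=480 samples is 10 ms
    at 48 kHz; the patch is assumed to be at least two fades long and
    the tone region long enough to cover it."""
    length = end - start
    tone = audio[tone_start:tone_start + length].copy()
    out = audio.copy()
    out[start:end] = tone
    ramp = np.linspace(0.0, 1.0, fade)
    # Fade from the original into the tone at the head of the patch...
    out[start:start + fade] = audio[start:start + fade] * (1 - ramp) + tone[:fade] * ramp
    # ...and from the tone back to the original at the tail.
    out[end - fade:end] = tone[-fade:] * (1 - ramp) + audio[end - fade:end] * ramp
    return out
```

Real workstations do the same job with region edits and fade files; the point is simply that a room tone patch lives or dies by its crossfades.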
A couple of additional tips and tricks will come in handy:
Figure 14.5 Splicing at the zero crossing increases the likelihood of a successful edit.
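If your workstation doesn’t snap edits to zero crossings for you, the search is easy to picture. Here is a hypothetical NumPy helper, assuming mono audio in an array:

```python
import numpy as np

def nearest_zero_crossing(audio, idx, search=2000):
    """Return the sample index of the zero crossing closest to idx,
    looking up to 'search' samples in either direction."""
    lo, hi = max(idx - search, 0), min(idx + search, len(audio))
    window = audio[lo:hi]
    signs = np.signbit(window)
    # A zero crossing sits between two samples of opposite sign.
    crossings = np.where(signs[:-1] != signs[1:])[0] + lo
    if crossings.size == 0:
        return idx  # nothing nearby; splice where you are
    return int(crossings[np.argmin(np.abs(crossings - idx))])
```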
Extremely short nonacoustic clicks can often be removed with your workstation’s pencil tool. (Most workstations have a tool that enables you to redraw a soundfile’s waveform—these tools are usually shaped like a pencil, so “Pencil Tool” is a pretty good generic name.) When used on an appropriate noise, the pencil tool can be miraculous. However, there are a couple of very important things to remember when using it. First, unlike almost every other process, the pencil tool modifies the parent soundfile. Second, it’s inappropriate for all but the shortest problems, and if there’s a significant acoustic element—even the tiniest ringout—it just won’t work and you’ll be stuck with a low-frequency “thud” where once you had only a tiny click.
Using the scrub tool and waveform display, find the click in question (see Figure 14.6). The waveform usually isn't very helpful until you've zoomed in quite close. More often than not, the click will appear as an unimpressive jagged edge along an otherwise smooth curve. It could also appear as a very small sawtooth pattern along the line of the curve. Although small to the eye, a glitch like this can cause a lot of trouble.
Always remember that using the pencil is destructive, which is rare in non-linear editing. Any change you make will affect the original soundfile—and thus every occurrence of this part of it. This is definitely a mixed blessing. If the click occurs in the middle of a line repeated many times in the film, the modification will present itself in every recurrence of that line, for better or worse.
Say a film begins on the deathbed of a family patriarch, who at the moment of his demise manages to murmur, “The butler did it!” Throughout the film we hear Dad's ghost say, “The butler did it!” If there's a short electrical click in the middle of the word “butler,” you fix it once with the pencil and you've fixed every occurrence. Since the same part of the original sound was reused several times, a change in one occurrence affects all other appearances of Dad's disembodied ghost.
This, however, is unusual behavior for an audio workstation. You want to protect the original soundfile from your lapses of judgment, so remember this rule of thumb: before using the pencil tool, make a copy of the section you intend to repair. Here's the safest way to proceed:
Figure 14.7 Top: The segment from Figure 14.6 with two clicks (labeled with markers). Bottom: The area to be repaired was consolidated (highlighted in black) to create a new, tiny soundfile. The pencil was used to redraw the waveform. Compare the smooth curves below with the jagged originals.
If you're not happy with the results, then delete the new consolidated clip, re-join the two sides of the resulting hole, and start over. This is why you repaired a copy soundfile rather than blazing ahead on the original.
When clothing rubs against a lavaliere microphone you hear a nasty grinding. This can often be avoided with careful mic placement, but by the time the problem gets to you, it's a bit late to care what caused it. You can't filter out the noise, as it covers a very broad frequency range and it poisons everything else in the track. You can try a de-esser, but the odds of this working are pretty small. Normally, the only way to rid yourself of this sound is to collect the alternate takes of the shot and piece together an alternate assembly (see the upcoming section on alternate takes). You should also add this line to your ADR spotting list.
However, if you’ve exhausted the alternate lines and the actor is no longer on speaking terms with the director and refuses to be looped, you can try a trick that occasionally works. There are many plugins—usually bundled as “restoration suites”—that are the grandchildren of software originally designed to reduce surface noise when remastering old 78 rpm recordings. Waves, Sonic Studio, iZotope, Cedar, and Sonnox are some of the big players in the noise reduction universe.2 Later in this chapter we’ll use these tools to reduce broadband and harmonic noise. But for now, we’re interested in the components that focus on clicks and crackle. Collectively referred to as impulsive noises, these distortions come from physically abrading a surface or microphone, or from overloading or otherwise abusing an electronic device. When closely compared, the waveform of a transcription from an old vinyl
record and that of a dialogue recording contaminated with mild clothing rustle have many similarities. In each case, what should be a smooth curve is instead serrated stubble.
De-cracklers and de-clickers use very clever math to smooth out local irregularities (see Figure 14.8). De-click and de-crackle processors don’t work by filtering, but rather by interpolating (in other words, if you know what precedes and follows a moment, then you stand a good chance of figuring out what ought to be there). Once you define just what constitutes clicks and crackles (usually by amplitude and duration or “shape”), the processor will identify the appropriate events and remove them. Then, by “looking” both before and after the excised click, it will fill in the hole. Of course, it’s more complex than this, and each product has its own way of going about the process, but they all think in more or less the same manner.
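To make the interpolation idea concrete, here is a toy de-clicker in Python/NumPy. It’s a crude stand-in for the commercial processors named above (real ones model the signal far more cleverly), but the detect-then-bridge logic is the same; the thresholds and names are illustrative.

```python
import numpy as np

def declick(audio, ratio=8.0, max_len=64):
    """Flag sample-to-sample jumps much larger than the median jump,
    group them into short runs, and bridge each run by interpolating
    from the clean samples on either side."""
    out = audio.astype(float).copy()
    jumps = np.abs(np.diff(out))
    idx = np.where(jumps > ratio * (np.median(jumps) + 1e-12))[0]
    if idx.size == 0:
        return out
    # Split the flagged indices into runs of consecutive samples.
    for run in np.split(idx, np.where(np.diff(idx) > 1)[0] + 1):
        a, b = run[0], run[-1] + 2  # clean anchor samples at a-1 and b
        if b - a <= max_len and a >= 1 and b < len(out):
            out[a:b] = np.linspace(out[a - 1], out[b], b - a + 2)[1:-1]
    return out
```

Notice the max_len guard: once a defect is long enough to carry real acoustic content, simple interpolation stops being plausible, which is one reason clothing rustle responds so much less gracefully than a single click.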
Maybe, just maybe, you can use them to smooth out your curve, reducing the clothing rustle to a manageable distortion. Before you start, make a copy of the region so that you have a listening reference and can return easily to the original should this noise-removal plan prove ill-conceived. (You don't need to create a new soundfile, since these processes aren't destructive.) As with all interpolation processes, you're usually better off making several small, low-power passes than one powerful pass. Work in small chunks of time so that you neither over-process words that aren't particularly damaged, nor undercook those sections in need of extra attention. Don't develop great expectations for this method of cleaning up clothing rustle. Its results range from mediocre to surprisingly good. Still, when you have no other choice, a bit of de-crackle may be an acceptable fix.
Distortion can originate in the analogue chain (an overloaded mic, a poorly set limiter, etc.), or because of digital clipping (perhaps the nastiest noise on the planet). It can't be removed. Really, it can't, and your only real recourse is to replace the distorted words with alternate material. However, when your back's against the wall and there's no choice, there are plugins that can help lessen the ugliness and restore some of the original fidelity.
These programs are designed to reconstruct signal peaks damaged by clipping—analogue or digital. They identify clipped areas, after which the user determines a threshold for processing (see Figure 14.9). The actual repair involves resynthesis of the affected area by comparing it to surrounding material, which is presumably less damaged. Depending on the type and severity of the distortion, declipping can reduce distortion’s nasty “fingernails on the blackboard” fingerprint. Typically you’ll run several passes; with each iteration the lopped-off waveform will round out, get taller, and sound better. The result certainly doesn’t outperform a good recording, but it likely beats what you started with. However, declippers are potent tools, and if used imprudently they can cause more trouble than they solve. Experiment with threshold levels and take care to process only what’s needed. And it wouldn’t hurt to read the manual.
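Here is a toy version of the declipping idea, for intuition only; commercial declippers are far more sophisticated about deciding what is damaged. This sketch treats everything at or above a threshold as clipped and resynthesizes it with a spline through the surviving samples (it assumes a healthy number of unclipped samples to anchor the fit):

```python
import numpy as np
from scipy.interpolate import CubicSpline

def declip(audio, clip_level=0.99):
    """Replace flat-topped (clipped) runs with a cubic spline fitted
    through the intact samples around them. Rebuilt peaks can exceed
    the old ceiling -- hence the need for headroom, discussed below."""
    out = audio.astype(float).copy()
    clipped = np.abs(out) >= clip_level
    good = np.where(~clipped)[0]
    spline = CubicSpline(good, out[good])
    bad = np.where(clipped)[0]
    out[bad] = spline(bad)
    return out
```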
If you don’t have declipping software but you’re nonetheless confronted with distortion, there are two common noise reduction tools—de-click and de-crackle—that may lessen the pain. Look closely at the waveform of a distorted track and you’ll see two ugly problems (see Figure 14.10). First, the waveform is truncated, like sawed-off pyramids. That gives you the ugly compression of a distorted sound. Second, the plateaus are jagged and rough, not unlike the surface of a worn 78 rpm record. As with removing clothing rustle, repeated passes of a de-click utility followed by de-crackling may, just may, smooth the rough edges and even rebuild some of the waveform’s natural contours.
Figure 14.11 If you must reduce gain prior to processing, create a new soundfile at a lowered level. Session automation is not sufficient.
As with most audio signal processes, de-click and de-crackle can raise the level of a soundfile, so make sure that there's at least 3 dB of headroom in the original soundfile when you start this operation. If the audio level of your original recording is unusually high, you may need to lower its gain—especially if you're working with 16-bit soundfiles.3 In truth, you'll almost never encounter location recordings that are extremely hot. But if you must reduce the level of a clip, you can use the “Gain” AudioSuite processor (or equivalent), which will yield a new, quieter, soundfile (see Figure 14.11).
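A sketch of that precaution, assuming the pysoundfile library (any audio file I/O library will do); the function and paths are hypothetical. The point is that a new, quieter file is rendered on disk, because fader automation changes only what you monitor, not the samples the offline processor is handed.

```python
import numpy as np
import soundfile as sf

def write_attenuated_copy(src, dst, headroom_db=3.0):
    """If the file peaks hotter than -headroom_db dBFS, render a
    lowered copy for processing; otherwise leave it alone."""
    audio, sr = sf.read(src)
    peak_db = 20 * np.log10(np.max(np.abs(audio)) + 1e-12)
    if peak_db > -headroom_db:
        gain = 10 ** ((-headroom_db - peak_db) / 20)  # e.g., pull a -1 dBFS peak down to -3
        sf.write(dst, audio * gain, sr)
```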
Most restoration tools allow you to monitor the removed noises, switching between the dregs you're removing and the cleansed results. This is handy for determining if you're overprocessing. If you can hear components of the program material (that is, the dialogue) while monitoring the removed noises, you're damaging the original track and should back off. If you don't have this monitoring option, you can listen to what you've removed by placing the de-clicked/de-crackled soundfile on a track adjacent to the original region, syncing the two, and listening with one of the regions out of phase. If your sync and level are precisely aligned, you'll hear only the removed sounds or distortion harmonics.
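The phase trick is one line of arithmetic. A minimal sketch, assuming both clips are sample-aligned NumPy arrays at matched level:

```python
def removed_noise(original, processed):
    """Sum the original with a polarity-inverted copy of the processed
    clip. What survives the null is exactly what the noise reducer
    removed; audible words in this residue mean you're overprocessing."""
    n = min(len(original), len(processed))
    return original[:n] - processed[:n]
```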
De-crackling shouldn’t be your first line of defense against distortion or clothing rustle. You stand a much better chance of making a proper fix if you go back to the original recordings and find an alternate take, or just give up and loop the line. Even so, it’s good to have a bag of tricks to help you out of impossible situations. The result may not be glorious, but at times mediocre is better than nothing.
Dollies are a particularly ugly source of noise, and the damage they cause tends to go on much longer than run-of-the-mill ticks, clicks, and pops. A moving dolly can spread its evil across several lines of dialogue, so doing something to fix such noises is much more complicated. Before giving up and calling the scene for ADR, try to reconstruct the damaged part of the scene from alternate takes, hoping that the noises don't always fall on the same lines.
Fixing a line ruined by dolly noise is no different from other fixes that call on alternate takes. First find valid alternates, then line them up under the damaged shot, and finally piece together the outtakes to create a viable line. You have to know how to quickly and painlessly locate the other takes from that shot in order to find alternate lines, more room tone options, and the comfort of knowing you’ve checked all possibilities. Read on.
Life isn’t always fair, and sooner or later you’ll run into noises within words—noises you can’t remove with room tone or de-clickers. Then you’ll have to look through the dailies for alternate takes that convey the same energy and character as the original but without the unfortunate noise. At first, going back for alternate takes seems a monumental task, so you invent all sorts of excuses not to do it. Once you realize that it’s not a lot of work, though, you’ll discover a huge new world of possibilities that make your editing not only more professional and effective but much more fun and rewarding.
Before you begin the quest for alternate takes, check your list of wild dialogue cues to see whether you have wild coverage for the scene (see Chapter 10 for more on wild sound). You never know, and it could save you some grief. And if you’re looking for a very small sound, say a new attack or a word said repeatedly in a sentence, you may not need to search beyond the existing clip. Copy it to a work track, pull out the handles, see if you can find the sound you’re seeking, and cut it in. You’re done.
Usually, though, it takes a bit more work.
As a matter of course, dialogue occupies a significant amount of the director's and editor's attention during the picture edit. They've combed through all the takes and made decisions about the best performances, so the majority of the dialogue editing decisions have already been made. So when you try to introduce, suggest, or use alternate takes, you may meet some resistance.
Use an alt that is the best possible match, especially in terms of performance. Use as little of it as possible. Can you just use part of a word? Can you get rid of that bump by using just the beginning or middle or end of the same word from another take?
Where there is a larger problem and you have to swap out a half or whole sentence, then you will need to show this to the director/editor before it goes to the dialogue premix. And if possible, have more than one alt ready to show. This way, it will not be a shock to anyone when you get to the final mix.
Jenny Ward, supervising dialogue editor
Happy Feet Two
The process of finding alternate takes starts at the beginning of the project. Success hangs on getting from the production the items you absolutely must have to safely start any dialogue editing project (see Chapter 7). Let's review:
It's useful to have the continuity and camera reports and whatever relevant notes you can get your hands on. Some dialogue editors think this is too much information, a waste of space, and an indulgence. Let them. When it's three in the morning and you must find an alternate take and all of your normal paper trails have failed you, the script continuity or camera reports may be just what it takes to fix the line and get home to bed.
A film is a living, breathing thing, and each has its own idiosyncratic way of organizing itself. Naming conventions, use of non-alphanumeric characters, inconsistency between the camera, sound, and picture departments, and even choice of field recorder influence how you go about finding the takes you need. As with any relationship, you and the movie will get to know each other over time and you'll learn how best to go about the search. Until then, start with the simplest, straightest path between you and the alternate takes. If the simple route works, great; you saved some time and headache. If this path takes you nowhere, then try other routes. None of these methods are difficult. Here are some options for searching, from simple to not so simple:
Figure 14.12 Original recording soundfiles are found in folders by shoot date.
Once you locate the file, the rest is comparatively easy. It's just editing. As you search for the alternate take that will save the day, you will quickly learn the secrets of this film's filing system.
This is by far the simplest approach. Any DAW has some means of finding and importing files, some systems easier than others. Type the name, choose from the list, and you'll likely see the scene/shot you're looking for. This usually works. However, a number of things can go wrong with a name search, usually caused by problems with the filenames.
More often than you'd like to think, the information you see in the clip does not match the filename. You see Sc4c-tk1 in the clip, but a search yields nothing. Try 04c-t1, or sc4-3-1, or another reasonable sequence. If you tire of guessing, open any folder containing original recordings. Original recordings are almost always organized by shooting day, each of which has its own folder (see Figure 14.12). Note how files are named and you've cracked the code. If more than one location sound team worked on the film, you may have to learn two different systems.
If you can't manage to find the original files by a search, try another method. You can almost always find the shot you're looking for by using the EDLs. Follow these steps and it won't seem scary.
If you're using Pro Tools, the start and end time codes of the highlighted region are displayed at the top of the screen as Start, End, and Length (see Figure 14.13). Other workstations have similar displays. Use the Start and End timecode values to find this take in the EDL.
Figure 14.13 Select the region and note the start and end times. Find these times in the appropriate EDL to determine the source of the soundfile. If you are working with sounds that exist only on tape, you must use this method.
Each CMX 3600 EDL represents four tracks of audio, so you may have several of them for each reel. Normally, the events you care about will fall in the first or second EDL, representing tracks 1–8. (EDLs are discussed in great detail in Chapter 7.)
Always follow the same sequence of EDL columns to track down your alternates:
Figure 14.14 Record In time. This is where the event begins in your timeline. Refer to Figure 14.13 “Start” time.
Figure 14.15 Record Out time. Refer to Figure 14.13 “End” time.
Figure 14.16 Comment field, used in this case to show scene/slate/take information.
Figure 14.18 Source field. This is the name of the soundfile you're looking for.
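If you find yourself doing this lookup dozens of times, it’s mechanical enough to script. Here is a hedged sketch of a CMX 3600 scan in Python; field layouts and comment conventions genuinely vary from list to list, so treat the column positions here as one common case, not gospel.

```python
import re

TC = r"\d{2}:\d{2}:\d{2}:\d{2}"
# event#  reel  track  transition  src-in  src-out  rec-in  rec-out
EVENT = re.compile(rf"^(\d+)\s+(\S+)\s+\S+\s+\S+\s+({TC})\s+({TC})\s+({TC})\s+({TC})")

def find_event(edl_path, record_in):
    """Return the event whose record-in matches the Start time noted in
    the workstation, plus any comment lines (clip name, scene/slate/take)
    that follow it."""
    hit, comments = None, []
    with open(edl_path) as f:
        for line in f:
            m = EVENT.match(line)
            if m:
                if hit:
                    break              # reached the next event; done
                if m.group(5) == record_in:   # record-in column
                    hit = m.groups()
            elif hit and line.startswith("*"):
                comments.append(line.strip())
    return hit, comments

# e.g., find_event("reel1_tracks1-4.edl", "01:02:10:00")
```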
Remember, be flexible. No two films have precisely the same organizational quirks. Some projects will use the fields as shown in this example; others will have their own systems. Examine the raw material, come up with a plan for finding alternate takes, and then be quick in abandoning plans that don't work in favor of those that do.
Once you find the right folder, you can use the sound reports to find out if anything else of interest was recorded for that shot (e.g., room tone, wild lines, PFX, etc.). Location mixers commonly use abbreviations to describe takes (see Table 14.1). This will also tell you if takes are complete or cover only the start or end of a shot.
Some field recorders name their files with a unique codename rather than the expected Sc/Sh/Tk format. There are all sorts of good reasons why a manufacturer chooses to do this, but it can make for problematic searches. One solution is to use a search engine that's smarter than the one in your workstation. Search engines that specialize in sound, such as Soundminer,
AudioFinder, and BaseHead allow you to find soundfiles regardless of naming mistakes, glitches in metadata, and other screw-ups.4 Plus, they enable intelligent auditions, and they allow you to import only a portion of the soundfile. So if you need only a breath from the middle of a four-minute take, there’s no need to import the entire file.
If you don't have a clever file searching application, this problem may seem insurmountable. Remember, though, that sound reports will inevitably come to you as PDF or Excel documents, so you can easily search them in order to gain your bearings. To bridge the gap between clip names and filenames, use the EDLs as in the example above. In order to search across all of the shooting days, you'll need to merge all of the PDF sound report files into one. If you have a decent command of Excel or another spreadsheet application, you can find out what's what. You just can't audition or load the sounds.
Once you find the right shot, listen to the takes in order to figure out your options.
Obviously you want to find an alternate take from the same shot (i.e., camera position) as the original to increase your chances of a decent sound match. However, sometimes you can't find useable alternate takes from the same angle. When that happens, first make a note in your ADR spotting calls but don't give up on the sound rolls just yet. Perhaps you can find the replacement lines within a compatible shot. This is where the sound reports really pay off.
Say you're working on scene 88, an interior scene with two characters, Alfred and Elizabeth. The scene is made up of these shots:
You need to fix a problem in Alfred's close-up lines, but you've already exhausted all takes for shot 88B, the angle used in the film. Where else should you look for material that will save Alfred from looping?
Sometimes it's not even about replacing the whole line. It could be just one word or even one letter of a word. Some dialog editors do a great job in blending alternate takes into the selected take so that it cleans up the noise and at the same time does not change the performance of the actors.
Kunal Rajan, supervising sound editor
The Mourning Hour; The Root of the Problem
You've imported the likely alternates into your session. Now you have to find out which one will make the best fix and then cut it into your track. I find it easiest to move my work tracks directly beneath the track with the damaged region.
To select the best alternate take, try the following procedure. You'll develop your own technique with time, but this isn't a bad way to start.
On rare occasions, an alternate take will have all the right attributes—the speed, mood, and linguistic “music” (cadence, timbre, energy, spirit) of the original. You need only sync it to the original and edit it into the track. However, you usually have to work a bit harder. Often, one part of the line will work well but another will be wrong. There are a number of things you can do to create the perfect replacement.
Listen to the original line—beginning, middle, and end. Describe to yourself its spirit. I often invent a nonsense rhyme to describe the music and energy of each part of a line. Then I play back the nonsense tune in my head as I listen to parts of each potential replacement. By taking the language out of the dialogue, I can better focus on its music. It's not uncommon to combine pieces of two or three or more takes to make a good alternate line.
Tricks and tips for syncing are akin to fishing advice—everyone has the perfect secret, certain to give you great results in the shortest time. In truth, it's a matter of time, experience, and a knack for pattern recognition. Try a few of these pointers and develop them into a technique of your own.
After you've completed these steps, it's time to listen again. It's easy to get caught up in the graphics and begin slipping here, nipping and tucking there, with little regard for content. Remember, you're performing a very delicate operation here, replacing words while respecting the character, mood, focus, and drama of the original line and at the same time worrying about sync. Listen to the original. Close your eyes so that you can visualize the flow of the phrase. Sometimes I see a phrase as colors with varying intensities, modulating with the line. This lava lamp of transposed information helps me categorize the line's technical as well as emotional attributes.
If you're allergic to touchy-feely notions like “visualize the phrase,” please indulge me. First, I find closing my eyes very valuable. It removes the stimulus of the computer monitor so I'm not influenced by visual cues. Second, I find it useful to assign shapes and/or colors to the elements of a phrase or word, as this rich shorthand is often easier to code and remember than the raw sound. As I said, sometimes I reduce the phrase to nonsense sayings to provide a sort of mental MIDI map for interpreting it. Finally, imagining the phrase as colors or shapes is very visceral and helps me quantify its real workings. Of course, you're free to think of all of this as hogwash and use your own tricks.
Once you find the replacements for each section of the line, you're ready to construct the fake. First, however, you need to copy the original line to a safe place, whether muted on the main timeline or on a junk track. There are two kinds of lines you never want to throw away: those you replace with alternates and those you replace with ADR. The reasons are pretty obvious.
All of the preceding is true for ADR as well as alternate take replacements.
Most workstations have plugins for “fitting” replacement lines, whether ADR or alternates, to match your original, but you need to know how they operate before you can make them work for you. It's not uncommon to hear the telltale artifacts that these voice fitters create when used irresponsibly. The trick is to prepare the track before you use the fitter, never to ask the processor to do more than is reasonable, and to honestly listen after its every use. If it sounds weird, it will never get better.
Time-stretch tools (“word-fitting” tools like VocAlign fall into this category) change the duration of an event without changing its pitch. Unlike pitch-shift tools, which behave like variable-speed analogue tape machines by changing the sample rates and then resampling, time stretchers add or subtract samples as needed. If a phrase is too long, they'll remove enough samples to get to the right length. If the phrase is too short, they'll duplicate samples to lengthen the selection.
These tools have to know where to make the splices. If you tell the tool that you can’t tolerate any glitches, it will put most of its splices in the pauses between words or in other very safe places. After all, who’s going to hear the splice where nothing is said? Or sung? Or played? What you end up with are essentially unchanged words with dramatically shortened pauses as well as truncated vowels and sibilants. Thus, if you order a 10 percent length reduction, the line will indeed be 10 percent shorter, but the speed change won’t be consistent. This is especially noticeable with music, where time compression/expansion can result in a “rambling” rhythm section.
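You can see the splice-in-the-pauses bias in a few lines of NumPy. This toy shortener removes samples only from the quietest stretch of the line, which is roughly what a word fitter does at its most glitch-averse setting, and why the pauses shrink while the words stay put. Everything here is illustrative, not any product’s algorithm.

```python
import numpy as np

def shorten_in_pause(audio, remove, win=2048):
    """Cut up to 'remove' samples (at most one window's worth) out of
    the lowest-energy window -- presumably a pause between words."""
    frames = len(audio) // win
    if frames == 0:
        return audio
    rms = np.array([np.sqrt(np.mean(audio[i * win:(i + 1) * win] ** 2))
                    for i in range(frames)])
    q = int(np.argmin(rms)) * win       # start of the quietest window
    cut = min(remove, win)
    return np.concatenate([audio[:q], audio[q + cut:]])
```

Run repeatedly until the target length is reached, this trims pauses first; a real tool at a higher “accuracy” setting would instead distribute splices evenly, content be damned.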
Choose a more “accurate” time change and you're telling the tool that local sync is very important, even at the risk of glitching. In extreme cases, you'll have perfect rhythm because the tool is splicing at mathematically ideal locations, ignoring content. But the glitches resulting from this “damn the torpedoes” approach are often unacceptable.
Here you have to make informed compromises (see Figure 14.20). All time expansion/compression tools provide a way to control the delicate balance between “perfect audio” and “perfect rhythmic consistency.” You just have to figure out what it’s called. Usually there’s a slider called something like “Quality” that indicates how glitch-tolerant the tool should be. The less glitch tolerance (that is, higher “quality”), the worse the rhythmic consistency. The more you force the rhythm to be right (lower “quality”), the greater your chances of running into a glitch. As expected, the default average setting will generally serve you well.
Before you process a region with a time expansion/compression algorithm, make an in-sync copy of it. Here's how this will help you:
As you'll see in Chapter 15, there are processing tools for locally time-stretching a line; that is, comparing the waveform of the original with that of the alternate and manipulating the speed of the alternate to match the reference. Word fitters use, more or less, the technology of time expansion/compression, but they're largely automatic—able to look at small units of time and make very tight adjustments. Still, they have the same real-world limitations that time expansion/compression has: quality versus sync. All of these tools offer some sort of control to enable you to make that choice. Play with them and get used to how they work.
Time expansion/compression and word-fitting tools create new files. You'll have to name these. Do it, and be smart about it. I'll name a section of a shot that I stretched something like “79/04/03 part 1, +6.7%.” A word-fit cue I might name “79/04/03 part1, VOC” (for VocAlign). If you don't sensibly name your new files, you may eventually regret it. However, these complex clip-naming schemes are meant to make life easier, not to burden you with extra chores. It's up to you to find a reasonable balance between utility and neurosis when figuring out how to manage the many files that result from offline processing.
You'll find a full treatment of syncing and editing alternate lines and ADR in Chapter 15. Here I'll just briefly outline the steps.
− Get the length right. The best way is to try editorial nips and tucks to adjust the pauses. Do this before you begin any word fit or time-stretch processing. You can shorten and lengthen during pauses, but if you lengthen a bit of “silence,” make sure you don't introduce a loop by repeating a tick, click, smack, or other recognizable noise.
− Don't be afraid of cutting in the middle of a word. Contrary to common sense, you can actually trim in the middle of certain word sounds. Refer to “Where to Splice” in Chapter 11 to remind you how to use consonants and sibilants to splice within words.
− Do as much manual editing as you can before resorting to the length-changing tools. The easier you make life for the processor, the better results you'll achieve.
People interrupt each other all the time. Sometimes out of excitement, sometimes out of anger or arrogance, actors are always stepping on each other's lines, and such “overlaps” cause ceaseless headaches. In Chapter 11 we looked at problems caused when the sounds of two people, on two microphones, in the same shot overlap. Let's return to our friends Alfred and Elizabeth and see what can happen when people in different shots step on each other. Here, again, is the list of shots for scene 88:
In an otherwise outstanding take of 88A, Elizabeth interrupts the end of Alfred's sentence. Back in the picture editing room, the editor and director piece together a back-slapping spat between our two characters. The picture editor includes Elizabeth's interruption on Alfred's track, cutting to Elizabeth at the first rhythmic pause. No one but you notices that Elizabeth's first four words are off-mic, having come from Alfred's track. What do you do? You announce that it must be fixed, either with ADR or alternate material.
Overlaps put you in a bad position. Often the director and the editor won't notice them while editing because they're so used to hearing the cut. You're the only one who notices, so you'll be stuck trying to justify the extra ADR lines or the time spent rooting around in the originals to find the replacement material. Still, if you ever want to show your face at the sound editors' sports bar, you can't let it go. Overlaps with off-mic dialogue aren't acceptable.
When Elizabeth (off-mic) interrupted Alfred (on-mic), she ruined the last few words of both of their lines. The end of Alfred's otherwise pristine line is now corrupted by an ill-defined mess, so it must be replaced from alternates. (Refer to Figures 11.35 and 11.36.) Let's hope that Elizabeth won't jump the gun in other 88A takes. We also have to replace the head of Elizabeth's line (88B) so that she'll have a clean, steady attack. Again, we have to rub our lucky rabbit's foot in hope that there'll be a well-acted alternate 88C from which we can steal Lizzy's first few words. If alternates don't help, you'll have to call both characters for ADR on the lines in question. But since you'll face the problem of matching the ADR into the production lines, it's in everyone's interest to use alternates to fix the problem.
When shooting a fast-paced comedy in which the characters regularly step on each other's lines, a location mixer may use a single boom plus a radio mic on each actor. If this is recorded on a multichannel hard disk recorder, you stand a far better chance of sorting out the overlap transitions. However, even if you have nice, tight radio mic tracks of each character, you'll have to be careful of the off-mic contamination from one of them. There's no free lunch.
It's not unusual for actors to slip in their diction, slur a word, or swallow a syllable. Often you can fix these problems the same way you remove noises—go back to the alternate takes and find a better word or phrase. Of course, you'll copy and put aside the original line since that little “slip” may turn out to be the reason the director chose the shot. Also, when replacing a line because of an actor's problems, you'll keep it to the bare minimum so that the spirit of the line is unaltered. Plus, when it comes to “improving” acting, you're on thin ice. The director carefully chose this actor, whom she then directed. Together with the editor, she selected her favorite reading of the line. Be gentle when suggesting that a line is not up to par.
Location mixers go to great lengths to avoid wind buffeting distortion. They protect the mic from the wind using shock mounts and screens with all sorts of lovely names (including “zeppelin,” “wind jammer,” “woolly,” and “dead cat”). Regardless, it's certain that sooner or later you'll curse the location mixer for “not noticing” the wind buffeting the mic on the Siberian blizzard wide shot (while you were in your comfy cutting room).
Often you can tame this very low-frequency distortion in the mix with a high-pass filter set to something like 60 Hz. As with all filtering issues, you should talk with the rerecording mixer or the supervising sound editor about how to proceed. If you're really lucky, the mix room will be available and you can listen to the scene in the proper environment. You can also do a poor man's test by running the track through a high-pass filter in your editing room and playing with cut-off frequencies between 60 and 100 Hz. Keep in mind that wind distortion will always sound less severe in your cutting room than on the dubbing stage, so don't get too excited by the results.
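The poor man’s test is a one-filter experiment. Here is a sketch with SciPy, assuming mono audio and a sample rate in hand; the cutoff and slope are starting points to play with, not recommendations.

```python
from scipy.signal import butter, sosfilt

def highpass_test(audio, sr, cutoff=80.0, order=4):
    """Steep high-pass below the dialogue band, for auditioning wind
    rumble removal. Try cutoffs between 60 and 100 Hz, and keep the
    unfiltered original parked on a muted track."""
    sos = butter(order, cutoff, btype="highpass", fs=sr, output="sos")
    return sosfilt(sos, audio)
```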
If, God forbid, you decide to filter the tracks yourself in the cutting room, you must make a copy of the fully edited track before filtering and put the original on an adjacent track. Many a time I thought I was doing my mixer a favor by “helping” the track a bit with a high-pass filter, only to have the mixer stop, turn to me, and ask, “What were you thinking?” What sounded like a vast improvement in my little room was now thin and cheap. Plus, the energy from the wind noise was still evident. The mixer gently reminded me of the division of labor. “You cut,” he said, “I make it sound good.” Thankfully, I had stashed the original (completely edited) onto a nearby track, so with little effort the Empire was saved. Still, who needs the humiliation?
So what should you do about wind distortion? I suggest you build two parallel tracks: the original—fully edited and faded and cleaned of nonwind noises, but still containing the wind buffeting; and an alternate version assembled from other takes, reasonably free of the wind noises. Mute your least favorite version. This way you're prepared for anything that might happen in the mix. If the mixer can remove the wind noise from the original take without causing undue damage to the natural low frequency, great. If not, you're prepared with a wind-free alternate. Either way you don't get yelled at.
Like wind distortion, shock mount noises appear as unwanted low-frequency sounds. But unlike wind, which usually lasts a long time, they're almost always very brief. Like dolly noise, which occurs with camera motion, shock mount noise is usually tied to a moving fishpole (boom). This makes it easier to spot.
You can often succeed in removing shock mount noise with very localized high-pass filtering, usually up to 80 Hz or so. As with any filtering you perform in the cutting room, save a copy of the original. Don't filter the entire clip but just the small sections corrupted by the low-frequency noise. If possible, listen to the tests of your filtering in the mix room, so you get an idea of how your fixes translate in the big room. Here you'll learn if you under- or overfiltered, and you'll hear any artifacts you couldn't hear in the edit room.
Of course, the right way to fix shock mount noise is, yes, to find replacements for the damaged word in the outtakes. This way you don't risk any surprises in the mix.
It was dolly noise over dialogue that started the discussion on using alternate takes to repair damaged lines. By now it should be clear how to piece together a new sentence from fragments of other takes. What makes dolly-related damage interesting is the fact that the noise source is always changing, so you usually must line up all reasonable alternates and hope that the annoying cry of the dolly occurs at slightly different places on each one. You end up constructing an entirely new line from the best moments of all the takes. If this doesn't work, you'll have to rerecord the line.
You have a respectable arsenal for your battle against transient noises: room tone, impulsive noise reducers, pencils that can redraw a waveform, and knowledge of how to wrangle alternate material to your advantage. Still, there are sound problems that seem impossible to fix, especially if there was only one “perfect” performance, or only one take. A cell phone ringing, a truck backing up, a cough, or a car horn; even unfortunate noises coming from an actor's mouth—any one of these can wreck a unique recording.
Spectrogram editing allows you to locate an unwanted sound embedded within a signal and remove it with amazing precision. The Cedar Retouch spectrogram in Figure 14.21 shows a car horn sounding during a church concert. The horn’s fundamental and harmonics can be seen as short horizontal lines, very evenly spaced—a telltale sign of a harmonic sound. One by one, components of the offender are highlighted and removed using the Patch tool. In Figure 14.22 the horn has been removed by selectively reconstructing the sound beneath it.
Just as quickly as you learn how to “hear” the sounds displayed in DAW waveforms, so too can you catch on to the language of spectrograms, enabling
Figure 14.22 The car horn has been removed. The editor used the Patch tool in Cedar Retouch to remove the noise and replace it with sound drawn from a different part of the audio spectrum.
Figure 14.23 A bird call covers some dialogue, seen in iZotope RX 2 Spectral Repair. Thanks to spectrogram editing, transient noises such as those shown in this image and in Figure 14.21 can often safely be removed, reducing the need for ADR.
you to differentiate friend from foe. Figure 14.23 shows a spectrogram in iZotope RX 2’s Spectral Repair module. The example consists of a few words corrupted by birdcalls. The horizontal waveform display reveals two sibilants—easy to spot due to their almond shape and relatively high amplitude. These sibilants, one on the far left and the other in the middle of the image, are also easily seen in the spectrogram. They contain a great deal of relatively broadband energy, so their frequency (vertical) footprint is quite impressive; you can’t miss them. Between these sibilants in Figure 14.23 are towers of horizontal lines, each beginning (on the left side of the noise) with a strong vertical element. These are the birdcalls, with their harsh attacks and notable harmonic regularities. Fortunately for us, in this example these cries come with amazing regularity, so it’s easy to distinguish (desirable) dialogue from (undesirable) birdcalls.
Once you identify the fundamental and harmonics of the invasive sound, highlight each of them using a drawing tool and then perform the repair. Each software manufacturer has its own tools and choices for repairing sound in the selected area. Some selectively attenuate, others resynthesize. Some do both. What's right for any specific problem depends on the nature of the burst of noise and of the underlying signal—yet another good reason to read the manual. As with all noise reduction tools, you can easily overdo it. No point removing the birds at the expense of the voice.
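For intuition, here is what the simplest possible “patch” looks like in code: an STFT, a rectangle of time and frequency covering the offender, attenuation, and resynthesis. Commercial tools go much further, interpolating replacement detail from the surrounding material rather than merely ducking the box, but the workflow is recognizably this. Names and parameters are illustrative.

```python
import numpy as np
from scipy.signal import stft, istft

def attenuate_patch(audio, sr, t0, t1, f0, f1, reduction_db=24.0):
    """Attenuate one time-frequency box (say, a birdcall between two
    sibilants, t in seconds, f in Hz) and resynthesize the audio."""
    f, t, Z = stft(audio, fs=sr, nperseg=1024)
    box = ((t >= t0) & (t <= t1))[None, :] & ((f >= f0) & (f <= f1))[:, None]
    Z = np.where(box, Z * 10 ** (-reduction_db / 20), Z)
    _, out = istft(Z, fs=sr, nperseg=1024)
    return out
```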
As long as films have had sound they’ve had noise: buzzing lights, grinding cameras, growling environmental noise, and any number of other hums, hisses, and roars. Sound engineers have continuously sought to vanquish these background sounds—with mixed results. In the years between vacuum tube gates and software plugins, two classic noise reduction tools played dominant roles in quieting film sound—and in setting the stage for today’s processors: the Urei 565 filter set, affectionately known as the “Little Dipper,” and the Dolby Cat. No. 43 noise reduction unit.
The Little Dipper was introduced at the 1970 AES show in New York, and addressed a problem desperately in need of a fix: the removal of very specific steady-state noises (see Figure 14.24). With its very tight band filters and 18 dB-per-octave high- and low-pass filters, the Little Dipper became famous for attenuating camera sound and steady band-specific noises, and for years was a “must have” for film mixing theaters.5
Figure 14.24 The Urei 565 filter set. The “Little Dipper.”
Dolby Laboratories' Cat. 43 playback-only background noise suppressor, which appeared in 1978, was an offshoot of the company's 360 Series second-generation A-type noise reduction system (see Figure 14.25). This, the first widely distributed broadband noise reduction device, was not a filter, but rather a four-band compander with a master threshold control. In ways never before possible, the Cat. 43 enabled rerecording mixers to suppress broadband noises and rescue dialogue from the background muck.
Just as DAT was a bridge between old and new worlds of film sound, so were the Little Dipper and the Cat. 43 instrumental in moving us from where we were to where we are.
Traditionally, noise reduction is done in the dialogue premix. You edit, the mixer mixes. But as technology improves, plugins get cheaper and better, budgets degenerate, and mixes get shorter, you may find yourself performing noise reduction in your cutting room. It's not necessarily a positive trend, but you should know how to deal with it.
Noise reduction can miraculously save a scene. Or it can make your tracks sound like a cell phone. Here are the secrets to nursing your tracks through noise reduction:
Discuss each noise problem with the mixer or supervising sound editor, and remember to ask the following questions:
Even an abbreviated version of a meeting like this will make for an enormously more productive mix. And you'll avoid those damning looks from the mixer during the dialogue premix that say “I can't believe you did that!”
Many broadband noise reduction plugins have enormous latencies, so you can't use them as real-time processors amid your other tracks. If you did, your processed tracks would be miserably out of sync. As computers get faster and plugins smarter, you'll increasingly find a “real time” mode that allows you to use the processor as a track insert. Before committing to this plan, verify that your plugin/computer combination can indeed handle the workflow. And don't forget to check the system in the mix room. Real-time hardware processors, such as the Cedar DNS3000, and plugins like the Waves WNS offer broadband processing with no discernible latency.
If you can't or don't use noise reduction as an insert, you must process the clips and create new soundfiles offline—one clip at a time. However onerous this task seems, there is an upside. Since you are processing each clip individually, you can pay special—and better—attention to each. This takes more time, but can yield much better results. However, replacing old clips with new can get you into serious trouble if you don't maintain order and keep track of what you've done.
Before offline processing, you have to make a copy of the original region. Then, since any AudioSuite (or similar) operation will create a new soundfile without handles,
Figure 14.26 The highlighted region of this soundfile needs to be processed. Any AudioSuite operation will create a new region without handles.
Figure 14.27 Before processing, delete the fades and open the handles. This will give you greater editing and fade options when working with the soon-to-be-created region.
open up the beginning and ending handles of the region before processing (see Figures 14.26 and 14.27). Just because the edit works well before noise reduction doesn't mean that the transition will be effective once the tracks are cleaned. And previously unheard background sounds (such as “Cut!”) may emerge after cleaning. Similarly, track cleaning may alter the balance between adjacent shots, necessitating a longer fade. Pulling out an extra bit of handle means you'll have to redo the fade after noise reduction is complete, but you'll be left with a better set of options to play with.6
One reason noise reduction so often turns out miserably is that editors don't understand the tools. This is one of the key arguments that mixers have against editors reducing noise in the editing room rather than in the mix. You may have an impressive selection of tools at your disposal, but use them incorrectly and you'll inflict damage. Typically, there are three types of tools for managing background noise: filters and equalizers for harmonic noises such as hum, buzz, and rumble; interpolation processors (declickers and decracklers) for clicks and other impulsive sounds; and broadband denoisers for steady, random noise such as hiss.
Most botched, artifact-laden noise reduction jobs happen during broadband processing. Because processors that remove broadband noises often have names like “Z-Noise,” “DeNoise,” “Denoiser,” or “Noise Suppressor,” it's not irrational to think that this is the place to head with all your noise problems. The result: having failed to use filters or interpolation processors to remove harmonic noises or unwanted transients, you end up asking too much of the broadband processor, which repays you with a signature “I've been noise reduced” sound. The trick is to identify your particular noise problem and then apply the correct processors in the right sequence.
The following paragraphs describe a classic multipass noise reduction sequence. Once you understand the whys and hows of this plan (and have read the manuals), you should experiment with different sequences to see what works for a particular noise. The classic plan, in short: remove harmonic noises with filters, tame clicks and crackle with interpolators, and only then turn a broadband processor loose on what remains.
Many modern noise reduction plugins can run several processes at the same time, which means you can control and process harmonic and broadband background noise simultaneously. All manufacturers, however, recommend that you remove impulsive noises before tackling steady-state broadband sounds.7
Create an FFT or spectrogram display of your noise and look at the low-frequency information (below 500 Hz), as shown in Figure 14.28.8 If you're chasing a harmonic problem like a rumble, hum, or buzz, you'll notice a distinct pattern. Look for the lowest-frequency peak—that's the fundamental frequency of the noise. You should easily see harmonics occurring at multiples of the fundamental frequency.
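To see what this hunt looks like under the hood, here's a minimal Python sketch, assuming a mono NumPy array and SciPy's peak finder; the function name, prominence threshold, and 500 Hz limit are illustrative choices, not prescriptions.

```python
# Find the fundamental of a hum: the strongest FFT peak below 500 Hz.
# Hypothetical sketch; `noise` is a mono NumPy array sampled at `sr` Hz.
import numpy as np
from scipy.signal import find_peaks

def find_fundamental(noise, sr, f_max=500.0):
    """Return the strongest spectral peak below f_max, in Hz."""
    spectrum = np.abs(np.fft.rfft(noise * np.hanning(len(noise))))
    freqs = np.fft.rfftfreq(len(noise), d=1.0 / sr)
    low = freqs < f_max                                  # look below f_max only
    peaks, _ = find_peaks(spectrum[low], prominence=0.1 * spectrum[low].max())
    return freqs[low][peaks[np.argmax(spectrum[low][peaks])]]

# A 58.91 Hz fundamental predicts harmonics near its multiples:
# fund = find_fundamental(noise, sr)
# print([round(fund * n, 2) for n in range(1, 11)])  # up to the 10th harmonic
```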
Write down the center frequencies of the fundamental and its harmonics, up to the tenth harmonic (or until you can't stand it any longer). Note the approximate “height” that each harmonic rises above the noise floor as well as its “width.” You'll use the width to calculate the Q for each filter and the height to determine the cut value (see Figure 14.29). Write all of this down
Figure 14.28 An FFT display created with soundBlade from Sonic Studio. This sample shows a classic North American hum, with peaks at approximately 30, 60, 120, 240 Hz, and so on. The frequency callout (left) reveals that the 60 Hz fundamental measures 58.91 Hz, indicating that the original analogue recording was at some point transferred off-speed.
Figure 14.29 A Waves PAZ Frequency Analyzer displaying center frequency and amplitude of a harmonic.
or enter it into a spreadsheet (Figure 14.30), which will do the math for you if you set up a few formulas.
Use a multiband EQ to create a deep-cut filter for the fundamental and for each harmonic (see Figure 14.31). For each filter, enter center frequency and calculate Q (center frequency ÷ bandwidth). Set the attenuation to a couple of dBs less than the height to which each specific harmonic rose above the visual noise floor on the FFT display. You'll end up with several deep, narrow
Figure 14.31 Once you determine the center frequency, gain, and Q for the principal harmonics, you can use a multiband EQ to remove the harmonic noise. In this case, a Waves Q10 is set to the parameters shown in the spreadsheet in Figure 14.30.
filters. These aren't notch filters because they're not infinitely deep; rather, they remove only what's necessary to reduce the noise back to the level of the existing noise floor. This should effectively eliminate hum, buzz, and rumble. If not, extend the filters further to the right to eliminate harmonics at higher frequencies and recheck the display to make sure you accurately measured the components of the noise.
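If you're curious what those deep, narrow cuts amount to in code, here's a minimal Python sketch built on the widely published “Audio EQ Cookbook” peaking-filter formulas; the Q formula and the couple-of-dB offset come from the description above, while the function names and example parameters are hypothetical stand-ins for a real multiband EQ.

```python
# Deep-cut (not notch) filters at the fundamental and its harmonics,
# using RBJ "Audio EQ Cookbook" peaking-filter coefficients.
import numpy as np
from scipy.signal import lfilter

def peaking_cut(f0, q, cut_db, sr):
    """Biquad coefficients for a narrow cut of cut_db at f0 Hz."""
    a = 10.0 ** (-abs(cut_db) / 40.0)           # negative gain = cut
    w0 = 2.0 * np.pi * f0 / sr
    alpha = np.sin(w0) / (2.0 * q)
    b = np.array([1 + alpha * a, -2 * np.cos(w0), 1 - alpha * a])
    den = np.array([1 + alpha / a, -2 * np.cos(w0), 1 - alpha / a])
    return b / den[0], den / den[0]

def remove_harmonics(signal, sr, fundamental, heights_db, bandwidths_hz):
    """Cascade one cut per harmonic; Q = center frequency / bandwidth."""
    out = signal
    for n, (height, bw) in enumerate(zip(heights_db, bandwidths_hz), start=1):
        f0 = fundamental * n                    # nth harmonic
        q = f0 / bw                             # the Q formula from the text
        b, den = peaking_cut(f0, q, height - 2.0, sr)  # a couple dB shy of height
        out = lfilter(b, den, out)
    return out
```

Because the cuts bottom out at the noise floor rather than plunging to minus infinity, the dialogue sitting above them survives largely intact.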
If all this work seems silly, you can use a buzz removal processor. At their simplest, these filters find—or allow you to define—a fundamental frequency, after which they calculate the harmonics and create steep, narrow cuts, as seen in Figure 14.32.
Earlier in this chapter you saw how to tone down short-term clicks, crackles, and rustles using interpolation processors, often called declickers and decracklers. Sometimes you can succeed in taming long sections of impulsive damage by using these tools as inserts. If you do choose to remove clicks in real time, select a processor with very little latency.
Many manufacturers sell broadband noise reduction processors. They can make a hero or a fool out of you. The key to using them successfully is understanding what they do and what they don't do.
Broadband noise reduction devices—whether they work in real time or in offline processes—first take a sample of “pure noise,” ideally from a pause free of any valid signal.9 This serves as the blueprint of what's wrong with the sound. Next they convert the noise sample and the signal from the time domain to the frequency domain by creating ever-updated FFTs. The FFT of the noise sample is divided into many narrow frequency bins, in which the noise is reduced to a formula. Until recently, broadband processors couldn't attenuate tonal and broadband noise at the same time; this is the reason for the initial filtering pass in the classic noise reduction workflow. It's now often possible, however, to address both at once (see Figure 14.33).
When the signal is played through the processor it, too, is assessed in the frequency domain. At each of the bins, the formula for the signal is compared to that of the noise sample. If the match is sufficient, then the sound within that bin is attenuated by a user-controlled amount. If there's no correlation between the noise formula and the incoming signal, no attenuation occurs in that bin since the dissimilar signal is likely valid audio rather than noise. This process is repeated for all of the frequency bins.
At this point there are usually control parameters for threshold and reduction. As would be expected, threshold determines the sound level at which processing begins; attenuation (or “reduction”) dictates what's done within each bin flagged as “noise.” If these settings are too aggressive, you'll hear very obvious artifacts. Back off first on the attenuation and then on the threshold (see Figure 14.34).
Figure 14.34 NoNOISE broadband denoiser control panel. Threshold and Attenuation govern where processing begins and by what amount. Sharpness and Bandwidth control how sound is recreated after noise reduction. Cutoffs determine what parts of the spectrum are not processed. These controls are common to most broadband denoisers.
The signal must now be recorrelated into a “normal” time-domain sound, and you usually have some control over this. “Sharpness” controls the steepness of the attenuation slopes: the steeper the slope during the recombination process, the more effective the noise reduction but the “edgier” the sound. Set the sharpness too high and you'll hear digital “swimming” artifacts, often called “bird chirping.” There's usually a second control, called something like “sharing” or “bandwidth,” that determines how much adjacent bins are blended during the recorrelation. The higher this setting, the warmer (but perhaps less articulate) the sound.
Low-frequency (LF) cutoff, if available on your processor, isn't a high-pass filter, as you might think. Rather, it dictates the frequency below which there's no processing. If you're attacking traffic, set the LF cutoff to zero. If all you're fighting is hiss, set it to 2000 Hz or higher, and you won't risk damaging anything in your source below that frequency. Many processors also have a high-frequency cutoff, which normally defaults to the Nyquist frequency. Change this setting if you're processing only low frequencies and want to leave the top of the spectrum unaffected.
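Pulling the last few paragraphs together, here's a minimal Python sketch of a bin-by-bin spectral gate with threshold, attenuation, and cutoff controls. It's a conceptual stand-in for processors like NoNOISE, not anyone's actual algorithm, and every parameter name here is hypothetical.

```python
# A toy broadband denoiser: learn a noise "blueprint," then attenuate
# STFT bins that fail to rise above it. Hypothetical sketch only.
import numpy as np
from scipy.signal import stft, istft

def spectral_gate(signal, noise_sample, sr, threshold=2.0, atten_db=12.0,
                  lo_cut=0.0, hi_cut=None, nfft=2048):
    """Attenuate bins whose magnitude resembles the noise signature."""
    _, _, noise_spec = stft(noise_sample, fs=sr, nperseg=nfft)
    profile = np.abs(noise_spec).mean(axis=1)        # blueprint per bin

    freqs, _, spec = stft(signal, fs=sr, nperseg=nfft)
    gain = np.ones(spec.shape)
    # Bins that don't rise clear of the noise profile get attenuated.
    gain[np.abs(spec) < threshold * profile[:, None]] = 10.0 ** (-atten_db / 20.0)

    # Cutoffs: leave everything outside [lo_cut, hi_cut] unprocessed.
    exempt = (freqs < lo_cut) | (freqs > (hi_cut if hi_cut else freqs[-1]))
    gain[exempt, :] = 1.0

    _, cleaned = istft(spec * gain, fs=sr, nperseg=nfft)
    return cleaned
```

Push `threshold` or `atten_db` too far and you'll hear exactly the artifacts described above; the cure, as always, is to back off.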
All broadband noise removers are shackled by a basic trade-off: the greater the resolution in the frequency domain, the worse the resolution in the time domain. There's no way around it. Clever manufacturers let you choose where to strike the balance between frequency resolution and time resolution, giving you a bit of control.
Very few background noises have but one component. A typical background noise will have harmonic elements from air conditioners or other machinery, ticking noises—from microphones or cables if not from speech—and broadband, random elements. Labeling a straightforward noise like air conditioning as “simple” is misleading. In addition to the obvious hissing air (solution = broadband denoise), there'll be harmonic sounds from the motor (solution = notch filters) and perhaps clicking from ineffective isolation springs or other causes (solution = declick interpolation). The answer is simple: don't run straight for the broadband denoiser. Think about the sources of the noise and plan your processing accordingly, as the sketch below illustrates. And don't think that once you find the “perfect” sequence you're off the hook: each noise reduction problem warrants its own solution.
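As a hypothetical illustration of that planning, here's how the sketches from earlier in this section might be chained for the air-conditioner example; declick() below is a crude median-filter stand-in for a real interpolation declicker, and every parameter value is invented.

```python
# Plan the attack: impulsive, then harmonic, then broadband.
from scipy.signal import medfilt

def declick(x):
    """Crude stand-in for an interpolation declicker: a short median
    filter knocks down single-sample ticks (real tools are far smarter)."""
    return medfilt(x, kernel_size=5)

def clean_air_conditioner(raw, noise_sample, sr):
    ticks_gone = declick(raw)                            # isolation-spring clicks
    hum_gone = remove_harmonics(ticks_gone, sr, fundamental=60.0,
                                heights_db=[18, 14, 10],
                                bandwidths_hz=[4, 6, 8])  # motor harmonics
    return spectral_gate(hum_gone, noise_sample, sr)     # the hissing air, last
```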
Aside from picking the wrong noise reduction tool for the job, the most common way to bungle the process is not knowing when to quit. When you repeatedly listen to a noise, trying to get an even cleaner signal, you inevitably lose touch with the audio you're processing. Almost certainly you'll overprocess the tracks.
The antidote is annoyingly obvious: process less. You can always do more in the mix. Also, when you're happy with the noise reduction, leave it. Do something else. Take a walk and rest your ears. Listen to it later with a fresh ear to see what you think. If it passes this delayed listening test, it's probably acceptable.
A running battle exists between editors and rerecording mixers about processing. While there's something to be said for tracks arriving at the mix relatively clean, it's rare that an editor in a cutting room has the experience, tools, or acoustics to properly process them. Few things are more humiliating than listening to a mixer repeatedly say “If only I could get my hands on the original tracks.” In my experience, it's rare for a mixer to like the processing I've done in the cutting room, and more often than not my charmingly cooked tracks are greeted with disdain.
General hiss and background ambience reduction should ONLY be done during premix and final mix when the results can be judged against other sound elements. The vacuum of a dialog edit session is not a place to make broad, overall processing choices. Spot processing, like painting out car horns or distortions, can be done during the dialog edit when time permits but it must be completely seamless.
Richard Fairbanks, rerecording mixer
Confessionsofa Ex-Doofus-ItchyFooted Mutha
Sometimes it's appropriate for the dialogue editor to process certain tracks in the cutting room. Noise reduction, as we just saw, can be one such case. Another instance may be scenes in which one side of the conversation is wildly out of balance (for instance, a nearby truck was idling when one side of the scene was shot). In such cases, it's difficult even to edit the scene without some sort of processing.
Follow these rules and you can peacefully bring (some) processed tracks to the mix:
Figure 14.35 If you must perform noise reduction on a region, first make a copy. Mute this copy or place it onto a Junk track. After processing, add a suffix to the new region name that reminds you what processing you used. In this case, a 100 Hz high-pass filter was applied, as is reflected in the new region name.
Each time you process a region, you create a new file. When you name it, include information about the processing you did (see Figure 14.35).
The dialogue editor has become more detail-oriented, often removing some ticks previously left for the mixer. As expectations have increased and budgets decreased, editors have had to improve their speed and efficiency. Most have done an amazing job adapting.
Larry Benjamin, rerecording mixer
Act of Valor; The Good Wife
___________
1. Ely William Hill (1887–1962), “My Wife and My Mother-in-Law,” Puck, 6 November 1915.
2. See www.waves.com, www.sonicstudio.com, www.izotope.com, www.cedaraudio.com, www.sonnoxplugins.com.
3. Despite the mantra of the music recording world, “Keep the levels as hot as possible” (especially with 16-bit material), you must provide headroom for signal processing, since it will likely raise peak levels. Give your processors some extra bits with which to “think” and they'll perform better.
4. See www.soundminer.com, www.icedaudio.com, www.baseheadinc.com.
5. Will Shanks, “Analog Obsession: Classic Filters: The UA 550-A and UREI 565 Little Dipper,” Universal Audio WebZine, Volume 3, Number 3, May 2005. (www.uaudio.com/webzine/2005/may/text/content4.html).
6. Some workstations automatically create handles when performing offline processing. Consult your user manual.
7. There are many strategies for noise reduction. Some engineers suggest another sequence: reduction of clipping and overloads; removal of clicks and crackle; azimuth alignment; removal of stationary noise such as hum and broadband noise; removal of remaining unwanted sounds with a spectrograph editor. The sequence depends on the nature of the noise and the processors available to you.
8. An FFT, or Fast Fourier Transform, is a method of analyzing a signal in the frequency domain rather than in the time domain in which we live. FFTs (and occasionally Discrete Wavelet Transforms—DWTs) are the key building blocks in modern digital signal processing. Most DSP software manufacturers prefer FFT over DWT for audio applications due to wavelets' poor separation of frequency bands and difficulty in implementation.
9. Many broadband noise suppressors now allow you to create a noise signature “on the fly.” This is true—by necessity—of realtime processors such as the Cedar DNS series or Waves WNS. But if you are performing offline processing, it's always better to find a representative “pure” noise sample.