CHAPTER 8

Editing Digital Audio Content

In This Chapter

•  What is the art of audio editing?

•  How do you edit and process digital audio?

•  How do you go from editing to a complete podcast?

You’ve planned your podcast, structuring it around a learning objective and with a specific listener persona in mind. You’ve also recorded and collected audio elements such as monologues, interviews, and panel discussions. Now it’s time to assemble these disparate parts into a complete learning podcast that you can publish and distribute to your learners. Welcome to audio editing.

Audio editing used to be a physically intensive process. You had to cut the bits of audio you didn’t want out of the tape. You had to mark the specific location of that sentence on the tape with a grease pencil and then use a razor blade and cutting block to cut it out. After that, you had to join the tape back together using special sticky tape. Today the process is entirely digitized. You locate the start and finish of the segments you want to cut by looking at a digital image of the waveform instead of marking the tape with a grease pencil.

Traditionally, editing was the process of cutting out unwanted material—such as mistakes, repeated words, or parts of an interview that went off track—from your audio. You might also break up an interview and change its order. In addition, you would edit music and sound effects.

Today, audio editing software does more than just cut audio. It offers functions not traditionally part of editing, such as adding special effects and creating multitrack packages. Audio editing is now more a process of creating digital packages by cutting and pasting elements of sound, adding effects to correct or improve sound, and mixing multiple audio elements together.

Well-edited audio does not sound as if it has been edited. Spoken word content flows like a natural conversation, and edited music has no interruption to its beat or rhythm. Listeners should be unable to identify where words, phrases, or even large sections of an interview or monologue have been removed. The best way to achieve a natural flow is to rely on your ears and keep listening to your edits to be sure they flow smoothly. It can be tempting, once you get used to recognizing the shape of sounds appearing as waveforms, to simply use the visual representation of the waveform to identify where a sound starts and stops. But the waveform often doesn’t capture the natural breath and intonation of the speaker.

Editing audio, as well as video and text, raises a number of ethical issues. You have the power to change what someone appears to be saying and if your skills are well developed, no one will ever know. Political news media has landed itself in numerous controversies in which key elements of a soundbite were left on the cutting room floor, thus contributing to the rising level of distrust in modern journalism. In reaction to this, some have suggested news media—and by extension, learning professionals—should not edit interviews. This is a knee-jerk reaction: It was not the editing that was bad in these cases, but the ethics. When you edit, it’s important to do so in a way that makes the content easier to understand without undermining its accuracy. As a general rule, you should also tell your interviewees they will be edited for brevity and relevance.

How do you decide what to cut from an interview and what to leave in? The answer is to go back to your learning objective and determine whether the comment or sequence of comments in question leads the learner to the learning objective. If not, take it out. Good editors will ruthlessly cut material. But they won’t make the decision based on whether they thought the comments were nice or right. They’ll decide based on a clear understanding of the learning objective.

Let’s walk through the editing process. First we’ll look at cutting and splicing your audio—that’s the traditional term for audio editing. Then we’ll look at which effects can make your audio easier to hear. And we’ll finish with multitracking. The good news is that digital audio editing is very easy to learn and with practice you can quickly improve your skills. If you know how to edit the written word using a word processor, you’ll find editing audio easy and fun to learn.

Editing Audio

The technical process of audio editing is like editing a document in a word processor. You might read a sentence, select a word you think is redundant, and then delete it. Or you may cut and paste a sentence to another section of the document. When you edit audio, instead of selecting typed words, you are selecting a visual representation of the sound wave that was picked up in the recording and then deleting or cutting and pasting it. You’ll remember from chapter 7 that digital audio appears in editing programs as a waveform. Each blob on the waveform is a sound that may be a word, sound effect, or piece of music. Silence is represented by a thin line.

Figure 8-1 shows what a sound waveform looks like in the freely available audio editing software Audacity.

Figure 8-1. Example of Waveform in Audio Editing Software

You will manipulate this waveform using a number of controls. But before you can use these controls, you need to select the appropriate mode. For editing, you need to be in selector mode.

It’s important to familiarize yourself with the practical editing tools in your audio editing program. The most important ones are cut, copy, and paste. When editing, you will also need to use the zoom controls to zoom in and out of the waveform.

Once you have an understanding of the basic editing controls, it’s time to edit. To do so you need to listen to your audio and determine what to cut. Position your cursor at the beginning of the track and either hit the space bar or click on the play button. You’ll see the cursor start moving to the right. As the cursor passes over the waveform, it will appear above the blob representing the sound you can hear. For example, when you hear the word hello, the cursor will be passing over the blob representing hello. If you hit the space bar again to stop playing, or use the stop button at the top of your screen, playback will stop and the cursor will return to the position where it started. If you want to remove the word hello, click and drag your mouse over the blob representing it so the whole word is selected (Figure 8-2).

Figure 8-2. Example Showing Selected Audio Segment

With the blob selected, you can delete it by clicking on the scissor icon at the top of the screen. You can also cut or copy and paste it if you wish.

It is best to listen to what you’re about to cut to ensure that you’ve selected the right element and cut all of it. You can listen by pressing the space bar or clicking on the play button at the top of the screen. You don’t want to cut just the -llo part of hello and leave the he-. You want to remove the whole word. Cutting a word or sound in half makes it sound very unnatural and distracts the learner’s attention away from the content. Once you have deleted your selection, play it back to make sure it sounds natural. If it doesn’t, undo the edit and try again.
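Everything described above happens in the editing program’s interface, but the underlying operation is simply removing a span of audio and joining what remains. If you ever want to script that step, here is a minimal sketch using the Python library pydub, which is my assumption for illustration rather than a tool this chapter relies on; the file names and timings are made up for the example.

from pydub import AudioSegment

# Load the recording; pydub slices audio in milliseconds.
take = AudioSegment.from_file("interview.wav")

# Suppose the unwanted "hello" runs from 12.4 to 12.9 seconds.
start_ms, end_ms = 12_400, 12_900

# Keep everything before and after the word, then join the two halves.
edited = take[:start_ms] + take[end_ms:]

# Listen around the join before committing, just as you would in the editor.
edited[start_ms - 2_000:start_ms + 2_000].export("preview.wav", format="wav")
edited.export("interview_edited.wav", format="wav")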

Your task in editing is to work your way through the whole interview, monologue, panel discussion, or role play and remove any audio that distracts from the objective.

Cut Out Anything That’s Not Relevant

Highly disciplined editors are ruthless and cut as much out of interviews as possible. The shorter the better. Workshop attendees often cringe when I tell them this. They say they don’t want to offend their subject matter expert who gave them a half hour interview by using only two minutes. But isn’t it more offensive to the listener to drag him through half an hour of irrelevancies just because we don’t want to offend the expert?

Processing the Audio

Chapter 7 explored ways to use the microphone and your voice so you have more vocal presence. Here are some technical tricks that will add to your or your interviewee’s presence.

Editing packages offer a range of effects, most of which you’ll never need to use when making learning content, although they are fun to play around with. After all, how will a “digital delay” effect make your voice clearer and easier to understand? And how will changing the pitch of someone’s voice improve your interview? Despite the fun you may have in experimenting with them, many of the effects are pretty much useless for the serious podcaster. However, there are two that you should use to make your vocals sound superb: the graphic equalizer and the compressor.

Graphic Equalizer

The graphic equalizer enables you to make a voice crisper and clearer and give it more presence, which matters because many people experience some degradation in hearing quality. As we grow older, our ears become less efficient at hearing the bass and treble frequencies. So when you listen to your favorite symphony, sounds from instruments like the triangle might not be as clear. And you won’t hear sounds like the hi-hat in a rock song as strongly as you did 10 years ago. However, the human ear tends to have no problem with the middle frequencies.

When you listen to a voice-over, your ear will hear the middle frequencies clearly but miss some of the resonant bass that adds warmth and authority and the treble that gives the voice presence. This can make the voice feel a little lifeless. To compensate for this and ensure that the listener enjoys the full range of your voice or your interviewee’s voice, audio professionals adjust the levels of certain frequencies using a graphic equalizer. Think of it as a little like the bass, middle, and treble controls on a stereo system or car radio, only more precise.

Professionals call this process “adjusting the EQ” (EQ stands for equalization). To make a voice-over sound clearer and stand out over a music bed, you can boost the higher and lower frequencies while lowering the middle frequencies. This may sound very academic, but when you listen to the difference, you’ll really hear the power of adjusting the EQ (Figure 8-3).

Figure 8-3. Adjusting the EQ

Audio editing software programs provide a graphic equalizer in the effects menu, and it’s valuable to experiment with it so you can hear the difference between an adjusted voice and a nonadjusted voice. As you get used to playing around with EQ, you’ll recognize that certain frequency ranges offer different qualities.

To boost the warmth and presence of someone’s voice, adjust the settings in the graphic equalizer so they look somewhat like a wave. You can do this by giving the frequencies below 250 hertz a little boost, dipping them from about 250 hertz, bringing them back up at around 2 kilohertz, and then increasing them between 2 kilohertz and 4 kilohertz, as you can see in the diagram. Use these ranges as a start and experiment with them. The key to adjusting the EQ is not to memorize all the different ranges, but to listen to the voice and adjust the frequencies until it sounds warmer, clearer, and more present.
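If it helps to see that curve written down, here is a minimal Python sketch that simply expresses the suggested shape as decibel offsets per frequency band. The exact numbers are illustrative assumptions, not settings from this chapter; your ears should always have the final say.

def suggested_eq_offset_db(freq_hz: float) -> float:
    """Rough 'wave' shaped voice curve: low boost, mid dip, presence lift."""
    if freq_hz < 250:          # warmth: gentle boost below 250 Hz
        return 2.0
    elif freq_hz < 2_000:      # midrange: slight dip from about 250 Hz
        return -2.0
    elif freq_hz <= 4_000:     # presence: lift between 2 kHz and 4 kHz
        return 3.0
    return 0.0                 # leave everything above 4 kHz flat

for freq in (100, 500, 1_000, 3_000, 8_000):
    print(f"{freq} Hz -> {suggested_eq_offset_db(freq):+.1f} dB")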

EQ is a hard concept to make sense of when written. The best way to really understand it is to play with it. So experiment with some spoken word recordings and listen to how your adjustments improve vocal quality. With practice it will become easy.

Adjusting the equalization on spoken word content will make the voice clearer and easier to pick up when combined with music and sound effects.

Compressor

Have you noticed that a lot of radio announcers have warm, full-sounding voices? Wouldn’t it be nice if your e-learning or podcast voice-overs had this richness? Beyond processing their voices through the graphic equalizer, audio professionals also use an audio compressor, which thickens the voice and balances all the audio levels so the person speaking is easier to hear.

It’s easy to get confused about compression because IT professionals also use the term to refer to the process of reducing file sizes, for example, when uploading content to the web. However, in this context, it refers to compressing the audio signal.

The compressor helps manage loud and soft volume levels, moderating them when they fluctuate. Volume often varies when we and others speak. For example, one person in your recording may speak loudly, while another person speaks softly. Some people start a sentence boisterously but trail off toward the end. All these factors make it difficult to ensure continuity in your podcast. Compression moderates the differences.

In technical terms, the compressor adjusts the volume of any sound in your recording above a certain volume level. Audio engineers call this level the threshold, similar to a thermostat on a heater. For example, if you set the threshold at -12 decibels, the compressor will kick in when any sound in your recording is louder than that threshold. So when a sound goes higher than -12 decibels, the compressor will reduce its level.

You set the ratio to determine how much the volume is adjusted per decibel when it goes above the threshold. It’s best to aim for a ratio between 2:1 and 4:1. Anything higher than that will sound unnatural and become even less natural as it heads up to 10:1.
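Here is the arithmetic behind those two controls, written as a small Python sketch so you can see exactly what a threshold and ratio do to a level. It ignores attack, release, and make-up gain, and the numbers are only examples.

def compressed_level_db(input_db, threshold_db=-12.0, ratio=3.0):
    """Output level of a simple compressor for a given input level (in dB).

    Below the threshold the signal passes through unchanged. Above it,
    every `ratio` decibels of input produce only 1 decibel of output
    above the threshold.
    """
    if input_db <= threshold_db:
        return input_db
    return threshold_db + (input_db - threshold_db) / ratio

# A -6 dB peak, with the threshold at -12 dB and a 3:1 ratio, is 6 dB over
# the threshold, so it comes out only 2 dB over it instead: -10 dB.
print(compressed_level_db(-6.0))   # -10.0
print(compressed_level_db(-20.0))  # -20.0 (below the threshold, untouched)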

The best way to fully understand compression is to play around with it. As with EQ, any written explanation will sound dry and academic. But fire up your audio editing software, record some audio, and play around with the compressor effect. You will hear how good your voice sounds once it has been compressed, and it will be easier for the learner to hear because any highs and lows will be evened out.

Noise Removal Effect

If you’ve ever recorded audio or video in a room with a noisy air conditioner, you know how distracting it can be. Most audio editing software programs have an effect called noise removal that can help cure your podcasts of such background noise. Unfortunately, the only time I have ever seen noise removal work well is on CSI: Miami, when Horatio Caine has one of his investigators remove the background noise from a nightclub scene. Like a lot of what you see on television, it’s fake. Noise removal works by sampling an element of your background, such as the loud air conditioner, which sounds like sh. It then goes through the whole recording and removes the frequencies associated with this sample. It removes not just the air conditioner noise, but the sh sound from words like should and shall. It’s worth playing around with this function just to satisfy your curiosity. As you do, you’ll notice the whole recording just sounds odd and unnatural. You can adjust the level of removal if you want to be subtle, but then what’s the point? Only in TV shows does noise removal work. Ultimately, the only way to get rid of noise in the background is to find a quiet location to record in.
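To see why the effect behaves this way, here is a minimal sketch of the underlying idea, simple spectral subtraction, using Python with NumPy and SciPy. The file name, the assumption that the first second contains only the air conditioner, and the frame size are all illustrative, and this is not the exact algorithm any particular editor uses.

import numpy as np
from scipy.io import wavfile
from scipy.signal import stft, istft

# Load a mono recording (file name is hypothetical).
rate, audio = wavfile.read("noisy_take.wav")
audio = audio.astype(np.float32)

# Treat the first second as the noise profile, the way the effect asks you
# to sample a noise-only stretch of the recording.
noise = audio[:rate]

freqs, times, spec = stft(audio, fs=rate, nperseg=1024)
_, _, noise_spec = stft(noise, fs=rate, nperseg=1024)

# Average the noise magnitude per frequency bin, then subtract it everywhere.
noise_profile = np.abs(noise_spec).mean(axis=1, keepdims=True)
cleaned_mag = np.maximum(np.abs(spec) - noise_profile, 0.0)

# Keep the original phase and resynthesize. Any speech energy that shares
# those frequencies (the "sh" in "should") gets thinned out as well.
cleaned = cleaned_mag * np.exp(1j * np.angle(spec))
_, result = istft(cleaned, fs=rate, nperseg=1024)

wavfile.write("denoised_take.wav", rate,
              np.clip(result, -32768, 32767).astype(np.int16))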

Multitracking

Once you’ve edited your spoken word content, it’s time to add music, sound effects, and other segments such as panel discussions and monologues. To accomplish this, you use multitracking, the process of stacking multiple tracks and playing them at the same time (Figure 8-4). The process is similar to layers in Photoshop. Let’s say the first track is your introductory monologue. Multitracking allows you to add a music track so they both play at the same time. This is one of the most exciting aspects of audio editing, but it takes practice to learn efficiently.

Figure 8-4. Multitrack

But there’s more to multitracking than just layering tracks on top of one another. You need to adjust the volume of each track. If you played your introduction and music tracks together as they are, the music would probably be so loud it would drown out your voice. And you will need to use the timeshifting tool to position audio elements on different tracks at different points along the timeline.
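You can do the same layering outside Audacity if you ever need to script it. The following is a minimal sketch using the Python library pydub, which is an assumption on my part rather than a tool this chapter relies on; the file names and the 12-decibel reduction are illustrative.

from pydub import AudioSegment

# File names are hypothetical; pydub measures positions in milliseconds.
intro = AudioSegment.from_file("intro_monologue.wav")
music = AudioSegment.from_file("theme_music.mp3")

# Drop the music about 12 dB so it sits under the voice instead of over it.
bed = music - 12

# Layer the quieter music under the monologue, starting at the beginning.
mixed = intro.overlay(bed, position=0)

mixed.export("intro_with_music.wav", format="wav")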

Audio Staircase Method

When you multitrack your audio segments, order the sequence of tracks so the track at the top of the screen is the first track to play, the one below is second, the one below that is third, and so forth. This will look like a staircase of audio tracks. Managing multiple tracks can be confusing, and this arrangement will reduce some of that confusion. You can move tracks up and down by holding your mouse over the left control bar of each track.

Adjusting the Volume

As you mix different audio tracks together, you will need to adjust the volume level of each track. One track may be a presenter introducing the podcast. Another may be some background music. You will need to adjust the volume of the music so it does not drown out the presenter’s voice.

To adjust the volume of each track, you need to go into envelope mode and use the envelope tool. You’ll find it at the top of Audacity in the cluster of buttons that also contains the selector mode button.

When you are in envelope mode, you’ll see thick blue lines overlaid on the outer edges of each track. These represent the track’s volume. Along these lines you’ll create marker points that indicate the current and then the adjusted volume. To do this, take the envelope tool and position it so the line is in the middle of the tool. Then create one marker; that’s the start point. Next, create another marker and drag it down. You will see a visual representation of the volume going down. To bring the volume up again, create a marker at the point you want the volume to rise and then a second one to increase it.
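If you prefer to see the same idea in code, here is a rough pydub sketch that mimics a pair of envelope points: hold the music at full volume, fade it down 12 decibels under the voice, and fade it back up later. All the times, levels, and file names are assumptions for illustration.

from pydub import AudioSegment

music = AudioSegment.from_file("theme_music.mp3")

# Split the track where the volume should change, then fade between the
# levels, much like placing envelope markers along the timeline.
full_intro = music[:3_000]                                      # full volume for 3 s
fade_down = music[3_000:4_000].fade(to_gain=-12.0, start=0, end=1_000)
quiet_bed = music[4_000:20_000] - 12                            # held 12 dB lower
fade_up = music[20_000:21_000].fade(from_gain=-12.0, start=0, end=1_000)
tail = music[21_000:]                                           # back to full volume

ducked = full_intro + fade_down + quiet_bed + fade_up + tail
ducked.export("music_with_envelope.wav", format="wav")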

Positioning Audio Elements Along the Timeline

Your podcast is going to be made up of more than just one audio element. You might have one or two interviews and some doorstop soundbites. So how do you combine them all? You can’t have them all playing at the same time.

This is when you will position each element along the timeline at the point you wish it to be heard. In Audacity, this is done using the timeshift tool.

With the timeshift tool selected, position your cursor on the track you want to shift, hold down the left mouse button, and drag the track left or right. You’ll see you can position it anywhere on the timeline.

Now you can arrange the individual items in the sequence you want them heard.
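For completeness, here is what that sequencing looks like in the same pydub sketch style. Butting elements together with a short silence between them is a crude stand-in for sliding clips along the timeline; the file names and the half-second pause are assumptions.

from pydub import AudioSegment

intro = AudioSegment.from_file("intro.wav")
interview = AudioSegment.from_file("interview_edited.wav")
outro = AudioSegment.from_file("outro.wav")

# Half a second of silence stands in for the gap you'd leave on the timeline.
pause = AudioSegment.silent(duration=500)

episode = intro + pause + interview + pause + outro
episode.export("episode_draft.wav", format="wav")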

Exporting

When you have finished building your podcast in the editing program, you will need to export it as a final media file that people can use. Most likely you’ll export it as an MP3. If you intend to use the file in other podcasts or software programs, you might select WAV; the file size will be larger, but the quality will be better. Under the file menu of your audio editing program, select the export function. It will bring up a window for you to select the format and where you want to save the file. When you save the file, it will also give you the option to enter metadata.
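As a final pydub sketch, here is how that export step might look in code, with MP3 for distribution and WAV for reuse. The tags dictionary is where the metadata goes; pydub hands MP3 encoding off to FFmpeg, and the file names, bitrate, and tag values here are illustrative.

from pydub import AudioSegment

episode = AudioSegment.from_file("episode_draft.wav")

# MP3 for distribution; the tags dictionary becomes the file's metadata.
episode.export(
    "episode01.mp3",
    format="mp3",
    bitrate="128k",
    tags={"title": "Episode 1", "artist": "Learning Team"},
)

# WAV if you plan to reuse the audio elsewhere: bigger file, no lossy step.
episode.export("episode01.wav", format="wav")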

Work Flow

As discussed in chapter 3, it’s critical to develop a regular work flow to make the routine tasks habitual, freeing your mind to truly engage with the creative side of content creation. Here are some suggestions for how to sequence your editing tasks:

1.  Set up your folder structure, if you haven’t already done so, and save all the audio files into their respective subfolders. As you do, rename them according to your naming convention.

2.  Import your audio elements into your editing software and assemble them in sequential order.

3.  Position each audio element on its track along the timeline, following the audio staircase method.

4.  Process your audio. If you choose to add EQ and compression to the tracks, do it to every track before you start editing. That way the same settings are applied to all.

5.  Perform your edits. If you are editing spoken word content and it features music in the background, or vice versa, you can click on the solo button, located on the control bar at the left of each track, so it turns off every other track and you only hear the one you are working on. Don’t forget to turn the solo button off when you move to the next edit.

6.  Adjust the volume. Use the envelope tool to adjust the volume of music and sound effects used behind spoken word content.

7.  When you are finished, export the complete project as a final copy either as a WAV or MP3 file.

8.  Get into the habit of saving regularly.

This is a good general sequence to follow, but editing is a creative process and, by its nature, an iterative experience. You will find yourself going in and out of the different editing modes to finesse aspects of your podcast. And you may import extra audio after having done most of your edits because you come up with a way to lift the overall package. This is fine. It’s just important not to approach these tasks ad hoc; otherwise you may forget a step here or there, slowing down your audio development.

Summary

Many people who work in radio will tell you they never want to work in television because they love all that the audio modality offers to communicate rich and engaging content. And many TV pros who started in radio will look wistful when they talk about their broadcasts without pictures. Audio is special. You’ll no doubt discover its magic as you work with it.

However, you of course are not creating podcasts and other audio content for your own consumption. You’re doing it for the learner. This modality offers exciting ways to create learning that draws on the learner’s experiences and knowledge, using the spoken word, music, and sound effects. Audio will continue to grow in popularity as more media companies produce high-quality podcasts and more people subscribe to them so they can listen during their commute or lounging around the house.
