CHAPTER 7
The Digital Audio Workstation

Over the history of digital audio production, the style, form and function of hard-disk recording have changed to meet the challenges of faster processors, reduced size, larger drives, improved hardware systems and the ongoing push of marketing forces to sell, sell, sell! As a result, there is a wide range of system types that are designed for various purposes, budgets and production styles. As new technologies and programming techniques continue to turn out new hardware and software systems at a dizzying pace, many of the long-held production limitations have vanished, as increased track counts, processing power and affordability have changed the way we see the art of production itself. No single term implies these changes more than the “DAW.”

In recent years, the digital audio workstation (DAW) has come to signify an integrated computer-based hard-disk recording system that commonly offers a wide and ever-changing number of production features such as:

Advanced multitrack recording, editing and mixdown capabilities

MIDI sequencing, editing and scoring capabilities

Integrated video and/or video sync capabilities

Integration with peripheral hardware devices such as controllers, DSP acceleration systems, MIDI and audio interface devices

Plug-in DSP (digital signal processing) support

Support for plug-in virtual instruments

Support for integrating timing, signal routing and control elements with other production software (ReWire)

The truth of the matter is that, by offering an astounding amount of production power for the buck, these software-based programs (Figures 7.1 through 7.3) and their associated hardware devices have revolutionized the face of professional, project and personal studios in a way that touches almost every life within the sound production communities.

fig7_1.jpg

FIGURE 7.1
Pro Tools hard-disk editing workstation for the Mac or PC. (Courtesy of Avid Technology, Inc., www.avid.com)

fig7_2.jpg

FIGURE 7.2
Cubase Media Production System for the Mac or PC. (Courtesy of Steinberg Media Technologies GmbH, a division of Yamaha Corporation, www.steinberg.net)

fig7_3.jpg

FIGURE 7.3
Logic DAW for the Mac. (Courtesy of Apple Inc., www.apple.com)

INTEGRATION NOW—INTEGRATION FOREVER!

Throughout the history of music and audio production, we’ve grown used to the idea that certain devices were only meant to perform a single task: a recorder records and plays back, a limiter limits and a mixer mixes. Fortunately, the age of the microprocessor has totally broken down these traditional lines in a way that has created a breed of digital chameleons that can change their functional colors as needed to match the task at hand. Along these same lines, the digital audio workstation isn’t so much a device as a systems concept that can perform a wide range of audio production tasks with relative ease and speed. Some of the characteristics that can (or should) be offered by a DAW include:

Integration: One of the biggest features of a workstation is its ability to provide centralized control over the digital audio recording, editing, processing and signal routing functions within the production system. It should also provide for direct communications with production-related hardware and software systems, as well as transport and sync control to/from external media devices.

Communication: A DAW should be able to communicate and distribute pertinent audio, MIDI and automation-related data throughout the connected network system. Digital timing (wordclock) and synchronization (SMPTE timecode and/or MTC) should also be supported.

Speed and flexibility: These are probably a workstation’s greatest assets. After you’ve become familiar with a particular system, most production tasks can be tackled in far less time than would be required using similar analog equipment. Many of the extensive signal processing, automation and systems communications features would be far more difficult to accomplish in the analog domain.

Session recall: Because all of the functions are in the digital domain, the ability to instantly save and recall a session and to instantly undo a performed action becomes a relatively simple matter.

Automation: The ability to automate audio, control and session functions allows for a great degree of control over almost all of a DAW’s program and session parameters.

Expandability: Most DAWs are able to integrate new and important hardware and software components into the system with little or no difficulty.

User-friendly operation: An important element of a digital audio workstation is its ability to communicate with its central interface unit: you! The operation of a workstation should be relatively intuitive and shouldn’t obstruct the creative process by speaking too much “computerese.”

I’m sure you’ve gathered from the above points that a software system (and its associated hardware) which is capable of integrating audio, video and MIDI under a single, multifunctional umbrella can be a major investment, both in financial terms and in terms of the time that’s spent learning to master the overall program environment. When choosing a system for yourself or your facility, be sure to take the above considerations into account. Each system has its own strengths, weaknesses and particular ways of working. When in doubt, it’s always a good idea to research the system as much as possible before committing to it. Feel free to contact your local dealer for a salesroom test drive, or better yet, try the demo. As with a new car, purchasing a DAW and its associated hardware can be an expensive proposition that you’ll probably have to live with for a while. Once you’ve taken the time to make the right choice for you, you can get down to the business of making music.

DAW HARDWARE

Keeping step with the modern-day truism that “technology marches on,” the hardware and software specs of a computer and its connected peripherals continue to change at an ever-increasing pace. This is usually reflected as general improvements in such areas as:

Need for speed (multiple processors and accelerated co-processors)

Increased computing power

Increased disk size and speed

Increased memory size and speed

Operating system (OS) and peripheral integration

General connectivity (networking and the Web)

In this day and age, it’s definitely important that you keep step with the ever-changing advances in computer-related production technology (Figure 7.4). That’s not to say you need to update your system every time a new hard- or soft-whiz-bang comes onto the market. On the contrary, it’s often a wise person who knows when a system is working just fine for his or her own personal needs and who does the research to update software and fine-tune the system (to the best of his or her ability). On the other hand, there will come a time (and you’ll know all too well when it arrives) when this “march” of technology will dictate a system change to keep you in step with the times. As with almost any aspect of technology, the best way to judge what will work best for you and your system is to research any piece of hard- and software that you’re considering—quite simply, read the specs, read the reviews, ask your friends and then make your best, most informed choice.

fig7_4.jpg

FIGURE 7.4
Pro Tools HDX DAW for the Mac or PC. (Courtesy of Avid Technology, Inc., www.avid.com)

When buying a computer for audio production, one of the most commonly asked questions is “Which one—Mac or PC?” The answer as to which operating system (OS) will work best for you will actually depend on:

Your preference

Your needs

The kind of software you currently have

The kind of computer platform and software your working associates or friends have

Cost: The fact that you might already be heavily invested in either PC or Mac software, or that you’re more familiar with a certain platform, will usually factor into your system choice

OS: Even this particular question is being sidestepped with the advent of Apple’s Boot Camp, which allows a Mac to boot up under the Mac or Windows OS, giving you freedom of choice to have either or both

The truth is, in this day and age there isn’t much of a functional difference between the two platforms. They both can do the job admirably.

Once you’ve decided which side of the platform tracks you’d like to live on, the more important questions that you should be asking are:

Is my computer fast and powerful enough for the tasks at hand?

Does it have a hard disk that’s large and fast enough for my needs?

Is there enough random access memory (RAM)?

Do I have enough monitor space (real estate) to see the important things at a single glance?

On the “need for speed” front, it’s always a good idea to buy (or build) a computer at the top of its performance range at any given time. Keeping in mind that technology marches on, the last thing that you’ll want to do is buy a new computer only to soon find out that it’s underpowered for the tasks ahead.

The newer quad- and eight-core (multiprocessor) systems allow for faster calculations, as their tasks can be spread across multiple CPUs; for example, a number of DAWs allow odd/even tracks and/or effects processing to be split across multiple cores, spreading the overall processing load and allowing for added track count and DSP capabilities.

With today’s faster and higher capacity hard drives, it’s a simple matter to install cost-effective drives with terabyte capacities into a system. These drives can be internal, or they can be installed in portable drive cases that can be plugged into either a Thunderbolt®, FireWire® or USB port (preferably USB 3 or C), making it easy to take your own personal drive with you to the studio or on-stage.

The speed at which the disk platters turn will often affect a drive’s access time. Modern drives with a high spin rate (7200 rpm or higher) and large amounts of internal cache memory are often preferable for both audio and video production. SSDs (solid-state drives) are also available that have no moving parts at all; their solid-state memory offers near-instant access times and transfer rates that can blaze along at 6 Gb per second or faster.

Within a media production system, it’s always a wise precaution to have a second drive that’s strictly dedicated to your audio, video and media files—this is generally recommended because the operating system (OS) will often require access to the main drive in a way that can cause data interruptions and a slowed response, should both the OS and media need to access data simultaneously.

Regarding random access memory, it’s always good to install a more than adequate amount of high-speed RAM into the system. If a system doesn’t have enough RAM, data will often have to be swapped to the system’s hard drive, which can seriously slow things down and affect overall real-time DSP performance. When dealing with music sampling, video and digital imaging technologies, having a sufficient amount of RAM becomes even more of an issue.

With regard to system and application software, it’s often wise to perform an update to keep your system, well, up-to-date. This holds true even if you just bought the software, because it’s often hard to tell how long the original packaging has been sitting on the shelves—and even if it is brand-spanking new, chances are new revisions will still be available. Updates don’t come without their potential downfalls, however; given the incredible number of hardware/software system combinations that are available, it’s actually possible that an update might do as much harm as good. In this light, it’s actually not a bad idea to do a bit of research before clicking that update button. Whoever said that all this stuff would be easy?

Just like there never seems to be enough space around the house or apartment, having a single, undersized monitor can leave you feeling cramped for visual “real estate.” For starters, a sufficiently large monitor that’s capable of working at higher resolutions will greatly increase the size of your visual desktop; however, if one is a good thing, two is always better! Both Windows® and Mac OS offer support for multiple monitors (Figure 7.5). By adding a commonly available “dual-head” video card, your system can easily double your working monitor space for fewer bucks than you might think. I’ve found that it’s truly a joy to have your edit window, mixer, effects sections, and transport controls in their own places—all in plain and accessible view over multiple monitors.

fig7_5.jpg

FIGURE 7.5
You can never have enough visual “real estate”!

The Desktop Computer

Desktop computers are often (but not always) too large and cumbersome to lug around. As a result, these systems are most often found as a permanent installation in the professional, project and home studio (Figure 7.6). Historically, desktops have offered more processing power than their portable counterparts, but in recent times, this distinction has become less and less of a factor.

fig7_6.jpg

FIGURE 7.6
The desktop computer. (a) The Mac ProTM with Cinema display. (Courtesy of Apple Computers, Inc., www.apple.com) (b) CS450v5 Creation Station desktop PC. (Courtesy of Sweetwater, www.sweetwater.com)

The Laptop Computer

One of the most amazing characteristics of the digital age is miniaturization. At the forefront of the studio-on-the-go movement is the laptop computer (Figure 7.7). From the creation of smaller, lighter and more powerful notebooks has come the technological phoenix of the portable DAW and music performance machine. With the advent of USB, FireWire and Thunderbolt audio interfaces, controllers and other peripheral devices, these systems are now capable of handling most (if not all) of the edit and processing functions that can be handled in the studio. In fact, these AC/battery-powered systems are often powerful enough to handle advanced DAW edit/mixing functions, as well as happily handling a wide range of plug-in effects and virtual instruments, all in the comfort of—anywhere!

fig7_7.jpg

FIGURE 7.7
The laptop computer. (a) The MacBook Pro 15”. (Courtesy of Apple Computers, Inc., www.apple.com) (b) PCAudiolabs laptop. (Courtesy of PCAudiolabs, www.pcaudiolabs.com)

That’s the good news! Now, the downside of all this portability is the fact that, since laptops are optimized to run off of a battery with as little power drain as possible, their:

Processors “may” run slower, so as to conserve on battery power

BIOS (the important subconscious brains of a computer) might be different (again, especially with regards to battery-saving features)

Hard drives “might” not spin as fast (generally they’re shipped with 5400 rpm speed drives, although this and SSD technologies have changed)

Video display capabilities are sometimes limited when compared to a desktop (video memory is often shared with system RAM, reducing graphic quality and refresh rate)

Internal audio interface usually isn’t so great (but that’s why there are so many external interface options)

As the last point suggests, it’s no secret that the internal audio quality of most laptops ranges from quite acceptable to abysmal. As a result, about the only true choice is to find an external audio interface that works best for you and your applications. Fortunately, there are tons of audio interface choices for connecting via FireWire, USB or Thunderbolt, ranging from simple stereo I/O devices to those that include multichannel audio, MIDI and controller capabilities in a small, on-the-go package.

SYSTEM INTERCONNECTIVITY

In the not-too-distant past, installing a device into a computer or connecting between computer systems could’ve easily been a major hassle. With the development of USB and other protocols (as well as the improved general programming of hardware drivers), hardware devices such as mice, keyboards, cameras, soundcards, modems, MIDI interfaces, CD and hard drives, MP3 players, portable fans, LED Christmas trees and cup warmers can be plugged into an available port, installed and be up and running in no time—generally without a hassle. Additionally, with the development of standardized network and Internet protocols, it’s now possible to link computers together in a way that allows for the fast and easy sharing of data throughout a connected system. Using such a system, artists and businesses alike can easily share and swap files with collaborators on the other side of the world, and pro or project studios can swap sound and video files over the web with relative ease.

USB

In recent computer history, few interconnection protocols have affected our lives like the universal serial bus (USB). In short, USB is an open specification for connecting external hardware devices to the personal computer, as well as a special set of protocols for automatically recognizing and configuring them. Here are the current USB specs:

USB 2.0 (up to 480 megabits/sec = 60 megabytes/sec): For high throughput and fast transfer over the original USB 1.0 spec

USB 3.0 (up to 5 gigabits/sec = 640 megabytes/sec): For even higher throughput and faster transfer of the above applications

USB C (up to 10 gigabits/sec = 1.28 gigabytes/sec): For even higher throughput and fast transfer of the above applications and includes a plug that can be inserted in either orientation
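To translate those bits-per-second figures into the bytes-per-second terms that file sizes and drive speeds are usually quoted in, simply divide by eight (there are eight bits in a byte). Here’s a minimal Python sketch of that arithmetic; keep in mind that these are nominal signaling rates (published conversions are often rounded or use binary prefixes) and that real-world throughput will always be somewhat lower due to protocol overhead:

```python
# Convert nominal USB signaling rates from megabits/sec to megabytes/sec.
# Actual sustained transfer speeds will be lower, since some of the raw
# bit rate is spent on encoding and protocol overhead.

USB_SPECS_MBITS = {
    "USB 2.0": 480,            # high speed
    "USB 3.0": 5_000,          # SuperSpeed
    "USB C (3.1 Gen 2)": 10_000,
}

for name, mbits in USB_SPECS_MBITS.items():
    print(f"{name}: {mbits:,} megabits/sec = {mbits / 8:,.0f} megabytes/sec")
```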

The basic characteristics of USB include:

Up to 127 external devices can be added to a system without having to open up the computer. As a result, the industry has largely moved toward a “sealed case” or “locked-box” approach to computer hardware design.

Newer operating systems will often automatically recognize and configure a basic USB device that’s shipped with the latest device drivers.

Devices are “hot pluggable,” meaning that they can be added (or removed) while the computer is on and running.

The assignment of system resources and bus bandwidth is transparent to the installer and end user.

USB connections allow data to flow bidirectionally between the computer and the peripheral.

USB cables can be up to 5 meters in length (up to 3 meters for low-speed devices) and include two twisted pairs of wires, one for carrying signal data and the other pair for carrying a DC voltage to a “bus-powered” device. Those that use less than 500 milliamps (1/2 amp) can get their power directly from the USB cable’s 5-V DC supply, while those having higher current demands will need to be externally powered. USB C, on the other hand, can supply up to 20 volts at 5 amps (100 watts) through the data cable.

Standard USB 1 through 3 cables have different connectors at each end. For example, a cable between the PC and a device would have an “A” plug at the PC (root) connection and a “B” plug for the device’s receptacle.

Cable distribution and “daisy-chaining” are done via a data “hub” (Figure 7.8). These devices act as a traffic cop in that they cycle through the various USB inputs in a sequential fashion, routing the data into a single data output line.

fig7_8.jpg

FIGURE 7.8
USB hubs in action.

FireWire

Originally created in the mid-1990s as the IEEE-1394 standard, the FireWire protocol is similar to USB in that it uses twisted-pair wiring to communicate bidirectional, serial data within a hot-swappable, connected chain. Unlike USB (which can handle up to 127 devices per bus), up to 63 devices can be connected within a FireWire chain. FireWire most commonly supports two speed modes:

FireWire 400 or IEEE-1394a (400 megabits/sec) is capable of delivering data over cables up to 4.5 meters in length. FireWire 400 is ideal for communicating large amounts of data to such devices as hard drives, video camcorders and audio interface devices.

FireWire 800 or IEEE-1394b (800 megabits/sec) doubles the data rate of FireWire 400 and can communicate large amounts of data over much longer cable runs. When using fiber-optic cables, lengths in excess of 90 meters can be achieved in situations that require long-haul cabling (such as within sound stages and studios).

Unlike USB, compatibility between the two modes is mildly problematic, because FireWire 800 ports use a different connector than their earlier predecessor and therefore require adapter cables to ensure compatibility.

Thunderbolt

Originally designed by Intel and released in 2011, Thunderbolt (Figure 7.9) combines the DisplayPort and PCIe bus into a serial data interface. A single Thunderbolt port can support a daisy chain of up to six Thunderbolt devices (two of which can be DisplayPort display devices), which can run at such high speeds as:

Thunderbolt 1 (up to 10 gigabits/sec = 1.28 gigabytes/sec)

Thunderbolt 2 (up to 20 gigabits/sec = 2.56 gigabytes/sec)

Thunderbolt 3 (up to 40 gigabits/sec = 5.12 gigabytes/sec)

Audio Over Ethernet

One of the more recent advances in audio and systems connectivity in the studio and on stage revolves around the concept of communicating audio over Ethernet (AoE). Currently, there are several competing protocols, ranging from open-source designs (no licensing fees) to those that require a royalty to be designed into a hardware networking system.

fig7_9.jpg

FIGURE 7.9
Thunderbolt ports on a MacBook Pro.

By connecting hardware devices directly together via a standard Cat 5 Ethernet cable (Figure 7.10), it’s possible for channel counts of up to 512 x 512 to be communicated over a single connected network. This straightforward approach is designed to replace bulky snake cables and fixed wiring within large studio installations, large-scale stage sound reinforcement, convention centers and other complex audio installations. For example, instead of having an expensive, multi-cable microphone snake run from a stage to the main mixing console, a single Ethernet cable could be run directly from an A/D mic/line cable box to the mixer (as well as to the on-stage monitor mixer, for that matter)—all under digital control, often with a redundant cable/system in case of unforeseen problems or failures.

In short, AoE allows for complex system setups to be interconnected, digitally controlled and routed in an extremely flexible manner, and since the system is built on standard networking hardware, wireless control via apps and computer software is often fully implemented.

fig7_10.jpg

FIGURE 7.10
MOTU AVB (Audio Video Bridge) Switch and AVB Control app for communicating and controlling audio over Ethernet. (Courtesy of MOTU, Inc., www.motu.com)

THE AUDIO INTERFACE

An important device that deserves careful consideration when putting together a DAW-based production system is the digital audio interface. These devices can have a single, dedicated purpose, or they might be multifunctional in nature. In either case, their main purpose in the studio is to act as a connectivity bridge between the outside world of analog audio and the computer’s inner world of digital audio (Figures 7.11 through 7.15). Audio interfaces come in all shapes, sizes and functionalities; for example, an audio interface can be:

Built into a computer (although, more often than not, these devices are limited in quality and functionality)

A simple, two-I/O audio device

A multichannel device, offering many I/Os and numerous I/O expansion options

Fitted with one or more MIDI I/O ports

One that offers digital I/O, wordclock and various sync options

Fitted with a controller surface (with or without motorized faders) that provides for direct DAW control integration

Fitted with built-in DSP acceleration for assisted plug-in processing

fig7_11.jpg

FIGURE 7.11
Steinberg UR22 MkII 2x2 audio interface. (Courtesy of Steinberg Media Technologies GmbH, a division of Yamaha Corporation, www.steinberg.net)

fig7_12.jpg

FIGURE 7.12
MOTU Ultralite mk3 Hybrid FireWire/USB audio interface with effects. (Courtesy of MOTU, Inc., www.motu.com)

fig7_13.jpg

FIGURE 7.13
Presonus Studio 192 26x32 channel audio interface. (Courtesy of Presonus Audio Electronics, Inc., www.presonus.com)

fig7_14.jpg

FIGURE 7.14
Apollo audio interface with integrated UAD effects processing. (a) Apollo FireWire/Thunderbolt. (b) Apollo Twin USB. (Courtesy of Universal Audio, www.uaudio.com © 2017 Universal Audio, Inc. All rights reserved. Used with permission)

fig7_15.jpg

FIGURE 7.15
Burl Audio B80 Mothership audio interface. (Courtesy of Burl Audio, www.burlaudio.com)

These devices are most commonly designed as stand-alone and/or 19” rack mountable systems that plug into the system via USB, FireWire, Thunderbolt or AoE. An interface might have as few as two inputs and two outputs, or it might have more than 24. Recent units offer bit depth/sample rate options that range up to 24/96 or 24/192. In recent times, pretty much all interfaces will work with any DAW and platform (even Digidesign has dropped their use of proprietary hardware/software pairing). For this reason, patience and care should be taken to weigh the various system and audio quality options in a way that best suits your needs and budget—as these options could easily affect your future expansion and systems operation choices.

Audio Driver Protocols

Audio driver protocols are software standards that govern how data is communicated between the system’s software and hardware. A few of the more common protocols are:

WDM: This driver allows compatible single-client, multichannel applications to record and play back through most audio interfaces using Microsoft Windows. Software and hardware that conform to this basic standard can communicate audio to and from the computer’s basic audio ports.

ASIO: The Audio Stream Input/Output architecture (which was developed by Steinberg and offered free to the industry) forms the backbone of VST. It does this by supporting variable bit depths and sample rates, multi-channel operation, and synchronization. This commonly used protocol offers low latency, high performance, easy setup and stable audio recording within VST.

MAS: The MOTU Audio System is a system extension for the Mac that uses an existing CPU to accomplish multitrack audio recording, mixer, bussing and real-time effects processing.

CoreAudio: This driver allows compatible single-client, multichannel applications to record and play back through most audio interfaces using Mac OS X. It supports full-duplex recording and playback of 16-/24-bit audio at sample rates up to 96 kHz (depending on your hardware and CoreAudio client application).

In most circumstances, it won’t be necessary for you to be familiar with the protocols—you just need to be sure that your software and hardware are compatible for use with the driver protocol that works best for you. Of course, further information can always be found at the respective companies’ websites.
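That said, it’s easy to peek at which protocols your own computer exposes. Here’s a quick, hypothetical sketch using the third-party Python sounddevice library (not part of any DAW or driver package) to list the available host APIs (driver protocols) and audio devices:

```python
# List the driver protocols (host APIs) and audio devices that a system
# exposes, using the third-party "sounddevice" library
# (pip install sounddevice). On Windows you'd typically see WDM/WASAPI
# and ASIO entries; on the Mac, Core Audio.

import sounddevice as sd

for api in sd.query_hostapis():
    print("Host API:", api["name"])

for dev in sd.query_devices():
    print(f"  {dev['name']}: {dev['max_input_channels']} in / "
          f"{dev['max_output_channels']} out @ "
          f"{dev['default_samplerate']:.0f} Hz default")
```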

Latency

When discussing the audio interface as a production tool, it’s important that we touch on the issue of latency. Quite simply, latency refers to the buildup of delays (measured in milliseconds) in audio signals as they pass through the audio circuitry of the audio interface, CPU, internal mixing structure and I/O routing chains. When monitoring a signal directly through a computer’s signal path, latency can be experienced as a short delay between the input and the monitored signal. If the delays are excessive, they can be unsettling enough to throw a performer off time. For example, when recording a synth track, you might actually hear the delayed monitor sound shortly after hitting the keys (not a happy prospect), and latency on vocals can be quite unsettling. However, by switching to a supported ASIO or CoreAudio driver and by optimizing the interface/DAW buffer settings to their lowest operating size (without causing the audio to stutter), these delay values can be reduced to an unnoticeable or barely noticeable range.
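The arithmetic behind those buffer settings is simple: each buffer must be completely filled before its audio can be handed along, so every buffer in the monitoring chain adds a delay of buffer size divided by sample rate. Here’s a minimal Python sketch of that calculation (the buffer sizes shown are typical interface settings, not values from any particular driver):

```python
# Each buffer adds (buffer_samples / sample_rate) seconds of delay before
# the audio can move on, which is why smaller buffers mean lower latency
# (at the cost of a greater risk of stuttering).

def buffer_latency_ms(buffer_samples: int, sample_rate: int) -> float:
    """Delay contributed by a single buffer, in milliseconds."""
    return buffer_samples / sample_rate * 1000

for size in (64, 128, 256, 512, 1024):
    print(f"{size:5d} samples @ 44.1 kHz = "
          f"{buffer_latency_ms(size, 44100):5.1f} ms")

# Round-trip monitoring latency is at least the input buffer plus the
# output buffer, and often more once converter and plug-in delays add up.
```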

In response to the above problem, most modern interface drivers include a function called direct monitoring, which allows the system to monitor inputs directly at the interface in a way that bypasses the DAW’s software monitoring path. The result is a monitor (cue) source that is free of latency, allowing artists to hear themselves without the distraction of delays in the monitor path.

Need Additional I/O?

Obviously, there is a wide range of options that should be taken into account when buying an interface. Near the top of this list (audio quality always being the top consideration) is the need for having an adequate number of inputs and outputs (I/O).

Although a number of interface designs include a large number of I/O channels, by far most have a limited I/O count but instead offer access to additional I/O options should the need arise. These can include such options as:

Lightpipe (ADAT) I/O, whereby each optical cable can give access to either eight channels at sample rates of 44.1 or 48 kHz, or four channels at 96 kHz (if this option is available), when used with an outboard lightpipe preamp (Figure 7.16).

Connecting additional audio interfaces to a single computer. This is possible whenever several compatible interfaces can be detected by and controlled from a single interface driver.

Using an audio-over-Ethernet protocol and compatible interface systems, additional I/O can be added by connecting additional AoE devices onto the network and patching the audio through the system drivers.

fig7_16.jpg

FIGURE 7.16
Outboard lightpipe (ADAT) preamp. (a) Audient ASP800. (Courtesy of Audient Limited, www.audient.com) (b) Presonus Digimax D8. (Courtesy of Presonus Audio Electronics, Inc., www.presonus.com)

It’s often important to fully research your needs and possible hardware options before you buy an interface. Anticipating your future needs is never an easy task, but it can save you from future heartaches, headaches and additional spending.

DAW Controllers

Originally, one of the more common complaints against most DAWs (particularly relating to the use of on-screen mixers) was the lack of hardware control that gives the user direct, hands-on access. Over the years, this has been addressed by major manufacturers and third-party companies in the form of:

Hardware DAW controllers

MIDI instrument controller surfaces that can directly address DAW controls

On-screen touch monitor surfaces

iOS-based controller apps

It’s important to note that there are a wide range of controllers from which to choose—and just because others feel that the mouse is cumbersome doesn’t mean that you have to feel that way; for example, I have several controllers in my own studio, but the mouse is still my favorite tool. As always, the choice of what works best for you is totally up to you.

HARDWARE CONTROLLERS

Hardware controller types (Figure 7.17) generally mimic the design of an audio mixer in that they offer slide or rotary gain faders, pan pots, solo/mute and channel select buttons, as well as full remote transport functions. A channel select button might be used to actively assign a specific channel to a section that contains a series of grouped pots and switches that relate to EQ, effects and dynamics functions, or the layout may be simple in form, providing only the most-often-used direct control functions in a standard channel layout.

Such controllers vary in the number of channel control strips that are offered at one time. They’ll often (but not always) offer direct control over eight input strips at a time, allowing channels to be switched in banks of eight (1–8, 9–16, 17–24, etc.) so that any number of the grouped inputs can be accessed on the controller, as well as on the DAW’s on-screen mixer. These devices will also often include software function keys that can be programmed to give quick and easy access to the DAW’s more commonly used program keys.

fig7_17.jpg

FIGURE 7.17
Hardware controllers. (a) Mackie Control Universal Pro DAW controller. (Courtesy of Loud Technologies, Inc., www.mackie.com) (b) SSL Nucleus DAW controller and audio interface. (Courtesy of Solid State Logic, www.solid-statelogic.com)

INSTRUMENT CONTROLLERS

Since all controller commands are transmitted between the controller and audio editor via MIDI and device-specific MIDI SysEx messages (see Chapter 9), it only makes sense that a wide range of MIDI instrument controllers (mostly keyboard controllers) offer controls, performance triggers and system functionality that can directly integrate with a DAW (Figure 7.18). The added ability to control a mix, as well as remote transport control, is a nice feature should the keyboard controller be out of arm’s reach of the DAW.

fig7_18.jpg

FIGURE 7.18
Keyboard controllers will often provide direct access to DAW mixing and function controls. (a) Komplete Kontrol S49 keyboard controller. (Courtesy of Native Instruments GmbH, www.nativeinstruments.com) (b) KX49 keyboard controller. (Courtesy of Yamaha Corporation, www.yamaha.com)

TOUCH CONTROLLERS

In addition to the wide range of hardware controllers that are available on the market, an ever-growing number of software-based touch-screen monitor controllers have begun to take hold. These can take the form of standard touch-screen monitors that let you have simple, yet direct, control over any software commands, or they can include software that gives you additional commands and control over specific DAWs and/or recording-related software in an easy-to-use fashion (Figure 7.19a). Since these displays are computer-based devices themselves, they can change their form, function and entire way of working with a single, uh, touch.

In addition to medium-to-large touch control screens, a number of Wi-Fi-based controllers are available for the iPad (Figure 7.19b). These controller “apps” offer direct control over many of the functions that were available on hardware controllers that used to cost hundreds or thousands of dollars, but are now emulated in software and can be purchased from an app “store” for the virtual cost of a cup of coffee.

fig7_19.jpg

FIGURE 7.19
Touch screen controllers. (a) The Raven MTi Multi-touch Audio Production Console. (Courtesy of Slate Pro Audio, www.slateproaudio.com) (b) V-Control Pro DAW controller for the iPad. (Courtesy of Neyrinck, www.vcontrolpro.com)

LARGE-SCALE CONTROLLERS

Another, altogether different type of beastie is the large-scale controller (Figure 7.20). In fact, these controllers (which might or might not include analog hardware, such as mic preamps) are far more likely to resemble a full-sized music and media production console than a controller surface. They allow for direct control and communication with the DAW, but offer a large number of physical and/or touch-screen controls for easy access during such tasks as mixing for film, television and music production.

fig7_20.jpg

FIGURE 7.20
Avid S6 integrated controller/console. (Courtesy of Avid Technology, Inc., www.avid.com)

SOUND FILE FORMATS

A wide range of sound file formats exist within audio and multimedia production. Here is a list of those that are used in professional audio that don’t use data compression of any type:

Wave (.wav): The Microsoft Windows format supports both mono and stereo files at a variety of bit depths and sample rates. WAV files contain PCM-coded audio (uncompressed pulse-code modulation formatted data) that follows the Resource Interchange File Format (RIFF) spec, which allows extra user information to be embedded and saved within the file itself.

Broadcast wave (.wav): In terms of audio content, broadcast wave files are the same as regular wave files; however, text strings for supplying additional information (most notably, time code data) can be embedded in the file according to a standardized data format.

Apple AIFF (.aif or .snd): This standard sound file format from Apple supports mono or stereo, 8-, 16- and 24-bit audio at a wide range of sample rates. Like broadcast wave files, AIFF files can contain embedded text strings.
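To see how the RIFF spec makes room for extra information alongside the audio itself, here’s a minimal Python sketch that walks the chunks of a WAV file; a broadcast wave file would simply show a “bext” chunk alongside the usual “fmt ” and “data” chunks (the file name is hypothetical):

```python
import struct

def list_wav_chunks(path: str) -> None:
    """Print the ID and size of every chunk in a RIFF/WAV file."""
    with open(path, "rb") as f:
        riff, _size, wave = struct.unpack("<4sI4s", f.read(12))
        assert riff == b"RIFF" and wave == b"WAVE", "not a WAV file"
        while True:
            header = f.read(8)
            if len(header) < 8:
                break
            chunk_id, chunk_size = struct.unpack("<4sI", header)
            print(chunk_id.decode("ascii", "replace"), chunk_size, "bytes")
            # Chunks are word aligned: skip the body plus a pad byte if odd.
            f.seek(chunk_size + (chunk_size & 1), 1)

list_wav_chunks("killerkick.wav")   # hypothetical file from a session folder
```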

Sound File Sample and Bit Rates

While the sample rate of a recorded bit stream (samples per second) directly relates to the resolution at which a recorded sound will be digitally captured, the bit depth of a digitally recorded sound file directly relates to the number of quantization steps that are encoded into the bit stream. It’s important that these rates be determined and properly set before starting a session. Further reading on sample rates and bit depths can be found in Chapter 6. Additional info on sound files and compression codecs can be found in Chapter 11.
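As a quick worked example of that relationship, an n-bit sample can encode 2^n discrete quantization steps, and each added bit buys roughly 6 dB of dynamic range (the familiar “6 dB per bit” rule of thumb):

```python
# Each n-bit sample can take on 2**n discrete values, and every added bit
# provides roughly 6 dB of dynamic range.

for bits in (8, 16, 24):
    steps = 2 ** bits
    print(f"{bits}-bit audio: {steps:,} quantization steps, "
          f"~{6.02 * bits:.0f} dB dynamic range")
# 16-bit audio: 65,536 quantization steps, ~96 dB dynamic range
```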

Sound File Interchange and Compatibility Between DAWs

At the sound file level, most software editors and DAWs are able to read a wide range of uncompressed and compressed formats, which can then be saved into a new DAW session format. At the session level, there are several ways to exchange data for an entire session from one platform, OS or hardware device to another. These include the following:

Open Media Framework Interchange (OMFI) is a platform-independent session file format intended for the transfer of digital media between different DAW applications; it is saved with an .omf file extension. OMF (as it is commonly called) can be saved in either of two ways: (1) “export all to one file,” whereby the OMF file includes all of the sound files and session references that are included in the session (be prepared: this file will be extremely large), or (2) “export media file references,” which does not contain the sound files themselves but will contain all of the session’s region, edit and mix settings; effects (relating to the receiving DAW’s available plug-ins and ability to translate effects routing); and I/O settings. This second type of file will be small by comparison; however, the original sound files must be transferred into the proper session folders.

One audio-only export option makes use of the broadcast wave sound file format. By using broadcast wave, many DAWs are able to directly read the time-stamped data that’s embedded into each sound file and then automatically line the files up within the session.

Software options that can convert session data between DAWs also exist. Pro Convert from Solid State Logic, for example, is a stand-alone program that helps you translate sound file, level and other session information from one DAW format into another format or a readable XML file.

Although these systems for allowing file and session interchangeability between workstation types can be a huge time and work saver, it should be pointed out that they are, more often than not, far from perfect. It’s not uncommon for files not to line up properly (or to load at all), and plug-ins can disappear and/or lose their settings—Murphy’s law definitely applies. As a result, the most highly recommended and surefire way to make sure that a session will load into any DAW platform is to make (print or export) a copy of each track, starting from the session’s beginning (00:00:00:00) and going to the end of that particular track. Using this system, all you need to do is load each track into the new workstation at its respective track beginning point and get to work.

Of course, the above method won’t load any of the plug-in or mixer settings (often an interchange problem anyway). Therefore, it’s extremely important that you properly document the original session, making sure that:

All tracks have been properly named (supplying additional track notes and documentation, if needed).

All plug-in names and settings are well documented (a screenshot can go a long way toward keeping track of these settings).

Any virtual instrument or MIDI tracks are printed to an audio track. (Make sure to include the original MIDI files in the session, and to document all instrument names and settings—again, screenshots can help.)

Any special effects or automation moves are printed to the particular track in question (make sure this is well documented) and you should definitely consider providing an additional copy of the track without effects or automation.

DAW SOFTWARE

By their very nature, digital audio workstations (Figures 7.1 through 7.3, as well as 7.21 and 7.22) are software programs that integrate with computer hardware and functional applications to create a powerful and flexible audio production environment. These programs commonly offer extensive record, edit and mixdown facilities for such uses in audio production as:

Extensive sound file recording, edit and region definition and placement

MIDI sequencing and scoring

Real-time, on-screen mixing

Real-time effects

Mixdown and effects automation

Sound file import/export and mixdown export

Support for video/picture playback and synchronization

Systems synchronization

Audio, MIDI and sync communications with other audio programs (e.g., ReWire)

Audio, MIDI and sync communications with other effects and software instruments (e.g., VST technology)

This list is but a small smattering of the functional capabilities that can be offered by an audio production DAW.

fig7_21.jpg

FIGURE 7.21
Reaper DAW software. (Courtesy of Cockos Incorporated, www.reaper.fm)

fig7_22.jpg

FIGURE 7.22
Presonus Studio One DAW software. (Courtesy of Presonus Audio Electronics, Inc., www.presonus.com)

Suffice it to say that these software production tools are extremely powerful and varied in their form and function. As you can see, even with their inherent strengths, quirks and complexities, their basic look, feel and operational capabilities have, to some degree, become unified among the major DAW competitors. Having said this, there are enough variations in features, layout and basic operation that individuals (from aspiring beginner to seasoned professional) will have their favorite DAW make and model. With the growth of the DAW and computer industries, people have begun to customize their computers with features, added power and peripherals that rival their love for souped-up cars and motorcycles. In the end, though (as with many things in life), it doesn’t matter which type of DAW you use—it’s how you use it that counts!

Sound Recording and Editing

Most digital audio workstations are capable of recording sound files in mono, stereo, surround or multichannel formats (either as individual files or as a single interleaved file). These production environments graphically display sound file information within a main graphic window (Figure 7.23), which contains drawn waveforms that graphically represent the amplitude of a sound file over time in a WYSIWYG (what you see is what you get) fashion. Depending on the system type, sound file length and the degree of zoom, the entire waveform may be shown on the screen, or only a portion will show as it scrolls over the course of the song or program. Graphic editing differs greatly from the “razor blade” approach that’s used to cut analog tape, in that the waveform gives us both visual and audible cues as to precisely where an edit point should be. Using this common display technique, any position, cut/copy/paste, gain or time changes will be instantly reflected in the waveforms on the screen. Almost always, these edits are nondestructive (a process whereby the original sound file isn’t altered; only the way in which the region in/out points are accessed or the file is processed will be changed), meaning that edits can be undone, redone, copied and pasted virtually without limit.

fig7_23.jpg

FIGURE 7.23
Main edit window within the Cubase audio production software. (Courtesy of Steinberg Media Technologies GmbH, a division of Yamaha Corporation, www.steinberg.net)
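Under the hood, drawing a long sound file at a given zoom level is a simple data-reduction trick: the samples that fall under each on-screen pixel column are boiled down to a min/max pair, and those pairs become the drawn waveform. Here’s a minimal sketch of the idea (the decaying test tone is hypothetical stand-in audio):

```python
import numpy as np

def waveform_peaks(samples: np.ndarray, pixels: int):
    """Reduce a mono sample array to one (min, max) pair per pixel column."""
    columns = np.array_split(samples, pixels)
    return [(col.min(), col.max()) for col in columns]

# One second of a decaying 440-Hz tone, rendered into 80 pixel columns.
t = np.linspace(0, 1, 44_100, endpoint=False)
audio = np.sin(2 * np.pi * 440 * t) * np.exp(-3 * t)
peaks = waveform_peaks(audio, 80)
print(peaks[0], peaks[-1])   # columns near the start are taller (louder)
```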

Only when a waveform is fully zoomed in is it possible to see the individual sample amplitude levels of a sound file (Figure 7.24). At this zoom level, it becomes simple to locate zero-crossing points (points where the level is at the 0, center-level line). In addition, when a sound file is zoomed in to this level, the program might allow the sample points to be redrawn in order to remove potential offenders (such as clicks and pops) or to smooth out amplitude transitions between loops or adjacent regions.

fig7_24.tif

FIGURE 7.24
Zoomed-in edit window showing individual samples.
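Finding those zero-crossing points is straightforward. Here’s a minimal sketch that returns the sample indices where a signal changes sign, which is exactly where an edit is least likely to produce an audible click:

```python
import numpy as np

def zero_crossings(samples: np.ndarray) -> np.ndarray:
    """Return the sample indices where the signal changes sign."""
    signs = np.sign(samples)
    return np.where(np.diff(signs) != 0)[0]

t = np.linspace(0, 0.01, 441, endpoint=False)   # 10 ms at 44.1 kHz
idx = zero_crossings(np.sin(2 * np.pi * 440 * t))
print(idx)   # candidate edit points, in samples
```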

The nondestructive edit capabilities of a DAW refer to a disk-based system’s ability to edit a sound file without altering the data that was originally recorded to disk. This important capability means that any number of edits, alterations or program versions can be performed and saved to disk without altering the original sound file data.

Nondestructive editing is accomplished by accessing defined segments of a recorded digital audio file (often called regions) and allowing them to be reproduced in a user-defined order, with user-defined in/out points and levels, in a manner that can be (and often is) different from the originally recorded sound file. In effect, when a specific region is defined, we’re telling the program to access the sound file at a point that begins at a specific memory address on the hard disk and continues until a specified ending address has been reached (Figure 7.25). Once defined, these regions can be inserted into a program list (often called a playlist or edit list) in such a way that they can be accessed and reproduced in any order and any number of ti-ti-ti-times. For example, Figure 7.26 shows a snippet from Gone with the Wind that contains Rhett’s immortal words “Frankly, my dear, I don’t give a damn.” By segmenting it into three regions, we could use a DAW editor to output the words in several ways.

fig7_25.tif

FIGURE 7.25
Nondestructive editing allows a region within a larger sound file to begin at a specific in-point and play until the user-defined end-point is reached.

fig7_26.tif

FIGURE 7.26
Example of how snippets from Rhett’s famous Gone with the Wind dialogue can be easily rearranged using standard non-destructive editing.

When working in a graphic editing environment, regions can usually be defined by positioning the cursor over the waveform, pressing and holding the mouse or trackball button and then dragging the cursor to the left or right, which highlights the selected region for easy identification. After the region has been defined, it can be edited, marked, named, maimed or otherwise processed.
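As a minimal sketch of the concept, a region can be modeled as nothing more than a named pair of in/out addresses into an untouched source file, with a playlist simply dictating the order in which those regions are read back (the names and sample addresses below are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class Region:
    name: str
    in_point: int    # starting sample address within the source file
    out_point: int   # ending sample address within the source file

# Three regions defined over one recorded line of dialogue.
frankly = Region("frankly-my-dear", 0, 44_100)
dont_give = Region("i-dont-give", 44_100, 88_200)
a_damn = Region("a-damn", 88_200, 132_300)

# The playlist reorders (and even repeats) playback without ever touching
# the source file itself.
playlist = [a_damn, frankly, dont_give, a_damn]
for region in playlist:
    print(f"play {region.name}: samples {region.in_point}..{region.out_point}")
```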

As one might expect, the basic cut-and-paste techniques used in hard-disk recording are entirely analogous to those used in a word processor or other graphics-based programs:

Cut: Places the highlighted region into clipboard memory and deletes the selected data (Figure 7.27a).

Try This: Recording a Sound file to Disk

1. Download a demo copy of your favorite DAW (these are generally available off the company’s website for a free demo period).

2. Download the workstation’s manual and familiarize yourself with its functional operating basics.

3. Consult the manual regarding the recording of a sound file.

4. Assign a track to an interface input sound source.

5. Name the track! It’s always best to name the track (or tracks) before going in to record. In this way, the file will be saved to disk within the session folder under a descriptive name instead of an automatically generated file name (e.g., killerkick.wav instead of track16–01.wav).

6. Save the session and assign the input to another track, and overdub a track along with the previously recorded track.

7. Repeat as necessary until you’re having fun!

8. Save your final results for the next tutorial.

fig7_27.tif

FIGURE 7.27
Standard Cut, Copy & Paste commands. (a) Cutting inserts the highlighted region into memory and deletes the selected data. (b) Copying simply places the highlighted region into memory without changing the selected waveform in any way. (c) Pasting copies the data within the system’s clipboard memory into the sound file at the current cursor position.

Copy: Places the highlighted region into memory and doesn’t alter the selected waveform in any way (Figure 7.27b).

Paste: Copies the waveform data that’s within the system’s clipboard memory into the sound file beginning at the current cursor position (Figure 7.27c).

Besides basic nondestructive cut-and-paste editing techniques, the amplitude processing of a signal is one of the most common types of changes that are likely to be encountered. These include such processes as gain changing, normalization and fading.

Try This: Copy and Paste

1. Open the session from the preceding tutorial, “Recording a Sound File to Disk.”

2. Consult your editor’s manual regarding basic cut-and-paste commands (which are almost always the standard PC and Mac commands).

3. Open a sound file and define a region that includes a musical phrase, lyric or sentence.

4. Cut the region and try to paste it into another point in the sound file in a way that makes sense (musical or otherwise).

5. Feel free to cut, copy and paste to your heart’s desire to create an interesting or totally wacky sound file.

Gain changing relates to the altering of a region or track’s overall amplitude level, such that a signal can be proportionally increased or reduced to a specified level (often in dB or percentage value). To increase a sound file or region’s overall level, a function known as normalization can be used. Normalization (Figure 7.28) refers to an overall change in a sound file or defined region’s signal level, whereby the file’s greatest amplitude will be set to 100% full scale (or a set percentage level of full scale), with all other levels in the sound file or region being proportionately scaled up or down in gain level.

fig7_28.tif

FIGURE 7.28
Original signal and normalized signal level.
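Here’s a minimal sketch of the math behind peak normalization: find the greatest absolute sample value, compute the gain that would bring it to the target level and scale every sample in the file or region by that same proportion:

```python
import numpy as np

def normalize(samples: np.ndarray, target: float = 1.0) -> np.ndarray:
    """Scale so the greatest amplitude hits `target` (1.0 = 100% full scale)."""
    peak = np.max(np.abs(samples))
    if peak == 0:
        return samples              # silence: nothing to scale
    return samples * (target / peak)

quiet = np.array([0.1, -0.25, 0.5, -0.2])
print(normalize(quiet))             # [ 0.2  -0.5   1.   -0.4 ]
```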

The fading of a region (either in or out, as shown in Figure 7.29) is accomplished by increasing or reducing a signal’s relative amplitude over the course of a defined duration. For example, fading in a file proportionately increases a region’s gain from minus infinity (zero level) to full gain. Likewise, a fade-out has the opposite effect of creating a transition from full gain down to minus infinity (silence). These DSP functions have the advantage of creating a much smoother transition than would otherwise be humanly possible when performing a manual fade.

fig7_29.tif

FIGURE 7.29
Examples of fade-in and fade-out curves.

A cross-fade (or X-fade) is often used to smooth the transition between two audio segments that either are sonically dissimilar or don’t match in amplitude at a particular edit point (a condition that would otherwise lead to an audible “click” or “pop”). This useful tool basically overlaps a fade-in and fade-out between the two waveforms to create a smooth transition from one segment to the next (Figure 7.30). Technically, this process averages the amplitude of the signals over a user-definable length of time in order to mask the offending edit point.

fig7_30.jpg

FIGURE 7.30
Example of a cross-fade window. (Courtesy of Steinberg Media Technologies GmbH, a division of Yamaha Corporation, www.steinberg.net)
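A linear cross-fade can be sketched in just a few lines: over the overlap, the outgoing audio fades from full gain to silence while the incoming audio does the opposite (real DAWs also offer equal-power and custom curve shapes, as the window above suggests):

```python
# Join two regions with a linear cross-fade over `overlap` samples,
# masking any click that a hard edit between them might cause.

import numpy as np

def crossfade(a: np.ndarray, b: np.ndarray, overlap: int) -> np.ndarray:
    """Splice region b onto region a, blending across the overlap."""
    fade_out = np.linspace(1.0, 0.0, overlap)   # outgoing gain curve
    fade_in = 1.0 - fade_out                    # incoming gain curve
    blended = a[-overlap:] * fade_out + b[:overlap] * fade_in
    return np.concatenate([a[:-overlap], blended, b[overlap:]])

# Two regions at mismatched levels; the seam is smoothed after blending.
a = np.full(1000, 0.8)
b = np.full(1000, 0.2)
joined = crossfade(a, b, overlap=200)
```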

Fixing Sound with a Sonic Scalpel

In traditional multitrack recording, should a mistake or bad take be recorded onto a new track, it’s a simple matter to start over and re-record over the unwanted take. However, if only a small part of the take was bad, it’s easy to go back and perform a punch-in (Figure 7.31). During this process, the recorder or DAW:

Silently enters into record at a predetermined point

Records over the unwanted portion of the take

Silently falls back out of record at a predetermined point.

A punch can be manually performed on most recording systems; however, DAWs and newer tape machines can be programmed to automatically go into and fall out of record at a predetermined time.

fig7_31.jpg

FIGURE 7.31
Punch-ins let you selectively replace material and correct mistakes. (Courtesy of Steinberg Media Technologies GmbH, a division of Yamaha Corporation, www.steinberg.net)

When punching-in, any number of variables can come into play. If a solo instrument is to be overdubbed, it’s often easy to punch the track without fear of any consequences. If an offending musical section is within a group or ensemble, leakage from the original instrument could find its way into adjacent tracks, making a punch difficult or unwise. In such a situation, it’s usually best to rerecord the piece, pick up at a point just before the bad section and splice (edit insert) it back into the original recording, or attempt to punch the section using the entire ensemble.

From a continuity standpoint, it’s often best to punch-in on a section immediately after the take has been recorded, because changes in mic choice, mic position or the session’s general “vibe” can lead to a bad punch that simply doesn’t match the original take’s general sound. If this isn’t possible, make sure that you match the sounds by carefully documenting the mic choice, placement, preamp type, etc. You’ll be glad you did.

Naturally, performing a “punch” should always be done with care. In some non-DAW cases, allowing the track to continue recording after the intended out-point could cut off a section of the following, acceptable track and likewise require that the following section be redone. Stopping it short could cut off the natural reverb trail of the final note.

It needs to be pointed out that performing a punch using a DAW is often “far” easier than doing the same on an analog recorder. For example:

If the overdub wasn’t that great, you can simply click to “undo” it and start over!

If the overdub was started early and cut into the good take (or went too long), the leading and/or trailing edge of the punch can often be manually adjusted to expose or hide sections after the punch has been performed (a tape editor’s dream).

These beneficial luxuries can go a long way toward reducing operator error (and its associated tensions) during a session.

Comping

When performing a musically or technically complex overdub, most DAWs will let you comp (short for composite) multiple overdubs together into a final, master take (Figure 7.32). Using this process, a DAW can be programmed to automatically enter into and out of record at the appropriate points. When placed into record mode, the DAW will start laying down the overdub into a new and separate track (called a “lane”). At the end of the overdub, it’ll loop back to the beginning and start recording the next pass onto a new and separate lane. This process of laying down consecutive takes will continue, until the best take is done or the artist simply gets tired of recording. Once done, an entire overdub might be chosen, or individual segments from the various takes can be assembled together into a final, master composite overdub. Such is the life of a digital micro-surgeon!

fig7_32.jpg

FIGURE 7.32
A single composite track can be created from several partially acceptable takes.

MIDI Sequencing and Scoring

Most DAWs include extensive support for MIDI (Figure 7.33), allowing electronic instruments, controllers, effects devices, and electronic music software to be integrated with multitrack audio and video tracks. This important feature often includes the full implementation for:

MIDI sequencing, processing and editing

Score editing and printing

Drum pattern and step note editing

MIDI signal processing

Support for linking the timing and I/O elements of an external music application (often via ReWire)

Support for software instruments (VSTi and RTAS)

fig7_33.jpg

FIGURE 7.33
MIDI edit windows with Steinberg’s Cubase/Nuendo DAW. (a) Piano roll edit window. (b) Notation edit window. (Courtesy of Steinberg Media Technologies GmbH, a division of Yamaha Corporation, www.steinberg.net)

Further reading about the wonderful world of MIDI can be found in Chapter 9.

Support for Video and Picture Sync

Most high-end DAWs also include support for displaying a video track within a session, both as a video window that can be displayed on the desktop and in the form of a video thumbnail track (which often appears as a linear guide track within the edit window). Through the use of SMPTE timecode, MTC and wordclock, external video players and edit devices can be locked with the workstation’s timing elements, allowing us to have full “mix to picture” capabilities (Figure 7.34).

Real-Time, On-Screen Mixing

In addition to their ability to offer extensive region edit and definition, one of the most powerful cost- and time-effective features of a digital audio workstation is the ability to offer on-screen mixing capabilities (Figure 7.35), known as mixing “in the box.” Essentially, most DAWs include a digital mixer interface that offers most (if not more) of the capabilities that are offered by larger analog and/or digital consoles—without the price tag and size. In addition to the basic input strip fader, pan, solo/mute and select controls, most DAW software mixers offer broad support for EQ, effects plug-ins (offering a tremendous amount of DSP flexibility), routing, spatial positioning (pan and often surround-sound positioning), total automation (both mixer and plug-in automation), mixing and transport control from an external surface, support for exporting (bouncing) a mixdown to a file—the list goes on and on and on. Further reading on the mixers, consoles and the process of mixing audio can be found in Chapter 17.

fig7_34.jpg

FIGURE 7.34
Most high-end DAW systems are capable of importing a video file directly into the project session window.

fig7_35.jpg

FIGURE 7.35
DAW on-screen mixer. (a) Pro Tools on-screen mixer. (Courtesy of Avid Technology, Inc., www.avid.com) (b) Nuendo on-screen mixer. (Courtesy of Steinberg Media Technologies GmbH, a division of Yamaha Corporation, www.steinberg.net)

DSP Effects

In addition to being able to cut, copy and paste regions within a sound file, it’s also possible to alter a sound file, track or segment using digital signal processing techniques. In short, DSP works by directly altering the samples of a sound file or defined region according to an algorithm (a set of programmed instructions) in order to achieve a desired result. These processing functions can be performed either in real time or in non-real time (offline):

Real-time DSP: Commonly used in most modern-day DAW systems, this process makes use of the computer’s CPU or additional acceleration hardware to perform complex DSP calculations during actual playback. Because no calculations are written to disk in an offline fashion, significant savings in time and disk space can be realized when working with productions that involve complex or long processing events. In addition, the automation instructions for real-time processing are embedded within the saved session file, allowing any effect or set of parameters to be changed, undone and redone without affecting the original sound file.

Non-real-time DSP: Using this method, signal processing (such as changes in level, L/R channel swapping, etc.) can be calculated offline and saved as a unique sound file. In this way, the newly calculated file (containing an effect, volume change, combined comp tracks, submix, etc.) can be played back without the need for additional real-time CPU processing. DAWs often have a specific term for tracks or processing functions that have been written to disk in this way to save on processing—commonly called “locking” or “freezing” a file. These files can almost always be unlocked at a later time to revert to real-time DSP processing. When DSP is “printed” to a new file in non-real time, it’s almost always wise to save both the original and the affected sound files, just in case you need to make changes at a later time (a minimal offline-processing sketch follows this list).
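To make the offline idea concrete, here is a minimal Python sketch (assuming NumPy and a 16-bit WAV source; the file names are hypothetical) that “prints” a simple gain change to a brand-new file while leaving the original untouched:

```python
import numpy as np
import wave

def render_gain_offline(src_path, dst_path, gain_db):
    """Offline DSP: apply a gain change and write the result as a new
    file, leaving the original untouched (16-bit WAV assumed)."""
    with wave.open(src_path, "rb") as src:
        params = src.getparams()
        samples = np.frombuffer(src.readframes(params.nframes),
                                dtype=np.int16)

    gain = 10.0 ** (gain_db / 20.0)      # convert dB to linear amplitude
    processed = np.clip(samples * gain, -32768, 32767).astype(np.int16)

    with wave.open(dst_path, "wb") as dst:
        dst.setparams(params)
        dst.writeframes(processed.tobytes())

# Keep both files so the change can be revisited later
render_gain_offline("vocal_take.wav", "vocal_take_-3dB.wav", -3.0)
```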

Most DAWs offer an extensive array of DSP options, ranging from options that are built into the basic I/O path of any input strip (e.g., basic EQ and gain-related functions), to DSP effects and plug-ins that come bundled with the DAW package, to third-party effects plug-ins that can either be inserted directly into the signal path (as an insert) or placed on an effects bus (as a send) that can be assigned to numerous tracks within a mix. Although the way in which effects are dealt with in a DAW will vary from one make and model to the next, the basic fundamentals will be much the same.

DSP Plug-Ins

Workstations often offer a number of stock DSP effects that come bundled with the program; however, a staggering range of third-party plug-in effects can also be inserted into a signal path, performing any number of tasks that range from the straightforward to the wild-’n’-zany. These effects can be programmed to seamlessly integrate into a host DAW application that conforms to such plug-in platforms as:

DirectX: A DSP platform for the PC that offers plug-in support for sound, music, graphics (gaming) and network applications running under Microsoft Windows (in its various OS incarnations)

AU (Audio Units): Developed by Apple for audio and MIDI technologies in OS X; allows for a more advanced GUI and audio interface

VST (Virtual Studio Technology): A native plug-in format created by Steinberg for use on either a PC or Mac; all functions of a VST effect processor or instrument are directly controllable and automatable from the host program

MAS (MOTU Audio System): A real-time native plug-in format for the Mac that was created by Mark of the Unicorn as a proprietary plug-in format for Digital Performer; MAS plug-ins are fully automatable and do not require external DSP in order to work with the host program

AudioSuite: A file-based plug-in that destructively applies an effect to a defined segment or entire sound file, meaning that a new, affected version of the file is rewritten in order to conserve on the processor’s DSP overhead (when applying AudioSuite, it’s often wise to apply effects to a copy of the original file so as to allow for future changes)

RTAS (Real-Time Audio Suite): A fully automatable plug-in format that was designed for various flavors of Digidesign’s Pro Tools and runs on the power of the host CPU (host-based processing) on either the Mac or PC

TDM (Time-Division Multiplexing): A plug-in format that can only be used with Digidesign Pro Tools systems (Mac or PC) that are fitted with Digidesign Farm cards; this 24-bit, 256-channel path integrates mixing and real-time digital signal processing into the system with zero latency and under full automation

These popular software applications (which are programmed by major manufacturers and smaller startup companies alike) have helped to shape the face of the DAW by allowing us to pick and choose the plug-ins that best fit our personal production needs. As a result, new companies, ideas and task-oriented products are constantly popping up on the market, literally on a monthly basis.

ACCELERATOR PROCESSING SYSTEMS

In most circumstances, the CPU of a host DAW program will have sufficient power and speed to perform all of the DSP effects and processing needs of a project. Under extreme production conditions, however, the CPU might run out of computing steam and choke during real-time playback. Under these conditions, there are a couple of ways to reduce the workload on the CPU: on one hand, tracks could be “frozen,” meaning that the processing functions are calculated in non-real time and then written to disk as a separate file; on the other, an accelerator card (Figure 7.36) that adds extra processing power can be installed in the system, giving it the necessary real-time muscle to perform the required effects calculations. Of course, as computers have gotten faster and more powerful, native processing packages have come onto the market that make use of the computer’s own multi-processor capabilities.

fig7_36.jpg

FIGURE 7.36
The UAD-2 DSP PCIe and Thunderbolt (Mac) or USB (Win) DSP processor and several plug-in examples. (Courtesy of Universal Audio, www.uaudio.com, © 2017 Universal Audio, Inc. All rights reserved. Used with permission)

FUN WITH EFFECTS

The following effects notes describe but a few of the possible effects that can be plugged into the signal path of a DAW; further reading on effects processing can be found in Chapter 15 (Signal Processing).

Equalization

EQ is, of course, a feature that’s often implemented at the basic level of a virtual input strip (Figure 7.37). Most DAW “strips” include an EQ section that gives full parametric control over the entire audible range, offering overlapping control over several bands with a variable degree of bandwidth control (Q). Beyond the basic EQ options, numerous third-party EQ plug-ins are available on the market that vary in complexity, musicality and market appeal (Figure 7.38).
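For the curious, a single band of parametric EQ is typically implemented as a “peaking” biquad filter. The following sketch, assuming NumPy/SciPy and using the widely published RBJ Audio EQ Cookbook coefficients, shows one such band; the frequency, gain and Q values are arbitrary examples:

```python
import numpy as np
from scipy.signal import lfilter

def peaking_eq(x, fs, f0, gain_db, q):
    """One band of parametric EQ: a peaking biquad filter
    (coefficients from the RBJ Audio EQ Cookbook)."""
    a_gain = 10.0 ** (gain_db / 40.0)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)

    b = np.array([1 + alpha * a_gain, -2 * np.cos(w0), 1 - alpha * a_gain])
    a = np.array([1 + alpha / a_gain, -2 * np.cos(w0), 1 - alpha / a_gain])
    return lfilter(b / a[0], a / a[0], x)

# Boost 3 dB at 2.5 kHz with a fairly narrow bandwidth (Q = 2)
fs = 48000
x = np.random.randn(fs)        # one second of noise as a stand-in signal
y = peaking_eq(x, fs, f0=2500.0, gain_db=3.0, q=2.0)
```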

Dynamics

Dynamic range processors (Figures 7.39 and 7.40) can be used to change the signal level of a program. Processing algorithms are available that emulate a compressor (a device that reduces gain above a set threshold by a proportional ratio), a limiter (which reduces gain at a very high ratio above a certain input threshold) or an expander (which increases the overall dynamic range of a program). These gain changers can be inserted directly into a channel or group master track or inserted into the final master output path.
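The gain math behind these processors is surprisingly compact. This simplified Python sketch (assuming NumPy) computes static compression with no attack or release smoothing, which real designs add by filtering the gain signal over time; the threshold and ratio values are arbitrary:

```python
import numpy as np

def compress(x, threshold_db, ratio):
    """Static compressor gain computer: levels above the threshold are
    reduced by the set ratio (a limiter is just a very high ratio)."""
    eps = 1e-12
    level_db = 20 * np.log10(np.abs(x) + eps)        # instantaneous level
    over = np.maximum(level_db - threshold_db, 0.0)  # amount above threshold
    gain_db = -over * (1.0 - 1.0 / ratio)            # gain reduction in dB
    return x * 10.0 ** (gain_db / 20.0)

# 4:1 compression above -12 dBFS on a test tone
x = np.sin(2 * np.pi * 440 * np.arange(48000) / 48000)
y = compress(x, threshold_db=-12.0, ratio=4.0)
```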

fig7_37.jpg

FIGURE 7.37
DAWs offer a stock EQ on their channel strip. (a) 7-Band Digirack EQIII plug-in for Pro Tools. (Courtesy of Avid Technology, Inc., www.avid.com) (b) EQ plug-in for Cubase/Nuendo. (Courtesy of Steinberg Media Technologies GmbH, a division of Yamaha Corporation, www.steinberg.net)

fig7_38.jpg

FIGURE 7.38
EQ plug-ins (a) FabFilter Pro-Q 24-band EQ plug-in for mixing and mastering. (Courtesy of FabFilter, www.fabfilter.com) (b) Sonnox EQ and Filters for Apollo and the UAD effects processing card. (Courtesy of Universal Audio, www.uaudio.com © 2017 Universal Audio, Inc. All rights reserved. Used with permission)

In addition to the basic complement of stock and third-party dynamic range processors, wide assortments of multiband dynamic plug-in processors (Figure 7.41) are available for general and mastering DSP applications. These processors allow the overall frequency range to be broken down into various frequency bands. For example, such a plug-in could be inserted into a DAW’s main output path, allowing the lows to be compressed while the mids are lightly limited and the highs are simultaneously de-essed to reduce harsh sibilance in the mix (a basic band-splitting sketch follows).
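Conceptually, a multiband processor splits the signal with crossover filters, treats each band separately and then sums the results. A bare-bones split might look like the following sketch (assuming NumPy/SciPy); note that commercial designs use phase-compensated crossovers (such as Linkwitz-Riley) so that the recombined bands sum flat, which this naive Butterworth split does not guarantee:

```python
import numpy as np
from scipy.signal import butter, sosfilt

def split_bands(x, fs, low_cut=250.0, high_cut=4000.0):
    """Split a signal into low/mid/high bands so each band can get its
    own dynamics treatment (compress lows, limit mids, de-ess highs)."""
    low = sosfilt(butter(4, low_cut, "lowpass", fs=fs, output="sos"), x)
    mid = sosfilt(butter(4, [low_cut, high_cut], "bandpass",
                         fs=fs, output="sos"), x)
    high = sosfilt(butter(4, high_cut, "highpass", fs=fs, output="sos"), x)
    return low, mid, high

fs = 48000
x = np.random.randn(fs)            # stand-in program material
low, mid, high = split_bands(x, fs)
# ... process each band independently, then sum them back together:
y = low + mid + high
```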

fig7_39.jpg

FIGURE 7.39
DAW stock compressor plug-ins. (a) Compressor/limiter plug-in for Pro Tools. (Courtesy of Avid Technology, Inc., www.avid.com) (b) Compressor plug-in for Cubase/Nuendo. (Courtesy of Steinberg Media Technologies GmbH, a division of Yamaha Corporation, www.steinberg.net)

fig7_40.jpg

FIGURE 7.40
Various compressor plug-ins for Apollo and the UAD effects processing card. (Courtesy of Universal Audio, www.uaudio.com © 2017 Universal Audio, Inc. All rights reserved. Used with permission)

fig7_41.jpg

FIGURE 7.41
Multiband compressor plug-ins. (a) Multiband compressor for Pro Tools. (Courtesy of Avid Technology, Inc., www.avid.com) (b) Multiband compressor for Cubase/Nuendo. (Courtesy of Steinberg Media Technologies GmbH, a division of Yamaha Corporation, www.steinberg.net)

Delay

Another important effects category that can be used to alter and/or augment a signal revolves around the delay and regeneration of sound over time. These time-based effects use delay (Figure 7.42) to add a perceived depth to a signal or to change the way we perceive the dimensional space of a recorded sound. A wide range of time-based plug-in effects exist that are all based on the use of delay (and/or regenerated delay) to achieve such results as (a basic feedback-delay sketch follows the list):

Delay

Chorus

Flanging

Reverb
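All four of these effects grow out of one structure: a delay line with feedback (regeneration). The sketch below (assuming NumPy; the timing values are arbitrary) produces decaying echoes. Shorten the delay to a few milliseconds and slowly modulate it and you’re in chorus/flanger territory, while reverb stacks many such delays into a dense wash:

```python
import numpy as np

def feedback_delay(x, fs, delay_ms, feedback=0.4, mix=0.35):
    """Basic delay line with regeneration: each echo is fed back into
    the line at a reduced level, producing a decaying echo train."""
    d = int(fs * delay_ms / 1000.0)      # delay length in samples
    wet = np.zeros(len(x))
    for n in range(d, len(x)):
        wet[n] = x[n - d] + feedback * wet[n - d]
    return (1.0 - mix) * x + mix * wet   # blend dry and wet signals

fs = 48000
t = np.arange(fs * 2) / fs
x = np.sin(2 * np.pi * 220 * t) * np.exp(-4 * t)   # plucked-style tone
echoed = feedback_delay(x, fs, delay_ms=375.0)     # dotted-eighth at 120 BPM
```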

fig7_42.jpg

FIGURE 7.42
Various delay plug-ins for Apollo and the UAD effects processing card. (Courtesy of Universal Audio, www.uaudio.com © 2017 Universal Audio, Inc. All rights reserved. Used with permission)

Pitch and Time Change

Pitch change functions make it possible to shift the relative pitch of a defined region or track either up or down by a specific percentage ratio or musical interval. Most systems can shift the pitch of a sound file or defined region by determining a ratio between the current and the desired pitch and then adding (to lower the pitch) or dropping (to raise it) samples from the existing region or sound file. In addition to raising or lowering a sound file’s relative pitch, most systems can combine variable sample rate and pitch-shift techniques to alter the duration of a region or track. These pitch- and time-shift combinations make possible such changes as (a naive resampling sketch follows the list):

Pitch shift only: A program’s pitch can be changed while recalculating the file so that its length remains the same.

Change duration only: A program’s length can be changed while shifting the pitch so that it matches that of the original program.

Change in both pitch and duration: A program’s pitch can be changed while also having a corresponding change in length.
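The add-or-drop-samples approach described above is easy to sketch. Assuming NumPy, the following illustration shifts pitch by resampling, so pitch and duration change together (the third mode above); the pitch-only and duration-only modes require a time-stretch algorithm, such as a phase vocoder or granular processing, layered on top:

```python
import numpy as np

def resample_pitch_shift(x, semitones):
    """Naive pitch shift by resampling: dropping samples raises pitch,
    adding (interpolating) samples lowers it. Pitch and duration change
    together; keeping one fixed requires time-stretching as well."""
    ratio = 2.0 ** (semitones / 12.0)          # frequency ratio per semitone
    old_idx = np.arange(len(x))
    new_idx = np.arange(0, len(x) - 1, ratio)  # read positions in the original
    return np.interp(new_idx, old_idx, x)

x = np.sin(2 * np.pi * 440 * np.arange(48000) / 48000)  # A4 at 48 kHz
up = resample_pitch_shift(x, +3)    # up a minor third; ~16% shorter
down = resample_pitch_shift(x, -3)  # down a minor third; ~19% longer
```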

When combined with shifts in time (delay), changes in pitch open up a world of time-based effects, alterations, tempo changes and more. For example:

Should a note be played out of pitch—instead of going back and doing an overdub, it’s a simple matter to zoom in on that note and shift its pitch up or down until it’s right.

Using pitch-shift processing, it’s likewise simple to perform time stretching for any number of tasks. For example:

Should you be asked to produce a radio commercial that is 30 seconds long, and the producer tells you (after the fact) that it has to be 28 seconds—it’s a simple matter to time stretch the entire commercial by a ratio of 28/30 (roughly 93% of its original length), trimming the 2 seconds off.

The tempo of an entire song can be globally shifted in time or pitch, at the touch of a button, to change the entire key or tempo of a song.

Should you import a musical groove that’s at a different tempo than your session tempo, most DAWs will let you slip the groove in time so that its tempo fits perfectly, simply by dragging the boundaries. Changing the groove’s pitch is likewise a simple matter.

Dedicated plug-ins can also be used to automatically tune a vocal or instrumental track, so that the intonation is corrected, smoothed out or exaggerated for effect.

A process called “warping” can be used to apply micro changes in musical timing (using time and pitch shift processing) to fit, modify, shake up or otherwise mangle a section within a passage or groove. Definitely fun stuff!

If you’re beginning to get the idea that there are few limitations to the wonderful world of pitch shifting—you’re right. However, there are definitely limits and guidelines that should be adhered to, or at least experimented with. For starters:

A single program will often have several algorithms that can be applied to a passage (depending on whether it’s percussive, melodic or continuous in nature). Not all algorithms are created equal, and the algorithms of one program can easily sound totally different from those of another. The results aren’t straightforward or set in stone, as the processing is often too complex to predict … it will usually require careful experimentation and artistry.

Shifting in time or pitch (two sides of the same coin) by too great a value can cause audible side effects. You’ll simply have to experiment.

ReWire

ReWire and ReWire2 are special protocols that were co-developed by Propellerhead Software and Steinberg to allow audio to be streamed between two simultaneously running computer applications. Unlike a plug-in, where a task-specific application is inserted “into” a compatible host program, ReWire allows the audio and timing elements of an independent client program to be seamlessly integrated into another host program. In essence, ReWire provides virtual patch cords that link the two programs together within the computer. A few of ReWire’s supporting features include:

Real-time streaming of up to 64 separate audio channels (256 with ReWire2) at full bandwidth from one program into its host program application

Automatic sample-accurate synchronization between the audio in the two programs

An ability to allow the two programs to share a single soundcard or interface

Linked transport controls that can be controlled from either program (provided it has some kind of transport functionality)

An ability to allow numerous MIDI outs to be routed from the host program to the linked application (when using ReWire2)

A reduction in the total system resources that would be required if the programs were run independently

fig7_43.jpg

FIGURE 7.43
ReWire allows a client program to be inserted into a host program (often a DAW) so the programs can run simultaneously in tandem.

This useful protocol essentially allows a compatible program to be plugged into a host program in a tandem fashion. As an example, ReWire could allow Propellerhead’s Reason (client) to be “ReWired” into Steinberg’s Cubase DAW (host), allowing all MIDI functions to pass through Cubase into Reason while patching the audio outs of Reason into Cubase’s virtual mixer inputs (Figure 7.43). For further information on this useful protocol, consult the supporting program manuals and web videos.

Mixdown and Effects Automation

One of the great strengths of the “in the box” age is how easily all of the mix and effects parameters can be automated and recalled within a mix. The ability to change levels, pan and virtually any other parameter within a project makes it possible for a session to be written to disk, saved and recalled at a second’s notice. In addition to grabbing a control and moving it manually (either virtually on-screen or from a physical controller), another interface style for controlling automation parameters (known as rubber-band controls) lets you view, draw and edit variables as a graphic line that details the various automation moves over time.
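A rubber-band lane is essentially a list of (time, value) breakpoints that the DAW interpolates at every sample. A minimal sketch of that idea, assuming NumPy, straight-line segments and hypothetical breakpoint values:

```python
import numpy as np

def automation_curve(breakpoints, num_samples, fs):
    """Render a rubber-band style automation lane: straight-line
    segments between (time, value) breakpoints, one value per sample."""
    times = [t for t, _ in breakpoints]
    values = [v for _, v in breakpoints]
    sample_times = np.arange(num_samples) / fs
    return np.interp(sample_times, times, values)

# Fade in over 2 s, hold, then dip to half level at 6 s
points = [(0.0, 0.0), (2.0, 1.0), (5.0, 1.0), (6.0, 0.5)]
fs = 48000
gain = automation_curve(points, 8 * fs, fs)
# audio_out = audio_in * gain   # applying the volume automation
```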

As with any automation moves, these rubber-band settings can be undone, redone or recalled back to a specific point in the edit stage. Often (but not always), however, manual fader volume moves within a mix can’t be “undone” back to a specific point in the mix. In any case, one of the best ways to save (and revert to) a particular mix version (or various alternate mix versions) is simply to save a specific mix under a unique, descriptive session file title (e.g., gamma_ultraviolet_radiomix01.ses) and then keep on working. By the way, it’s always wise to save your mixes on a regular basis (many a great mix has been lost in a crash because it wasn’t saved or the auto-save function didn’t work properly); in addition, progressively saving your mixes under various names or version numbers (mix01.ses, mix02.ses, etc.) can come in handy if you need to revert to a past version. In short, save often and save regularly!

Exporting a Final Mixdown to File

Once your mix is ready, most DAW systems are able to export (bounce or print) part or all of a session to a single sound file or set of sound files (Figure 7.44).

fig7_44.jpg

FIGURE 7.44
Most DAWs can export (bounce) session sound files, effects and automation to a final mixdown track.

An entire session or defined region can be exported as a single interleaved file (containing multiple channels that are encoded into a single L-R-L-R sound file) or saved as separate, individual (L.wav and R.wav) sound files. Of course, a surround or multichannel mix can likewise be exported as a single interleaved file or as separate files.

Often, the session can be exported in non-real time (a faster-than-real-time process that can include all mix, plug-in effect, automation and virtual instrument calculations) or in real time. Usually, a session can be mixed down to a number of final sound file and bit/sample-rate formats (a minimal file-writing sketch follows).
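As a rough illustration of what the bounce stage actually writes to disk, here’s a Python sketch using only NumPy and the standard-library wave module. It renders the same stereo material both as one interleaved (L-R-L-R) file and as split mono files, assuming 16-bit output and hypothetical file names:

```python
import numpy as np
import wave

def write_wav(path, channels, fs):
    """Write a 16-bit WAV; 'channels' is a list of float arrays in -1..1."""
    data = np.column_stack(channels)            # interleave: L-R-L-R ...
    pcm = (np.clip(data, -1, 1) * 32767).astype(np.int16)
    with wave.open(path, "wb") as f:
        f.setnchannels(len(channels))
        f.setsampwidth(2)                       # 2 bytes = 16-bit
        f.setframerate(fs)
        f.writeframes(pcm.tobytes())

fs = 48000
left = np.sin(2 * np.pi * 440 * np.arange(fs) / fs) * 0.5
right = np.sin(2 * np.pi * 554 * np.arange(fs) / fs) * 0.5

write_wav("mix_interleaved.wav", [left, right], fs)  # one L-R-L-R file
write_wav("mix_L.wav", [left], fs)                   # split files
write_wav("mix_R.wav", [right], fs)
```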

POWER TO THE PROCESSOR … UHHH, PEOPLE!

Speaking of having enough power and speed to get the job done, there are definitely some tips and tricks that can help you get the most out of your digital audio workstation. Let’s take a look at some of the more important items. It’s vital to keep in mind that keeping up with technology can have its triumphs and its pitfalls. No matter which platform you choose to work with, there’s no substitute for reading, research and talking with your peers about your techno needs. It’s generally best to strike a balance between our needs, our desires, the current state of technology and the relentless push of marketing to grab our money—and it’s usually best to take a few big breaths (days, weeks, etc.) before making any important decisions.

1 Get a Computer That’s Powerful Enough

With the increased demand for higher bit/sample-rate resolution, more tracks, more plug-ins and more of everything, you’ll obviously want to make sure that your computer is fast and powerful enough to get the job done in real time without spitting and sputtering digits. This often means getting the most up-to-date and powerful computer/processor system that your budget can reasonably handle. With the advent of 32- and 64-bit OS platforms and quad- or eight-core processors (chips that effectively contain multiple CPUs), you’ll want to make sure that your hardware will support these features before taking the upgrade plunge.

The same goes for your production software and driver availability. If any part of this hardware, software and driver equation is missing, the system will not be able to make use of these advances. Therefore, one of the smartest things you can do is research the system requirements that are needed to operate your production system, and then make sure that your system exceeds these figures by a comfortable margin, making allowances for future technological advances and the additional processing requirements associated with them. If you have the budget to add some of the extra bells and whistles that go with living on the cutting edge, take the time to research whether or not your system will actually be able to deliver the extra goods when these advances hit the store shelves.

2 Make Sure You Have Enough Fast Memory

It almost goes without saying that your system will need to have an adequate amount of random access memory (RAM) and hard-disk storage in order for you to take full advantage of your processor’s potential and your system’s data storage requirements. RAM is used as a temporary storage area for data that is being processed and passed to and from the computer’s central processing unit (CPU). Just as there’s a “need for speed” within the computer’s CPU, it’s usually best that we install memory with the fastest possible transfer speed that can be supported by the computer. It’s also important that you install as much memory as your computer and budget will allow. Installing too little RAM will force the OS to write this temporary data to and from the hard disk, a process that’s much slower than transfer to RAM and causes the system’s overall performance to slow to a crawl. For those who are making extensive use of virtual sampling technology (whereby samples are transferred to RAM), it’s usually a wise idea to throw as much RAM into the system as possible.

Hard-disk requirements for a system are certainly an important consideration. The general considerations include:

Need for size: Obviously, you’ll want to have drives that are large enough to meet your production storage needs. With the use of numerous tracks within a session, often at resolutions of 24-bit/96 kHz, data storage requirements can quickly become an important consideration.

Need for speed: With the current track count and sample rate requirements that can commonly be encountered in a DAW session, it’s easy to understand how slower disk access times (the time that’s required for the drive heads to move from one place to another on a disk and then output that data) become important.

3 Keep Your Production Media Separate

Whenever possible, it’s important that you keep your program and operating system data on a separate drive from the one that holds your production media data. This is due to the simple fact that a computer periodically has to check in and interact with both the currently running program and the OS. Should the production media be on the same disk, interruptions in audio data can occur as the disk takes time to go perform program-related tasks, resulting in a reduction in media and program data access and throughput time (not good).

4 Update Your Drivers … With Caution!

In this day and age of software revisions, it’s always a good idea to go on the Web and search for the latest update to a piece of software or a hardware driver. Even if you’ve just bought a product new out of the box, it might easily have been sitting on a music store shelf for over a year. By going to the company website and downloading the latest versions, you’ll be assured that it has the latest and greatest capabilities. In addition, it’s always wise to save these updates to disk in your backup directories. This way, if you’re without Internet and there’s a hardware or software problem, you’ll be able to reload the software or drivers and should be on your way in no time.

5 Going (At Least) Dual Monitor

Q: How do you fit the easy visual reference of multiple programs, documents and a digital audio workstation onto a single video monitor?

A: You often don’t. Those of you who rely on your computer for recording and mixing, surfin’, writing, etc., should definitely think about doubling your computer’s visual real estate by adding an extra monitor to your computer system.

Folks who have never seen or thought much about adding a second monitor (Figure 7.45) might be skeptical and ask, “What’s the big deal?” But, all you have to do is sit down and start opening programs onto a single screen just to see how fast your screen can get filled up. When using a complicated production program (such as a professional DAW or a high-end graphics app), getting the job done with a single monitor can be an exercise in frustration. There’s just too much we need to see and not enough screen real estate to show it on.

Truth is, in this age of Mac and Windows, adding an extra monitor is a fairly straightforward proposition. Most systems can deal with two or more monitors with little or no fuss. Getting hold of a second monitor could be as simple as grabbing an unused one from the attic or picking up an inexpensive new one.

fig7_45.jpg

FIGURE 7.45
You can never have enough visual real estate: (a) side-by-side; (b) top screen shows edit screen, while the bottom (possibly a touch screen) displays the mixer in a traditional layout.

Once you’ve installed the hardware, the software side of building a dual-monitor system is relatively straightforward. Simply call up the resolution settings in the control panel or System Preferences and change the resolution settings and orientation for each monitor. Once you extend your desktop across both monitors, you should be well on your way.

Those of you who use a laptop can also enjoy many of these benefits by plugging the second monitor into the video out and following the setup steps that are recommended by your computer’s operating system. You should be aware that many laptops are limited in the way they share video memory and might be restricted in the resolution levels that can be selected.

This might not seem much like a recording tip, but once you get a dual-monitor system going, your whole approach to producing content (of any type) on a computer will instantly change and you’ll quickly wonder how you ever got along without it!

6 Keeping Your Computer Quiet

Noise! Noise! Noise! It’s everywhere! It’s in the streets, in the car, and even in our studios. It seems like we spend all those bucks getting the best sound possible, only to gunk it all up by placing this big computer box that’s full of noisy fans and whirring hard drives smack in the middle of a critical listening area. Fortunately, a number of companies have begun to find ways to reduce the problem. Here are a few solutions:

Whenever possible, use larger, low-rpm fans to reduce noise.

Certain PC motherboards come bundled with a fan speed utility that can monitor the CPU and case heat and adjust the fan speeds accordingly.

Route your internal case cables carefully. They could block the flow of air, which can add to heat and noise problems.

A growing number of hard-disk drives are available as quiet drives. Check the manufacturer’s noise ratings.

You might consider placing the computer in a well-ventilated area, just outside the production room. Always pay special attention to ventilation (both inside and outside the computer box), because heat is a killer that’ll reduce the life span of your CPU. (Note: When building my own studio I designed a special alcove/glass door enclosure that houses my main computer—no muss, no fuss, and almost no noise.)

Thanks to gamers and audio-aware buyers, a number of companies exist that specialize in quiet computer cases, fans and components. These are always fun to check out on the web.

7 Backup, Archive and Networking Strategies

It’s pretty much always true that it’s not a matter of if an irreplaceable hard drive will fail, but when. At a time that we least expect it, disaster could strike. It’s our job to be prepared for the inevitable. This type of headache can, of course, be partially or completely averted by backing up your active program and media files, as well as by archiving your previously created sessions and then making sure that these files are also backed up.

As previously stated, it’s generally wise to keep your computer’s operating system and program data on a separate hard disk (usually the boot drive) and then store your session files on a separate media drive. Let’s take this as a practical and important starting point. Beyond this premise, as most of you are quite aware, the basic rules of hard-disk management are extremely personal, and will often differ from one computer user to the next (Figure 7.46). Given these differences, I’d still like to offer up some basic guidelines:

It’s important to keep your data (of all types) well organized, using a system that’s both logical and easy to follow. For example, online updates of a prog ram or hardware driver downloads can be placed into their own directories; data relating to your studio can be placed in the “studio” directory and subdirectories; documents, MP3s, and all the trappings of day-to-day studio operations can be also placed on the disk, using a system that’s easy to understand.

Session data should likewise be logical and easy to find. Each project should reside in its own directory and each song should likewise reside in its own subdirectory of that session project directory.

Remember to save various take versions of a mix. If you just added the vocals to a song, go ahead and save the session under a new version name. This acts as an “undo” function that lets you go back to a specific point in a session. The same goes for mixdown versions. If someone likes a particular mix version or effect, go ahead and save the mix under a new name or version number (my greatest song 1 ver15.ses or my greatest song 1 ver15 favorite effect.ses). In fact, it’s generally wise to save various versions throughout the course of the mix. These session files are usually small and might save your butt at a later point in time. As a suggestion, you might want to create a “mix back” subdirectory in the session/song folder and move the older session files there, so you don’t end up being confused by 80 backup take names (a small version-saving sketch follows this list).
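If your DAW doesn’t automate incremental saves for you, they’re easy to script outside the program. Here’s a small hypothetical Python helper (the .ses extension is just borrowed from the examples above; real DAWs use their own session formats) that copies a session file to the next free version number:

```python
import os
import re
import shutil

def save_new_version(session_path):
    """Copy a session file to the next numbered version
    (mysong_v01.ses, mysong_v02.ses, ...) so every stage of the
    mix can be recalled later."""
    folder, name = os.path.split(session_path)
    base, ext = os.path.splitext(name)
    pattern = re.compile(re.escape(base) + r"_v(\d+)" + re.escape(ext) + "$")
    existing = [int(m.group(1)) for f in os.listdir(folder or ".")
                if (m := pattern.match(f))]
    next_num = max(existing, default=0) + 1
    new_path = os.path.join(folder, f"{base}_v{next_num:02d}{ext}")
    shutil.copy2(session_path, new_path)   # copy, never overwrite
    return new_path

# save_new_version("sessions/my_greatest_song.ses")  # -> ..._v01.ses, etc.
```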

fig7_46.jpg

FIGURE 7.46
Data and hard-drive management (along with a good backup scheme) are extremely important facets of media production.

With regard to backup strategies, a number of options exist. In this day and age, hard drives are the most robust and cost-effective way of backing up your precious data. Here are some options, although you may find better ones that fit your own application and working scale:

Primary data storage drive: Drives (in addition to your main OS drive) that are within your computer can be used to store your primary (master) data files. It’s often good to view a specific drive as the vault where all information is held (think Fort Knox): all data, sound files, etc. eventually need to make it onto that drive.

Backup drive or drives: External or portable high-capacity (2G and higher) drives can then be used to back up your program and media data. The latter portable drives are physically small and can be used with your laptop and other computers in a straightforward way.

Off-site backup drive: It’s almost always important to store a backup drive off-site.

Having a relatively up-to-date backup that’s stored in a bank safety-deposit box or at a second location (anywhere safe and secure) can literally save a crucial part of your personal life in case of theft or fire. Seriously—your hardware can be replaced, but your data can’t.

Cloud: It’s slow and cumbersome, but storing important data on another cloud network can be an effective backup scheme.

All of the above are simply suggestions. I rarely give opinions like these in a book; however, they’ve served me so well and for so long that I had to pass them along.

COMPUTER NETWORKING

Beyond the concept of connecting external devices to a single computer, a larger concept hits at the heart of the connectivity age: networking. The ability to set up and make use of a local area network (LAN) can be extremely useful in the home, studio and/or office, in that it can be used to link multiple computers and share data across various platforms and OS types. In short, a network can be set up in a number of different ways, with varying degrees of complexity and administrative levels. There are two common ways that data can be handled over a LAN (Figure 7.47):

The first is a system whereby the data that’s shared between linked computers resides on the respective computers and is communicated back and forth in a decentralized manner.

The second makes use of a centralized computer (called a server) that uses an array of high-capacity hard drives to store all of the data that relates to the everyday production aspects of a facility. Often, such a system will have a redundant set of drives (RAID) that actually clones the entire system on a moment-to-moment basis as a safety backup procedure. In larger facilities where data integrity is highly critical, a set of backup tapes may be made on a daily basis for extra insurance and archival purposes.

fig7_47.jpg

FIGURE 7.47
Local area network (LAN) connections. (a) Data can be shared between independent computers in a home or workplace LAN environment. (b) Computers or computer terminals may be connected to a centralized server, allowing data to be stored, shared and distributed from a central location and/or on the web.

No matter what level of complexity is involved, some of the more common uses for working with a network connection include:

Sharing files: Within a connected household, studio or business, a LAN can be used to share virtually anything (files, sound files, video images, etc.) throughout the connected facility. This means that various production rooms, studios and offices can simultaneously share and swap data and/or media files in a way that’s often transparent to the users.

Shared Web connection: One handy aspect of using a LAN is the ability to share an Internet connection over the network from a single connected computer or server. The ability to connect from any computer with ease is just another reason why you should strongly consider wiring your studio and/or house with LAN connections.

Archiving and backup: In addition to the benefits of archiving and backing up data with a server system—even the simplest LAN can be a true lifesaver. For example, let’s say that we need to make a backup of a session. In this situation, we could simply run a backup to the main server that’s connected to the system, and continue working away on our DAW, without interruption—or the backups could automatically run in the background after work hours.

Accessing sound files and sample libraries: It goes without saying that sound and sample files can be easily accessed from any connected computer. Actually, if you’re wireless, you could go out to the pool, download or directly access the needed session files and soak up the sun while working on your latest project!

On a final note, those who are unfamiliar with networking are urged to learn about this powerful and easy-to-use data distribution and backup tool for the pro or project studio. For a minimal investment in cables, hubs and educational reading, you might be surprised at the time-, trouble- and life-saving benefits that will be almost instantly realized.

8 Session Documentation

Most of us don’t like to deal with housekeeping. But when it comes to recording and producing a project, documenting the creative process can save your butt after the session dust has settled—and help make your post-production life much easier (you never know when something will be reissued/remixed). So let’s discuss how to document the details that crop up before, during and after the session. After all, the project you save might be your own!

DOCUMENTING WITHIN THE DAW

One of the simplest ways to document and improve a session’s workflow is to name a track before you press the record button, because most DAWs will use that as a basis for the file name. For example, by naming a track “Jenny’s lead voc take 5,” most DAWs will automatically save and place the newly recorded file into the session as “Jenny’s lead voc take 5.wav” (or .aif). Locating this track later would be a lot easier than rummaging through sound files only to find that the one that you want is “Audio018–05.”

Also, make use of your DAW’s notepad (Figure 7.48). Most programs offer a scratchpad function that lets you fill in information relating to a track or the overall project; use this to name a specific synth patch, note the mic used on a vocal, and include other information that might come in handy after the session’s specifics have been long forgotten.

Markers and marker tracks can also come in super-handy. These tracks can alert us to mix, tempo and other kinds of changes that might be useful in the production process. I’ll often place the lyrics into a marker track, so I can sing the track myself without the need for a lead sheet, or to help indicate phrasings to another singer.

fig7_48.jpg

FIGURE 7.48
Cubase/Nuendo Notepad apps. (Courtesy of Steinberg Media Technologies GmbH, a division of Yamaha Corporation, www.steinberg.net)

MAKE DOCUMENTATION DIRECTORIES

The next step toward keeping better track of details is to create a “Song Name Doc” directory within the song’s session, and fill that folder with documents and files that relate to the session such as:

Your contact info

Song title and basic production notes (composer, lyricist, label, business and legal contacts)

Producer, engineer, assistant, mastering engineer, duplication facility, etc. (with contact info)

Original and altered tempos, tempo changes, song key, timecode settings, etc.

Original lyrics, along with any changes (changed by whom, etc.)

Additional production notes

Artist and supporting cast notes (including their roles, musician costs, address info, etc.)

Lists of any software versions and plug-in types, as well as any pertinent settings (you never know if they’ll be available at a future time, and a description and screenshot might help you to duplicate it within another app)

Lists of budget notes and production dates (billing hours, studio rates and studio addresses—anything that can help you write off the $$$)

Scans of copyright forms, session contracts, studio contracts and billings

Anything else that’s even remotely important

In addition, I’ll often take screenshots of some of my more complicated plug-in settings and place these into this folder. If I have to redo the track later for some reason, I refer to the screenshot, so I can start reconstruction. Photos or movie clips can also be helpful in documenting which type of mic, instrument and specific placements were used within a setup. You can even use pictures to document outboard hardware settings and patch arrangements. Composers can use the “Doc” folder to hold original scratchpad recordings that were captured on your cell phone or message machine.

Furthermore, a “Song Name Graphics” directory can hold the elements, pictures and layouts that relate to the project’s artwork; “Song Name Business” and “Project Name Artwork” directories, etc., might also come in handy.

9 Accessories and Accessorize

I know it seems like an afterthought, but there’s an ever-growing list of hardware and travel accessories that can help you to take your portable rig on the road. Just a small listing includes:

Laptop backpacks for storing your computer and gear in a safe, fun case

Pad stands and cases

Instrument cases and covers

Flexible LED gooseneck lights that let you view your keyboard in the dark or on-stage

Laptop DJ stands for raising your laptop above an equipment-packed table

IN CLOSING

At this time, I’d like to refer you to the many helpful pointers in Chapter 22. I’m doing this in the hope that you’ll read this section twice (at least)— particularly the discussion on project preparation, session documentation and backup/archive strategies. I promise that the time will eventually come when you’ll be glad you did.
