Logic’s palette of software instruments provides a wide array of sound generators for professional production. As this list grows, understanding how each Logic instrument works becomes increasingly important when choosing sounds suited to the needs of a project.
In this lesson, you will explore selected instruments that represent each type of sound generator that Logic offers, and you’ll learn how best to integrate your external MIDI hardware. In addition, you will try your hand at programming two powerful instruments: Ultrabeat and Sculpture.
Logic Pro 8 comes with 13 powerful software instruments (not including the 20 GarageBand instruments) ranging from emulations of vintage instruments to tools that provide revolutionary ways to create and shape sounds. To better understand the unique characteristics of each instrument and its application, it is helpful to group the instruments according to the way they generate sound.
Most of Logic’s software instruments share interface features that represent how sound is generated and shaped. With synthesizers, it is especially important to trace the signal flow through controls that affect a particular aspect of the sound. Let’s look at two instruments in Logic’s synthesizer family and compare their interfaces.
The template opens, followed by a Save As dialog.
If you didn’t complete Lesson 1, a version of the template used in this lesson is included in the following folder: Logic 8_BTB_Files > Lessons > Templates > Advanced Logic.logic. In order to do the exercises in this lesson, you must copy the file to the following location on your hard drive: ~/Library/Application Support/Logic/Project Templates.
A new project file opens based on the template. For this exercise, it is not necessary to build a project folder and include assets. By clicking Cancel, you can open up a project file without any links to assets.
Logic’s software instruments come in a variety of formats to accommodate mono, stereo, multi output, or even 5.1 projects. The formats available to you depend on the specific instrument.
Look at the ES2’s interface and follow the signal flow in this instrument. On the upper-left side, you can see the three oscillators that are an integral part of its sound generation.
Surrounding each oscillator are controls for tuning and mixing.
Moving to the right, you find the filter section, where the frequency spectrum of the raw sound is shaped.
At the far right is the output section for the ES2, with controls for volume and effects (distortion, chorus, flanger, and phaser).
Below the main part of the graphical interface is the modulation section, where you can manipulate any of the ES2’s controls via any other parameter and real-time input.
Now look at the ES1’s interface. On the far left are the oscillators, along with controls that directly affect them (octave selection and the mix between the primary oscillator and the sub-oscillator).
To the right of the oscillator section is the filter section, including the Key slider for controlling the cutoff frequency that is modulated by the keyboard pitch.
Farther to the right is the output section.
At the bottom of the synthesizer is the modulation section.
The interfaces you just looked at are nearly identical; the differences in controls pertain to their unique sound-generation processes. In general, a signal flows from left to right in all of the Logic synthesizers, with sections that closely parallel those in ES1 and ES2.
Note that even though EXS24 mkII is a sampler, it shares many interface characteristics with the ES2 and ES1 synthesizers. It has filter, output, and modulation sections in similar places (the modulation matrix is identical to the ES2’s).
An alert message appears asking if you want to save the project before closing.
As this project doesn’t contain any project data, it is not necessary to save it.
Ultrabeat’s inspiration stems from the drum machines of the 1980s as well as the currently popular sample-based hardware groove boxes. Ultrabeat is similar to them in functionality, offering both sound generation and integrated step sequencing.
What truly sets Ultrabeat apart, however, are its multiple sound sources (analog synthesis, FM, audio sample, physical modeling), built-in signal processing (bit crushing, distortion, ring modulation, EQ, and stereo effects), sophisticated step sequencing, and highly flexible sound architecture.
Middle C may be designated as either C4 or C3, depending on the manufacturer of your MIDI keyboard. You will need to set the “Display Middle C as” preference to C3 (Yamaha) to accurately follow the directions within this exercise (and others throughout the book). This setting can be found in the Preferences > Display > General tab.
When you select a software instrument in the track list for the first time, there might be a slight delay (around 100 milliseconds) before it responds. This is because Logic does not engage live mode until it receives its first MIDI message. The delay doesn’t affect the playback of sequenced material, but it can interfere with live performance and tracking. If you require perfect timing for the first played note, send a silent MIDI event in advance (for example, sustain pedal, pitch bend, or modulation wheel data).
In addition to two octaves of individually mapped percussion sounds, a kit contains a slot for a sound that is automatically pitch-mapped over three octaves.
You should hear an analog synthesizer bass sound that changes pitch as you move up and down the keyboard.
The left side of Ultrabeat’s interface contains the Assignment section, which holds the 25 drum sounds of a drum kit along with a mixer. Each drum sound has independent parameters for volume, soloing, muting, pan position, and audio output.
The main section of the interface changes with each selected sound. This is because every drum sound has its own independent sound-generation, filter, modulation, processing, and volume settings, which are viewed by clicking its name.
The interface changes with each new note played. When Voice Auto Select is on, the most recently triggered note is displayed.
It is the only oscillator currently active (the power button at the left is lit), so it is responsible for generating the raw sound that makes up the kick drum.
The oscillator is set to Phase Osc, which uses the Slope, Saturation, and Asymmetry controls to shape the waveform into almost any basic synthesizer waveform.
The waveform changes slowly from a square wave to a slightly rounded triangle.
You can hear the sound change as you transition toward the square wave.
This kick drum needs less “beater” (midrange click) to suit the project you will be working with. You can move to the EQ controls in the processing section of Ultrabeat to see how you can change the sound.
Note that the kick drum sound has a slight parametric dip at 170 Hz and a rather large boost centered at 1600 Hz.
The peak is highlighted, and a dot appears at the apex.
If you are familiar with Logic’s Channel EQ, you’ll recognize the same graphical controls for adjusting the EQ band.
The peak moves along with the mouse movement.
The bandwidth narrows and expands accordingly.
The large peak disappears.
The kick now has less midrange attack.
The power button is on, and Oscillator 2 is set to Sample. In the Oscillator 2 section, you’ll also see an audio waveform display.
The crash cymbal in this kit is generated from an audio sample. Oscillator 2 can be configured for all three types of sound generation offered by Ultrabeat: phase oscillator, sample playback, and even component modeling.
The waveform returns.
The crash sample (Crash 19.ubs, displayed above the waveform) doesn’t work for the project you will be building for this lesson. Let’s load a new sample waveform.
A file selector box appears, displaying the contents of the Ultrabeat Samples folder.
The .ubs extension signifies a proprietary sample format that has multiple velocity layers built into the file. Although no user-accessible way exists to create files in the .ubs format, you can import velocity-mapped EXS instruments by clicking the Import button at the top of the Ultrabeat window.
You should hear a higher-sounding cymbal.
This is the volume control for the sound.
The sound is panned further to the left side of the stereo field.
At the bottom of Ultrabeat’s interface is an integrated 32-step sequencer that greatly aids in the production of drum loops and beat patterns. These patterns, including any user-created patterns, are saved within each of the Ultrabeat settings.
The sequencer starts and Ultrabeat plays a sequenced pattern.
You’ll notice that slots containing recorded sequence data are marked with sq (for sequence). Looking at the list for this drum kit (Advanced Logic Kit), you can see that patterns are contained only in the first 5 of a possible 24 slots; the rest are available for user-programmed patterns.
The actual sequencing of a given sound takes place in an area called the step grid. Here, events can be graphically inserted and edited to create each element of the pattern.
The snare sound’s sequence is displayed in the step grid.
The snare sound is triggered whenever an event is displayed in the step grid.
This array of buttons is called the trigger row.
The next time the pattern cycles, you should hear a soft snare attack on step 2.
The next time the pattern reaches this step, the event will be louder.
The Swing knob, located to the left of the step grid, lets you adjust the rhythmic feel of the pattern by increasing the distance between notes. Notes on odd-numbered steps remain unchanged, while even-numbered notes are slightly shifted. This control affects all drum sounds that have swing enabled in the pattern (different swing amounts cannot be assigned to sounds individually).
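The timing math behind swing can be sketched in a few lines. This is a hypothetical model, not Ultrabeat’s actual implementation: even-numbered steps (1-based) are delayed by up to half a step as the swing amount goes from 0 to 1, while odd-numbered steps stay on the grid.

```python
def step_times(num_steps=16, step_dur=0.125, swing=0.0):
    """Return onset times (in seconds) for a step-sequencer grid.

    swing ranges 0.0-1.0; even-numbered steps (2, 4, ...) are pushed
    toward the following step, odd-numbered steps remain unchanged.
    """
    times = []
    for i in range(num_steps):
        t = i * step_dur
        if (i + 1) % 2 == 0:  # even-numbered step gets delayed
            t += swing * step_dur / 2
        times.append(round(t, 6))
    return times
```

At 50% swing, for example, each even step lands a 32nd note late, producing the familiar shuffled feel.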
Listen to the results.
The snare drum sequence’s “feel” changes in relation to the rest of the drum sounds in the pattern.
The pattern you are working with uses a different kick drum sound (A2) from the one you edited in an earlier exercise (C1). In this exercise, you want to use the same sequence part but have it trigger the kick you edited. You can do this by copying and pasting the sequence data from one sound to another.
When creating or editing step sequences for multiple sounds, it is advisable to use the full-view function, which displays all sounds at once within the Ultrabeat window.
The interface switches to a graphical view of each drum sound’s trigger row.
In essence, each row of triggers represents the data created in the step grid, and vice versa.
A menu appears containing sound-trigger editing commands.
The kick drum sequence now triggers the kick you want in addition to the original sound (which is muted).
Ultrabeat not only lets you program sound triggers via step sequencing but also lets you do the same for each sound’s parameters. This mode, called Step mode, provides step-by-step automation of any sound-shaping control within the synthesizer.
The sound-editing area darkens, and, in the synthesizer, yellow frames appear around all of the parameters that are available for automation. In addition, the step grid changes to display parameter offset instead of velocity/gate.
When in Step mode, the step grid is used to effect changes to the yellow highlighted parameters by offsetting the current sound settings.
For this exercise, you will be offsetting the pitch of the active oscillator (Osc 2) by changing the speed of the tambourine sample.
Notice that the step grid displays your adjustment as a negative offset (below center line) to the original pitch (C3). This offset is expressed as a percentage.
The pitch of the tambourine sample changes for the altered steps.
This control enables you to mute the currently displayed parameter offsets, returning the part back to its unaltered state.
Now that you have done some work on an existing pattern, you can incorporate it into the project. Each pattern in Ultrabeat can be triggered via an incoming or recorded MIDI note; this allows the starting and stopping of patterns on the fly (especially advantageous for live performances).
The button now displays the On state.
This enables Ultrabeat to receive incoming MIDI data as pattern triggers.
This menu lets you choose how the pattern will be triggered with incoming MIDI notes. Since you have selected Sustain, the pattern will repeat as long as you have the key depressed.
The specific trigger notes were chosen because they are located far below the most commonly used range on a MIDI keyboard. You may have to transpose your MIDI controller (using its octave buttons) to activate the pattern triggers. You can double-check your octave range in the Transport’s MIDI Activity display at the bottom of the screen.
A different pattern is triggered for each key depressed.
Look at the Pattern menu you accessed earlier. Each pattern has a number designating the slot, as well as a MIDI note number (in parentheses) next to it. The MIDI note number indicates which incoming MIDI note will trigger which pattern.
Now you can create a drum track by using pattern triggers in conjunction with individual sound triggers.
The Inst 1 track that has Ultrabeat instantiated has a blank MIDI region that you will use for your part.
The Piano Roll Editor opens.
This triggers pattern number 1, the one you edited.
This is the crash cymbal you edited earlier.
You just created a drum part by stringing together two patterns and a triggered sound (crash cymbal).
There are times when you might want to process individual aspects of the kit separately, applying different compression and reverb to individual sounds. For instance, the kick often needs dynamic and ambient treatment different from what the snare drum, cymbals, or toms need. To do this, you need to isolate the kick drum on its own channel for individual processing. Fortunately, Ultrabeat lets you route individual sounds through separate virtual “outputs” to accomplish just that.
To use this special function, you need to instantiate Ultrabeat as a multi output instrument. So far you’ve been working with Ultrabeat as a stereo instrument and have done quite a bit of work modifying sounds and patterns. Luckily, Logic allows you to keep all of the current settings when changing from stereo to multi output instantiations.
This holds true for any software instrument: all settings and content contained in a software instrument will be transferred when switching modes (mono, stereo, multi output, and 5.1).
The Ultrabeat interface opens after reloading the associated samples and current settings.
The first eight selections represent stereo routings, and the last eight selections represent mono. You are choosing the first mono routing (17) to send a mono sound (the kick drum).
If necessary, move the Ultrabeat window so you can see the Inst 1 channel strip.
A new Aux 3 channel strip is created immediately to the right of the Inst 1 channel strip. This will be the receiving channel for the kick drum. Notice that the new Aux 3 channel strip that Logic creates is stereo. You will need to make it mono in order to receive a mono send from the multi output instrument.
The kick drum now plays through the Aux 3 channel. Already the kick sounds better, as the new routing allows a dry signal to be sent without being affected by the send on the original Ultrabeat channel. With this routing in place, you can easily continue refining the kick drum sound by inserting separate compression and EQ on the Aux 3 channel.
Hands down, the EXS24 mkII is Logic Pro’s most versatile tool. With 24-bit audio resolution, virtual memory for streaming samples from disk, and an advanced modulation matrix, the EXS24 mkII is a full-featured sampler that can serve as a workhorse for your productions.
To best utilize the EXS24 mkII, you must understand how it functions. Sound generation in this sampler consists of three main components: samples, sampler instruments, and playback parameters. Samples are basically standard digital audio files that are organized into sampler instruments, which are then triggered and processed.
In the following exercises, you will examine each of the main components that create and play back EXS24 mkII sounds.
The EXS24 mkII interface opens.
The EXS24 mkII takes a moment to load the samples into RAM.
The EXS24 mkII Instrument Editor opens.
Depending on the size of your display, you might need to resize the EXS24 Instrument Editor to see all of the available information.
The EXS24 Instrument Editor allows you to peek into the construction of the Fretless Electric Bass sampler instrument. Let’s get familiar with the areas within the editor. The Instrument Editor has two views: Zones and Groups. The following screen shot shows the Zones view.
The bottom portion of the window (in the Zones view it is the Zones area) contains the key mapping for each sample. Each gray bar, called a zone, represents a single audio file that is mapped across a range of keys (represented graphically by the piano keyboard at the bottom). These zones can then be assigned into groups, which offer additional control over multiple zones via their own parameters.
Groups that you create are displayed in the column on the far left. You can display the zones associated with a group by selecting the group in this column. This functionality mirrors iTunes’ playlists, down to the ability to drag and drop zones into groups.
For each zone, the EXS24 mkII automatically pitch-shifts the audio in relation to the pitch of the keyboard. Zones with the same key range are stacked vertically and can be triggered simultaneously (in layers) or separately, depending on MIDI velocity. Generally, the zones at the bottom are triggered at lower velocities, and the upper zones at higher velocities.
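The math behind this per-zone pitch-shifting is standard equal temperament, and velocity layering is a simple range lookup. The sketch below is a hypothetical illustration (the zone dictionary layout is invented, not the EXS24’s file format):

```python
def playback_rate(played_note, root_note):
    # Equal temperament: each semitone multiplies frequency by 2**(1/12),
    # so playing n semitones above a zone's root means reading the
    # sample back n semitones faster.
    return 2 ** ((played_note - root_note) / 12)

def select_zone(zones, note, velocity):
    # zones: hypothetical list of dicts with key and velocity ranges;
    # the incoming velocity decides which stacked layer sounds.
    for zone in zones:
        if (zone["key_lo"] <= note <= zone["key_hi"]
                and zone["vel_lo"] <= velocity <= zone["vel_hi"]):
            return zone
    return None
```

Playing an octave above a zone’s root, for instance, doubles the playback rate.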
Take a detailed look at one of the zones.
The top portion of the Instrument Editor contains the parameters for each of the zones depicted below. The zone you select in the Zones area will be highlighted in the list of zones in the Parameters area.
To use your MIDI controller to select zones, choose Zone > Select Zone of Last Played Key. This is similar to Ultrabeat’s Voice Auto Select function that you used earlier in the lesson.
Each zone is listed with a set of parameters (pitch, key range, velocity range, volume, pan, and so on) that control how the audio file will be played. You can also view the audio file referenced by the zone for further editing and for setting loop points.
The Sample Editor opens, displaying the audio file used in the zone (FBWACO1E1X05.aif).
You might get an alert message stating that the audio file does not have sufficient access privileges to save an overview. Go ahead and click OK, as this won’t keep you from looking at the file.
You may need to resize the Sample Editor window or drag the zoom sliders to see the entire sound wave.
You now can see a detailed depiction of the audio file used by the selected zone. This illustrates the total integration of the EXS24 mkII with Logic Pro; you can edit samples without ever leaving Logic.
Now that you’ve seen how a sampler instrument is constructed, look at how the EXS24 mkII’s controls can shape and process the sampler instrument.
By default, the Filter mode is set for a low pass (12 dB), so you get a gradual roll-off of some of the high frequencies by reducing the cutoff value.
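A 12 dB-per-octave low pass can be modeled as two cascaded one-pole filters, each contributing roughly 6 dB per octave of roll-off. This is a generic sketch of the concept, not the EXS24 mkII’s actual filter design:

```python
import math

def one_pole_lowpass(samples, cutoff_hz, sr=44100):
    # y[n] = y[n-1] + a * (x[n] - y[n-1]); the coefficient a is
    # derived from the cutoff frequency (~6 dB/octave roll-off).
    a = 1 - math.exp(-2 * math.pi * cutoff_hz / sr)
    y, out = 0.0, []
    for x in samples:
        y += a * (x - y)
        out.append(y)
    return out

def lowpass_12db(samples, cutoff_hz, sr=44100):
    # Cascading two 6 dB/octave stages yields roughly 12 dB/octave.
    return one_pole_lowpass(one_pole_lowpass(samples, cutoff_hz, sr),
                            cutoff_hz, sr)
```

Lowering `cutoff_hz` rolls off more of the high frequencies, which is exactly what you hear as you reduce the cutoff value.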
You could easily continue to sculpt the sound using the instrument’s controls, adjusting the volume envelope, adding distortion, applying low-frequency oscillators to modulate parameters, and so on. This is similar to the way you would work with a subtractive synthesizer, but here you would use the sampler instrument as the sound-generation source instead of a generated raw waveform.
This will enable you to hear one part at a time while you work with the various instruments.
The EVP88 (like the other instruments in the vintage keyboards line, the EVD6 and EVB3) is a physically modeled instrument dedicated to simulating the sounds of classic electric pianos such as the Fender Rhodes, Wurlitzer 200A, and Hohner Electra. The instrument generates its sound not by triggering samples, but by using complex algorithms that recreate a physical event occurring in the real world. Basically, the EVP88 simulates the physical movement of the various electric piano reeds, tines, and tone bars in the electric and magnetic fields of the pickups found in the original instruments. The result is an extremely accurate and playable instrument that synthesizes the ringing, smacking, and bell-like transients of the attack phase, as well as the hammer action and damper noises.
In this exercise you will load, audition, and then modify a stock sound, listening to the results.
The EVP88 interface opens.
Additional controls are revealed, including the Model dial, which enables you to select an electric piano model.
When instantiated, the EVP88 defaults to the SuitcaseMkI model (based on the Fender Rhodes Suitcase MkI), with somewhat generic settings. You can add character to the sound via the EVP88’s built-in effects section, which contains processing that has become closely associated with classic recordings of electric piano sounds.
This should add some bite to the sound.
The EVD6 is also a physically modeled instrument, accurately recreating the sound of the classic Hohner D6 electric clavinet, from the buzzing of the strings and key clicks right down to the pickup configuration.
For this exercise, you will load a setting and modify it by using the EVD6’s built-in effects to customize the sound.
The EVD6 interface opens.
The EVD6 also has a complementary effects section, complete with distortion, modulation effects (phaser, flanger, chorus), and wah.
This activates a resonant low-pass wah.
The EFM 1 is a digital synthesizer that uses frequency modulation to generate sound (that is, one oscillator’s audible frequency modulates another, creating new harmonics). The EFM 1, like its famous FM predecessor, the Yamaha DX7, excels at producing metallic timbres such as bells.
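The two-operator principle can be sketched in a few lines. This is a hypothetical illustration of FM synthesis in general, not the EFM 1’s actual code: the modulator’s sine output is added to the carrier’s phase, creating sidebands around the carrier frequency.

```python
import math

def fm_sample(t, carrier_hz, mod_hz, index):
    # Two-operator FM: the modulator's output is added to the carrier's
    # phase; the modulation index sets how many sidebands (harmonics)
    # appear in the result.
    return math.sin(2 * math.pi * carrier_hz * t
                    + index * math.sin(2 * math.pi * mod_hz * t))

def render(dur=0.01, sr=44100, carrier_hz=440.0, ratio=2.0, index=2.0):
    # Non-integer carrier:modulator ratios produce the inharmonic,
    # metallic timbres (bells) that FM is known for.
    return [fm_sample(n / sr, carrier_hz, carrier_hz * ratio, index)
            for n in range(int(round(dur * sr)))]
```

Raising the modulation index is what increases the harmonic content you hear.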
The EFM 1 interface opens.
The number of harmonics increases.
The ES1 was the first software instrument developed exclusively for Logic, kicking off a development surge that led to the multitude of instruments currently available in Logic Pro. It was specifically designed to emulate the subtractive synthesis of vintage analog synthesizers.
Like the vintage instruments that inspired the ES1, the synthesizer has no dedicated effects section. One of the distinct advantages to using software instruments in Logic is the ability to add effects to the generated sound and create new combinations.
The Tape Delay plug-in opens.
The echo time sounds good, but you need fewer repeats at a lower volume.
The ES2 is a versatile and comprehensive synthesizer capable of a great range of sounds. From large-sounding leads to evolving pads, classic analog waveforms to digital waveforms and FM, the ES2 offers a wide array of tools for synthesizer enthusiasts.
The Macro controls of the ES2 are new to Logic Pro 8. These appear in their own area at the bottom of the interface and provide easy access to common controls used in performance. Of special note are the Cutoff and Resonance (Reso) controls, which simultaneously affect both of the ES2’s filters.
In the previous ES1 exercise, you added a delay effect to enhance the sound. As the combinations of instruments and effects become more involved, it grows tedious to reinsert every component in the chain each time you wish to use that same sound combination. Using Logic Pro’s channel strip settings, you can create and recall settings that contain instrument and effects choices within a single preset.
Logic Pro installs many channel strip settings that fully utilize Logic Pro’s effects and instruments to achieve powerful combinations. This is not unlike the technique utilized by hardware synthesizers that can combine synthesized sound with effects sections to create new and exciting sounds. You can think of instrument channel strip settings as a single patch that happens to use multiple components.
The following plug-ins are instantiated:
By recalling a single channel strip setting, you instantiated a combination of an ES2 synthesizer with phaser, overdrive, and Channel EQ components to further enhance the sound.
Software instruments offer the malleability of MIDI with real-time generated audio output. You can compose, perform, arrange, and edit as if you were working with external MIDI hardware. This flexibility also carries over to Logic Pro’s global tracks (such as the Chord track and Signature track), which let you graphically view and edit global parameters for the project.
In this exercise, you will use the global tracks to make a quick arrangement using software instruments.
If your computer is unable to play the project with all the instruments enabled (due to system overload errors), freeze some of the tracks before playing the project by selecting their respective Track Freeze buttons.
Note that the Signature track at the top of the Arrange area (just below the Bar ruler) is set for D minor, the key of the project. Since the project has a modal feel (Dorian mode on D), let’s create a quick arrangement by changing to E-flat minor (transposing up a semitone) at a certain point within the project.
An alert message appears.
This warning indicates that any changes in the Chord (or Transposition) track will affect Apple Loops, MIDI regions, and aliases. Since this is exactly what you are trying to achieve, it is fine to click OK and close the alert message.
The Define Chord window opens.
The topmost field should display E♭ m.
The Chord track now has an E-flat minor chord inserted at measure 9.
Note that the Transposition track now displays a +1 event at measure 9, corresponding to the chord change (up one semitone).
All instruments respond to the transposition as if they were classic MIDI instruments.
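Conceptually, the Transposition track applies a simple semitone shift to every MIDI note that falls at or after the transposition event. The sketch below uses a hypothetical note list of (beat, pitch) pairs, not Logic’s internal data structures:

```python
def transpose_region(notes, start_beat, semitones):
    # notes: hypothetical list of (beat, MIDI pitch) pairs.
    # Notes at or after start_beat are shifted by the interval;
    # earlier notes are left untouched.
    return [(beat, pitch + semitones if beat >= start_beat else pitch)
            for beat, pitch in notes]
```

A +1 event at a given position, like the one at measure 9, simply raises every subsequent pitch by one semitone.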
If you froze the tracks earlier, you will have to refreeze them by deselecting the Freeze buttons, then selecting them again, to effect the key change. This is because when freezing a track, Logic creates and caches 32-bit audio files that are literally a recording of the part (and effects) through the instrument. During playback, this file is read from disk. If changes are made to the part (such as transposition) after the freeze, you must create new freeze files that incorporate the change.
The Save Document As dialog opens.
Sculpture, one of the newer instruments offered in Logic Pro 8, represents a unique approach to synthesis. It uses sophisticated algorithms to recreate the way sound is generated in the natural world. Specifically, it simulates the characteristics of a vibrating string or bar. This technique is called component modeling, and it closely mirrors the sound generation found in Logic’s vintage keyboard instruments (EVP88, EVD6, and EVB3).
One way to wrap your head around Sculpture is to imagine a synthesizer that lets you control how all the components of a real “physical” instrument interact, and what materials they are made from. In effect, you are building a physical instrument from scratch!
Because Sculpture is so innovative, many people have difficulty approaching it, not knowing how to begin designing sounds and editing settings. In the next exercise, you will walk through the key components of Sculpture and create a sound from scratch.
You will use this project for the exercise.
In Sculpture, the central synthesis element is called the string. This is a bit of a misnomer because the basic physical material can also be similar to a bar (a solid mass). The principle is the same, however: sound is generated only by performing a physical action on, or stimulating, the raw material—striking, picking, blowing, and so on.
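Sculpture’s modeling algorithms are far more sophisticated (and proprietary), but the classic Karplus-Strong plucked-string model illustrates the same excite-then-resonate principle: a delay line acts as the string, a burst of noise “excites” it, and an averaging filter damps it the way a real string loses energy.

```python
import random

def pluck(freq=220.0, sr=44100, dur=0.5, damping=0.996):
    # Karplus-Strong sketch (not Sculpture's algorithm): the delay
    # line's length sets the pitch; filling it with noise is the
    # "excitation"; averaging adjacent samples rolls off highs like
    # a damped physical string.
    n = max(2, int(sr / freq))
    buf = [random.uniform(-1.0, 1.0) for _ in range(n)]
    out = []
    for _ in range(int(round(sr * dur))):
        out.append(buf[0])
        buf.append(damping * 0.5 * (buf[0] + buf[1]))
        buf.pop(0)
    return out
```

Without the initial noise burst there is nothing to sustain, which is exactly why Sculpture is silent when no object excites the string.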
The Sculpture window opens.
The string animates, depicting its vibration. This is an effective tool when you’re programming sounds with Sculpture, as it provides visual feedback reflecting how your choices are affecting the string.
When you are using Sculpture for playback (and not editing or programming sounds), it is a good idea to disable string animation because it uses some CPU time to generate the animation. Turn the animation on and off by Control-clicking the string (it is off by default).
Using the Material Pad in the center of the interface, you can construct the string by blending the properties of four basic materials: steel, nylon, wood, and glass.
The sound also changes as you move the ball, modifying the Inner Loss (damping) and Stiffness (rigidity) of the material.
The outside ring of the Material Pad contains additional parameters that determine the sound-making properties of the selected material.
Media Loss controls the damping of the string caused by its environment. Imagine a string vibrating in air, water, or pea soup to visualize what this parameter does.
The material’s resolution determines the number of harmonics generated. The higher the value, the richer and more complex the sound, as more overtones are produced. Be aware that higher resolution values carry a higher CPU load.
The Tension Mod control adds pitch displacement of the string to higher note velocities. This is similar to the slight initial pitch change that occurs when you strongly pluck a stringed instrument.
Sculpture has three objects that determine how the string is excited or disturbed (how it is played). Remember that physical instruments need an action applied to the sound-producing material to make a sound: a guitar string needs to be plucked or picked, a violin string needs to be bowed, a marimba bar needs to be struck with a mallet, and so on.
The button turns from blue to gray, indicating that the object is off.
You shouldn’t hear any sound.
Why is no sound produced? This illustrates the dependent physical interaction between objects and strings. Without an object exciting the string, nothing happens, just as in the real world.
The menu that appears lists various exciter types for exciting the string.
Object 1 is now set to simulate the action and sound of a guitar pick acting on a string. Its behavior is controlled by the object’s Strength knob and surrounding sliders: Variation, Timbre, and, for Objects 1 and 2, VeloSens (velocity sensitivity). These parameters are context sensitive, meaning that the exciter type determines what the controls do. For example, the Timbre slider sets hammer mass when the exciter type is Strike, and bow pressure when the type is Bow.
See page 461 in the Logic Pro 8 Instruments and Effects manual for a chart describing the parameters for each exciter type.
In the case of Pick, the Strength parameter determines pick force and speed.
The Variation control determines plectrum stiffness when the exciter type is set to Pick.
Note that this menu is considerably longer than the Object 1 menu. Object 2 offers all of the same exciter types as Object 1 but also contains a variety of others, which are referred to as disturbers. The nature of a disturber type is not to start the string material vibrating (as the exciter types do), but to disturb the vibration in some way through a physical interaction. Therefore, disturbers work in conjunction with exciters to produce a complex result.
With Disturb chosen, you are introducing a physical object at a fixed distance from the string that keeps it from freely vibrating. Think of an object positioned close to the strings of a guitar so that it is nearly touching. When the guitar strings are plucked, the strings hit the object, creating a buzzing sound.
As with the exciter types, the Strength, Variation, and Timbre controls modify aspects of the chosen disturber. In the case of Disturb, Strength sets the hardness of the object positioned near the string.
See page 462 in the Logic Pro 8 Instruments and Effects manual for a chart describing the parameters for each disturber type.
The Gate settings determine when the object interacts with the string in relation to the keys played on your MIDI controller: only while a key is held down, only after a key is released, or at all times.
Notice that Object 3’s Type choices are limited to disturbers (no exciters). Bouncing simulates a loose object lying on the vibrating string. Imagine a piece of paper or small wood block lying directly on a guitar’s strings to get an idea of what this produces.
Strength controls the effect of gravity on the bouncing object.
In the case of a bouncing object, the Timbre parameter controls the stiffness of the object.
Now that you have established the basic sound generation by choosing object types, you can further shape the sound by determining where along the length of the string each object interacts. This is similar to picking a guitar string at the bridge, at the neck, or in the middle.
You can position objects by moving sliders representing each object in the Pickup display.
Sculpture uses pickups to sense vibrations from the string, much like an electric guitar's electromagnetic pickups. As on an electric guitar, the pickup's location matters: different positions along the length of the string produce different timbres.
To hear these subtle differences, play your MIDI controller as you make the following adjustments.
The resulting sound emphasizes the Pick exciter, similar to the way the neck pickup works on an electric guitar.
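Why does pickup position change the timbre? On an ideal string, each harmonic forms a standing wave, and a pickup hears each harmonic in proportion to how much that harmonic moves at the pickup's location. A harmonic with a node (a still point) at the pickup position effectively disappears. This is a textbook idealization, not Sculpture's internal model:

```python
import math

def pickup_response(position, harmonic):
    """Relative level of a given harmonic as sensed by a pickup at
    `position` (0.0 = one end of the string, 1.0 = the other).
    A harmonic with a node at the pickup position is not heard."""
    return abs(math.sin(math.pi * harmonic * position))
```

For example, a pickup at the exact middle of the string (position 0.5) hears the fundamental at full strength but misses the second harmonic entirely, since that harmonic has a node at the midpoint. That is why moving the pickup sliders changes the brightness and character of the tone.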
Sculpture’s extensive processing options allow you to further shape the sound using a variety of means (multimode filter, Waveshaper, stereo delay, and Body EQ). Examine some of the choices, working with the sound you have constructed so far.
The Waveshaper provides interesting distortion effects, including tube simulation, for harmonically rich results.
You have just applied soft, tubelike saturation to the sound.
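"Soft, tubelike saturation" is typically modeled with a smooth transfer curve such as the hyperbolic tangent, which compresses peaks gradually instead of clipping them flat, adding harmonics in the process. This is a generic waveshaping sketch, not the Waveshaper's actual transfer function; the `drive` parameter and normalization are illustrative assumptions.

```python
import math

def waveshape(sample, drive=2.0):
    """Soft saturation: tanh bends the waveform smoothly, so peaks
    are compressed rather than hard-clipped. Normalized so that a
    full-scale input (1.0) still maps to full scale."""
    return math.tanh(drive * sample) / math.tanh(drive)
```

Raising `drive` pushes more of the waveform into the curved part of the tanh function, which is heard as increasing warmth and grit.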
Sculpture’s Body EQ section utilizes a unique approach to equalization, providing some great sound-shaping possibilities. A standard EQ changes individual frequency bands, but Body EQ also offers spectral models that emulate the resonating properties of specific instruments. These models can be shaped by adjusting formant-related parameters.
By default, Body EQ is set to the Lo Mid Hi model, emulating a standard three-band EQ.
As you can see, the choices range from models of string instruments (guitars, violin, cello, double bass, and so on) to kalimba and various flutes (alto and bass).
The controls change to reflect the formant parameters of the resonating body (Dobro Guitar), and the graphic display to the right now depicts a detailed spectrum.
In effect, you are coupling the sound generator that you constructed (through the interaction between string and objects) with the resonating body of a Dobro guitar (a metal-body guitar with an acoustic speaker cone).
Sculpture’s modulation section is extensive, offering everything from low-frequency oscillators (including two jitter generators that produce random variations) to Note On Random modulators and user-created envelopes.
Also of great interest is the Morph Pad, which enables you to morph between parameter settings for the entire instrument. The Morph Pad can be controlled manually, by MIDI controllers, or by its own time-based envelope.
The Morph Pad has five morph points: the center point and four corner points (A, B, C, D). Each point can be thought of as a memory location that stores the parameter settings of everything from string material to object and pickup placement.
Instead of setting each state manually, let's use one of Sculpture's handy features to randomly generate variations of the original sound at each point.
By selecting this, you are targeting only the four outermost points (A, B, C, D) for randomization, leaving the original (center point) sound alone.
To see what just happened, look at how the controls for the various states (points) were affected.
The controls (and the sound) change smoothly to reflect the various states.
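The smooth transitions you just heard come from interpolation: at any pad position, each parameter value is a weighted blend of the values stored at the surrounding morph points. A simple way to picture this is bilinear interpolation between the four corner states (this sketch ignores the center point and uses a hypothetical corner layout, so it is a simplification of how the Morph Pad actually blends its five points):

```python
def morph(a, b, c, d, x, y):
    """Bilinear blend of four corner parameter sets at pad position
    (x, y), with each coordinate in 0..1. Corners: a (0,0), b (1,0),
    c (0,1), d (1,1). Parameters are dicts of named values."""
    blended = {}
    for key in a:
        top = a[key] * (1 - x) + b[key] * x        # blend along the top edge
        bottom = c[key] * (1 - x) + d[key] * x     # blend along the bottom edge
        blended[key] = top * (1 - y) + bottom * y  # blend between the edges
    return blended
```

Because every parameter is recomputed continuously as the morph position moves, the sound glides between states instead of jumping.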
As you can see, Sculpture is a truly exceptional instrument. Let’s end this section on Sculpture’s unique sound-generation properties by saving the sound you just made as a preset.
Let’s conclude by taking a brief look at some of Sculpture’s expertly programmed settings, which show off the diverse capabilities of the instrument. With each of these, try holding chords and single notes on your MIDI controller to hear the sound evolve over time. Also try playing with the Morph Pad (if active) and the controller’s modulation wheel, as these are frequently used to control sound changes.
While it is possible to create entire projects using only built-in instruments, Logic Pro also provides ways to integrate external hardware into the computer recording environment.
Logic Pro has a handy little plug-in that enables you to create a direct connection to an external piece of MIDI hardware as if it were an internal software instrument. The plug-in brings the external device’s audio output back into Logic’s Mixer for further processing and bouncing, all in real time.
This plug-in doesn’t do any processing at all, but it functions as a signal router. Basically, it sends MIDI data out to a selected instrument in the Environment representing your external MIDI hardware and receives audio input from a selected input on your audio interface that is receiving the signal from that external MIDI device.
This menu mirrors the available choices for assigning a MIDI track within the Arrange area. Each enabled channel of the device is represented and may be chosen.
For this exercise, you set the MIDI destination to a generic MIDI instrument within the project’s Environment. This multi instrument is set to output on all MIDI ports of your interface, so it should trigger all connected MIDI devices. Since only one MIDI device has its audio outputs connected to the audio interface and chosen within the External Instrument plug-in, this will work well for the purposes of this exercise. Within your own project (containing your own MIDI setup in the Environment), however, you would select an instrument representing your individual MIDI hardware.
You should hear the output of your MIDI device, which is routed into the Inst 2 channel strip or track.
You are now ready to apply additional plug-ins, make volume and pan adjustments, and mix and bounce the MIDI hardware as if it were a software instrument.
An alert message appears, asking if you’d like to save the changes made to the project. It is not necessary to save the project file, as no data was actually written.
1. Which direction does signal flow in most of Logic’s software instruments?
2. What do Ultrabeat kits contain?
3. Which feature in Ultrabeat is used to compose and play back patterns?
4. How can you create strings of patterns with Ultrabeat?
5. What do multi output instruments do?
6. Sound generation for the EXS24 mkII consists of what three components?
7. Besides offering flexible physical modeling of real-world components, the vintage keyboard instruments offer what additional feature to process the sound?
8. How are software instrument and effects combinations stored for future recall?
9. What is used to quickly create arrangements of both software instrument and classic MIDI tracks?
10. What is Sculpture’s basis for sound generation?
11. Besides Sculpture’s sound-generation and modulation sections, what other components are used to further process the sound?
12. Which modulation control enables you to program smooth transitions between various parameter states?
13. How can external MIDI hardware be incorporated into Logic’s Mixer?
Answers
1. Signal flow moves from left to right in most software instruments, which helps with understanding the instrument as well as locating controls.
2. Ultrabeat utilizes kits that contain individually programmed drum sounds, each with its own unique settings and modulations.
3. Ultrabeat’s built-in step sequencer works in conjunction with the Logic project to compose and play back patterns.
4. Ultrabeat’s step sequence patterns can be strung together by triggered MIDI notes, or copied to the Arrange window for region-based editing.
5. Multi output instruments (such as Ultrabeat and EXS24 mkII) can route individual sounds to separate channel strips for isolation or further processing.
6. The EXS24 mkII consists of samples organized into sampler instruments that are further shaped by the interface controls.
7. The vintage keyboard instruments have built-in effects processing associated with each instrument.
8. Instrument and effects combinations can be saved and utilized as channel strip settings.
9. Global tracks affect software instruments in the same manner as classic MIDI hardware and can be used to create quick arrangements.
10. Sculpture utilizes a string acted upon by objects as the basis for sound generation.
11. Components such as the Waveshaper, Body EQ, and Delay allow you to further process the instrument in interesting ways.
12. Sculpture’s Morph Pad enables you to smoothly move from various states of control settings.
13. External MIDI hardware can be incorporated into the Mixer as a software instrument by using the External Instrument plug-in.