Chapter 6. Making Some Noise

If there was ever any doubt whether the iPhone was intended to be a music device, the presence of three different frameworks on the iPhone just for sound should be all the answer you need. The Core Audio, Celestial, and Audio Toolbox frameworks all provide different levels of sound functionality. In addition to this, the iPhone runs an audio daemon called mediaserverd that aggregates the sound output of all applications and governs events such as volume and ringer switch changes. There are a lot of moving pieces involved in the iPhone’s sound framework, but Apple has provided some great interfaces that make the task painless.

Core Audio: It’s Great, but You Can’t Use It

Of the three frameworks available, Core Audio is the most low-level, and least accessible. Core Audio provides a direct interface to the iPhone’s sound device. There is only one sound device, so only one process can be talking to it at any time. Unlike the Mac OS X desktop, which allows developers to share Core Audio resources, the iPhone has an audio daemon that binds to the device as the iPhone is booted, setting what is called hog mode. Hog mode is a flag hardcoded into the Core Audio framework that prevents any other application from being able to say, “Hey, I’d like to make some sound, can you give up control of the sound card for a minute?” In other words, the audio daemon hogs the sound card all to itself, requiring all sounds to be played through the daemon instead of directly.

The Core Audio framework was the first framework figured out by developers, and some very early iPhone applications attempted to use it to deliver a digital sound stream. This required an ugly hack: the user had to kill the mediaserverd process so that the application could claim the sound device, which silenced all other sound on the iPhone. Many people were so desperate to play video games on the iPhone that they actually did this. Now that the Celestial and Audio Toolbox frameworks have been figured out, there’s no longer any need to use Core Audio, and all of the applications that formerly used it have been updated.

If you’d still like to learn about the Core Audio framework, the good news is that it’s nearly identical to the desktop version. The Apple Developer Connection web site provides many resources for the Core Audio framework at http://developer.apple.com/audio/.

Celestial

Celestial is the preferred iPhone framework for playing sound and music files and recording sound from the built-in microphone. Celestial uses an AVController class to play sound samples, which are represented as AVItem objects. The framework also supports an optional AVQueue class to arrange playback of different samples.

What Celestial is not useful for is playing a digital stream; that is, a raw output channel of sound. That will be covered in the next section on the Audio Toolbox. Celestial works only with audio files.

To tap into Celestial, your application must be linked to the Celestial framework. Using the tool chain, link Celestial to your application by adding the following arguments to the compiler arguments described in Chapter 2:

$ arm-apple-darwin-gcc -o MyApp MyApp.m -lobjc \
    -framework CoreFoundation \
    -framework Foundation \
    -framework Celestial

To add this option to the sample makefile from Chapter 2, add the Celestial framework to the linker flags section so that the library is linked in:

LDFLAGS =    -lobjc \
        -framework CoreFoundation \
        -framework Foundation \
        -framework Celestial

ringerState

Playing sounds can be very useful for many types of applications, but it’s up to the developer to ensure that the user’s desire to silence her phone is respected. Before a sound is played, you should first check the ringer state—the mute switch on the side of the phone. To check this, UIKit provides a method named ringerState within the UIHardware class.

int ringerState = [ UIHardware ringerState ];

When the mute toggle is switched to “sound” mode, a value of 1 is returned. When it is in “mute” mode, a value of 0 is returned. If the ringer is muted, it’s common practice to use the phone’s built-in vibrator in lieu of an audible notification. An example of this can be found in the appendix. Muting is voluntary—that is, the operating system does not enforce this because some applications were meant to generate sound regardless of the ringer switch. For example, the iPod application plays music even if the ringer is turned off, allowing the user to listen to a song without being interrupted by incoming phone calls (if she’s on the treadmill, for example).
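As a minimal sketch of this convention (the playAlertSound and vibrate helpers here are hypothetical placeholders, not UIKit methods), an application might branch on the ringer state like this:

if ([ UIHardware ringerState ] == 1) {
    [ self playAlertSound ];    /* ringer is audible: play the sound */
} else {
    [ self vibrate ];           /* ringer is muted: vibrate instead */
}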

The Audio Controller

The AVController class establishes a connection to the iPhone’s audio daemon. It’s also responsible for controlling all sound that is played through it. Think of the AVController as the scrubber at the bottom of your iPod application (the bar containing your play, pause, navigation, and volume control). The class handles all of these functions, as well as setting the equalizer, sound frequency, and even repeat mode.

AVController * av = [ [ AVController alloc ] init ];

Volume

The volume can be set for either individual sound samples or the entire controller. To set it within the scope of the controller, call the AVController’s setVolume method.

[ av setVolume: 1.0 ];

The volume is a floating-point number from 0.0 to 1.0. Adjusting the volume on the controller level will cause a volume window to appear to the user. This is useful for handling mute requests and the like.

Setting the volume for an individual sound sample will be shown in the upcoming section "Audio Samples.”

Repeat mode

To tell the controller to repeat a sound after it has finished playing, call setRepeatMode, passing an integer value as an argument.

[ av setRepeatMode: 1 ];

The repeat mode must be set after an item has begun playing. The valid repeat modes are as follows.

Mode    Description
0       Repeat Off
1       Repeat On

Although set on a controller level, only the current sample will be repeated—even if more than one sound sample is queued up to play.

Sample rate

The sample rate is the frequency at which sounds are played back. It should match the frequency at which the samples were originally recorded, and can be set either manually, using the setRate method shown here, or automatically by leaving the option alone.

NSError *err = nil;
[ av setRate: 44100.0 error: &err ];
if (err != nil) {
    NSLog(@"The following error has occurred: %@", err);
}

Many Celestial methods need to be able to report an error, and the NSError class is used for this. NSError is a Foundation class that is standard across both the iPhone and Mac OS X desktop operating systems; it encapsulates an error code, an error domain, and a human-readable description. More information about NSError can be found in the Cocoa reference available on the Apple Developer Connection web site.
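If you need more detail than the error’s default description, NSError also exposes the error code and domain; a short sketch of logging all three:

if (err != nil) {
    NSLog(@"Error %d in domain %@: %@",
        [ err code ], [ err domain ], [ err localizedDescription ]);
}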

Equalizer preset

An equalizer adjusts the relative volume of different frequencies to produce a clearer sound. Different equalizers improve the user’s experience for different types of recordings and music, and Apple uses a preset collection of equalizers heavily in iTunes and follow-up products. The iPhone supports 22 different equalizer presets that can be set using the setEQPreset method.

[ av setEQPreset: 0 ];

The default setting is to use no EQ, and if you’re only playing event sounds, there’s little reason to change this. If you’re playing a radio broadcast or writing a third-party player, however, the following table can be used to set the desired preset.

Preset  Description
0       Off
1       Acoustic
2       Bass Booster
3       Bass Reducer
4       Classical
5       Dance
6       Deep
7       Electronic
8       Flat
9       Hip Hop
10      Jazz
11      Latin
12      Loudness
13      Lounge
14      Piano
15      Pop
16      R&B
17      Rock
18      Small Speakers
19      Spoken Word
20      Treble Booster
21      Treble Reducer
22      Vocal Booster

Mute

Muting the audio channel is as easy as sending a setMuted message: pass YES to mute the channel and NO to unmute it. This setting applies not only to the sound currently playing, but also to every sound subsequently played through the controller object.

[ av setMuted: YES ];

Audio Samples

So far, you’ve instantiated an AVController object that builds an audio channel, but nothing has been provided to play samples yet. Create each sample to be played as an AVItem object. The following code snippet creates an AVItem that references an existing sound file on the iPhone. The NSError class introduced in the last section is used here to capture errors.

NSError *err = nil;
AVItem *item = [ [ AVItem alloc ]
    initWithPath: @"/Library/Ringtones/Pinball.m4r" error: &err
];
if (err != nil) {
    NSLog(@"The following error has occurred: %@", err);
}

Playing URLs

URLs can also be played using the same AVItem class by specifying a URL instead of a file path.

AVItem *item = [ [ AVItem alloc ]
    initWithPath: @"http://path-to-sound-file" error: &err
];

This can be useful, but keep in mind that your application will require a network connection to play these sounds. A slow or lagged connection could also slow the application.

Sample volume

To set the volume for a specific sample, the AVItem’s setVolume method can be called. Unlike the controller’s method, using setVolume here will not affect the rest of the audio channel, nor will it display a volume change window to the user.

[ item setVolume: 0.5 ];

Equalizer preset

An individual sample also has its own EQ property, which can be set using the same table shown earlier. This sets the EQ of the sample only, and will remain in effect for as long as the object exists.

[ item setEQPreset: 0 ];

Duration

Once the object has been created, you can find the sample’s duration in seconds.

float duration = [ item duration ];

Playing an item

Once the desired audio object properties have been set, the object can now be attached to the audio controller for playing.

BOOL ok;
[ av setCurrentItem: item preservingRate:NO ];
ok = [ av play:nil ];

The Boolean return value from the play method will be YES if the audio controller accepted your request to play the item. If another item is currently playing, the controller will reject your request and return NO.
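A short sketch of guarding against a rejected request:

if ([ av play: nil ] == NO) {
    NSLog(@"The controller refused to play; another item may still be active.");
}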

Pausing

To pause playing, call the controller’s pause method.

[ av pause ];

Example: Hello, Sound!

This example creates an audio controller and an audio sample using an existing ringtone on the iPhone, then plays the sound. If an error occurs, it will be displayed in the text view. Otherwise, the text “Hello, Sound!” will be displayed and you should hear some noise.

To compile this example, you’ll need to include the Celestial framework in your build statement:

$ arm-apple-darwin-gcc -o MyExample MyExample.m -lobjc \
    -framework Foundation \
    -framework CoreFoundation \
    -framework UIKit \
    -framework Celestial

Example 6-1 and Example 6-2 contain the code.

Example 6-1. Audio controller example (MyExample.h)
#import <CoreFoundation/CoreFoundation.h>
#import <UIKit/UIKit.h>
#import <UIKit/UITextView.h>
#import <Celestial/AVController.h>
#import <Celestial/AVItem.h>

@interface MainView : UIView
{
    UITextView   *textView;
    AVController *av;
    AVItem       *item;
}

- (id)initWithFrame:(CGRect)frame;
- (void)dealloc;

@end

@interface MyApp : UIApplication
{
    UIWindow *window;
    MainView *mainView;
}

- (void)applicationDidFinishLaunching:(NSNotification *)aNotification;
@end
Example 6-2. Audio controller example (MyExample.m)
#import "MyExample.h"
int main(int argc, char **argv)
{
    NSAutoreleasePool *autoreleasePool = [
        [ NSAutoreleasePool alloc ] init
    ];
    int returnCode = UIApplicationMain(argc, argv, [ MyApp class ]);
    [ autoreleasePool release ];
    return returnCode;
}

@implementation MyApp

- (void)applicationDidFinishLaunching:(NSNotification *)aNotification {
    window = [ [ UIWindow alloc ] initWithContentRect:
        [ UIHardware fullScreenApplicationContentRect ]
    ];

    CGRect rect = [ UIHardware fullScreenApplicationContentRect ];
    rect.origin.x = rect.origin.y = 0.0f;

    mainView = [ [ MainView alloc ] initWithFrame: rect ];

    [ window setContentView: mainView ];
    [ window orderFront: self ];
    [ window makeKey: self ];
    [ window _setHidden: NO ];
}
@end

@implementation MainView
- (id)initWithFrame:(CGRect)rect {

    if ((self = [ super initWithFrame: rect ]) != nil) {

        NSError *err = nil;
        textView = [ [ UITextView alloc ] initWithFrame: rect ];
        [ textView setTextSize: 18 ];
        [ textView setText: @"Hello, Sound!" ];
        [ self addSubview: textView ];

        av = [ [ AVController alloc ] init ];
        item = [ [ AVItem alloc ]
            initWithPath:@"/Library/Ringtones/Pinball.m4r" error:&err
        ];

        if (err != nil) {
            [ textView setText: [ err localizedDescription ] ];
        } else {
            BOOL playedOK;
            [ av setCurrentItem: item preservingRate:NO ];
            playedOK = [ av play:nil ];
            if (playedOK == NO) {
                [ textView setText: @"An error has occurred." ];
            }
        }

    }

    return self;
}

- (void)dealloc
{
    [ textView release ];
    [ item release ];
    [ av release ];
    [ super dealloc ];
}

@end

What’s Going On

The audio controller example works like this:

  1. The application instantiates and displays a textbox with the text, “Hello, Sound!”

  2. An AVController object is instantiated followed by an AVItem object pointing to the Pinball ringtone, located at /Library/Ringtones/Pinball.m4r.

  3. If the AVItem initialized with an error, the text in the text view is replaced with the error message. Otherwise, the sample is set as the current item on the controller.

  4. The controller is told to play, a call that plays the current item, and returns a Boolean identifying whether the command was successful.

  5. If the Boolean value NO is returned, something went wrong and an error is displayed to the user in the text view.

Audio Queues

One limitation of the audio controller is that it can deal with only one audio sample at a time. If two sounds need to be played in succession, there’s no way to tell the controller object to do this without abruptly killing the currently playing sound. Audio queues provide a solution to this problem by creating an array-like structure where samples can be queued up for ordered play.

This section discusses the audio queues available in the audio controller’s AVController class, which play prerecorded audio selections from files. If an application generates its own sounds, it should use the Audio Toolbox audio queue described later in this chapter.

The queue provided by the audio controller’s class is the AVQueue class:

AVQueue *avq = [ [ AVQueue alloc ] init ];

An audio queue is attached to the audio controller, at which point the controller acknowledges it as the source for all future playback.

[ av setQueue: avq ];

Sound samples, which are instantiated as AVItem objects, can then be added, removed, and rearranged on the queue using a number of available methods:

Add a sample to the end of the queue

[ avq appendItem: item ];

Insert a sample after another sample, identified by the other sample’s AVItem object

[ avq insertItem: item afterItem: other_item error: &err ];

Insert a sample at a specific position in the queue

[ avq insertItem: item atIndex: 4 error: &err ];

Remove a sample, identified by the sample’s AVItem object

[ avq removeItem: item ];

Remove a sample, identified by a position in the queue

[ avq removeItemAtIndex: 3 ];

Remove all items within a given range by position on the queue

[ avq removeItemsInRange: NSMakeRange(3, 4) ];

The NSMakeRange function takes two arguments: the start position and the length of the range. This example removes the four samples at indexes 3 through 6.

Remove all audio samples from the queue

[ avq removeAllItems ];

When the controller is instructed to play, it will step through each item in the queue.

[ av play: nil ];

The play method is called in the same fashion as it is when there is no queue, but because the queue has been attached to the controller, it will use the queue as the sample source without any special instructions. During playback, items may be added, removed, or rearranged on the queue so long as the playback hasn’t reached the affected items.
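For example, here is a sketch of appending one more ringtone while the queue is already playing (reusing the Blues ringtone path from the example that follows):

NSError *err = nil;
AVItem *extra = [ [ AVItem alloc ]
    initWithPath: @"/Library/Ringtones/Blues.m4r" error: &err
];
if (err == nil)
    [ avq appendItem: extra error: &err ];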

Example: Alternating Ringtones

In the previous example, a ringtone was played directly through the controller without a queue. Any attempts to set the current item to a new sample during playback would have been ignored. In this example, two ringtones will be played through an audio queue, one after the other.

Compile this example using the following command line:

$ arm-apple-darwin-gcc -o MyExample MyExample.m -lobjc \
    -framework Foundation \
    -framework CoreFoundation \
    -framework UIKit \
    -framework Celestial

Example 6-3 and Example 6-4 contain the code.

Example 6-3. Audio queue example (MyExample.h)
#import <CoreFoundation/CoreFoundation.h>
#import <Foundation/Foundation.h>
#import <UIKit/UIKit.h>
#import <UIKit/UITextView.h>
#import <Celestial/AVController.h>
#import <Celestial/AVItem.h>
#import <Celestial/AVQueue.h>

@interface MainView : UIView
{
    UITextView   *textView;
    AVController *av;
    AVItem       *item1, *item2;
    AVQueue      *avq;
}

- (id)initWithFrame:(CGRect)frame;
- (void)dealloc;

@end

@interface MyApp : UIApplication
{
    UIWindow *window;
    MainView *mainView;
}

- (void)applicationDidFinishLaunching:(NSNotification *)aNotification;
@end
Example 6-4. Audio queue example (MyExample.m)
#import "MyExample.h"

int main(int argc, char **argv)
{
   NSAutoreleasePool *autoreleasePool = [
        [ NSAutoreleasePool alloc ] init
    ];
    int returnCode = UIApplicationMain(argc, argv, [ MyApp class ]);
    [ autoreleasePool release ];
    return returnCode;
}

@implementation MyApp

- (void)applicationDidFinishLaunching:(NSNotification *)aNotification {
    window = [ [ UIWindow alloc ] initWithContentRect:
        [ UIHardware fullScreenApplicationContentRect ]
    ];

    CGRect rect = [ UIHardware fullScreenApplicationContentRect ];
    rect.origin.x = rect.origin.y = 0.0f;

    mainView = [ [ MainView alloc ] initWithFrame: rect ];

    [ window setContentView: mainView ];
    [ window orderFront: self ];
    [ window makeKey: self ];
    [ window _setHidden: NO ];
}
@end
@implementation MainView
- (id)initWithFrame:(CGRect)rect {

    if ((self = [ super initWithFrame: rect ]) != nil) {

        NSError *err = nil;
        textView = [ [ UITextView alloc ] initWithFrame: rect ];
        [ textView setTextSize: 18 ];
        [ textView setText: @"Hello, Sound!" ];
        [ self addSubview: textView ];

        av = [ [ AVController alloc ] init ];
        avq = [ [ AVQueue alloc ] init ];

        item1 = [ [ AVItem alloc ]
            initWithPath:@"/Library/Ringtones/Pinball.m4r" error:&err
        ];
        if (err != nil)
            [ textView setText: [ err localizedDescription ] ];

        item2 = [ [ AVItem alloc ]
            initWithPath:@"/Library/Ringtones/Blues.m4r" error: &err
        ];
        if (err != nil)
            [ textView setText: [ err localizedDescription ] ];

        [ avq appendItem: item1 error: &err ];
        [ avq appendItem: item2 error: &err ];

        [ av setQueue: avq ];
        [ av play:nil ];
    }

    return self;
}

- (void)dealloc
{
    [ textView release ];
    [ item1 release ];
    [ item2 release ];
    [ avq release ];
    [ av release ];
    [ super dealloc ];
}

@end

What’s Going On

  1. The application instantiates and displays a textbox with the default text, “Hello, Sound!”.

  2. An AVController object and an AVQueue object are instantiated.

  3. Instead of creating a single AVItem object, two are created: one for the Pinball ringtone, and another for the Blues ringtone, both located in the /Library/Ringtones directory.

  4. If either AVItem initializes with an error, the text in the text view is replaced with the error message.

  5. Both samples are added to the audio queue object.

  6. The audio queue object is attached to the controller, and the controller is instructed to play.

  7. Both audio samples are played through the queue in the order they were added.

Recording Sound

Celestial not only plays sounds, but records them as well. Using the AVRecorder object, an application can use the iPhone’s built-in microphone to record and then write an audio file out to disk.

The AVRecorder class can be spooky, in that the iPhone gives no indication that any recording is taking place. Malware could easily be written to eavesdrop on an iPhone user without their knowledge. It’s up to the developer to notify the user that sound is being recorded.

A typical initialization for AVRecorder is:

   NSURL *url = [ [ NSURL alloc ] initWithString: @"/tmp/rec.amr" ];
   avr = [ [ AVRecorder alloc ] init ];
   [ avr setFilePath: url ];

The recorder object uses an NSURL object, which is similar to an NSString. It contains the path on disk to write the recorded file to. This is the only parameter needed by the recorder object before it is ready to record. The filename in the example ends with an .amr extension, which reflects the default recording format: Adaptive Multi-Rate. The AMR codec is a compressed audio format developed by Ericsson. Many mobile devices use it because it provides superior compression of voice recordings.

To begin recording, the recorder first activates the microphone, and then it starts recording.

    [ avr activate: nil ];
    [ avr start ];

To stop recording, a message is sent to the stop method, and finally, the microphone is deactivated.

    [ avr stop ];
    [ avr deactivate ];
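If you’d rather cap a recording’s length than wait for the user, one approach (a sketch assuming an avr instance variable and a stopRecording method of your own) is to schedule the stop when recording begins, using NSObject’s performSelector:withObject:afterDelay: method:

    [ self performSelector: @selector(stopRecording)
                withObject: nil
                afterDelay: 10.0 ];    /* stop after ten seconds */

and then stop and deactivate the recorder in that method:

    - (void)stopRecording
    {
        [ avr stop ];
        [ avr deactivate ];
    }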

Example: Voice Recorder

In this example, the user is presented with a Record button. Touching it begins recording through the microphone and presents the user with a stop button to press when she’s finished. The resulting file will be written to /tmp/rec.amr. This file can be copied over to the desktop (for instance, by using the following secure copy command on the desktop terminal window), where it can be played using QuickTime or any other media player:

$ scp root@iphone:/tmp/rec.amr .

To compile this example, remember to include the Celestial framework in your build statement:

$ arm-apple-darwin-gcc -o MyExample MyExample.m -lobjc \
    -framework Foundation \
    -framework CoreFoundation \
    -framework UIKit \
    -framework Celestial

Example 6-5 and Example 6-6 contain the code.

Example 6-5. Audio recorder example (MyExample.h)
#import <CoreFoundation/CoreFoundation.h>
#import <UIKit/UIKit.h>
#import <UIKit/UIAlertSheet.h>
#import <UIKit/UINavigationBar.h>
#import <Celestial/AVRecorder.h>

@interface MainView : UIView
{
    UIAlertSheet    *recordSheet;
    UINavigationBar *navBar;
    AVRecorder      *avr;
}

- (id)initWithFrame:(CGRect)frame;
- (void)dealloc;
@end

@interface MyApp : UIApplication
{
    UIWindow *window;
    MainView *mainView;
}

- (void)applicationDidFinishLaunching:(NSNotification *)aNotification;
@end
Example 6-6. Audio recorder example (MyExample.m)
#import "MyExample.h"

int main(int argc, char **argv)
{
    NSAutoreleasePool *autoreleasePool = [
        [ NSAutoreleasePool alloc ] init
    ];
    int returnCode = UIApplicationMain(argc, argv, [ MyApp class ]);
    [ autoreleasePool release ];
    return returnCode;

}

@implementation MyApp

- (void)applicationDidFinishLaunching:(NSNotification *)aNotification {
    window = [ [ UIWindow alloc ] initWithContentRect:
        [ UIHardware fullScreenApplicationContentRect ]
    ];

    CGRect rect = [ UIHardware fullScreenApplicationContentRect ];
    rect.origin.x = rect.origin.y = 0.0f;

    mainView = [ [ MainView alloc ] initWithFrame: rect ];

    [ window setContentView: mainView ];
    [ window orderFront: self ];
    [ window makeKey: self ];
    [ window _setHidden: NO ];
}
@end

@implementation MainView
- (id)initWithFrame:(CGRect)rect {
    if ((self = [ super initWithFrame: rect ]) != nil) {

        navBar = [ [UINavigationBar alloc] initWithFrame:
            CGRectMake(rect.origin.x, rect.origin.y, rect.size.width, 48.0f)
        ];
        [ navBar setDelegate: self ];
        [ navBar enableAnimation ];
        [ navBar showLeftButton:nil withStyle: 0
                    rightButton:@"Record" withStyle: 1 ];

        [ self addSubview: navBar ];
    }

    return self;
}

- (void)alertSheet:(UIAlertSheet *)sheet buttonClicked:(int)button
{
    [ avr stop ];
    [ avr deactivate ];
    [ sheet dismiss ];
}

- (void)navigationBar:(UINavigationBar *)navbar buttonClicked:(int)button
{
    /* Start recording on button press */

    NSURL *url = [ [ NSURL alloc ] initWithString: @"/tmp/rec.amr" ];

    avr = [ [ AVRecorder alloc ] init ];
    [ avr setFilePath: url ];
    [ avr activate: nil ];
    [ avr start ];

    recordSheet = [ [ UIAlertSheet alloc ] initWithFrame:
        CGRectMake(0, 240, 320, 240)
    ];
    [ recordSheet setTitle: @"Now Recording" ];
    [ recordSheet setBodyText:
        @"Sound is now being recorded. Press the button below to stop." ];
    [ recordSheet setDestructiveButton:
        [ recordSheet addButtonWithTitle:@"Stop" ]
    ];
    [ recordSheet setDelegate: self ];
    [ recordSheet presentSheetInView: self ];
}

- (void)dealloc
{
    [ navBar release ];
    [ recordSheet release ];
    [ avr release ];
    [ super dealloc ];
}

@end

What’s Going On

Here’s how the audio recording process works:

  1. The application instantiates through the main( ) function and returns an instance of the application, just like every other application.

  2. The window is created with MainView as the content. The statement creating MainView also calls its initWithFrame: method, which creates the view and the navigation bar.

  3. When the user presses the Record button, the runtime calls the navigationBar:buttonClicked: method. This method creates a new instance of AVRecorder and begins recording. It then creates an alert sheet called recordSheet, which prompts the user to stop the recording.

  4. When the user presses the Stop button on the alert sheet, the runtime automatically calls the alertSheet:buttonClicked: delegate method.

  5. This method stops the recording and dismisses the sheet. The recorder will have written its output to /tmp/rec.amr.

Further Study

Try some of these exercises to get more comfortable with Celestial:

  • Merge the two examples in this section so that the voice recorder will automatically play back the file it has recorded when the user presses stop.

  • Experiment with recording and determine just how much time can be recorded before the iPhone runs out of resources.

  • Check out the AVController.h, AVItem.h, AVQueue.h, and AVRecorder.h prototypes in your tool chain. You’ll find these in /usr/local/arm-apple-darwin/include/Celestial.

Audio Toolbox

The Audio Toolbox framework is new to Leopard, and is available on the desktop and iPhone platforms. As an extension to Core Audio, Audio Toolbox provides many low-level functions for processing sound on a bit stream level. Unlike Core Audio, many of Audio Toolbox’s components can be used on the iPhone. The framework includes many APIs that provide access to the raw data within audio files and many conversion tools.

Unlike many of the frameworks covered in this book so far, the Audio Toolbox framework is predominantly C-oriented. Many references for Audio Toolbox are available on the Apple Developer Connection web site.

Because it exists on the desktop platform, the Audio Toolbox framework is documented fairly well. We won’t cover it in its entirety here, but only the pieces specific to the iPhone. Many pieces of the framework, such as the MIDI controller and Music Player APIs, aren’t relevant to or even available on the iPhone.

The “Other” Audio Queue: For Application-Generated Sound

The Celestial AVQueue class explained in the last section is appropriate for queuing self-contained, prerecorded audio samples. But there has been much demand in the iPhone development community for a facility that can play audio streams generated on the fly by applications such as games.

Such applications can use the Audio Toolbox, which has its own implementation of an audio queue, designed for raw sound data. This is useful for applications that generate their own continuous digital sound stream. The Audio Toolbox queue is entirely independent of Celestial’s controller framework, and works with streams of raw audio data rather than complete files.

Think of the audio queue as a conveyor belt full of boxes. On one end of the conveyor belt, boxes are filled with chunks of sound, and on the other end, they are dumped into the iPhone’s speakers. These boxes represent sound buffers that carry bits around, and the conveyor belt is the audio queue. The conveyor belt dumps your sound into the speakers and then circles back around to have the boxes refilled. It’s your job as the programmer to define the size, type, and number of boxes, and write the software to fill the boxes with sound when needed.

Unlike the Celestial queue, the Audio Toolbox queue is strictly first-in-first-out. While the Celestial queue lets you rearrange audio samples sitting in the queue, the Audio Toolbox conveyor belt plays the samples in the order they are added.

Audio Toolbox’s audio queue works like this:

  1. An audio queue is created and assigned properties that identify the type of sound that will be played (format, sample rate, etc.).

  2. Sound buffers are attached to the queue, which will contain the actual sound frames to be played. Think of a sound frame as a single box full of sound, whereas a sample is a single piece of digital sound within the frame.

  3. The developer supplies a callback function, which the audio queue calls every time a sound buffer has been exhausted. This refills the buffer with the latest sound frames from your application.

Audio queue structure

Because the Audio Toolbox framework uses low-level C interfaces, it has no concept of a class. There are many moving parts involved in setting up an audio queue, and to make our examples more understandable, all of the different variables used will be encapsulated into a single user-defined structure we call AQCallbackStruct.

typedef struct AQCallbackStruct {
    AudioQueueRef queue;
    UInt32 frameCount;
    AudioQueueBufferRef mBuffers[AUDIO_BUFFERS];
    AudioStreamBasicDescription mDataFormat;
} AQCallbackStruct;

The following components are grouped into this structure to service the audio framework:

AudioQueueRef queue

A pointer to the audio queue object your program will create.

UInt32 frameCount

The total number of samples to be copied per audio sync. This is largely up to the implementer.

AudioQueueBufferRef mBuffers

An array containing the total number of sound buffers that will be used. The proper number of elements will be discussed later in the section "Sound buffers.”

AudioStreamBasicDescription mDataFormat

Information about the format of audio that will be played.

Before the audio queue can be created, you have to initialize instances of these variables.

AQCallbackStruct aqc;
aqc.mDataFormat.mSampleRate = 44100.0;
aqc.mDataFormat.mFormatID = kAudioFormatLinearPCM;
aqc.mDataFormat.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger
    | kAudioFormatFlagIsPacked;
aqc.mDataFormat.mBytesPerPacket = 4;
aqc.mDataFormat.mFramesPerPacket = 1;
aqc.mDataFormat.mBytesPerFrame = 4;
aqc.mDataFormat.mChannelsPerFrame = 2;
aqc.mDataFormat.mBitsPerChannel = 16;
aqc.frameCount = 735;

In this example, we prepare a structure for 16-bit (two bytes per sample) stereo sound (two channels) with a sample rate of 44 kHz (44,100 samples per second). Our output will be provided in the form of two two-byte short integers, hence four total bytes per frame (two bytes each for the left and right channels).

The sample rate and frame size dictate how often the iPhone will ask for more sound. With a frequency of 44,100 samples per second, we can make our application sync the sound every 60th of a second by defining a frame size of 735 samples (44,100/60 = 735).
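Expressed as preprocessor definitions (the 60-syncs-per-second figure is simply this chapter’s choice, not a requirement of the framework), the relationship looks like this:

#define SAMPLE_RATE    44100                          /* samples per second  */
#define SYNCS_PER_SEC  60                             /* desired refill rate */
#define FRAME_COUNT    (SAMPLE_RATE / SYNCS_PER_SEC)  /* 735 frames per sync */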

The format we’ll be providing in this example is PCM (raw data), but 27 different formats are available.

kAudioFormatLinearPCM
kAudioFormatMACE6
kAudioFormatTimeCode
kAudioFormatAC3
kAudioFormatULaw
kAudioFormatMIDIStream
kAudioFormat60958AC3
kAudioFormatALaw
kAudioFormatParameterValueStream
kAudioFormatAppleIMA4
kAudioFormatQDesign
kAudioFormatAppleLossless
kAudioFormatMPEG4AAC
kAudioFormatQDesign2
kAudioFormatMPEG4AAC_HE
kAudioFormatMPEG4CELP
kAudioFormatQUALCOMM
kAudioFormatMPEG4AAC_LD
kAudioFormatMPEG4HVXC
kAudioFormatMPEGLayer1
kAudioFormatMPEG4AAC_HE_V2
kAudioFormatMPEG4TwinVQ
kAudioFormatMPEGLayer2
kAudioFormatMPEG4AAC_Spatial
kAudioFormatMACE3
kAudioFormatMPEGLayer3
kAudioFormatAMR

Tip

Not all of these formats may be supported, depending on the software version of your iPhone.

Provisioning audio output

Once the audio queue’s properties have been defined, a new audio queue object can be provisioned. The AudioQueueNewOutput function is responsible for provisioning an output channel and attaching it to the queue. The prototype function looks like this:

AudioQueueNewOutput(
    const AudioStreamBasicDescription *inFormat,
    AudioQueueOutputCallback          inCallbackProc,
    void *                            inUserData,
    CFRunLoopRef                      inCallbackRunLoop,
    CFStringRef                       inCallbackRunLoopMode,
    UInt32                            inFlags,
    AudioQueueRef *                   outAQ);

and can be broken down as follows:

inFormat

The pointer to a structure describing the audio format that will be played. We defined this structure earlier as a member of data type AudioStreamBasicDescription within our AQCallbackStruct structure.

inCallbackProc

The name of a callback function that will be called when the audio queue has an empty buffer that needs data.

inUserData

A pointer to data the developer can optionally pass to the callback function. It will contain a pointer to the instance of the user-defined AQCallbackStruct structure, which should contain information about the audio queue as well as any information relevant to the application about the samples being played.

inCallbackRunLoopMode

Tells the audio queue how it should expect to loop the audio. When NULL is specified, the callback function runs whenever a sound buffer becomes exhausted. Additional modes are available to run the callback under other conditions.

inFlags

Not used; reserved.

outAQ

When the AudioQueueNewOutput function returns, this pointer will be set to the newly created audio queue. The presence of this argument allows an error code to be used as the return value of the function.

An actual call to this function, using the audio queue structure created earlier, looks like this:

AudioQueueNewOutput(&aqc.mDataFormat,
    AQBufferCallback,
    &aqc,
    NULL,
    kCFRunLoopCommonModes,
    0,
    &aqc.queue);

In this example, the name of the callback function was specified as AQBufferCallback. This function will be created in the next few sections. It is the function that will be responsible for taking sound output from your application and copying it to a sound buffer.

Sound buffers

A sound buffer contains sound data in transit to the output device. Going back to our box-on-a-conveyor-belt concept, the buffer is the box that carries your sound to the speakers. If you don’t have enough sound to fill the box, it ends up going to the speakers incomplete, which could lead to gaps in the audio. The more boxes you have, the more sound you can queue up in advance to avoid running out (or running slow). The downside is that it also takes longer for the sound at the speaker end to catch up to the sound coming from the application. This could be problematic if the character in your game jumps, but the user doesn’t hear it until after he’s landed.

When the sound is ready to start, sound buffers are created and primed with the first frames of your application’s sound output. The minimum number of buffers needed to start playback on an Apple desktop is only one, but on the iPhone it is three. In applications that might cause high CPU usage, it may be appropriate to use even more buffers to prevent under-runs. To prepare the buffers with the first frames of sound data, each buffer is primed in the order it is created. This means that by the time you prime the buffers, you’d better have some sound to fill them with.

#define AUDIO_BUFFERS 3

unsigned long bufferSize;
int i;

bufferSize = aqc.frameCount * aqc.mDataFormat.mBytesPerFrame;
for (i = 0; i < AUDIO_BUFFERS; i++) {
    AudioQueueAllocateBuffer(aqc.queue,
        bufferSize, &aqc.mBuffers[i]);
    AQBufferCallback(&aqc, aqc.queue, aqc.mBuffers[i]);
}

When this code executes, the audio buffers are filled with the first frames of sound data from your application. The queue is now ready to be activated, which turns on the conveyor belt sending the sound buffers to the speakers. As this occurs, the buffers are emptied of their contents (no, memory isn’t zeroed) and the boxes come back around the conveyor belt for a refill.

AudioQueueStart(aqc.queue, NULL);

Later on, when you’re ready to turn off the sound queue, just use the AudioQueueDispose function, and everything stops:

AudioQueueDispose(aqc.queue, true);

Callback function

The audio queue is now running, and every 60th of a second, the application is asked to fill a new sound buffer with data. What hasn’t been explained yet is how this happens. After a buffer is emptied and is ready to be refilled, the audio queue calls the callback function you specified as the second argument to AudioQueueNewOutput. This callback function is where the application does its work; it fills the box that carries your output sound to the speakers. You have to call it before starting the queue in order to prime the sound buffers with some initial sound. The queue then calls the function each time a buffer needs to be refilled. When called, you’ll fill the audio queue buffer that is passed in by copying the latest sound frame from your application—in our example, 735 samples.

static void AQBufferCallback(
    void *aqc,
    AudioQueueRef inQ,
    AudioQueueBufferRef outQB)
{

The callback structure you created at the very beginning, aqc, is passed as a user-defined argument, followed by pointers to the audio queue itself and the audio queue buffer to be filled.

    AQCallbackStruct *inData = (AQCallbackStruct *)aqc;

Because the AQCallbackStruct structure is considered user data, it’s supplied to the callback function as a void pointer, and needs to be cast back to an AQCallbackStruct structure (here, named inData) before it can be accessed. This code grabs a pointer to the raw audio data inside the buffer so that the application can write its sound into it.

    short *CoreAudioBuffer = (short *) outQB->mAudioData;

The CoreAudioBuffer variable represents the space inside the sound buffer where your application’s raw samples will be copied at every sync. Your application will need to maintain a type of “record needle” to keep track of what sound has already been sent to the audio queue.

    if (inData->frameCount > 0) {

The frameCount variable is the number of frames that the buffer is expecting to see. This should be equivalent to the frameCount that was supplied in the AQCallbackStruct structure—in our example, 735.

        outQB->mAudioDataByteSize = 4 * inData->frameCount;

This is where you tell the buffer exactly how much data it’s going to get: a packing list for the box. The total output buffer size should be equivalent to the size of both stereo channels (two bytes per channel = four bytes) multiplied by the number of frames sent (735).

        for(i = 0 ; i < inData->frameCount * 2; i += 2) {
            CoreAudioBuffer[i]   =  (  LEFT CHANNEL DATA );
            CoreAudioBuffer[i+1] =  ( RIGHT CHANNEL DATA );
        }

Here, the callback function steps through each output frame in the buffer and copies the data from what will be your application’s outputted sound into CoreAudioBuffer. Because the left and right channels are interleaved, the loop will have to account for this by skipping in increments of two.

        AudioQueueEnqueueBuffer(inQ, outQB, 0, NULL);
   } /* if (inData->frameCount > 0) */
} /* AQBufferCallback */

Finally, once the frame has been copied into the sound buffer, it’s placed back onto the play queue.

Example: PCM Player

Because the Audio Toolbox framework lives in C land, this is a good opportunity to show an example for the iPhone that doesn’t use Objective-C or the UIKit framework. This example uses good old-fashioned C and is run on the command line with a filename. It loads a raw PCM file and then plays it using the Audio Toolbox’s audio queue. Because your application will likely be generating data internally, and not use a file, we’ll read the file into a memory buffer first and then play from the memory buffer to illustrate the practical concept. This should set the stage for most applications to hook into this same architecture.

Because a raw PCM file doesn’t contain any information about its frequency or frame size, this example has to assume its own. We’ll use a format for 16-bit, 44 kHz, mono, uncompressed PCM data, as defined by the three definitions at the top of the program:

   #define BYTES_PER_SAMPLE 2        /* 16-bit = 2 bytes per sample */
   #define SAMPLE_RATE 44100         /* 44,100 samples per second = 44 kHz */
   typedef unsigned short sampleFrame;   /* one two-byte sample */

If you can’t find a raw PCM file to run this example with, you can use a .wav file as long as it’s encoded as 16-bit 44 kHz raw PCM. Alternatively, you may adapt this example to use a different encoding by changing mFormatID within the audio queue structure. The example makes no attempt to parse a .wav file’s headers; it just assumes the data you’re providing is raw, which is what a game or other type of application would provide. The wave file’s header will be passed to the audio channel with the rest of the data, so you might hear a slight click or two of junk before the raw sound data inside the file is played.
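If the click bothers you, one hedged workaround (assuming the common 44-byte canonical WAV header, which not every file uses) is to skip past the header in main( ) before handing the buffer to playbuffer( ); memcmp is declared in <string.h>:

#define WAV_HEADER_SIZE 44

unsigned char *data = pcmbuffer;
if (len > WAV_HEADER_SIZE && memcmp(data, "RIFF", 4) == 0)
    ret = playbuffer(data + WAV_HEADER_SIZE, len - WAV_HEADER_SIZE);
else
    ret = playbuffer(data, len);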

To compile this example with the tool chain, use the command line:

$ arm-apple-darwin-gcc -o playpcm playpcm.c \
  -framework AudioToolbox -framework CoreAudio -framework CoreFoundation

Because Leopard also includes the Audio Toolbox framework, this example can be compiled on the desktop as well.

$ gcc -o playpcm playpcm.c \
  -framework AudioToolbox -framework CoreAudio -framework CoreFoundation

Example 6-7 contains the code.

Example 6-7. Audio Toolbox example (playpcm.c)
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <errno.h>
#include <sys/stat.h>
#include <AudioToolbox/AudioQueue.h>

#define BYTES_PER_SAMPLE 2
#define SAMPLE_RATE 44100
typedef unsigned short sampleFrame;

#define FRAME_COUNT 735
#define AUDIO_BUFFERS 3

typedef struct AQCallbackStruct {
    AudioQueueRef queue;
    UInt32 frameCount;
    AudioQueueBufferRef mBuffers[AUDIO_BUFFERS];
    AudioStreamBasicDescription mDataFormat;
    UInt32 playPtr;
    UInt32 sampleLen;
    sampleFrame *pcmBuffer;
} AQCallbackStruct;

void *loadpcm(const char *filename, unsigned long *len);
int playbuffer(void *pcm, unsigned long len);
void AQBufferCallback(void *in, AudioQueueRef inQ, AudioQueueBufferRef outQB);

int main(int argc, char *argv[]) {
    char *filename;
    unsigned long len;
    void *pcmbuffer;
    int ret;

    if (argc < 2) {
        fprintf(stderr, "Syntax: %s [filename]\n", argv[0]);
        exit(EXIT_FAILURE);
    }

    filename = argv[1];
    pcmbuffer = loadpcm(filename, &len);
    if (!pcmbuffer) {
        fprintf(stderr, "%s: %s\n", filename, strerror(errno));
        exit(EXIT_FAILURE);
    }

    ret = playbuffer(pcmbuffer, len);
    free(pcmbuffer);
    return ret;
}

void *loadpcm(const char *filename, unsigned long *len) {
    FILE *file;
    struct stat s;
    void *pcm;

    if (stat(filename, &s))
        return NULL;
    *len = s.st_size;
    pcm = (void *) malloc(s.st_size);
    if (!pcm)
        return NULL;
    file = fopen(filename, "rb");
    if (!file) {
        free(pcm);
        return NULL;
    }
    fread(pcm, s.st_size, 1, file);
    fclose(file);
    return pcm;
}

int playbuffer(void *pcmbuffer, unsigned long len) {
    AQCallbackStruct aqc;
    UInt32 err, bufferSize;
    int i;

    aqc.mDataFormat.mSampleRate = SAMPLE_RATE;
    aqc.mDataFormat.mFormatID = kAudioFormatLinearPCM;
    aqc.mDataFormat.mFormatFlags =
        kLinearPCMFormatFlagIsSignedInteger
        | kAudioFormatFlagIsPacked;
    aqc.mDataFormat.mBytesPerPacket = 4;
    aqc.mDataFormat.mFramesPerPacket = 1;
    aqc.mDataFormat.mBytesPerFrame = 4;
    aqc.mDataFormat.mChannelsPerFrame = 2;
    aqc.mDataFormat.mBitsPerChannel = 16;
    aqc.frameCount = FRAME_COUNT;
    aqc.sampleLen = len / BYTES_PER_SAMPLE;
    aqc.playPtr = 0;
    aqc.pcmBuffer = pcmbuffer;

    err = AudioQueueNewOutput(&aqc.mDataFormat,
        AQBufferCallback,
        &aqc,
        NULL,
        kCFRunLoopCommonModes,
        0,
        &aqc.queue);
    if (err)
        return err;

    aqc.frameCount = FRAME_COUNT;
    bufferSize = aqc.frameCount * aqc.mDataFormat.mBytesPerFrame;
    for (i=0; i<AUDIO_BUFFERS; i++) {
        err = AudioQueueAllocateBuffer(aqc.queue, bufferSize,
            &aqc.mBuffers[i]);
        if (err)
            return err;
        AQBufferCallback(&aqc, aqc.queue, aqc.mBuffers[i]);
    }

    err = AudioQueueStart(aqc.queue, NULL);
    if (err)
        return err;

    /* Wait until the callback has consumed the entire sample. */
    while (aqc.playPtr < aqc.sampleLen) { usleep(10000); }
    sleep(1);
    return 0;
}

void AQBufferCallback(
    void *in,
    AudioQueueRef inQ,
    AudioQueueBufferRef outQB)
{
    AQCallbackStruct *aqc;
    short *coreAudioBuffer;
    short sample;
    int i;

    aqc = (AQCallbackStruct *) in;
    coreAudioBuffer = (short*) outQB->mAudioData;

    printf("Sync: %ld / %ld
", aqc->playPtr, aqc->sampleLen);
    if (aqc->playPtr >= aqc->sampleLen) {
        AudioQueueDispose(aqc->queue, true);
        return;
    }

    if (aqc->frameCount > 0) {
        outQB->mAudioDataByteSize = 4 * aqc->frameCount;
        for(i=0; i<aqc->frameCount*2; i+=2) {
            if (aqc->playPtr > aqc->sampleLen || aqc->playPtr < 0)
                sample = 0;
            else
                sample = (aqc->pcmBuffer[aqc->playPtr]);
            coreAudioBuffer[i] =   sample;
            coreAudioBuffer[i+1] = sample;
            aqc->playPtr++;
        }
        AudioQueueEnqueueBuffer(inQ, outQB, 0, NULL);
    }
}

What’s Going On

Here’s how the playpcm program works:

  1. The application’s main( ) function is called on program start, which extracts the filename from the argument list (as supplied on the command line).

  2. The main( ) function calls loadpcm( ), which determines the length of the audio file and loads it into memory, returning this buffer to main( ).

  3. The playbuffer( ) function is called with the contents of this memory and its length. This function builds our user-defined AQCallbackStruct structure, whose construction is declared at the beginning of the program. This structure holds pointers to the audio queue, sound buffers, and the memory containing the contents of the file that was loaded. It also contains the sample’s length and an integer called playPtr, which acts as record needle, identifying the last sample that was copied into the sound buffer.

  4. A new sound queue is initialized and started. The callback function is called once for each sound buffer used, to sync the first samples into memory. The audio queue is then started. The program then sits and sleeps until the sample is finished playing.

  5. As audio is played, the sound buffers become exhausted one by one. Whenever a buffer needs more sound data, the AQBufferCallback function is called.

  6. The AQBufferCallback function increments playPtr and copies the next sound frames from memory to be played into the sound buffer. Because raw PCM samples are mono, the same data is copied into the left and right output channels.

  7. When playPtr exceeds the length of the sound sample, this breaks the wait loop set up in playpcm( ), causing the function to return back to main( ) for cleanup and exit.

Further Study

  • Modify this example to play 8-bit PCM sound by changing the data type for sampleFrame and BYTES_PER_SAMPLE. You’ll also need to amplify the volume as the sound sample is now one byte large, but the audio queue channel is two bytes large.

  • Check out AudioQueue.h in Mac OS X Leopard on the desktop. This can be found in /System/Library/Frameworks/AudioToolbox.framework/Headers/.

Volume Control

Samples played through the Celestial framework are played at a high enough level to automatically track with the system volume. The Audio Toolbox framework, however, operates at a low enough level that it is oblivious to the system volume, so its output plays at a fixed level regardless of what the iPhone’s volume is set to. To control the volume of an audio queue, the developer must read the volume through higher-level functions and scale the sound stream to track with it.

Audio Toolbox and Celestial meet when you manage high-level settings such as sound volume. Sound volume is a function of mediaserverd, the audio daemon you were introduced to earlier that hogs the Core Audio device. This daemon is largely married to the Celestial framework, so the Celestial framework can be used to read the volume and intercept volume button presses.

Before covering how to read the volume, it’s important to discuss what to do with it when playing through Audio Toolbox. The volume is reported as a value between 0.0 and 1.0 (that is, 0% and 100%). In the callback function used in the previous section, sound frames were copied from the application’s output into sound buffers whenever a sync occurred. If we assume that the application has retrieved the current volume and stored it in a variable named _volume, the following code shows how that routine changes (the two lines that multiply each sample by _volume) to incorporate the user’s volume preference.

        for(i=0; i<aqc->frameCount*2; i+=2) {
            if (aqc->playPtr > aqc->sampleLen || aqc->playPtr < 0)
                sample = 0;
            else
                sample = (aqc->pcmBuffer[aqc->playPtr]);
            coreAudioBuffer[i] =   sample * _volume;
            coreAudioBuffer[i+1] = sample * _volume;
            aqc->playPtr++;
        }

In other words, when the volume is at its maximum, the actual sample data is being played (e.g., sample * 1.0). When the volume is at any other setting, the sample value is multiplied by the volume setting’s value so that it is decreased by the factor of the volume. If you wanted the maximum volume to be louder, you could just multiply _volume by a factor of two or three, although this would run the risk of overdriving your audio.
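If you do boost the signal, it’s wise to clamp the scaled sample to the 16-bit range so that loud passages don’t wrap around; a sketch (the boost factor here is arbitrary):

float boost = 2.0;                     /* arbitrary amplification factor */
int scaled = (int) (sample * _volume * boost);
if (scaled >  32767) scaled =  32767;  /* clamp to the 16-bit signed range */
if (scaled < -32768) scaled = -32768;
coreAudioBuffer[i]   = (short) scaled;
coreAudioBuffer[i+1] = (short) scaled;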

Reading the volume

Audio Toolbox lives in C land, while Celestial requires an Objective-C context. To merge the two worlds, a global variable is the easiest way to allow the two to communicate data. In this example, this global variable is called _volume.

Celestial delegates volume and ringer control to the AVSystemController class. To read the volume from Celestial, obtain the shared instance of this class:

NSString *audioDeviceName;
float _volume;
AVSystemController *avs =
    [ AVSystemController sharedAVSystemController ];
[ avs getActiveCategoryVolume: &_volume andName: &audioDeviceName ];

When the getActiveCategoryVolume method is called, it sets the value of _volume to the current volume, in the range 0.0 through 1.0. Provided _volume is a global variable, the Audio Toolbox code will pick up the new value automatically, causing future sample frames to be multiplied by it.

Volume change notifications

Using the getActiveCategoryVolume method as a one-time read is useful for setting the output volume when the program starts, but won’t change anything if the volume is adjusted while using the application. To accomplish this, add an observer to the application. The observer monitors specific system events and notifies the given method when the event occurs.

    [ [ NSNotificationCenter defaultCenter ] addObserver: self
        selector:@selector(volumeChange:)
        name: @"AVSystemController_SystemVolumeDidChangeNotification"
        object: avs ];

This code sets a method named volumeChange as the observer for system volume changes. The volumeChange method is then defined in the calling class:

- (void)volumeChange:(NSNotification *)notification {
    AVSystemController *avsc = [ notification object ];
    NSString *audioDeviceName;
    [ avsc getActiveCategoryVolume:&_volume
          andName:&audioDeviceName ];
}

When the volumeChange method is notified, an AVSystemController object is passed in with the notification. This object is then used to reread the volume into _volume, where it will be picked up by the AQBufferCallback function feeding the audio queue.

Example: What’s My Volume?

This example builds on the “Hello, World!” example from Chapter 3, except that the volume is printed instead of a hokey greeting message. When one of the volume buttons is pressed, the observer we set up notifies the volumeChange method, which rereads the volume and updates the text.

This example can be compiled using the following command line. You’ll need to link in the Celestial, Core Audio, and Audio Toolbox frameworks.

$ arm-apple-darwin-gcc -o MyExample MyExample.m -lobjc \
  -framework CoreFoundation -framework Foundation -framework UIKit \
  -framework Celestial -framework AudioToolbox -framework CoreAudio

Example 6-8 and Example 6-9 show the code.

Example 6-8. Volume example (MyExample.h)
#import <CoreFoundation/CoreFoundation.h>
#import <UIKit/UIKit.h>
#import <UIKit/UITextView.h>
#import <Celestial/AVSystemController.h>

@interface MainView : UIView
{
    UITextView         *textView;
    AVSystemController *avs;
}

- (id)initWithFrame:(CGRect)frame;
- (void)dealloc;
- (void)displayVolume;

@end

@interface MyApp : UIApplication
{
    UIWindow *window;
    MainView *mainView;
}

- (void)applicationDidFinishLaunching:(NSNotification *)aNotification;
@end
Example 6-9. Volume example (MyExample.m)
#import "MyExample.h"

float _volume;

int main(int argc, char **argv)
{
   NSAutoreleasePool *autoreleasePool = [
        [ NSAutoreleasePool alloc ] init
    ];
    int returnCode = UIApplicationMain(argc, argv, [ MyApp class ]);
    [ autoreleasePool release ];
    return returnCode;
}

@implementation MyApp

- (void)applicationDidFinishLaunching:(NSNotification *)aNotification {
    window = [ [ UIWindow alloc ] initWithContentRect:
        [ UIHardware fullScreenApplicationContentRect ]
    ];

    CGRect rect = [ UIHardware fullScreenApplicationContentRect ];
    rect.origin.x = rect.origin.y = 0.0f;

    mainView = [ [ MainView alloc ] initWithFrame: rect ];

    [ window setContentView: mainView ];
    [ window orderFront: self ];
    [ window makeKey: self ];
    [ window _setHidden: NO ];
}
@end

@implementation MainView
- (id)initWithFrame:(CGRect)rect {

    if ((self = [ super initWithFrame: rect ]) != nil) {
        NSString *audioDeviceName;

        avs = [ AVSystemController sharedAVSystemController ];
        [ avs getActiveCategoryVolume:&_volume andName:
          &audioDeviceName ];

        textView = [ [ UITextView alloc ] initWithFrame: rect ];
        [ textView setTextSize: 18 ];
        [ self displayVolume ];
        [ self addSubview: textView ];

        [ [ NSNotificationCenter defaultCenter ] addObserver: self
            selector:@selector(volumeChange:)
            name:
              @"AVSystemController_SystemVolumeDidChangeNotification"
            object: avs ];
    }

    return self;
}

- (void)displayVolume
{
    NSString *text;

    text = [ NSString stringWithFormat: @"Volume is set to %f", _volume ];
    [ textView setText: text ];
}

- (void)volumeChange:(NSNotification *)notification {
    AVSystemController *avsc = [ notification object ];
    NSString *audioDeviceName;

    [ avsc getActiveCategoryVolume:&_volume
          andName:&audioDeviceName ];
    [ self displayVolume ];
}

- (void)dealloc
{
    [ [ NSNotificationCenter defaultCenter ] removeObserver: self ];
    [ textView release ];
    [ super dealloc ];
}

@end

What’s Going On

The volume example’s process flow is as follows:

  1. When the application instantiates, a MainView object gets created and its initWithFrame method is called.

  2. The initWithFrame method creates an instance of AVSystemController and registers an observer so that the volumeChange method is notified of system volume changes.

  3. The volume is initially read once, and the text view is displayed.

  4. When the user presses one of the volume buttons on the side of the phone, the observer notifies the volumeChange method.

  5. The volumeChange method reads the new system volume and calls a method named displayVolume to update the output text.

Further Study

Check out the AVSystemController.h prototype in your tool chain’s include directory. This can be found in /usr/local/arm-apple-darwin/include/Celestial.
