The code that is executed first is contained in the main.m file. It returns UIApplicationMain from within an @autoreleasepool block, which handles the automatic releasing of memory allocated by the app.
#import <UIKit/UIKit.h>
#import "AppDelegate.h"
#import "IosAudioController.h"

int main(int argc, char * argv[]) {
    // Create the global audio controller before the app starts
    iosAudio = [[IosAudioController alloc] init];
    @autoreleasepool {
        return UIApplicationMain(argc, argv, nil,
                                 NSStringFromClass([AppDelegate class]));
    }
}
UIApplicationMain first calls on AppDelegate to handle the app's lifecycle.
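For context, a minimal AppDelegate might look like the following sketch. The didFinishLaunchingWithOptions method is the standard UIApplicationDelegate entry point; the assumption here, which may differ from this lab's exact code, is that the storyboard sets up the window and root view controller.

#import "AppDelegate.h"

@implementation AppDelegate

// Called once the app has finished launching; a typical place for
// one-time setup. This minimal sketch assumes the storyboard creates
// the window and root view controller, so nothing extra is needed.
- (BOOL)application:(UIApplication *)application
    didFinishLaunchingWithOptions:(NSDictionary *)launchOptions {
    return YES;
}

@end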
L4.3 RECORDING
The API that supplies the audio recording capability is part of the AudioToolbox framework. It can be included in a file using the #import directive. To specify the audio format to be used by the app, an instance of AudioStreamBasicDescription needs to be created. This structure describes the audio data to be processed.
AudioStreamBasicDescription audioFormat;
audioFormat.mSampleRate = 44100.0;
audioFormat.mFormatID = kAudioFormatLinearPCM;
audioFormat.mFormatFlags = (kAudioFormatFlagIsSignedInteger |
                            kAudioFormatFlagIsPacked);
audioFormat.mFramesPerPacket = 1;
audioFormat.mChannelsPerFrame = 1;
audioFormat.mBitsPerChannel = 16;
audioFormat.mBytesPerPacket = 2;
audioFormat.mBytesPerFrame = 2;
The above specification indicates to the device that the audio data being handled has a sampling rate of 44.1 kHz, is linear PCM, and is packed as signed 16-bit integers. The audio is mono, with one frame per packet. The number of frames delivered per buffer is not set here because the hardware determines the buffer size at runtime.
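Before the audio unit is initialized, this structure is applied to it with AudioUnitSetProperty. The following is a minimal sketch; audioUnit and kInputBus (bus 1, the hardware microphone on a typical remote I/O unit) are assumptions taken from a common setup, not necessarily this lab's exact names.

// Apply the format to the output scope of the microphone bus so that
// recorded samples are delivered in this format. audioUnit and
// kInputBus are assumed from the lab's IosAudioController setup.
OSStatus status = AudioUnitSetProperty(audioUnit,
                                       kAudioUnitProperty_StreamFormat,
                                       kAudioUnitScope_Output,
                                       kInputBus,
                                       &audioFormat,
                                       sizeof(audioFormat));
checkStatus(status);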
This description is used to initialize the Audio Unit that will handle the audio I/O. The audio data itself is handled by a separate C function called a callback, which is shown as follows.
static OSStatus recordingCallback(void *inRefCon,
                                  AudioUnitRenderActionFlags *ioActionFlags,
                                  const AudioTimeStamp *inTimeStamp,
                                  UInt32 inBusNumber,
                                  UInt32 inNumberFrames,
                                  AudioBufferList *ioData) {
    // Because of the way the audio format (set up above) is chosen:
    // - only 1 buffer is needed, since the audio is mono
    // - samples are 16 bits = 2 bytes
    // - 1 frame contains exactly 1 sample
    AudioBuffer buffer;
    buffer.mNumberChannels = 1;
    buffer.mDataByteSize = inNumberFrames * 2;
    buffer.mData = malloc(inNumberFrames * 2);

    // Put the buffer in an AudioBufferList
    AudioBufferList bufferList;
    bufferList.mNumberBuffers = 1;
    bufferList.mBuffers[0] = buffer;

    if (getMic()) {
        // Obtain the recorded samples from the microphone
        OSStatus status;
        status = AudioUnitRender([iosAudio audioUnit],
                                 ioActionFlags,
                                 inTimeStamp,
                                 inBusNumber,
                                 inNumberFrames,
                                 &bufferList);
        checkStatus(status);

        // The samples just read now sit in bufferList;
        // move them into the circular buffer for processing
        TPCircularBufferProduceBytes(inBuffer,
                                     (void *)bufferList.mBuffers[0].mData,
                                     bufferList.mBuffers[0].mDataByteSize);
        if (inBuffer->fillCount >= getFrameSize() * sizeof(short)) {
            [iosAudio processStream];
        }
    } else {
        // Read samples from an audio file instead of the microphone
        UInt32 frameCount = getFrameSize();
        OSStatus err = ExtAudioFileRead(fileRef, &frameCount, &bufferList);
        CheckError(err, "File Read");
        if (frameCount > 0) {
            AudioBuffer audioBuffer = bufferList.mBuffers[0];
            TPCircularBufferProduceBytes(inBuffer, audioBuffer.mData,
                                         audioBuffer.mDataByteSize);
            if (inBuffer->fillCount >= getFrameSize() * sizeof(short)) {
                [iosAudio processStream];
            }
        } else {
            // End of file: stop the audio unit and re-enable the UI
            getTime();
            [iosAudio stop];
            enableButtons();
        }
    }

    // Release the malloc'ed data in the buffer created earlier
    free(bufferList.mBuffers[0].mData);
    return noErr;
}
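The helpers checkStatus and CheckError used above are not defined in this listing; they are assumed to come from the lab's support code. A minimal sketch of such a helper might be:

// Error-reporting helper (assumed; a real version might also halt).
static void checkStatus(OSStatus status) {
    if (status != noErr) {
        NSLog(@"Audio unit error: status = %d", (int)status);
    }
}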
As the device collects audio data, the recording callback is invoked. The audio samples are collected and stored in a software circular buffer. Once a full frame of samples has accumulated, the processStream method is used to process the audio samples in the software buffer.
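The consumer side of the circular buffer is not shown in this lab. The following is a minimal sketch of what processStream might do, using the TPCircularBuffer API (TPCircularBufferTail and TPCircularBufferConsume); doFrameProcessing is a hypothetical stand-in for the lab's actual signal processing routine.

// A minimal sketch of processStream (assumed, not the lab's exact
// code): drain one frame of samples from the circular buffer and
// hand it to a processing routine.
- (void)processStream {
    int32_t availableBytes;
    // Pointer to the oldest unread bytes in the circular buffer
    short *samples = (short *)TPCircularBufferTail(inBuffer,
                                                   &availableBytes);
    int32_t frameBytes = (int32_t)(getFrameSize() * sizeof(short));
    if (samples != NULL && availableBytes >= frameBytes) {
        // doFrameProcessing is a hypothetical frame-processing function
        doFrameProcessing(samples, getFrameSize());
        // Mark the frame as consumed so its space can be reused
        TPCircularBufferConsume(inBuffer, frameBytes);
    }
}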