L3.3 RECORDING
The API that supplies audio recording capability is found in android.media.AudioRecord.
A reference implementation of this API appears within the WaveRecorder class of the sampling
code. To determine which sampling rates are supported by an Android device, the function
WaveRecorder.checkSamplingRate() is called upon app initialization. This function initializes
the AudioRecord API with a preset list of sampling rates. If a sampling rate is not supported, an
error occurs and that particular sampling rate is not added to the list of supported sampling rates.
Supported sampling rates, as determined by the function checkSamplingRate, are listed in the app
settings menu; a sketch of this probing approach appears below.
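The following is a minimal sketch of such a probing routine, not the actual WaveRecorder implementation; the candidate rate list and the use of getMinBufferSize and getState to detect unsupported rates are assumptions based on the description above.

import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaRecorder;
import java.util.ArrayList;
import java.util.List;

public class SamplingRateChecker {
    // Candidate rates to probe (assumed preset list).
    private static final int[] CANDIDATE_RATES = {8000, 11025, 16000, 22050, 44100, 48000};

    public static List<Integer> checkSamplingRate() {
        List<Integer> supported = new ArrayList<>();
        for (int rate : CANDIDATE_RATES) {
            int minSize = AudioRecord.getMinBufferSize(rate,
                    AudioFormat.CHANNEL_IN_MONO,
                    AudioFormat.ENCODING_PCM_16BIT);
            // getMinBufferSize returns an error code for unsupported parameters.
            if (minSize == AudioRecord.ERROR_BAD_VALUE || minSize == AudioRecord.ERROR) {
                continue;
            }
            try {
                // Instantiating the recorder confirms the rate is actually usable.
                AudioRecord probe = new AudioRecord(MediaRecorder.AudioSource.CAMCORDER,
                        rate, AudioFormat.CHANNEL_IN_MONO,
                        AudioFormat.ENCODING_PCM_16BIT, minSize);
                if (probe.getState() == AudioRecord.STATE_INITIALIZED) {
                    supported.add(rate);
                }
                probe.release();
            } catch (IllegalArgumentException e) {
                // Unsupported configuration on this device; skip this rate.
            }
        }
        return supported;
    }
}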
To initialize the recorder, the size of the data buffer to be used by the recorder is first
computed by calling the function getMinBufferSize as follows:
int BufferLength = AudioRecord.getMinBufferSize(FS, CHANNELS, FORMAT);
where FS specifies the desired sampling rate, CHANNELS is the channel configuration (either
stereo or mono), and FORMAT specifies 16-bit or 8-bit PCM audio data.
The return value of this function is the minimum size of the buffer in bytes. It is important to
mention that the size of this buffer depends on the Android target and will vary from target
to target. To ensure that no audio data is lost, this value can be scaled up so that there is
enough overhead, but it must be at least the size returned by getMinBufferSize.
Scaling up the size of this buffer also affects the latency of the recording; larger buffers
cause increased delay before the sampled audio is available for processing. The same is true for
the AudioTrack API; increasing the output buffer size increases the delay before the processed
audio is output to the smartphone speaker. This BufferLength value is used to instantiate the
AudioRecord object:
AudioRecord record = new AudioRecord(SOURCE, FS, CHANNELS, FORMAT,
BufferLength);
The values used for FS, CHANNELS, and FORMAT should match those used to calculate
the buffer length. There are several options for the SOURCE parameter, detailed at
http://developer.android.com/reference/android/media/MediaRecorder.AudioSource.html.
In most cases it should be specified as AudioSource.CAMCORDER, as this ensures that the
built-in filters for noise reduction and gain correction for voice calls are not active during
the recording. A sketch of the full initialization is given below.
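Putting the above together, a minimal initialization sketch might look as follows; the scale factor of two on the minimum buffer size and the variable names are illustrative assumptions, not part of the reference code.

import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaRecorder;

public class RecorderSetup {
    /** Creates an AudioRecord using the parameters discussed above.
     *  The factor of 2 applied to the minimum buffer size is an assumed choice. */
    public static AudioRecord createRecorder(int fs) {
        final int CHANNELS = AudioFormat.CHANNEL_IN_MONO;        // mono input
        final int FORMAT = AudioFormat.ENCODING_PCM_16BIT;       // 16-bit PCM samples
        final int SOURCE = MediaRecorder.AudioSource.CAMCORDER;  // bypasses voice-call filters

        // Minimum buffer size in bytes for this target, scaled up for overhead;
        // larger buffers increase recording latency.
        int bufferLength = 2 * AudioRecord.getMinBufferSize(fs, CHANNELS, FORMAT);

        AudioRecord record = new AudioRecord(SOURCE, fs, CHANNELS, FORMAT, bufferLength);
        if (record.getState() != AudioRecord.STATE_INITIALIZED) {
            // The chosen parameters are not supported on this device.
            throw new IllegalStateException("AudioRecord initialization failed");
        }
        return record;
    }
}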
The audio recorder does not begin to accumulate data when it is instantiated and must
be controlled explicitly. To begin collecting data, the function record.startRecording() is called.
The recorder object is then polled to retrieve audio data by calling one of the read functions. If
audio is not read from the recorder at a sufficient rate, the internal buffer will overflow. The read
function must be supplied with a buffer to write data into, an offset into that buffer, and the
number of samples (or bytes) to read; a sketch of such a polling loop is given below.
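The following is an illustrative polling loop rather than the WaveRecorder implementation; the frame size, stop flag, and the placement of the processing step are assumptions.

import android.media.AudioRecord;

public class RecordingLoop {
    private volatile boolean isRecording = false;  // flag toggled from another thread (assumed)

    /** Reads 16-bit PCM samples from an initialized AudioRecord until stopRecording() is called. */
    public void run(AudioRecord record, int frameSize) {
        short[] buffer = new short[frameSize];     // assumed per-read frame size
        isRecording = true;
        record.startRecording();                   // recorder now begins accumulating audio data
        while (isRecording) {
            // read(buffer, offsetInShorts, sizeInShorts) returns the number of samples read
            // or a negative error code; polling too slowly overflows the internal buffer.
            int samplesRead = record.read(buffer, 0, buffer.length);
            if (samplesRead < 0) {
                break;
            }
            // ... hand the samplesRead samples to the processing code here ...
        }
        record.stop();                             // stop recording
        record.release();                          // free the native recorder resources
    }

    public void stopRecording() {
        isRecording = false;
    }
}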