Aside from using an intent to launch the sound recorder and using the MediaRecorder, Android offers a third method to capture audio, using a class called AudioRecord. AudioRecord is the most flexible of the three methods in that it gives us access to the raw audio stream, but it has the fewest built-in capabilities; for example, it does not automatically compress the audio.
The basics for using AudioRecord are straightforward. We simply need to construct an object of type AudioRecord, passing in various configuration parameters.
The first value we'll need to specify is the audio source. The values for use here are the same ones we used for the MediaRecorder and are defined in MediaRecorder.AudioSource. Essentially this means that we have MediaRecorder.AudioSource.MIC available to us.
int audioSource = MediaRecorder.AudioSource.MIC;
The next value that we'll need to specify is the sample rate of the recording. This should be specified in Hz (hertz), which here means the number of samples per second. As we know, the MediaRecorder samples audio at 8 kHz, or 8,000 Hz, while CD-quality audio is typically 44.1 kHz, or 44,100 Hz. Different Android handset hardware will be able to sample at different sample rates. For our example application, we'll sample at 11,025 Hz, which is another commonly used sample rate.
int sampleRateInHz = 11025;
Next, we need to specify the number of channels of audio to capture. The constants for this parameter are specified in the AudioFormat class and are self-explanatory.
AudioFormat.CHANNEL_CONFIGURATION_MONO
AudioFormat.CHANNEL_CONFIGURATION_STEREO
AudioFormat.CHANNEL_CONFIGURATION_INVALID
AudioFormat.CHANNEL_CONFIGURATION_DEFAULT
We'll use a mono configuration for now.
int channelConfig = AudioFormat.CHANNEL_CONFIGURATION_MONO;
Following that, we need to specify the audio format. The possibilities here are also specified in the AudioFormat class.
AudioFormat.ENCODING_DEFAULT
AudioFormat.ENCODING_INVALID
AudioFormat.ENCODING_PCM_16BIT
AudioFormat.ENCODING_PCM_8BIT
Among these four, our choices boil down to PCM 16-bit and PCM 8-bit. PCM stands for Pulse Code Modulation, which is essentially the raw audio samples. We can therefore set the resolution of each sample to be 16 bits or 8 bits. Sixteen-bit samples take up more space and processing power, but they represent the audio more faithfully.
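To make that storage tradeoff concrete, we can compute the raw data rate ourselves. The bytesPerSecond helper below is our own illustration, not part of the Android APIs:

```java
public class SampleMath {
    // Raw PCM data rate: samples per second x channels x bytes per sample
    static int bytesPerSecond(int sampleRateInHz, int channels, int bytesPerSample) {
        return sampleRateInHz * channels * bytesPerSample;
    }

    public static void main(String[] args) {
        // Mono at 11,025 Hz: 16-bit PCM produces twice the data of 8-bit PCM
        System.out.println(bytesPerSecond(11025, 1, 2)); // 22050 bytes per second
        System.out.println(bytesPerSecond(11025, 1, 1)); // 11025 bytes per second
    }
}
```

At these rates, a minute of mono 16-bit audio at 11,025 Hz occupies roughly 1.3 MB uncompressed.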
For our example, we'll use the 16-bit version.
int audioFormat = AudioFormat.ENCODING_PCM_16BIT;
Last, we'll need to specify the buffer size. We can actually ask the AudioRecord class what the minimum buffer size should be with a static method call, getMinBufferSize, passing in the sample rate, channel configuration, and audio format.
int bufferSizeInBytes = AudioRecord.getMinBufferSize(sampleRateInHz, channelConfig, audioFormat);
Now we can construct the actual AudioRecord object.
AudioRecord audioRecord = new AudioRecord(audioSource, sampleRateInHz, channelConfig, audioFormat, bufferSizeInBytes);
The AudioRecord class doesn't actually save the captured audio anywhere. We need to do that manually as the audio comes in. The first thing we'll probably want to do is record it to a file. To do that, we'll need to create a file.
File recordingFile;
File path = new File(Environment.getExternalStorageDirectory().getAbsolutePath()
        + "/Android/data/com.apress.proandroidmedia.ch07.altaudiorecorder/files/");
path.mkdirs();
try {
    recordingFile = File.createTempFile("recording", ".pcm", path);
} catch (IOException e) {
    throw new RuntimeException("Couldn't create file on SD card", e);
}
Next we create an OutputStream to that file, specifically one wrapped in a BufferedOutputStream and a DataOutputStream for performance and convenience reasons.
DataOutputStream dos = new DataOutputStream(new BufferedOutputStream(new FileOutputStream(recordingFile)));
Now we can start the capture and write the audio samples to the file. We'll use an array of shorts to hold the audio we read from the AudioRecord object. We'll make the array smaller than the AudioRecord object's buffer so that the buffer won't fill up before we read it out.
To make sure this array is smaller than the buffer size, we divide by 4. The size of the buffer is in bytes and each short takes up 2 bytes, so dividing by 2 won't be enough. Dividing by 4 makes this array half the size of the AudioRecord object's internal buffer.
short[] buffer = new short[bufferSizeInBytes / 4];
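We can verify that arithmetic directly. The 4,096-byte figure below is an assumed value for illustration only; a real application uses whatever getMinBufferSize returns:

```java
public class BufferMath {
    public static void main(String[] args) {
        int bufferSizeInBytes = 4096;                      // assumed value for illustration
        short[] buffer = new short[bufferSizeInBytes / 4]; // 1024 shorts
        int arrayBytes = buffer.length * 2;                // each short occupies 2 bytes
        System.out.println(arrayBytes);                    // 2048: half the internal buffer
    }
}
```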
We simply call the startRecording method on the AudioRecord object to kick things off.
audioRecord.startRecording();
After recording has started, we can construct a loop to continuously read from the AudioRecord object into our array of shorts and write that to the DataOutputStream for the file.
while (true) {
    int bufferReadResult = audioRecord.read(buffer, 0, bufferSizeInBytes / 4);
    for (int i = 0; i < bufferReadResult; i++) {
        dos.writeShort(buffer[i]);
    }
}
audioRecord.stop();
dos.close();
When we are done, we call stop on the AudioRecord object and close on the DataOutputStream.
Of course, in the real world, we wouldn't put this in a while (true) loop, as it would never complete. We'd also probably want to run this in some kind of thread so that it doesn't tie up the user interface or anything else we might want the application to do while recording.
Before going through a full example, let's look at how we can play back audio as it is captured using the AudioRecord class.