Of course, using an intent to trigger the sound recorder isn't the only way we can capture audio. The Android SDK includes a MediaRecorder class, which we can leverage to build our own audio recording functionality. Doing so enables a lot more flexibility, such as controlling the length of time audio is recorded for.
The MediaRecorder class is used for both audio and video capture. After constructing a MediaRecorder object, to capture audio, the setAudioSource and setAudioEncoder methods must be called. If these methods are not called, audio will not be recorded. (The same goes for video: if the setVideoSource and setVideoEncoder methods are not called, video will not be recorded. We won't be dealing with video in this chapter, so we won't use either of these methods.)
Additionally, two other methods are generally called before having the MediaRecorder prepare to record: setOutputFormat and setOutputFile. setOutputFormat allows us to choose the file format that should be used for the recording, and setOutputFile allows us to specify the file that we will record to. It is important to note that the order of each of these calls matters quite a bit.
The first method that should be called after the MediaRecorder is instantiated is setAudioSource. setAudioSource takes in a constant that is defined in the AudioSource inner class. Generally we will want to use MediaRecorder.AudioSource.MIC, but it is interesting to note that MediaRecorder.AudioSource also contains constants for VOICE_CALL, VOICE_DOWNLINK, and VOICE_UPLINK. Unfortunately, it appears as though there aren't any handsets or versions of Android where recording audio from the call actually works. Also of note, as of Froyo, Android version 2.2, there are constants for CAMCORDER and VOICE_RECOGNITION. These may be used if the device has more than one microphone.
MediaRecorder recorder = new MediaRecorder();
recorder.setAudioSource(MediaRecorder.AudioSource.MIC);
The next method to be called in sequence is setOutputFormat. The values it takes in are specified as constants in the MediaRecorder.OutputFormat inner class.
MediaRecorder.OutputFormat.MPEG_4: Specifies that the file written will be an MPEG-4 file. It may contain both audio and video tracks.
MediaRecorder.OutputFormat.RAW_AMR: Represents a raw file without any type of container. This should be used only when capturing audio without video and when the audio encoder is AMR_NB.
MediaRecorder.OutputFormat.THREE_GPP: Specifies that the file written will be a 3GPP file (extension .3gp). It may contain both audio and video tracks.
recorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP);
After setting the output format, we can call setAudioEncoder to set the codec that should be used. The possible values are specified as constants in the MediaRecorder.AudioEncoder class, and other than DEFAULT, only one other value exists: MediaRecorder.AudioEncoder.AMR_NB, the Adaptive Multi-Rate Narrowband codec. This codec is tuned for speech and is therefore not a great choice for anything other than speech. By default it has a sample rate of 8 kHz and a bitrate between 4.75 and 12.2 kbps, both of which are very low for recording anything but speech. Unfortunately, this is our only choice for use with the MediaRecorder at the moment.
recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
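To put those encoder numbers in perspective, here is a quick back-of-the-envelope calculation (plain Java, not Android-specific) estimating how much storage a minute of AMR-NB audio consumes at the minimum and maximum bitrates, ignoring container overhead:

```java
public class AmrSizeEstimate {
    // Estimated size in bytes of a recording of the given length at the
    // given bitrate (bits per second), ignoring container overhead.
    static long estimateBytes(double bitrateBps, int seconds) {
        return (long) (bitrateBps * seconds / 8);
    }

    public static void main(String[] args) {
        // AMR-NB ranges from 4.75 kbps to 12.2 kbps
        long low = estimateBytes(4750, 60);
        long high = estimateBytes(12200, 60);
        System.out.println(low + " to " + high + " bytes per minute");
    }
}
```

Even at its top rate, AMR-NB uses under 100 KB per minute, which explains both why it is so well suited to voice memos and why it sounds thin for anything else.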
Last, we'll want to call setOutputFile with the location of the file we want to record to. The following snippet of code creates a file using File.createTempFile in the preferred file location for applications that need to store files on the SD card.
File path = new File(Environment.getExternalStorageDirectory().getAbsolutePath() +
"/Android/data/com.apress.proandroidmedia.ch07.customrecorder/files/");
path.mkdirs();
audioFile = File.createTempFile("recording", ".3gp", path);
recorder.setOutputFile(audioFile.getAbsolutePath());
Now we can actually call prepare, which signals the end of the configuration stage and tells the MediaRecorder to get ready to start recording. We call the start method to actually start recording.
recorder.prepare();
recorder.start();
To stop recording, we call the stop method.
recorder.stop();
The MediaRecorder, similar to the MediaPlayer, operates as a state machine. Figure 7–2 shows a diagram from the Android API reference page for MediaRecorder, which describes the various states and the methods that may be called from each state.
Figure 7–2. MediaRecorder state diagram from the Android API Reference
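One practical consequence of this state machine is that a MediaRecorder object can be reused rather than reconstructed for each recording: calling reset returns it to its initial state, after which the full sequence of configuration calls must be made again, in order. The following is a sketch of that pattern, not part of the example application; secondFile is an assumed File object, and exception handling is omitted for brevity:

```java
// After stopping one recording, reset() returns the MediaRecorder to its
// initial state so it can be configured for another recording.
recorder.stop();
recorder.reset();

// Every configuration call must now be repeated, in the same order as before.
recorder.setAudioSource(MediaRecorder.AudioSource.MIC);
recorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP);
recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
recorder.setOutputFile(secondFile.getAbsolutePath()); // secondFile: another File
recorder.prepare();
recorder.start();

// release() should be called only when we are completely done with the object.
```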
Here is the code for a full custom audio capture and playback example using the MediaRecorder class.
package com.apress.proandroidmedia.ch07.customrecorder;
import java.io.File;
import java.io.IOException;
import android.app.Activity;
import android.media.MediaPlayer;
import android.media.MediaRecorder;
import android.media.MediaPlayer.OnCompletionListener;
import android.os.Bundle;
import android.os.Environment;
import android.view.View;
import android.view.View.OnClickListener;
import android.widget.Button;
import android.widget.TextView;
Our CustomRecorder activity implements OnClickListener so that it may be notified when Buttons are pressed, and OnCompletionListener so that it can respond when the MediaPlayer has completed playing audio.
public class CustomRecorder extends Activity implements OnClickListener,
OnCompletionListener {
We'll have a series of user interface components. The first, a TextView called statusTextView, will report the status of the application to the user: “Recording,” “Ready to Play,” and so on.
TextView statusTextView;
A series of buttons will be used for controlling various aspects. The names of the Buttons describe their use.
Button startRecording, stopRecording, playRecording, finishButton;
We'll have a MediaRecorder for recording the audio and a MediaPlayer for playing it back.
MediaRecorder recorder;
MediaPlayer player;
Finally, we have a File object called audioFile, which will reference the file that is recorded to.
File audioFile;
@Override
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.main);
When the activity starts up, we'll set the text of the statusTextView to be “Ready.”
statusTextView = (TextView) this.findViewById(R.id.StatusTextView);
statusTextView.setText("Ready");
stopRecording = (Button) this.findViewById(R.id.StopRecording);
startRecording = (Button) this.findViewById(R.id.StartRecording);
playRecording = (Button) this.findViewById(R.id.PlayRecording);
finishButton = (Button) this.findViewById(R.id.FinishButton);
We'll set all of the Buttons' OnClickListeners to be this so that our onClick method is called when any of them are pressed.
startRecording.setOnClickListener(this);
stopRecording.setOnClickListener(this);
playRecording.setOnClickListener(this);
finishButton.setOnClickListener(this);
Finally, in the onCreate method, we'll disable the stopRecording and playRecording Buttons, since they won't work until we either start recording or finish recording, respectively.
stopRecording.setEnabled(false);
playRecording.setEnabled(false);
}
In the following onClick method, we handle all of the Button presses.
public void onClick(View v) {
if (v == finishButton) {
If the finishButton is pressed, we finish the activity.
finish();
} else if (v == stopRecording) {
If the stopRecording Button is pressed, we call stop and release on the MediaRecorder object.
recorder.stop();
recorder.release();
We then construct a MediaPlayer object and have it prepare to play back the audio file that we just recorded.
player = new MediaPlayer();
player.setOnCompletionListener(this);
The two methods that we are using on the MediaPlayer, setDataSource and prepare, may throw a variety of exceptions. In the following code, we simply wrap them in RuntimeExceptions and rethrow. In your application development, you will probably want to catch and deal with them more elegantly, such as alerting the user when a file doesn't exist.
try {
player.setDataSource(audioFile.getAbsolutePath());
} catch (IllegalArgumentException e) {
throw new RuntimeException(
"Illegal Argument to MediaPlayer.setDataSource", e);
} catch (IllegalStateException e) {
throw new RuntimeException(
"Illegal State in MediaPlayer.setDataSource", e);
} catch (IOException e) {
throw new RuntimeException(
"IOException in MediaPlayer.setDataSource", e);
}
try {
player.prepare();
} catch (IllegalStateException e) {
throw new RuntimeException(
"IllegalStateException in MediaPlayer.prepare", e);
} catch (IOException e) {
throw new RuntimeException("IOException in MediaPlayer.prepare", e);
}
We set the statusTextView to indicate to the user that we are ready to play the audio file. We then enable the playRecording and startRecording Buttons and disable the stopRecording Button, as we are not currently recording.
statusTextView.setText("Ready to Play");
playRecording.setEnabled(true);
stopRecording.setEnabled(false);
startRecording.setEnabled(true);
} else if (v == startRecording) {
When the startRecording Button is pressed, we construct a new MediaRecorder and call setAudioSource, setOutputFormat, and setAudioEncoder.
recorder = new MediaRecorder();
recorder.setAudioSource(MediaRecorder.AudioSource.MIC);
recorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP);
recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
We then create a new File on the SD card and call setOutputFile on the MediaRecorder object.
File path = new File(Environment.getExternalStorageDirectory().getAbsolutePath()
        + "/Android/data/com.apress.proandroidmedia.ch07.customrecorder/files/");
path.mkdirs();
try {
audioFile = File.createTempFile("recording", ".3gp", path);
} catch (IOException e) {
throw new RuntimeException("Couldn't create recording audio file",e);
}
recorder.setOutputFile(audioFile.getAbsolutePath());
We call prepare on the MediaRecorder and start to begin the recording.
try {
recorder.prepare();
} catch (IllegalStateException e) {
throw new RuntimeException(
"IllegalStateException on MediaRecorder.prepare", e);
} catch (IOException e) {
throw new RuntimeException("IOException on MediaRecorder.prepare",e);
}
recorder.start();
Last, we update the statusTextView and change which Buttons are enabled and disabled.
statusTextView.setText("Recording");
playRecording.setEnabled(false);
stopRecording.setEnabled(true);
startRecording.setEnabled(false);
} else if (v == playRecording) {
The last Button that we need to respond to is playRecording. The MediaPlayer object, player, was constructed and configured when the stopRecording Button was pressed, so all that we need to do when the playRecording Button is pressed is start the playback, set the status message, and change which Buttons are enabled.
player.start();
statusTextView.setText("Playing");
playRecording.setEnabled(false);
stopRecording.setEnabled(false);
startRecording.setEnabled(false);
}
}
The onCompletion method is called when the MediaPlayer object has completed playback of a recording. We use it to change the status message and set which Buttons are enabled.
public void onCompletion(MediaPlayer mp) {
playRecording.setEnabled(true);
stopRecording.setEnabled(false);
startRecording.setEnabled(true);
statusTextView.setText("Ready");
}
}
Here is the layout XML file, main.xml, for the foregoing activity.
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
android:orientation="vertical"
android:layout_width="fill_parent"
android:layout_height="fill_parent"
>
<TextView android:layout_width="wrap_content" android:layout_height=
"wrap_content" android:id="@+id/StatusTextView" android:text="Status"
android:textSize="35dip"></TextView>
<Button android:text="Start Recording" android:id="@+id/StartRecording"
android:layout_width="wrap_content" android:layout_height="wrap_content"></Button>
<Button android:text="Stop Recording" android:id="@+id/StopRecording"
android:layout_width="wrap_content" android:layout_height="wrap_content"></Button>
<Button android:text="Play Recording" android:id="@+id/PlayRecording"
android:layout_width="wrap_content" android:layout_height="wrap_content"></Button>
<Button android:layout_width="wrap_content" android:layout_height="wrap_content"
android:id="@+id/FinishButton" android:text="Finish"></Button>
</LinearLayout>
We'll also need to add the following permissions to the AndroidManifest.xml file.
<uses-permission android:name="android.permission.RECORD_AUDIO"></uses-permission>
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE">
</uses-permission>
As we have seen, developing a custom audio capture application using MediaRecorder is not too cumbersome. Now let's look at how we can use the MediaRecorder's other methods to add other features.
MediaRecorder has a variety of other methods available that we can use in relation to audio capture.
getMaxAmplitude: Allows us to request the maximum amplitude of audio that has been recorded by the MediaRecorder. The value is reset each time the method is called, so each call returns the maximum amplitude recorded since the previous call. An audio level meter may be implemented by calling this method periodically.
setMaxDuration: Allows us to specify a maximum recording duration in milliseconds. This method must be called after the setOutputFormat method but before the prepare method.
setMaxFileSize: Allows us to specify a maximum file size for the recording in bytes. As with setMaxDuration, this method must be called after the setOutputFormat method but before the prepare method.
Here is an update to the custom recorder application we went through previously that includes a display of the current amplitude.
package com.apress.proandroidmedia.ch07.customrecorder;
import java.io.File;
import java.io.IOException;
import android.app.Activity;
import android.media.MediaPlayer;
import android.media.MediaRecorder;
import android.media.MediaPlayer.OnCompletionListener;
import android.os.AsyncTask;
import android.os.Bundle;
import android.os.Environment;
import android.view.View;
import android.view.View.OnClickListener;
import android.widget.Button;
import android.widget.TextView;
public class CustomRecorder extends Activity implements OnClickListener,
OnCompletionListener {
In this version, we have added a TextView called amplitudeTextView. This will display the numeric amplitude of the audio input.
TextView statusTextView, amplitudeTextView;
Button startRecording, stopRecording, playRecording, finishButton;
MediaRecorder recorder;
MediaPlayer player;
File audioFile;
We'll need an instance of a new class called RecordAmplitude. This inner class is defined toward the end of this source code listing. It uses a boolean called isRecording that will be set to true when we start the MediaRecorder.
RecordAmplitude recordAmplitude;
boolean isRecording = false;
@Override
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.main);
statusTextView = (TextView) this.findViewById(R.id.StatusTextView);
statusTextView.setText("Ready");
We'll use a TextView to display the current amplitude of the audio as it is captured.
amplitudeTextView = (TextView) this
.findViewById(R.id.AmplitudeTextView);
amplitudeTextView.setText("0");
stopRecording = (Button) this.findViewById(R.id.StopRecording);
startRecording = (Button) this.findViewById(R.id.StartRecording);
playRecording = (Button) this.findViewById(R.id.PlayRecording);
finishButton = (Button) this.findViewById(R.id.FinishButton);
startRecording.setOnClickListener(this);
stopRecording.setOnClickListener(this);
playRecording.setOnClickListener(this);
finishButton.setOnClickListener(this);
stopRecording.setEnabled(false);
playRecording.setEnabled(false);
}
public void onClick(View v) {
if (v == finishButton) {
finish();
} else if (v == stopRecording) {
When we finish the recording, we set the isRecording boolean to false and call cancel on our RecordAmplitude object. Since RecordAmplitude extends AsyncTask, calling cancel with true as the parameter will interrupt its thread if necessary.
isRecording = false;
recordAmplitude.cancel(true);
recorder.stop();
recorder.release();
player = new MediaPlayer();
player.setOnCompletionListener(this);
try {
player.setDataSource(audioFile.getAbsolutePath());
} catch (IllegalArgumentException e) {
throw new RuntimeException(
"Illegal Argument to MediaPlayer.setDataSource", e);
} catch (IllegalStateException e) {
throw new RuntimeException(
"Illegal State in MediaPlayer.setDataSource", e);
} catch (IOException e) {
throw new RuntimeException(
"IOException in MediaPlayer.setDataSource", e);
}
try {
player.prepare();
} catch (IllegalStateException e) {
throw new RuntimeException(
"IllegalStateException in MediaPlayer.prepare", e);
} catch (IOException e) {
throw new RuntimeException(
"IOException in MediaPlayer.prepare", e);
}
statusTextView.setText("Ready to Play");
playRecording.setEnabled(true);
stopRecording.setEnabled(false);
startRecording.setEnabled(true);
} else if (v == startRecording) {
recorder = new MediaRecorder();
recorder.setAudioSource(MediaRecorder.AudioSource.MIC);
recorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP);
recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
File path = new File(Environment.getExternalStorageDirectory().getAbsolutePath()
        + "/Android/data/com.apress.proandroidmedia.ch07.customrecorder/files/");
path.mkdirs();
try {
audioFile = File.createTempFile("recording", ".3gp", path);
} catch (IOException e) {
throw new RuntimeException(
"Couldn't create recording audio file", e);
}
recorder.setOutputFile(audioFile.getAbsolutePath());
try {
recorder.prepare();
} catch (IllegalStateException e) {
throw new RuntimeException(
"IllegalStateException on MediaRecorder.prepare", e);
} catch (IOException e) {
throw new RuntimeException(
"IOException on MediaRecorder.prepare", e);
}
recorder.start();
After we start the recording, we set the isRecording boolean to true and create a new instance of RecordAmplitude. Since RecordAmplitude extends AsyncTask, we'll call the execute method to start RecordAmplitude's task running.
isRecording = true;
recordAmplitude = new RecordAmplitude();
recordAmplitude.execute();
statusTextView.setText("Recording");
playRecording.setEnabled(false);
stopRecording.setEnabled(true);
startRecording.setEnabled(false);
} else if (v == playRecording) {
player.start();
statusTextView.setText("Playing");
playRecording.setEnabled(false);
stopRecording.setEnabled(false);
startRecording.setEnabled(false);
}
}
public void onCompletion(MediaPlayer mp) {
playRecording.setEnabled(true);
stopRecording.setEnabled(false);
startRecording.setEnabled(true);
statusTextView.setText("Ready");
}
Here is the definition of RecordAmplitude. It extends AsyncTask, a handy utility class in Android that provides a thread for long-running tasks without tying up the user interface or making an application unresponsive.
private class RecordAmplitude extends AsyncTask<Void, Integer, Void> {
The doInBackground method runs on a separate thread and is invoked when the execute method is called on the object. This method loops as long as isRecording is true, calling Thread.sleep(500) so that it does nothing for half a second on each pass. It then calls publishProgress, passing in the result of getMaxAmplitude on the MediaRecorder object.
@Override
protected Void doInBackground(Void... params) {
while (isRecording) {
try {
Thread.sleep(500);
} catch (InterruptedException e) {
e.printStackTrace();
}
publishProgress(recorder.getMaxAmplitude());
}
return null;
}
The preceding call to publishProgress triggers the onProgressUpdate method defined here, which runs on the main thread so it can interact with the user interface. In this case, it updates the amplitudeTextView with the value that was passed to publishProgress.
protected void onProgressUpdate(Integer... progress) {
amplitudeTextView.setText(progress[0].toString());
}
}
}
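As an aside, the raw value published by RecordAmplitude is a 16-bit sample magnitude, so it ranges from 0 to 32767. If you would rather show the user a decibel-style level, a small helper like the following can convert it. This is a plain-Java sketch: the choice of 32767 as the full-scale reference and the dBFS presentation are my assumptions, not part of the MediaRecorder API.

```java
public class AmplitudeUtil {
    private static final double MAX_AMPLITUDE = 32767.0; // assumed full scale for 16-bit audio

    // Converts a raw value from getMaxAmplitude (0 to 32767) into dBFS:
    // decibels relative to full scale, where 0 dB is the loudest possible.
    public static double toDecibels(int rawAmplitude) {
        if (rawAmplitude <= 0) {
            return Double.NEGATIVE_INFINITY; // treat silence/no data as -infinity
        }
        return 20.0 * Math.log10(rawAmplitude / MAX_AMPLITUDE);
    }

    public static void main(String[] args) {
        System.out.println(toDecibels(32767)); // full scale -> 0.0 dB
        System.out.println(toDecibels(3277));  // about one tenth of full scale -> roughly -20 dB
    }
}
```

In onProgressUpdate, you could then display AmplitudeUtil.toDecibels(progress[0]) instead of the raw number.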
Of course, we'll need to update the layout XML to include the TextView for displaying the amplitude.
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
android:orientation="vertical"
android:layout_width="fill_parent"
android:layout_height="fill_parent"
>
<TextView android:layout_width="wrap_content" android:layout_height=
"wrap_content" android:id="@+id/StatusTextView" android:text="Status"
android:textSize="35dip"></TextView>
<TextView android:layout_width="wrap_content" android:layout_height=
"wrap_content" android:id="@+id/AmplitudeTextView" android:textSize="35dip"
android:text="0"></TextView>
<Button android:text="Start Recording" android:id="@+id/StartRecording"
android:layout_width="wrap_content" android:layout_height="wrap_content"></Button>
<Button android:text="Stop Recording" android:id="@+id/StopRecording"
android:layout_width="wrap_content" android:layout_height="wrap_content"></Button>
<Button android:text="Play Recording" android:id="@+id/PlayRecording"
android:layout_width="wrap_content" android:layout_height="wrap_content"></Button>
<Button android:layout_width="wrap_content" android:layout_height=
"wrap_content" android:id="@+id/FinishButton" android:text="Finish"></Button>
</LinearLayout>
As we can see, using an AsyncTask to do something periodically is a nice way to provide automatically updating information to the user while something else is in progress. Here, the getMaxAmplitude method gives the user live feedback about the recording as it happens, which makes for a nicer experience in our MediaRecorder example.
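The setMaxDuration and setMaxFileSize methods described earlier pair naturally with MediaRecorder's OnInfoListener, which is notified when a limit is reached. The following fragment sketches how they might be wired into our recorder setup; the ten-second and 100 KB limits are arbitrary values chosen for illustration:

```java
recorder = new MediaRecorder();
recorder.setAudioSource(MediaRecorder.AudioSource.MIC);
recorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP);
recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);

// Both limits must be set after setOutputFormat but before prepare.
recorder.setMaxDuration(10000);      // stop after 10 seconds...
recorder.setMaxFileSize(100 * 1024); // ...or after 100 KB, whichever comes first

// MediaRecorder stops capturing on its own when a limit is hit and
// notifies us through the OnInfoListener.
recorder.setOnInfoListener(new MediaRecorder.OnInfoListener() {
    public void onInfo(MediaRecorder mr, int what, int extra) {
        if (what == MediaRecorder.MEDIA_RECORDER_INFO_MAX_DURATION_REACHED
                || what == MediaRecorder.MEDIA_RECORDER_INFO_MAX_FILESIZE_REACHED) {
            // Recording has already stopped; update the UI and release here.
        }
    }
});
```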
In Android 2.2, Froyo, the following methods were made available:
setAudioChannels: Allows us to specify the number of audio channels that will be recorded, typically either one channel (mono) or two channels (stereo). This method must be called prior to the prepare method.
setAudioEncodingBitRate: Allows us to specify the number of bits per second that will be used by the encoder when compressing the audio. This method must be called prior to the prepare method.
setAudioSamplingRate: Allows us to specify the sampling rate of the audio as it is captured and encoded. The applicable rates are determined by the hardware and codec being used. This method must be called prior to the prepare method.
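Putting these together, a Froyo-era recorder configuration might look like the following sketch. The specific values are illustrative assumptions only, and with the AMR-NB encoder the effective channel count and rates remain constrained by the codec itself:

```java
recorder = new MediaRecorder();
recorder.setAudioSource(MediaRecorder.AudioSource.MIC);
recorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP);
recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);

// All three of these Froyo additions must be called before prepare.
recorder.setAudioChannels(1);            // mono; 2 would request stereo
recorder.setAudioEncodingBitRate(12200); // request the top AMR-NB rate
recorder.setAudioSamplingRate(8000);     // AMR-NB is fixed at 8 kHz anyway

recorder.setOutputFile(audioFile.getAbsolutePath());
recorder.prepare();
recorder.start();
```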