Android is a rich multimedia environment. The standard Android load includes music and video players, and most commercial devices ship with these or fancier versions as well as YouTube players and more. The recipes in this chapter show you how to control some aspects of the multimedia world that Android provides.
Marco Dinacci
Given the URI of a video to play, create an ACTION_VIEW Intent with it and start a new Activity.
Example 9-1 shows the code required to start a YouTube video with an Intent.
For this recipe to work, the standard YouTube application (or one compatible with it) must be installed on the device.
public void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.main);

    String video_path = "http://www.youtube.com/watch?v=opZ69P-0Jbc";
    Uri uri = Uri.parse(video_path);

    // With this line the YouTube application, if installed, will launch immediately.
    // Without it you will be prompted with a list of applications to choose from.
    uri = Uri.parse("vnd.youtube:" + uri.getQueryParameter("v"));

    Intent intent = new Intent(Intent.ACTION_VIEW, uri);
    startActivity(intent);
}
The example uses a standard YouTube.com URL. The uri.getQueryParameter("v") call extracts the video ID from the URI itself; in our example, the ID is opZ69P-0Jbc.
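If no compatible YouTube application is installed, the startActivity() call will throw an ActivityNotFoundException. A defensive variant (a sketch, not part of the original example) can catch it and fall back to the plain web URL, which the browser can handle:

String video_path = "http://www.youtube.com/watch?v=opZ69P-0Jbc";
Uri webUri = Uri.parse(video_path);
Uri appUri = Uri.parse("vnd.youtube:" + webUri.getQueryParameter("v"));
try {
    startActivity(new Intent(Intent.ACTION_VIEW, appUri));
} catch (ActivityNotFoundException e) {
    // No YouTube-compatible application; fall back to the browser
    startActivity(new Intent(Intent.ACTION_VIEW, webUri));
}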
Marco Dinacci
Capture a video and record it on the phone by using the MediaRecorder class provided by the Android framework.
The MediaRecorder is normally used to perform audio and/or video recording. The class has a straightforward API, but because it is based on a simple state machine, the methods must be called in the proper order to keep IllegalStateExceptions from popping up.
Create a new Activity and override the onCreate() method with the code shown in Example 9-2.
@Override
public void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.media_recorder_recipe);

    // We shall take the video in landscape orientation
    setRequestedOrientation(ActivityInfo.SCREEN_ORIENTATION_LANDSCAPE);

    mSurfaceView = (SurfaceView) findViewById(R.id.surfaceView);
    mHolder = mSurfaceView.getHolder();
    mHolder.addCallback(this);
    mHolder.setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS);

    mToggleButton = (ToggleButton) findViewById(R.id.toggleRecordingButton);
    mToggleButton.setOnClickListener(new OnClickListener() {
        // Toggle video recording
        @Override
        public void onClick(View v) {
            if (((ToggleButton) v).isChecked())
                mMediaRecorder.start();
            else {
                mMediaRecorder.stop();
                mMediaRecorder.reset();
                try {
                    initRecorder(mHolder.getSurface());
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }
    });
}
The preview frames from the camera will be displayed on a SurfaceView. Recording is controlled by a toggle button. After the recording is over, we stop the MediaRecorder. Since the stop() method resets all the state machine variables, in order to be able to grab another video we reset the state machine and call our initRecorder() method once more. initRecorder() is where we configure the MediaRecorder and the camera, as shown in Example 9-3.
/* Init the MediaRecorder. The order the methods are called in is vital to
 * its correct functioning.
 */
private void initRecorder(Surface surface) throws IOException {
    // It is very important to unlock the camera before calling setCamera(),
    // or it will result in a black preview
    if (mCamera == null) {
        mCamera = Camera.open();
        mCamera.unlock();
    }
    if (mMediaRecorder == null)
        mMediaRecorder = new MediaRecorder();
    mMediaRecorder.setPreviewDisplay(surface);
    mMediaRecorder.setCamera(mCamera);
    mMediaRecorder.setVideoSource(MediaRecorder.VideoSource.CAMERA);
    mMediaRecorder.setOutputFormat(MediaRecorder.OutputFormat.DEFAULT);

    File file = createFile();
    mMediaRecorder.setOutputFile(file.getAbsolutePath());

    // No limit. Don't forget to check the space on disk.
    mMediaRecorder.setMaxDuration(-1);
    mMediaRecorder.setVideoFrameRate(15);
    mMediaRecorder.setVideoEncoder(MediaRecorder.VideoEncoder.DEFAULT);

    try {
        mMediaRecorder.prepare();
    } catch (IllegalStateException e) {
        // This is thrown if the previous calls are not made in the
        // proper order
        e.printStackTrace();
    }

    mInitSuccessful = true;
}
It is important to create and unlock a Camera object before creating the MediaRecorder. setPreviewDisplay() and setCamera() must be called immediately after the creation of the MediaRecorder. The choice of the format and output file is obligatory. Other options, if present, must be called in the order outlined in Example 9-3.
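Note that Example 9-3 calls a createFile() helper that is not shown there. A minimal sketch, in which the output directory and file name are assumptions rather than part of the original recipe, might look like this:

// Hypothetical createFile() helper; it writes into external storage, so the
// WRITE_EXTERNAL_STORAGE permission would also be needed.
private File createFile() {
    File dir = Environment.getExternalStorageDirectory();
    return new File(dir, "recording_" + System.currentTimeMillis() + ".3gp");
}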
The MediaRecorder is best initialized when the surface has been created. We register our Activity as a SurfaceHolder.Callback listener in order to be notified of this, and override the surfaceCreated() method to call our initialization code:
@Override
public void surfaceCreated(SurfaceHolder holder) {
    try {
        if (!mInitSuccessful)
            initRecorder(mHolder.getSurface());
    } catch (IOException e) {
        e.printStackTrace(); // Better error handling?
    }
}
When you’re done with the surface, don’t forget to release the resources, as the Camera is a shared object and may be used by other applications:
private void shutdown() {
    // Release MediaRecorder and especially the Camera as it's a shared
    // object that can be used by other applications
    mMediaRecorder.reset();
    mMediaRecorder.release();
    mCamera.release();

    // Once the objects have been released they can't be reused
    mMediaRecorder = null;
    mCamera = null;
}
Override the surfaceDestroyed() method so that the preceding code is called automatically when the user is done with the Activity:
@Override
public void surfaceDestroyed(SurfaceHolder holder) {
    shutdown();
}
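For the recording to work at all, the application also needs the camera and audio permissions declared in AndroidManifest.xml (the manifest is not shown in the original example):

<uses-permission android:name="android.permission.CAMERA"/>
<uses-permission android:name="android.permission.RECORD_AUDIO"/>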
The source code for this project is in the Android Cookbook repository, in the subdirectory MediaRecorderDemo (see “Getting and Using the Code Examples”).
Wagied Davids
Use Android’s built-in face detection capability.
This recipe illustrates how to implement face detection in images. Face detection is a cool and fun hidden API feature of Android. In essence, face detection is the act of recognizing the parts of an image that appear to be human faces. It is part of a machine learning technique of recognizing objects using a set of features.
Note that this is not face recognition; it detects the parts of the image that are faces, but does not tell you who the faces belong to. Android 4.0 and later feature face recognition for unlocking the phone.
The main Activity (see Example 9-4) creates an instance of our FaceDetectionView. In this example, we hardcode the file to be scanned, but in real life you would probably want to capture the image using the camera, or choose the image from a gallery.
import android.app.Activity;
import android.os.Bundle;

public class Main extends Activity {
    /** Called when the Activity is first created. */
    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(new FaceDetectionView(this, "face5.JPG"));
    }
}
FaceDetectionView is our custom class used to manage the face detection code using android.media.FaceDetector. The init() method conditions some graphics used to mark the faces; in this example, we know where the faces are, and hope that Android will find them. The real work is done in detectFaces(), where we call the FaceDetector's findFaces() method, passing in our image and an array to contain the results. We then iterate over the found faces. Example 9-5 shows the code, and Figure 9-1 shows the result.
...
import android.media.FaceDetector;

public class FaceDetectionView extends View {
    private static final String tag = FaceDetectionView.class.getName();
    private static final int NUM_FACES = 10;
    private FaceDetector arrayFaces;
    private final FaceDetector.Face getAllFaces[] = new FaceDetector.Face[NUM_FACES];
    private FaceDetector.Face getFace = null;
    private final PointF eyesMidPts[] = new PointF[NUM_FACES];
    private final float eyesDistance[] = new float[NUM_FACES];
    private Bitmap sourceImage;
    private final Paint tmpPaint = new Paint(Paint.ANTI_ALIAS_FLAG);
    private final Paint pOuterBullsEye = new Paint(Paint.ANTI_ALIAS_FLAG);
    private final Paint pInnerBullsEye = new Paint(Paint.ANTI_ALIAS_FLAG);
    private int picWidth, picHeight;
    private float xRatio, yRatio;
    private ImageLoader mImageLoader = null;

    public FaceDetectionView(Context context, String imagePath) {
        super(context);
        init();
        mImageLoader = ImageLoader.getInstance(context);
        sourceImage = mImageLoader.loadFromFile(imagePath);
        detectFaces();
    }

    private void init() {
        Log.d(tag, "Init()...");
        pInnerBullsEye.setStyle(Paint.Style.FILL);
        pInnerBullsEye.setColor(Color.RED);
        pOuterBullsEye.setStyle(Paint.Style.STROKE);
        pOuterBullsEye.setColor(Color.RED);
        tmpPaint.setStyle(Paint.Style.STROKE);
        tmpPaint.setTextAlign(Paint.Align.CENTER);
        BitmapFactory.Options bfo = new BitmapFactory.Options();
        bfo.inPreferredConfig = Bitmap.Config.RGB_565;
    }

    private void loadImage(String imagePath) {
        sourceImage = mImageLoader.loadFromFile(imagePath);
    }

    @Override
    protected void onDraw(Canvas canvas) {
        Log.d(tag, "onDraw()...");
        xRatio = getWidth() * 1.0f / picWidth;
        yRatio = getHeight() * 1.0f / picHeight;
        canvas.drawBitmap(sourceImage, null,
                new Rect(0, 0, getWidth(), getHeight()), tmpPaint);
        for (int i = 0; i < eyesMidPts.length; i++) {
            if (eyesMidPts[i] != null) {
                pOuterBullsEye.setStrokeWidth(eyesDistance[i] / 6);
                canvas.drawCircle(eyesMidPts[i].x * xRatio, eyesMidPts[i].y * yRatio,
                        eyesDistance[i] / 2, pOuterBullsEye);
                canvas.drawCircle(eyesMidPts[i].x * xRatio, eyesMidPts[i].y * yRatio,
                        eyesDistance[i] / 6, pInnerBullsEye);
            }
        }
    }

    private void detectFaces() {
        Log.d(tag, "detectFaces()...");
        picWidth = sourceImage.getWidth();
        picHeight = sourceImage.getHeight();
        arrayFaces = new FaceDetector(picWidth, picHeight, NUM_FACES);
        arrayFaces.findFaces(sourceImage, getAllFaces);
        for (int i = 0; i < getAllFaces.length; i++) {
            getFace = getAllFaces[i];
            try {
                PointF eyesMP = new PointF();
                getFace.getMidPoint(eyesMP);
                eyesDistance[i] = getFace.eyesDistance();
                eyesMidPts[i] = eyesMP;
                Log.i("Face", i + " " + getFace.confidence() + " "
                        + getFace.eyesDistance() + " "
                        + "Pose: (" + getFace.pose(FaceDetector.Face.EULER_X) + ","
                        + getFace.pose(FaceDetector.Face.EULER_Y) + ","
                        + getFace.pose(FaceDetector.Face.EULER_Z) + ")"
                        + "Eyes Midpoint: (" + eyesMidPts[i].x + "," + eyesMidPts[i].y + ")");
            } catch (Exception e) {
                Log.e("Face", i + " is null");
            }
        }
    }
}
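One caveat: FaceDetector.findFaces() requires the source Bitmap to be in the RGB_565 pixel format, which is why init() prepares BitmapFactory.Options with that configuration. The ImageLoader helper used above is not shown in the example; a minimal equivalent, assuming the image is loaded from a file path, might be:

// Load a bitmap in the RGB_565 format that FaceDetector.findFaces() requires
BitmapFactory.Options bfo = new BitmapFactory.Options();
bfo.inPreferredConfig = Bitmap.Config.RGB_565;
Bitmap sourceImage = BitmapFactory.decodeFile(imagePath, bfo);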
The source code for this example is in the Android Cookbook repository, in the subdirectory FaceFinder (see “Getting and Using the Code Examples”).
Marco Dinacci
Playing an audio file is as easy as setting up a MediaPlayer and a MediaController. First, create a new Activity that implements the MediaPlayerControl interface (see Example 9-6).
public class PlayAudioActivity extends Activity implements MediaPlayerControl {
    private MediaController mMediaController;
    private MediaPlayer mMediaPlayer;
    private Handler mHandler = new Handler();
In the onCreate() method, we create and configure a MediaPlayer and a MediaController. The first is the object that performs the typical operations on an audio file, such as playing, pausing, and seeking. The second is a view containing the buttons that launch the aforementioned operations through our MediaPlayerControl class. Example 9-7 shows the onCreate() code.
@Override
public void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.main);

    mMediaPlayer = new MediaPlayer();
    mMediaController = new MediaController(this);
    mMediaController.setMediaPlayer(PlayAudioActivity.this);
    mMediaController.setAnchorView(findViewById(R.id.audioView));

    String audioFile = "";
    try {
        mMediaPlayer.setDataSource(audioFile);
        mMediaPlayer.prepare();
    } catch (IOException e) {
        Log.e("PlayAudioDemo",
                "Could not open file " + audioFile + " for playback.", e);
    }

    mMediaPlayer.setOnPreparedListener(new OnPreparedListener() {
        @Override
        public void onPrepared(MediaPlayer mp) {
            mHandler.post(new Runnable() {
                public void run() {
                    mMediaController.show(10000);
                    mMediaPlayer.start();
                }
            });
        }
    });
}
In addition to configuring our MediaController and MediaPlayer, we create an anonymous OnPreparedListener in order to start the player only when the media source is ready for playback. Remember to clean up the MediaPlayer when the Activity is destroyed (see Example 9-8).
@Override
protected void onDestroy() {
    super.onDestroy();
    mMediaPlayer.stop();
    mMediaPlayer.release();
}
Lastly, we implement the MediaPlayerControl interface. The code is straightforward, as shown in Example 9-9.
@Override
public boolean canPause() {
    return true;
}

@Override
public boolean canSeekBackward() {
    return false;
}

@Override
public boolean canSeekForward() {
    return false;
}

@Override
public int getBufferPercentage() {
    return (mMediaPlayer.getCurrentPosition() * 100) / mMediaPlayer.getDuration();
}

// Remaining methods just delegate to the MediaPlayer
}
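The remaining methods the comment refers to are one-line delegates to the MediaPlayer. They are not shown in the example, but a sketch of how they might look inside the Activity is:

@Override
public int getCurrentPosition() { return mMediaPlayer.getCurrentPosition(); }

@Override
public int getDuration() { return mMediaPlayer.getDuration(); }

@Override
public boolean isPlaying() { return mMediaPlayer.isPlaying(); }

@Override
public void pause() { mMediaPlayer.pause(); }

@Override
public void seekTo(int pos) { mMediaPlayer.seekTo(pos); }

@Override
public void start() { mMediaPlayer.start(); }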
As a final touch, we override the onTouchEvent() method in order to show the MediaController buttons when the user touches the screen (a sketch of this override appears after the layout). Since we create our MediaController programmatically, the layout is very simple:
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:orientation="vertical"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent"
    android:id="@+id/audioView">
</LinearLayout>
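The onTouchEvent() override itself is not reproduced in the example; a minimal sketch simply reveals the controller for a few seconds whenever the screen is touched:

@Override
public boolean onTouchEvent(MotionEvent event) {
    mMediaController.show(10000); // Show the controls for 10 seconds
    return false;
}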
The source code for this example is in the Android Cookbook repository, in the subdirectory MediaPlayerInteractive (see “Getting and Using the Code Examples”).
Ian Darwin
This is the simplest way to play a sound file. In contrast with Recipe 9.4, this version offers the user no controls to interact with the sound. You should therefore usually offer at least a Stop or Cancel button, especially if the audio file is (or might be) long. If you’re just playing a short sound effect within your application, no such control is needed.
You must have a MediaPlayer created for your file. The audio file may be on the SD card or in your application's res/raw directory. If the sound file is part of your application, store it under res/raw. Suppose it is in res/raw/alarm_sound.3gp; then the reference to it is R.raw.alarm_sound, and you can play it as follows:
MediaPlayer player = MediaPlayer.create(this, R.raw.alarm_sound);
player.start();
In the SD card case, use the following invocation:
MediaPlayer player = new MediaPlayer();
player.setDataSource(fileName);
player.prepare();
player.start();
There is also a convenience routine, MediaPlayer.create(Context, URI), that you can use; in all cases, create() calls prepare() for you.
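For example, a sketch using the Uri-based factory method (the file path here is purely illustrative):

Uri uri = Uri.parse("file:///sdcard/alarm_sound.3gp"); // Hypothetical location
MediaPlayer player = MediaPlayer.create(this, uri);    // create() also calls prepare()
player.start();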
To control the player from within your application, you can call the relevant methods such as player.stop(), player.pause(), and so on. If you want to reuse a player after stopping it, you must call prepare() again. To be notified when the audio is finished, use an OnCompletionListener:
player.setOnCompletionListener(new OnCompletionListener() {
    @Override
    public void onCompletion(MediaPlayer mp) {
        Toast.makeText(Main.this, "Media Play Complete",
                Toast.LENGTH_SHORT).show();
    }
});
When you are truly done with any MediaPlayer instance, you should call its release() method to free up memory; otherwise, you will run out of resources if you create a lot of MediaPlayer objects.
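For a fire-and-forget sound effect, one convenient pattern (a sketch, not part of the original recipe) is to release the player as soon as playback completes:

player.setOnCompletionListener(new OnCompletionListener() {
    @Override
    public void onCompletion(MediaPlayer mp) {
        mp.release(); // Free the player's resources once the sound has finished
    }
});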
To really use the MediaPlayer effectively, you should understand its various states and transitions, as this will help you understand what methods are valid. The developer documentation contains a complete state diagram for the MediaPlayer.
The source code for this example is in the Android Cookbook repository, in the subdirectory MediaPlayerDemo (see “Getting and Using the Code Examples”).
Corey Sunwold
One of Android’s unique features is native speech-to-text processing. This provides an alternative form of text input for the user, who in some situations might not have her hands free to type in information.
Android provides an easy API for using its built-in voice recognition capability through the RecognizerIntent. Our example layout will be very simple (see Example 9-10). I've only included a TextView called speechText and a Button called getSpeechButton. The Button will be used to launch the voice recognizer, which will remain listening and recognizing until the user stops talking for a few seconds. When results are returned, they will be displayed in the TextView.
public class Main extends Activity {
    private static final int RECOGNIZER_RESULT = 1234;

    /** Called when the Activity is first created. */
    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);

        Button startSpeech = (Button) findViewById(R.id.getSpeechButton);
        startSpeech.setOnClickListener(new OnClickListener() {
            @Override
            public void onClick(View v) {
                Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
                intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                        RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
                intent.putExtra(RecognizerIntent.EXTRA_PROMPT, "Speech to text");
                startActivityForResult(intent, RECOGNIZER_RESULT);
            }
        });
    }

    /**
     * Handle the results from the recognition Activity.
     */
    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        if (requestCode == RECOGNIZER_RESULT && resultCode == RESULT_OK) {
            ArrayList<String> matches =
                    data.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
            TextView speechText = (TextView) findViewById(R.id.speechText);
            speechText.setText(matches.get(0).toString());
        }
        super.onActivityResult(requestCode, resultCode, data);
    }
}
For more details, see the developer documentation on RecognizerIntent.
The source code for this project is in the Android Cookbook repository, in the subdirectory SpeechRecognizerDemo (see “Getting and Using the Code Examples”).
Ian Darwin
Use the TextToSpeech API.
The TextToSpeech (TTS) API is built into Android (though you may have to install the voice files, depending on the version you are using). To get started you just need a TextToSpeech object. In theory, you could simply do this:
private TextToSpeech myTTS = new TextToSpeech(this, this);

myTTS.setLanguage(Locale.US);
myTTS.speak(textToBeSpoken, TextToSpeech.QUEUE_FLUSH, null);
myTTS.shutdown();
However, to ensure success, you actually have to use a couple of Intents: one to check that the TTS data is available and install it if not, and another to start the TTS mechanism. So, in practice, the code needs to look something like Example 9-11. This quaint little application chooses one of half a dozen banal phrases to utter each time the Speak button is pressed.
public class Main extends Activity implements OnInitListener {
    private TextToSpeech myTTS;
    private List<String> phrases = new ArrayList<String>();

    public void onCreate(Bundle savedInstanceState) {
        phrases.add("Hello Android, Goodbye iPhone");
        phrases.add("The quick brown fox jumped over the lazy dog");
        phrases.add("What is your mother's maiden name?");
        phrases.add("Etaoin Shrdlu for Prime Minister");
        phrases.add("The letter 'Q' does not appear in 'antidisestablishmentarianism'");

        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);

        Button startButton = (Button) findViewById(R.id.start_button);
        startButton.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View arg0) {
                Intent checkIntent = new Intent();
                checkIntent.setAction(TextToSpeech.Engine.ACTION_CHECK_TTS_DATA);
                startActivityForResult(checkIntent, 1);
            }
        });
    }

    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        if (requestCode == 1) {
            if (resultCode == TextToSpeech.Engine.CHECK_VOICE_DATA_PASS) {
                myTTS = new TextToSpeech(this, this);
                myTTS.setLanguage(Locale.US);
            } else {
                // TTS data not yet loaded, try to install it
                Intent ttsLoadIntent = new Intent();
                ttsLoadIntent.setAction(TextToSpeech.Engine.ACTION_INSTALL_TTS_DATA);
                startActivity(ttsLoadIntent);
            }
        }
    }

    public void onInit(int status) {
        if (status == TextToSpeech.SUCCESS) {
            int n = (int) (Math.random() * phrases.size());
            myTTS.speak(phrases.get(n), TextToSpeech.QUEUE_FLUSH, null);
        } else if (status == TextToSpeech.ERROR) {
            myTTS.shutdown();
        }
    }
}
The first argument is a Context (the Activity) and the second is an OnInitListener, also implemented by the main Activity in this case. When the initialization of the TextToSpeech object is done, it calls the listener, whose onInit() method is meant to notify that the TTS is ready. In this trivial Speaker program, we simply do the speaking. In a longer example, you would probably want to start a thread or Service to do the speaking operation.
The source code for this example is in the Android Cookbook repository, in the subdirectory Speaker (see “Getting and Using the Code Examples”).