To use speech recognition in your app, the WMAppManifest.xml file must include the following two capabilities:
ID_CAP_SPEECH_RECOGNITION
ID_CAP_MICROPHONE
To ensure that these capabilities are declared, open the WMAppManifest.xml file from the Properties directory of your project. In the Capabilities tab, ensure that the capabilities are checked.
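For reference, the capability declarations in WMAppManifest.xml look similar to the following sketch (your project's file will contain additional capabilities and elements around these entries):

<Capabilities>
    <Capability Name="ID_CAP_SPEECH_RECOGNITION" />
    <Capability Name="ID_CAP_MICROPHONE" />
</Capabilities>

Checking the corresponding boxes in the Capabilities tab adds these elements for you; editing the XML by hand has the same effect.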
In this section, you look at the default speech recognizer, which allows you to recognize speech quickly and easily.
By default, the speech recognizer API uses a dictation grammar that relies on a Microsoft cloud service to perform speech recognition. Using the speech recognition API in your app is as simple as instantiating a SpeechRecognizerUI instance and calling its RecognizeWithUIAsync method, as shown in the following example:
SpeechRecognizerUI recognizer = new SpeechRecognizerUI();
SpeechRecognitionUIResult uiResult = await recognizer.RecognizeWithUIAsync();
if (uiResult.ResultStatus == SpeechRecognitionUIStatus.Succeeded)
{
    string recognitionText = uiResult.RecognitionResult.Text;
    // Do something with the recognized text.
}
When the RecognizeWithUIAsync method is called, the SpeechRecognizerUI presents a dialog that prompts the user to speak to the device.
The RecognizeWithUIAsync method allows you to use the asynchronous programming model of C#. The await keyword allows the containing method to wait for the asynchronous operation to complete without blocking the UI thread. You see a more complete example later in the chapter that demonstrates the use of the asynchronous feature.
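To illustrate how the await-based call fits into an app, the following sketch shows the recognition code hosted in a button Click handler. The handler name ListenButton_Click and the ResultTextBlock control are assumptions for illustration; they are not part of the original example:

private async void ListenButton_Click(object sender, RoutedEventArgs e)
{
    SpeechRecognizerUI recognizer = new SpeechRecognizerUI();
    // Await the recognition dialog; the UI thread remains responsive while the user speaks.
    SpeechRecognitionUIResult uiResult = await recognizer.RecognizeWithUIAsync();
    if (uiResult.ResultStatus == SpeechRecognitionUIStatus.Succeeded)
    {
        ResultTextBlock.Text = uiResult.RecognitionResult.Text;
    }
    // Any other status (such as a cancelled dialog) means no text was recognized,
    // so the handler simply returns without updating the UI.
}

Because the handler is marked async, the method returns to the caller at the await point and resumes on the UI thread when recognition completes.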
Tip
The Windows Phone emulator can use the microphone of the host system. Thus, apps that use speech recognition and voice commands can be debugged in the Windows Phone Emulator as long as a microphone is attached to your computer.