Grammar Recognizer using Speech Recognition Grammar Specification (SRGS)

In the preceding section, we implemented the functionality to allow the user to control the robot using single words and short phrases, but this approach lacks the expressiveness that makes language so powerful and natural. In this section, we will explore an alternative approach that is more expressive and more in line with how people talk. We will be using GrammarRecognizer, which is available in the UnityEngine.Windows.Speech namespace, and will start off by looking at how to build a corpus of phrases we want to recognize, and then create a new PlayStateVoiceHandler to integrate it into the example.

The corpus of phrases we want to recognize will be written in an XML document that conforms to the SRGS, a standard governed by the World Wide Web Consortium (W3C) specifically for defining the syntax used by speech recognizers.

W3C is an international community made up of many organisations and members, tasked with governing the World Wide Web. Governance is achieved by setting standards to ensure openness. One such standard is the SRGS, which defines an XML schema used to describe the syntax for speech applications. Microsoft supports this standard via their Speech API. You can learn more by visiting the official site at https://www.w3.org/TR/speech-grammar/.

Start by creating a new folder in your project's Assets directory called StreamingAssets (Unity treats this folder specially only when it sits directly under Assets). Unlike other files in your project, files that reside in the StreamingAssets folder of a Unity project are copied verbatim to the filesystem of the destination platform. Once created, add an XML document (using an XML editor of your choice) called srgs_robotcommands.xml; this is the file that will contain our grammar.

The SRGS is a comprehensive schema, and I encourage you to investigate further if you are curious about this subject; in this example, we will only scratch the surface, but enough to put it to use in your own applications and, hopefully, enough to make you curious to learn more.

The general idea of the SRGS is to describe a set of phrases you are expecting from the user; its flexibility lies in how these phrases are defined. Unlike in the previous section where we were constrained to static words or short phrases, phrases in an SRGS are made up of a sequence of items, where items can be either single words, a sequence of words, or one of many alternatives. You also have the flexibility of making words optional, repetitive, and dynamically loaded at runtime. In the following diagram, we present a graphical representation of the phrases we are expecting from the user, something we will use as a reference when building up the SRGS:

Here, the circles represent the entry and exit points, rounded rectangles represent items, and the arrows indicate the sequence. We have two paths, each handling a different intent: the first (top) recognizes when the user wants to stop the current command, and the second (bottom) shows the flow of a user issuing a command. Items are chained together, with some marked as optional and some grouping subitems; for example, the two phrases--please rotate the base left and rotate base left--provide the same meaning (or, more correctly, satisfy the same flow).
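Before writing any XML, it can help to see how a grammar built from these pieces expands into concrete phrases. The following is a minimal, language-agnostic sketch in plain Python (not SRGS or Unity code; all function names are illustrative) of the three building blocks we will rely on: sequences, alternatives, and optional items:

```python
from itertools import product

def word(w):          return [[w]]                        # a single utterance
def optional(item):   return item + [[]]                  # repeat="0-1": may be skipped
def one_of(*alts):    return [p for a in alts for p in a] # pick exactly one alternative
def sequence(*parts): # chain items in order, like nested <item> elements
    return [[w for seg in combo for w in seg] for combo in product(*parts)]

# roughly the stop flow described above: an optional "please", then one of stop/halt
robot_stop = sequence(optional(word("please")), one_of(word("stop"), word("halt")))
phrases = {" ".join(p) for p in robot_stop}
print(sorted(phrases))  # ['halt', 'please halt', 'please stop', 'stop']
```

A real recognizer does not enumerate phrases like this (it matches audio against the grammar), but the combinatorics are the same: a handful of optional and alternative items covers many distinct utterances.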

For us, it's not enough just to recognize a phrase; ideally, we want to extract useful information from the user's utterance, for example, recognizing which part the user wants to move and how they want to move it. SRGS and GrammarRecognizer provide us with this functionality, and it is something we will make use of in this example, so let's jump into the document srgs_robotcommands.xml and start fleshing out the semantics.

We will start with our stop phrase, and despite its brevity, this short phrase will introduce everything we need to build out our longer, more complex phrase; with srgs_robotcommands.xml open, type or copy in the following:

<?xml version="1.0" encoding="UTF-8" ?>
<grammar version="1.0" xml:lang="en-US" mode="voice" root="Entry"
         xmlns="http://www.w3.org/2001/06/grammar" tag-format="semantics/1.0">

    <rule id="Entry" scope="public">
        <one-of>
            <item> <ruleref uri="#RobotStop"/> </item>
        </one-of>
    </rule>

    <rule id="RobotStop" scope="public">
        <example> please stop </example>
        <example> stop </example>

        <item repeat="0-1"> please </item>

        <one-of>
            <item> stop <tag> out.Action = "stop"; </tag> </item>
        </one-of>

    </rule>

</grammar>

As mentioned earlier, this short extract demonstrates everything we will need to build the rest of our grammar, so we will take our time on this first phrase and move quickly through the second.

The topmost element is the grammar tag. What is important here is the root attribute; its value references the entry rule in your grammar document, in our case suitably named Entry. This element is of the rule type, an element the engine uses to match the user's utterance against the defined phrases; it also provides a clean way of separating chunks of your phrase, thus allowing you to more easily reuse common expressions. Let's now define our entry point:

<rule id="Entry" scope="public">
    <one-of>
        <item> <ruleref uri="#RobotStop"/> </item>
    </one-of>
</rule>

The contents of the Entry element are wrapped in a one-of element; this is akin to a switch statement in C#, allowing alternative expressions to be evaluated. Here, we only have one item, but we will return in a bit to add a reference to our second phrase. Next, we have the item element, which can contain utterances expected from the user and/or subelements, such as ruleref, one-of, or tag. For example, to recognize the Hello World and Hello Earth phrases, we can define an item as follows:

 <item> Hello <ruleref uri="#WorldPhrases" /> </item>

Here, as you might have suspected, ruleref points to another rule (either within the same file or externally) containing a set of alternatives for world (World and Earth in our case). Let's now inspect the referenced rule RobotStop:

<rule id="RobotStop" scope="public">
    <example> please stop </example>
    <example> stop </example>

    <item repeat="0-1"> please </item>

    <one-of>
        <item> stop <tag> out.Action = "stop"; </tag> </item>
    </one-of>

</rule>

In the preceding code snippet, we first provide two examples using the example element; like code comments, these are ignored by the recognizer, but are useful to us. Next, we define an optional utterance using the repeat="0-1" attribute and value, which means the recognizer will successfully match this phrase with or without the utterance please at the beginning of the user's utterance.

We have wrapped the next part in a one-of element with an item containing the utterance we are expecting--stop. For a successful match, the user's phrase must match this.

You may have noted that we only have one element within one-of; in fact, the element is not strictly needed here. The purpose of using it is to introduce the element. As the name implies, it allows one of many items to be matched; for example, here we have only included the utterance stop, but we could easily add alternatives such as halt. To be successful, only one of these utterances needs to match.

As mentioned earlier, we need a way of extracting useful information, referred to as semantics, from the recognized phrases, and this is where tag comes in; it provides a way of propagating values back up to the GrammarRecognizer. The contents of a tag can be either a value (out = "stop") or a key-value pair (out.Action = "stop"); here, we are using the latter, assigning stop to the Action key.
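As a rough illustration of what the recognizer does with these tags, here is a hedged Python sketch (not the actual semantics/1.0 engine; the function name is made up) that matches the stop phrase and fills an out dictionary the way out.Action = "stop" would:

```python
# Toy matcher for the RobotStop rule: an optional "please" followed by "stop";
# when "stop" matches, the <tag> fires and writes into "out".
def recognize_stop(tokens):
    out = {}
    i = 0
    if i < len(tokens) and tokens[i] == "please":  # <item repeat="0-1"> please </item>
        i += 1
    if i < len(tokens) and tokens[i] == "stop":    # <item> stop <tag> out.Action = "stop"
        out["Action"] = "stop"
        i += 1
    # succeed only if something matched and all tokens were consumed
    return out if (out and i == len(tokens)) else None

print(recognize_stop("please stop".split()))  # {'Action': 'stop'}
print(recognize_stop(["stop"]))               # {'Action': 'stop'}
print(recognize_stop("please halt".split()))  # None
```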

This now finishes our grammar definition for our first phrase--stop--and concludes our brief introduction to SRGS. We will now move on to fleshing out our second phrase, but will omit a lot of the details, as nothing new is introduced.

Start by extending our Entry element to include the reference to the second phrase; make the following amendments:

<rule id="Entry" scope="public">
    <one-of>
        <item> <ruleref uri="#RobotStop"/> </item>
        <item> <ruleref uri="#RobotExecute"/> </item>
    </one-of>
</rule>

We will continue drilling down, defining the top elements and then their dependencies. With this in mind, let's now define the RobotExecute rule:

<rule id="RobotExecute" scope="public">
    <example> please rotate the base left </example>
    <example> start rotating the base left </example>
    <example> rotate the base left by 30 degrees </example>

    <item repeat="0-1"> please </item>
    <item repeat="0-1"> start </item>

    <item>
        <ruleref uri="#Action"/>
        <tag> out.Action = rules.Action; </tag>
    </item>

    <item repeat="0-1"> the </item>

    <item>
        <ruleref uri="#Part"/>
        <tag> out.Part = rules.Part; </tag>
    </item>

    <item>
        <ruleref uri="#Direction"/>
        <tag> out.Direction = rules.Direction; </tag>
    </item>

    <item repeat="0-1">
        <item repeat="0-1"> by </item>

        <item>
            <ruleref uri="#Number"/>
            <tag> out.Change = rules.Number; </tag>
        </item>

        <item>
            <ruleref uri="#Unit"/>
            <tag> out.Unit = rules.Unit; </tag>
        </item>
    </item>

    <item repeat="0-1">
        <one-of>
            <item> please </item>
            <item> thanks </item>
        </one-of>
    </item>

</rule>

Despite it being a fairly long piece of text, it introduces nothing new, apart perhaps from some nuances that will be described in the following section. If you compare it with the graphical representation we saw earlier, you will see how representative it is of the actual grammar we are defining.

The nuance I was referring to is how we propagate results back to the recognizer; let's take the Part item as an example; the following is an extract from the preceding snippet:

<item>
    <ruleref uri="#Part"/>
    <tag> out.Part = rules.Part; </tag>
</item>

We wrapped the ruleref element in an item and included a tag. What might look out of place is the value assigned to Part, rules.Part; this is SRGS syntax that allows us to extract the result from a rule we are referencing. To make it more concrete, let's examine the Part rule:

<rule id="Part">
    <example> base </example>
    <example> arm 1 </example>
    <example> arm 2 </example>
    <example> tool </example>

    <one-of>
        <item>
            <ruleref uri="#PartBase"/>
            <tag> out = "base"; </tag>
        </item>
        <item>
            <ruleref uri="#PartArm1"/>
            <tag> out = "arm 1"; </tag>
        </item>
        <item>
            <ruleref uri="#PartArm2"/>
            <tag> out = "arm 2"; </tag>
        </item>
        <item>
            <ruleref uri="#PartTool"/>
            <tag> out = "tool"; </tag>
        </item>
    </one-of>
</rule>

As can be seen from the preceding code snippet, the Part rule defines a set of alternatives and, if matched, the item will assign a value to out; for example, if PartBase is matched, then base is returned:

<item>
    <ruleref uri="#PartBase"/>
    <tag> out = "base"; </tag>
</item>

SRGS exposes the results from rules via the rules object, so, in this example, rules.Part will contain the result assigned by the Part rule, which we in turn assign to our Part key. Let's continue drilling down our document, continuing with the individual part rules:

<rule id="PartBase">
    <example> base </example>

    <one-of>
        <item> base </item>
        <item> bottom </item>
    </one-of>
</rule>

<rule id="PartArm1">
    <example> arm 1 </example>

    <one-of>
        <item> arm 1 </item>
        <item> arm one </item>
        <item> lower arm </item>
        <item> bottom arm </item>
    </one-of>
</rule>

<rule id="PartArm2">
    <example> arm 2 </example>

    <one-of>
        <item> arm 2 </item>
        <item> arm two </item>
        <item> upper arm </item>
        <item> top arm </item>
    </one-of>
</rule>

<rule id="PartTool">
    <example> tool </example>

    <one-of>
        <item> tool </item>
        <item> end </item>
        <item> end effector </item>
        <item> hand </item>
        <item> handle </item>
    </one-of>
</rule>

The preceding code snippet contains a set of rules to define a list of alternatives for each part. We could just as easily have embedded these into the Part rule, but, just like code, this separation allows easier management and offers more resilience to change.
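To see why this separation pays off, here is a hypothetical Python sketch (illustrative names, and only a subset of the alternatives) of how each part sub-rule collapses its synonyms into one canonical value, which the parent Part rule then copies into out.Part via rules.Part:

```python
# sub-rule id -> utterances it accepts (illustrative subset of the grammar above)
PART_ALTERNATIVES = {
    "base":  {"base", "bottom"},
    "arm 1": {"arm 1", "arm one", "lower arm", "bottom arm"},
    "arm 2": {"arm 2", "arm two", "upper arm", "top arm"},
    "tool":  {"tool", "end", "end effector", "hand", "handle"},
}

def match_part(utterance):
    # like the <one-of> over PartBase/PartArm1/...: whichever sub-rule matches,
    # its canonical name is what the parent rule sees as rules.Part
    for canonical, alternatives in PART_ALTERNATIVES.items():
        if utterance in alternatives:
            return canonical
    return None

out = {}
rules_part = match_part("lower arm")
if rules_part is not None:
    out["Part"] = rules_part  # mirrors <tag> out.Part = rules.Part; </tag>
print(out)  # {'Part': 'arm 1'}
```

Adding a new synonym only touches one sub-rule, and everything downstream keeps seeing the same small set of canonical values.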

Let's now define our Action rule:

<rule id="Action">
    <example> rotating </example>
    <example> move </example>

    <one-of>
        <item> rotate <tag> out = "rotate"; </tag> </item>
        <item> rotating <tag> out = "rotate"; </tag> </item>
        <item> move <tag> out = "move"; </tag> </item>
        <item> moving <tag> out = "move"; </tag> </item>
    </one-of>
</rule>

Now, let's look at the Direction rule. As we are seeing a lot of repetition, we will omit many of the subrules. For the complete version, download the associated project:

<rule id="Direction">
    <example> left </example>
    <example> up </example>
    <example> forward </example>

    <one-of>
        <item>
            <ruleref uri="#DirLeft"/>
            <tag> out = "left"; </tag>
        </item>
        <item>
            <ruleref uri="#DirRight"/>
            <tag> out = "right"; </tag>
        </item>
        <item>
            <ruleref uri="#DirUp"/>
            <tag> out = "up"; </tag>
        </item>
        <item>
            <ruleref uri="#DirDown"/>
            <tag> out = "down"; </tag>
        </item>
        <item>
            <ruleref uri="#DirForwards"/>
            <tag> out = "forwards"; </tag>
        </item>
        <item>
            <ruleref uri="#DirBackwards"/>
            <tag> out = "backwards"; </tag>
        </item>
    </one-of>
</rule>

<rule id="DirLeft">
    <example> left </example>

    <one-of>
        <item> left </item>
        <item> clockwise </item>
    </one-of>
</rule>

...

Our final rules are Number and Unit; as before, we will omit many of the details, especially for the Number rule:

<rule id="Unit">
    <one-of>
        <item> degrees <tag> out = "degrees"; </tag> </item>
        <item> meters <tag> out = "meters"; </tag> </item>
        <item> centimeters <tag> out = "centimeters"; </tag> </item>
        <item> millimeters <tag> out = "millimeters"; </tag> </item>
    </one-of>
</rule>

<rule id="Number">
    <one-of>
        <item> zero <tag> out = 0; </tag> </item>
        <item> one <tag> out = 1; </tag> </item>
        <item> two <tag> out = 2; </tag> </item>
        <item> three <tag> out = 3; </tag> </item>
        <item> four <tag> out = 4; </tag> </item>
        ...
    </one-of>
</rule>

This completes our SRGS, though we have only scratched the surface of what is possible. I encourage you to continue exploring and learning, especially as Voice User Interfaces (VUIs) are still in their infancy and will, in no time, become one of the dominant ways we interact with our digital peers. Furthermore, SRGS offers a flexible and comprehensive solution that caters for a lot of use cases.

With our SRGS now defined, it's time to return to code and make use of it. As we did earlier, we will create a concrete implementation of our PlayStateVoiceHandler class, specifically to use our grammar file with GrammarRecognizer. With Unity Editor open, expand the App/Scripts folder in the Project panel and create a new script called PSSRGSGrammarHandler by clicking on the Create dropdown and selecting C# Script. Double-click on PSSRGSGrammarHandler to open it in Visual Studio.

Our newly created script PSSRGSGrammarHandler will resemble much of our PSKeywordHandler class, mainly because they are trying to achieve the same thing, with the main difference being how they interpret the recognized phrases from the user. Let's start by inheriting from the PlayStateVoiceHandler class and wiring up the GrammarRecognizer:

using System;
using System.IO;
using UnityEngine;
using UnityEngine.Windows.Speech;

public class PSSRGSGrammarHandler : PlayStateVoiceHandler
{
    public ConfidenceLevel confidenceLevel = ConfidenceLevel.Medium;

    public float rotationSpeed = 5.0f;

    public float moveSpeed = 5.0f;

    public override void StartHandler()
    {
    }

    public override void StopHandler()
    {
    }

    private void Update()
    {
    }

    private void OnDestroy()
    {
    }
}

We first include the System.IO and UnityEngine.Windows.Speech namespaces, the former to read our srgs_robotcommands.xml file and the latter to get access to the GrammarRecognizer. We inherit from our PlayStateVoiceHandler class and implement the abstract methods. Finally, we include a confidence threshold in the confidenceLevel variable, which we will use to filter out any results that fall below this level, and two variables exposing the speeds at which the robot arm will rotate and move: rotationSpeed and moveSpeed. Next, we will flesh out the StartHandler and StopHandler abstract methods, which will be responsible for loading and disposing of the GrammarRecognizer. Make the following amendments to the PSSRGSGrammarHandler class:

public string SRGSFileName = "srgs_robotcommands.xml";

private GrammarRecognizer grammarRecognizer;

public override void StartHandler()
{
    if (grammarRecognizer == null)
    {
        try
        {
            grammarRecognizer = new GrammarRecognizer(
                Path.Combine(Application.streamingAssetsPath, SRGSFileName));

            grammarRecognizer.OnPhraseRecognized += GrammarRecognizer_OnPhraseRecognized;
        }
        catch
        {
            throw new Exception(string.Format(
                "Error while trying to load or parse the SRGS file {0}", SRGSFileName));
        }
    }

    grammarRecognizer.Start();
}

public override void StopHandler()
{
    if (grammarRecognizer != null)
    {
        grammarRecognizer.Stop();
    }
}

private void Update()
{
}

private void OnDestroy()
{
    if (grammarRecognizer != null)
    {
        grammarRecognizer.Stop();
        grammarRecognizer.OnPhraseRecognized -= GrammarRecognizer_OnPhraseRecognized;

        grammarRecognizer.Dispose();
        grammarRecognizer = null;
    }
}

private void GrammarRecognizer_OnPhraseRecognized(PhraseRecognizedEventArgs args)
{
}
}

We add two new variables, one to store the location and filename of our SRGS file and the other to hold a reference to an instance of the GrammarRecognizer. In the StartHandler method, we instantiate the GrammarRecognizer, passing in the file path of our srgs_robotcommands.xml file. Next, we register our GrammarRecognizer_OnPhraseRecognized delegate and then call Start on the GrammarRecognizer to begin listening.

Within the StopHandler method, we simply call Stop on the GrammarRecognizer, and we take care of unregistering the event handler and disposing of the GrammarRecognizer in the OnDestroy method. Finally, we add our GrammarRecognizer_OnPhraseRecognized handler.

With just the previously written code, we have a functional GrammarRecognizer. Our last task is to handle the recognized phrases, but, before we do, let's quickly discuss what the GrammarRecognizer returns.

KeywordRecognizer and GrammarRecognizer both inherit from PhraseRecognizer, and both return a PhraseRecognizedEventArgs instance when a phrase is recognized. While both share this type, only the GrammarRecognizer makes use of its semanticMeanings property (an array of SemanticMeaning). This type encapsulates the semantics recognized within your phrase; for example, in our SRGS, we define an output variable--Action--which is set to either "stop", "rotate", or "move"; if matched, these values are returned via the PhraseRecognizedEventArgs using the semanticMeanings property. We can iterate through the returned semantics, match each key against one we are expecting (in this case, Action), and then obtain its value (or values if there are multiple results).
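A hedged sketch of that iteration in plain Python (not the Unity API; the shape of the data is illustrative), treating each SemanticMeaning as a key plus an array of values:

```python
# Unpack a semanticMeanings-style array into a lookup table, keeping the
# first value per key and skipping any entry with no values.
def unpack_semantics(semantic_meanings):
    return {key: values[0] for key, values in semantic_meanings if values}

meanings = [("Action", ["rotate"]), ("Part", ["base"]), ("Direction", ["left"])]
semantics = unpack_semantics(meanings)
print(semantics)  # {'Action': 'rotate', 'Part': 'base', 'Direction': 'left'}
```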

KeywordRecognizer and GrammarRecognizer cannot run at the same time; before using one, you must explicitly stop the other.

To mitigate the possibility of bugs, we will define some constants for the semantic keys we are expecting; make the following amendments to the PSSRGSGrammarHandler class:

sealed class SemanticKeys
{
    public const string Action = "Action";
    public const string Part = "Part";
    public const string Direction = "Direction";
    public const string Change = "Change";
    public const string Unit = "Unit";
}

sealed class CommandAction
{
    public const string Stop = "stop";
    public const string Rotate = "rotate";
    public const string Move = "move";
}

sealed class CommandUnit
{
    public const string Degrees = "degrees";
    public const string Meters = "meters";
    public const string Centieters = "centimeters"; // value must match the Unit rule's tag output
    public const string Millimeters = "millimeters";
}

Looking at the graphical representation of the phrases we are interested in, we can see an opportunity to encapsulate the details in some structure. For example, the phrase (or command) will either stop the robot or manipulate a specific part of it. In the latter case, we expect a reference to the part, a direction, and possibly some discrete change. Encapsulating this in a structure keeps our code cleaner and better prepares the code base for future requirement changes. Let's now define a struct that encapsulates the parameters of our phrase; make the following amendments to the PSSRGSGrammarHandler class:

public struct Command
{
    public string action;
    public string part;
    public string direction;
    public string unit;
    public float? change;

    public bool IsDiscrete
    {
        get { return change.HasValue && !string.IsNullOrEmpty(unit); }
    }

    public float ScaledChange
    {
        get
        {
            if (!change.HasValue)
            {
                return 0;
            }

            return change.Value * GetMeterScaleForUnit(unit);
        }
    }

    public float GetMeterScaleForUnit(string unit, float defaultScale = 1f)
    {
        if (string.IsNullOrEmpty(unit))
        {
            return defaultScale;
        }

        switch (unit)
        {
            // return the scale factor only; ScaledChange multiplies it by the change
            case CommandUnit.Centieters:
                return 1f / 100f;
            case CommandUnit.Millimeters:
                return 1f / 1000f;
        }

        return defaultScale;
    }
}
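As a sanity check on the unit standardization above, here is a small Python sketch of the same logic (UNIT_SCALE and scaled_change are illustrative names, not part of the project): spoken changes are scaled to meters, and unknown units (such as degrees for rotations) fall back to a default scale of 1.

```python
# map spoken unit -> scale factor into meters
UNIT_SCALE = {"meters": 1.0, "centimeters": 0.01, "millimeters": 0.001}

def scaled_change(change, unit, default_scale=1.0):
    if change is None:  # no discrete change requested
        return 0.0
    return change * UNIT_SCALE.get(unit, default_scale)

print(scaled_change(30, "centimeters"))  # 0.3
print(scaled_change(45, "degrees"))      # 45.0
```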

In the preceding code snippet, we define a simple struct that encapsulates the possible values returned by the GrammarRecognizer; we have added two conveniences: the IsDiscrete property, which returns true if the user's intention is a discrete movement and false otherwise, and GetMeterScaleForUnit, which standardizes the value requested by the user to meters (the unit we are working with in Unity). Before moving on to handling the response from the GrammarRecognizer and binding the returned values to the Command struct, let's define a variable and property to hold a reference to the current command, unsurprisingly named CurrentCommand. The reason we do this is that, as in the preceding section, if a Command is not discrete, it will be executed continuously until explicitly told to stop:

private Command? _currentCommand;

public Command? CurrentCommand
{
    get
    {
        return _currentCommand;
    }
    private set
    {
        // deactivate the solver if the outgoing command was driving the IK handle
        if (_currentCommand.HasValue &&
            _currentCommand.Value.part.Equals(PART_HANDLE, StringComparison.OrdinalIgnoreCase))
        {
            PlayStateManager.Instance.Robot.solverActive = false;
        }

        _currentCommand = value;

        // and activate it if the incoming command drives the IK handle
        if (_currentCommand.HasValue &&
            _currentCommand.Value.part.Equals(PART_HANDLE, StringComparison.OrdinalIgnoreCase))
        {
            PlayStateManager.Instance.Robot.solverActive = true;
        }
    }
}

Within the CurrentCommand property, we handle toggling the solverActive variable of RobotController every time a Command is set. This ensures that the robot never ends up in an invalid state, with the inverse kinematics solver active when it shouldn't be.

We are almost there; the last two major tasks remaining are creating and binding a Command to a recognized phrase and then actually processing the CurrentCommand. Let's start with binding; this, of course, is performed when we are notified by the GrammarRecognizer of a valid match via the OnPhraseRecognized event. Make the following amendments to the GrammarRecognizer_OnPhraseRecognized method:

private void GrammarRecognizer_OnPhraseRecognized(PhraseRecognizedEventArgs args)
{
    // ConfidenceLevel is ordered from High (0) to Rejected (3), so a larger
    // value means less confidence; discard anything above our threshold
    if (args.confidence > confidenceLevel)
    {
        return;
    }

    Command commandCandidate = CreateCommand(args);

    if (IsCommandValid(commandCandidate))
    {
        CurrentCommand = commandCandidate;
    }
}

We first ensure that the interpreted phrase has reached our confidence threshold and, if satisfied, delegate the creation and binding of the Command instance to the CreateCommand method, passing over the received PhraseRecognizedEventArgs. The CreateCommand method is responsible for iterating through each SemanticMeaning assigned to the argument and binding it to the relevant variable of our newly created command object. Once this is returned, we verify that it is valid before assigning it to the CurrentCommand property (implemented earlier).
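The direction of that confidence comparison is easy to get backwards. Assuming Unity's ConfidenceLevel ordering (High = 0, Medium = 1, Low = 2, Rejected = 3), a larger enum value means less confidence, so the gate can be sketched as follows (plain Python, illustrative function name):

```python
from enum import IntEnum

# mirrors UnityEngine.Windows.Speech.ConfidenceLevel's ordering
class ConfidenceLevel(IntEnum):
    High = 0
    Medium = 1
    Low = 2
    Rejected = 3

def passes_threshold(result, threshold=ConfidenceLevel.Medium):
    return result <= threshold  # keep High and Medium, drop Low and Rejected

print(passes_threshold(ConfidenceLevel.High))  # True
print(passes_threshold(ConfidenceLevel.Low))   # False
```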

Let's now implement our creation and validation methods; add the following methods to the PSSRGSGrammarHandler class:

Command CreateCommand(PhraseRecognizedEventArgs args)
{
    SemanticMeaning[] meanings = args.semanticMeanings;

    return new Command
    {
        action = meanings.Contains(SemanticKeys.Action)
            ? meanings.SafeGet(SemanticKeys.Action).Value.values[0] : string.Empty,
        part = meanings.Contains(SemanticKeys.Part)
            ? meanings.SafeGet(SemanticKeys.Part).Value.values[0] : string.Empty,
        direction = meanings.Contains(SemanticKeys.Direction)
            ? meanings.SafeGet(SemanticKeys.Direction).Value.values[0] : string.Empty,
        change = meanings.Contains(SemanticKeys.Change)
            ? int.Parse(meanings.SafeGet(SemanticKeys.Change).Value.values[0]) : 0,
        unit = meanings.Contains(SemanticKeys.Unit)
            ? meanings.SafeGet(SemanticKeys.Unit).Value.values[0] : string.Empty
    };
}

bool IsCommandValid(Command command)
{
    // details omitted for brevity
}

The CreateCommand method simply returns a new Command value with its properties bound to the available semantics of the PhraseRecognizedEventArgs argument. The next method, IsCommandValid, is responsible for ensuring that the Command is in a valid state for processing; this is a fairly verbose method and, for this reason, has been omitted here, but you can check it out in the full source available for download from this book's website.

The semanticMeanings property of PhraseRecognizedEventArgs is an array; to keep the code compact enough to publish, the Contains and SafeGet extension methods were created for convenience. The implementation of these extensions can be found in the Extensions.cs file accompanying this project.

Now we can listen to and understand the user's utterances (those that match our predefined grammar). The last piece of code to write executes the command; most of it should look familiar, as it shares a lot with the code in the preceding section. Add the following method to your PSSRGSGrammarHandler class:

void ProcessCurrentCommand()
{
    if (!CurrentCommand.HasValue)
    {
        return;
    }

    Command command = CurrentCommand.Value;

    switch (command.action)
    {
        case CommandAction.Stop:
        {
            // terminate command
            CurrentCommand = null;
            break;
        }
        case CommandAction.Rotate:
        {
            PlayStateManager.Instance.Robot.solverActive = false;

            if (command.IsDiscrete)
            {
                PlayStateManager.Instance.Robot.Rotate(
                    command.part,
                    GetRotationVector(command.direction, command.ScaledChange));
                CurrentCommand = null;
            }
            else
            {
                PlayStateManager.Instance.Robot.Rotate(
                    command.part,
                    GetRotationVector(command.direction, rotationSpeed * Time.deltaTime));
            }

            break;
        }
        case CommandAction.Move:
        {
            PlayStateManager.Instance.Robot.solverActive = true;

            if (command.IsDiscrete)
            {
                PlayStateManager.Instance.Robot.MoveIKHandle(
                    GetTranslationVector(command.direction, command.ScaledChange));
                PlayStateManager.Instance.Robot.solverActive = false;
                CurrentCommand = null;
            }
            else
            {
                PlayStateManager.Instance.Robot.MoveIKHandle(
                    GetTranslationVector(command.direction, moveSpeed * Time.deltaTime));
            }
            break;
        }
    }
}

We control which block is executed based on the action bound to the command, and for each action, we first check whether the command is discrete. If it is, we execute it using the associated direction and offset before terminating the command by setting it to null. If not, we adjust the part using the direction and related speed (rotationSpeed or moveSpeed). To execute commands continuously, we need to call this method each frame; we can easily do this by adding a call in the Update method. Let's add that now; make the following amendments to the Update method:

private void Update()
{
    if (CurrentCommand.HasValue)
    {
        ProcessCurrentCommand();
    }
}

This completes our PSSRGSGrammarHandler class; the only thing left to do is hook it up in the editor and test it. Jump back into the Unity Editor and, as we did with the PSKeywordHandler, expand the Managers GameObject in the Hierarchy panel (if not already expanded). Add a new empty GameObject named PSSRGSGrammarHandler by clicking on the Create dropdown, selecting Create Empty, and entering the name. Next, attach our script: select the newly created PSSRGSGrammarHandler GameObject, click on the Add Component button in the Inspector panel, and type and select PSSRGSGrammarHandler. Once attached, select the Managers GameObject and assign the PSSRGSGrammarHandler to the PlayStateManager script by dragging the PSSRGSGrammarHandler onto the Voice Handler field. The Inspector panel of the Managers GameObject should look similar to this:

Try out the new voice handler by building and deploying to your device.
