This recipe handles two important parts of making characters seem alive and sentient. Technically, both can be handled using AnimChannels, but they still deserve their own mention as they have some special requirements.
Lip syncing revolves around phonemes: the distinct shapes the mouth takes when making certain sounds. The number of phonemes a character has varies according to different needs, but there is a basic set that is used to create believable mouth movements.
Finally, we'll use jMonkeyEngine's Cinematics system to apply them in sequence and have the character speak (mime) a word. Cinematics is jMonkeyEngine's scripting system, and it can be used both to create in-game-scripted events and cutscenes. It is covered in more depth in Chapter 9, Taking Our Game to the Next Level.
We'll follow the control pattern in this recipe; the resulting control can either be merged into another animation controller or kept on its own.
It's preferred to have a model with phoneme animations ready, or to create them in an external modeling program. It's perfectly all right if the animations are one-frame static expressions.
If the previous options are not available, one method is to use the SDK's functionality to create subanimations. A version of Jaime with phoneme animations is supplied with the project for the sake of this recipe. For those interested in going through the process of creating subanimations themselves, there is a list of the ones used in the Enabling nightly builds section in Appendix, Information Fragments.
All the required functionalities can be implemented in a single class by performing the following steps:
First, create a class called ExpressionsControl that extends AbstractControl. In it, add a field for an AnimControl named animControl, one AnimChannel called mouthChannel, and another AnimChannel called eyeBrowChannel. Then add two enums for the expressions, including a RESET option for a neutral mouth expression, as shown in the following code:

public enum PhonemeMouth { AAAH, EEE, I, OH, OOOH, FUH, MMM, LUH, ESS, RESET }
public enum ExpressionEyes { NEUTRAL, HAPPY, ANGRY }
In the setSpatial method, we create one AnimChannel for the mouth animations and one for the eyes, and then add suitable bones to each of these, as shown in the following code. The list of available bones can be seen in SkeletonControl in the SceneComposer:

mouthChannel = animControl.createChannel();
mouthChannel.addBone("LipSide.L");
...
Next, add methods to set the phoneme and expression via the channels, using a short blend time. Set LoopMode to Loop or Cycle; the speed has to be higher than 0 or blending won't work. Set these for both AnimChannels:

public void setPhoneme(PhonemeMouth p) {
    mouthChannel.setAnim("Phoneme_" + p.name(), 0.2f);
}

public void setExpression(ExpressionEyes e) {
    eyeBrowChannel.setAnim("Expression_" + e.name(), 0.2f);
}
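The naming convention these setters rely on can be sketched engine-independently: the enum constant's name() is appended to a fixed prefix, so the enum values must exactly match the animation names stored in the model. A minimal sketch (the class name AnimNames is illustrative, not part of the recipe):

```java
// Demonstrates the "prefix + enum name" convention used to look up
// the phoneme and expression animations on the model.
public class AnimNames {
    public enum PhonemeMouth { AAAH, EEE, I, OH, OOOH, FUH, MMM, LUH, ESS, RESET }
    public enum ExpressionEyes { NEUTRAL, HAPPY, ANGRY }

    // Builds the animation name passed to AnimChannel.setAnim for a phoneme.
    public static String phonemeAnim(PhonemeMouth p) {
        return "Phoneme_" + p.name();
    }

    // Builds the animation name for an eye expression.
    public static String expressionAnim(ExpressionEyes e) {
        return "Expression_" + e.name();
    }
}
```

If the model's animation is named Phoneme_EEE, only PhonemeMouth.EEE will find it; a mismatched enum constant fails at runtime rather than at compile time, which is worth keeping in mind when adding phonemes.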
When this recipe was written, the following AnimationEvent constructor did not exist and AnimChannels were not applied properly. A patch has been submitted but may not have made it into a stable build. If required, the patch can be found in The AnimationEvent patch section in Appendix, Information Fragments. It can also be acquired by turning on nightly builds in the SDK.
public void setupHelloCinematic() {
    cinematicHello = new Cinematic((Node) jaime, 1f);
    stateManager.attach(cinematicHello);
    cinematicHello.addCinematicEvent(0.0f, new AnimationEvent(jaime, "Expression_HAPPY", LoopMode.Cycle, 2, 0.2f));
    cinematicHello.addCinematicEvent(0.1f, new AnimationEvent(jaime, "Phoneme_EEE", LoopMode.Cycle, 1, 0.1f));
    cinematicHello.addCinematicEvent(0.2f, new AnimationEvent(jaime, "Phoneme_LUH", LoopMode.Cycle, 1, 0.1f));
    cinematicHello.addCinematicEvent(0.3f, new AnimationEvent(jaime, "Phoneme_OOOH", LoopMode.Cycle, 1, 0.1f));
    cinematicHello.addCinematicEvent(0.7f, new AnimationEvent(jaime, "Phoneme_RESET", LoopMode.Cycle, 1, 0.2f));
    cinematicHello.setSpeed(1.0f);
    cinematicHello.setLoopMode(LoopMode.DontLoop);
    cinematicHello.play();
}
The technical principles behind the phonemes are not that different from animating other parts of the character. We create AnimChannels, each of which handles a different set of bones. The first tricky bit is organizing the channels if you want to be able to control different parts of the body at the same time.
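One way to keep that organization honest is to ensure no bone is claimed by two channels at once, since two channels driving the same bone will compete for it. A small, engine-independent sketch of that bookkeeping (ChannelPlan is an illustrative name, not a jMonkeyEngine class):

```java
import java.util.*;

// Tracks which bones belong to which named channel and rejects
// assignments that would let two channels drive the same bone.
public class ChannelPlan {
    private final Map<String, Set<String>> channels = new LinkedHashMap<>();

    // Assign a bone to a channel; returns false if another channel owns it.
    public boolean addBone(String channel, String bone) {
        for (Map.Entry<String, Set<String>> e : channels.entrySet()) {
            if (!e.getKey().equals(channel) && e.getValue().contains(bone)) {
                return false; // bone already driven by another channel
            }
        }
        channels.computeIfAbsent(channel, k -> new LinkedHashSet<>()).add(bone);
        return true;
    }

    public Set<String> bones(String channel) {
        return channels.getOrDefault(channel, Collections.emptySet());
    }
}
```

With a plan like this, the mouth and eyebrow channels can be populated once and verified to be disjoint before any animation code runs.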
The pipeline for applying the phonemes can also be tricky. The first step is to avoid setting them directly in code. It's not implausible that changing the expression of the character on certain events could be triggered directly from code, but doing so for each phoneme in a sentence would be very cumbersome. Using the Cinematics system is a good start, as it would be relatively simple to write a piece of code that parses a text file and creates a cinematic sequence from it. Timing is really crucial, and it can take a lot of time to get the movements synced with sound, so a format that allows quick iteration is important.
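As a sketch of that idea, a deliberately simple text format (hypothetical, not from the recipe) could hold one start time and animation name per line, and a small parser could turn it into the kind of addCinematicEvent calls used in setupHelloCinematic:

```java
import java.util.*;

// Parses a simple "startTime animationName" script into timed cues,
// mirroring the (time, animation) pairs fed to addCinematicEvent.
public class PhonemeScript {
    public static final class Cue {
        public final float time;
        public final String anim;
        Cue(float time, String anim) { this.time = time; this.anim = anim; }
    }

    // Lines look like "0.1 Phoneme_EEE"; blank lines and '#' comments are skipped.
    public static List<Cue> parse(String script) {
        List<Cue> cues = new ArrayList<>();
        for (String line : script.split("\n")) {
            line = line.trim();
            if (line.isEmpty() || line.startsWith("#")) continue;
            String[] parts = line.split("\\s+");
            cues.add(new Cue(Float.parseFloat(parts[0]), parts[1]));
        }
        return cues;
    }
}
```

Each resulting Cue would then feed one addCinematicEvent call, so retiming a sentence means editing a text file rather than recompiling.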
Another more complex way would be to build up a database that maps words and phonemes and automatically applies them in a sequence.
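A minimal sketch of such a mapping, with a hand-filled dictionary standing in for the database (the "hello" entry reuses the EEE, LUH, OOOH sequence from the cinematic above; class and method names are illustrative):

```java
import java.util.*;

// Maps words to ordered phoneme animation names so a whole sentence
// can be expanded into a playable sequence.
public class PhonemeDictionary {
    private final Map<String, List<String>> words = new HashMap<>();

    public void put(String word, String... phonemes) {
        words.put(word.toLowerCase(Locale.ROOT), Arrays.asList(phonemes));
    }

    // Expands each word of the sentence into its phonemes; unknown
    // words are silently skipped in this sketch.
    public List<String> toPhonemes(String sentence) {
        List<String> result = new ArrayList<>();
        for (String word : sentence.toLowerCase(Locale.ROOT).split("\\s+")) {
            result.addAll(words.getOrDefault(word, Collections.emptyList()));
        }
        return result;
    }
}
```

The resulting list, paired with per-phoneme durations, could then be scheduled as cinematic events automatically instead of hand-timing every sentence.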
The absolutely simplest approach is to not really care about lip syncing and just apply a moving mouth animation whenever the character speaks.