Lip syncing is normally a hot topic among animation students. The good news is that it's a fairly easy task if you follow some basic guidelines:
As with most things in animation, lip syncing gets easier once you have an organized workflow. Looking for good reference footage is also important for inspiration: notice how each person says the same word slightly differently from everyone else.
Open the file 010-Talk.blend. It has our character Otto with all his facial controllers, looking at someone behind our camera, as seen in the following screenshot. We also have an audio file recorded for our scene, 010-Talk.wav, in which a man's voice says "So... what do you want to do?"
In the Video Sequence Editor (VSE), add the sound strip 010-Talk.wav. Make sure that you set the Start Frame slider to 1 on the left-hand side of the window, so our sound strip starts on the first frame of our scene. You should also enable the Cache option to load this audio file into system memory.

Back in the VSE window, we're going to add some markers to indicate the syllables and frame positions we need to animate. These markers are visible in all timeline-type windows, such as the Dope Sheet, the Video Sequence Editor, and the Timeline itself, which makes it easier to spot where we should insert the keyframes to match the audio strip.
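Marker positions come from timing the sound accents in the audio, but Blender works in frames. A minimal sketch of the conversion (plain Python, outside Blender; a 24 fps scene and a strip starting on frame 1 are assumed):

```python
def frame_for(seconds: float, fps: int = 24, start_frame: int = 1) -> int:
    """Convert a timestamp in the audio (in seconds) to the nearest scene frame.

    Assumes the sound strip starts at start_frame, as set in this recipe.
    """
    return start_frame + round(seconds * fps)

# An accent heard 0.5 s into the strip lands on frame 13 at 24 fps:
print(frame_for(0.5))         # 13
print(frame_for(1.25, fps=24))  # 31
```

The same arithmetic works for any frame rate; just pass your scene's fps.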
Play the audio, find the frame where the first strong sound starts, press M to add a marker there, and press Ctrl + M to name it So. You can move markers by selecting them with the right mouse button and pressing G. Repeat this process for the other sounds that you think are stronger.

Remember that not every syllable should be animated: identify the stronger sounds, because they are the ones you should focus on. In our example, the stronger sounds are "So", "wan", and "do": "So, what you wanna do?"
The following screenshot shows our markers set and named properly on the VSE:
The layered animation approach we use for our characters' bodies is also relevant for animating mouth shapes. We're going to animate the basic shapes first and then add layers of details until we finish the shot.
Select the Jaw bone and insert keyframes to make our character open and close his mouth to match the sounds of the audio strip. Remember that we're only focusing on the jaw here; the lips will be taken care of in the next layer. The next screenshot shows our character with his mouth slightly open to match the sound of the word "so...".

On the next layer, we'll match the narrowing or widening of the mouth shape. The vowel sounds "O" and "U", for instance, require narrow mouth shapes, while "A", "E", and "I" need wider ones.
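The narrow/wide distinction can be sketched as a simple lookup from vowel sound to a mouth-width controller value. The names and numbers here are illustrative, not taken from the Otto rig:

```python
# Illustrative mouth-width values per vowel sound: negative values
# narrow the mouth ("O", "U"), positive values widen it ("A", "E", "I").
MOUTH_WIDTH = {
    "O": -0.8, "U": -1.0,          # narrow shapes
    "A": 0.9, "E": 0.7, "I": 0.6,  # wide shapes
}

def width_for(sound: str) -> float:
    """Return the mouth-width value for a vowel sound; consonants and
    anything unlisted fall back to the neutral shape (0.0)."""
    return MOUTH_WIDTH.get(sound.upper(), 0.0)

print(width_for("o"))  # -0.8  (narrow, as in "So")
print(width_for("a"))  # 0.9   (wide)
```

In practice you would key these values on the width controller at the marker frames from the previous step, on their own layer above the jaw motion.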
That holds true for any scene you're animating. If your audio has someone screaming in anger at someone else, for example, the whole body should follow the sound's accents. In your planning phase, it's useful to act out the scene in front of a mirror or camera, sketch thumbnails of your pose ideas for those sounds, and then transpose them to your character rig.
A useful tip when making these acting choices is to avoid being too obvious or literal: if your character says the word "big", his pose doesn't have to illustrate it. Try to make your character's body match the emotional state, not the word's meaning.
The file 010-Talk-complete.blend has this finished recipe for your reference.
By loading an audio file and setting up markers for the sound accents, you can have visual feedback to help create the mouth shapes for lip syncing. Building the mouth shapes in a layered fashion—just like you do with the body motion—is a good way to be more productive when animating your character while speaking.
You should always build asymmetry into the facial movements in order to achieve natural and fluid results. Remember that nobody speaks only with their mouth; the full body must be taken into account. When animating the body on top of a sound file, try to match the emotional state of your character. Avoid being too obvious and literal in your acting choices.
Appendix: Understanding Extremes, Breakdowns, Inbetweens, ones and twos
Chapter 6: Non-linear animation
Chapter 6: Animating in layers
Chapter 7: Easy to Say, Hard to Do: Mastering the Basics