© Purushothaman Raju 2019
P. Raju, Character Rigging and Advanced Animation, https://doi.org/10.1007/978-1-4842-5037-2_9

9. Morph Animation and Facial Rigging

Purushothaman Raju
Bangalore, Karnataka, India
In the previous chapters, we learned how to create custom character rigs using bones, bipeds, and the CAT tools. In this chapter, we will look at creating facial animation for our characters. We will also learn about the two techniques that are primarily used for facial animation:
  • The first technique is based purely on morph targets. It is used for close-up shots where only the character’s face is in the frame, not the rest of the body.

  • The second technique is a hybrid of bones and morphs. The bones handle head rotation and movement, while the morph targets handle the facial expressions.

Morph Animation

Before we begin to learn morph animation, let’s consider what morphing is. Morphing is the process of changing the appearance of one object into another. A quick example is the liquid-metal T-1000 from Terminator 2: Judgment Day, which changes into the various people it meets. A classic 2D morph example is the end scene of Saving Private Ryan, where Private Ryan visits the grave of Captain Miller and the shot morphs seamlessly from his younger self to his older self. Let’s now delve deeper into creating morphs in 3ds Max.

Morph animation in 3ds Max is done through the Morpher modifier, found in the Modifier list. To perform a morph animation, you need a minimum of two objects: a source and a target. (You can have any number of morph targets; we will look at that in the face morphing exercise.)

To achieve a predictable result when transforming one object into another, you need to make sure that both objects have the same number of vertices. This is usually not a problem if you create your source object, copy it, and use the copy as the target. The target can have its vertices, edges, or polygons moved, rotated, and scaled, but you cannot use any of the modeling tools that add or remove geometry, as they change the vertex count of the object.

How the Morpher Works

Let’s run through a quick example of how the Morpher works, and then we will move on to facial animation using morphs. We will morph a Cabernet wine glass into a Burgundy one (see Figure 9-1). There are, of course, many other types of glasses, but this pair is enough to understand how morphs work.
Figure 9-1. Red wine glass types

Fire up 3ds Max and load Morph_Begin.max. (If you get an error about the missing image Wineglasstypes.jpeg, browse to the reference image folder for Chapter 9 and assign the image from there.) I modeled the Cabernet wine glass using the polygon modeling tools. If you right-click the Cabernet glass and open Object Properties, you should see its vertex and face counts, as shown in Figure 9-2.
Figure 9-2. Object properties: vertex and face counts

Now we need to model the Burgundy wine glass, since we need both shapes for the morph to work. You could model the Burgundy glass directly from the reference image set up in the scene, but remember that both objects must end up with the same number of vertices. Keeping track of the vertex count while you model is hard (it is possible to check the count at every step, but it is not an ideal way to work).

The easier and better approach when you want to animate using morphs is to make a copy of the source object (in our case, the Cabernet glass): hold down the Shift key and move it with the Move tool to the position of the Burgundy glass in the reference image. When you release the mouse button, you will be prompted with the dialog shown in Figure 9-3.
Figure 9-3. Clone options

We need a copy, so choose Copy. (If you needed more copies, you could change the number of copies; for now, we need only one.) Rename the new copy Burgundy. You should now have two objects, one named Cabernet and the other named Burgundy. A checkpoint file covering the steps up to this point is provided, called Morph_Begin01.max.
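If you ever want to perform this clone step from a script, the snippet below is a minimal sketch using 3ds Max’s Python API (pymxs); it assumes the source object is named Cabernet, as in this exercise, and the offset distance is an arbitrary placeholder.

from pymxs import runtime as rt

# Clone the source object: the scripted equivalent of Shift-dragging
# with Clone Options set to Copy.
cabernet = rt.getNodeByName("Cabernet")
burgundy = rt.copy(cabernet)
burgundy.name = "Burgundy"
# Offset the copy to one side (placeholder distance).
burgundy.position = cabernet.position + rt.Point3(40, 0, 0)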

Since we cloned the first object, the second object shares the same vertex count. Now, with the second object aligned to the reference image, select it and enter the Vertex subobject mode of the Editable Mesh in the modifier stack. Begin tweaking the vertices so they fit the Burgundy glass in the reference image. See Figure 9-4.
Figure 9-4. Subobject mode

Remember not to use any modeling tools, as they will add or remove vertices. See Figure 9-5.

Note

You can select the Burgundy object and press Alt+X to make it see through.

Figure 9-5. X-ray mode alignment of the Burgundy object

Once you are happy with the shape, you will have two different-looking objects with the same number of vertices and faces, which is a must for any morph. A checkpoint file has been created called Morph_Begin02.max.
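If you want to verify the vertex counts from a script before adding the Morpher, here is a minimal sketch using pymxs; it assumes the two objects from this exercise are named Cabernet and Burgundy and that getNumVerts behaves as in MAXScript.

from pymxs import runtime as rt

source = rt.getNodeByName("Cabernet")
target = rt.getNodeByName("Burgundy")

# .mesh returns a TriMesh snapshot of the node, so getNumVerts works
# regardless of whether the object is an Editable Mesh or Editable Poly.
src_verts = rt.getNumVerts(source.mesh)
tgt_verts = rt.getNumVerts(target.mesh)

if src_verts == tgt_verts:
    print("OK: both objects have", src_verts, "vertices")
else:
    print("Mismatch:", src_verts, "vs", tgt_verts, "- the morph will not be predictable")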

Our intention is to morph the Cabernet into the Burgundy, so select the Cabernet glass and go to the Modify tab in the command panel. Add a Morpher modifier by clicking the Modifier list and choosing Morpher. See Figure 9-6.
Figure 9-6. Modifier list

Select the Morpher in the modifier stack and go to the Channel List section of the modifier. The channel list shows the channels into which you can load morph targets. We will discuss this more in a later section, as we move into facial animation. For now, right-click the first empty button in the channel list. You should see the Pick From Scene option, as shown in Figure 9-7.
Figure 9-7. Morpher channel, Pick From Scene option

Click Pick From Scene and then click the Burgundy object in the viewport. Once you have chosen the Burgundy object, the first channel in the list should show the Burgundy name with a green light, which denotes a valid morph target. See Figure 9-8.
Figure 9-8. Morpher channel, target object added

At this point, drag the spinner next to the Burgundy name in the channel list and watch what happens in the viewport. Your Cabernet glass should morph into a Burgundy glass as you increase the value from 0 to 100. You can now turn on Auto Key and animate this value to animate the morph.
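For readers who prefer to script their setups, here is a minimal sketch of the same steps through pymxs. It assumes the MAXScript Morpher functions behave as documented: WM3_MC_BuildFromNode loads a target into a channel (the scripted equivalent of Pick From Scene), and WM3_MC_SetValue drives that channel with a value from 0 to 100.

from pymxs import runtime as rt

cabernet = rt.getNodeByName("Cabernet")
burgundy = rt.getNodeByName("Burgundy")

# Add a Morpher modifier to the source object.
morpher = rt.Morpher()
rt.addModifier(cabernet, morpher)

# Load the Burgundy object into channel 1.
rt.WM3_MC_BuildFromNode(morpher, 1, burgundy)

# Drive the channel: 0 is fully Cabernet, 100 is fully Burgundy.
rt.WM3_MC_SetValue(morpher, 1, 50.0)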

Note

Before you render the animation, you need to hide the backdrop and the Burgundy object or move it away from the camera view so that it is not visible in your renders.

This is morph animation at a basic level. There are more than just two types of wine glasses, though. By modeling the other kinds with the same clone-and-tweak technique and adding them to subsequent channels in the channel list, you can drive several targets at once. For example, with a Burgundy glass and a short glass loaded into their respective channels, you could animate from a Cabernet to a short Burgundy glass by turning up both channels at the same time.

Facial Animation

Facial animation can be tedious or straightforward depending on the approach you take. In earlier chapters, we completed a rig that handled basic movements of a character’s head: we were able to tilt and rotate the head using the controllers, but facial animation goes beyond that. We need our characters to speak and show emotions to give them more life. Let’s look at facial animation using morphs.

In order to set up your character for facial animation with morphs, you first need to determine how many morphs are needed. (Remember that in our earlier exercise, we had a Cabernet glass and wanted it to morph into a Burgundy glass.) Morphs should be created based on the persona of the character. Let’s take a guided tour of the morphs used for a typical human facial animation.

Typical facial animation can be split into two categories:
  • Emotions, for the character’s expressions

  • Phonemes, for speech animation

Let’s dig deeper into each of these categories and break them down further to see which morphs need to be created.

Emotions: Facial Expression

Humans are capable of showing very diverse expressions. Recreating all of them is cumbersome, so in general the following expressions are more than enough for characters in games or short animations, unless you intend to show a wider range:
  • Happiness

  • Sadness

  • Anger

  • Fear/shock

  • Surprise

  • Disgust

When a human shows any of these expressions, a number of muscles are involved in contorting the face into what we see. At this point, I suggest you mimic these expressions in a mirror and note the changes happening in your face, especially compared to a neutral state in which no expression is shown.

You should have noted the following points; compare them to your observations. I am only noting the key features here. There is a lot more happening if you look at each expression in detail.
  • Happiness: The mouth corners are wider and go up a bit.

  • Sadness: The mouth corners are lower compared to a normal position.

  • Anger: Lowered eyebrows, lips closed or slightly open to show teeth.

  • Fear/shock: Eyebrows raised and mouth slightly open.

  • Surprise: Eyes wide open and mouth open (more like a jaw drop).

  • Disgust: Eyes nearly closed (not entirely, around 80%, although this varies based on race and culture) and cheeks raised.

Let’s go ahead and create facial animation of emotions with morphs.

I detached the head of the character that we previously used for rigging and deleted the body so that we can focus on the facial features alone. I created a checkpoint file with the head alone called Human_Head.max.

Fire up 3ds Max and open human_head.max. You should see the human head detached from the body. I also renamed the object to neutral. This serves as our default facial pose.
  1. Select the neutral face, hold down Shift, and move it along the X axis using the Move tool.

  2. Release the mouse button and, in the Clone Options dialog, choose Copy. Set the number of copies to 6.

  3. Name the copies Happiness, Sadness, Anger, Fear, Surprise, and Disgust.

  4. Now comes the hardest part of the exercise. Go into the modifier stack of each head (leaving the neutral head as it is) and tweak the polygons, edges, and vertices using the translation tools. (Remember: no modeling tools!)

  5. Do not forget to use Soft Selection in the modifier stack of the editable poly to get a smooth deformation. Enable Use Soft Selection and increase or decrease the falloff range for the desired result. Any polygon within the red-orange area is heavily influenced by the translation, those in the blue range are least affected, and anything outside the color range is not affected at all. See Figure 9-9.
     Figure 9-9. Soft selection

I recommend you find images or take photographs of a person who matches the persona of the character you are trying to animate and use them as a backdrop or reference. I created a checkpoint file with all the tweaked emotions, called Emotions.max.

You can see these emotions in Figures 9-10 through 9-12.
Figure 9-10. Happy and sad emotions

In Figure 9-10, apart from the obvious open and closed states of the mouth, there is a lot happening around the mouth area. Notice the shape of the chin in both states. Next up, Figure 9-11 shows anger and fear.
Figure 9-11. Anger and fear

Much of the tweaking in Figure 9-12 is around the eyes and mouth areas.
Figure 9-12. Surprise and disgust

Note that fear and disgust have a lot in common, except for the eyes. This should do well for the animation.

If you are unable to find a reference image, this chapter’s images are provided along with the source files in the reference images folder.

Note

The quality of your animations depends on the amount of detail and time you put into getting the expressions right. Keep tweaking and practicing, and your results will be fruitful.

If you want to succeed as an animator, observe the motions around you. There is a rhythm to all motions and facial expressions.

Another key point: just as we recommend a T-pose for body rigging, in facial animation you should always have the character’s mouth open in its default pose. This is because closing the mouth using the geometry’s subobject modes is easier than opening it.

A checkpoint file has been created called Emotions.max for you to reference if need be. You can use it to follow along in getting the morph animation for the character.

Okay let’s get into animating the head using morphs.
  1. Load Emotions.max, or follow along if you have created your own file with the poses.

  2. Select the neutral head.

  3. Go to the Modify tab of the command panel and add a Morpher modifier.

  4. In the channel list, right-click the first empty channel and choose Pick From Scene. Select the Happiness head.

  5. In the channel list, right-click the next empty channel and choose Pick From Scene. Select the Sadness head.

  6. In the channel list, right-click the next empty channel and choose Pick From Scene. Select the Anger head.

  7. In the channel list, right-click the next empty channel and choose Pick From Scene. Select the Fear head.

  8. In the channel list, right-click the next empty channel and choose Pick From Scene. Select the Surprise head.

  9. In the channel list, right-click the next empty channel and choose Pick From Scene. Select the Disgust head.
Your morphs are set! If you have additional morphs, you can load them into subsequent channels. We have set six emotion morphs for the character, for a total of seven states including the neutral one. Your channel list should look like Figure 9-13.
Figure 9-13. Morph list for neutral head

At this point, you can select all the other heads except the neutral one and hide them so that they are not visible in the renderer.
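If you would rather build these channels from a script, the loop below is a minimal sketch using pymxs; it assumes the heads in your scene are named exactly as listed above and that channels are filled in the order given.

from pymxs import runtime as rt

neutral = rt.getNodeByName("Neutral")
morpher = rt.Morpher()
rt.addModifier(neutral, morpher)

# Load each expression head into its own channel, in a predictable order.
targets = ["Happiness", "Sadness", "Anger", "Fear", "Surprise", "Disgust"]
for channel, name in enumerate(targets, start=1):
    rt.WM3_MC_BuildFromNode(morpher, channel, rt.getNodeByName(name))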

You can turn on Auto Key or use Set Key and tweak the values in the channel list. Keyframes are created automatically when Auto Key is on. To see keyframes for modifier elements in the track bar, be sure Modifiers is checked in the Key Filters dialog. See Figure 9-14.
Figure 9-14. Key Filters dialog

A checkpoint file called Emotions_Morph.max is available for your reference. You can now tweak the values of multiple morphs together to get more diverse results.
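As a scripted counterpart to Auto Key, the sketch below sets a few channel values at different frames with pymxs. It assumes WM3_MC_SetValue records keyframes when called inside an animate context (the equivalent of a MAXScript animate on block) and that channel 1 is Happiness and channel 2 is Sadness, as in the previous sketch.

import pymxs
from pymxs import runtime as rt

neutral = rt.getNodeByName("Neutral")
morpher = neutral.modifiers[0]  # 0-based indexing in pymxs; assumes the Morpher is the newest modifier

with pymxs.animate(True):
    with pymxs.attime(0):
        rt.WM3_MC_SetValue(morpher, 1, 0.0)
    with pymxs.attime(20):
        rt.WM3_MC_SetValue(morpher, 1, 100.0)  # fully happy at frame 20
    with pymxs.attime(40):
        rt.WM3_MC_SetValue(morpher, 1, 0.0)
        rt.WM3_MC_SetValue(morpher, 2, 80.0)   # blend toward sadness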

Phonemes

If we want our character to talk, we need to set up morphs for that as well. But how many morphs do we need? Maybe 26, one for each letter of the alphabet? Letters sound different when pronounced in conjunction with other letters: even though the English language has only 26 letters, it actually has 44 phonemes.

What exactly are phonemes? In simple terms, a phoneme is a distinct sound we make when we say a word. We don’t speak in letters but in sounds. For example, the word catch is made up of three sounds: c (more like ca), a (more like aeh), and tch as a single sound. It is hard to explain this much further in writing; phonemes are best learned through verbal training or sound files. English has a total of 44 phonemes. Does this mean we need to create morphs for all 44? Ideally yes, because we would have much more control over the character with 44 morphs, but if you want to simplify the process, you can do away with some of them. In our case, we will be using nine phoneme morphs.

I have provided a file called Phonemes.max so you can see the morphs created for the phonemes. The phoneme heads are created by duplicating the head and tweaking it with the Move, Rotate, and Scale tools, with the aid of Soft Selection for fine adjustments. Again, remember not to add or remove geometry, as morphs rely on the vertex count staying the same.

Here are the basic phonemes that I created for this exercise:
  • A and I: In this morph, the lips are slightly open and you can see the teeth and the tongue resting inside the mouth. See Figure 9-15.
    Figure 9-15. Phoneme_A_I

  • C, D, G, K, N, R, S, T: This is close to the neutral pose. The mouth is open just a faint bit and stretched. See Figure 9-16.
    Figure 9-16. Phoneme_C_D_G_K_N_R_S_T

  • E: This is the same as A and I, but the lips are spread wider, as in the happiness pose. See Figure 9-17.
    Figure 9-17. Phoneme_E

  • F and V: In this morph, the upper teeth come down to touch the lower lip. See Figure 9-18.
    Figure 9-18. Phoneme_F_V

  • L: The mouth is faintly open and the tongue touches the upper jaw in this morph. See Figure 9-19.
    Figure 9-19. Phoneme_L

  • M, B, and P: The lips are sealed in this morph. The duration varies with each letter, with M held the longest, followed by B and P. See Figure 9-20.
    Figure 9-20. Phoneme_M_B_P

  • O: This is the same as U, but the lips do not come forward. See Figure 9-21.
    Figure 9-21. Phoneme_O

  • U: In this morph, the mouth is pushed outward, as if going for a kiss, except that the lips are spaced apart in an oval shape. The lips tend to come forward. See Figure 9-22.
    Figure 9-22. Phoneme_U

  • W and Q: These are similar to U, but only the bottom teeth are visible, as the lower lip comes down a lot more when pronouncing these. See Figure 9-23.
    Figure 9-23. Phoneme_W_Q

Creating these morph positions is the biggest and most cumbersome task. Once the morphs are done, the animation is easy. There are plenty of third-party tools that help you create morphs more quickly. If you export a mesh from the Autodesk Character Generator, you can export the character with morphs and a preset bone rig that is already weighted.

Audio Lip Sync: Morph Targets

Let’s now use an audio file to create a lip sync. I have provided an audio file called Morph_Targets.wav for your reference. You can follow along with this file or with any voiceover file you have to lip sync the character. Be sure your audio is in 16-bit PCM .wav format, or it might not be supported by 3ds Max. Feel free to use any converter to convert it to .wav format.

Fire up 3ds Max if you haven’t already and open FacialAnimation_Begin.max. Note that the file has copies of a human head tweaked for the following morphs:

The main head is named Phoneme_Rest:
  • Eye blinks
    • RightEyeBlink and LeftEyeBlink

  • Phonemes
    • Phoneme_A_I, Phoneme_CDGKNRST, Phoneme_FV, Phoneme_L, Phoneme_MBP, Phoneme_O, Phoneme_U, and Phoneme_WQ

  • Expressions
    • Surprise, Sadness, Happiness, Fear, Disgust, and Anger

We will be using all these phonemes and expressions to achieve a good lip sync animation.
  1. Select the Phoneme_Rest head and go to the Modify tab in the command panel. In the Modifier list, choose Morpher.

  2. Select the Morpher and, in the channel list, click Load Multiple Targets. Choose only the morph heads that have phonemes in their names. I could load all the targets at this point, but doing so would add them in alphabetical order, mixing the expressions in with the phonemes. Loading them in three passes keeps the channel list organized.

  3. Click Load Multiple Targets again and choose the two eye blink heads.

  4. Finally, click Load Multiple Targets again and choose the expression heads.
Your Morpher channel list should now show all the phoneme morphs, followed by the blink morphs and then the expressions. A checkpoint file has been created called FacialAnimation_Begin01.max.
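To double-check the channel order from a script, the loop below prints the first few channel names; it is only a sketch and assumes the MAXScript function WM3_MC_GetName is available through pymxs and returns an empty string for unused channels.

from pymxs import runtime as rt

head = rt.getNodeByName("Phoneme_Rest")
morpher = head.modifiers[0]  # 0-based indexing in pymxs; assumes the Morpher is the newest modifier

# Print the names of the first 20 channels to verify the loading order.
for channel in range(1, 21):
    name = rt.WM3_MC_GetName(morpher, channel)
    if name:
        print(channel, name)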
  1. Change your viewport layout to a single view and frame Phoneme_Rest in the view (use the Zoom Extents Selected button in the bottom-right corner of the Max UI). Open the mini Curve Editor. Your UI should look similar to the one shown in Figure 9-24.
     Figure 9-24. 3ds Max UI, Morpher and mini Curve Editor

  2. In the mini Curve Editor, double-click Sound in the left pane. A ProSound dialog should open, as shown in Figure 9-25.
     Figure 9-25. ProSound dialog

  3. Click Add and choose the Morph_Targets.wav file provided in the reference audio folder for Chapter 9. If you would like to animate the head with a different voiceover, load that file instead, making sure it is a 16-bit .wav file.

  4. Turn on Permit Backwards Scrubbing in the ProSound dialog. This is very useful when lining up the morphs; without it, audio plays only when the play head is moving forward.

  5. The dialog shown in Figure 9-26 displays various parameters once the audio is loaded. Check the end frame reported for the file and make sure your animation range is at least that long; otherwise the audio will not fit.
     Figure 9-26. ProSound dialog: audio file loaded

  6. The animation is 100 frames long and the audio’s end frame is 97.8, so we are good. Click Close.

  7. You can also close the mini Curve Editor by clicking its Close button. Play the scene using the play head; once the audio is loaded, you can also scrub the time slider forward and backward to hear it. A checkpoint file has been created called FacialAnimation_Begin02.max.
  8. Click Key Filters, which is next to Set Key, and make sure Modifiers is checked. We are going to be animating modifiers; without this checked, we won’t see their keyframes in the timeline. At this point, if you would like, you can hide all the other geometry except Phoneme_Rest.

  9. We will use the Set Key method for this. In Set Key mode, all the morphs receive a keyframe when you set a key, unlike with Auto Key, where only the changed morph receives a keyframe.

  10. At frame 0, turn on Set Key and press the Set Key icon or press K. Notice how all the modifiers receive a keyframe (see Figure 9-27).
      Figure 9-27. All morphs keyframed

  11. If you scrub the timeline, you should notice that the F sound of the word "facial" starts at frame 6.

  12. We don’t want any changes to happen until frame 4, so go to frame 4 and click Set Key or press K. I am not keying frame 5 because I want to give an extra frame of preroll. One frame of preroll is not much, but it gives a subtler result; ideally, 2-4 frames of preroll is better.

  13. At frame 6, increase Phoneme_F_V to 100.

  14. At frame 10, increase Phoneme_A_I to around 75 and reduce Phoneme_F_V to 0.
  15. At this point, I recommend you open the mini Curve Editor and expand Sound in its left pane until you see Morph_Targets.wav and its waveform. In the View menu, choose Frame Extents Horizontal to frame the timeline to your scene range. You can use the audio waveform as a visual reference for positioning your keyframes. See Figure 9-28.
      Figure 9-28. Mini Curve Editor, waveform

  16. At frame 10, decrease Phoneme_A_I to 65.

  17. At frame 12, increase the Phoneme_CDGKNRST channel to 40.

  18. At frame 15, set a keyframe for Phoneme_L at 100 and bring the others to 0.
The same process needs to be followed for the rest of the words in the audio. Listen for each phonetic sound and, at the frame where it is pronounced, use the relevant morph to animate the face. There are two ways to proceed: set keyframes for the key phonemes across the entire voiceover first and then come back to fill in between them, or work linearly and set every key at a particular frame before moving on to the next. I recommend blocking in the keyframes first and then coming back to edit or add as necessary. (A scripted version of the keyframe schedule we just set is sketched below.)
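For reference, here is a scripted version of that keyframe schedule, again using pymxs. The channel numbers are assumptions based on an alphabetical loading order (1 = Phoneme_A_I, 2 = Phoneme_CDGKNRST, 3 = Phoneme_FV, 4 = Phoneme_L); check your own channel list before running anything like this, and note that it assumes WM3_MC_SetValue creates keys inside an animate context.

import pymxs
from pymxs import runtime as rt

head = rt.getNodeByName("Phoneme_Rest")
morpher = head.modifiers[0]  # 0-based indexing in pymxs; assumes the Morpher is the newest modifier

# (frame, channel, value) triples roughly matching the steps above.
schedule = [
    (4, 3, 0.0),     # hold F_V at rest through the preroll frame
    (6, 3, 100.0),   # "f" of "facial"
    (10, 1, 65.0),   # open into the vowel
    (10, 3, 0.0),
    (12, 2, 40.0),   # consonant cluster
    (15, 4, 100.0),  # "l"
    (15, 1, 0.0),
    (15, 2, 0.0),
]

with pymxs.animate(True):
    for frame, channel, value in schedule:
        with pymxs.attime(frame):
            rt.WM3_MC_SetValue(morpher, channel, value)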
  1. Keep playing a particular part over and over until you are satisfied. Reposition the keyframes as needed based on the audio cues.

  2. Once the keyframes are set, open the mini Curve Editor and, in the left pane, choose Object ➤ Phoneme_Rest ➤ Modified Object ➤ Morpher. Select a morph channel and you should see its graph. Tweak the curves to your liking using the graph tools to refine the animation. See Figure 9-29.
Figure 9-29. Mini Curve Editor: keyframe graphs

I went ahead and made a few more tweaks, adding more phoneme keyframes and also using the blink and expression morphs. A checkpoint file has been created called FacialAnimation_Begin03.max.

Hybrid: Facial Animation Using Morphs and Bones

Now let’s take the head and make it look around as it talks. We could have created four more morphs to make the head look up, down, left, and right, but instead we are going to use bones to get this done.

Load FacialAnimation_Begin04.max.
  1. In the left viewport, create a two-bone chain as depicted in Figure 9-30. You might need to lower the bone size, since the head mesh is very small. Rename the resulting bones Neck_Bone, Head_Bone, and Head_End (the small nub bone at the end is created automatically).
     Figure 9-30. Morph head: bone setup

  2. Create another bone chain for the jaw, starting near the end of the neck bone. Select the newly created jaw bone and link it to the head bone. Rename the bones Jaw_Bone and Jaw_End. See Figure 9-31.
     Figure 9-31. Morph head: jaw bone setup

  3. From the Create panel, create a circle shape and name it Neck_Control.

  4. Create another circle in the front viewport and name it Head_Control.

  5. Position both circles as shown in Figure 9-32, using the Align tool with the bones as targets so that each circle’s pivot lines up with its corresponding bone.
     Figure 9-32. Head and neck controls
  6. Select the Head_Control and link it to the Neck_Control.

  7. Select the bone near the neck area and go to the Animation menu. Choose Constraints ➤ Orientation Constraint and then pick the Neck_Control. Your bone might flip to a different orientation; if so, go to the Motion panel and, under the Orientation Constraint rollout, enable Keep Initial Offset.

  8. Select the bone that overlaps the head (Head_Bone) and go to the Animation menu. Choose Constraints ➤ Orientation Constraint and pick the Head_Control. Again, if the bone flips, enable Keep Initial Offset in the Motion panel under the Orientation Constraint rollout.

  9. Select the Phoneme_Rest head mesh and, from the Modifier list, add a Skin modifier.

  10. In the Skin modifier, add the bones to the bone list, click Edit Envelopes, and weight the vertices using the tools we learned about in the skinning chapter.

  11. Now you can use the Head_Control to make the head look around as it talks through the Morpher channels. (A scripted sketch of the bone and constraint setup follows the next paragraph.)

A checkpoint file has been created called FacialAnimation_Complete.max. The Neck_Control can be made a child of a complete human rig so that the head moves along with the body.
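For completeness, here is a minimal scripted sketch of the bone and constraint portion of this setup in pymxs. The bone positions and circle radii are placeholder values, and it assumes the standard MAXScript calls (BoneSys.createBone, Orientation_Constraint, appendTarget, and the relative property that corresponds to Keep Initial Offset) behave as documented.

from pymxs import runtime as rt

# Create a simple neck/head bone chain (placeholder positions; adjust to your head mesh).
neck = rt.BoneSys.createBone(rt.Point3(0, 0, 0), rt.Point3(0, 0, 10), rt.Point3(0, 1, 0))
head = rt.BoneSys.createBone(rt.Point3(0, 0, 10), rt.Point3(0, 0, 20), rt.Point3(0, 1, 0))
head.parent = neck
neck.name = "Neck_Bone"
head.name = "Head_Bone"

# Circle shapes as animation controls, with the head control parented to the neck control.
neck_ctrl = rt.Circle(radius=15, name="Neck_Control")
head_ctrl = rt.Circle(radius=10, name="Head_Control")
head_ctrl.parent = neck_ctrl

# Orientation-constrain each bone to its control; relative=True is Keep Initial Offset.
for bone, ctrl in ((neck, neck_ctrl), (head, head_ctrl)):
    oc = rt.Orientation_Constraint()
    bone.rotation.controller = oc
    oc.appendTarget(ctrl, 50.0)
    oc.relative = True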

Summary

To summarize what we have learned in this book:
  • In Chapter 1, we learned the principles of animation.

  • In Chapter 2, we learned how to create animations and refine them using animation editors in 3ds Max.

  • In Chapter 3, we learned about advanced animation using various constraints.

  • In Chapters 4, 5, and 6, we learned to create character rigs using Bones, Bipeds, and the CAT toolkit.

  • In Chapter 7, we looked at how to bind the rigs that we created in earlier chapters to a character mesh.

  • In Chapter 8, we looked at animating a walk and run cycle for bipeds and quadrupeds.

  • In this chapter, we learned how to use morph tools to create morph expressions and phonemes for animation.

This concludes the book. I wish you all good luck in bringing your characters to life. Observe animations and try to mimic them; you may not achieve a professional level of work right away, but with practice, it will become second nature.
