
CALIFORNIA STATE UNIVERSITY, NORTHRIDGE

PEPI: A STUDY IN FACIAL ANIMATION WITH MAYA

A thesis submitted in partial fulfillment of the requirements
for the degree of Master of Science in
Computer Science

by

Ervin Bakhshian

December 1999
Signature page

The thesis of Ervin Bakhshian is approved:

____________________________________ ____________________
Melvin Epstein Date

____________________________________ ____________________
David Salomon Date

____________________________________ ____________________
Michael Barnes, Chair Date

California State University, Northridge

Trademark Acknowledgement

Alias|Wavefront is a trademark of Alias|Wavefront, a division of Silicon Graphics
Limited. Silicon Graphics and IRIX are registered trademarks of Silicon Graphics, Inc.
Maya is a registered trademark of Silicon Graphics, Inc., used exclusively by
Alias|Wavefront. Maya Paint Effects, Maya Artisan, and Maya MEL are trademarks of
Silicon Graphics, Inc., used exclusively by Alias|Wavefront. Intel is a registered
trademark of Intel Corporation. Windows NT is a registered trademark of Microsoft
Corporation. All other product names mentioned are trademarks or registered trademarks
of their respective holders.

Acknowledgements

Many thanks to my committee members, three inspiring teachers, Professors
Michael Barnes, Melvin Epstein, and David Salomon, for everything.

I would also like to thank Mr. Bill Buxton, Chief Scientist at Alias|Wavefront,
for making this project possible.

Dedication

to my father

Table of Contents

Signature page..................................................................................................................... ii
Trademark Acknowledgement........................................................................................... iii
Acknowledgements............................................................................................................ iv
Dedication ........................................................................................................................... v
List of Figures ................................................................................................................... vii
Abstract .............................................................................................................................. ix

Introduction......................................................................................................................... 1
Chapter 1: Maya.................................................................................................................. 1
1.1. Brief Overview: ....................................................................................................... 2
1.2. The Maya Architecture:........................................................................................... 2
1.2.1. The Dependency Graph:.................................................................................... 2
1.2.2. MEL: ................................................................................................................. 3
1.3. The Maya Interface:................................................................................................. 4
1.4. Artisan: .................................................................................................................... 4
1.5. Visual Effects: ......................................................................................................... 5
1.6. Applications of Maya: ............................................................................................. 5
Chapter 2: Pepi.................................................................................................................... 6
2.1. Character Concept: .................................................................................................. 6
2.2. Modeling Phase: ...................................................................................................... 8
2.3. Setting up the Animation Mechanism: .................................................................. 18
2.3.1. Controlling the Eyes:....................................................................................... 19
2.3.2. The Internal Structure - Joints for the Neck and Lower Jaw: ......................... 20
2.3.3. A lattice Flexor for the Neck:.......................................................................... 26
2.3.4. A Short Recapitulation:................................................................................... 27
2.3.5. Facial Expressions and Speech with Blend Shape Deformers:....................... 27
2.4. Adding Functionality with MEL: .......................................................................... 38
2.5. Adding a Custom Interface with MEL: ................................................................. 41
2.6. Texturing Phase: .................................................................................................... 66
2.6.1. A Bump Map for Pepi: .................................................................................... 68
2.6.2. A Color Map for Pepi:..................................................................................... 70
2.6.3. A Diffusion Map for Pepi: .............................................................................. 71
2.6.4. A Specularity Map for Pepi:............................................................................ 72
2.6.5. Texturing Pepi’s Eyes and Tongue: ................................................................ 73
2.7. Some Design Ideas for the Future: ........................................................................ 76
2.8. Animating Pepi:..................................................................................................... 77
Chapter 3: Project Summary............................................................................................. 79

References......................................................................................................................... 81
APPENDIX A: Image gallery........................................................................................... 82
APPENDIX B: Definitions ............................................................................................... 98
APPENDIX C: Useful online resources ......................................................................... 101

List of Figures

Figure 1: The Drawing that Inspired Pepi. ........................................................................ 7
Figure 2: Pepi’s Orthographic Drawings. .......................................................................... 8
Figure 3: Pepi’s Orthographic Drawings Mapped into Front and Side Views.................. 9
Figure 4: A Half Sphere Is Created in the Side View...................................................... 10
Figure 5: Sphere Reduced to Match the General Head Shape......................................... 11
Figure 6: The Nose and Overall Head Shape Become More Apparent........................... 12
Figure 7: Eyelids, Eyebrow, and Lower Jaw Area Are Formed...................................... 12
Figure 8: The Nostril Is Added, the Eye and Eyebrow Regions Are Defined................. 13
Figure 9: A Rough Eye Socket Is Formed, and the Mouth Cavity Is Started.................. 13
Figure 10: Mouth Is Completed, Eye Area Is Enhanced, Ear Position Is Marked. ......... 14
Figure 11: Isoparms Constituting the Head Shape. ......................................................... 15
Figure 12: The Completed Half of Pepi’s Head. ............................................................. 15
Figure 13: Pepi Transparent............................................................................................. 16
Figure 14: A Closer Look at Pepi’s Dentures.................................................................. 17
Figure 15: Pepi’s tongue. ................................................................................................. 17
Figure 16: The Completed Model of Pepi’s Head........................................................... 18
Figure 17: Each Eyeball Always Looks in the Direction of Its Locator.......................... 19
Figure 18: Pepi’s Cranial Skeleton Joints........................................................................ 20
Figure 19: Smooth Skinning Versus Rigid Skinning....................................................... 21
Figure 20: The Outliner. .................................................................................................. 22
Figure 21: The Hypergraph.............................................................................................. 22
Figure 22: Pepi’s Lower Jaw’s Cluster of CVs. .............................................................. 23
Figure 23: The Paint Weights Tool Feedback. ................................................................ 24
Figure 24: Before (a) and After (b) Assigning Weights to the Lower Jaw CVs. ............ 25
Figure 25: A Neck Flexor for Pepi. ................................................................................. 26
Figure 26: Pepi’s Neck, Before and After Applying a Lattice Flexor............................. 26
Figure 27: Pepi in His Initial and Animated States. ........................................................ 27
Figure 28: Pepi Blinking.................................................................................................. 29
Figure 29: Pepi Squinting. ............................................................................................... 29
Figure 30: Pepi Surprised. ............................................................................................... 30
Figure 31: Pepi Smiling. .................................................................................................. 30
Figure 32: Pepi Sad.......................................................................................................... 31
Figure 33: Pepi Angry...................................................................................................... 31
Figure 34: Pepi with Open Mouth. .................................................................................. 32
Figure 35: Pepi Blowing Air............................................................................................ 32
Figure 36: Left and Right Frown. .................................................................................... 33
Figure 37: Left and Right Smirk...................................................................................... 33
Figure 38: Left and Right Sneer. ..................................................................................... 34
Figure 39: Pepi Saying ‘AH’. .......................................................................................... 34
Figure 40: Pepi Saying ‘OH’. .......................................................................................... 35
Figure 41: Pepi Saying ‘UH’. .......................................................................................... 35
Figure 42: Pepi Saying ‘EH’............................................................................................ 36
Figure 43: Pepi Saying ‘FH’............................................................................................ 36
Figure 44: Pepi Saying ‘MH’........................................................................................... 37

Figure 45: Expression Editor with “lowerjawRotatorEXP”. ........................................... 40
Figure 46: The Attribute Editor. ...................................................................................... 44
Figure 47: A Preliminary Sketch for the Custom Interface. ............................................ 45
Figure 48: Pepi’s Custom Interface. ................................................................................ 62
Figure 49: A Custom Shelf Button for Invoking Pepi’s Control Panel. .......................... 62
Figure 50: Widget Hierarchy for the Custom Interface................................................... 63
Figure 51: Event Diagram for the Custom Interface. ...................................................... 64
Figure 52: Controlling Pepi’s Facial Expressions. .......................................................... 65
Figure 53: Controlling Pepi’s Speech Attributes............................................................. 65
Figure 54: Pepi Before Applying Image Maps................................................................ 67
Figure 55: The Bump Map and Its Effects. ..................................................................... 68
Figure 56: The Color Map and Its Effects. ...................................................................... 70
Figure 57: The Diffusion Map and Its Effects................................................................. 71
Figure 58: The Specularity Map and Its Effects. ............................................................. 72
Figure 59: Eyeball Texture. ............................................................................................. 73
Figure 60: Tongue Texture. ............................................................................................. 73
Figure 61: Pepi Complete with Textures. ........................................................................ 74
Figure 62: Lighting the Scene.......................................................................................... 75
Figure 63: The Graph Editor............................................................................................ 78
Figure 64: Modified Animation Curve for the Blink Attribute. ...................................... 78

Abstract

PEPI: A STUDY IN FACIAL ANIMATION WITH MAYA

by

Ervin Bakhshian

Master of Science in Computer Science

Maya, developed by Alias|Wavefront, a division of SGI, is the most advanced tool
available in the field of Digital Visual Effects today. This paper is the result of a study of
the Maya environment. It documents the creation process of a project that is the product
of experimenting with some of the major tools available in the Maya system. The result is
Pepi, a complex 3D character capable of displaying facial expressions and speech poses.

This thesis is a study of the most advanced tool available in the field of Digital
Visual Effects today. While it is not really intended as a tutorial, it does introduce the tool
and its capabilities, and documents the implementation of the author’s main project
utilizing the tool. Given the scale of the tool, only those program features that directly
pertain to the development of this particular project are covered in detail. It is
assumed that the reader is familiar with basic Computer Graphics concepts, and
preferably has experience with at least one 3D-design package.

Introduction

It is not easy to define what Maya [4, 13, Appendix C] is, since it provides such a
broad array of functionality. Maya is a product developed by Alias|Wavefront [Appendix
C], a division of SGI, and is the most powerful, comprehensive, and flexible 3D
animation and visual effects system available today. It is therefore, in my opinion, the
tool to be familiar with if you are in the Computer Graphics field.
Ever since I first heard of Maya, I have been eager to learn how to use it. This
Master’s program was the perfect opportunity. Maya is a tool that has enabled me to
combine some of the skills that I have learned throughout my academic career at
CSUN, as well as my artistic skills, into a single project.
My objective with this project is to demonstrate that I am able to assimilate and
proficiently use a tool such as Maya in a fairly short period of time. The main skills that I
will be using in this project are: programming, Graphical User Interface design, 3D
sculpting, painting, and animation.
Chapter 1 is a brief overview of some of Maya’s important features and its current
applications. Chapter 2 is the main body of this report. It contains a detailed description
of the project developed as a result of learning to use Maya. Chapter 3 is a summary of
the contents of this report.

Chapter 1: Maya

Most visual effects companies regard proficiency with a tool such as Maya very
highly and include it as a requirement for employment, but what they value even more is
the ability to expand and customize the software to meet specific production
requirements. The same is true for individual 3D artists, who are often limited by a tool
that either lacks certain features or cannot be adapted into the specific medium they are
comfortable working with. Meeting these needs means being able to write your own
tools, in addition to the many already provided, and to create your own user interface
to fit your specific workflow. Maya allows you to achieve these goals.
In the coming sections, I will go over some of the general characteristics of this
amazing program. I will start with a brief overview of Maya. I will then explain its
architecture, followed by its ingenious user interface. Then I will present “Artisan”
[5, 12], Maya’s revolutionary 3D-manipulation tool. Finally, after describing Maya’s visual
effects capabilities, I will conclude with some of its applications.

1.1. Brief Overview:

Maya is available for both Silicon Graphics IRIX and Intel-based Windows NT
workstations, and performs significantly better with OpenGL 3D acceleration hardware.
Maya file formats are readable without conversion across both platforms.
Maya performs well even on a system that barely meets the minimum
requirements. On a system with the recommended hardware, Maya handles complex,
shaded, lit, textured, high-resolution scenes, including characters, in real time.
Maya is based on an object-oriented C++ design with a native OpenGL graphics
implementation. The two main architectural features are the dependency graph (or node-
based architecture) and MEL, a unique scripting language. In addition, Maya's C++ API
provides the capability of writing custom plug-ins for the program. To build
Maya plug-ins, Alias|Wavefront recommends the Visual C++ 5.0 compiler on
Windows NT Service Pack 3 or later, or the MIPSpro 7.2 compiler on IRIX 6.5.4.
The program’s interface is outstanding. For the beginner, the default interface is
perfect for rapidly learning the program basics. As you become more proficient, you also
learn to reduce the display complexity on your screen and achieve functionality through
the use of shortcuts.

1.2. The Maya Architecture:

There are two components to Maya’s architecture: The dependency graph and
MEL, Maya’s embedded scripting language.

1.2.1. The Dependency Graph:

Maya’s node-based architecture is referred to as the dependency graph [13]. The
very basic structure could be described as nodes with connections between them. From
the object-oriented point of view, each node represents an object with properties and
methods. This is what gives Maya its flexible procedural characteristics. Any object,
whether a surface, a texture, a light, or a deformer, is represented by a node. If you are
familiar with VRML, there are some basic similarities.
Each node has attributes. A transform node for instance has x, y, and z attributes
for translation, rotation, and scale. You can define your own attributes for any node, as
well as methods. If, for example, you write an expression that links the translateX
attribute of a sphere node to the intensity attribute of a light node, then a method is
created as a property of the nodes. You can set keys for any attribute in Maya for
animation. Furthermore, you can group nodes, or parent a node to another, thus creating
connections between one node's outputs and another's inputs. What results is a graph of
nodes and connections or “dependencies” which make up your scene: the dependency
graph.
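The attribute-linking expression mentioned above can be sketched in MEL. The node names here are hypothetical, assuming a sphere transform called ball and a light shape called lampShape:

```mel
// Entered in Maya's Expression Editor. This hypothetical expression ties
// the light's intensity to the sphere's sideways position, creating a
// dependency between the two nodes in the graph.
lampShape.intensity = abs(ball.translateX) * 0.5;
```

Once the expression exists, translating the sphere along X automatically brightens or dims the light on every evaluation.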
These details are, however, transparent to the user. Most basic connections are
created automatically by Maya as a result of using the more intuitive creation tools. As a
simple example, when you click on the sphere creation icon, a sphere node with its
default attributes is created. As a direct result, a default material node is also created for
the sphere in order to visualize it. A transform node for the sphere is also automatically
created, in order to represent the sphere’s default position, orientation, and scaling.
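The same sphere can also be created from the command line; as a sketch, with the node names being Maya's defaults:

```mel
// Creating a NURBS sphere by command rather than by icon; Maya builds
// the supporting node network automatically.
sphere -name "ball";
// Result: a transform node "ball", a shape node "ballShape", and a
// construction-history node connected to it; the shape is assigned to
// the default shading group so that it can be visualized.
```
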
It is possible to access and edit this construction history through the Hypergraph
[13]: a very useful dependency graph editor where you can create connections between
nodes using a simple drag-and-drop interface. Also, this construction history can be
animated to easily create otherwise very complex animations. For instance, again as a
very simple example, imagine that you have created a snake using the lofting method,
which is to extrude a circle (the cross section of the snake’s body), along a sinuous curve
(the general shape of the body). You can then animate the snake by simply editing its
construction history and animating the curve only. The changes will be propagated
through all the dependent nodes in the construction history.
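The snake example might be set up along these lines; all names are made up for illustration, and the flags are indicative rather than a verified recipe:

```mel
// Profile: the snake's circular cross section.
circle -name "crossSection" -radius 0.5;
// Path: a sinuous curve describing the general shape of the body.
curve -name "bodyCurve" -p 0 0 0 -p 2 1 0 -p 4 -1 0 -p 6 0 0;
// Extrude the profile along the path (extrude type 2 = along a curve);
// construction history is kept by default.
extrude -et 2 "crossSection" "bodyCurve";
// Later, animating bodyCurve's control vertices reshapes the extruded
// body automatically through the dependency graph.
```
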

1.2.2. MEL:

MEL [13, 17], or Maya Embedded Language, is Maya’s integrated scripting and
command language, which gives you the power to customize Maya. If you are a 3D artist
and have no experience with programming, you can still use Maya very efficiently. It is
even fairly easy to learn simple MEL expressions that you can enter at the command line
or in the expression editor. This would be enough to enable you to specify simple node
attribute dependencies. However, studying MEL in detail gives you a definitive
advantage. You can then write complex scripts which will allow you to extend the system
to meet specific needs.
In Chapter 2, you will see how I have used MEL scripts to create custom controls,
unique to my character. You will also see the actual syntax of MEL and its similarities to
the C++ language. If you have a programming background, and are especially familiar
with C++, you can have significant control over Maya. MEL supports all the common
data types (integer, float, string, array, vector, matrix, etc.) as well as commands for all
Maya functions, so that you can call them in your scripts. Once you write a tool, you can
execute it by typing its name and arguments at the command line, or inside the script
editor. If you prefer, you can create a GUI for it, and/or simply attach it to an icon,
which can be placed on a default or user defined ‘tool shelf’. You can then invoke your
new tool with a simple mouse click.
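A minimal example of such a tool might look like this; the procedure name and behavior are my own invention:

```mel
// A tiny user-defined MEL tool. After it is sourced, typing `doubleScale`
// at the command line (or clicking a shelf icon bound to it) doubles the
// size of every selected object.
global proc doubleScale()
{
    string $sel[] = `ls -selection`;      // currently selected nodes
    for ($obj in $sel)
        scale -relative 2 2 2 $obj;       // uniform, relative scaling
}
```
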
MEL’s commands for building user interface components include elements very
similar to both the Microsoft Foundation Classes (MFC) and XMotif. Maya's user interface is
itself constructed from a library of MEL scripts. Maya licensees are granted the rights to
examine and modify the internal interface scripts of the tools provided, and to use them (or
parts of them) in their own scripts. Having access to these scripts, written by professionals
at Alias|Wavefront, is very helpful in accelerating the learning process. If you are familiar
with MFC or XMotif programming, you will feel comfortable with MEL’s
interface commands and syntax.
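For a flavor of that syntax, here is a hedged sketch of a trivial window built with MEL's UI commands (the window name and label are arbitrary):

```mel
// A minimal MEL interface: a window containing a single button whose
// command string is executed when the button is pressed.
window -title "Hello" helloWin;
    columnLayout;
        button -label "Greet"
               -command "print \"Hello from MEL!\\n\"";
showWindow helloWin;
```
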

1.3. The Maya Interface:

Maya is “huge”, with an obviously large cognitive load. Although it provides a
vast array of functionality, you can still configure its interface down to a simple, “small”
workspace, with only the basic tools you need for simpler projects, and
still produce outstanding work. You can vary the display complexity on your screen
depending on your skill and comfort level. A very innovative interface technique (which
I have not seen in any other program), called the Hotbox [13], is a dynamic,
transparent pop-up menu system that appears anywhere on your screen, under the
current cursor position, when you hold down the space bar. This menu is also
customizable, has the same items as the traditional “fixed” menus, automatically
changes depending on the current context (Modeling, Animation, Rendering,
Dynamics), and allows you to maximize your working window by hiding as many of the
GUI elements as you wish.
Unlike other programs, Maya provides precise and intuitive 3D manipulators for
performing transformations directly in the perspective view, without the need for
traditional editing using the orthographic views. Right-clicking on any object gives
access to its components. You can perform unlimited Undo/Redo operations. You can
drag & drop almost anything, from loading a file from the desktop to assigning materials
and expressions to a certain node. Selection handles allow you to easily select and
manipulate a group of nodes, locators, bone structures, clusters of control points, etc.
Each node comes with an integrated attribute editor. You can reduce display complexity
in your windows by defining filters to hide unwanted distracting entries. You can also
increase overall performance by reducing the display resolution of your objects. For
instance, a smoothness setting allows you to see your objects at various degrees of
complexity (rough/medium/fine), without actually altering the geometry.

1.4. Artisan:

Artisan [5, 12] is the name given to Maya’s revolutionary new modeling and
animation tool that lets you sculpt surfaces as if they were made of clay. The use of this
concept of intuitive direct 3D manipulation has been extended to a multitude of other
operations such as “script painting”. Script painting is the ability to define a node (a
geometry, a texture, etc.) as your brush, and then paint this attribute on the selected
surface, such as trees on a hill to make a forest. There is more: you can even paint
dynamics behavior or weights for a surface’s control points. As you will see in Chapter 2,
when the character opens his mouth, different skin points near the cheeks and jaw move
or “stretch” at different rates, and therefore need to be assigned different weights
to make the movement as realistic as possible.
Artisan provides you with a complete set of variously shaped brushes, each with
customizable size and strength. Although Artisan works perfectly well with a three-
button mouse, it has been implemented to be used with a digital pen and tablet. The only,
but very significant, advantage of using a digital tablet versus the mouse is that Artisan is
pressure sensitive. With the mouse, a click removes a fixed amount of “digital clay”,
whereas with the tablet, the pressure you apply with the pen determines the amount of
clay that is removed. This makes digital clay modeling even closer to the real thing.

1.5. Visual Effects:

Given initial specifications about all objects in the scene (collisions, elasticity,
speed, etc.) and about the environmental conditions of the scene (gravity, wind,
turbulence, etc.), Maya can automatically calculate the trajectory and behavior of objects
set in motion in that particular environment. In other words, it can extrapolate the effects
of objects on one another and on the environment, simulating actions and events that are
logical and natural to the eye [13, 15].
As a simple example, imagine that you have defined a planar surface to be the
ground, and a sphere some distance above the plane to be a ball. You have also created
collision and gravity conditions. Then, as soon as you start playing your scene, Maya will
simulate the entire motion, from the initial moment the ball is in the air and falling, to the
bouncing of the ball on the ground, decreasing the bouncing until the ball comes to a full
stop. This concept of “rigid body” dynamics [14, 15] is very powerful and can simulate
contact forces, resting forces, momentum transfer, and constraints, for realistic complex
animation.
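Under the hood, the bouncing-ball scene described above could be assembled with MEL commands roughly like these; the flags and object names are indicative assumptions, not a verified recipe:

```mel
// Ground plane and ball, some distance apart.
polyPlane -name "ground" -width 20 -height 20;
sphere -name "ball";
move 0 10 0 ball;
// The ground is a passive rigid body (it collides but never moves);
// the ball is an active one (it reacts to fields and collisions).
select ground;  rigidBody -passive -bounciness 0.6;
select ball;    rigidBody -active  -bounciness 0.6;
// Creating a gravity field with the ball selected connects the two;
// playback then simulates the fall and the decreasing bounces.
select ball;    gravity -magnitude 9.8;
```
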
For example, picture a character releasing a bowling ball down the lane. Maya
can automatically calculate the complex motion of the bowling ball, including the path,
velocity and rotation of the ball as it collides with the bowling pins, and the pins collide
with one another, fall or bounce on the ground, until everything comes to a complete
stop.
Maya also provides “soft body” dynamics [14, 15]. In our simple example, when
the ball falls and touches the ground, its elastic quality squashes it slightly from the
contact. Other examples are the subtle deformation of a character’s skin when pressure
is applied from touch or contact with another surface, or the automatic generation of a
realistic wake in the ocean as a fighter jet flies at low altitude over the water.
Outstanding particle system dynamics [15] is also available for creating
atmospheric, explosive, liquid and pyrotechnic effects. Unlike any other system, rigid
body, soft body, and particle system dynamics, as well as any other deformations in
Maya (lattice, cluster, wire, wrinkle, sculpt, twist, etc. [14]), can be combined for
unlimited visual effect possibilities, and amazing chain reaction effects.

1.6. Applications of Maya:

Maya has been used to create spectacular visual effects in some of the most
successful recent film releases, including Armageddon, A Bug’s Life (Pixar), Contact,
The Truman Show, Comedy Central's South Park, Titanic (Blue Sky|VIFX), Mighty Joe
Young (Dream Quest Images), the tornado sequence in The Avengers (Cinesite), and a
fleet of spacecrafts for Star Trek: Insurrection (Santa Barbara Studios and Blue
Sky|VIFX).

ILM (Industrial Light & Magic) has used Maya to model some of the aliens in
Men In Black, the hilarious contortions in The Mask, the dinosaurs in The Lost World:
Jurassic Park, and to create the stunning effects in Spawn and Twister. More recently,
Maya has been extensively used in the new Star Wars Episode I: The Phantom Menace.
Some of the most predominant names in the gaming industry that use Maya are
CAPCOM, NAMCO, Nintendo, and SEGA. Blue Sky|VIFX recently animated six
dinosaurs for a 3D IMAX™ movie called T-Rex.
Some of Alias|Wavefront’s visualization and industrial design customers include
Apple, AT&T, BMW, Boeing, Daimler-Chrysler, Fiat, Ford, General Motors, Honda,
Italdesign, Kodak, Mattel, Philips, Renault, Rollerblade, Sony, Timex, Volvo and
Volkswagen.

Chapter 2: Pepi

In chapter 1, you were introduced to Maya and its capabilities. Now, I would like
to demonstrate an actual project implementation, which I hope will give you a much
better understanding of some of the possibilities that Maya has to offer. In this chapter, I
will start by describing my project concept. Then, I will present the creation process in
detail. The chapter is structured around the following milestones:

• Character concept.
• Modeling phase.
• Setting up the animation mechanism.
• Adding functionality with MEL.
• Adding a custom interface with MEL.
• Texturing phase.
• Animation phase.

2.1. Character Concept:

Humanoid character animation is a relatively complex task simply because the
range of human expressions, with their randomness and seamless quality, is difficult to
achieve. Realism does not mean that your character has to lose any of its art-like
qualities. It does mean, however, that your character’s behavior must obey certain
universal or “common sense” rules in order to be convincing and appealing to the eye. In
other words, it must be credible. I have attempted to produce a work that is descriptive of
this motivation.
For this project, I chose to model and animate a male character’s head, focusing on
his facial expressions and speech attributes. This is my first attempt at developing a work
of this scale, which includes all of the elements of a complete 3D-animation project. Until
now, I had developed models with only minimal texturing and simplistic animation. The
character concept chosen here is mainly experimental, a learning experience. I have not
designed the character based on a particular storyboard. Therefore, I have not developed
a complex biography for him. I call him Pepi. As part of the design phase, and also to give him a personality, I will describe a few of Pepi’s traits and the known facts about him.
him.
Nobody really knows Pepi’s story, but as far as anyone in his neighborhood can
remember, he has always been here. Pepi’s favorite pastime is to sit on one of the public
benches in the nearby park, and contemplate the passersby. He hates dogs, and thinks all
children are brats. The shopkeepers use him as a living clock, telling the time of day
based on his routine itinerary. The very few people who know him a little look up to him
as someone precious who has lived through life’s nonsense and come out almost intact.
The rest, who see him every day during his promenades, know him as the old man who
talks to the birds.
In truth, Pepi is not that old. He has a mild, pensive expression, melancholic with
a little bitterness. The wrinkles and dark rings around his eyes are the testimony of someone
who has lived through a great deal. He hears voices in his head from time to time, ghosts
from the past tormenting him in his solitude. He talks to himself or to “them” quite
frequently. Pepi’s gaze is a bit disconcerting, but overall, he appears content.
Pepi was inspired by a drawing I made some time ago, which is shown in Figure
1 below.

Figure 1: The Drawing that Inspired Pepi.

2.2. Modeling Phase:

Now that you know a little more about Pepi, I will begin with the creation
process. I will start with the head shape, using orthographic template drawings as visual
helpers, and I will sculpt a sphere into the finished head shape using Maya’s Artisan
module. Eyes, teeth, gums and tongue, will also be created to complete the modeling
phase.

The first step is to create orthographic drawings and import them into Maya.
These drawings are shown in Figure 2.

Figure 2: Pepi’s Orthographic Drawings.

Note that this step is optional. It used to be difficult to model a face without
orthographic drawings, but with Artisan, the process is an intuitive one. Just like when
sculpting real clay, you would only need to refer to your concept drawing. It also depends
on your level of experience with 3D visualization, as well as your level of comfort with
virtual sculpting. This is only my second attempt at modeling a humanoid face, and therefore, I chose to include all the visual help I could get. But as you become more
proficient, you will find this phase unnecessary.

Orthographic drawings do not need to be perfect, or too detailed. They simply need to highlight the major silhouette lines and facial features (main creases, contours for
the mouth, nose, eyes, and eyebrows, etc.) Remember that they are only used to help you
visualize the general shape. You need them to be as simple and as clear as possible. Once
you have scanned the drawings, you simply create two perpendicular planes facing the front and side cameras, and you map your images onto the planes as file textures. The
result of this process is illustrated in Figure 3.

Figure 3: Pepi’s Orthographic Drawings Mapped into Front and Side Views.

With the visual helpers in place, we are ready to model. The next step is to create
half a NURBS sphere, oriented vertically, in the side view (Please refer to Appendix B
for a definition of NURBS). Artisan used to work only with NURBS surfaces. But in
version 2.0, it has been modified to also work with polygonal surfaces. I chose a NURBS
surface, because it appears much smoother, and is well suited for character animation.
You can either first create the whole sphere with the NURBS sphere creation tool, and
then change the sphere’s “End Sweep” attribute to 180 degrees, or you can first edit the
NURBS sphere creation tool itself and specify your preferences, before you click create.
In Maya, tools (just like objects) can be edited by double-clicking their icon, to invoke an
attribute editor, which allows you to set your preferences before creation, and can save
you some time especially when performing complex operations. Figure 4 shows the half
sphere in all views. I have applied a default color and made the sphere a little transparent
to be able to see the orthographic drawings through it.
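As a sketch, both creation approaches can also be performed from MEL, Maya’s scripting language. The object name “headHalf” and the history-node name below are illustrative assumptions, not the names used for Pepi:

```mel
// Create a whole NURBS sphere, then edit its construction-history
// node to obtain the half sphere (an "End Sweep" of 180 degrees).
// The history node's name depends on the scene; here it is assumed
// to be makeNurbSphere1:
sphere -name "headHalf";
setAttr "makeNurbSphere1.endSweep" 180;

// Equivalently, the preferences can be given up front:
// sphere -name "headHalf" -startSweep 0 -endSweep 180;
```

Editing the creation tool’s options before clicking Create corresponds to the second, commented form.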

Figure 4: A Half Sphere Is Created in the Side View.

The sphere will be shaped to become the head. It is in some sense the block of
clay out of which the head is sculpted. Artisan provides a “reflectivity” option, which
mirrors all actions performed on one side to the other. So you could work with a whole
sphere, and any modeling on one side will be reproduced on the other. However, in the
case of complex models, working on one half and mirroring the model at the end to
obtain the second half speeds up the work considerably.
I chose to model the entire head out of a single “skin”; however, there are many other alternatives. Patch modeling is one example. Ears, eyelids, nose, lips, and lower jaw can be modeled as separate entities, which in some cases makes the animation process much easier.
The sphere is then ‘carved’ or ‘reduced’ to match the general outline of the
orthographic drawings. This is shown in Figure 5. While you can use Artisan for this
process, I used control-vertex (CV) pulling, with the very handy 3D manipulators
for moving, rotating, and scaling. The reason was to keep the shape as consistent and
regular as possible, so that it is easier to work with later, when using Artisan for detail
work.

Figure 5: Sphere Reduced to Match the General Head Shape.

This process also involves inserting both horizontal and vertical isoparms, to
increase the resolution of the surface, in order to be able to match the curvatures of the
orthographic drawings (Please refer to Appendix B for a definition of isoparms). At this
stage though, the shape is still very simple and undefined. You are only trying to make
the block of clay closer to your shape and therefore easier to work with (removing the
excess clay).
When you are satisfied with your general shape, it is time to get started with
Artisan. Artisan evaluates and modifies your surface depending on its resolution. If you
are trying to model a very pronounced crease for example, you must make sure to insert
enough isoparms to allow for the complex surface contortions. Remember however, to try
to keep the number of isoparms you insert to the minimum possible. This is hard when
trying to achieve complex shapes such as an ear, but too high a resolution may slow performance considerably, depending on your machine.
The more detailed sculpting starts here. With the following figures, I will outline
the progression of the head until completion, and I will explain the main areas of each
step. Keep in mind that, as the model progresses, isoparms are inserted as needed.
As you can see in Figure 6, the nose starts to take shape, and the overall
silhouette of the head becomes more apparent.

Figure 6: The Nose and Overall Head Shape Become More Apparent.

Next, in Figure 7, the eyelids and eyebrow positions are accentuated. If you
compare with Figure 6, the lower jaw and chin area is also formed.

Figure 7: Eyelids, Eyebrow, and Lower Jaw Area Are Formed.

Then, in Figure 8, a nostril has been added, and the eyelids and eyebrow are
becoming more defined. You can also see the beginning of a cheek formation. The model
appears a little distorted, but it is still at an early stage. The distortions will be corrected
later on using Artisan’s smooth tool.

Figure 8: The Nostril Is Added, the Eye and Eyebrow Regions Are Defined.

Figure 9 shows a notch pushed into the eyelids’ region to obtain an eye socket.
The mouth is started by pushing a cavity into the mouth region. The model is still very
crude and needs a lot of work.

Figure 9: A Rough Eye Socket Is Formed, and the Mouth Cavity Is Started.

In Figure 10, you can see how the area near the bridge of the nose has been
accentuated compared with Figure 9. More detail is added to the eye socket and
eyebrow. The mouth cavity is closed by pulling together the top and bottom of the cavity,
thus forming upper and lower lips along the mouth line, which is shown by the front view
orthographic drawing. You can also see the beginning of a protrusion indicating the area
where the ear will be modeled.

Figure 10: Mouth Is Completed, Eye Area Is Enhanced, Ear Position Is Marked.

One main problem is deforming surfaces diagonally to their parameterization without producing a sort of jagged look. But Artisan allows you to restrict the sculpt tool’s
effects to the x, y, or z directions, allowing you to group together the isoparms on your
surface by pushing or pulling them closer together. In other words, as shown in Figure
11, the same isoparms you used for the mouth area can be used for the ear area. You
simply concentrate portions of the isoparms together in certain areas. The Smooth tool
also gets rid of the jagged effect, and can be thought of as a kind of 3D anti-aliasing tool.

Figure 11: Isoparms Constituting the Head Shape.

Figure 12 shows the end result with the ear completed, and any final adjustments
performed. Smoothing has been applied throughout the model. Fine details have been
added, such as ‘laugh lines’, the groove above the upper lip, a slight depression under the
cheek, a subtle bag under the eye, as well as a very subtle double chin.

Figure 12: The Completed Half of Pepi’s Head.

Once satisfied with the model, it is time to make a mirror copy to obtain the
second half, and attach both halves together seamlessly. This is a relatively easy task as
Maya provides the functions that perform such an operation, included in its Edit Surfaces
menu. After the mirroring process, in order to convey a more realistic look, some
asymmetric details are added to create natural irregularities.
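A hedged MEL sketch of this mirror-and-attach step follows; the surface names are illustrative, and the exact options used for Pepi are not recorded here:

```mel
// Duplicate the finished half and mirror it across the YZ plane
// by negating its X scale:
duplicate -name "headRight" headLeft;
setAttr "headRight.scaleX" -1;

// Attach the two halves into a single seamless surface
// (the same operation as Edit Surfaces > Attach Surfaces):
attachSurface -name "head" headLeft headRight;
```

The asymmetric detailing mentioned above would then be sculpted on the attached result.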
With Pepi’s head completed, the eyes, teeth, gums, and tongue, shown in Figure
13, are modeled as separate entities. The eyes and tongue are relatively easy to model.
However, for the teeth and gums, depending on how realistic you would like them to be,
you might find it useful to consult a human anatomy book [7]. Remember that there is no
need to model a complete set of teeth, nor is it necessary to provide them all with the
same level of detail. As far as Pepi is concerned, you will never see his back teeth. Only
the front teeth need to appear more realistic and detailed.

Figure 13: Pepi Transparent.

The eyes are simple spheres, and therefore straightforward. As for the teeth, one
set of teeth in one quadrant is modeled, then copied, mirrored, and modified accordingly
to obtain the remaining quadrants. Each tooth is started from a sphere, then shaped into
the various forms using CV-pulling. With the upper teeth in place as visual helpers, a
small vertical U-shaped curve is lofted along a larger horizontal U-shaped curve to create
the upper gums. The lower gums are created from the upper gums, once again using
mirroring. Artisan is then used to create the irregularities along the ridge of the gums,
producing the desired realistic effect that the teeth are firmly embedded in the gums.
Figure 14 illustrates this effect.

Figure 14: A Closer Look at Pepi’s Dentures.

I experimented with a lattice deformer for the tongue. A lattice deformer surrounds an object with a box structure of points that you can manipulate to indirectly
change the object’s shape. You can specify the desired number of divisions for the lattice
box. You can either apply transformations to the entire lattice structure, or to its
individual points. Any transformation applied to the lattice affects the target object,
which in the case of Pepi’s tongue, was again a NURBS sphere. Figure 15 shows the
lattice deformer applied to the sphere, and the final result obtained by manipulating the
lattice itself. One advantage of lattice deformers is that they are very simple to visualize,
and therefore allow you to very quickly obtain the desired result, especially with simple
shapes.
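A minimal MEL sketch of this technique, assuming illustrative names and division counts (not Pepi’s actual values):

```mel
// Deform a NURBS sphere into a tongue-like shape with a lattice.
sphere -name "tongue";
select -r tongue;

// Create a 4x3x6 lattice around the selected object; the command
// builds the ffd deformer, the lattice, and its base.
lattice -divisions 4 3 6 -objectCentered true;

// Moving lattice points reshapes the underlying sphere; the lattice
// shape node is assumed here to be named ffd1Lattice:
select -r ffd1Lattice.pt[0][2][5];
move -r 0 -0.2 0.3;
```

Because the lattice has far fewer points than the surface has CVs, the overall shape can be blocked in very quickly.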
Again, I would make the same remark for Pepi’s tongue as for his teeth: unless your animation will include a close-up of your character’s mouth, only the most visible part of the tongue needs to be as accurate as possible.

Figure 15: Pepi’s tongue.

After modeling all the components, it is a good idea to group the ones that move
as one unit. Maya provides such group nodes, which are simply containers for other
nodes. For example, all lower teeth can be grouped together. Then, the lower teeth, lower
gums, and the tongue can be grouped together as “lower jaw”. The upper teeth and upper
gums can be put in a separate group called “upper jaw”. Grouping is not only for ease of organization, but also because later on, when we need to apply transformations to
these components, it will make much more sense to apply the transformation to a group,
rather than separately to each component.
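The grouping described above can be sketched in MEL; all object names here are illustrative assumptions:

```mel
// Group the moving parts so a transformation can be applied once,
// to the group node, instead of to each component separately.
group -name "lowerTeeth" lowerTooth1 lowerTooth2 lowerTooth3;
group -name "lowerJaw" lowerTeeth lowerGums tongue;
group -name "upperJaw" upperTeeth upperGums;

// Rotating the group now rotates every member with it:
rotate -r 10 0 0 lowerJaw;
```

This mirrors what dragging nodes onto one another in the Outliner accomplishes interactively.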
The modeling phase is now complete. The three-dimensional Pepi, displayed in
Figure 16, although manifestly lacking texture, is faithful to his two-dimensional concept
drawing. A simple temporary black and white texture has been applied to the eyes, for
visualization purposes for the next section.

Figure 16: The Completed Model of Pepi’s Head.

2.3. Setting up the Animation Mechanism:

In this section, I will explain the steps that were necessary to prepare Pepi for
animation. After experimenting with animating facial features using joints and skeletons
for various motions, I decided to implement the mechanism based on a real head. While it is possible to animate facial expressions using joints, this technique would make more
sense for a character whose facial components have been modeled as separate entities.
In a real head, only the neck and the lower jaw’s motions are performed by the
skeletal structure. All other facial movements and expressions are triggered by muscles.
Therefore, I used Maya’s ‘blend shape’ deformers [14] to animate them. This technique
is equivalent to ‘morphing’, and I will demonstrate it in section 2.3.5. Using blend shape
deformers is most convenient for a character such as Pepi, whose entire face skin is
modeled as one unit.
I will begin with the mechanism that allows you to control eyeball movements. I
will then cover the implementation of the neck and lower-jaw skeleton. After
providing Pepi with the ability to open his mouth, I will demonstrate the process of
creating the blend shape deformers.

2.3.1. Controlling the Eyes:

Figure 17: Each Eyeball Always Looks in the Direction of Its Locator.

A mechanism is needed to control the direction in which the eyes are looking. To
do this, a locator node [4] is created in front of each eyeball, to restrict the direction the
eyeball is looking at by means of a constraint mechanism. Maya provides various kinds
of constrain objects [14] such as aim, point, orient, scale, geometry, tangent, etc., whose
manipulation affects their target objects. For each eyeball and its locator, an aim
constraint is used, such that the eyeball is looking straight ahead at its locator. Both
locators for the left and right eyeballs are then grouped into a single selection handle in
the center, in order to not only be easily selectable, but also have both eyes always look
in the same direction. This process is illustrated in Figure 17.
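This setup can be sketched in MEL as follows; the locator positions and all object names are illustrative assumptions:

```mel
// Create a locator in front of each eyeball:
spaceLocator -name "leftEyeTarget" -position -1 0 5;
spaceLocator -name "rightEyeTarget" -position 1 0 5;

// Aim-constrain each eyeball to its locator, so the eyeball
// always points toward wherever its locator moves:
aimConstraint -aimVector 0 0 1 leftEyeTarget leftEyeball;
aimConstraint -aimVector 0 0 1 rightEyeTarget rightEyeball;

// Group both locators so a single handle drives both eyes together:
group -name "eyesTarget" leftEyeTarget rightEyeTarget;
```

Moving the group then steers both eyes in unison, while moving one locator alone could cross or diverge the eyes if desired.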

2.3.2. The Internal Structure - Joints for the Neck and Lower Jaw:

I have used skeleton joints to simulate Pepi’s cranial structure, shown in Figure
18. Maya provides skeleton creation tools to construct hierarchical, articulated structures
made of joints and bones [4, 14].

Figure 18: Pepi’s Cranial Skeleton Joints.

Once the skeleton is in place, you must bind the skin to the joints. This process is
called skinning [14]. There are two direct skinning methods available in Maya: smooth
skinning and rigid skinning. Smooth skinning allows several joints to influence the same
set of control vertices on a deformable surface, and would be appropriate for animating a
snake’s wavy body for example. In other words, the deformation effect is averaged
between all joints, each set of CVs being influenced the most by the joints that are closest
to it. With rigid skinning, on the other hand, one joint influences one set of CVs, and
would be appropriate for animating a human elbow for example. Figure 19 shows the
effects of both skinning techniques.

Figure 19: Smooth Skinning Versus Rigid Skinning.

Since the human skull is a rigid body and we do not want Pepi’s entire head to
deform every time he moves his neck, it makes sense to use rigid skinning. Referring
back to Figure 18, I will now describe each joint and its function.
Joint 1 is the root. It influences Pepi’s entire head, and therefore keeps his head in
place. If moved, the whole head moves with it. Joint 2 is the neck joint, and is used to
rotate the head up and down, tilt it left and right, or turn it left and right. Joint 3 is a
dummy joint, meaning that no transformations are ever applied to it directly. One major
role of joint 3 however is that it is a ‘parent’ node to the eyes, the upper jaw and the
lower jaw. The reason for parenting these components to joint 3 is that we want them to
‘inherit’ the attribute changes made to joint 3. In other words, when a transformation is
applied to joint 2, joint 3 and therefore its ‘children’ indirectly inherit those
transformations. We would not want Pepi to leave his eyes and teeth behind every time
he moves his head.
Joints 4 and 5 are used to control Pepi’s lower jaw. Joint 4 is the actual lower jaw
rotator or “mouth opener”. Joint 5, similarly to joint 3, is a dummy joint. Joint 5’s role is
most important, because its position and orientation determine the motion of the lower
jaw. In the previous section (2.3.1.) we created an aim constraint for the eyeballs. Here,
for the lower jaw and joint 5, we will create two constraints (with concurrent effect), a
point constraint and an orient constraint, such that the lower jaw’s position and
orientation are always the same as joint 5’s. As joint 4 rotates, it affects joint 5, and
therefore the lower jaw components.
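A MEL sketch of this parenting-and-constraint arrangement follows. The joint names match the numbering of Figure 18 only by assumption, and the group names are illustrative:

```mel
// Parent the eyes and jaw groups under the dummy head joint (joint 3,
// "headJoint") so they inherit all head motion:
parent eyesGroup upperJaw lowerJaw headJoint;

// Constrain the lower-jaw group's position and orientation to the
// dummy jaw joint (joint 5), with concurrent effect:
pointConstraint joint5 lowerJaw;
orientConstraint joint5 lowerJaw;

// Rotating the jaw joint (joint 4) now swings the lower jaw:
rotate -r 15 0 0 joint4;
```

The point and orient constraints together keep the jaw group locked to joint 5’s position and orientation at every frame.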
Before continuing with this phase, I will briefly describe the process of creating
parent-child relationships between objects. Maya provides various editors to help you
visualize your scene, and set up relationships between your objects. The two most
common are the Outliner (Figure 20) and the Hypergraph (Figure 21), which provide
interfaces for examining and manipulating your scene’s dependency graph (Section
1.2.1.).

Figure 20: The Outliner. Figure 21: The Hypergraph.

The Outliner displays the scene hierarchy as a directed acyclic graph. Each parent
node (such as “head”) is originally collapsed, and can be expanded by clicking the little
sign icon to its left, in order to reveal its children. (‘+’ means collapsed, ‘-’ means
expanded). To parent an object to another, as we previously did with joint 3 (“headJoint”
in figures) and Pepi’s internal components, you simply drag the future child node onto
the future parent node. The Outliner also allows you to select and rename objects, as well
as reorder them. Reordering an object, besides modifying its position in the graph for
organizational purposes, affects Maya’s evaluation order for the object, as it evaluates
them as listed from top to bottom [13].
The Hypergraph is similar to the Outliner, but has many more features and visual
aids for working with scene components. It is therefore more appropriate for scenes with
complex object interdependencies. For instance, you can display color-coded lines in the
scene hierarchy that illustrate nodes connected by a MEL expression, a constraint, or a deformer. Each node is represented in a box, and can be positioned anywhere inside the
Hypergraph. This gives you more customizing power, by allowing you to create a free-form graph. I did not have any real use for the Hypergraph for my project. The operations
allowed by the Outliner were enough. I only used the Hypergraph to select and delete
some unnecessary connections and nodes, to “clean up” the scene.
Back to our animation mechanism, at this point in development, as joint 5 moves,
the lower jaw elements move appropriately, however, the mouth does not actually open
yet. One more child of joint 5 needs to be specified, and that is the set of CVs making up
the lower jaw skin. This process needs a bit of explanation and the introduction of
another amazing feature of Maya: The Paint Selection tool.
While the task of selecting CVs near a region such as the mouth could be a very
frustrating and time-consuming operation, Maya’s Paint selection tool makes it trivial.
The goal is to select the CVs of the lower jaw in order to pull them down for the mouth to
open. The problem is that, because the character has been modeled with the mouth shut, you will have an extremely hard time discerning whether a CV that you just picked belongs to the upper or lower lip.
The paint selection tool is part of the Artisan module, and simplifies any selection
operation, by allowing you to directly “paint” on your surface the area containing the
CVs you want to select. With other programs, every time you select a group of CVs, you
must make sure that the ones on the back face do not also get accidentally selected. With
Maya’s Paint selection tool, you no longer have to worry about that. Only those CVs on
the surface areas you paint are selected. Being part of the Artisan module also means that,
different shapes of brushes, as well as handy functions such as reflectivity, are still
available with the Paint selection tool. You are now able to perform complex selection
operations precisely, intuitively, and directly in the perspective view.
Back to our problem of opening Pepi’s mouth, we now need to use a ‘cluster’
node, which is illustrated in Figure 22 below.

Figure 22: Pepi’s Lower Jaw’s Cluster of CVs.

After selecting the lower jaw’s CVs using the Paint Selection tool, the CVs are grouped into a cluster for ease of manipulation. The cluster handle appears in this figure and is represented by a “C”. You can then select the set of CVs simply by selecting the cluster handle.

The cluster is then constrained to the dummy joint 5. This means that when joint 4
is rotated, joint 5, and therefore the lower jaw’s cluster of CVs follows the rotation, and
Pepi’s mouth opens.
We are now faced with a new problem: the skin is rigid, and the jaw opening is
abrupt and unnatural. This is because all the CVs in the lower jaw cluster move at the
same speed. They are pulled away with the same force. We need some way of adding
elasticity to the movement. Each CV, as a member of the cluster, has been assigned a
weight. This weight is the percentage of the cluster’s movement that the member inherits.
By default, all CVs in the cluster have the same maximum weight of 1, which is why they
are all moving away at the same speed. By assigning gradually lower weights to those
CVs near and around the cheeks and mouth, we can attenuate the force or speed used to
pull them down, thus creating a smoother motion.
As you might already expect, it would be very inefficient to hand pick and assign
a weight to each CV one by one. Here, then, is yet another fantastic tool provided by
Maya: the Paint Weights tool, a third feature of the Artisan module. Figure 23 shows a
grayscale visualization of the weights assigned to the CVs of the lower jaw cluster. The
red circular marker is the influence region of the smooth brush.

Figure 23: The Paint Weights Tool Feedback.

The Paint Weights tool allows you to interactively paint, in grayscale, directly on the surface map, the weight values for each CV. ‘White’ means a maximum weight of 1 (the CV will inherit all of the movement of its cluster). ‘Black’ means a minimum weight of 0 (the CV will inherit none of the movement of its cluster).

Note that an alternative to using the Paint Weights tool is to edit the lower jaw cluster inside Maya’s component editor, and manually enter (or load from a file) the weight value for each CV. Although it might be quite lengthy, this alternative is
appropriate in cases where exact weight values are needed.
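For exact values, the weights can also be set from MEL with the percent command. The cluster name, surface name, and CV indices below are illustrative assumptions:

```mel
// Assign decreasing cluster weights to CVs approaching the mouth
// corners, so the skin stretches instead of breaking:
percent -value 1.0 lowerJawCluster head.cv[0][10];
percent -value 0.5 lowerJawCluster head.cv[2][10];
percent -value 0.1 lowerJawCluster head.cv[4][10];
```

This is the scripted equivalent of painting a white-to-black gradient with the Paint Weights tool.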

Figure 24 displays the lower jaw’s motion before and after applying proper
weights to the cluster’s CVs. Without appropriate CV weights (a), Pepi’s jaw seems to
break like a hard material, whereas with weights (b), Pepi’s skin stretches naturally when
he opens his mouth.

Figure 24: Before (a) and After (b) Assigning Weights to the Lower Jaw CVs.

2.3.3. A Lattice Flexor for the Neck:

As with the jaw, we need a mechanism to smooth the various tilting and turning motions of the neck. Maya provides a special kind of deformer called a “flexor” [4, 14],
which are designed to work with rigid skinning. They are bound into a joint hierarchy
while being applied to the associated rigid skin, in order to enhance the effects. A lattice
flexor is used in conjunction with joint 2, Pepi’s neck joint. Figure 25 shows the lattice
flexor when the neck joint is in its initial and rotated states.

Figure 25: A Neck Flexor for Pepi.

As illustrated in Figure 26, without the flexor (a), when you rotate Pepi’s neck
joint, the movement is abrupt and the skin does not stretch naturally. With the flexor
however (b), a much smoother and subtler effect is obtained.

Figure 26: Pepi’s Neck, Before and After Applying a Lattice Flexor.

Note that the results of using a flexor here might not appear to be very significant,
because we have only modeled the head. However, if we had a complete torso, then it
would be easier to fully appreciate the necessity and importance of flexors.

2.3.4. A Short Recapitulation:

Before continuing to the next major phase of creating blend shape deformers, here
is a graphical recapitulation of all the animation mechanisms that I have covered up to
this point. Figure 27 below shows Pepi in his initial state then with his eyes and jaw
animated.

Figure 27: Pepi in His Initial and Animated States.

2.3.5. Facial Expressions and Speech with Blend Shape Deformers:

In the previous sections, Pepi’s fundamental animation controls were implemented. We are now ready to proceed with an essential step in his evolution
towards becoming “real”: enabling him to talk and express emotions. The goal for this
stage is to model different facial poses for Pepi, which will be then incorporated as
attributes of his head shape using blend shape deformers.
The first step is then to model all of the different facial poses. We are recreating
or simulating the mechanism of various facial muscles. Each pose is obtained by starting
with a duplicate copy of Pepi’s head shape, and transforming it until the desired pose is reached, or in a sense, the particular facial muscles are fully flexed. Besides lattice and
cluster deformers, which I have already introduced, Maya provides a variety of other
deformers for such a task. However, after experimenting with them, and although they
have considerable usefulness, I came back to Artisan.
Even so, familiarizing myself with these deformation tools was very important. Each is a
completely unique concept, and allows for very distinct effects. A wire deformer [14], for
instance, enables you to deform objects using one or more NURBS curves. In Maya, any
surface can be made “live” [13], meaning that it is possible to draw a freeform 3D curve
directly on any live surface. This feature is very useful for creating wire deformers from
these surface curves. When a wire deformer is applied to a deformable surface, a base
wire, together with an influence wire, is created. Any manipulation of the influence
wire’s shape relative to its base wire affects the corresponding surface. This allows for
very subtle surface deformations. Another example is the sculpt deformer [14], where the
influence object is a sphere. Depending on the mode, as you move the sphere closer to
the deformable surface, the sphere either attracts or repels the nearby surface region,
creating a smooth, rounded deformation. The sphere can be thought of as a positively or
negatively charged particle. You may have heard of this concept referred to as a
“metaball”.
The above techniques are only two of the many available. As far as I am
concerned, for the purposes of a project such as this one, I would only use these
techniques if Artisan were not available. Artisan is a very high-level and intuitive tool. It
provides a 3D artist with great freedom. With this in mind, I will then proceed with the
implementation of this stage.
Blend shape deformers are very powerful. They allow you to morph NURBS or
polygonal objects. In character animation, they are most appropriate for setting up the
character’s facial poses. From Pepi’s base head, I have used Artisan to model a total of
twenty different poses that I wanted Pepi to be able to express. After creating a blend
shape deformer for a pose, Maya automatically computes the complete range of in-between shapes necessary for morphing from the base head to that pose. As you create a
blend shape deformer, Maya instantly creates a slider control in its blend shape editor,
which lets you set the value for your blend shape deformer. Thus, as I simply drag a
slider, Pepi’s head updates from its initial state to the corresponding pose, seamlessly
going through all the intermediate shapes interpolated by Maya. Recall that, as I
explained in section 1.5., Maya allows for cumulative deformations. In other words,
besides the twenty original expressions, by using appropriate combinations of poses, I
can obtain almost any desired facial expression.
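The blend shape setup and its slider-driven mixing can be sketched in MEL. The pose and head names are illustrative, and only three of the twenty targets are shown:

```mel
// Register sculpted pose heads as blend shape targets of the base
// head; each target gets a 0-to-1 slider in the Blend Shape editor.
blendShape -name "pepiPoses" blinkPose smilePose frownPose pepiHead;

// The same weights can be set from MEL. Deformations are cumulative,
// so poses can be mixed (target indices follow creation order):
blendShape -edit -weight 0 1.0 pepiPoses;   // eyes fully shut
blendShape -edit -weight 1 0.3 pepiPoses;   // a slight smile
```

Animating these weight values over time is what later drives Pepi’s expressions and speech.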
Keep in mind that each pose is a “maximum” deformation. By this I mean, that
the “frown” pose for instance, represents Pepi at his maximum frowning state, or when
his “frowning” muscles are fully flexed. I have slightly exaggerated the expressions on
some of Pepi’s poses. This allows me to obtain a greater range of expressions, as well as
some unusually distorted but potentially interesting results. With this said, I will now go
over each of Pepi’s poses.

Figure 28 shows Pepi with his eyes completely shut. Only the upper eyelids are
transformed.

Figure 28: Pepi Blinking.

This pose may seem straightforward. Only the upper eyelids are involved in the process of blinking. When opened, the upper eyelid skin folds beneath and into the top part of the eye socket, leaving a characteristic crease. When closed, the upper eyelid skin smoothly covers the eyeball, and the crease more or less disappears. This proper stretching of the eyelid skin is very important in conveying a realistic eyelid motion, and is obtained by properly spacing the horizontal isoparms that make up the eyelid.

Figure 29 shows Pepi squinting. This expression involves transforming a wider region of the surface near and around the eyes to simulate proper muscle movements.

Figure 29: Pepi Squinting.

The eyes are partly closed. A very subtle frowning occurs. The muscles under the lower eyelids are also contracted. The skin surrounding the eyes’ region is somewhat pulled closer to simulate contraction in the forehead and cheek muscles. The isoparms of the contracted surface are manipulated to recreate a shortening and thickening of the affected muscles.

Figure 30 shows a surprised Pepi. The expression of surprise has been very
slightly exaggerated to make it more interesting.

Figure 30: Pepi Surprised.

The forehead muscles are performing this action. The eyebrows are lifted up, dragged from their center to form the characteristic upside-down ‘V’. The eyes are wide open. The forehead is transformed to convey the tension of the eyebrows, and the contraction of the forehead muscles. The area near the bridge of the nose is elongated and thinned a little, to simulate muscle tension.

Figure 31 shows a happy Pepi. Here also, the expression has been relatively
exaggerated. This is a rather big smile for a character who rarely smiles.

Figure 31: Pepi Smiling.

The cheek muscles contract, and the sides of the mouth stretch away and upward. The
skin volume in the lower jaw is in a sense transferred up to the cheeks, such that the
cheeks become plumper. Keep in mind that when you smile, your eyes have a
tendency to squint a little.

Figure 32 shows a sad Pepi.

Figure 32: Pepi Sad.

The sides of the mouth are slightly pulled downward. More importantly, the two
eyebrows’ proximal areas are pulled closer together, and a little upward, creating some
wrinkling at the center of the forehead. The bridge of the nose is a little stretched and
thinned, to simulate the muscle tension needed to raise the eyebrows in this manner.

Figure 33 shows an angry Pepi. This pose has been quite exaggerated. Pepi is
snarling like a mad dog, very unlike his usual mild self. This is Pepi’s most dramatic
expression. It involves a lot of muscle movements.

Figure 33: Pepi Angry.

Skin volume is concentrated towards the area between the eyes, creating pronounced
wrinkles on the bridge of the nose and forehead. The creases on both sides of the nose
(the laugh lines) are pulled up and accentuated towards the top. The contraction of the
cheek muscles pulls the right and left extremities of the mouth up and closer together,
shaping the upper lip into an ‘M’.

Figure 34 shows Pepi with his mouth slightly open. This is Pepi’s most subtle
pose, but it is also used extensively.

Figure 34: Pepi with Open Mouth.

When our face is in a normal, calm state, our mouth is sometimes open and sometimes
closed. This pose is unlike the other ones because it does not involve any muscle
movement. It is just like Pepi’s initial state, except that his mouth is slightly open in a
relaxed position.

Figure 35 shows Pepi blowing air. This pose is to be used for more specific
actions (whistling, putting out a candle, exhaling after being exhausted…).

Figure 35: Pepi Blowing Air.

The cheeks are inflated with air, and become voluminous. As a result of the skin’s
tension, the laugh lines are attenuated. Also, inflating the cheeks pushes the overall
mouth and nose area forward. Here also, the pose has been quite exaggerated, and
even though it is rarely used, I simply find it funny, and it can be used in various
combinations with other poses to obtain great grimaces.

Figure 36 shows Pepi frowning. This is the first of three other poses, which have
been implemented for left and right sides separately, to allow for more control and
flexibility over Pepi’s expressions. They are all somewhat exaggerated.

Figure 36:
Left and Right Frown.

The eyebrow’s proximal region is pulled down and more towards the center, while the
distal region is pulled up a bit. As a result of the contraction, heavy wrinkles are
formed in the middle of the forehead.

Figure 37 shows Pepi smirking.

Figure 37:
Left and Right Smirk.

The cheek muscle is contracted, pulling the side of the mouth up. As a result of the
contraction, the cheek becomes plumper. This expression is close to smiling, but it is a
“forced” smile. The element of sincerity, normally conveyed by the eyes, has
disappeared.

Figure 38 shows Pepi sneering.

Figure 38:
Left and Right Sneer.

This pose is just like the previous one, except that the side of the mouth opens a little,
as if Pepi were grumbling something.

So far, the poses I have covered are for Pepi to express emotions. The following
six poses are Pepi’s basic speech poses. After some research, I narrowed them down to
these few fundamental ones.
Figure 39 shows Pepi pronouncing the phoneme ‘AH’. This pose also illustrates
the maximum opening of Pepi’s lower jaw.

Figure 39: Pepi Saying ‘AH’.

The lower jaw is opened to its maximum. The lips are very tense. Most of the mouth’s
stretching is vertical. The corners of the lips do extend slightly. The overall shape of
the lips is hexagonal. The laugh lines are accentuated. The groove above the upper lip
almost disappears as a result of the mouth’s tension.

Figure 40 shows Pepi pronouncing the phoneme ‘OH’.

Figure 40: Pepi Saying ‘OH’.

The lower jaw is opened halfway. The mouth protrudes to form an ‘O’. The lips
pucker and thrust forward. The skin is compacted near the chin area and upward.
Overall, the cheeks come slightly forward. The bottom ends of the laugh lines are
erased, and new, subtler creases are formed around the mouth, flowing in a direction
that follows the lips’ protrusion.

Figure 41 shows Pepi pronouncing the phoneme ‘UH’.

Figure 41: Pepi Saying ‘UH’.

This pose is similar to the previous one as far as muscle tension is concerned, but
there is almost no mouth opening. The same protrusion of the mouth occurs, but the
lower jaw is opened only very little. The lips are plumper and more puckered. Pepi is
pursing his lips.

Figure 42 shows Pepi pronouncing the phoneme ‘EH’.

Figure 42: Pepi Saying ‘EH’.

The lower jaw is opened halfway. The mouth opening is narrow. The lips extend at
the corners, to form a vertically squashed hexagon or oval. The overall area of the
mouth and cheeks becomes rounder. The cheeks contract and become chubbier. The
groove above the upper lip almost disappears from the stretching. (This pose has been
a little exaggerated.)

Figure 43 shows Pepi pronouncing the phoneme ‘FH’.

Figure 43: Pepi Saying ‘FH’.

The lower lip folds inward (touching the upper teeth). There is a very slight, almost
non-existent, mouth opening. The small creases on both corners of the mouth are
accentuated. There is a slight cheek contraction, which pulls the corners of the mouth
up and back a little.

Figure 44 shows Pepi pronouncing the phoneme ‘MH’.

Figure 44: Pepi Saying ‘MH’.

This pose has some similarities with the previous one. Both lips together fold inward
this time. The small creases on both corners of the mouth are again accentuated. Here
too, the corners of the mouth are slightly pulled up and back. But unlike the previous
pose, the area immediately under the lower lip swells a little, suggesting the contour
of the lower teeth beneath the skin.

Consulting books on phonetics, pronunciation, or lip-reading can be very helpful
in designing facial poses [8, 20]. This simplified my planning phase considerably,
especially with categorizing phonemes for implementing speech poses.
The first four speech poses are for vowel sounds, and are fundamental to achieve
a whole range of other sounds. The ‘FH’ pose is used for both ‘f’ and ‘v’ sounds. The
‘MH’ pose can be used for ‘m’, ‘b’, or ‘p’ sounds. A large number of consonant sounds
do not require any lip movement, and can be therefore obtained very simply by using
Pepi’s initial (normal) pose. For some of these sounds, the mouth is in a relaxed, natural,
slightly opened state. For example, the sounds: ‘d’ (as in “bad”), ‘l’ (as in “long”), ‘n’ (as
in “fun”), ‘t’ (as in “pet”), ‘nd’ (as in “wind”), ‘nt’ (as in “flint”), and ‘th’ (as in “this”),
are formed by the tongue, without lip movement. There are also throat sounds, such as ‘k’
(as in “bike”), the hard ‘c’ (as in “car”), the hard ’g’ (as in “gorilla”), and ‘ng’ (as in
“strong”), which do not require any lip movement. ‘h’ sounds (as in “hotel”) involve an
expiration of breath. Sounds with ‘r’ (as in “row”), ‘s’ (as in “salt”), ‘sh’ (as in “wish”),
‘ch’ (as in “child”), ‘g’ (as in “giraffe”), and ‘j’ (as in “jelly”) require a combination
of tongue movement and expiration of breath [8, 20].
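The sound-to-pose grouping described above can be summarized as a simple lookup table. The following Python sketch is only an illustration of that grouping, not part of the thesis implementation; the pose names are Pepi’s, while ‘NORMAL’ (standing in for the initial relaxed pose), the helper function, and the default for unlisted sounds are my own assumptions:

```python
# Map sounds to Pepi's speech poses, following the grouping above.
# "NORMAL" is a stand-in for the initial (relaxed) pose used by the
# many sounds that need no lip movement.
SOUND_TO_POSE = {
    "ah": "AH", "oh": "OH", "uh": "UH", "eh": "EH",
    "f": "FH", "v": "FH",
    "m": "MH", "b": "MH", "p": "MH",
    # tongue sounds: no lip movement
    "d": "NORMAL", "l": "NORMAL", "n": "NORMAL", "t": "NORMAL", "th": "NORMAL",
    # throat sounds and breath sounds: no lip movement
    "k": "NORMAL", "g": "NORMAL", "ng": "NORMAL", "h": "NORMAL",
}

def poses_for(sounds):
    """Turn a phoneme sequence into the pose sequence to key.

    Sounds not listed above (e.g. 's', 'sh', 'ch') default to the
    relaxed pose, which simplifies the tongue/breath combinations."""
    return [SOUND_TO_POSE.get(s, "NORMAL") for s in sounds]

print(poses_for(["b", "ah", "d"]))   # ['MH', 'AH', 'NORMAL']
```

Driving the speech poses then reduces to walking such a pose sequence and keying the corresponding blend shape attributes.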
The poses provided for Pepi are to some degree unique to him. They are a
mixture of traits universally shared by all humans and traits unique to the individual.
In explaining the mechanism used to model these poses, I am not implying that they would
work for any character. These poses are merely the result of examining my own face in a
mirror, and trying to apply what I see to the character’s model.

With the poses modeled, we are now ready to create a blend shape deformer,
which will allow us to morph Pepi. All of the poses are selected first as target shapes. The
initial head is selected last as the base shape. Now, we simply invoke Maya’s ‘Create
Blend Shape’ tool from the ‘Deform’ menu, at which point Maya gives you the choice of
either deleting the poses if you think they are final, or keeping them if you think that you
will need to modify them later. Regardless, the poses are now part of the initial head
shape, as keyable attributes. And this marks the end of this stage.
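Conceptually, a blend shape deformer computes each vertex as its base position plus a weighted sum of the targets’ offsets from the base, which is why combined poses accumulate. The following Python sketch illustrates that idea; the vertex data and pose names are invented for illustration, and this is not Maya’s internal implementation:

```python
def blend(base, targets, weights):
    """Blend-shape morph: each result vertex is the base position plus
    the weighted offset (target - base) of every target pose."""
    result = []
    for i, (bx, by, bz) in enumerate(base):
        x, y, z = bx, by, bz
        for name, verts in targets.items():
            w = weights.get(name, 0.0)
            tx, ty, tz = verts[i]
            x += w * (tx - bx)
            y += w * (ty - by)
            z += w * (tz - bz)
        result.append((x, y, z))
    return result

# A single vertex and two hypothetical target poses.
base = [(0.0, 0.0, 0.0)]
targets = {"SMILE": [(1.0, 0.0, 0.0)], "BLINK": [(0.0, 2.0, 0.0)]}

print(blend(base, targets, {"SMILE": 0.5}))                # halfway to SMILE
print(blend(base, targets, {"SMILE": 1.0, "BLINK": 0.5}))  # poses combine
```

Setting a pose’s keyable attribute to a value between 0 and 1 corresponds to choosing its weight in this sum.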
For the purposes of this project, we have now implemented all of the mechanisms
needed to animate Pepi. However, we cannot start animating him yet. We can open
Pepi’s mouth using the blend shape deformers, and we can rotate Pepi’s lower jaw
components using Pepi’s joint structure, but the two are unrelated. We still need to define
a relationship between them, in order to synchronize their motion, which is the topic of
the next section.

2.4. Adding Functionality with MEL:

In the previous section, we implemented the motions for each of Pepi’s head’s
components. We did create some simple relationships by parenting some components to
the skeleton, in order to have all components move together as a unit. However, we now
have to deal with a “nested” transformation, one within the unit. As Pepi opens his
mouth, the lower jaw components need to follow the movement. In this section, I will
explain how to use Maya’s scripting language MEL to write a simple expression, which
will define proper relationships between the lower jaw, and those blend shape deformers
that involve opening the mouth.
In this section and the next one, I will try to introduce the use and syntax of MEL
gradually, starting with simple keyword commands, followed by one-line statements and
expressions, then a short script, and finally a complete program.
As I explained in section 1.2.2., MEL is Maya’s embedded scripting language.
You can use it to bypass the Maya graphical user interface entirely, and enter MEL
commands through the command line or the script editor. The command line only allows
for one line of code at a time. Pressing the “Enter” key will compile and, provided there
are no errors, execute that line of code. The script editor, however, allows you to write
and compile entire programs. It has two panes: the bottom pane is a text editor where you
can compose or paste in MEL scripts, and the top pane is a read-only text window where
any execution results or script compilation errors are displayed.

Here are some simple examples of MEL commands entered at the command line:

cone <ENTER>  will create a default NURBS cone named nurbsCone1

rename "nurbsCone1" myCone <ENTER>  will change its name to myCone.

When an object is created, it is by default selected. So, for example:

rotate 0 0 90 <ENTER>  will rotate myCone 90 degrees along the z-axis

move 5 0 0 <ENTER>  will translate myCone 5 units along the x-axis.
Semicolons can be used to separate several commands entered on one line, and
command flags can be used to set options such as an object’s name. For instance, the
following line of code will create a NURBS sphere called ‘sliderButton’, translate it 6
units along the z-axis, and scale it by 4 along the y-axis. Flag names can be abbreviated,
so here, we could write -n instead of -name.

sphere -name sliderButton; move 0 0 6; scale 1 4 1 <ENTER>

Now, one way we could link these two objects together is, for example, to have
myCone’s scaleY attribute change as sliderButton’s translateX attribute varies. We would
then enter the following code at the command line:

myCone.scaleY = sliderButton.translateX <ENTER>

So, as we drag sliderButton along the positive x-axis, myCone automatically
stretches along the y-axis.

For Pepi then, we need to implement a mechanism to link the opening of the
mouth to the motion of the lower jaw. We do not want to have to deal with the lower
jaw’s operation. It should be automatic, every time the “event” of Pepi opening his mouth
occurs. This means that Maya must constantly check for this event. This is done in Maya
through “expressions” [16]. Expressions are MEL instructions that Maya can re-evaluate
continuously as time changes. They can be used to link attributes
between different objects, so that a change in one attribute alters the behavior of the other.
For example, an expression can be used to link the rotation of a tire to the forward or
backward movement of a car.
The blend shape deformer created in the previous section (2.3.5.) has, as its
attributes, three poses that involve Pepi opening his mouth. These are the ‘AH’, the ‘OH’,
and obviously the ‘OPENMOUTH’ attributes. A MEL expression, which I have called
“lowerjawRotatorEXP”, links the rotateZ attribute of the lower jaw joint (this is joint 4 in
Figure 18) to the values of these three poses.
There are two ways to create an expression. One is to enter your code in Maya’s
expression editor, as shown in Figure 45.

Figure 45: Expression Editor with “lowerjawRotatorEXP”.

The second possibility is to enter the following code at the command prompt or in
the script editor:

expression -n lowerjawRotatorEXP -o lowerjawRotator
-s "lowerjawRotator.rotateZ = (blendShape.AH * -20) +
(blendShape.OPENMOUTH * -4) + (blendShape.OH * -6)" <ENTER>

The abbreviations are -n for -name, -o for -object, and -s for -script. The values
of blendShape’s attributes range from 0 (no effect) to 1 (maximum effect). Therefore, we
multiply each value by the amount (in degrees) that we want the lower jaw to rotate along
the z-axis (y is ‘up’). Finally, we add the values together: since the effects of blend
shape deformers combine cumulatively, their contributions to rotateZ should likewise be additive.
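The arithmetic performed by lowerjawRotatorEXP is just such a weighted sum of pose values. The following Python sketch reproduces the same computation outside of Maya; the gain values are the ones in the expression above, while the function and variable names are mine, for illustration:

```python
# Per-pose jaw gains, in degrees of z-rotation at full pose weight,
# taken from the lowerjawRotatorEXP expression.
JAW_GAIN = {"AH": -20.0, "OPENMOUTH": -4.0, "OH": -6.0}

def lower_jaw_rotate_z(weights):
    """Blend weights (each 0..1) -> cumulative z-rotation of the lower
    jaw joint; contributions are added, because the blend shape
    deformations themselves combine additively."""
    return sum(JAW_GAIN[pose] * weights.get(pose, 0.0) for pose in JAW_GAIN)

print(lower_jaw_rotate_z({"AH": 1.0}))             # -20.0
print(lower_jaw_rotate_z({"AH": 0.5, "OH": 0.5}))  # -13.0
```

A fully open ‘AH’ thus rotates the jaw joint by -20 degrees, and partial poses contribute proportionally.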
Pepi can still be animated without writing the above expression; however, this
would make the animation process needlessly complicated. It would also be a very weak
use of Maya. Maya’s power partly resides in its ability to let you automate as much of
the animation mechanism as possible, so that once you start the actual animation process,
you are operating at a very high level, without worrying about internal implementation
details. With this motivation in mind, I will now proceed into the next major step in this
project: Creating a custom interface.

2.5. Adding a Custom Interface with MEL:

In the previous section, we finalized Pepi’s animation mechanism. Although we
could immediately start animating him, it is a good idea to consider creating custom
procedures that automatically perform series of repetitive actions, and even a custom
interface window from which to invoke these procedures. This step may seem unnecessary, but
it will in fact greatly simplify and speed up the animation process. Also, creating your own
interface by no means implies any shortcoming in Maya. On the contrary, Maya is
allowing you to add to its functionality, in order to create a more efficient and productive
environment, best suited to your specific needs.
The custom interface that we will create will hide all the details and effort put into
creating the blend shape deformations, and will make the process of setting keys for
animation much faster and easier. It will do so by automating repetitive steps that would
take more time and effort using the default Maya interface. Someone who knows nothing
about how the character was implemented can still use the control panel to quickly
animate the character. This custom interface not only performs at once a series of tasks
that we would otherwise have to perform one by one, but also gathers, in a single
window, controls that are relevant to this particular project and that are normally
scattered across different interface windows in Maya.
I will start with a few MEL procedures that I have written to automatically set
keys for animation. Although it will be possible to invoke these procedures from the
command line, I have written them specifically to be invoked in the custom interface.
Before implementing the procedures, I have defined some variables other than the
ones used for blendShape. I have created two group nodes: facialExpressionControl and
speechControl. These nodes will act as a higher layer for indirectly accessing
blendShape’s attributes. To these nodes, I have added the keyable attributes shown in the
following table:

facialExpressionControl      speechControl

blink                        eh
squint                       uh
beSurprised                  oh
frownRight                   ah
frownLeft                    fh
smile                        mh
beSad
beAngry
smirkRight
smirkLeft
sneerRight
sneerLeft
blowAir
openMouth

Then, I have used simple MEL expressions to link each of the attributes defined
above to the ones used by blendShape. The reason for doing this (besides hiding the
blendShape layer) is that blendShape’s attribute values range from 0 to 1, but in my
custom interface, I want the sliders that control blendShape’s attributes to go from 0 to
10 instead. Therefore:

• For facialExpressionControl:

blendShape.BLINK = facialExpressionControl.blink * 0.1
blendShape.SQUINT = facialExpressionControl.squint * 0.1
blendShape.OPENMOUTH = facialExpressionControl.openMouth * 0.1

• For speechControl:

blendShape.EH = speechControl.eh * 0.1
blendShape.UH = speechControl.uh * 0.1
blendShape.MH = speechControl.mh * 0.1
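The effect of this two-layer setup can be sketched in a few lines of Python. The class and attribute names below are mine, chosen for illustration; only the 0..10 to 0..1 scaling mirrors the linking expressions above:

```python
class BlendShape:
    """Stand-in for the blendShape node: pose weights range 0..1."""
    def __init__(self, poses):
        self.weights = {p: 0.0 for p in poses}

class ControlLayer:
    """Higher-level node whose 0..10 slider attributes drive the 0..1
    blendShape attributes, as the linking expressions do."""
    def __init__(self, blend_shape, mapping):
        self.blend = blend_shape
        self.mapping = mapping      # control attribute -> blendShape attribute

    def set(self, attr, value):     # value is in the 0..10 slider range
        self.blend.weights[self.mapping[attr]] = value * 0.1

bs = BlendShape(["BLINK", "EH"])
ctl = ControlLayer(bs, {"blink": "BLINK", "eh": "EH"})
ctl.set("blink", 6)
print(round(bs.weights["BLINK"], 3))   # 0.6
```

The animator only ever touches the control layer; the blendShape node stays hidden behind it.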

Now, for each attribute of facialExpressionControl and speechControl, a script is
written to set keys for animation. The following script is called blinkSCR, and is used to
set animation keys for facialExpressionControl.blink. The scripts for the rest of the
attributes are similar to this one and can be obtained by simply replacing the correct node
and attribute names.

// This script builds a procedure for animating Pepi’s
// facialExpressionControl.blink attribute.

global proc blinkSCR (float $blinkDelay){

The argument $blinkDelay is the desired duration of one blink.

// set up variables that will be used in the script

string $blink = "facialExpressionControl.blink";
float $time = `currentTime -query`;
float $blinkCurrent = `getAttr $blink`;

currentTime is a Maya system variable. getAttr is a predefined MEL command
that returns the value of the named object’s attribute. In MEL, a command must be
enclosed in backquotes (`) in order to capture its output.

// at the start of the blink, set a key of value 0 (no blink)

setKeyframe -value 0
-time $time
-attribute $blink
$blink;

setKeyframe is a predefined MEL command. Pepi is in his initial state with his
eyelids open. The morphing process is about to start. A key is set.
facialExpressionControl.blink is initialized to ‘0’. (And as a consequence,
blendShape.BLINK is set to ‘0’.)

// half way through the blink, set a key of value "$blinkCurrent"

setKeyframe -value $blinkCurrent
-time ($time + $blinkDelay/2)
-attribute $blink
$blink;

$blinkCurrent is the value for facialExpressionControl.blink set by the user. This
will be done later through the custom interface window.
facialExpressionControl.blink is set to $blinkCurrent. (Therefore,
blendShape.BLINK is set to $blinkCurrent * 0.1.) Pepi’s face is morphed into a blinking
pose of value $blinkCurrent. A key is set.

// at the end of the blink, set a key of value 0 (no blink)

setKeyframe -value 0
-time ($time + $blinkDelay)
-attribute $blink
$blink;

Pepi is morphed back to his initial pose. The eyelids return to their initial open
position. The last key is set.

// notify the user

print("keys were set for a blink of: value= " + $blinkCurrent
+ ", startTime= " + $time + ", duration= " + $blinkDelay +
".\n");
}

We can now invoke the above script by simply entering, for example, blinkSCR 5
at the command line. Let us assume that we have set the current time to 40 and the
current value for facialExpressionControl.blink to 6 (the maximum is 10). This means that, at time
40, Pepi will start to gradually close his eyelids until time 42.5, by which point the
eyelids will be partially closed by an amount of 6. Then, he will gradually return his
eyelids to their initial open position by time 45.
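The three setKeyframe calls in blinkSCR thus trace out a simple “triangle” of keys over time. Here is a Python sketch of the keys the example above would produce; the function name and tuple representation are mine, for illustration:

```python
def blink_keys(start_time, current_value, blink_delay):
    """Return the (time, value) keys that blinkSCR sets: no blink at
    the start, the user's blink value halfway through, and no blink
    again at the end."""
    return [
        (start_time, 0),
        (start_time + blink_delay / 2.0, current_value),
        (start_time + blink_delay, 0),
    ]

print(blink_keys(40, 6, 5))   # [(40, 0), (42.5, 6), (45, 0)]
```

Because only three keys are set, one call to the script produces one complete blink of the requested duration.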
In order to specify all of these values, we need to first set the time value by using
Maya’s animation time line. We then need to select the facialExpressionControl node,
and change its blink attribute to the desired value using, for example, the channel box or
the attribute editor. Then, we can invoke blinkSCR at the command line. Again, this is not
a complex process. But it would be nice if Pepi had his own control panel, with all of the
above operations in one place. And, since I am the main user, this custom interface would
also be designed according to my preferences regarding the layout and the functionality
of the controls.

After examining the layouts of some of the Maya interface windows, I decided on
those components that were the most appealing to me. I wanted the interface to be very
simple and self-explanatory. I also wanted it to be very compact. I found all of these
design elements in Maya’s attribute editor, shown in Figure 46.

Figure 46:
The Attribute Editor.

The attribute editor has a relatively small size, but is amazingly functional. The use of
scrollbars, tabs, and collapsible frames keeps the layout complexity very low, and
allows the user to quickly shift focus to a specific group of controls.

Figure 47 shows a preliminary sketch of Pepi’s control panel layout.

Figure 47: A Preliminary Sketch for the Custom Interface.

I will now go over the script itself. To keep things clear and concise, I have
removed repetitive code portions, which I will note as I progress through the script. Each
important component (whether a procedure or an interface control) will be discussed at
its own creation time in the script. If a component is mentioned prior to its creation time,
because it is referenced in a procedure, I will then provide the line number indicating the
location of its creation in the script.

1. // Author: Ervin Bakhshian
2. // creation date: 10/3/99
3. // ----------------------------------------------------------------
4. // These scripts have been written for animating Pepi’s
5. // facial expressions and speech poses.
6. // They are part of the author's Master's thesis.
7. // ----------------------------------------------------------------

8. // procedures for the top commands --------------------------------

I will start with the procedures invoked by the top control widget: the animation
slider group called animationSlider (line 178).

9. global proc updateTimeLinePROC() {
10. // query the control panel’s time slider value
11. float $currentFrame = `intSliderGrp -query -value
12. interfaceSCR|rootLayout|topControlsLayout|animationSlider`;

Line 12 shows how a particular control is traced through the script by specifying
its entire ancestral path. Here, interfaceSCR (line 151) is the name of the main window.
rootLayout (line 169) is the window’s main layout. topControlsLayout (line 171) is the
layout widget containing the animation slider.
13. // update the global time and all attributes accordingly
14. currentTime -edit $currentFrame;
15. refreshAttributesPROC;
16. }

The above procedure, updateTimeLinePROC, is invoked every time the
value of the animation slider group is changed, either by dragging its slider button or by
typing a new value in its edit-box. This gives us control over Maya’s global time.
However, as we change the value of the global time, we also need each attribute entry
in the control panel to update to its current value. In other words, we need the control
panel to be aware of the attribute values for any animation keys that we might have
previously set for Pepi. This is why, each time we modify the global time, we must call
refreshAttributesPROC, which comes next.
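To see why this refresh is necessary, remember that once keys are set, a keyed attribute’s value is a function of the current time. The Python sketch below evaluates such a function by linear interpolation; Maya’s animation curves are spline-based by default, so linear is used here only for simplicity, and the function name is mine:

```python
def value_at(keys, t):
    """Evaluate a keyed attribute at time t by interpolating between
    its (time, value) keys. Before the first key and after the last,
    the boundary value is held."""
    keys = sorted(keys)
    if t <= keys[0][0]:
        return keys[0][1]
    if t >= keys[-1][0]:
        return keys[-1][1]
    for (t0, v0), (t1, v1) in zip(keys, keys[1:]):
        if t0 <= t <= t1:
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)

# The blink keys from the earlier blinkSCR example: value 0 at time 40,
# up to 6 at 42.5, back to 0 at 45.
keys = [(40, 0), (42.5, 6), (45, 0)]
print(value_at(keys, 41.25))   # 3.0
```

Whenever the global time changes, every panel entry must be refreshed with the value its attribute takes at the new time, which is exactly what refreshAttributesPROC does by re-querying each attribute.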
17. global proc refreshAttributesPROC() {
18. // reloads all attributes so that they are current

This procedure gathers the values of all attributes at the current time, and updates
the corresponding entries in the control panel.
19. int $newTime = `currentTime -query`;
20. intSliderGrp -edit -value $newTime
21. interfaceSCR|rootLayout|topControlsLayout|animationSlider;

The Maya global time is queried. Then, the value of the control panel’s animation
slider is set to the global time. It may seem redundant to perform this operation right after
updateTimeLinePROC, however, as I will later explain, updateTimeLinePROC is not the
only place where refreshAttributesPROC is invoked.
22. int $newBlink = (float) `getAttr facialExpressionControl.blink`;

Line 22 is an example of type conversion (float to integer).

23. intSliderGrp -edit -value $newBlink
24. interfaceSCR|rootLayout|frame|form|tabs|child1|child1Form|
25. blinkC|rc1|blinkAmountSlider;

The current blink value is queried. The value of the blinking slider group called
blinkAmountSlider (line 245) is set to the current blink value. The blink slider’s ancestral
path is: frame (line 216), a child of rootLayout containing the middle controls; form (line
222), a placement widget for frame’s components; tabs (line 224), a tab layout widget;
child1 (line 232), the first tab of tabs; child1Form (line 233), another placement widget
for child1’s components; blinkC (line 234), a frame layout widget for the blink slider
group’s components; rc1 (line 240), a row-column layout widget containing the blink
slider group.
I have removed some code here. What has been done in lines 22 to 25 should be
reproduced for all of facialExpressionControl’s attributes. For instance, for the
openMouth attribute, lines 22 to 25 should be modified to the following code in lines 26
to 29. (The row-column widget has been named rc14 because openMouth is the 14th
pose).
26. int $newOpenMouth =
    (float) `getAttr facialExpressionControl.openMouth`;
27. intSliderGrp -edit -value $newOpenMouth
28. interfaceSCR|rootLayout|frame|form|tabs|child1|child1Form|
29. openMouthC|rc14|openMouthAmountSlider;

We repeat the same thing for speechControl’s attributes. The first attribute is eh:
30. int $newEH = (float) `getAttr speechControl.eh`;
31. intSliderGrp -edit -value $newEH
32. interfaceSCR|rootLayout|frame|form|tabs|child2|child2Form|
33. ehC|rc15|ehAmountSlider;

The current eh value is queried. The value of the eh slider group called
ehAmountSlider (line 297) is set to the current eh value. The eh slider group is a
descendent of tabs. It is inside child2 (line 284), the second tab of tabs; child2Form (line
285) is the placement widget for child2’s components; ehC (line 286) is the frame layout
widget for the eh slider group’s components; finally, rc15 (line 292) is the row-column
layout widget containing the eh slider group.
Here again, I have omitted some code. What has been done in lines 30 to 33
should be emulated for the rest of speechControl’s attributes. For example, for the mh
attribute, lines 30 to 33 should be modified to the following code in lines 34 to 37. (the
row-column widget has been named rc20 because mh is the 20th pose).
34. int $newMH = (float) `getAttr speechControl.mh`;
35. intSliderGrp -edit -value $newMH
36. interfaceSCR|rootLayout|frame|form|tabs|child2|child2Form|
37. mhC|rc20|mhAmountSlider;
38. }

39. // procedures for the middle commands (first tab) ------------------

Now, we should define the procedures invoked by the middle control widgets.
These are the tabs containing the fields, sliders, and buttons.
40. global proc selectProperObjectPROC() {
41. // when a tab is selected, the corresponding group node in the
42. // scene is also selected.

43. string $currentTab = `tabLayout -query -selectTab
44. interfaceSCR|rootLayout|frame|form|tabs`;

45. if ($currentTab == "child1") select facialExpressionControl;
46. if ($currentTab == "child2") select speechControl;
47. }

The above procedure is optional. Its role is to further synchronize the custom
interface with the Maya interface. It makes sure that when we change the focus from one
tab to the other within the control panel, then the corresponding object in the scene is
selected (using the predefined MEL command: select). So when we select the “Facial
expressions” tab, the node facialExpressionControl is selected in the scene. This is not
required for the proper operation of the control panel. The reason for doing this is that,
when we set animation keys for an object using Maya’s time line, then those keys are
represented on the time line by a red tick mark. Tick marks for an object are displayed
only if that object is selected. Therefore, by selecting facialExpressionControl or
speechControl, and provided that we have chosen to turn on the display of Maya’s time
line in our working area, then we can take advantage of the “red tick marks” feature.

The following procedures are the main commands of the control panel. They
modify attribute values and set animation keys. We will first implement the procedures
called by the components of the first tab: those that control Pepi’s facial expressions.
48. global proc updateBlinkValuePROC() {
49. // update facialExpressionControl.blink attribute

50. float $currentBlink = `intSliderGrp -query -value
51. interfaceSCR|rootLayout|frame|form|tabs|child1|
52. child1Form|blinkC|rc1|blinkAmountSlider`;
53. setAttr facialExpressionControl.blink $currentBlink;
54. }

The above procedure updateBlinkValuePROC is called by the blink slider group
(line 245) to set the amount for the blink attribute. setAttr is a predefined MEL command
that assigns the given value (its second argument) to the named object’s attribute (its
first argument).

55. global proc blinkGetInfoPROC() {
56. // get necessary info and call the procedure to set keys

57. float $blinkDelay = `intSliderGrp -query -value
58. interfaceSCR|rootLayout|frame|form|tabs|child1|child1Form|
59. blinkC|rc1|blinkDelaySlider`;

60. // done with this value, so reset it to zero.
61. intSliderGrp -edit -value 0
62. interfaceSCR|rootLayout|frame|form|tabs|child1|child1Form|
63. blinkC|rc1|blinkDelaySlider;

64. // set keys
65. blinkSCR $blinkDelay;
66. }

After using the control panel’s animation slider group to set the current time, and
after specifying the blink amount at that time, we would normally like to set an animation
key. This is the role of the above blinkGetInfoPROC procedure. Setting a key involves
first specifying a duration value for the blink using the blink delay slider group (line 256),
and then pressing the ‘Key’ button (line 255). As soon as we press the ‘Key’ button,
blinkGetInfoPROC is invoked. The blink delay value is queried and passed as blinkSCR’s
argument. (We defined blinkSCR earlier as a separate script.)
I have removed some code here. The code in lines 48 to 66 should be reused for
the remaining attributes of facialExpressionControl. Thus, for the openMouth attribute,
we would modify lines 48 to 66 to obtain the following lines (67 to 85):
67. global proc updateOpenMouthValuePROC() {
68. // update facialExpressionControl.openMouth attribute
69. float $currentOpenMouth = `intSliderGrp -query -value
70. interfaceSCR|rootLayout|frame|form|tabs|child1|child1Form|
71. openMouthC|rc14|openMouthAmountSlider`;
72. setAttr facialExpressionControl.openMouth $currentOpenMouth;
73. }

74. global proc openMouthGetInfoPROC() {
75. // get necessary info and call the procedure to set keys
76. float $openMouthDelay = `intSliderGrp -query -value
77. interfaceSCR|rootLayout|frame|form|tabs|child1|child1Form|
78. openMouthC|rc14|openMouthDelaySlider`;

79. // done with this value, so reset it to zero.
80. intSliderGrp -edit -value 0
81. interfaceSCR|rootLayout|frame|form|tabs|child1|child1Form|
82. openMouthC|rc14|openMouthDelaySlider;

83. // set keys
84. openMouthSCR $openMouthDelay;
85. }

We should now implement the procedures called by the components of the second
tab: the ones that control Pepi’s speech.
86. // procedures for the middle commands (second tab) -----------------

87. global proc updateEhValuePROC() {
88. // update speechControl.eh attribute
89. float $currentEh = `intSliderGrp -query -value
90. interfaceSCR|rootLayout|frame|form|tabs|child2|child2Form|
91. ehC|rc15|ehAmountSlider`;
92. setAttr speechControl.eh $currentEh;
93. }

The above procedure updateEhValuePROC is called by the eh slider group (line 297) to
set the amount for the eh attribute.
94. global proc ehGetInfoPROC() {
95. // get necessary info and call the procedure to set keys
96. float $ehDelay = `intSliderGrp -query -value
97. interfaceSCR|rootLayout|frame|form|tabs|child2|child2Form|
98. ehC|rc15|ehDelaySlider`;

99. // done with this value, so reset it to zero.
100. intSliderGrp -edit -value 0
101. interfaceSCR|rootLayout|frame|form|tabs|child2|child2Form|
102. ehC|rc15|ehDelaySlider;

103. // set keys
104. ehSCR $ehDelay;
105. }

As we did previously for facialExpressionControl’s blink attribute, after using the
control panel’s animation slider to set the current time, and after specifying the eh
amount at that time, we would normally like to set an animation key. This is taken care of
by the above ehGetInfoPROC procedure. We specify an eh delay using the eh delay
slider group (line 308), and then press the ‘Key’ button (line 307). As soon as we
press the ‘Key’ button, ehGetInfoPROC is invoked. The eh delay value is queried and
passed as ehSCR’s argument (ehSCR, very similar to blinkSCR on page 47, has been
written previously as a separate script).
Once again, I have discarded some code here. Lines 87 through 105 should be
reused to write the procedures for the remaining attributes of speechControl. For
instance, for the mh attribute, we would modify lines 87 to 105 to obtain the following
procedure (lines 106 to 124):

106. global proc updateMhValuePROC() {
107. // update speechControl.mh attribute
108. float $currentMh = `intSliderGrp -query -value
109. interfaceSCR|rootLayout|frame|form|tabs|child2|child2Form|
110. mhC|rc20|mhAmountSlider`;
111. setAttr speechControl.mh $currentMh;
112. }

113. global proc mhGetInfoPROC() {
114. // get necessary info and call the procedure to set keys
115. float $mhDelay = `intSliderGrp -query -value
116. interfaceSCR|rootLayout|frame|form|tabs|child2|child2Form|
117. mhC|rc20|mhDelaySlider`;

118. // done with this value, so reset it to zero.
119. intSliderGrp -edit -value 0
120. interfaceSCR|rootLayout|frame|form|tabs|child2|child2Form|
121. mhC|rc20|mhDelaySlider;

122. // set keys
123. mhSCR $mhDelay;
124. }

We have one more set of procedures to write before we can start with the main
program. These are the procedures for the three buttons at the bottom of the control
panel. My original intent, as illustrated in the preliminary sketch in Figure 50, was to
have only one ‘close’ button here. However, I decided to add an ‘about’ and a ‘help’
button, so that I can experiment with the use of MEL dialog windows, and also just in
case someone else might use the control panel to animate Pepi.
125. // procedures for the bottom commands: --------------------------

The following procedure is invoked by the ‘close’ button (line 343). deleteUI is a
predefined MEL command that deletes GUI objects such as windows and controls.
Deleting a layout (group) or window will also delete all of its children.
126. global proc closePROC() {
127. // called by close button
128. deleteUI interfaceSCR;
129. }

The next two procedures called aboutPROC and helpPROC are examples of
creating a dialog window. The confirmDialog command creates a modal dialog with a
message to the user and a variable number of buttons to dismiss the dialog. The only
other available command to create a dialog window is the promptDialog, which is similar
to the confirmDialog, but has, in addition, an edit area where the user is expected to enter
data. Both of these commands create dialog windows that are full application modal,

meaning that, until the dialog window is dismissed, we cannot interact with any other
Maya window. There are no predefined MEL commands at this time for creating a
modeless dialog window, which would allow us to leave a dialog window open, while we
are concurrently interacting with its calling (parent) window.
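For comparison, a promptDialog is built much like a confirmDialog, and its text entry
can be queried once the user dismisses it. The following minimal sketch is not part of
Pepi’s control panel; it only uses standard promptDialog flags:

string $result = `promptDialog
        -title "Example"
        -message "Enter a name:"
        -button "OK" -button "Cancel"
        -defaultButton "OK" -cancelButton "Cancel"
        -dismissString "Cancel"`;
// retrieve the entered text only if the user confirmed
if ($result == "OK") {
    string $text = `promptDialog -query -text`;
}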

130. global proc aboutPROC() {
131. // about dialog window
132. confirmDialog
133. -title "About the author"
134. -message "\n
135. Ervin Bakhshian\n\n
136. egb41679@csun.edu\n\n
137. www.csun.edu/~egb41679/ervin.html\n"
138. -button "OK"
139. -cancelButton "OK";
140. }

The above procedure is invoked by the ‘about’ button (line 344).


141. global proc helpPROC() {
142. // help dialog window
143. confirmDialog
144. -title "Help"
145. -message "Hello!\n\n
146. This control panel has been written to work specifically\n
147. with Pepi, the character developed by the author for his\n
148. Masters' project. It will only work properly if your scene\n
149. contains a complete model of Pepi.\n\n
150. You can use this control panel to animate Pepi's facial\n
151. expressions and speech. Start by selecting one of the two\n
152. tabs. Each tab displays a list of Pepi's possible behaviors\n
153. that you can animate. Use the side scrollbar to scroll through\n
154. the list.\n
155. Now, use the top slider to set the current frame (or type\n
156. your choice in the edit box). The scene will update to the\n
157. time frame that you have specified.\n
158. Next, inside the tab, select the desired behavior by\n
159. clicking on its little side button. This will expand its\n
160. controls (you can collapse/expand the controls by using the\n
161. side button).\n
162. Drag the amount slider and watch Pepi animate. Use the\n
163. controls to set the desired amount and delay for the behavior.\n
164. Once satisfied, press the 'Key' button, and animation keys\n
165. will be set according to your specifications. Use the top\n
166. slider to rewind and forward the scene, and watch your new\n
167. animation."
168. -button "OK"
169. -cancelButton "OK";
170. }

The above procedure is invoked by the ‘help’ button (line 345).

171. // main interface: ------------------------------------------------

This is the main program. It is here that the control panel’s actual components and
their layout are built.
172. global proc interfaceSCR() {

173. int $currentBlink = (float) `getAttr facialExpressionControl.blink`;

I have omitted some lines here. The statement in line 173 is simply repeated for
the remaining attributes of facialExpressionControl.
174. int $currentEH = (float) `getAttr speechControl.eh`;

As above, line 174 is reused for the rest of speechControl’s attributes.


175. // clean up any existing interfaceSCR windows
176. if ((`window -exists interfaceSCR`) == true) deleteUI interfaceSCR;

The above statement makes sure that there is only one control panel present at any
one time.
177. // create the main window

Before building the main window, a windowPref command is used to specify
preferred window attributes. The size and position of a window are retained during and
between application sessions. A default window preference is created when a window is
closed. The windowPref command can be used to override these default preferences. In
this case, we do not want to allow any modifications to our window’s size or position.

178. windowPref -enableAll false; // no user preferences, size fixed

The next step is to create the main window. The window command creates a new
window but leaves it invisible. It is most efficient to first add the window's elements and
then make it visible. We have the option of enabling or disabling features such as a menu
bar, a title bar, or the minimize/maximize buttons. Here, we have created a window that
is 390 pixels wide and 350 pixels tall. We have called it Facial Animation in its title bar,
but Maya will know it as interfaceSCR. Since we do not want it to be resized, we have set
its -sizeable attribute to false (line 183). This means that the window will be created
without minimize and maximize buttons.
179. window // name and dimensions
180. -width 390
181. -height 350
182. -title "Facial Animation"
183. -sizeable false
184. interfaceSCR;

Now, we need to use a layout widget to organize the components that we want to
create inside the window. Such a widget is also known as a “manager” widget (X-Motif
terminology). All components are organized according to a hierarchy. A formLayout
widget is used that will be the root widget containing all of the other widgets. I have
therefore called it rootLayout. The form layout allows absolute and relative positioning of
the controls that are its immediate children, via attachments. It is defined here before we
can proceed with the creation of its children. Once the children are created, we will edit
rootLayout and specify its arrangement (line 346).

185. // this is the main layout
186. formLayout rootLayout;

A layout widget can contain control widgets or other layout widgets. In line 187,
we are using a row layout called topControlsLayout, inside rootLayout. As its name
indicates, it will contain the top controls. A rowLayout widget arranges its children into a
single horizontal row. Here, we are defining a row layout with two columns. The first
one, 300 pixels wide, will contain the animation slider (line 191). The second one, 50
pixels wide, will contain the icon-button that launches this report in html format (line
205).
187. // this is the 1st sub-layout for the top controls
188. rowLayout -numberOfColumns 2
189. -columnWidth2 300 50
190. topControlsLayout;

Every time a new layout widget is created, it becomes the current parent or
container. In other words, the next component, animationSlider, unless specified
otherwise, will be topControlsLayout’s first child.
To create the animation slider group, the intSliderGrp command is used. This
command creates a pre-packaged collection of text label, integer field and integer slider,
as well as a built-in row layout to position them. The default behavior is for the slider and
field to have the same range of values.
As you can see below, our text label contains “Current frame:” and is 80 pixels
wide. The integer field displays the current time and is 50 pixels wide. The integer slider
varies from the current scene’s start of animation time to its end of animation time, and is
130 pixels wide. Every time the integer value is changed, using either the field or the
slider, the procedure updateTimeLinePROC (line 9) is invoked.
191. intSliderGrp
192. -label "Current frame:"
193. -columnWidth3 80 50 130
195. -field true
196. -minValue `playbackOptions -query -animationStartTime`
197. -maxValue `playbackOptions -query -animationEndTime`
198. -value `currentTime -query`
199. -changeCommand "updateTimeLinePROC"
200. -dragCommand "updateTimeLinePROC"
201. animationSlider;

topControlsLayout is still the current parent. We can therefore create its second
child: the icon-button. For this we use the iconTextButton widget. This is similar to a
normal button, but can be labeled with either text, or an icon, or both.
The button we have created is 70 pixels wide and 45 pixels tall, is labeled with an
icon, and when pushed, invokes the predefined Maya script doBrowserHelp to launch this
report in html format. Also, when the mouse pointer is moved over the button, a text pops
up explaining the role of the button. This is achieved by using the -annotation attribute in
line 210.
202. // the following icon-button uses Maya's "doBrowserHelp" script,
203. // provided in the directory "../scripts/others" to launch Netscape
204. // and open the thesis file for this project in html format.
205. iconTextButton
206. -width 70
207. -height 45
208. -style "iconOnly"
209. -image1 "…/maya/projects/Pepi/PEPIICON.BMP"
210. -annotation "Opens thesis in HTML browser"
211. -command "doBrowserHelp {\"UserAbsolute\", \"…/These/Pepi.html\"};"
212. HTMLbutton;

We are done with the top controls. Therefore, we do not want topControlsLayout
to be the parent anymore. To change the current parent, we use the setParent command.
This command simply changes the current parent to the specified one. Two special
parents are “/” which indicates the top of the hierarchy, or “..” which indicates one
level up in the hierarchy (So in line 213, we could have written setParent /;
instead.). Trying to move above the top level has no effect.
213. setParent rootLayout; // return parenthood to the main layout.

214. // this is the 2nd sub-layout for the middle controls

With rootLayout set as the current parent, we are ready to create the middle
controls. We will enclose the middle buttons inside a frame, using a frameLayout widget.
The frame layout is very useful for spatially separating a group of controls from another.
A frame layout draws a border around its children. Frames can have labels, and come in a
variety of border styles. Frame layouts may also be collapsible. Collapsing a frame layout
will hide its children and shrink its size. The frame layout can then be expanded to restore
it to its original size, and to make its children visible again.
215. // create a frame for the middle controls.
216. frameLayout
217. -label "Controls"
218. -labelAlign "top"
219. -borderStyle "in"
220. frame;

frame is now the current parent. Inside it, we create a form layout that will be
used for arranging the components within the frame.
221. // then, create a form layout inside of that frame.
222. formLayout form;

With form as the current parent, a tab layout widget called tabs is created as
form’s only child, using the tabLayout command. Tab layout widgets are a special type of
layout widgets that contain only other layout widgets. Whenever a layout widget is added
to a tab group, it will have a new tab provided for it, that allows selection of that widget
amongst other tabbed layout widgets. Only one child of a tab layout is visible at a time.
For tabs, we have specified inner margin values. This is to indicate that we would
like tabs’ children to be slightly offset from its borders. Also, whenever a new tab is
selected, the procedure selectProperObjectPROC (line 40) is called.
223. // now, create a tab layout.
224. tabLayout
225. -innerMarginWidth 5
226. -innerMarginHeight 5
227. -selectCommand "selectProperObjectPROC"
228. tabs;

229. // then, create the items inside of each tab.


230. // 2 tabs: facial expression (child1) and speech (child2).

tabs has two children. Both are shelf layout widgets. The shelfLayout command
creates a new empty shelf layout, which is another general-purpose layout widget. I used
shelf layouts here because of their appealing look, and also because they support
automatic scrollbars.
231. // tab 1:
232. shelfLayout -cellWidthHeight 500 1000 child1;
233. formLayout child1Form;

After creating a form layout named child1Form inside child1 (line 233), a group
of controls is created for each of facialExpressionControl’s attributes. For each attribute,
a frame layout is used to set its controls apart from the rest.
For the blink attribute, a frame layout called blinkC is created. Its label is set to
“blink”. A margin of 5 pixels is specified for its border. Both its -collapsable and
-collapse attributes are set to true. A true -collapsable attribute means that the frame will
feature a small side button, which, when pushed, will cause the frame to expand or
collapse. A true -collapse attribute means that the frame will be created in an initially
collapsed state.
234. frameLayout
235. -label "blink"
236. -marginWidth 5
237. -collapsable true
238. -collapse true
239. blinkC;

With blinkC as the current parent, a row-column layout widget called rc1 is
created using the rowColumnLayout command (rc1 is so named because blink is
Pepi’s first pose).
240. rowColumnLayout
241. -numberOfColumns 2
242. -columnWidth 1 255
243. -columnWidth 2 40
244. rc1;

Inside rc1, three children are created: the blinkAmountSlider, a ‘key’ button, and
the blinkDelaySlider.
245. intSliderGrp
246. -columnWidth3 55 35 130
247. -label "amount:"
248. -field true
249. -minValue 0
250. -maxValue 10
251. -value $currentBlink
252. -changeCommand "updateBlinkValuePROC"
253. -dragCommand "updateBlinkValuePROC"
254. blinkAmountSlider;

blinkAmountSlider’s text label reads “amount:” and is 55 pixels wide. Its value,
held by its field (35 pixels wide) and slider (130 pixels wide), can vary from 0 to 10, and
when modified, causes the procedure updateBlinkValuePROC (line 48) to be invoked.
255. button -label "key" -command "blinkGetInfoPROC";

Next to blinkAmountSlider, a simple button labeled “key” is created using the
button command. Since no size has been specified for it, it will assume the size dictated
to it by its parent rc1, which is 40 pixels (line 243). When pressed, it will invoke the
blinkGetInfoPROC procedure (line 55).
256. intSliderGrp
257. -columnWidth3 55 35 130
258. -label "delay:"
259. -field true
260. -minValue 0
261. -maxValue 100
262. -value 0
263. blinkDelaySlider;

Below blinkAmountSlider, another intSliderGrp widget called blinkDelaySlider is
created. Its dimensions are the same as blinkAmountSlider’s, but its value varies from 0 to
100.
264. setParent child1Form;

The controls for the first pose are created. Parenthood is returned to child1Form
(line 233). Here, I have omitted some code. Lines 234 through 264 should be reused for
the remaining facialExpressionControl attributes.

Once all controls for this tab have been defined, it is time to edit child1Form, to
spatially organize the controls using attachments.
265. // now, place all items in this tab properly by editing child1Form.
266. formLayout -edit
267. -attachForm blinkC "top" 5
268. -attachControl squintC "top" 5 blinkC
269. -attachControl beSurprisedC "top" 5 squintC
270. -attachControl frownRightC "top" 5 beSurprisedC
271. -attachControl frownLeftC "top" 5 frownRightC
272. -attachControl smileC "top" 5 frownLeftC
273. -attachControl beSadC "top" 5 smileC
274. -attachControl beAngryC "top" 5 beSadC
275. -attachControl smirkRightC "top" 5 beAngryC
276. -attachControl smirkLeftC "top" 5 smirkRightC
277. -attachControl sneerRightC "top" 5 smirkLeftC
278. -attachControl sneerLeftC "top" 5 sneerRightC
279. -attachControl blowAirC "top" 5 sneerLeftC
280. -attachControl openMouthC "top" 5 blowAirC
281. child1Form;

282. setParent tabs;

Parenthood is returned to tabs. tabs’ second child, child2, is created. Similarly to
child1, a form layout named child2Form is created inside child2 (line 285). Then, a group
of controls is created for each of speechControl’s attributes. Again, for each attribute, a
frame layout is used to set its controls apart from the rest.
283. // tab 2:
284. shelfLayout -cellWidthHeight 500 500 child2;
285. formLayout child2Form;

For the eh attribute, a frame layout called ehC is created. Its label is set to “E”. A
margin of 5 pixels is specified for its border. The -collapsable and -collapse attributes
are both set to true.
286. frameLayout
287. -label "E"
288. -marginWidth 5
289. -collapsable true
290. -collapse true
291. ehC;

With ehC as the current parent, a row-column layout widget called rc15 is created
(rc15 is so named because eh is Pepi’s fifteenth pose).

292. rowColumnLayout
293. -numberOfColumns 2
294. -columnWidth 1 255
295. -columnWidth 2 40
296. rc15;

Inside rc15, three children are created: the ehAmountSlider, a ‘key’ button, and
the ehDelaySlider.
297. intSliderGrp
298. -columnWidth3 55 35 130
299. -label "amount:"
300. -field true
301. -minValue 0
302. -maxValue 10
303. -value $currentEH
304. -changeCommand "updateEhValuePROC"
305. -dragCommand "updateEhValuePROC"
306. ehAmountSlider;

ehAmountSlider’s text label reads “amount:” and is 55 pixels wide. Its value, held
by its field (35 pixels wide) and slider (130 pixels wide), can vary from 0 to 10, and when
modified, causes the procedure updateEhValuePROC (line 87) to be invoked.
307. button -label "key" -command "ehGetInfoPROC";

Next to ehAmountSlider, a simple button labeled “key” is created. Since no size
has been specified for it, it will assume the size dictated to it by its parent rc15, which is
40 pixels (line 295). When pressed, it will invoke the ehGetInfoPROC procedure (line
94).
308. intSliderGrp
309. -columnWidth3 55 35 130
310. -label "delay:"
311. -field true
312. -minValue 0
313. -maxValue 100
314. -value 0
315. ehDelaySlider;

ehDelaySlider is created below ehAmountSlider with similar dimensions, but its
value varies from 0 to 100.
316. setParent child2Form;

The controls for the fifteenth pose are created. Parenthood is returned to
child2Form (line 316). Here, I have again discarded some lines of code. Lines 286
through 316 can be reused to create the controls for the remaining attributes of
speechControl.

Once all controls for the second tab have been defined, we can edit child2Form
and arrange the controls using attachments.

317. // now, place all items in this tab properly by editing child2Form.
318. formLayout -edit
319. -attachForm ehC "top" 5
320. -attachControl uhC "top" 5 ehC
321. -attachControl ohC "top" 5 uhC
322. -attachControl ahC "top" 5 ohC
323. -attachControl fhC "top" 5 ahC
324. -attachControl mhC "top" 5 fhC
325. child2Form;

Next, tabs is edited and tab labels for its two children are set.
326. tabLayout -edit
327. -tabLabel child1 "facial expressions"
328. -tabLabel child2 "speech"
329. tabs;

form’s contents are properly centered using attachments.


330. formLayout -edit
331. -attachForm tabs "top" 5
332. -attachForm tabs "left" 1
333. -attachForm tabs "bottom" 1
334. -attachForm tabs "right" 1
335. form;

We are done with the middle controls. Parenthood is returned to rootLayout.


336. setParent rootLayout;

With rootLayout set as the current parent, we are ready to create the bottom
controls. We will use a simple row-column layout called bottomButtonsLayout. It has
three columns, one for each button. The column spacing on line 339 indicates the number
of pixels between the border of the interface window and the first button. The column
spacing on line 340 indicates the number of pixels between the first and second buttons.
And the column spacing on line 341 indicates the number of pixels between the second
and third buttons.
337. rowColumnLayout
338. -numberOfColumns 3
339. -columnSpacing 1 20
340. -columnSpacing 2 10
341. -columnSpacing 3 10
342. bottomButtonsLayout;

With bottomButtonsLayout in place, the actual bottom buttons are created. The
‘close’ button, when pressed, invokes closePROC (line 126). The ‘about’ button, when
pressed, invokes aboutPROC (line 130). The ‘help’ button, when pressed, invokes
helpPROC (line 141).

343. button -label "close" -command "closePROC" closeButton;
344. button -label "about" -command "aboutPROC" aboutButton;
345. button -label "help" -command "helpPROC" helpButton;

We are done with the bottom controls, and almost finished implementing the
control panel. We still need to edit rootLayout and to position the top, middle, and bottom
controls in relation to each other, and to the main window, again using form attachments.
346. formLayout -edit
347. -attachForm topControlsLayout "top" 10
348. -attachForm topControlsLayout "left" 5
349. -attachNone topControlsLayout "right"
350. -attachNone topControlsLayout "bottom"
351. -attachPosition frame "top" 5 15
352. -attachForm frame "left" 5
353. -attachControl frame "bottom" 10 bottomButtonsLayout
354. -attachForm frame "right" 5
355. -attachNone bottomButtonsLayout "top"
356. -attachForm bottomButtonsLayout "left" 5
357. -attachForm bottomButtonsLayout "bottom" 10
358. -attachForm bottomButtonsLayout "right" 5
359. rootLayout;

All that is left to do now is to make the window visible, by calling showWindow.
360. showWindow;

361. // when the window is created, this tab is selected by default.
362. select facialExpressionControl;
363. }

The script for the control panel is complete. We must make one final call to the
interfaceSCR procedure, so that the window is built when the script is sourced.
364. interfaceSCR;

The above script must be saved into Maya’s scripts directory as
“interfaceSCR.mel”. It can now be called by typing interfaceSCR at the command
line.
There is one important detail left for the control panel to function properly. If we
were to launch it right now, we would run into a problem: We only have a “one-way”
relationship. While the Maya interface is aware of any actions performed within the
control panel, the control panel is not reacting to any “outside” actions, performed within
the Maya interface. For instance, when we drag the control panel’s animation slider, the
Maya animation slider does update properly. However, when we drag Maya’s animation
slider, the control panel’s slider stays put. We need a process that would constantly keep
the control panel up to date with the scene’s status. This sounds like a job for an
expression:

if ((`window -exists interfaceSCR`) == true) refreshAttributesPROC;

The above expression, which I have called timeSyncEXP, is necessary to link the
Maya interface to the custom interface, thus providing a two-way relationship. While
Pepi’s control panel is open within the Maya interface, all the values in its entries are
kept synchronized with the current scene values. The timeSyncEXP expression achieves
this by calling the refreshAttributesPROC procedure (line 17).
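The expression can also be created from MEL rather than through the Expression
Editor. The following sketch uses the standard expression command; the -name and
-string flags exist, but the exact creation method used for this project is not shown here:

expression
    -name "timeSyncEXP"
    -string "if ((`window -exists interfaceSCR`) == true) refreshAttributesPROC;";

Once created, the expression node is evaluated whenever the scene time changes, which
is exactly the trigger we need to keep the panel synchronized.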
Figure 48 shows the completed control panel, with the “be surprised” attribute’s
frame expanded.

Figure 48:
Pepi’s Custom Interface.

Although we can invoke the control panel by typing interfaceSCR at the
command line, it would be nice to have an easier way to access it. Maya allows you to
assign MEL commands to icon-buttons displayed on the main interface shelves, thus
creating shortcuts to those scripts or commands that you use most frequently. The shelf
button created for invoking the control panel is shown in Figure 49.

Figure 49:
A Custom Shelf Button for Invoking
Pepi’s Control Panel.

Creating a shelf button for interfaceSCR is quite easy. Start by typing
interfaceSCR at the command line. Then, highlight the text you just typed and,
using the middle mouse button, drag the highlighted text to one of the shelves. A shelf
button with a default MEL icon is instantly created. Now, you can easily invoke Pepi’s
control panel by pressing this new shelf button. If you want to create a more personal
look, you can design your own icon. Then, you can simply replace the default icon with
yours by selecting “Options > Customize UI > Shelves” from the main interface menu.
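The same button can be created procedurally with the shelfButton command. This is
only a sketch: the shelf name Shelf1, the button name, and the icon file are placeholder
assumptions, not names used by this project:

shelfButton
    -parent Shelf1                              // an existing shelf layout
    -image1 "mel.xpm"                           // placeholder icon file
    -annotation "Launch Pepi's control panel"   // pop-up help text
    -command "interfaceSCR"                     // script to run when pressed
    pepiShelfButton;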

The hierarchy shown in the figure can be summarized as follows (each entry gives a
widget’s name, with its widget type in parentheses):

Maya main interface window
    icon-button shortcut to interfaceSCR
    interfaceSCR (window)
        rootLayout (formLayout)
            topControlsLayout (rowLayout)
                animationSlider (intSliderGrp)
                HTMLbutton (iconTextButton)
            frame (frameLayout)
                form (formLayout)
                    tabs (tabLayout)
                        child1 (shelfLayout)
                            child1Form (formLayout)
                                blinkC (frameLayout)
                                    rc1 (rowColumnLayout)
                                        blinkAmountSlider (intSliderGrp)
                                        ‘key’ (button)
                                        blinkDelaySlider (intSliderGrp)
                        child2 (shelfLayout)
                            child2Form (formLayout)
                                ehC (frameLayout)
                                    rc15 (rowColumnLayout)
                                        ehAmountSlider (intSliderGrp)
                                        ‘key’ (button)
                                        ehDelaySlider (intSliderGrp)
            bottomButtonsLayout (rowColumnLayout)
                closeButton (button)
                aboutButton (button)
                helpButton (button)
        ‘about’ (confirmDialog)
        ‘help’ (confirmDialog)
Figure 50: Widget Hierarchy for the Custom Interface.

The diagram in Figure 50 is a widget hierarchy representing the parent/child
relationships between the different components that were used in the creation of the
custom interface. You can see how layout or manager widgets can be parented to other
layout widgets, in order to create a complex organization of the window components.

The event diagram in Figure 51 is a graphical representation of the messages sent
between the different components of the custom interface. Some events are managed
automatically by the components, as part of their internal structure. Some examples of
these automatically handled features are the tabs, the collapse/expand buttons of the
control frames inside of each tab, the scrollbar in the shelf layout, the exit button at the
top right corner of the window, and the ‘OK’ buttons to dismiss the dialogs.

(The figure shows the interfaceSCR() window dispatching events to
updateTimeLinePROC(), selectProperObjectPROC(), updateBlinkValuePROC(), and
blinkGetInfoPROC(), which calls the external script blinkSCR() to set keys; the
icon-button calls the external script doBrowserHelp(), which launches Netscape; and
closePROC(), aboutPROC(), and helpPROC() handle the bottom buttons, with
closePROC() quitting the window.)
Figure 51: Event Diagram for the Custom Interface.

Pepi now has his own control panel that we can use to quickly and accurately set
animation keys. Figures 52 and 53 below show the custom interface at work.

Figure 52: Controlling Pepi’s Facial Expressions.

Figure 53: Controlling Pepi’s Speech Attributes.

The control panel will be most appropriate for creating the general aspect of the
animation. Any fine-tuning to obtain subtler motion effects will be left to Maya’s more
advanced animation tools, such as the graph editor, which allows you to refine the
animated channels by tweaking animation curves. I will elaborate on the animation
process later, but for now, let us proceed to the next section and final step before
animation can begin: The texturing phase.

2.6. Texturing Phase:

In this section, Pepi will come to life through the process of realistic texturing.
This is a crucial stage in 3D-character development. Although an accurate and detailed
model is the basis for realistic character design, proper texturing is indispensable in order
for the character to be truly convincing.
Creating a high level of surface detail is most important for achieving a realistic
look. I will demonstrate this procedure on Pepi, by means of 2D image maps created with
Photoshop, which will be imported into Maya to be used as textures [1, 2, 3, 9, 10]. A
Blinn material and a planar mapping technique will be employed. Four types of texture
maps will be created: a bump map, a color map, a diffusion map, and a specularity map.
Texturing in Maya is done through the use of shading groups. A shading group is
made up of a series of nodes that define an object’s material qualities. For Pepi, we are
using a Blinn material. We will apply our four image maps to the Blinn material node [6,
19].
The Blinn material is usually used for surfaces that have soft specular highlights.
It is therefore ideal for skin texturing. Its specularity attribute can be manipulated to
simulate a leather-like quality with subtle highlights. Also, for surfaces with bump maps,
the soft highlights obtained with a Blinn material are less likely to flicker or rope than
those obtained with other materials such as Phong. Another important attribute of a Blinn
material is eccentricity, which controls the size of shiny highlights on the surface. It
ranges from 0 to 1. A high eccentricity value (closer to 1) produces broad, but not very
shiny, highlights. A low eccentricity value (closer to 0), on the other hand, produces
small, but very shiny, highlights [6].
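These Blinn attributes can be set directly from MEL on a material node. The node and
attribute names below follow Maya’s standard Blinn node; the material name and the
chosen values are placeholder assumptions for illustration:

// create a Blinn material suitable for skin
shadingNode -asShader blinn -name skinBlinn;
setAttr skinBlinn.eccentricity 0.3;     // fairly small, soft highlights
setAttr skinBlinn.specularRollOff 0.3;  // keep the highlights subtle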
We will use the planar mapping technique to apply the textures to Pepi’s head.
This method places the texture on a plane and projects it onto the object. This is the most
commonly used mapping method. Maya provides direct manipulation and visualization
tools for both texture and light placement. A feature called hardware texturing allows for
visualizing a roughly rendered preview of a texture or light node’s effects, directly in the
modeling view. This means that you can interactively place textures and lights, moving
them in your scene and getting immediate feedback.
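A planar projection can likewise be wired up in MEL. This sketch assumes a Blinn
material named skinBlinn already exists; the file name is a placeholder, and the
projection node’s projType attribute selects the projection type (1 is planar):

// project a color map onto the surface with a planar projection
shadingNode -asTexture file -name colorMapFile;
setAttr -type "string" colorMapFile.fileTextureName "pepiColor.iff";
shadingNode -asUtility projection -name colorProjection;
setAttr colorProjection.projType 1; // planar
connectAttr colorMapFile.outColor colorProjection.image;
connectAttr colorProjection.outColor skinBlinn.color;

In practice, a place3dTexture node is also connected to the projection so that the
projection plane can be positioned interactively in the modeling view.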

66
Before we start, I have included in Figure 54 a “before” rendering of Pepi with
his default textures.

Figure 54: Pepi Before Applying Image Maps.

As you can see, the Blinn material by itself, with a default color, produces a sort
of polished, shiny look, as if Pepi’s skin were made of plastic or metal.

2.6.1. A Bump Map for Pepi:

Bump mapping is a way to simulate three-dimensional surface detail effects without actually altering the shape of the surface. The surface is made to appear rough or bumpy by altering surface normals during rendering, according to the intensity of the pixels in the bump map [6]. The achieved effect simulates the existence of surface relief,
by making parts of the model appear to be raised or lowered. The dark areas of the bump
map will appear slightly recessed, while the light areas will appear slightly elevated.
Figure 55 shows the two-dimensional bump map, along with the results of applying it to
Pepi’s model.

Figure 55: The Bump Map and Its Effects.

The bump map ignores color, because it deals only with grayscale or alpha channel values. It does not actually add any shades of gray to the surface; rather, it uses the different nuances as weights, much like Artisan’s Paint Weights tool (see Figure 23 in section 2.3.2.).
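How a renderer uses those nuances as weights can be sketched in a few lines of Python: the bump map's intensity gradient tilts each normal, without moving any geometry. This is a conceptual sketch only, not Maya's renderer code, which works per-sample in tangent space:

```python
import math

def bump_normal(bump, x, y, strength=1.0):
    """Tilt the flat-surface normal (0, 0, 1) by the bump map's intensity
    gradient at pixel (x, y). `bump` is a 2D grid of gray values in [0, 1].
    (Conceptual sketch; the name and signature are made up.)"""
    dhdx = (bump[y][x + 1] - bump[y][x - 1]) / 2.0  # central differences
    dhdy = (bump[y + 1][x] - bump[y - 1][x]) / 2.0
    n = (-strength * dhdx, -strength * dhdy, 1.0)
    length = math.sqrt(sum(c * c for c in n))
    return tuple(c / length for c in n)

# A single bright pixel: neighboring normals tilt away from it, so the
# surface appears raised even though no geometry has moved.
bump = [[0.0, 0.0, 0.0],
        [0.0, 1.0, 0.0],
        [0.0, 0.0, 0.0]]
print(bump_normal(bump, 1, 1))  # straight up: the gradient is zero at the peak
```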
You can see that with the bump map alone, Pepi has already started to undergo a
significant transformation. The details that we have added have given him a lot more
character. Prominent wrinkles have been added to his forehead. Subtle rings have been
drawn under his eyes, along with fine creases on the sides. Some wrinkling on the cheeks
and around the mouth has been added to enhance the laugh lines on the model. Very fine
lip creases, appropriate to Pepi’s age, have also been drawn. Finally, various skin pores
have been randomly scattered throughout the face, but primarily on the forehead, the
chin, and the end of the nose.
One important step in creating such a bump map is to make sure that none of the
crease lines terminate too abruptly. Their extremities should gradually fade out for a more
natural look. It is this meticulous attention to detail that will make Pepi look much more
authentic. The bump map is an extension of the modeling phase. It adds the finishing
touches to the model and is the basis for realistic texturing.

We can now proceed to the next image map: the color map. Again, our goal is to
reproduce features that we readily recognize. Regardless of how close to reality or how
imaginary our character concept is, it must to some degree appeal to our common sense.
When we look at any creature that is supposed to look elderly, we expect to see the effects of age and long-term exposure to the elements.

2.6.2. A Color Map for Pepi:

The color of organic surfaces such as skin is rarely consistent. We must reproduce
this aspect when creating Pepi’s color map. It is the subtle and gradual color variations
that will make him more believable.
Figure 56 shows the two-dimensional color map, along with the results of
applying it, together with the bump map, to Pepi’s model. Exposure to weather has been
characterized by the more pronounced redness of some areas of the face, mainly around
the forehead and on the nose. A darker skin tone under the eyes enhances the rings that
we have added with the bump map. A more pallid beard region is the result of years of
shaving and less sun exposure. The natural redness of the lips and the appearance of some
capillaries around them enhance the mouth area. Finally, some randomly placed lighter
and darker marks represent Pepi’s age and liver spots.

Figure 56: The Color Map and Its Effects.

With the color map in place, Pepi has undergone his second important
transformation. We can proceed to the next step: creating a diffusion map.

2.6.3. A Diffusion Map for Pepi:

Diffusion is the ability of a material to reflect light in all directions. The diffuse
value acts like a scaling factor applied to the color setting: the higher the diffuse value
(whiter on the map), the closer the actual surface color will be to the color setting [6].
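This scaling-factor behavior is easy to see in a small Python sketch of Lambert-style shading. The function name and the sample color are hypothetical, chosen only to illustrate the rule quoted above:

```python
def diffuse_color(surface_color, diffuse, n_dot_l):
    """The diffuse value scales the color setting: the whiter the
    diffusion map at this point, the closer the shaded result is to the
    color setting. n_dot_l is the usual Lambert cosine term. (Sketch.)"""
    return tuple(c * diffuse * max(0.0, n_dot_l) for c in surface_color)

skin = (0.8, 0.6, 0.5)  # a hypothetical skin-tone color setting
print(diffuse_color(skin, 1.0, 1.0))  # full diffuse: the color setting itself
print(diffuse_color(skin, 0.6, 1.0))  # darker map value: a dimmer, softer result
```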
Diffusion is useful for skin texturing, because it gives a skin surface its soft and
porous quality. The light reflected by the skin is scattered by the diffusion map, giving
the skin its elasticity and leather-like appearance. The crease lines and skin tones
generated by the two previous image maps are diffused in order to tone them down a bit,
in a sense blending them together to create a more natural look.
Figure 57 shows the effects of Pepi’s diffusion map, applied with the Blinn material’s eccentricity set to 0.6.

Figure 57: The Diffusion Map and Its Effects.

2.6.4. A Specularity Map for Pepi:

Specularity refers to the color of shiny highlights on a surface. A black specular color produces no surface highlights, while a white specular color gives a surface a
glossy, shiny appearance [6].
We would like the areas of the face that protrude to be more specular than the rest: the forehead, the nose, the cheeks, the chin, and the lips.
The specularity map is shown in Figure 58 below, along with the resulting effects on the
model.

Figure 58: The Specularity Map and Its Effects.

As far as this project is concerned, I have only textured Pepi’s face. Everything
that we have accomplished in this section should be repeated for all of the other sides of
the head. The image maps for the remaining sides of the head will of course be much simpler to create, since the major features and details are concentrated in the face.

2.6.5. Texturing Pepi’s Eyes and Tongue:

Pepi’s very simple default eye texture was appropriate for visualization, but needs
to be upgraded to the same level as his new face. Figure 59 shows the texture created for
each eyeball.

Figure 59: Eyeball Texture.

Planar mapping has been used here as well.

This is the only image map that has been used for Pepi’s eyeballs. In order to
make the eyes appear more real, their default specularity has been increased, giving them
a more glossy, “wet” appearance.

The tongue needs a more suitable texture as well. When Pepi opens his mouth, the
tongue becomes clearly visible. Its default texture makes it look rather synthetic, like a
plastic material. In Figure 60, the combination of a linear ramp color map and a solid
fractal bump map, as well as overall diffusion and specularity, creates a much better-looking tongue surface. (Please refer to Appendix B for definitions of ramp and solid fractal.)

Figure 60: Tongue Texture.

The two textures used for the tongue did not have to be created; they are predefined in Maya’s texture library. Each has its own unique attributes that you can manipulate to reach the desired effects.

As far as the teeth and gums are concerned, they have kept their default textures.
Although texturing them could add to Pepi’s realism, I did not find it crucial for them to have a great deal of detail. As with the eyes and tongue, their specularity has been appropriately set.
The texturing phase for Pepi is now complete. Figure 61 shows the final textured
model. To realize the full extent of Pepi’s transformation and the importance of the
texturing phase, compare Figure 61 to the before picture in Figure 54.

Figure 61: Pepi Complete with Textures.

One important thing to keep in mind is that it was possible to see the effects of each texture map in this section only through proper lighting of the model. In the real world, it
is light that allows us to see our surroundings. In Computer Graphics, digital lights play
the same role.
In Maya, there are four types of lights that you can manipulate or combine to
simulate any desired lighting effects. Briefly described, a point light shines evenly in all
directions, like a light bulb. Rays from a point light are not parallel, and the position of an
object with respect to the light source affects the angle at which light strikes the object. A
spot light has a specific range and direction that are determined by its cone of influence. A
A directional light shines evenly, emitting parallel rays in one direction only, similarly to
the sun. Finally, an ambient light is used to simulate a combination of direct light and the
resulting indirect light reflected by an object’s surroundings. It produces a constant
illumination on all surfaces, regardless of their orientations.
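The difference between the two light types used for Pepi's scene, directional and ambient, can be sketched in a single Python expression. This is a simplified illustration, not Maya's lighting code; the function and intensity values are made up:

```python
def illumination(n_dot_l, ambient_intensity, directional_intensity):
    """Combine an ambient and a directional light at one surface point.
    The ambient term is constant regardless of orientation; the
    directional term depends only on the angle between the surface
    normal and the light's parallel rays."""
    return ambient_intensity + directional_intensity * max(0.0, n_dot_l)

# A surface facing away from the directional light still gets the ambient term:
print(illumination(-0.5, 0.2, 0.8))  # 0.2
# A surface facing the light head-on receives both contributions:
print(illumination(1.0, 0.2, 0.8))   # 1.0
```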
Placing the lights in the scene and specifying their orientation and influence is not
an easy process. As previously mentioned, Maya’s hardware texturing option can be used
to preview a light node’s effects.
Figure 62 shows two light nodes that I have placed in the scene. One is a
directional light, and the other is an ambient light. The directional light is oriented to
illuminate Pepi’s face in a certain way, while the ambient light controls the overall
atmosphere of the scene.

Figure 62: Lighting the Scene.

Lighting is never to be overlooked. It is the most important ingredient before the final animation can be rendered. Without appropriate lighting, all the effort put into the creation of texture maps is meaningless.

2.7. Some Design Ideas for the Future:

Before I describe the animation process, I would like to briefly go over some of
the possibilities for extending this project, in an effort to further develop Pepi’s human
qualities. I will keep the focus on the improvement of Pepi’s head and facial behavior.
One obvious feature that Pepi is currently lacking is hair. There are two different
approaches for modeling hair. One is to create the appearance of hair by using bump and
texture maps as we did for Pepi’s skin in the previous section. The more realistic
approach, however, is to create actual individual hairs, each a separate piece of geometry.
There is no predefined tool for creating hair-like geometry with Maya 2.0. However, in
Chapter 1, section 1.4., I mentioned that the Artisan module could be used for “script
painting”. The geometry for one hair would be defined as our brush, and could be then
painted over a surface, with options such as hair thickness, density, or length. Each hair
would be assigned proper weights, and influenced by a gravity node in the scene, in order
to fall and flow naturally. Such a geometry-painting script is provided as a sample on
the “Discover Maya” CD-ROM shipped with Maya 2.0.
Having control over Pepi’s skin coloration would be interesting. We could
increase the redness of his skin in some areas in order to simulate emotions such as
blushing or anger.
Another characteristic that would significantly enhance Pepi’s credibility is more
realistic eyes. Our pupils are constantly moving, dilating and contracting as a result of
focusing, and as a reaction to light intensity as well. This could be automated with scripts that would adjust the pupils’ diameter depending on the lighting in the scene,
as well as the distance between the eyes and the current object they are focusing on. The
pupils could be either separate geometry that the script would resize, or part of a texture,
in which case, the portion of the texture containing the pupil would be scaled
accordingly.
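Such a pupil script could be as simple as mapping light intensity to a diameter and letting the script scale the pupil geometry or texture region accordingly. The following Python sketch of the response curve is purely hypothetical; the names, the linear response, and the 2-to-8 unit range are all assumptions made for illustration:

```python
def pupil_diameter(light_intensity, d_min=2.0, d_max=8.0):
    """Contract the pupil as the scene's light intensity (0..1) rises.
    A simple linear response; a real script might also factor in the
    distance to the object the eyes are currently focusing on."""
    clamped = max(0.0, min(1.0, light_intensity))
    return d_max - clamped * (d_max - d_min)

print(pupil_diameter(0.0))  # 8.0: dark scene, fully dilated
print(pupil_diameter(1.0))  # 2.0: bright scene, fully contracted
print(pupil_diameter(0.5))  # 5.0: in between
```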
Realistic sweat or tears can be very impressive too and can be obtained through
the use of particle effects and soft body transformations. For example, one method for
creating tears would be the following: First, we would make Pepi’s skin surface aware of
collisions, and we would add a gravity node to the scene. Then we would create a
teardrop geometry that we would surround with a lattice deformer (as we did for the
tongue in section 2.2.). Then we would transform the lattice structure into a soft body.
We would have to create some attributes for the teardrop, such as life-time (does the
teardrop slide all the way down Pepi’s cheek, or does it dry out before getting there?),
viscosity (does the teardrop slide more slowly, sticking to the skin surface, “fighting the
gravity node’s effect”, or does it drop fast because it is more fluid?), tail size (the size of
the “tail” left behind the teardrop), etc. After applying a proper water texture to the
teardrop, a script would be written to generate teardrops inside Pepi’s eye-socket.
Immediately after its creation, the teardrop would be set in motion, pulled down by the scene’s gravity node; it would then collide with Pepi’s skin surface, deform as it slides down, and wobble as it detaches from Pepi’s skin and falls through the air.
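The proposed viscosity attribute can be sketched numerically: treat it as a drag force opposing the gravity node's pull and integrate the teardrop's slide step by step. This back-of-the-envelope Python sketch is hypothetical, not Maya dynamics code; all names and constants are made up for illustration:

```python
def slide_distance(steps, dt=0.1, gravity=9.8, viscosity=0.5):
    """Euler-integrate a teardrop sliding down the skin: gravity pulls
    it, while viscosity acts as drag that 'fights the gravity node's
    effect'. Returns the distance covered after `steps` time steps."""
    velocity = 0.0
    distance = 0.0
    for _ in range(steps):
        velocity += (gravity - viscosity * velocity) * dt
        distance += velocity * dt
    return distance

# A more viscous (stickier) tear covers less ground in the same two seconds:
print(slide_distance(20, viscosity=2.0) < slide_distance(20, viscosity=0.5))  # True
```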
As for the custom interface developed in section 2.5., it could be modified to
include additional tabs for animating all of Pepi’s current attributes. One such tab could
be called “Head movements” for example. It would have controls for neck rotation:
tilting the head up and down, tilting the head left and right, and turning the head left and right. For future additions, new tabs or control groups can be provided, such as a “create
tear in left eye” button, or a slider to control the degree of redness of Pepi’s skin.
The few possibilities that I have described above, although not effortless, are
fairly simple to implement. There are really no limits to improving the look and feel of a
virtual character. Tools such as Artisan take 3D design to a higher level, by giving much
more freedom to the artist. A new set of fantastic tools, available in the next version of
Maya, will allow for the creation of effects such as the ones I have discussed above, with
an interface as intuitive and straightforward as the Artisan interface. At the time of this
report, Alias|Wavefront has announced Maya 2.5’s release for November 02, 1999. The
revolutionary tool set available with this release has been called “Paint Effects”. I have
provided a brief preview of Paint Effects’ capabilities in Appendix B.

2.8. Animating Pepi:

In this last section, I will describe the process of creating an animation sequence,
as well as the main tool provided by Maya to edit the quality of the animation.
In traditional cel-based animation, one would normally start by designing the
important or “key” scenes at selected frames of the animation, and then proceed by
drawing all of the transition frames from one key frame to the other. This latter process is
known as “in-betweening”. Computer animation is very similar to cel-based animation,
with the exception that the in-betweening part is handled automatically by the animation
program [11].
When an object is animated in Maya, one or more of its attributes are in fact being
modified over time. When we set keys on an object’s attribute, we are creating markers to
represent the value of that attribute at particular times. Maya then interpolates the values
of the attribute at in-between time frames, and creates animation curves to graphically
represent these interpolated values. Maya’s graph editor can be used to edit these
animation curves, in order to refine the animation.
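A minimal Python sketch makes the in-betweening idea concrete. Maya actually fits smooth spline animation curves through the keys; linear interpolation is used here only to illustrate how in-between values are derived from the keys themselves:

```python
def lerp_keys(keys, t):
    """In-between a sorted list of (time, value) animation keys by
    linear interpolation. (Maya's animation curves are smooth splines;
    linear interpolation is the simplest stand-in for the same idea.)"""
    for (t0, v0), (t1, v1) in zip(keys, keys[1:]):
        if t0 <= t <= t1:
            u = (t - t0) / (t1 - t0)
            return v0 + u * (v1 - v0)
    raise ValueError("time lies outside the keyed range")

# Blink keys like Pepi's: closed at time 10, open to 6 at 12, closed at 14.
blink = [(10, 0.0), (12, 6.0), (14, 0.0)]
print(lerp_keys(blink, 11))  # 3.0: halfway through the opening half
print(lerp_keys(blink, 13))  # 3.0: the symmetric closing half
```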
With the custom interface that we have created in section 2.5., setting animation
keys is straightforward. However, as we play back the resulting animation sequence,
Pepi’s motion seems to lack some character. His movements are too consistent, similar to
that of a robot. We can use the graph editor to edit the quality of the original animation,
in order to add some character to the motions.
Figure 63 shows the graph editor displaying the animation curve for Pepi’s blink
attribute.

Figure 63: The Graph Editor.

The dots on the curve represent the keys set using the custom interface. At time 10 (blink = 0), a very gradual blinking process is indicated by the curve smoothly going up until time 12 (blink = 6), where, again, it smoothly decreases until time 14 (back to blink = 0). The two halves of the curve are symmetrical, meaning that the motion is very consistent.

We can modify the shape of the curve by moving the position of the keys, or by
changing the direction of the tangents at each key, to obtain a specific motion. For
example, as you can see in Figure 64, by making the curve steeper as it approaches the
end of the blink period, we can make the eyelid stay open longer, and close suddenly at
the end.

Figure 64:
Modified Animation Curve for
the Blink Attribute.

By editing this curve, we are overriding the way that Maya has handled the in-betweening process.

The graph editor can also be used to “clean up” unnecessary keys that do not
contribute to the motion, or to add new keys to refine the motion.
Once satisfied with the animation, we can simply render it using Maya’s batch
renderer. The final consideration before rendering, when specifying rendering preferences, is to set a proper surface tessellation, increasing the quality and accuracy of the materials. (Please refer to Appendix B for more on tessellation.)

The lesson to retain here is that all of our efforts put into automating Pepi’s animation mechanism, including the expressions that we wrote for procedural animation and the custom interface that we created to automate the process of setting keys, have now paid off by making the actual animation process extremely simple.

I have included a gallery with pictures of Pepi in Appendix A. Also, as you might
have already noticed, a small animation of Pepi can be viewed by quickly flipping the
pages of this report.

Chapter 3: Project Summary

With Maya, Alias|Wavefront has created the most powerful, comprehensive, and
flexible 3D animation and visual effects tool available today. Maya introduces a
dramatically new approach to 3D design, giving 3D artists more freedom to create, by
providing virtual tools that “feel” and operate as closely to the real thing as possible.
Interaction becomes more intuitive and natural.
A node-based architecture and MEL, Maya’s embedded scripting language, give
Maya its power and flexibility. Although the details of the architecture are transparent
during creation, the artist has the ability to access and edit the architecture to customize
Maya in any desired way. This not only means the ability to manipulate the construction
history of the objects you create, but also the power to extend the available tools and
interfaces, and even to create brand new ones that fit your specific production
requirements.
The Maya interface contains components that can be easily manipulated in order
to vary the display complexity on your screen, depending on your level of skill or
comfort.
Maya’s very intuitive virtual tools operate much like real tools. For example,
Maya’s Artisan module, a brush-based modeling interface, makes manipulating surfaces
very much like sculpting real clay. Besides allowing you to transform a surface’s shape,
Artisan can be used to paint behaviors or scripts directly on the surface.
Complex visual effects are achieved through multiple integrated simulations of
dynamic and particle systems that follow physical laws.
Being the leading 3D Visual Effects application, Maya is currently used by some
of the major companies in the entertainment and visualization industry.

With this project, I have demonstrated only a small fraction of what you can do with the Maya environment. However, most of the general concepts mentioned above are
illustrated by this project.

In section 2.2., we saw how modeling our character Pepi’s head and its
components was straightforward, using the Artisan module and the other intuitive
manipulation tools provided by Maya.
Then in section 2.3., we implemented various motion mechanisms for Pepi and
experimented with Maya’s skeleton creation and skin binding procedures. We refined
some of Pepi’s movements using various techniques, such as weights painting and joint
flexors. We accessed the scene’s node hierarchy, and, using constraints, created specific
connections between some nodes, in order to automate some of the movements. We used
Artisan a second time to model facial expressions and speech poses for Pepi, which we
then used for morphing.
In section 2.4., we were introduced to MEL’s syntax and we wrote some simple
MEL expressions, creating procedural animation to synchronize some of Pepi’s
components.
In section 2.5., we wrote a complete MEL script in order to create a custom
interface, a compact and functional control panel to animate Pepi’s facial expressions and
speech attributes. We also made a minor alteration to the Maya interface by creating a
simple shortcut button inside the main interface to invoke our control panel.
In section 2.6., mapping Pepi with realistic textures was easy, thanks to direct
manipulation and visualization methods provided by Maya.
In section 2.7., we considered some other aspects of Maya, left as future enhancements to the project, that we could explore in order to further develop Pepi’s
human qualities. We discussed using dynamics and particle effects to give Pepi a broader
range of behaviors and actions.
Finally, in section 2.8., we realized how our efforts to make Pepi a “high level”
character, by automating his motion mechanism and developing a custom interface, were
crucial in considerably simplifying the final animation process.

With Maya, Alias|Wavefront has started a new generation of 3D design and animation tools that give 3D artists a tremendous edge over traditional Computer
Graphics techniques. With tools such as Maya, more artists will surely be attracted by the
limitless possibilities provided by the computer as a medium for artistic creation.

References

1. Bill Fleming, Creating Cool Characters, Serious 3D, September/October 1998, Volume 1, Issue 2,
Pages 36-41.

2. Bill Fleming, Creature Design Part Two: “The Surfacing”, Serious 3D, September/October 1998,
Volume 1, Issue 2, Pages 6-9.

3. Bill Fleming, Photoshop – Surfacing a Living Toon, Serious 3D, September/October 1998, Volume 1,
Issue 2, Pages 67-76.

4. Learning Maya 2. Alias|Wavefront, a division of Silicon Graphics Limited, USA, 1999.

5. Learning Maya Artisan, Version 1.0. Alias|Wavefront, a division of Silicon Graphics Limited, USA,
January 1998.

6. Maya Reference: Rendering. Alias|Wavefront, a division of Silicon Graphics Limited, USA, 1999.

7. Robert M. H. McMinn & R. T. Hutchings, Color Atlas of Human Anatomy, Year Book Medical
Publishers, 1985.

8. Denyse Rockey, Phonetic Lexicon, Heyden & Son Ltd., London, 1973.

9. Geoffrey Smith, Photoshop – Prehistoric Creature Surfacing, Serious 3D, June/July 1998, Volume 1,
Issue 1, Pages 59-66.

10. Geoffrey Smith, Surfacing Secrets, Serious 3D, June/July 1998, Volume 1, Issue 1, Pages 12-17.

11. Using Maya: Animation. Alias|Wavefront, a division of Silicon Graphics Limited, USA, 1999.

12. Using Maya: Artisan, Version 1.0. Alias|Wavefront, INC., USA, January 1999.

13. Using Maya: Basics, Version 2. Alias|Wavefront, a division of Silicon Graphics Limited, USA, 1999.

14. Using Maya: Character Setup. Alias|Wavefront, INC., USA, May 1999.

15. Using Maya: Dynamics. Alias|Wavefront, INC., USA, May 1999.

16. Using Maya: Expressions. Alias|Wavefront, INC., USA, May 1999.

17. Using Maya: MEL. Alias|Wavefront, INC., USA, May 1999.

18. Using Maya: NURBS Modeling. Alias|Wavefront, a division of Silicon Graphics Limited, USA, 1999.

19. Using Maya: Rendering. Alias|Wavefront, a division of Silicon Graphics Limited, USA, 1999.

20. Edward F. Walther, Lipreading, Nelson-Hall, Chicago, 1982.

APPENDIX A: Image gallery

APPENDIX B: Definitions

ISOPARMS

The term “isoparm” refers to isoparametric surface curves. An isoparm is therefore a line
of constant U or V value on a surface. Unlike a curve that has only a U parametric
direction, a surface has both U and V parametric directions. You can increase or decrease
a surface’s resolution by inserting or deleting horizontal (U direction) and/or vertical (V
direction) isoparms. You can also detach or subdivide a surface at a specific isoparm. [4,
18]

NURBS

The following definition is on page 3 of the “Using Maya NURBS Modeling” manual.

NURBS is an acronym for Non Uniform Rational B-Spline, which is the type of curve
used in Maya.

Splines were originally developed for shipbuilding. A way to draw a smooth curve
through a set of points was needed. The solution was to place metal weights at points and
pass a thin wooden beam between the weights. The beam, called a spline, adopts a
minimum energy position with respect to the weights, producing a smooth curve.

The influence of each weight is maximum at the point of contact, and decreases smoothly
away from that point.

NURBS are a superset of conics, splines, Beziers. They are parametric polynomial
curves, and are particularly suited for modeling in 3D as they provide excellent continuity
with a minimum number of control points.
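The smoothly decreasing influence of each control point can be seen in the basis functions. Below is a Python sketch of one span of a uniform cubic B-spline, the simplest member of the NURBS family; Maya's own evaluator additionally supports non-uniform knots and rational weights:

```python
def cubic_bspline_point(p0, p1, p2, p3, t):
    """Evaluate one span of a uniform cubic B-spline at t in [0, 1].
    Each control point's weight peaks nearest to it and falls off
    smoothly, like the metal weights acting on a shipbuilder's spline."""
    b0 = (1 - t) ** 3 / 6
    b1 = (3 * t**3 - 6 * t**2 + 4) / 6
    b2 = (-3 * t**3 + 3 * t**2 + 3 * t + 1) / 6
    b3 = t**3 / 6
    return b0 * p0 + b1 * p1 + b2 * p2 + b3 * p3

# The four weights always sum to 1, so the curve never strays outside
# the region spanned by its control points:
print(cubic_bspline_point(1.0, 1.0, 1.0, 1.0, 0.3))  # ~1.0 up to rounding
```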

PAINT EFFECTS

Paint Effects is a new paint technology that will redefine the way artists add beauty and
complexity to 3D scenes.

The following is the preview of Paint Effects from the Alias|Wavefront web site.

Paint Effects is a groundbreaking new paint technology for creating amazing natural
detail on a 2D canvas or in true 3D space. For example, with Paint Effects you can:

• Create a jungle of trees and plants in a couple of minutes, and walk your character
right through it.
• Add flowing hair and flickering flames in seconds for a level of realism previously
unattainable in CG.
• Define a logotype in glorious oil paint, and watch it draw on over time while your
camera flies around it.
• Paint fantastic repeating textures for your game levels and see them update right on
your model.

With Paint Effects, you use brushes to paint real-time rendered strokes onto or between
3D objects or onto a 2D canvas. Paint Effects comes with a myriad of editable, pressure-
sensitive preset brushes to represent countless organic forms of a real and surreal nature,
including hair, trees, leaves, fire, lightning and stars, together with natural media such as
airbrushes, chalks, pastels, crayons, watercolors and oil paint. Create your own brushes
by modifying shading, illumination, shadow, glow, tube, gap and flow animation
attributes, or by blending together existing presets to form new ones. You can also apply
dynamic forces or key-framed animation to the effects you paint in your scenes; for
example, you can make plants grow, have long hair flow in the wind, or cause a river to
flow.

Paint Effects strokes are drawn fully-rendered during interactive painting, providing
immediate feedback; final rendering is extremely fast, resolution-independent and can
include 3D cast shadows, depth-of-field and fog effects and motion blur.

RAMP

A ramp is a range of values that increment from one value to another. Usually used to
describe color and monochrome textures, where the color or gray value linearly or
logarithmically changes from one given color or gray value to the other across the extent
of the image [Maya Help].
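The linear case can be sketched in two lines of Python (an illustration of the definition above, not Maya's ramp node):

```python
def linear_ramp(value0, value1, u):
    """Blend linearly from one color (or gray value) to another as the
    ramp parameter u goes from 0 to 1."""
    return tuple(a + u * (b - a) for a, b in zip(value0, value1))

# Halfway along a black-to-white ramp sits mid-gray:
print(linear_ramp((0.0, 0.0, 0.0), (1.0, 1.0, 1.0), 0.5))  # (0.5, 0.5, 0.5)
```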

SOLID FRACTAL

A solid texture is a procedural texture that, when applied to a surface or group of surfaces, gives the appearance of an object that has been carved from a block of a
substance. Unlike parametric textures, solid textures have no edges, but provide a
continuous-looking appearance [Maya Help]. A fractal is a mathematical object created by applying infinite recursive subdivisions to a basic form while
introducing a random factor at each subdivision. Fractals can be used to create complex
textures such as terrain, clouds, or smoke [Maya Help].
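The recursive subdivision with a random factor can be sketched in one dimension with midpoint displacement, a classic fractal construction. This Python sketch is illustrative only, not Maya's solid fractal texture:

```python
import random

def midpoint_displacement(levels, roughness=0.5, seed=0):
    """Build a 1D fractal profile: repeatedly subdivide a line segment,
    jittering each new midpoint by a random amount whose amplitude
    shrinks at every level (the 'random factor at each subdivision')."""
    random.seed(seed)  # fixed seed so the sketch is reproducible
    points = [0.0, 0.0]  # the two endpoints of the initial segment
    amplitude = 1.0
    for _ in range(levels):
        refined = []
        for a, b in zip(points, points[1:]):
            refined += [a, (a + b) / 2 + random.uniform(-amplitude, amplitude)]
        refined.append(points[-1])
        points = refined
        amplitude *= roughness
    return points

profile = midpoint_displacement(4)
print(len(profile))  # 17: the point count grows 2 -> 3 -> 5 -> 9 -> 17
```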

TESSELLATION

The level of detail of a material applied to a surface depends on the resolution of that
surface, or how finely tessellated the surface is. “A NURBS surface is composed of one
or more patches. During rendering, each patch is divided into an appropriate number of
triangles to approximate the true shape of the surface. This is called tessellation.” [19].
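As a rough Python illustration of why the tessellation setting matters at render time (the uniform grid assumed here is a simplification; Maya chooses the subdivision counts adaptively):

```python
def patch_triangle_count(u_divisions, v_divisions):
    """A patch divided into a u-by-v grid of quads yields two triangles
    per quad; finer tessellation approximates the true surface more
    closely, at the cost of more triangles to render."""
    return u_divisions * v_divisions * 2

print(patch_triangle_count(4, 4))    # 32 triangles at a coarse setting
print(patch_triangle_count(16, 16))  # 512 triangles at a finer one
```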

APPENDIX C: Useful online resources

• http://www.aw.sgi.com

At the Alias|Wavefront web site, you will find complete information about Maya,
including numerous tips and tutorials on all facets of Maya by using the Assistant
Online.

• http://www.highend3d.com

This site features tutorials for a variety of 3D applications including Maya. Two of
the tutorials related to character animation are:

“Expressing emotion through facial animation (General Concept Tutorial)” by Peter Ratner.

“Organic Modeling and Animation in Maya V.1”, a three-part tutorial by Alex Alvarez.

This site also contains some MEL scripting tutorials.

• http://www.gnomon3d.com

School of Visual Effects for film, television, and games.

• http://www.mastering3dgraphics.com

Bill Fleming's fantastic online magazine. This is by far the best computer graphics
site available on the Internet for 3D artists. It provides outstanding tutorials on
general 3D design techniques as well as for specific 2D and 3D applications. I have
gained considerable knowledge from Mr. Bill Fleming’s tutorials.
