

1 March 2011
ON THE COVER
At first glance, the partnership of live-action director Gore Verbinski and legendary VFX studio ILM on an animated movie seems as out of place as, well, a Hawaiian shirt-clad chameleon in a western. But in the CG feature Rango, both pairings couldn't be more perfect. See pg. 10.
Director Gore Verbinski and ILM discuss
the making of Rango.
How audio enhances video games.
Focus on storage in the studio.
The challenges of posting reality TV.
Features
Claim Jumpers
10
Industrial Light & Magic partners up with director Gore Verbinski, drawing on
its VFX experience to create the CG animated feature Rango.
By Barbara Robertson
Commercial Success
21
This year's Super Bowl ads serve up a wide range of digital effects, including an all-CG epic-style invasion, dogs that are the life of the party, a grateful beaver, a black beetle on the go, a car heist that's over the top, and TV icons who show their team spirit.
By Karen Moltenbrey
Cry Wolf
32
A CG werewolf and digital sets help set the stage for a modern-day
retelling of Red Riding Hood.
By Barbara Robertson
Mother of Invention
36
In its last performance, ImageMovers Digital creates an out-of-this-world
CG experience, using its performance-capture technology for the animated
feature film Mars Needs Moms.
By Barbara Robertson
Recruitment
44
Double Negative's talent manager offers some career advice for those seeking positions at DNeg as well as other VFX facilities.
Use a Web-enabled smartphone to access the stories tagged with QR codes
in this issue. If your phone does not have the required software, download
a reader free of charge at www.cgw.com/qr-code-app-info.aspx.
COVER STORY
The Sky's the Limit
28
Projecting and viewing stereoscopic 3D in domed
environments capitalizes on new technological
advancements to offer unique experiences.
March 2011, Vol. 34, Number 2. Innovations in visual computing for DCC professionals.
Departments
Editor's Note: Where's the Creativity?
2
Following some poor performances in past years, this years Super Bowl
ads scored relatively high in terms of their creativity and VFX.
Spotlight
4
Products: Dell's Latitude laptops, tablet, OptiPlex desktops and small form-factor solution, and Precision workstations and mobile workstations. The Foundry's Nuke and NukeX Version 6.2. Vicon's T-Series cameras. Okino's CAD conversion system for SolidWorks 2011. Nvidia's NVS 300.
News: Workstation market continues steady growth. PC graphics chip shipments fall short of expectations.
Viewpoint
8
Social rendering.
Portfolio
42
Khalid Al-Muharraqi.
Back Products
47
Recent product releases.

What's a QR code? Find out in this month's Editor's Note.
Editor's Note
Where's the Creativity?
The average price of a ticket to Super Bowl XLV: $4,700. The average price of a 30-second television commercial during the 2011 game: $3 million. But the real million-dollar question is, did ad agencies obtain the priceless results they were hoping those commercials would generate? At $100,000 per second, the follow-up question should be, did ad agencies make good use of vendors' dollars?
Summaries concerning the results of this yearly Ad Bowl have pointed to the dismal economy for the conservative approach to the Super Bowl advertising of late, or, better said, the general lack of creative content in these million-dollar commercials. Assuredly, audiences can count on an "ahhh" moment from the annual Anheuser-Busch Clydesdale spot, a hearty laugh from one of the brewer's comical Bud Light scenes, or a chuckle from a Coke or Pepsi presentation. To fairly judge the caliber of the game's lineup, though, fans have to look beyond these all-star offerings and instead examine the remaining positions. Based on this assessment, the commercials scored fairly high this year, at least in my book.
Don't get me wrong. Super Bowl XLV brought many flubs, from Christina Aguilera's botched national anthem, to the Steelers' mistake-riddled first-half performance, to the lackluster Black Eyed Peas half-time show (much to my surprise). And a number of commercials fell short of their mark, as well. The Best Buy spot with odd couple Justin Bieber and Ozzy Osbourne was unimaginative, as was the GoDaddy.com spot, which continues to rely on sexy women to sell an unrelated product (without any humor or other much-needed hook). And then there was the backlash from the politically incorrect Timothy Hutton Groupon piece.
How did this super event turn into a circus? Money. When the first Super Bowl aired in 1967, a collective 41 million viewers watched the game. The average price of a 30-second spot: $40,000. Hardly chump change, though the big game among advertisers had not yet started. Nevertheless, there were nuggets of creativity in the ads that aired early on. Among them: the 1967 Noxzema spot featuring New York Jets legendary quarterback Joe Namath, the 1980 Coke ad with Pittsburgh Steelers great Mean Joe Greene (still voted one of the all-time favorites, though it debuted months before the game), and the 1984 Apple Big Brother-themed spot.
Over the years, as the audience expanded, ad executives and vendors began stepping up their game, rolling out some memorable (and not so memorable) commercials. It's debatable whether the quality of the ads rose in conjunction with the price, however. Not in question, though, is how competitive the commercial event has become. Yet, somewhere along the way, ad execs appeared too focused on outdoing one another in terms of absurdity, not creativity. This year, many of them seemed to have dusted off their older playbooks, and with positive results. A number of the more interesting commercials required digital assistance (see "Commercial Success," pg. 21). But cutting-edge VFX cannot go it alone. These commercials have to grab our attention and stay with us. Just as amazing imagery cannot carry a CG animated film without a good story, neither can smart digital work carry a catchy commercial that lacks a creative message or story.
This year, studios including The Mill, Framestore, and Animal Logic took to the field, lending their expertise and applying their digital magic to funny and/or imaginative Super Bowl XLV commercials. And the results were quite nice: what I expect a $3 million commercial to look like.
The Magazine for Digital Content Professionals
EDITORIAL
KAREN MOLTENBREY
Chief Editor
karen@cgw.com (603) 432-7568
CONTRIBUTING EDITORS
Courtney Howard, Jenny Donelan, Kathleen Maher,
George Maestri, Martin McEachern, Barbara Robertson
WILLIAM R. RITTWAGE
Publisher, President and CEO,
COP Communications
ADVERTISING SALES
JEFF VICTOR
Director of Sales, West Coast
jvictor@cgw.com
(847) 367-4073
GARY RHODES
Sales Manager, East Coast & International
grhodes@cgw.com
(631) 274-9530
KELLY RYAN
Marketing Coordinator
kryan@copcomm.com
(818) 291-1155
Editorial Office / LA Sales Office:
620 West Elk Avenue, Glendale, CA 91204
(800) 280-6446
CREATIVE SERVICES AND PRODUCTION
MICHAEL VIGGIANO
Art Director
mviggiano@copcomm.com
CUSTOMER SERVICE
csr@cgw.com
1-800-280-6446, Opt 3
ONLINE AND NEW MEDIA
Stan Belchev
sbelchev@copcomm.com
Computer Graphics World Magazine
is published by Computer Graphics World,
a COP Communications company.
Computer Graphics World does not verify any claims or other information
appearing in any of the advertisements contained in the publication, and
cannot take any responsibility for any losses or other damages incurred
by readers in reliance on such content.
Computer Graphics World cannot be held responsible for the
safekeeping or return of unsolicited articles, manuscripts, photographs,
illustrations or other materials. Address all subscription correspondence to: Computer Graphics World, 620 West Elk Ave, Glendale, CA 91204. Subscriptions are available free to qualified individuals within the United States. Non-qualified subscription rates: USA: $72 for 1 year, $98 for 2 years; Canadian subscriptions: $98 for 1 year and $136 for 2 years; all other countries: $150 for 1 year and $208 for 2 years. Digital subscriptions are available for $27 per year. Subscribers can also contact customer service by calling (800) 280-6446, opt 2 (publishing), opt 1 (subscriptions), or sending an email to csr@cgw.com. Changes of address can be made online at http://www.omeda.com/cgw/ (click on customer service assistance).
Postmaster: Send Address Changes to
Computer Graphics World, P.O. Box 3551,
Northbrook, IL 60065-3551
Please send customer service inquiries to
620 W. Elk Ave., Glendale, CA 91204
Dell has announced 24 new business
computing solutions and form factors,
including laptop, tablet, desktop, and
workstation computers, in one of the
company's largest product introductions. Dell, intending to provide businesses the "Power To Do More," has launched the
Dell Latitude E5420, E5520, E6220,
E6320, E6420, E6520, and E6420
ATG laptops; XT3 convertible tablet;
OptiPlex 990, 790, and 390 desktops;
OptiPlex small form factor all-in-one solu-
tion; Precision T1600 desktop worksta-
tion; and Precision M6600 and M4600
mobile workstations. The new Latitude E
family and Latitude XT3 convertible tablet
include more than 100 design improve-
ments and a range of new features to
meet evolving business needs, including
increased security and manageability.
The Latitude E5420 and E5520 laptops are for budget-conscious professionals, whereas the Latitude E6220, E6320, E6420, E6520, and E6420 ATG business-rugged laptops are designed for demanding conditions. The new Latitude
laptops feature a completely new design;
enhanced security via Dell Data Protec-
tion, Remote Data Delete, and Free Fall
Sensor protection; commonality across
models; a backlit keyboard option; and
Intel second-gen Core processors.
Secure, flexible, and manageable desktops, the new OptiPlex systems offer tool-free access to system components; small footprints and more chassis options; and new Intel vPro processor technology.
Dell Precision high-performance, scalable workstations are purpose-built for graphics and compute-intensive applications, such as special effects, animation, and digital imaging.
The new Dell Precision T1600 workstation features ISV certification on AutoCAD, Pro/Engineer, and other select software; Intel second-generation Core and Xeon processors; AMD and Nvidia graphics; and a tool-less chassis.
The Dell Precision M6600 and
M4600 mobile workstations are
designed for users who need raw
horsepower, scalable performance,
and certified operation.
Dell's new KACE appliances
automate time-consuming, manual IT
tasks, helping companies quickly deploy
and manage new business computing
solutions. KACE provides hardware and
software inventory, software distribution,
patch management, and OS and appli-
cation imaging.
Dell Data Protection Encryption, available with Dell's new computing solutions, is a flexible, auditable endpoint
encryption solution that helps customers
simplify data protection and comply with
security regulations.
Pricing for the Latitude E5000 series
starts at $859, OptiPlex 390 pricing
begins at $650, and the Precision T1600
workstation cost starts at $840.
Dell Unveils Business Computing Solutions
PRODUCT: WORKSTATIONS
The Foundry has upgraded versions of
its compositing applications, Nuke and
NukeX, to Version 6.2, which incorpo-
rates image-based modeling and projec-
tion tools for creating and using scene
geometry in visual effects.
Nuke 6.2's improved rendering performance, flip-book playback, 2D tracker interaction, expression editor and file browser enhancements, and new timeline Dope Sheet, for manipulating animation keyframes and interactively positioning and trimming read clips, combine to improve productivity and workflow.
NukeX 6.2 adds new image-based
modeling tools to help artists automati-
cally create simple scene geometry by
combining a tracked 3D camera with a
selection of 2D image features. Version
6.2 provides new ways to reference and re-apply composited elements into live-action scenes efficiently and realistically, taking advantage of new dense point-cloud generation and automatic calibration of projection cameras from 2D images and geometry. NukeX 6.2 also boasts Pixar RenderMan Pro Server support, enabling the alignment of Nuke-rendered scenes with the rest of the 3D pipeline, matching motion blur and depth of field, and providing flexible control of rendering capabilities, such as raytraced shadows and reflections.
The Foundry Upgrades Nuke, NukeX
PRODUCT: COMPOSITING
Spotlight Products
PRODUCT: GRAPHICS PROCESSOR

Nvidia Introduces Multi-
display Processor
Nvidia has announced its NVS 300 business graphics solution, a graphics processor designed to deliver visual fidelity across up to eight displays, while consuming minimal power.
The NVS 300 graphics processor offers versatile display connectivity in a low-profile, space-saving graphics card design, simplifying IT administration. Compatible with LCD, DLP, and plasma display types and standard tower, workstation, and small-form-factor system configurations, the NVS 300 supports VGA, DVI, DisplayPort, and HDMI at resolutions as high as 2560x1600.
The new graphics processor features Nvidia nView desktop management software and new Nvidia Mosaic technology, which provides seamless taskbar spanning and transparent scaling of any application across up to eight displays. "The NVS 300 is built for demanding enterprises that require high reliability, improved manageability, and tremendous value," explains Jeff Brown, general manager of the Professional Solutions Group at Nvidia. "The ability to support legacy and current display types provides an upgrade path without disrupting existing, complex installations."

The NVS 300 graphics processor sports a passive thermal design and built-in power management technology, which intelligently adjusts power consumption based on the applications in use.
The Nvidia NVS 300 is priced at $149 and is now available in PCI Express x16 and x1 configurations.
PRODUCT: STEREOSCOPY
PNY Technologies Ships
3D Vision Pro
PNY Technologies is offering Nvidia 3D Vision
Pro, a new 3D stereoscopic solution that enables
designers, engineers, architects, and other profes-
sionals who work with complex 3D designs to see
their work in greater detail.
Nvidia 3D Vision Pro, designed to work in conjunc-
tion with Nvidia Quadro by PNY professional graph-
ics solutions, is a combination of wireless, 120Hz
active shutter glasses, an RF communication hub,
and advanced software. The system automatically
transforms graphics applications into full stereo-
scopic 3D. 3D Vision Pro's 2.4GHz radio-frequency communication provides key features, including a range of up to 100 feet, significantly more than infrared-based 3D glasses; no line-of-sight require-
ment between the glasses and emitter, useful for
multi-user power walls or auditoriums; bidirectional
communication, enabling installations to verify that
the glasses are operating; and explicit connection
between the glasses and the hub, without crosstalk,
for multi-user environments such as studios or labs.
DCC artists, product designers, physicians, and
other professionals can now see their creations
in 3D. Businesses looking to provide large-scale
visualizations on video walls or in theaters, studios,
and collaborative virtual environments (CAVEs) can
employ 3D Vision Pro technology to deliver rich, 3D
visualization experiences.
3D Vision Pro supports Windows XP, Windows
Vista, and Windows 7 (32- and 64-bit), as well as
Linux (32- and 64-bit).
Vicon Launches T-Series
S Edition Mocap Cameras
Vicon has rolled out its T-Series cameras optimized for
outdoor motion capture, enabling accurate motion capture
outdoors without interference from natural elements and
lighting. Customers with an existing T-Series system can
also have their cameras upgraded for outdoor capture.
Vicon is also launching three T-Series S Edition motion-
capture cameras: the T40S, T20S, and T10S. Building on
the speed and flexibility of existing T-Series cameras, the new S Edition is said to deliver the fastest full-frame, 1-megapixel mocap camera in the world.
The full Vicon T-Series range now includes: the T10, achieving 250 frames per second (fps) at 1-megapixel resolution; the T10S, the world's fastest full-frame mocap camera, capable
of 1000 fps at 1-megapixel resolution; T20S, offering 690
fps at 2-megapixel resolution; T40S, providing 515 fps at
full-frame resolution (4 megapixels); and T160, a 16-mega-
pixel camera delivering 120 fps at full-frame resolution.
Customers also can customize T-Series cameras with opti-
mized lens and strobe combinations to maximize capture
volumes.
PRODUCT: MOTION CAPTURE
Continued Strength in
the Workstation Market
The workstation market continued its steady, determined
march back to the volume levels prior to the economic
collapse that kicked off in Q4 of 2008. Jon Peddie Research
(JPR) has wrapped up its Q3 analysis of the workstation
market and reports the industry shipped 849,700 worksta-
tions in Q3 2010, representing a robust 31.8 percent year-
over-year growth.
Sequential growth slowed a bit to 6.9 percent (from a 9.6
percent gain in Q2), showing some moderation that could
reduce the potential for a double-dip recovery. The number
was still stronger than expected by typical quarterly cyclical
norms, illustrating that recovery is still solidly headed in the
right direction.
HP was decidedly back on top as workstation volume
leader in Q3 2010, mimicking the surge over Dell the
company achieved in Q3 2009. Yet, one quarter after JPR
announced HP as the new undisputed workstation leader,
Dell bounced back to put the two in a virtual deadlock, a position the firm has been in since. The books are now closed on Q3 2010, and HP has opened up another appreciable gap, taking 40.5 percent of the market to Dell's 37.5 percent. It's clear HP is continuing to move aggressively forward in the workstation business.
The market for professional graphics hardware took a double-dip in Q3 2010, but there's more to the story. In
the workstation-related market for professional graphics
hardware, Q3 provided more of the slowing volume JPR
had expected. More accurately, Q3 went beyond mere
slower growth and took an unexpected dip. Worldwide,
units totaled 1.14 million, down 9.5 percent sequentially
beyond any typical cyclical norm.
Technically the market did experience the dreaded double-
dip, but closer inspection revealed two sides to the Q3
coin. The second dip was likely more symptomatic of exag-
gerated cyclical conditions than an indication of another
substantial dive for both graphics and workstations.
By no means was the whole professional graphics market down, notes JPR. Mobiles were essentially flat, and good news was to be found in the near 9 percent sequential
growth of 3D cards, as both Nvidia and AMD were ramp-
ing up their new-generation models. It left the 2D card sub-
segment as the primary culprit dragging down the overall
market.
2D cards dropped precipitously, down to 333,600 from 497,500 in Q2, the previous quarter. With past 2D shipments likely running a bit too hot for the market to digest, a short-term dip, even a dramatic one, is likely not indicative of any longer-term trends, particularly in light of growth in the substantially higher-revenue segment of 3D cards.
NEWS: WORKSTATIONS
Okino Enhances CAD
Conversion System
Okino Computer Graphics has updated its CAD conver-
sion system for SolidWorks 2011. The conversion pipeline
enables native SolidWorks BREP CAD assembly, part, and presentation files to be converted to major animation and authoring packages, 3D downstream file formats, and visualization/simulation programs.
Okino's NuGraf and PolyTrans software import assembly data, including crack-free geometry, hierarchy, and materials, from native SolidWorks files. Users can take advantage of Okino's high-end rendering, viewing, and scene composition, or optimize and pipeline the data into major 3D file formats, animation packages, and third-party tools.
Okino's 3D data translation solutions directly import SolidWorks assemblies, parts, and presentations into various 3D animation programs; any third-party product with Okino's PolyTrans 3D converters; and major file formats, including Collada, DirectX, DWF-3D, and SketchUp.
The system is now available to current customers within
their valid maintenance period. Okino's Dual-CAD-Granite/Pack license is priced at $510. The SolidWorks conversion pipeline is available in Okino's CAD/Pack for $245.
PRODUCT: CAD
Jon Peddie Research (JPR) has announced estimated graphics chip shipments and suppliers' market share for Q4 of 2010. Overall shipments of graphics
devices for the year 2010 came in below
expectations with an unimpressive 4.3
percent total year-to-year growth. Instead
of the traditional seasonal pickup, market
leader Intel showed decline, which affect-
ed the overall results.
More than 113 million graphics chips
and CPUs with graphics shipped in Q4
2010. Intel was the leader in unit ship-
ments for Q4, elevated by Clarksdale,
continued Atom sales for netbooks, and
Sandy Bridge. On a quarter-to-quarter
basis, however, AMD and Nvidia gained
market share.
AMD reported graphics as 26 percent of the company's total sales, an increase of 8.7 percent sequentially and 0.7 percent from last year. The graphics business benefited from a double-digit volume increase. Intel reported chipset and other revenue of $1.68 billion in Q1, but it does not include embedded graphics CPUs.
Nvidia reported revenues of $844 million for its fiscal Q3 2011, spanning from September to the end of January. The company's next quarter ends in April.
Initial estimates indicated a 15 percent
to 17 percent growth year for the PC.
Gartner and IDC reported the year at
approximately 13 percent. Graphics are
a leading indicator, and lackluster sales
of graphics are a bad-news bellwether
for the PC industry.
Some analysts are saying tablets, or more precisely, the iPad, have cut into low-end PC sales. Given how low the growth was for graphics, a causality relationship may exist.
Market prediction is challenging given uncertainty in definition, methodology, and sell-through; yet, JPR continues to
be optimistic about the future for PCs in
2011. There is momentum for machines
used in business, creative content, and
entertainment. The iPad and coming
tablets will remain attractive adjacent
devices and will primarily affect the low
end of notebook sales.
Numbers were down in Q4, but JPR
expects 2011 to be a strong year for
GPU sales. The full adoption of DX 11
for mainstream and high-end systems will
take place, putting a premium on GPU
sales. AMD's Fusion and Intel's Sandy Bridge should hit their stride. Couple
all that with an improving US and world
economy, and 2011 should be a solid
year all-around.
The Q4 2010 edition of Jon Peddie Research's Market Watch, now available in electronic and hard-copy editions, can
be purchased for $995. An annual
subscription of Market Watch costs
$3500 and includes four quarterly issues.
PC Graphics Shipments Down Year over Year
NEWS: GRAPHICS CHIPS
Social Rendering
By Adam McMahon

The RenderWeb rendering app works with the open-source Blender animation program. (Image: Blender Foundation, www.blender.org.)

Adam McMahon, a PhD candidate at the University of Miami, is the founder of RenderWeb LLC. He can be reached at www.RenderWeb.org.

Despite all the advances in technology, software rendering is still
slow. Of course, the big studios have their rendering farms, but what about small studios or hobbyist animators? For most of us, rendering times hinder our creative flow and cripple our production pipeline. When it comes to rendering large projects, we often have few options beyond the handful of computers in our offices and homes.
How can we get access to more rendering computers? Our friends, family, and colleagues have perfectly fine computers just sitting idle at their homes, and most likely they would be more than willing to share their resources. Yet, how can we quickly and easily utilize their computers to help render our animations?
While there have been a number of Internet-based rendering solutions over the years, they always seemed overly complex. For example, in order to volunteer, the user needs to download special software, install the software, link up projects, and so forth. While the technically minded are able to do this, we want to draw volunteers from all of our friends, regardless of their computer expertise. So, where can we find hundreds of online friends who might be willing to help us render? The answer, of course, is Facebook. By integrating the entire rendering experience within Facebook, we would have a platform that connects animators to a large pool of potential volunteers.
To address this goal, we are developing a new Facebook application called RenderWeb, which allows artists and animators to upload their animation projects. Once the projects are uploaded, Facebook users can volunteer to render by simply clicking on a Web link. The rendering occurs directly within the Web browser, and preview images are displayed to the volunteer. After the animations are rendered, the videos are made available for the entire community to watch and tag.
While we hope to soon integrate commercial renderers, RenderWeb is currently compatible with the popular Blender animation program. Blender was an ideal fit for this project: It is open source, available on multiple platforms, and has a small binary download (which is great for Web applications). Yet, it was no trivial task to get Blender to work within Facebook. Typically, Facebook applications utilize Web languages, such as Java, JavaScript, or Flash. However, Blender is written in the C programming language. While we could have rewritten Blender in a Web language, this would have led to a buggy and incomplete version of Blender. Instead, we decided to develop a distributed rendering platform based on Java. The rendering does not occur in Java; instead, Java acts as a communication layer between Blender and the RenderWeb server. Java detects a user's operating system, temporarily downloads the proper Blender version, executes Blender, and then pipes the images to the Web server. All this occurs without the user having to set up or configure anything. Using this method, we do not need to make any modifications to Blender. In fact, the version of Blender utilized by RenderWeb is byte-for-byte identical to the version distributed by blender.org. Moreover, this modularized approach will allow us to integrate additional renderers into RenderWeb with little effort.
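The launcher is described only in outline: detect the volunteer's operating system, fetch the matching stock Blender build, run it headless, and pipe the frames back. The sketch below is an illustrative reconstruction of that flow in Python rather than Java, and the build-key names are invented; only the Blender command-line flags (-b, -o, -F, -f) are real, listed in the order Blender requires.

```python
import subprocess
import sys

# Map platform identifiers to hypothetical Blender build names.
# (The real RenderWeb presumably keys its downloads differently.)
BUILD_KEYS = {
    "linux": "blender-linux-x86_64",
    "darwin": "blender-macos",
    "win": "blender-windows",
}

def platform_key(platform=None):
    """Pick the Blender build matching the volunteer's operating system."""
    platform = platform or sys.platform
    for prefix, key in BUILD_KEYS.items():
        if platform.startswith(prefix):
            return key
    raise RuntimeError(f"unsupported platform: {platform}")

def render_command(blender_path, scene, frame, out_dir):
    """Build a headless Blender invocation for one frame.

    Uses Blender's real command-line flags: -b (background, no GUI),
    -o (output path pattern), -F (file format), -f (frame number).
    """
    return [
        blender_path, "-b", scene,
        "-o", f"{out_dir}/frame_####",
        "-F", "PNG",
        "-f", str(frame),
    ]

def render_frame(blender_path, scene, frame, out_dir):
    """Run the unmodified Blender binary and return its exit status.

    The images written to out_dir would then be piped back to the
    server, mirroring the communication layer described above.
    """
    cmd = render_command(blender_path, scene, frame, out_dir)
    return subprocess.run(cmd).returncode
```

Because the binary is the unmodified blender.org build driven purely through its public command line, the byte-for-byte claim above holds with no patches to Blender itself.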
Although it is certainly interesting that Blender can be integrated into a Web browser, the true power comes when many instances are working together to render animations. In RenderWeb, the relationships within Facebook are used to direct the flow of computation from computers to specific projects. RenderWeb allocates projects based on the existing relationships within Facebook. When you are volunteering, a higher priority is given toward using your computer to render your friends' projects, as opposed to other projects in the queue. Thus, the more friends an animator has, the higher the potential for computational power.
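The friend-first allocation rule amounts to a stable priority sort over the job queue. A minimal sketch (the tuple shape of a job is an assumption for illustration, not RenderWeb's actual data model):

```python
def prioritize(queue, volunteer_friends):
    """Order render jobs so that projects owned by the volunteer's
    Facebook friends come first; within each group, queue order holds.

    queue: list of (project_name, owner) tuples, oldest first.
    volunteer_friends: set of owners the volunteer is friends with.
    """
    # sorted() is stable, so ties keep their original queue position.
    return sorted(queue, key=lambda job: job[1] not in volunteer_friends)
```

A volunteer whose friends are "ann" and "bob" would render their projects before a stranger's, even if the stranger's job was queued earlier.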
While there will always be members of the community volunteering their computers, sometimes we have an immediate need for significant computational power. To address this need, RenderWeb allows you to automatically update your Facebook wall to inform your friends of projects that need to be rendered. This optional feature posts a link on your Facebook wall that allows your friends to render with just one click. Using this feature, you can easily notify your friends when you have a demanding project in the queue.
In a sense, RenderWeb is similar to cloud computing and online rendering farms. However, while those services are expensive, RenderWeb is free because your friends and the community will share the rendering load. Moreover, since RenderWeb is integrated within a social network, it has the added benefit of allowing you to share your animations with the community and bridge new contacts with other animators.

In summary, we believe that social network integration will be a game changer for Internet-based rendering. RenderWeb will connect animators with friends and communities that are willing to share resources.

If you would like to participate in this new arena of social rendering, join RenderWeb at apps.facebook.com/renderweb. You can upload your own Blender projects or volunteer your computers to render. With a shared community effort, rendering will no longer be a bottleneck in the pipeline. We will all have ample computational power to render everything that our production or creativity demands.
March 2011
Courtesy Doug James.
RenderWeb, which relies on social networking,
offers users a free rendering method.
© Blender Foundation, www.blender.org.
CGI
ILM based Rango's CG characters, including (inset, from left to right) Rango leading the posse, Priscilla, and the Mariachi owls, as well as the town of Dirt (above) and the hot, dusty desert backdrop, on artwork from production designer Mark "Crash" McCreery.
Claim Jumpers
Industrial Light & Magic uses techniques and processes honed in visual effects work to create live-action director Gore Verbinski's first CG feature animation
By Barbara Robertson
Images © 2011 Paramount Pictures.
Saddle up, CG cowboys. The fences are down, and the barn door flew open. We've seen filmmakers straddle the boundary between live-action and animated features for longer than great-grandma's chin whiskers, but we've never seen anything head out for new territory like Rango.

Directed by Gore Verbinski, designed by Mark "Crash" McCreery, and created at Industrial Light & Magic, the Paramount feature, produced by Blind Wink, GK Films, and Nickelodeon, is the first animated film for the live-action director. It's also the first animated film for the designer, and the first animated feature to move through ILM's visual effects pipeline. Did that mean that the director and artists mimicked an animation studio's pipeline and processes? Nope. Cain't say they did.

OK, then, did they adopt Robert Zemeckis's style of making an animated film using live-action techniques? Nope. Didn't go there, either. This film has no motion-captured performances.

Here's how it worked: The crew simply herded the wacky spaghetti western down the road as if it were a visual effects project and adapted to the scale of an animated film as needed. That makes Rango the first animated feature created with visual effects, and it opens the cattle gate to other such projects in the future.

"We all came from live action, and that was our common language," says Tim Alexander, visual effects supervisor. "As we got into the pipeline, we found things we could do better in terms of scale and continuity, but we kept our strengths."
Freaky Frontier
One of ILM's strengths is in creature animation, and boy howdy did they have creatures to animate: 130 individual characters and 50 rigged variations. Of those, 50 were hero characters and 26 were main characters. But, the quantity didn't cause the studio to slack off. "The characters in this film are as detailed as the creatures we create for visual effects," Alexander says.

McCreery designed all the characters based on animals, but, with few exceptions, they act like humans, and all but a few wear multiple layers of clothing. "All the characters are really humans with an animal design motif layered over them," says Hal Hickel, animation supervisor. "The mayor acts like John Huston from Chinatown. He doesn't act like a turtle."

The star, Rango (Johnny Depp), is a chameleon who bounced out of his terrarium from the inside of a car traveling through the desert. As the film begins, he's free, alone, and lost.

But along comes Beans (Isla Fisher), a lovely lady lizard bobbing her way to town in a rickety wooden wagon filled with empty, jostling water bottles. She's holding the reins of a javelina, a crusty wild pig that's pulling the wagon, and she's cranky. Somehow, we've moved into a creature-sized world, and it's rough, tough, and dirty.
Look-development supervisor Damian Steele describes the character design as a cross between Robert Crumb and Beatrix Potter. Others simply call it nasty.

When Rango first sees the town of Dirt on the horizon, it looks like two blocks of ramshackle old buildings lining either side of a main street rising from the hot desert. In Dirt, the suspicious animal citizens who greet Rango wear clothes straight out of a Sergio Leone spaghetti western. And the wary characters in the saloon, mangy creatures all, wear cowboy hats, vests, and gun belts. Three Mariachi owls comment on Rango's prediction.

Dressed in a red Hawaiian shirt, Rango is the stranger in town, a chameleon searching for an identity. So, after a quick look around the saloon, he picks one: a Western hero. Soon, through a series of accidental events, he becomes a hero, and the mayor appoints him sheriff. This is a spaghetti western, so Sheriff Rango will have to save the town from a series of evil plot twists and discover who he really is.

Or, maybe not.
Critters
Geoff Campbell led a team of 12 modelers who, working from McCreery's artwork, sculpted the creatures and their costumes. "Crash quickly made it clear that we were to match the artwork," he says. "We'd ask whether Rango was more of a chameleon or a lizard, and he'd tell us it didn't matter, that what we saw in the artwork was the character." Reference of actual animals would become important later, especially for the look-development artists who paid attention to such things as skin quality, but in terms of the characters' shape, the modelers focused on the artwork.

Typically in animation studios, modelers work from maquettes sculpted from the character designs; they rarely begin with the designs. With three weeks per character allotted for modeling, though, the ILM team decided to go straight into Autodesk's Maya, much as they have done with visual effects characters. But, when questions about tiny details, such as the shape of the teeth and tongue, tied the approval process in knots, the modelers reined back and switched horses.

Rather than trying to sculpt final models from the get-go in Maya, they created 3D maquettes in Pixologic's ZBrush and posed them to match the artwork. The maquettes showed Verbinski and McCreery that the modelers understood the proportions and quality the director and designer wanted. And with that approval, the artists began working in Maya on sculpts that the riggers could prepare for animation, the view painters could texture, and to which the look-development artists could assign materials for rendering. For facial animation, the modelers moved the characters into ILM's proprietary Zeno program to hand-sculpt shapes used to form expressions. FEZ, a FACS-based system in Zeno, placed the shapes and categorized them.

"Some of the things we modeled were quite grotesque," Campbell says. "One model is a kid with a mullet. He's from the animal world of Dirt, and he's quite ugly. His fur is over-agitated where he was scratching. We'd look at him and burst out laughing in dailies. He was so pathetic. And another is a rodent character with gauze on his eye that we built into the model. All these little things. Each character had something unique that gave a sense of where these people are from. Sometimes textures would handle it. Sometimes we'd model in a bit of a scar or a bandage."
True Grit
Steve Walton, who supervised the view painters (texture painters), and Damian Steele, one of two look-development supervisors on the project, sat within spittin' distance of each other during postproduction. "Damian sat at the next desk over," Walton says. "Everything I do has to work for him, and he has requests for me. It's a direct partnership. Damian refers to it as a three-legged race."

View painters, however, started producing texture maps for the characters a bit ahead of the look-dev technical directors, who jumped in toward the last half of view painting. For Rango alone, Walton estimates the painters created 120 separate effects maps and 20 color maps. "He's a chameleon," Walton says. "Things happen to him."

Modeler Frank Gravatt worked on Rango's shape and scales, which he built individually by hand. "Any time Rango changed, Frank had to go in and reapply the scales," Steele says. "It was like tiling a bathroom." Walton added the smaller details on top, creating maps that defined the skin's shininess and translucency. Then, the look-dev artists applied the materials, which drive the Pixar RenderMan shaders, and generated the hair using an ILM-specific process.
"That's when we had to consider the size of the character," Steele says. "Light diffuses through an object at a certain rate per centimeter, so when you light with subsurface scattering, it is brighter on a small creature than on one the size of, say, Arnold Schwarzenegger. When there's a lot of scattering, things look babyish and sweet. So, when we rendered Rango, we treated him like a six-foot-tall creature. Also, a lot of RenderMan attributes are scale dependent. You have to work out how lighting affects shadow details and bump details early on, so we work closely with the view painters. We render the creatures in as many situations as possible to try to figure out how they'll look in every situation."

At top, Dirt's mayor, played by Ned Beatty, may look like a turtle, but he acts more like John Huston in Chinatown. At bottom, all the characters in Rango are animals, which meant they had fur, scales, or feathers, a difficult task made harder by multiple layers of costumes.
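The scale dependence Steele describes can be illustrated with a toy Beer-Lambert falloff: subsurface light dies off at a fixed rate per centimeter of tissue, so the same feature transmits far more light at chameleon scale than when the character is treated as six feet tall. The mean free path below is an arbitrary illustrative number, not a value from the production.

```python
import math

def transmitted_fraction(thickness_cm, mean_free_path_cm=0.25):
    """Fraction of light surviving a path through tissue, using a
    Beer-Lambert-style falloff: exp(-thickness / mean free path)."""
    return math.exp(-thickness_cm / mean_free_path_cm)

# The same feature rendered at true chameleon scale versus scaled up
# to a six-foot creature: the thin version transmits far more light,
# which is the babyish, waxy look the crew steered away from.
small = transmitted_fraction(0.1)   # a millimeter-scale sliver of tissue
large = transmitted_fraction(2.0)   # the same feature at human scale
```

Scaling the character up before lighting, as the crew did with Rango, shifts every such calculation toward the "large" regime.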
The look developers began working during the first month of production by sitting in ILM's theater and watching the thumbnails of the entire movie set to a sound track as Verbinski acted out the dialog. "It was fantastic," Steele says. "We knew from the beginning we were working on a good film."

More importantly in terms of the work at hand, they could see most everything that would happen to the characters. "We could see when Rango would get dirty and wet," Steele says. "We learned that Beans would get turned upside down. We could schedule when we were going to panic." The view painters and look-development TDs then spent the following year creating the characters.

"We had around 150 creatures, with 100 detailed enough to stand up to full-frame scrutiny," Steele says, "so it was literally a year of look development. I'd drop my kid at day care, and suddenly I'd be in a dark room staring at an image, trying to describe what I liked and didn't like, and what was wrong. We'd try to describe pictures using words. With Rango, for example, at one extreme, Gore [Verbinski] would say he looked waxy. At the other extreme, that he looked chalky. We'd find those boundaries and then steer between them."

Each day, between 20 and 30 modelers, view painters, look-development TDs, and, often, the visual effects supervisors (John Knoll and Tim Alexander) would sit in the dark room for a half-hour, two hours, sometimes three hours, and decide whether Rango was the right shade of green, whether Beans looked pretty enough, whether the rodents were grungy enough. They'd see between five and 20 creatures a day, many of which were repeat performances.

"Every once in a while, I'd be talking about something, like subsurface scattering, and spot someone I've known for 15 years in the theater," Steele says. "We've been here so long, we're all friends. It was an honor to trust that they'd all bring their part of the puzzle and it would look good."
Because Rango and Beans were the main characters, they were the most difficult for the group to get right. "We had the artwork, but that only goes so far," Walton says. "When we tried to make him look like the artwork, he had too much crunchy detail. At one point, I was in dailies with the people in Los Angeles and I could hear someone in the background there saying Rango looked like fan art. I said, 'Hey. I'm here. I can hear you.' So Damian and I decided to start over from scratch. We took Rango's simple form and found a good look with that. Once we had that, we added levels of detail that made him look real. But, we had to get his basic look first. It was painful. But, it was great."

For Beans, the challenge was in creating a heroine. "She's taller than Rango, but her head is smaller," Campbell says. "When we made her shorter, her proportions became unattractive. We had all kinds of issues."

And she was a leading lady, so she had to look pretty. But, what does a pretty lizard look like, especially one living in a hot, dirty environment? "We tried to get the right amount of bumpiness and shininess, and a color blend that was varied enough, but not splotchy or blotchy," Walton says. "But it's subjective. It depends on who's looking, so you find yourself chasing a bit. Gore used the word 'honest.' She had to feel real and right."
To convince Verbinski they had developed
Mastering the Elements

Raul Essig, CG supervisor of effects, managed a crew of approximately 17 artists who created water and placed heat ripples in the desert, dust in the town of Dirt, haze in the atmosphere, and fire in the campfire. "We covered all the elements, really," he says. "At first I thought this show wouldn't be too hard for my group because it was an animated feature. But that idea was shattered fairly early on. There are lots of sequences in which the effects are very important. A fire-breathing shot, for example."

This was familiar territory for the crew, and they made good use of the various techniques and tools available for visual effects at ILM. For water simulation, the artists used an in-house PLS (particle level set) system, SPH (smoothed-particle hydrodynamics), and a 2.5D simulation, the latter to create ripples on a water surface.

For fire, they used Plume, a fast GPU-based system developed for Air Bender. For dust, Plume again, and volume rendering. For haze, volume rendering with a particle simulation to describe where the dust is created.

"Early on in the film, the town of Dirt looked really good," Essig says. "But when we started layering in atmospheric passes with haze in the distance and dust swirling by, the look completely changed. All of a sudden, Gore [Verbinski, director] said, 'Oh. That's it. That's what we want.'"

Even though Essig and his team planned the ways in which they would handle each shot, Essig believes it's important to be flexible. "Shots change. Crews move on and off shows. You might have an artist available with great skills in a technique different from the original plan," he says. "You have to be willing to adapt. We have so many tools, and the artists all have so much experience, you have to trust that experience to know the best way to approach a problem. It's a fluid process." —Barbara Robertson
The main characters Rango and Beans were difficult for the modelers and look developers to nail down. Rango needed exactly the right amount of detail: not too much or he'd look crusty. Beans needed to look pretty enough to be a leading lady even though she's a lizard, which took several iterations.
the Beans he had in mind, the artists asked the animators to help them create a motion test. "We had her point a shotgun at the camera and do some dialog," Steele says. "Only then did the director see that we had come up with the character."

The first two characters the look-dev team worked on were Rango and Priscilla (Abigail Breslin), a cute little girl based on a Madagascar rat who has an unusual fondness for death. "We started with the sweetest characters," Steele says, "which was good, because we hadn't perfected techniques for dirtying things up."

In fact, when they first started dirtying up the grungy characters, it didn't feel right. "It was too disturbing," Steele says. "So, we pushed ahead slowly. CG makes things clean; there's a real art in making things dirty. One person added matted hair. The next added dirt. And then late in preproduction we started look dev on the inbred rodents. Mike Halsted had gotten quite far with the level of distress by then, and they were just disgusting. Matted, hairy, everything they should have been."

Once the characters passed their turntable examination, they moved into production, but the look-dev artists kept a close eye on their treasures, all the way through to the end. "Each one is like a little Faberge egg," Steele says, "little jewels that everyone contributed to. It's our job to maintain them." The TDs were, in a way, customer support technicians for the characters.

This meant during production, for example, when Verbinski noticed that the eye lines in the rendered shots weren't the same as in the animation he had finaled, it was up to the look-dev TDs to fix the problem. "In this case, we realized that refraction caused an error that was measurable," Steele says. "So, we fixed the problem, and the shot TDs imported the fix into the shots."

The problem arose because the animators didn't see the refraction, and refraction can skew the eye line. So, Jason Smith, a TD supervisor, configured Maya to give animators a preview of what refraction would do. "There was a deeper problem, though," Steele says. "It wasn't just eye direction. Our eyes have an apparent depth to them, and where the eye was sitting in the socket caused part of the problem. There were lots of little issues like that, so we were always on a heightened state of alarm. It was quite nice when the show ended and I no longer had to worry about being responsible for things that could blow up."
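The eye-line skew is a straight consequence of Snell's law: a ray leaving the iris bends as it exits the cornea, so the rendered gaze direction differs from the rigged one. A toy calculation with textbook refractive indices (assumptions for illustration, not ILM's shader values):

```python
import math

def refracted_angle(theta_incident_deg, n_from=1.376, n_to=1.0):
    """Angle of a ray after crossing an interface, per Snell's law:
    n_from * sin(theta_i) = n_to * sin(theta_t).
    Defaults: leaving the cornea (index ~1.376) into air (1.0)."""
    s = n_from / n_to * math.sin(math.radians(theta_incident_deg))
    return math.degrees(math.asin(s))

# A ray leaving the cornea 20 degrees off the surface normal exits at
# a noticeably wider angle -- enough to shift a close-up eye line.
exit_angle = refracted_angle(20.0)
skew_deg = exit_angle - 20.0
```

Even a few degrees of skew is visible in a close-up, which is why giving animators a refraction preview mattered.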
Keep Them Doggies Movin'
James Tooley led the team of creature-development TDs responsible for rigging and skinning the characters, and for dynamic simulations. Most of the characters were bipeds that could use an evolution of rigs developed over the years at ILM. The main innovation for this film was that the riggers created a GUI for each character, a request from the animators.

"Animators could manipulate the characters in Maya, or select parts of the character through the UI without touching the screen in Maya," says Brian Paik, associate creature supervisor. "The two systems would update each other."

The riggers also added a little squash and stretch to the rigs to give the characters more flexibility than a character in a live-action film might require, and gave the animators the ability to tweak the characters' silhouettes using extra deformers on the skin.

In addition to the bipedal characters, the riggers set up systems for quadrupeds, characters with wings, and one massive snake. "We had everything," Tooley says. "Feathers, scales, hair, multiple layers of clothing that we dynamically simulated." For characters with wings, the team created one rig that worked whether the bird folded its wing or flew. In the past, animators had to work with separate rigs.

The snake was even trickier. If stretched out, Rattlesnake Jake (Bill Nighy) would be 100
Triage

To cope with the number of characters in the film, ILM used a bit less geometry and a lot more displacement. "A lot of our characters had low CVs, so we had to optimize for displacement," says Tim Alexander, visual effects supervisor. And that produced interesting rendering challenges.

"We'd get displacement pops," Alexander says. "Hair sizzle. Other problems. So, we assigned two CG supervisors, Pat Myers and Kevin Sprout, to render triage. We told the TDs that if they spent more than an hour on a problem to call their supervisor or send the problem to render triage."

For Myers, that meant spending much of the show fixing problems. "We had a lot of shots in play, and artifacts would show up in a render, so we worked on the noodly problems with the complicated solutions."

Light leakage, for example. "One time we ended up with infinite numbers inside part of a calculation, which produced cool effects," Myers says. "It made a character look like it had explosions of light under the skin. Or, we'd run a scatter calculation and suddenly have random spots with bursts of red glow on a character's teeth. We called them the red dots of death." That turned out to be a slight bug in Pixar's RenderMan that showed up with a specific kind of geometry and specific kind of lighting environment, which the RenderMan team fixed.

The triage team fixed a tiny buzz that showed up on the edge of a sheriff's badge in an extreme close-up. A sizzle in the specular highlights on refracted glass. And, sometimes, they gave up.

"We'd do as much as we could and then the 'p' word came up," Myers says. "We'd say, 'Well, we can paint.' I was amazed we held off from doing that more; we really didn't want to go down that path."

Myers also kept an eye out for renders that ran too long. "I didn't keep track of stats, but I can guarantee that some frames rendered for pretty close to a day. Sometimes that was in error," he says. "When people are chasing artifacts in a problem shot, they turn up the quality knobs, and then after they fix the problem, they tend to leave the knobs turned up. I told everyone that if they had a shot that ran for 24 hours and didn't finish, to call me."

When Myers first started on the film, he worked on creating a terrain shader to help the artists deal procedurally with the details in the huge desert environment. But then, he moved into problem solving and stayed there. "With most problems, if you attack them from a bunch of different angles, you can get to the bottom of them," he says. "I've always enjoyed problem solving. But I felt a little like the elves working on the shoes." —Barbara Robertson
feet long. Rather than rattles, Jake has a gun, and he carries 257 bullets. He has 60 teeth, two fangs, and 7855 scales. Each scale is a specific piece of geometry that moves in a certain way as his body bends. Animators moved the snake in Maya using layered controls, and the scales moved appropriately. "Keiji Yamaguchi created the complex rig," Paik says. "Animators could pose the character and use controls to offset and hold its form. The challenge was with the scales. Deformers sent information about the orientation of the snake to transformers that contoured the scales correctly so they didn't interpenetrate."
Fancy Duds
The biggest challenge was in simulating the multiple layers of clothing. Modelers created each piece of clothing, in detail, stashing various bits in a wardrobe database: gun belts, holsters, boots, vests, hats, and so forth. Verbinski and McCreery had specified that even though the characters didn't look like they're from the natural world, the clothing had to be photoreal. So, modelers built seams into the clothing and, as they had done for Pirates, added loose threads to the model database.

"Someone could grab loose threads and move them into place in a Maya scene or Zeno, and that would take the crispness out of the clothes," Campbell says. "And then Steve [Walton] probably mentioned salt stains and sweat stains on the clothing. We had holes, loose threads, the look of clothes that had been re-stitched."

The first week Steele was on the show, McCreery came in with costumes and props from Universal Pictures' wardrobe department for the view painters and look-development TDs to see and touch. "Crash really wanted everything to feel like people were watching a good-old western movie," Walton says.
Over the course of the film, Rango alone had 13 costume changes. "We had to rig all those costumes," Paik says. "Because they'd mix and match costume parts, we set them up so no matter what outfit he had on, the parts would interact with one another."

The animators would put Rango into his pose with a silhouette they liked, and then the creature supervisor would match that starting pose for the simulation pass. Usually Rango and the other characters wore multiple layers of clothing, which the character TDs simulated one layer at a time, starting from the inside. In some cases, however, the layers needed to interact, and in others, the layers needed to move together.
"We have built incredible simulation controls into our dynamic software," Tooley says. "One of the things we can do is paint areas where we want things to behave differently. For example, if a character is wearing a shirt under pants with suspenders, we can pre-simulate some of the costume, style it the way we want, and then tack down the shirt, almost like gluing it under the suspenders, so they move together."

ILM first used tacks with dynamic simulations in Star Wars Episode I by writing code that specified particular actions for individual CVs. Now, texture maps handle the instructions, which help create the realistic result.
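The painted tack maps Tooley mentions can be pictured as per-vertex weights that blend the simulated cloth back toward a pre-styled pose: weight 1 glues a vertex down, weight 0 leaves it fully dynamic. This is a schematic illustration of the idea, not ILM's Zeno implementation.

```python
def apply_tacks(simulated, styled, weights):
    """Blend each simulated vertex toward its pre-styled position.

    simulated, styled: lists of (x, y, z) vertex positions.
    weights: painted tack values in [0, 1]; 1 = fully tacked down.
    """
    out = []
    for (sx, sy, sz), (tx, ty, tz), w in zip(simulated, styled, weights):
        out.append((
            sx + w * (tx - sx),
            sy + w * (ty - sy),
            sz + w * (tz - tz) if False else sz + w * (tz - sz),
        ))
    return out
```

In the shirt-under-suspenders example, the region under the straps would be painted near 1 so it rides with the pre-styled costume, while the free fabric keeps its full simulation.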
That photorealistic movement is something Tooley says he doesn't often see in animated films. Instead, he notices that crews often use wrinkle maps and other deformations rather than dynamic simulations.

"When I was an effects animator at Disney," Tooley says, "we drew shadow maps, tone maps, and other hand-drawn effects on top of a character to move the clothing. But when everyone switched from 2D to 3D, it became more difficult to move clothes correctly without any snagging or tangling. [In visual effects] we have to have characters side by side with actors, so their costumes have to move right and look right. We had to work hard to get there, but it's something we've been doing for a long time."

Tooley counts resolution among the most important elements in creating realistic cloth simulations. "You can tell," he says. "If clothes look smooth with no wrinkles, they're using low-resolution cloth. On clothing, to represent a fold, the ideal is three polygons per quarter-inch. I'd like to go higher: three polygons per eighth-inch. I think some of the Jedi costumes were three polygons per half-inch, and Rango's pajamas were close to that. I wish it could be more, but it's a trade-off: quality and look versus speed."
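Tooley's rule of thumb converts directly into polygon budgets: three polygons per quarter-inch is 12 per linear inch, or 144 quads per square inch of cloth. A back-of-the-envelope sketch (the garment area is an assumed example, not a figure from the film):

```python
def polys_per_inch(polys_per_quarter_inch=3):
    """Linear polygon density implied by the quarter-inch rule."""
    return polys_per_quarter_inch * 4

def garment_poly_estimate(area_sq_inches, polys_per_quarter_inch=3):
    """Rough quad count for a garment at the stated fold resolution."""
    linear = polys_per_inch(polys_per_quarter_inch)
    return area_sq_inches * linear * linear

# A hypothetical 200-square-inch costume piece at Tooley's ideal
# resolution versus the coarser half-inch Jedi-era setting.
ideal = garment_poly_estimate(200, 3)       # quarter-inch rule
coarse = garment_poly_estimate(200, 1.5)    # roughly half-inch spacing
```

Doubling the linear density quadruples the quad count, which is the quality-versus-speed trade-off Tooley describes.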
Pretty Curls, Fine Feathers
For hair, ILM uses a guide-hair-based system in which modelers place curves on the characters that the renderer, RenderMan, instances to create full heads of hair and furry bodies. Particular to this film were Beans' long curls and the clumpy hair on the ill-kempt critters. Animators could place the curls using a single strand of bendable, flexible hair. To wiggle the curls during the simulation without having them uncurl wildly, the riggers put the curls in springy tetrahedral cages. Look-development TDs managed the tufting and clumping on the interpolated hairs. "It was a back-and-forth process," Tooley says.

In one sequence, the sequence that turns Rango into a hero, an enormous hawk chases him through the town, crashing through buildings as he runs. When the hawk was close, each feather was a piece of geometry that deformed based on the skin movement. "After animation, a rigging artist would set up cloth meshes that represented each feather," Paik says. "We'd manipulate the springs on those so that the feathers would collide with each other, but not interpenetrate. Since it's spring-based, sometimes it would drag and elongate the feathers, but another deformer on top would stretch it back down to its original length while keeping the simulation."
The Gang's All Here
In addition to Johnny Depp, Isla Fisher, Abigail Breslin, and Bill Nighy, Verbinski assembled a wide cast of actors to play the 100-plus characters, including Ned Beatty (Tortoise John), Ray Winstone (Bad Bill), and others. Voice actors for most animated films work solo in recording booths, although occasionally a director might corral two in one sound booth so they could interact. Verbinski, however, wanted the whole shootin' match working together, just as they do on a live-action set.
Visual effects artists used every simulation trick in the studio's toolbox, including Plume, developed for Air Bender, to create and manage art-directed fire.
"He wanted to direct the ensemble even if there would be technical problems with tangled lines," Hickel says. "He wanted props and costumes. He wanted to walk the set and know how many steps Rango takes in the saloon to get to the bar."

Verbinski directed and recorded the actors during a 20-day period on stage sets with props appropriate to the scene: tables, chairs, and a bar in the saloon, for example, and a desk in the mayor's office. A curtained area nearby gave the actors an opportunity to record their lines in a quieter area, but they did so still as an ensemble, and, Hickel points out, they had the sense memory of the earlier performance. "Gore likes to create chaos and then catch moments with a butterfly net," Hickel says.

Even so, Verbinski had spent the previous year creating a story reel: 2D thumbnail drawings that he cut together into the entire film. While he directed the actors on stage, a script supervisor checked their timing against the reel. After the recording session, editors added selected bits of dialog to the reel.
Ready for the Roundup
The story reel gave layout artists a starting point for selecting lenses, positioning the camera, blocking characters, and assembling assets. Layout supervisor Colin Benoit concentrated primarily on camera work; Nick Walker, who joined ILM from PDI/DreamWorks, where he had been head of layout for Shrek the Third, handled the assets. "Layout is where all the assets funnel in and come together," Hickel says. "We hadn't focused on dressing a natural terrain, on pulling together every pebble, bar stool, shot glass. Our layout artists know matchmoving."

Benoit had moved beyond matchmoving into cameramoving, especially on the last Star Trek film, where he found himself facing black cards with instructions, rather than live-action plates. For Rango, he worked directly with Verbinski. "It was amazing," Benoit says. "Normally, our layout artists don't work on conversation scenes. We don't see how a director shoots something that doesn't have effects in it. This was like two years of film school with Gore Verbinski."

Each time Verbinski gave the artists a sequence from the storyboards to work on, he'd explain how he would have shot it if he had been on set. "From that point on, we treated the entire movie like a live-action shoot," Benoit says. "Every term was based on live-action cinematography; all the camera work was based in reality."

The layout artists would, for example, bring the town of Dirt, Rango, and Beans into Zeno, and then shoot the scene with virtual cameras equipped with virtual lenses that matched real-world lenses. "For the most part, we used the same lens kit that Gore shot Pirates with," Benoit says. "We'd shoot scenes as if we had a film crew with grips and gaffers in the town of Dirt."

Sometimes, Verbinski would come to ILM and frame shots within locations while the layout artists were working on the rough layouts, by looking at first-stage geometry created for the sets and backgrounds. He could see the 3D set on a Wacom Cintiq tablet, change the camera angle, the lenses, and the focus. "Suppose we had a shot in the general store," Benoit says. "We'd populate the store with characters that we could move around. Gore would be on the motion-capture stage. He could frame the shots and take snapshots. Or, if the space seemed too small, he might ask us to push a wall out or move a post."

When working at his desk, Benoit typically keyframed the camera, sometimes with Verbinski looking over his shoulder. Occasionally, he used an Xsens device that gave him orientation control.
The rough layouts with the keyframed camera moves and blocked characters moved into animation, and then the final animation came back to the layout department for a final camera pass. For this, the layout artists spent 90 percent of their time on the motion-capture stage. "We had multiple rigs," Benoit says. "We had a dolly, a jib arm crane, a shoulder-mount rig, a handheld rig, so we could reshoot the camera based on the action."

The reason for reshooting was that Verbinski wanted to have the camera moves feel like an actual camera operator had shot them. "Matt Neopolitan on our team did most of that work," Benoit says, "but we also had Gore's camera operator, Martin Schaer, which was fantastic. Martin came in and shot four or five scenes with us. Just watching him work was a lesson in how to do this. He did the campfire scene with Rango and Beans, and helped with the saloon scenes as well."

When Verbinski was in LA rather than at ILM in San Francisco, the teams communicated using CineSync from Rising Sun Research. "He'd draw Rango's head to move him to the left," Benoit says. "He'd draw a new perspective for the camera. For the most part, though, we got the composition and camera working in rough layout."

Animators working in San Francisco and Singapore created all the performances using keyframe animation, no motion capture. "They were all dying to do the acting," Hickel says. "I was concerned at first about directing the animators in Singapore remotely, so I gave them the crowd characters to start, but they segued to hero characters."

Hickel organized the team of approximately 50, the same number he had for Pirates, into character and sequence leads. "Often character effects are action-driven," Hickel says. "This film was acting-driven. We knew we'd kill ourselves if we worked on one shot at a time. We showed whole sequences to Gore, usually with CineSync."

Similarly, Alexander organized the lighting artists by sequences. "They could work on multiple shots as if they were a single shot," he says. Because all the assets moved through the same pipeline as the characters, lighters could light sets and environments on the fly. Alexander estimates that each sequence was a six- to eight-week process.

That process often included advice from award-winning cinematographer Roger Deakins. It did not, however, include color scripts, the little thumbnail paintings artists at animation studios create to design color and mood through a film.

"We'd have people come in and say that we needed a color script," Alexander says. "Then we looked at what we do. We make things look realistic." So, they decided that since they had never needed a color script before to make things look realistic, they didn't need one now. Instead, they talked through each sequence.

"Each sequence has a certain color contrast," Alexander says. "The saloon is dark, scary, gritty, smoky, like something from Once Upon a Time in the West, and every beam of light is intentional."

Dispensing with a color script, doing location scouting and rough layout on a motion-capture stage, creating 3D maquettes in ZBrush rather than sculpting clay figures: these were just a few of the ways in which ILM created an animated film within its visual effects pipeline. The studio had to wrestle with the scale of the film and the number of assets required, which were typical for an animated film but far larger than for any visual effects project. But, in creating Rango, they didn't try to mimic an animation studio. And, they didn't compromise on the quality for which they've staked a claim in visual effects. They worked, as Alexander points out, from their strength. In doing so, they've pushed animated filmmaking into a new frontier.

Barbara Robertson is an award-winning writer and a contributing editor for Computer Graphics World. She can be reached at BarbaraRR@comcast.net.

Tone on the Range

Layout artists and look-development technical directors gave the saloon characters their grungy look. Lighting artists created the mood with help from cinematographer Roger Deakins.

In addition to characters, the artists had to create the world those characters lived in, and this, for Industrial Light & Magic (ILM), was perhaps the greatest departure from their visual effects work. All told, the modelers, painters, look-dev artists, digimatte artists, and set dressers created 289 creature assets, 653 prop assets, and 134 environment and set assets. The town of Dirt has 40 buildings made from old wood and other objects the animals scavenged in the desert. The saloon is a gas can.

ILM art director John Bell created concept art for several of the buildings, miscellaneous water towers, the opening highway sequence, and the desert. "Crash [McCreery, production designer] had a distinct idea about how the desert would differentiate during the story," Bell says. "In the beginning, when Rango is displaced, he wanted a bleak, barren, nondescript landscape with foothills at least 30 miles away. Later, the landscape becomes more interesting, engaging, three dimensional. But the overall feeling was that it was hot, arid, and everything is brittle." Bell drew rocks with hard edges, gave branches jagged angles.

The town, too, needed to give the feeling that this was a hot, arid place. "It's purposely de-saturated," Bell says. "If you study the buildings, you'll see a lot of blue, rust, yellow, ochre, a wide variety of color. But, we didn't want it to be as bold and colorful as other CG films that push saturation."

To create the desert, the digimatte artists expected they would paint and project 2.5D backgrounds in 3D space, as they typically do for visual effects. However, they quickly learned they needed to produce fully 3D environments for director Gore Verbinski to explore and for lighting artists to illuminate.

"The question was how to do full environments without the overhead of modeling and texturing every last piece," says Andy Proctor, digital matte supervisor. The answer was to work in stages and to develop some procedural tools for creating exteriors. For interiors, they could use the typical 3D pipeline: modeling, view painting for textures, look dev for materials, and set dressing for rendering.

To create the exteriors, the artists first established the geography with an undulating ground plane that had a repeated fractal pattern, and, if called for in the sequence, cliffs, rocks, buttes, and so forth patterned with rough textures. "That was good enough to scout locations," Proctor says. "Gore wanted to equate the process to how he'd work on set, so we kept that flexibility." Then, they added details on a per-shot basis.

For interiors, modelers created the buildings and props that view painters textured and for which the look-dev TDs defined materials that drove the shaders. Walton would start painting textures as if the assets were new, and then imagine how the desert sun would weather them. "I was working on the saloon, which is a gas can, and tried to imagine how you would put an awning on it," he says. "Then I realized they would have punctured holes in it, and where they punctured the holes, the paint would corrode. So, I placed a rusty hole in a spot. There's a scene where Rango is walking in front of the saloon and you can see through the hole. It landed in just the right spot."

Walton also blistered paint, cracked wood, bleached surfaces, and applied a dusty wash to everything. "We had these dilemmas with the story," he says. "It's supposed to be dry because it hasn't rained forever, so why would things be rusty? Crash would say, 'Don't worry about it. It's more about the story and the feeling than the rules.'" - Barbara Robertson
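ILM's actual terrain tools are proprietary and not described in the article, but the idea behind an undulating ground plane with a repeated fractal pattern can be sketched with a few lines of value noise. This is a one-dimensional cross-section only; the function names and parameters are illustrative, not ILM's.

```python
import math
import random

def value_noise(x, seed=0):
    """Smoothly blended pseudo-random heights at integer lattice points."""
    def lattice(i):
        # Deterministic per-point random value, so the terrain is repeatable.
        rng = random.Random((i * 73856093) ^ seed)
        return rng.uniform(-1.0, 1.0)
    i0 = math.floor(x)
    t = x - i0
    t = t * t * (3.0 - 2.0 * t)        # smoothstep easing between lattice values
    return lattice(i0) * (1.0 - t) + lattice(i0 + 1) * t

def fbm_height(x, octaves=4, lacunarity=2.0, gain=0.5):
    """Fractal sum: each octave doubles the frequency and halves the amplitude,
    layering coarse hills with progressively finer surface detail."""
    height, amp, freq = 0.0, 1.0, 1.0
    for _ in range(octaves):
        height += amp * value_noise(x * freq)
        amp *= gain
        freq *= lacunarity
    return height
```

Because every lattice value is seeded, the same coordinate always returns the same height, which is what makes such a plane cheap to "scout" and then refine per shot.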
All the characters wear clothing, and Rango has 13 costume changes during the film. Riggers set up each piece of clothing so that in any combination, all the pieces interacted properly with one another during dynamic simulations.
A record-setting 111 million fans tuned in to Super Bowl XLV earlier this year, making it the most-watched television program of all time. The event is the ultimate clash of the titans: the best that the American Football Conference and the National Football Conference have to offer. Similarly, the game (with its astronomical Nielsen ratings) spawns a parallel competition that pits the best of Madison Avenue against one another in what has come to be known as the Ad Bowl.

Some viewers, in fact, contend that they tune in to the game just to watch the ads: a big win for advertisers in an age when more and more viewers have their finger on the fast-forward button while watching pre-recorded shows. During the Super Bowl, though, fans see the commercials more as entertainment and less as a sales pitch. That's because Super Bowl commercials are famous for going all out in terms of their complexity; in contrast, some take a more subtle approach. Some use a serious tone; some focus on humor. Some capture our attention with cute animals; some with sexy women. No matter the tone, the expectation for a next-day watercooler discussion remains.

This year, audiences saw all these different calls from the Ad Bowl playbook. And a number of the more popular spots required a digital assist, some more than others: from the beautiful all-CG epic fantasy piece complete with a fire and ice dragon for Coke, to the comedy of the digitally altered Bud Light dogs that have a doggone time at a party, to the cute computer-generated Bridgestone beaver that shows us what karma is all about, to a unique piece for Volkswagen featuring the CG "Black Betty" beetle, to a live-action/CG car heist that escalates from the daring to the downright ridiculous for Kia, to a cast of famous characters from TV past and present who subtly show their newfound postproduction-induced support for their favorite teams in a segment for the NFL.

These spots relied heavily on digital work to pull off the gag, and each did so successfully. Here, we take you behind the scenes as we unveil the digital magic.
Broadcast
Coke: "Siege"

Fire and ice don't mix well. And that was certainly the case in the all-CG Coke "Siege" commercial, as two cultures, one fire, one ice, clash on an epic scale.

Set in a breathtaking, icy fantasy world, the 60-second cinematic story focuses on a battle between an army of fearsome fire warriors descending toward a peaceful community of ice-dwelling creatures. Accompanying the warriors is a huge fire-breathing dragon, which leaves a burning path of destruction in its wake and little doubt as to the likely outcome for the defenseless villagers, protected only by a tall, wooden wall surrounding their tranquil village. Suddenly, the city gates open and the villagers wheel out a sculpted ice dragon. With one blazing breath from the fire dragon, the ice sculpture melts to reveal a bottle of Coca-Cola, which is quickly consumed by the creature. The warrior general gestures for the dragon to attack the castle battlements, but instead of emitting a giant fireball, the dragon expels an explosion of harmless fireworks into the air. Confused and without their greatest weapon, the army beats a hasty retreat, leaving the villagers to celebrate their victory with bottles of Coke.

The commercial, produced by Nexus, contained an impressive expanse of CGI, from furry creatures, fleshy beings, and vast crowds, to the metal armor, fire, smoke, fireworks, ancient buildings and objects, towering snowy landscapes, rich forests, moody skies, and more. This expansive digital universe was built at Framestore.

According to Diarmid Harrison-Murray, Framestore VFX supervisor, the studio's brief was far from simple: Create a painterly-style epic film, set within a fantasy world. "The directors were keen that it be filled with lots of detail and richness in terms of the environments and the characters," he says. "Specifically, the visual brief was to create an animation that looked more like classic fantasy art than CG. That's not a look you get for free in CG."
Creating a Fantasy
Nexus, led by directors Fx & Mat, provided the initial concept art, which was then built out by Framestore. Some have compared the commercial's visuals to those from the new World of Warcraft trailer, others to the style of the movie Kung Fu Panda. Yet, unlike those projects, this one has a softer style to the computer graphics, achieved through matte paintings (crafted by London-based Painting Practice), textures, and techniques used in the final composites whereby the artists painted out some of the crisp CG detail to achieve the spot's fantasy-like aesthetic.

To create the different elements, the artists used a wide range of tools, including: Autodesk's Maya for modeling, with sculpting and some texturing done in both Autodesk's Mudbox and Pixologic's ZBrush; Maya for previs; Adobe's Photoshop for matte paintings; Side Effects' Houdini for the main effects work (pyrotechnics, including fire, smoke, and fireworks) and far background environments (terrain and forest generation), along with Maya for the closer hero-character environments; Houdini's Mantra and Maya's Mental Ray (from Mental Images) for rendering; Framestore's proprietary software for the fur creation and grooming; The Foundry's Nuke for compositing; and Apple's Final Cut for editing. Animation was completed using mostly keyframes in Maya, while facial animation was achieved using a blendshape approach with the expressions sculpted in Maya.

The animators augmented the character animation with motion capture performed at Framestore's in-house studio with a Vicon system, and crowd simulation created with Framestore's own particle-based system. "It wasn't a complex avoidance system, but it did the job in terms of cleverly managing the data and making sure there was enough variation in the crowds," Harrison-Murray explains. "It was created by a guy in our commercials division, and I am sure we will build on it and use it again in the future." For this project, the crowd system handled a group of 1000 warriors in one scene and approximately 11,000 in another.
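Framestore's crowd system is proprietary, but the particle-based idea Harrison-Murray describes, agents that simply seek a goal and push apart from close neighbors, can be sketched in a few lines. This is a minimal illustration with made-up names and a naive O(n squared) neighbor search, not Framestore's implementation.

```python
import math

def step_crowd(positions, goal, dt=0.1, avoid_radius=1.0, speed=1.0):
    """Advance particle agents one step: seek the shared goal, repel neighbors."""
    new_positions = []
    for i, (x, y) in enumerate(positions):
        # Seek: unit vector toward the goal, scaled by walking speed.
        gx, gy = goal[0] - x, goal[1] - y
        dist = math.hypot(gx, gy) or 1.0
        vx, vy = speed * gx / dist, speed * gy / dist
        # Separation: push away from any neighbor inside avoid_radius.
        for j, (ox, oy) in enumerate(positions):
            if i == j:
                continue
            dx, dy = x - ox, y - oy
            d = math.hypot(dx, dy)
            if 0 < d < avoid_radius:
                push = (avoid_radius - d) / avoid_radius
                vx += push * dx / d
                vy += push * dy / d
        new_positions.append((x + vx * dt, y + vy * dt))
    return new_positions
```

Even this simple separation term is enough to keep particles from stacking up while the whole group advances, which matches the "not a complex avoidance system, but it did the job" description.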
The most difficult character to create was the younger hero of the ice dwellers. "He required the most iterations," says Harrison-Murray. "He was tricky; he had a cat-like look but couldn't look too primate-like, and he had to have the appearance of a good, honest, hard-working guy." Moreover, the character is covered in fur, about a half million hairs.

Another challenge was creating the fire. As Harrison-Murray explains, the directors wanted it to have the same dynamics and movement of real fire, albeit with a painterly aesthetic. Initially, the group produced the fire using a volumetric fluid renderer, but had to pull back on the rendering realism until the imagery blended well with the painterly world. "It's hard to play with that many variables [in the sim] to get it to look the way we did in the renderer," he notes.

The far background environments are mostly matte paintings. Sometimes they started as geometry and later were projected back onto the geometry in Nuke. The forest foregrounds, meanwhile, are CG, as are the burning trees. The city walls were built with geometry, with an overlay of matte at the end.
Not surprisingly, all the different elements within the scenes added up to quite a few layers. "It's endless stuff," says Harrison-Murray. "The fire and ice dragons. The matte behind for the sky and the mountains. A band of forest and trees. A wall of smoke. And then each of those elements had to be broken down into separate render layers. The snow had four, including diffuse and subsurface scattering, as did the trees. There were layers of CG crowds, and that was split into 10 different render passes. We had mid-ground, hand-animated surfaces for the Orc beast in the crowd. The hero Orcs came in with about eight or nine different comp layers. The ice dragon had lots and lots of different render layers, with glitter, specular, volumetric ice stuff. Plus, the foreground had interactive footprints where the armies had walked in the snow. Hero guys in the foreground. I lost count."

"There were many, many layers," Harrison-Murray adds. "It was tough on comp. We were pulling a lot of CG and content from different software and trying to give it a unified feel."

Framestore created the 60-second, all-CG commercial "Siege," which contains a full range of digital imagery, from detailed structures and mountainous terrain, to digital characters, creatures, and crowds, to smoke, fire, and fireworks.
Group Effort
While the Framestore film and commercials divisions share tech, they each have separate pipelines due to the differences in the scale of the projects they encounter. "We need our tools to be lightweight and easily customizable," says Harrison-Murray. For "Siege," though, the commercials group borrowed from the film group; the most valuable commodity: people.

"They were quieter in film at the time, and we needed to ramp up quickly; we had to double our size, and we got some good guys to help us out," Harrison-Murray notes. Some of that assistance came from Houdini artists, who helped build the procedural forests with techniques used for Clash of the Titans. One of the major tech assets used from film R&D was the fur-grooming tool set, although the fur-rendering tools were not transferable since the film side uses Pixar's RenderMan, while the commercials group uses Mental Ray for rendering out the hair.

Yet, the help, whenever offered, was greatly appreciated, especially given the condensed timeframe of the commercial. "We had three months from start to finish," Harrison-Murray states. At the early stages, the project had a crew of a half-dozen, which ramped up to 60 at one point.

Whether it's the commercials or film group, Framestore is best known for its photorealistic CG characters set within live-action back plates. And while the team may have been taken out of its element for "Siege," the results are nonetheless stunning.
Bud Light: Dog Sitter

When it comes to Super Bowl commercials, Anheuser-Busch's Bud Light brand tends to get quite a bit of airtime, and this year was not any different. And when it came to creating the digital work, at least for this year's game spots, The Mill, headquartered in London, seemed to be part of nearly every 30- or 60-second play. In all, the facility took on 19 commercials among its London, New York, and Los Angeles offices, including one of the top favorites, Bud Light's "Dog Sitter."

The premise of the spot is simple: A guy dog sits for a friend, who leaves him a refrigerator full of beer, along with several canines that are really smart and will do whatever you tell them. Lots of beer plus smart, obedient dogs equals party (at least in the sitter's mind), during which the canines act as wait staff, pour drinks behind the bar, spin tunes like a DJ, and even man the barbecue grill.

No matter how smart the actors (in this case, the dogs) actually were, they needed digital assistance to pull off these human-like tricks. And that's where The Mill New York came in. In the spot, the dogs stand and walk upright on their hind legs, performing tasks such as holding trays, washing dishes, and flipping burgers with their front paws. So, most of the post work involved removing rigs from each scene and then adding in new arms and the objects with which they were interacting.

According to Tim Davies, VFX supervisor for the spot, for any profile dog shots, it was fairly easy to remove the rigs using a clean plate, as no part of the rig occluded the dog. But when the dogs walk toward the camera, two trainers holding a horizontal bar would stand on either side of the dog. This rig, used to support the dog, would cover a large section of the animal's upper torso and forearms, requiring extensive cleanup work.

Davies, also the lead (Autodesk) Flame artist on "Dog Sitter," was on set during the filming, and after each take, he would acquire a plethora of high-resolution stills of the dogs' fur and textures using a Canon EOS 5D Mark II camera. "I asked the trainers to stand the dogs upright so I could get a nice, clean shot of their torsos without them being covered by the rigs," he adds. Those stills were then tracked over the rigs, so the rigs could be removed.

The canines in "Dog Sitter" required assistance from a trainer (top) and were filmed separately so the animals would not be distracted. The final scene (bottom) was a compilation of the various shots, including those with the human actors and the dogs, many of which required CG limb replacements.
Helping Hands
Yet, this was not simply an easy job of rotoing and comping. "In a lot of the shots, we completely removed their arms as well and put new arms in, and added all the [serving] trays," says Davies.

For instance, at the beginning of the party sequence, a large dog answers the door while holding a tray of beers. "That dog needed a big rig with a three-inch bar, and two trainers to hold it up," notes Davies. "We found that when the dogs are standing upright on their back legs, they are breathing quite heavily. So you can't just track a still onto them. You have to animate the expansion and contraction of the actual still you are placing on top to simulate the breathing. Then you have to re-introduce the shadows and seamlessly blend the fur."
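Davies doesn't detail how the expansion and contraction was keyframed, but the simplest way to picture it is a per-frame scale factor on the tracked still that swells and shrinks on a sine wave. The rate and amplitude below are illustrative guesses, not The Mill's values.

```python
import math

def breathing_scale(frame, fps=24.0, breaths_per_min=60.0, amplitude=0.015):
    """Scale factor for a tracked torso still: hovers around 1.0,
    swelling and shrinking once per breath cycle."""
    t = frame / fps                                    # elapsed time in seconds
    phase = 2.0 * math.pi * (breaths_per_min / 60.0) * t
    return 1.0 + amplitude * math.sin(phase)

def breathing_size(width, height, frame):
    """Resize the still for this frame before compositing it over the rig."""
    s = breathing_scale(frame)
    return width * s, height * s
```

A panting dog would call for a faster rate; the point is only that the still is never composited at a fixed size, so it reads as a living torso rather than a frozen photograph.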
As the dogs walked, supported by the rigs, they often looked like they were leaning forward in a rigid position, which was corrected in Flame. Nearly all the dogs had to be rotoscoped from the scene anyway, because they had to be placed in front of or behind other objects or dogs in the scenes. "With the use of this roto, we were able to adjust the posture of the dogs by bending them at the waist, making them appear more upright," says Davies. There were also instances whereby the team added a gentle sway to the dogs' upper body so it wouldn't look as though the dog was leaning on something fixed.

"Every single dog required a separate take," says Davies, noting that some of the animals did not work well in the same space as the other talent, whether human or canine. In the end, nearly every scene was made up of more than 10 passes. "That was key to the success, having each dog shot as a separate layer. We were able to choose the best of each dog's takes, and every dog's performance could be retimed for the best reaction." Davies offers the example of the bloodhound at the bar: "We had over 50 seconds of the dog looking up and down, which allowed us to slip the timing of this layer independently of the scene. This enabled the dog to look up at the girl and then back down at his beer glass while pouring, in perfect unison with the girl's actions."
In addition to the rig removal and comping, The Mill often had to replace and animate limbs for shots in which the dog's arm was holding onto something, for instance, the doorknob, the beer tap, or any of the beer trays. However, most of this was achieved by animating still photography. "I think the biggest success of the spot was that we decided to go with an in-camera approach," says Davies. "In the early stages of the project, there was talk about doing fully CG dogs. But that's tough and time-consuming; they tend to end up looking like CG dogs. It's not just the way they look, but more the way they move; they can look like animated cartoons."

Nevertheless, there were some instances when CG was unavoidable. For one, in which the dog is washing dishes, Davies says the crew was unable to pull off the shot in 2D because the motion and perspective needed was very complex, so the CG team created models of the dog's arms and upper body, and then supplied photoreal animated elements of these difficult tasks.

"The idea we were going for was that if these dogs are clever and well trained, maybe they could pull off these stunts," Davies explains. "We wanted to leave the question, 'Could these dogs really do that?'"

Therefore, the motions were underplayed and restricted. And, whenever possible, real objects on poles and wires were shot for the interaction, as was the case when the dog is drying a glass with a towel. "We got some of the way there," Davies says. But not all the way: The team ended up incorporating a CG mug and dog limbs to complete the contact.

The Mill used Autodesk's Maya for the CG work in that shot, as well as for the dog flipping burgers on the grill. "We originally had the dog flipping the burger over, which twisted the dog's wrist around. It just felt like we pushed it into the unbelievable, so we ended up re-animating it at the last minute to simplify the movement to something more believable," explains Davies.

While some of the dog arms were re-created in CG, the trays and other objects were added in the Flame. "We were able to project the trays and bottle onto cards that were animated in 3D space, allowing proper perspective and parallax," notes Davies.
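The card trick Davies describes works because a pinhole camera scales screen position by focal length over depth, so a card placed and animated at the right depth automatically picks up correct perspective and parallax. This toy projection (a hypothetical helper, not Flame's API) shows the effect: moving a near card sideways shifts it more on screen than the same move on a far card.

```python
def project_point(point, focal=50.0):
    """Pinhole-project a 3D point (x, y, z) onto the image plane.

    Screen coordinates scale with focal / z, so a card close to the
    camera (small z) shifts more on screen than a distant one for the
    same lateral move: that relative shift is the parallax the comp
    relies on.
    """
    x, y, z = point
    return focal * x / z, focal * y / z
```

With a locked-off camera, only the cards move, which is exactly why the team could skip motion control and camera tracking.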
In all, the production company spent approximately three days on the shoot, and Davies notes that they were extremely patient since nearly every plate had to be set up for VFX. "We shot trays, we measured the dogs' height as they walked through the scene, and then built trays to match and wheeled them through the scene for the right focal and lighting references," he says. "We had all these elements, and it was plate after plate." The saving grace: the decision to go with a locked-off camera, which eliminated the need for motion control and camera tracking.

While all the dogs were shot on set, a few of the more scruffy pooches had a greenscreen placed behind them to ease the roto work, made even more complicated by all the hair.

In addition to Flame, the team used Autodesk's Smoke and Flare for the compositing, Combustion for the roto work, and Maya for the CG. Color grading was also handled by The Mill using FilmLight's Baselight.

In addition to "Dog Sitter," Davies worked on two other humorous Bud Light spots for the big game: "Product Placement" and "Severance Package." But, it was "Dog Sitter" that tied for the top spot in the USA Today Super Bowl Ad Meter ranking.
This doggie DJ scene was among those with the highest number of layers. The canine in the foreground alone required substantial work: The dog had to stand on its hind legs and bob its head to the music, while its paw (with a digital assist) scrubbed the record back and forth.
Bridgestone: Carma

Animals are usually a sure crowd-pleaser when it comes to commercials, and indeed, a beaver that repays a motorist's act of kindness was well received during the Super Bowl break.

The spot, which was driven by the same crew that brought us "Scream" (featuring a CG squirrel) during Super Bowl XLII in 2008, opens as a beaver lugging a tree branch attempts to cross the road. In an ode to that previous piece, the panicked animal, too frightened to move, braces for impact with its paws outstretched and mouth agape. Seeing the helpless beaver, the man quickly swerves his car to avoid hitting it. The animal salutes him in a sign of gratitude. Six months later, we return to the same location, this time during a rainstorm, as the driver stops the car just in the nick of time, as a huge tree falls across the road. As the shaken driver gets out of his car, he sees that the bridge, visible in the original scene, has been washed away by the now-raging river. As relief sweeps over him, he sees the grateful beaver standing beside a newly chewed tree stump. This time, the rodent gives the man a chest bump.

"It was the same group, agency, director, creator, from the 'Scream' spot three years ago, so we all knew one another," says Andy Boyd, VFX supervisor/lead 3D artist at Method Studios, which handled the post work for both productions. "The expectations, though, were high since 'Scream' had looked so good. But for this, we were starting from the endpoint of all that other hard work. The good thing, though, is that technology has moved on, so what was really hard then is not as hard anymore in regard to the number of hairs on an animal, for instance. Before, when I went over 1GB of memory, my computer would crash. Now I use 24GB, and it never crashes."
Leave It to Beaver
Obviously, the majority of the work on Carma
involved a computer-generated beaver. In fact,
there were seven to eight versions of the 3D
animal used in the 30-second spot, from the full
hero beavers, to the half CG/half live-action ani-
mal used for the chest bump. Tere was even a
real animal actor.
Te model looked fantastic, but we got
lucky on set with the director [Kinka Usher],
who spent a good amount of time trying to get
the animal [to do what was needed], and that
was a huge help, says Boyd. In the two end
shots, the real beaver almost did exactly what
we wanted him to doobviously not the chest
bump, but pretty damn close. So even though
we had the CG version, it is always good to use
the real stuf as much as possible.
The digital model was built by Method's team using Autodesk's Maya and Pixologic's ZBrush, with rigging and animation done in Maya. The fur generation and rendering, however, were accomplished within Side Effects' Houdini.
According to Boyd, the crew carried over a lot of technology from "Scream" and adapted it to the beaver. "On Scream, it was the first time I did close-up fur stuff, the first time I had set up a fur system, so a lot of what I was doing on the screen was prototyping," he says. "I now had that technology experience behind me, so the work was more standard."
By applying lessons learned, the team was able to generate a CG beaver with five million-plus hairs, 4000 of which are guide hairs. In comparison, the CG squirrel contained closer to one million hairs in its pelt. "That would have been a really big deal on the squirrel. [The model] would have drowned in memory if we had that many back then," Boyd notes. Not so this time around, especially with a 64-bit operating system. "With the faster computers, you can add the proper number of hairs until it looks right, and you don't have to worry about the number, as long as you stay under five million," he adds.
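The distinction Boyd draws between guide hairs and final hairs is the heart of most fur systems: artists groom a few thousand guides, and the renderer interpolates millions of child hairs from them. A minimal sketch of that idea follows; the names, numbers, and interpolation scheme here are illustrative, not Method's actual Houdini setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical groom: each guide hair is a polyline of `segments` points.
# The five million children would be generated lazily at render time, so
# only the guides need to live in memory.
n_guides, segments = 4000, 8
guide_roots = rng.random((n_guides, 3))                        # roots on the skin
guide_offsets = rng.normal(0, 0.01, (n_guides, segments, 3))   # groomed shapes

def interpolate_child(root, k=3):
    """Blend the k nearest guide hairs, weighted by inverse distance."""
    d = np.linalg.norm(guide_roots - root, axis=1)
    nearest = np.argsort(d)[:k]
    w = 1.0 / (d[nearest] + 1e-6)
    w /= w.sum()
    # Child hair = its own root plus a weighted average of guide shapes
    return root + np.einsum("k,ksd->sd", w, guide_offsets[nearest])

child_root = rng.random(3)
hair = interpolate_child(child_root)   # one (segments, 3) polyline
```

Because each child is derived on demand from its nearest guides, the renderer can stream hairs rather than hold all five million at once, which is why the hair count stopped being a memory problem.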
Grooming, to achieve the desired clumping and texturing, was performed in Houdini and rendered with Mantra. This was done using two different approaches: Boyd groomed the beaver used in the sunny shots, while Brian Burke groomed the wet animal for the rainy shots. "It took on a totally different look and shape; it's a completely different groom. It could have been a completely different animal," says Boyd of the wet version. Despite the difference in styling, the model and rig were identical.
While the rig was not overly complicated, the artists did add a more complex muscle system with built-in dynamics, as they had done for the squirrel. As it turns out, the real beaver gave a fine performance, "but if they would have needed our model to do some more complicated movement, like walking across the road, we would have been ready," explains Boyd.

Broadcast

Bridgestone: Carma

An animal actor named Waldo (top, at right) starred in some scenes and served as a photo reference for the realistic CG model (top, at left), which was created at Method using Maya, ZBrush, and Houdini. (Bottom) Scanline used its Flowline software to create the water sim near the end of the spot.
In the first two shots, a trained beaver named Waldo performed, carrying the tree branch. "He's as trained as a beaver can possibly be," says Boyd, noting that the group filmed the animal on set for reference. Waldo also stood up on his hindquarters, perfect for the chest bump shot, "but only if you waved food in front of his face," Boyd adds with a chuckle. For the chest bump, the artists used a CG arm and chest, which were composited into the live action; the artists also adjusted the head position and the eye line slightly for the completed shot. The remaining shots contained the all-CG model, while an animatronic was used for shot reference.
According to Boyd, the most challenging part of the project was the salute. "It falls into that weird ground of trying to get a creature to do something that it can't really do," he says. "[The work] can go into that uncomfortable place where [the model] looks real in the frame but it is doing something that you know it can't do." With this in mind, the team chose a subtle motion for the salute that was noticeable by viewers but didn't tread too far into that unreal zone.
The Pixel Farm's PFTrack was used for camera tracking. Matchmoving the beaver was done in Maya.
In addition to the beaver, the commercial also contains CG river shots amid the live action. Burke modeled the digital bridge and riverbed using Maya, while Scanline, a VFX company in Germany that specializes in fluid effects, generated the fast-moving river. "When we got the storyboards and saw the raging-river shots, we wanted the work to be the best, and Scanline does incredible water work," says Boyd. Scanline used its proprietary Flowline software to create the simulation, and Method's Jake Montgomery pre-comped it with the CG bridge and additional CG debris using The Foundry's Nuke. He then added atmospherics and finished the composite in Autodesk's Flame.
Aside from the simulation assist, the work was handled by three CG artists and one compositor at Method. "It was one of the best jobs I have done in terms of just being in a small team and having a lot of fun," says Boyd. "Everyone really enjoyed themselves, and we were lucky enough to have been given the time to explore ideas to best tell this story."
Every so often a commercial uses a catchy tune that stays with you for quite some time. Such was the case for Volkswagen's "Black Beetle." The commercial features a fully animated CG beetle (the insect) that maneuvers its way around the various obstacles and terrain it encounters, doing so in an automotive style and to the beat of Ram Jam's "Black Betty." The spot is fun, engaging, and creative in both concept and design. "It's an action-packed car chase in the forest," says The Mill's Tom Bussell, who, along with Juan Brockhaus, served as lead 3D artists on the project.
Bussell and Brockhaus guided the spot from beginning to end, from storyboards, to supervising the live-action shoot, to final delivery, all of which spanned just six weeks. "When a project is predominantly based around animation, the clients have to take a huge leap of faith and trust us creatively as well as technically," says Bussell. "The reality of such a quick turnaround and so much CGI is that it only comes together in the final few days of the deadline."
The scene opens in a lush, wooded environment, as some black bugs meander along the ground, when suddenly a black beetle overtakes them, speeding along over the rocks and dirt, quickening its pace across a moss-covered branch spanning a creek. The bug rounds a corner, nearly careening into a centipede before catching air, and then buzzes past two praying mantises, cuts through fire ants on the march, again flies through the air above a field of grass and dandelion seed heads, before landing sideways on a rock. The screen grows dark as a white line assumes the shape of the black beetle, and of the new Volkswagen Beetle.
In contrast to the CG characters in the spot (the mantises, ants, dragonfly, centipede, caterpillar, and so forth), the environments are mostly live action, filmed in a studio. This was no miniature, though. "Christopher Glass and his team re-created a huge section of organic forest inside the studio in Hollywood that must have been 10x10 meters," describes Bussell. "Being on set felt like you were standing in a dense forest. They did a great job."
Beetle Mania
As for the insects, The Mill created eight main bugs, and in the case of the mantises and ants, tweaked the subsequently replicated models for variation. All the base models were generated using Autodesk's Softimage and then refined using the sculpting tools in Pixologic's ZBrush.
The biggest challenge, though, was getting the design of the hero beetle nailed down. "For all the other insects, we matched how nature intended them to look. That was the easy part," says Bussell. "But in a car commercial with no actual car, there was a big design element to the hero beetle; we had to convey the right message about the car. We needed our beetle to subtly reference the new design without the insect feeling too engineered. If you look closely, you can make out subtle shapes in the shell that act as wheel arches, the eyes are headlamps, and the silhouette from the profile is very similar to the new car design."
Volkswagen: Black Beetle

The Mill created the fun, lively spot for Volkswagen featuring computer-generated insects, including the main character: the black beetle.

Despite the commercial centering on the design, the look was not finalized until late in the project. "We needed to see the whole animation together before knowing how far to push the design of the beetle," adds Bussell. Making that task somewhat easier was the robust geometry-caching pipeline the crew used, which gave the team the ability to change things up late into the project with little fuss for the animators, rendering group, or lighters, and enabled them to spend more time on the creative aspects of the 3D.
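The geometry-caching idea behind that flexibility is simple: once animation is baked to per-frame vertex positions on disk, lighting and rendering read the cache instead of re-evaluating rigs, so a late animation change only requires re-baking. The toy sketch below illustrates the principle; the class and file layout are invented, and production pipelines use formats such as Alembic or Maya's geometry caches rather than raw NumPy files.

```python
import os
import tempfile

import numpy as np

class PointCache:
    """Minimal per-frame point cache: one file of vertex positions per frame."""

    def __init__(self, directory):
        self.directory = directory

    def write(self, frame, points):
        # Animation bakes posed vertex positions per frame...
        np.save(os.path.join(self.directory, f"mesh.{frame:04d}.npy"), points)

    def read(self, frame):
        # ...and lighting/rendering reads them back with no rig evaluation,
        # so downstream departments are insulated from late rig changes.
        return np.load(os.path.join(self.directory, f"mesh.{frame:04d}.npy"))

cache = PointCache(tempfile.mkdtemp())
rest = np.zeros((100, 3))                   # stand-in for a deforming mesh
for frame in range(1, 5):
    cache.write(frame, rest + frame * 0.1)  # fake animation: the mesh drifts

frame3 = cache.read(3)                      # every vertex sits at 0.3
```

The design choice is decoupling: animators can republish the cache at any time, and renders simply pick up the new files on the next read.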
Before the model was approved, the rigging team, led by Luis San Juan, began building a stable pipeline that would automate the rigs for the numerous legs of these insects. The centipede rig was designed so that once the animator started working on the body, the legs would subsequently move in the anatomically correct way. Nevertheless, the animators could override this movement with timed keyframes.
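That "legs follow the body unless a keyframe overrides them" behavior can be illustrated with a simple follow-the-path scheme. This is a hypothetical sketch of the idea only, not The Mill's rig; the path function, spacing, and blend control are all invented for illustration.

```python
import numpy as np

def body_position(s):
    """Position along the ground path at arc length s (a gentle S-curve)."""
    return np.array([s, 0.3 * np.sin(0.5 * s), 0.0])

def leg_anchor(head_s, leg_index, spacing=0.2):
    """Each leg root trails the head by its segment's arc length, so animating
    only the head's travel places every leg in a plausible spot."""
    return body_position(head_s - leg_index * spacing)

def leg_target(head_s, leg_index, override=None, blend=0.0):
    """Procedural placement, optionally blended toward an animator's key."""
    procedural = leg_anchor(head_s, leg_index)
    if override is None:
        return procedural
    return (1.0 - blend) * procedural + blend * np.asarray(override)

# With no override, leg 5 simply follows the body automatically:
auto = leg_target(head_s=3.0, leg_index=5)
# A timed keyframe can take over (here fully, blend=1.0):
keyed = leg_target(head_s=3.0, leg_index=5, override=[2.0, 0.5, 0.1], blend=1.0)
```

The appeal of this structure is that the default behavior is free, and the animator's keys are layered on top rather than replacing the system.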
"Although our brief was to create an insect that behaves like a car, we felt it was important to stay anatomically correct in order for the animation to be believable," explains Bussell. To this end, the artists studied various BBC documentaries about insects, gathered slow-motion footage, and built the digital insects with this action in mind.

"I know way too much about insects now!" says Bussell with a laugh.
In a complete 180-degree turn, though, the group also studied iconic car-chase scenes, with the reference ranging from Starsky and Hutch and The Fast and the Furious, to non-chase reference such as The Matrix's bullet-time effect. "Each shot in the commercial, from the framing of the shot to the animation of the beetle, is based around similar concepts to those iconic film moments," notes Bussell.
While design and animation took some time to develop, the music track was set from day one, which was a tremendous help, says Bussell, because it meant that the editor had a track to cut to, and the artists had something to base their animation on. "This helped with the buildup to the end crescendo in which the beetle jumps off the log and flies through the air Starsky and Hutch style, landing on the rock and skidding to a halt," he adds.
Insects in Detail
Although the artists found a plethora of useful textures online, they took things a step further, contacting an expert at the Natural History Museum, who helped the team find the specific insects they were looking for. They then took high-res photos of the bugs and, using Adobe's Photoshop, applied those surfaces, along with some hand-painted textures, onto the CG models. "The trick was to just keep adding more and more detail," says Bussell. "Once the base model was created and the UVs unwrapped, we started applying the high-res textures. A final level of detail (pores and imperfections) was then added in ZBrush."
The insects were rendered in Softimage and Mental Images' Mental Ray. A Spheron camera was used on set to capture HDRIs from both a chrome ball (for reflections and high spots) and a gray ball (for shading and color temperature) at the same angle and with the same camera. Also, for every shot, the crew photographed plastic insect models on set. "We got funny looks from the crew, but it was a useful lighting guide," Bussell points out.
In addition to the beetle, the group focused on the animation occurring around the main insect. All these collective movements were achieved in Autodesk's Maya by a small team led by Johannes Richter, which added particle atmosphere to all the shots, from the pollen to the small flying insects, to help bring the shots to life. According to Bussell, the dust trails and debris elements provided the biggest challenge here, with the group using references of various elements, from radio-controlled cars skidding through dusty terrain, to a car driving through the desert. It all boiled down, though, to artistic license, since a bug the size of the CG beetle wouldn't ever kick up as much dust as it did in the spot. "That aside, we felt it was an important final touch that referenced back to the idea that this was a car chase," explains Bussell.
All these elements were then composited into the final shots using Autodesk's Flame and The Foundry's Nuke. The comp team also used Nuke to enhance the undergrowth and vegetation of the live-action backgrounds. The environment in one of the final shots, in which the beetle is flying through the air, was put together entirely in Nuke using still photos from the set.
So, what made this project so successful? Bussell says it boils down to a good idea from the very start. "I had the luxury of working on some of the really great iconic work in advertising over the years, and this one is right up there," he says. "Every artist at The Mill wanted to work on it. It's just one of those projects that has all the right ingredients from the start."
In contrast to the Volkswagen commercial, in Kia Optima's "One Epic Ride," the focus is on the vehicle throughout this wide-ranging adventure, which takes the audience from land, to sea, to a distant planet, and beyond.
The action starts off with all the suspense of a James Bond film, as a police officer impersonator makes off with a couple's Optima, leaving them handcuffed to his parked motorcycle. As the person drives along a coastal highway, a villain in a helicopter fires a high-tech magnet, lifts the car, and carries it out to sea to an awaiting yacht, where a handsome fellow surrounded by beautiful women eagerly awaits its arrival. Suddenly, in a nod to fantasy, Poseidon emerges from the water and grabs the vehicle, but only momentarily, as a green light from a hovering spaceship beams the car aboard. The scene cuts to a sparse, dusty landscape, where an alien takes the wheel. A time-warp portal opens, and the Optima is sucked through to the other side, where a Mayan chief receives this bounty from atop a pyramid, as tens of thousands of warriors cheer in appreciation of their new gift.
Sound a bit over the top? That's the intention, says executive producer Melanie Wickham from Animal Logic. The purpose was "for everyone to go to extraordinary lengths to get this car, with the antics getting more ridiculous as [the spot] moves along," she says.
While the assets for the 60-second commercial were built at Animal Logic's Sydney headquarters, the live action was shot in California at various locales, including a soundstage, with some members of the Australia team attending those shoots. Because of the short production schedule for the spot, accurate previsualization (created in Autodesk's Maya) was especially crucial to the spot's overall success, notes Matt Gidney, CG supervisor. Detailed concept work was equally important, as it gave the director, agency, and client a clear understanding of this design-intensive, multi-sequence, multi-location spot.

"Concept work is used to pitch something beautiful, but for us, it was important to establish direction as quickly as possible," says Gidney.
Extreme Elements
While the insects, like the two mantises above, are CG, the environments in the commercial are mostly organic, re-created in the studio and shot as live action.

Kia Optima: One Epic Ride

The spot incorporates many different sets and infuses many different genres, each with its own distinctive look. All the backgrounds
began with live-action plates, though a good
portion of the objects needed to support the
story line were built in CG. Even the alien
landscape is practical, shot near the Mojave
Desert, albeit with digital moons augment-
ing the landscape; a 3D alien completes the
scene. In addition to the practical backdrops,
the commercial incorporates matte paintings
and digital set extensions.
"We were faking quite a lot. Every shot was touched in some way," says Andy Brown, VFX supervisor.
The most obvious computer-generated elements (by way of the action) are the helicopter, boat, and, of course, Poseidon. Mostly, the star of the spot, the car, is practical, though at times, it, too, had to be built digitally.

Maya was used to create and animate the models; texturing was done in Maya and Adobe's Photoshop, with some experimentation conducted in The Foundry's Mari. Meanwhile, The Foundry's Nuke and Autodesk's Flame were employed for compositing. For tracking, the group used 2d3's Boujou. Rendering was done in Pixar's RenderMan and Animal Logic's MayaMan, the studio's Maya-to-RenderMan software.
One scene that especially challenged the artists, and of which they are most proud, is the one with the all-CG, water-simulated Poseidon.
During the past few years, Animal Logic has been developing ALF, its 3D software framework, and when it came time to tie up some loose ends with the water module, Gidney sat down with the studio's R&D team to determine the best solution to incorporate into the framework. After a test period during which the developers examined Side Effects' Houdini, Next Limit's RealFlow, and in-house solutions, Animal Logic committed to extending the functionality of the proprietary water modules for the ALF Nexus tool set.
"We decided that we could get more done within our own framework, because once the coding was done, we could iterate on the solutions quickly, saving expensive resource calculations," explains Gidney. "We just have to code a particular solution once. As a result, we were able to do some very large simulations distributed across the farm, producing vast amounts of data describing water, which we then could pass back into RenderMan." With this setup, explains Wickham, the team only has to write hooks into other software, such as Houdini, Maya, and Autodesk's Softimage, saving a great deal of time otherwise spent doing complex coding.
That decision paid off when it came time to simulate the water for the Poseidon sequence. "We were able to get those sims out very quickly," says Wickham, noting that the studio shares the framework across its film and commercials divisions. In fact, the water module was used extensively in the animated feature Legend of the Guardians: The Owls of Ga'Hoole.
Animal Logic's teams in both Los Angeles and Sydney worked together on "Epic Ride." According to Brown, all the previs and shoot prep was done in LA, and the spot was posted in Sydney. As Wickham points out, "The agency was a little nervous sending [the work] down to Sydney, but with our review tools, it's now easy to work remotely." And that's a concept the crew proved true in epic style.
NFL: Best Fans Ever
Perennial advertising giants Budweiser and Coke are not the only brands known for creative Super Bowl commercials. In fact, the NFL has been coming up with smart plays of its own in recent years, including 2011's "Best Fans Ever," featuring digitally altered clips from a range of favorite television shows past and present, in which the characters are re-dressed in team gear and football-centric elements are inserted into the scenes.

"We were tasked to create a story for the Super Bowl built around the experience that everyone shares," recalls editor Ryan McKenna from The Mill. The group settled on the concept of "preparation," focusing on the anticipation and excitement of the big game.
A large crew from The Mill's New York office spent weeks sifting through mounds of television footage, from iconic series such as Seinfeld, Cheers, 90210, The Brady Bunch, and The Sopranos, as well as Glee, Family Guy, and more, looking for certain moments that had potential. That is, potential for the clip to be re-created into a fan moment. Those clips were then placed into categories describing the scene: for instance, stars delivering one-liners, making entrances, eating food, and so on.

A mix of in-camera and digital elements fuels the unique "Epic Ride" spot. Animal Logic used its own water modules to create the all-CG Poseidon at top, while a range of off-the-shelf tools, including Maya, were used to create the digital elements for the bottom scene, which also includes a live actor.
"The list of shows didn't really shrink much from what we started with," says McKenna. "There were very few 'Nos.'" Having actor Henry Winkler and actor/director Ron Howard sign off from day one, minute one, on their Happy Days clips didn't hurt the NFL's cause, either. Other talent soon followed, though some with caveats. New Yorker Jerry Seinfeld would only agree if he was portrayed as a Giants fan, "which is what we wanted him to be anyway," says Ben Smith, creative director at The Mill-NY.
Real Fans
The fictional locations of the shows dictated which team those characters would root for: Cheers' Norm is dressed in a Patriots jersey; the Sopranos crew is decked out in Jets gear; The Dukes of Hazzard's General Lee sports a Falcons logo. The group decided early on, though, that the characters, who span nearly 40 years of television, would wear modern styles rather than those more appropriate for their period. According to Smith, this made the gags more obvious to the audience, despite the fact that the change was otherwise seamlessly integrated into the various shots.
Typically, the agency would first secure rights to the imagery, and after the edit was finalized and locked, the post team would then begin its work revising the clips. However, when the client handed The Mill this project, the group was already facing a late-running clock. So, The Mill crew had little choice but to begin post work on some of the clips that had been approved by the network, with the hope that the talent would sign off as well; if the rights were not granted, then the work was abandoned for another clip that fit into the edit.

"For this project, the edit didn't get locked for about six weeks," McKenna notes. As a result, there were times when the team would get about 90 percent finished with the effects, and a shot would change, requiring new editorial, and with it, new effects.
One Shot at a Time
In the end, the commercial contains approximately 40 altered clips, though the digital crew worked on far more than that; some, for one reason or another, didn't make it into the final spot. How the team approached each clip, however, varied. "No two shots were the same," says Smith. Some shots contained 2D elements filmed at The Mill using the studio's lighting and greenscreen setup, and then composited into the clip; others incorporated CG elements. This mix required the team to simply take a brute-force approach: whatever the problems were in the shot, the artists had to deal with them in whatever method worked best for that clip. Sometimes the artists tried different solutions until one stuck.
"A lot of the camera work contained nodal pans and tilts, so there wasn't much 3D camera tracking to do," says Smith. "That made tracking and comping much easier." Often, the group had to take out the camera move, clean up the frame, composite the new imagery, and then add the camera move back in.
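That "remove the move, comp, put it back" workflow amounts to inverting a per-frame 2D transform, working on the resulting static frame, and multiplying the transform back in. Below is a hedged sketch of the idea with made-up track numbers; a real nodal pan would use a projective warp rather than the pure translation used here for simplicity.

```python
import numpy as np

def translation(tx, ty):
    """3x3 homogeneous matrix for a 2D screen-space translation."""
    return np.array([[1.0, 0.0, tx],
                     [0.0, 1.0, ty],
                     [0.0, 0.0, 1.0]])

def warp_point(matrix, point):
    """Apply a homogeneous 2D transform to an (x, y) point."""
    x, y, w = matrix @ np.array([point[0], point[1], 1.0])
    return np.array([x / w, y / w])

# 1) Invert the tracked camera move to stabilize the plate:
camera_move = translation(14.0, -3.0)      # illustrative track data
stabilize = np.linalg.inv(camera_move)

# 2) Comp the new element once, on the now-static stabilized frame:
element = np.array([420.0, 310.0])

# 3) Re-apply each frame's move so the element pans with the background:
for tx in (0.0, 14.0, 28.0):
    on_frame = warp_point(translation(tx, -3.0), element)
    # on each frame the element lands at x = 420 + tx, locked to the plate

# Sanity check: stabilizing then re-applying round-trips exactly,
# which is why elements comped this way "stick" without slipping.
roundtrip = warp_point(stabilize, warp_point(camera_move, element))
```

The payoff is that the cleanup and comp are done once on a static frame instead of being tracked frame by frame.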
Then there were trickier shots that required CG, as with the Seinfeld clip of Jerry and Newman. "Even though the camera is just a pan, because their bodies are moving so much, we couldn't get away with a 2D approach. It had to be a 3D solution," explains Smith. "That involved tracking the camera, roto-animating both characters, and then building the jersey, hat, and jacket, and then lighting, rendering, and compositing as we normally would do. That's a long process. And just for a few seconds of a clip. Times 40 clips."
"What we learned on one shot couldn't be applied to another because it contained a whole other set of problems," says McKenna. "That's unusual for VFX shots, where it's usually one setup that is propagated through all the shots."
The work also entailed a great deal of cloth simulation, since most of the revisions involved clothing. "It was a challenge because we were dealing with a moving camera and a moving person," notes Smith. "Compositing 3D cloth next to live-action limbs, where a live-action hand meets a CG cuff, it had to track absolutely perfectly or there would be slipping."
For the cloth sim, the artists used Autodesk Maya's nCloth. They also employed Apple's Final Cut, as well as a mishmash of Autodesk's Maya, Mudbox, Softimage, and Flame; Science.D.Visions' 3DEqualizer; The Pixel Farm's PFTrack; The Foundry's Nuke; Adobe's After Effects; and FilmLight's Baselight for color grading.
According to Smith, the most difficult footage to work with was from 90210 because the quality was so poor. But then again, nearly every plate the group dealt with was a different format and quality. To make matters worse, for most of the shows the group had to work from DVDs as opposed to higher-quality tapes due to the time crunch.
Even though the path taken to get the final results took many twists and turns, in the end, The Mill's work on "Best Fans Ever" generated a large number of fans as well, from audiences as well as the talent used in the clips. "I was sure we would end up with four shows [that would sign on]. The project just seemed too ambitious for the timeframe," says McKenna. "But this just goes to show the power of the NFL. People love it."

Karen Moltenbrey is chief editor of Computer Graphics World.
The Mill team re-dressed actors and sets from approximately 40 iconic TV shows in NFL gear for an
NFL ad. The project required a tremendous amount of camera tracking, rotoscoping, and compositing,
though each clip required a unique approach.
Use your smartphone to view video clips of the commercials discussed here.
Cry Wolf

The visual effects shots in Red Riding Hood's modern retelling of the classic fairy tale total only approximately 230, but they include the pivotal character, the werewolf, and the medieval setting in which the wolf and Red live. Catherine Hardwicke directed the Warner Bros. romantic thriller that stars Amanda Seyfried as the red-hooded Valerie, Gary Oldman as the werewolf hunter Father Solomon, and Julie Christie as Red's grandmother.

Jeffrey A. Okun supervised the visual effects work, which included 79 shots of the always-CG werewolf created at Rhythm & Hues, and a 3D village and set extensions by Zoic Studios, with Soho VFX handling everything else. In addition, Paul Bolger at COS FX, hired to do temp work, eventually took on final shots. "His work was so good, we asked him to output at 2K," Okun says. He added debris to a shot in which someone flies through a bookshelf, made an ax fly through the air, did the eye-changing shots on the character that turns into the wolf, and others.
CG Character
In this modern fairy tale, a CG werewolf stalks Red Riding Hood by night, but during the day, pretends to be a friend, or, maybe, family. By Barbara Robertson
"Which character? It could be anyone in the village," Okun says coyly. Barbara Robertson, CGW West Coast editor, asked Okun to tell us more about the werewolf and the other visual effects shots in the film.
Does the werewolf transform from one of the characters in the village during the film?

We specifically decided we didn't want to show a transformation. He shows up reasonably formed every time. I was up for the challenge, but from the story point of view, it didn't make sense, and the expense didn't make sense. Even so, toward the end of the show, the studio asked us whether we'd again explore doing that. Instead, we used an old trick: When the wolf gets angry during a big fight scene in the daytime, we figure out who the wolf is because the eyes of the individual who is the wolf change.
Reasonably formed?

To economize, Catherine [Hardwicke] decided to introduce the wolf during an attack sequence with a series of blurs. The concept was brilliant, and the execution was doubly brilliant because of the work by my editor, Neil Greenberg, and Craig Talmy, Derek Spears, and others at Rhythm & Hues. We were able to find places where you can begin to see the wolf. We ramped up the action beyond what we thought we could afford, yet stayed within the budget and schedule. We ended up with an exciting sequence that reveals the wolf bit by bit based on the actions.
What does the werewolf look like?

Our wolf doesn't look like a wolf, exactly; it looks like our wolf. It has four paws, a snout, a tail, and short hair, almost like a greyhound. We discovered that we lost muscle definition with longer hair. Derek [Spears], the Rhythm & Hues supervisor, suggested porcupine quills on his shoulders to make the character look more lethal. Catherine didn't like quills, but we used something like that: spiky hair that looks like it has a lot of product in it. The face is based on a wolf's face, but the nose is more lethal-looking, and the teeth and gums look like they haven't been brushed in years. We added dried blood in the fur, and spittle and goo. The wolf eats a lot of people. And we spent a lot of time on the eyes, especially because we had to figure out how to get humanity into the eyes. The wolf's eyes are amber, and sometimes they glow. We could control the glow on a per-shot basis based on how menacing or kind he . . . or she . . . was. We studied a documentary about wolves that had phenomenal shots of the eyes. When the light's right and you see the wolf in profile, the depth is visible. So we had to do 3D eyes and add glow on a 2.5D basis because the glows have to come from deep inside. For a couple shots, we made the eyes in 8K resolution. I doubt anyone will notice all this subtlety. But they'll feel it.
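The "2.5D" glow Okun describes, light that reads as coming from deep inside the eye rather than off its surface, can be approximated in compositing by masking the glow source with a depth pass before blurring it. The sketch below is purely illustrative, not Rhythm & Hues' pipeline; every image, depth range, and weight is invented, and the box blur stands in for a proper glow kernel.

```python
import numpy as np

h, w = 64, 64
beauty = np.zeros((h, w))
depth = np.ones((h, w))             # 1.0 = eye surface in this toy setup
beauty[30:34, 30:34] = 1.0          # bright iris region...
depth[30:34, 30:34] = 2.0           # ...sitting deeper inside the eye

# Weight the glow source by how far behind the surface each pixel sits,
# so only the deep interior contributes to the glow.
eye_surface, eye_back = 1.0, 2.0
inside = np.clip((depth - eye_surface) / (eye_back - eye_surface), 0, 1)

def blur(img, passes=8):
    """Cheap separable-ish box blur standing in for a glow kernel."""
    out = img.copy()
    for _ in range(passes):
        out = (out + np.roll(out, 1, 0) + np.roll(out, -1, 0)
                   + np.roll(out, 1, 1) + np.roll(out, -1, 1)) / 5.0
    return out

glow = blur(beauty * inside)        # only the deep, bright core glows
comp = beauty + 0.6 * glow          # additive glow over the beauty pass
```

Because the glow source is depth-weighted rather than taken from the flat beauty pass, the halo appears anchored inside the volume of the eye instead of sitting on the cornea.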
How did you come up with the design?

Catherine did an amazing amount of research. She found every werewolf from TV, film, and books. Digital Domain created a 3D wolf on a turntable from the concept art that Catherine and I presented to the studio to get approval. Once we got a green light on the film, Digital Domain wasn't available, and we hired Rhythm & Hues to flesh out the design. The wolf has to do things a wolf can't do, so our challenge was to still keep him true to form, and Craig Talmy did an amazing job.

Image courtesy Warner Bros. Pictures.

Jeffrey Okun
How does the werewolf act?

There's a sequence in the first attack where he was supposed to stand on two legs and corner Valerie. He looked horribly deformed. Wolves can't lift their elbows sideways; they can only go forward. So we had to find ways to make our wolf look right with the action they had planned and shot. We had to, I'll create a new word, "truthify" what they shot on set. Craig figured out some clever stuff, and it all clicked when Catherine, who is from Texas, decided with my help that the wolf should be more like a rodeo horse than an anthropomorphic wolf. We found motions we could justify from out-of-control rodeo horses and bulls, and feral hyenas, and we mixed them with feline actions. We had sleekness and grace countering frenzy and feral. And that was the key to making this wolf what it was. In the dye pool alley, when the wolf tries to persuade Valerie to come away with him, or her, we mixed feral crazy with reasoning and persuasiveness, and that ups the ante. And then in the final sequence, the wolf shows intelligence, patience, and the beginnings of insanity. The wolf's feral side bubbles up because the clock is ticking.
What about facial animation? Does the wolf talk?
We chose early on to have the wolf talk through telepathy because we wanted to keep him, or her, believable. Originally, we had a lot of animation and secondary animation with the facial animation; the face came alive in a realistic way. We had wind blowing in the fur. Subsurface muscles moving around to demonstrate agitation and frustration. But, we discovered that it looked like we had failed to animate his mouth, like the animation was unfinished. So, we had to dial back the secondary animation to a degree. It was always a battle between how much we could do and not make it look like he should be talking.
How did you film the wolf on set?
We asked for a green-suited actor because it's always funny. Instead, the person the wolf ends up being was there doing the dialog, wearing a wire mask that put the snout and the eyes in the right place, and we had the person posed at the right eye-line height. Also, we had "fluffy," a Styrofoam cutout of the wolf, so people could understand how big the thing was; "stuffy," the 3D furred head and shoulders on a C-stand with eyes that lit up; "flatty," a flat piece of foam core in the shape of the wolf lying on the ground so people wouldn't invade his space; and a stunt performer wearing a wolf suit from a costume store, the wire head thing, and a wire tail. When we were blocking the scenes and working out the actions on set, the stunt guy was brilliant. He's in such good shape that he would really scare the actors, and that's what we needed.
Our procedure involved a number of passes that Catherine graciously let us have. For the action scenes, we shot a pass with the stunt guy and a witness camera. Then we'd shoot a pass with stuffy to get fur lighting reference. Then, we'd set up a laser eye line or put an X on a C-stand, and they'd do the scene with no wolf. During a sequence in the dye pool alley, we had such wild camera moves that we knew we couldn't re-create them, so we shot blank tiles and put them together after the fact when we knew which part of the set they used. We shot the move with all the actors. Then the camera operator, Steve Campanelli, shot a blank version to the best of his recollection. And we went in with still cameras and HD cameras to fill in all the perspectives.
Was the movie shot entirely on sets?
We shot the entire movie inside a tiny soundstage, although it's supposed to take place outdoors, in a village in a forest in the hills. We built the sets to have depth, even though they're up against a wall, by using forced perspective. We figured out what lenses the DP would use and how often we'd be shooting off the set based on the lenses, then figured out a way to not shoot off the set, and then said, "Let's throw away the budget for a moment and figure out what we would do if we were filming on location in a village in a forest." We included that in the visual effects package. Then we had them hang a neutral gray curtain off set. Most of the story takes place in the snow, so if you shoot a bit off set and the gray shows up, it will look like gray sky.
What we didn't take into account was that because of the style of shooting and the shot schedule, a lot of lights had to be in place, and some were in front of the set. So we had a number of fix-it shots. For example, during Solomon's arrival, Steve [Campanelli, camera operator] brought the camera low and shot up, which put Gary Oldman's head inside a light. Zoic did a phenomenal roto job to fix that.
What kinds of set extensions did you do?
Zoic spent a great deal of time, alongside Catherine, designing the part of the village that we didn't build on set. Because it was 3D, they were able to drop it into shots and open up the feel of the village. When a shot felt claustrophobic, we also sometimes lowered the frame and added sky, trees, and mountains beyond, and the tops of houses. Grandma's house was on another ridiculously small stage. They put a white cyc all the way around it, with lights everywhere and trees. We removed all that, and instead of just getting rid of hot spots, we added a lake and mountains to open it up. We considered how they would shoot it if it were real, scaled down based on budget, and then looked for opportunities to add things back in.
Inside the tavern, for example, they had a small practical set and scenes with an awful lot of people. And again, we have lights directly behind people's heads, but without any motivation. So we put windows in the tavern, which fixed that and, oddly enough, makes the tavern feel better.
Did it feel like you were illustrating a fairy tale?
It did. On the visual effects end, we struggle between what looks cool and what serves the story best. It took a little while to get into Catherine's head to understand the look she was going for. Then, I understood it was a little bit fairy tale with an edge of reality. So, we'd put a flock of birds on the ground to distract the eye. Darken the left and right sides of the frame. Desaturate images and let the reds pop.
This sounds like more work than 200-plus shots suggests.
For me, it was some 500-odd shots. Zoic's set extensions would go to Rhythm & Hues to drop the wolf in, and then Soho would add the moon and sky. Zoic would build a city, and Rhythm & Hues would add the wolf. The three vendors shared shots, and I had to account for them as three separate shots because I had three payments. It was a moving target with a lot of pieces sliding in all directions. And Catherine [Hardwicke] likes to screen a lot. Every two weeks, we had a friends and family screening, and the studio did screenings, as well. So, every two weeks we had to have new or updated shots ready to drop in. That was a lot of temp work, and then that kicked into real work. So to me, it felt like a 1500-shot show; it was that intense. All the moving pieces. Staying liquid and trying to be responsive to anything the director and studio asked for. And, staying on budget. It was fun.
Mother of Invention
©2011 ImageMovers Digital LLC.
ImageMovers Digital takes a last bow with its CG feature Mars Needs Moms before the curtain drops on the studio
By Barbara Robertson
The story, written by director Simon Wells and his wife, Wendy Wells, takes a left turn from most Disney movies, in which mom plays no part. For their part, these screenwriters sent mom to Mars to nurture a colony of motherless aliens. But, the spaceship snags her 9-year-old son Milo, too. Once on the planet, Milo learns he has one night to send mom back home. Fortunately, a geeky and fat Earthman named Gribble and a rebel Martian girl help him take on the alien nation as he attempts to rescue his mom.

Created using performance capture, Disney's Mars Needs Moms is the fifth such film for producer Robert Zemeckis; the second and last for ImageMovers Digital (IMD), the performance-capture studio he founded; and the first for Simon Wells. Wells had directed several animated films, including The Prince of Egypt, for which he received an Annie nomination, and the live-action feature The Time Machine; he also had been a story artist for Shrek 2, Shrek the Third, Flushed Away, and other animated films.

"The method intrigued me," Wells says. "I'd been fascinated by motion capture. I like that you get to work with actors, they get to work with one another, and that it's performance, not voice-over in a booth. So that appealed to the part of me that liked filming live action. And I have directed a number of animated films, so having all the advantages of animation in postproduction appealed enormously to me."

Wells took the job on one condition: that he and Wendy could write it. "They had done some development work, but they weren't happy with it," Wells says. "So we took Berkeley Breathed's book and worked directly from that. The film was always conceived as a motion-capture project."
Modifying the Process
Although the process Wells used was similar to that developed by Zemeckis to incorporate live-action techniques for A Christmas Carol and previous films, Wells modified it in ways that reflected his work on traditional and CG animated films. As with Zemeckis's films, Wells captured actors' dialog and performances; on Mars, the hands, face, and body of as many as 13 people at once.

"We did scenes in one continuous take," Wells says. "It's better than live action, where you do master shots and then coverage, and each has to be set up while the actors go to their trailers, lose energy, and then have to get up to speed again." Then, he selected the performances he liked. "At this point, the camera angle is irrelevant," Wells says. "You'll see, during the credit roll at the end of the movie, the performances we captured. They are nothing like the shots in the movie."

Wells shot the actors for five weeks starting at the end of March 2009. When they finished, he selected the performances he liked. The crew then applied the motion data captured from the bodies in selected performances to what they call video game resolution characters. But, whereas Zemeckis had a rig that allowed him to steer a virtual camera using gear that resembled a real camera, Wells drew on his experience as a story artist and director for traditional animated films to create a rough cut from the performances.

"Wayne [Wahrman, editor] and I found we could imagine the shots in 3D space, so I would draw thumbnails to describe to the artists how we wanted the shots to go," explains Wells. "We would pretty much talk our way through the sequences, sketching them out. The crew made our low-res 3D shots from that."

With the 3D characters in the 3D sets for each shot, Wells could decide where he wanted the virtual camera. "Once you've built your 3D models, you have the actors in a box," Wells says. "You can have them do their best takes over and over and over without complaining, and you can do photography to your heart's content. We pasted the facial performances onto the low-res models so we could see the actors. It was a bit squirrely to have what looked like a mask stuck on a 3D model, but it was very useful for timing."

ImageMovers Digital translated motion data onto FACS-based expressions, such as these, that the crew implemented within a blendshape facial animation system.

Wells found the process of filmmaking using this method exhilarating and a bit dangerous. "You can wander around and look at the actors from any angle you like," he says. "So it's a rabbit hole that you can quickly go down. I was shooting and reshooting and reshooting the same performance again and again. It takes a certain discipline to make decisions and move the story from scene to scene."
With the shots edited into sequences, the camera moves in place, and the overall rhythm of the movie set, Wells began working with the animation team in August 2009. Huck Wirtz supervised the crew of approximately 30 animators who refined the captured performances, working with the motion-captured data applied to higher-resolution models. Here, Wells departed from Zemeckis's approach, too.

"Bob [Zemeckis] and I have the same feeling about staying true to the actor's performance," Wells says. "But he was trying to get photoreal movement. I chose to take it to a degree of caricature, which was in tune with the character design. I'd take an eyebrow lift and push it a bit farther. The smiles got a bit bigger. It was exactly what the actor did, just pushed a bit."

Wells believes that most people won't notice the caricature, which isn't cartoony, but gave the characters more vitality. "It makes them feel a bit more alive than the straight transfer of motion data," he says. "In terms of actual choices, the way the character behaves emotionally came entirely from the actors." That said, though, being able to translate that emotion through the medium took skill.
Refining the Performances
Wirtz organized the work by sequences that he scheduled on a nine- to 10-week basis, and then split the work among himself and two sequence leads. "We all worked equally on everything, but if it came down to someone having to get pummeled, it was usually me. I was glad to take it, though."

In addition, five lead animators took responsibility for particular characters, usually more than one character. And, Craig Halperin supervised the crowd animation. "We created the motion cycles for him to use in Massive," Wirtz says. Otherwise, the crew used Autodesk's Maya for modeling, rigging, and animation, and Mudbox for textures.
The animators began translating the actors' emotional performances by cleaning up data from the body capture first. Then, they moved on to the more difficult tasks of refining the motion-captured data for the hands and faces.

"We had a great system worked out," Wirtz says. "We'd start by showing Simon a whole scene with the data on the high-res models, but with no tweaks. We called that zero percent." Next, they'd adjust the eye directions, adding blinks and altering eye motion, if needed, so all the characters were looking at the right spots. And, they worked on the hands. "We made sure the characters grabbed what they needed to grab," Wirtz says. They showed the result to Wells and called that stage 33 percent.

Once Wells approved the 33 percent stage, the animators moved on to the mouths, making sure the lips synched to the dialog and that the facial expressions were appropriate. "We wanted to be sure the eyes caught the tone Simon wanted," Wirtz says. That resulted in the 66 percent stage. Between 66 and 99 percent, the animators worked on the fine details.

"We did a lot of hand tuning on everything," Wirtz says. "A lot on the faces, on anything they hold or grab, and any contacts: feet on the ground, things like that. Sometimes the data comes through perfectly and you're amazed right out of the box, but you always have to do the eyes."
The facial animation system used blendshapes based on FACS expressions. "It looked at where the markers were and tried to simulate the expressions," Wirtz says. "We also based the system on studying muscle motion. The system kept evolving; we kept refining it. It takes a lot of heavy math to spit out a smile. Simon was happy with the performances by the actors on stage, so we worked hard to keep that emotion. We were definitely translating a human performance onto an animated character, and what makes it come through is that we tried to get back to the human performance."
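The "heavy math" Wirtz alludes to, turning captured marker positions into weights on FACS-style blendshape targets, can be sketched as a least-squares fit. This is a hypothetical illustration, not IMD's actual system: the marker layout, the `solve_weights` and `apply_weights` helpers, and the simple clamped solve are all assumptions, and a production rig would add temporal smoothing and muscle-based constraints.

```python
import numpy as np

def solve_weights(markers, neutral, targets):
    """Find blendshape weights that best reproduce captured marker positions.

    markers: (M, 3) captured facial marker positions for one frame
    neutral: (M, 3) marker positions on the neutral face
    targets: dict of pose name -> (M, 3) marker positions for that FACS-style pose
    """
    names = sorted(targets)
    # Each column of A is one target's offset from neutral, flattened to a vector.
    A = np.stack([(targets[n] - neutral).ravel() for n in names], axis=1)
    b = (markers - neutral).ravel()
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    w = np.clip(w, 0.0, 1.0)  # keep each activation in a plausible range
    return dict(zip(names, w))

def apply_weights(neutral_mesh, target_meshes, weights):
    """Blend the full-resolution meshes using the solved weights."""
    out = neutral_mesh.copy()
    for name, w in weights.items():
        out += w * (target_meshes[name] - neutral_mesh)
    return out
```

The appeal of this scheme for a performance-capture pipeline is that the solve happens in the low-dimensional space of expression weights, so the result always lands on a combination of sculpted, approved poses rather than raw, noisy marker motion.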
Keeping the characters out of the Uncanny Valley, where they look like creepy humans rather than believable characters, depends primarily on the eyes, Wirtz believes. "We paid close attention to what the eyes are doing," he explains. "We tried to follow every tick carefully. It's not just the eyeball itself; it's also the flesh around the eyes. It has to be there, working correctly, all the motivational ticks and quirks in the eyebrows. The other side of it is the rendering."
Creating the Look
Rendering, along with texture painting, look development, lighting, effects animation, compositing, and stereo, fell under visual effects supervisor Kevin Baillie's purview. Artists textured models using Adobe's Photoshop, Maxon's BodyPaint 3D, and Autodesk's Mudbox for displacements. Pixar's RenderMan produced the shading and lighting via an in-house tool called Isotope that moved files from Maya.

IMD could capture data from the faces, hands, and bodies of as many as 13 actors on set at one time.
A crew of approximately 30 animators refined the performances once they were applied to CG models.
"We know what real looks like," Baillie says. "Being a half-percent wrong puts a character in the Uncanny Valley. So, we made the decision to stylize the characters and have a bit of fun with them. When you have characters that are more caricatures than photoreal humans, the audience lets the character off the hook a little bit."

To help animators see how the eyes would look once rendered, R&D engineer Mark Colbert created a virtual eye. "Usually animators work a little bit blind," Baillie punned, and then explained that animators have geometry for the iris, pupil, cornea, and so forth. But the animation packages don't show the refraction from the cornea to the pupil, and that pushes the pupil. Thus, animators often have to adjust the eye line after rendering.

"Mark [Colbert] created a way for animators to see the effect of the changing refraction in the Maya viewport using a sphere with a bump for corneal bulge and a CGFX shader," Baillie says. The CGFX shader produced the results in real time.
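The effect Baillie describes is ordinary Snell's-law refraction: a view ray bends as it crosses the corneal surface, so the pupil behind it appears shifted until the shot is actually rendered. A minimal sketch of the underlying math follows; the refractive indices and vector conventions here are generic optics-textbook assumptions, not values from IMD's shader.

```python
import numpy as np

def refract(incident, normal, n1, n2):
    """Bend a unit view ray at a surface boundary using Snell's law.

    incident: unit vector traveling toward the surface
    normal:   unit surface normal pointing back toward the viewer
    n1, n2:   refractive indices on the incoming and outgoing sides
    Returns the refracted unit direction, or None on total internal reflection.
    """
    eta = n1 / n2
    cos_i = -float(np.dot(normal, incident))
    k = 1.0 - eta * eta * (1.0 - cos_i * cos_i)
    if k < 0.0:
        return None  # ray reflects instead of transmitting
    return eta * incident + (eta * cos_i - np.sqrt(k)) * normal

# A ray entering the cornea off-axis (air n~1.0 into corneal tissue n~1.376)
# bends toward the surface normal, which is why the pupil seen through the
# corneal bulge sits somewhere other than where the raw geometry puts it.
ray = np.array([np.sin(np.pi / 4), 0.0, -np.cos(np.pi / 4)])  # 45 degrees off-axis
bent = refract(ray, np.array([0.0, 0.0, 1.0]), 1.0, 1.376)
```

Evaluating this same bend in a hardware viewport shader, as the article says Colbert did, means an animator can set an eye line against what the final render will show instead of correcting it after the fact.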
Much of the film takes place in the shiny-metal Mars underground, which became an interesting rendering predicament. "RenderMan is phenomenal at displacement, motion blur, and hair, but shiny things and raytraced reflections are challenging," Baillie says. "We couldn't use spotlights. We had to have objects cast light. So we implemented point-based lighting techniques for reflections."

Christophe Hery, who is now at Pixar, had joined ImageMovers Digital during production and helped the crew implement an evolution of techniques developed while he was at Industrial Light & Magic, and for which he had received a Sci-Tech award (see "Bleeding Edge," March 2010). "We rendered scenes with fully reflecting walls and floors in 15 minutes," Baillie says. "It was unheard of. That optimization really saved our butts. We did all our indirect illumination using point clouds generated from all the lights in the scene that had to emit light. We'd bake the floor into a point cloud, and had those points cast lights."
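The trick Baillie outlines, baking light-emitting surfaces into points and letting each point act as a tiny light, can be illustrated with a toy gather loop. This is a schematic stand-in for RenderMan's point-based machinery, with invented data structures; a real implementation clusters points hierarchically and accounts for occlusion rather than brute-force summing every point against every shading sample.

```python
import numpy as np

def bake_emitter(positions, normals, areas, radiance):
    """Bake emissive surface samples into a point cloud of tiny lights."""
    return [{"P": p, "N": n, "power": radiance * a}
            for p, n, a in zip(positions, normals, areas)]

def gather_indirect(cloud, x, n_x):
    """Sum the light arriving at shading point x (normal n_x) from the cloud."""
    total = 0.0
    for pt in cloud:
        d = x - pt["P"]
        r2 = float(np.dot(d, d))
        if r2 < 1e-8:
            continue  # skip a point coincident with the shading point
        w = d / np.sqrt(r2)                              # direction emitter -> x
        cos_emit = max(0.0, float(np.dot(pt["N"], w)))   # emitter faces x
        cos_recv = max(0.0, float(np.dot(n_x, -w)))      # x faces the emitter
        total += pt["power"] * cos_emit * cos_recv / (np.pi * r2)
    return total

# "Bake the floor into a point cloud, and have those points cast light":
floor = bake_emitter(positions=[np.zeros(3)],
                     normals=[np.array([0.0, 0.0, 1.0])],
                     areas=[1.0],
                     radiance=np.pi)
lit = gather_indirect(floor, np.array([0.0, 0.0, 1.0]),
                      np.array([0.0, 0.0, -1.0]))
```

Because the expensive geometry is reduced to a cloud of point lights once per frame, every reflective wall and floor can contribute illumination without firing rays back at the full scene, which is consistent with the render-time savings the article reports.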
When the characters Gribble and Milo find themselves in an abandoned Martian city with glowing lichen and other bioluminescent vegetation, the crew handled the lighting by baking each little plant into a point cloud that cast light onto the ground. They also used point clouds for subsurface scattering on the characters' faces.

"Christophe really helped out a lot with that," says Baillie. "We had the characters singing on all four cylinders because we had the guy who invented the technique working with us. We started with a shadow-map version of subsurface scattering but ended up preferring the look and speed of point-cloud-based subsurface scattering." When they reached the limitations of the point-cloud techniques (lips might glow when they touched), Pixar's RenderMan development team jumped in to help.

"I think the thing that had the biggest impact on the film at the end of the day, and the scariest at first, was the extensive use of indirect lighting with point clouds," Baillie says. "The studio had put a lot of hours into a system they had used for A Christmas Carol, so it was hard to persuade them to change. But I'm super glad we pursued it."
Moving On
While they were still in production, the crew learned that Disney would close the studio. "It was sad," Wells says. "I understand Disney's decision from a business point of view. To keep the studio running, they'd have to guarantee a tent pole every year. They didn't want to carry another standing army of 450 artists. But from the point of view of creative artists, this crew was working together so efficiently and fluidly, producing such high-quality work, and it was heartbreaking to see it broken up."

Already Baillie has joined with two other former crew members from ImageMovers Digital: Ryan Tudhope, who came to IMD from The Orphanage, and Jenn Emberly, who was performance supervisor at IMD, and before that, animation supervisor at ILM. The three have founded Atomic Fiction, a visual effects studio based in Emeryville, California. For his part, Wirtz founded Bayou FX in his home state of Louisiana, in Covington, near New Orleans.
It's likely that the former IMDers will continue networking in interesting ways, as they take what they have learned at the studio and expand it out into the universe. "Every once in a while, a bunch of us get together and reminisce about the awesome things we did together," Baillie says. "One of the hardest parts of my job was figuring out which amazing idea to go with. It was a constant barrage of consistent amazement."

Barbara Robertson is an award-winning writer and a contributing editor for Computer Graphics World.

(Top) The design for the aliens meant they had to be CG characters; they couldn't be people in suits. Animators scaled the mocap data appropriately. (Bottom) Gribble and Milo are two of the four human characters in the film, but they are caricatures, too, and that helps them avoid the Uncanny Valley.
Khalid Al-Muharraqi
The Middle East entrances Westerners with its mystique: the labyrinthine passageways between cobbled alleys, the exotic marketplaces, the unique animals so well adapted for the challenging desert environment. Digital artist Khalid Al-Muharraqi knows these sights well. Living in Bahrain, a Middle Eastern archipelago in the Persian Gulf, the artist expertly captures the allure of the area's contemporary flavor with its traditional underpinnings. However, his range as an artist extends far beyond this cultural boundary with his diversified portfolio of creatures, animals, and architecture.

Muharraqi, owner of Muharraqi-Studios, has been creating CG work for approximately 15 years. His love for art began in the traditional realm: as a painter working with watercolors and gouaches. "I sold my first painting when I was 14 years old for $2000," Muharraqi notes. In fact, it was his father, a painter himself, who trained Muharraqi to paint, "so I was drawing and coloring since I was eight years old," he says. "It helped me see how he thinks and how he sketches his ideas." When he came to the US, he utilized his artistic eye, studying interior design before heading into marketing and advertising after attending The Art Institute of Houston.

"I try to reflect in my work the things that I am interested in," says Muharraqi. "I love the old stories that I used to see in books when I was in my father's studio. He loved gathering books and magazines that had lots of artistic images; those images have affected my work." The artist points out, though, that he does not have a singular style: "I love to change and learn new things. Every project I do I learn something new. My art is always developing, and I grow with it."

At one point, 10 to 15 years ago, most of Muharraqi's work focused on design, corporate branding, and advertising. That was when he embraced CG art. Since then, most of the pieces he has done in the Middle East were related to 3D visualizations of architectural developments due to the construction boom there. "But my real fun work is character development and design, as well as story-related projects," he adds. "This line of work gives me greater pleasure and freedom."

A cross-platform artist, Muharraqi uses both the Mac and PC at the same time. "I seem to have to upgrade all the time. So when most people are thinking about their next phone or the latest car, I am interested in the fastest graphics card or better processor performance." In terms of software, he is a NewTek LightWave user ("It feels at home for me: simple, fast, and makes sense all over"), though he also counts Pixologic's ZBrush, Adobe's Photoshop, and Luxology's Modo as part of his tool set, enabling him to bring a piece of his home, and then some, to the rest of us. -Karen Moltenbrey
There has never been a better time to be working in visual effects. Each year, global VFX facilities are contributing amazing effects to a film industry that is constantly raising the bar in order to push visual boundaries and cinematic experiences further than ever before.

As well as being great news for audiences all over the world, this drive to create bigger and more elaborate effects is great for established and aspiring VFX professionals who want to work at the highest level creating cutting-edge visuals.

Finding the Right Match
Attracting and retaining the right talent is a key part of any successful VFX facility's strategy when it comes to development and the ability to compete and perform on a global level.

"Double Negative (DNeg) is an organization built on passion and enthusiasm, as well as world-class creative and technical talent," says Hannah Acock, talent manager at DNeg. "These are the qualities that we look for in people whenever we add to our team. People who commit to do their best, have integrity, and are willing to learn, stretch, and challenge themselves will always find plenty of opportunities to develop and grow within Double Negative."

"What DNeg does is produce groundbreaking visual effects for the cinema. If someone can show the potential to do that on their reel and is a proven team player, we are always going to be interested in talking to that person," Acock says.

What does it take to get to the reel deal? Acock points out that an outstanding reel with coherent shot breakdowns and an informative resume is essential in any application. She advises prospects to ensure that their show reel is working hard for them; there should never be any excess or diluted work that will detract from the main event, which should be the first 15 seconds of any reel.
Due to the huge number of applications that are received each day at DNeg, and most likely other top-level VFX studios in this industry, the most eye-catching reels are those that are condensed and sharp, demonstrating a high standard of work.

Reels that have been personalized to highlight work relevant to DNeg (for instance, photorealism) also stand out. To this end, Acock advises that whenever possible, show reels should be geared to the company that a job seeker is applying to, making it much easier for all facilities to review work that is applicable to what they do. It's also important for aspiring artists to remember that the competition is fierce and reels do get rejected. Any feedback that is offered can help improve a person's chances if they apply at a later date with an updated and improved reel.

Leave Your Ego at the Door
As well as technical and creative expertise, personality fit is a key factor in the recruitment process. Big egos and over-the-top self-promotion do not sit well in any crew; it's always best to let your work do the talking.

According to Acock, personal recommendations through existing employees are taken seriously at the studio. "Who better to understand the right fit for our culture than existing DNeg team members?" she adds. "We have had great success in adding to our teams through recommendations, and our recruitment team is always happy to hear from people who have worked with our artists on projects outside of DNeg."
Most organizations also recognize the huge

Face-to-Face Learning

(Above) Double Negative artists created the effects for Scott Pilgrim vs. the World. (Left) Hannah Acock, talent manager at the studio, maintains that at DNeg, as elsewhere, hiring the right talent is a key element of a VFX studio's success.
DIGITAL IMAGING
CAD Conversion
Luxion, a developer of advanced rendering and lighting technology, has announced Version 2.2 of its KeyShot software for creating photographic images from 3D CAD models. The latest edition offers improvements to the import pipeline and model interaction in real time, as well as enhanced render speeds, especially when working with complex materials. KeyShot 2.2 includes native support for Autodesk AutoCAD, ensuring interoperability with leading CAD systems. Additional features new to Version 2.2 include: the preservation of model structure from a CAD model, including all subassemblies; preservation of part/layer names; a part outline in the real-time window; the ability to move multiple objects; the ability to duplicate objects; and support for SolidWorks 2011, Catia V5, Autodesk Inventor 2011, DXF/DWG (AutoCAD), SketchUp 8, 3Dconnexion 3D mice, and foreign languages, including Chinese, English, French, German, Italian, Japanese, Korean, and Polish. KeyShot 2.2 is available for the Mac and PC starting at $995. KeyShot 2.2 is a free upgrade for all existing KeyShot 2 customers. A free trial is available online.
Luxion; www.keyshot.com
VIDEO
Expanded to Avid
Singular Software, developer of workflow automation applications for digital media, has enhanced PluralEyes with expanded support for nonlinear editing applications, including Avid's Media Composer and NewsCutter software applications. PluralEyes accelerates the workflow for multicamera, multi-take, and dual-system audio productions by analyzing audio information and automatically synchronizing audio and video clips. For video producers of all skill levels, PluralEyes can be used for a range of projects, from weddings and live events to documentaries, commercials, and indie films. PluralEyes for Media Composer is now available for $149 running on Windows XP, Vista, or Windows 7. A 30-day free trial version of PluralEyes is also available for download.
Singular Software; www.singularsoftware.com
SIMULATION
Atmospheric Upgrade
E-on software, makers of digital nature solutions and technologies, has unveiled Version 5 of its Ozone atmospheric plug-in for Maxon's Cinema 4D, NewTek's LightWave, and Autodesk's 3ds Max, Maya, and Softimage. Ozone 5 employs E-on software's Spectral 3 engine for the simulation and rendering of atmospheric effects, including realistic environments with volumetric clouds, accurate light dispersion, and natural phenomena, such as Godrays. Ozone 5 is compatible with V-Ray's Sun and Sky technologies and Mental Ray's specific technologies, such as Sun&Sky, Photometric Lights, and Distributed Bucket Rendering. Version 5 adds faster rendering; cloud cross shadowing, a user-controllable feature whereby clouds cast shadows on each other; and a library of more than 150 predefined atmospheric conditions and 140-plus preset cloud shapes. Ozone 5 runs on Windows XP, Windows Vista, and Windows 7, and on Mac OS X 10.5+ (Mac Intel only) for 32 and 64 bits. Ozone 5 is available for $295, with upgrades from Ozone 4.0 available for $95. RenderNode network rendering licenses are available at $95 per node.
E-on software; www.e-onsoftware.com
VFX
Five Filters
Phyx has released Phyx Stylist for the FxFactory platform by Noise Industries, developer of visual effects tools for the postproduction and broadcast markets. Phyx Stylist is a collection of generators and filters designed to work with Adobe After Effects, Apple Final Cut Pro, Apple Motion, and Apple Final Cut Express. Phyx Stylist's image-processing tools help artists replicate the look of optical systems, generate realistic fog, relight actors, remove unwanted haze from footage, and produce shimmering stars, highlights, and illuminations. Phyx Stylist includes five distinct filters and generators for adding effects and enhancing image quality. Phyx Stylist is priced at $199. Adobe After Effects, Apple Final Cut Studio, and Apple Final Cut Express users can download a free trial version of FxFactory from the Noise Industries Web site.
Phyx; www.phyxware.com
Noise Industries; www.noiseindustries.com
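Haze-removal filters of this kind are typically grounded in the standard haze-formation model, I = J*t + A*(1 - t), where J is the haze-free scene, A is the airlight color, and t is the transmission. A minimal sketch of inverting that model (an illustration of the general principle only, not Phyx's implementation):

```python
import numpy as np

def remove_haze(image, airlight, transmission):
    """Invert the haze model I = J*t + A*(1 - t) to recover J.

    `image` holds float pixel values in [0, 1], `airlight` is the
    estimated haze color, and `transmission` (0-1) is the fraction of
    scene light that survives the haze.
    """
    t = max(transmission, 0.1)  # floor t so dim pixels don't blow up
    clear = (image - airlight * (1.0 - t)) / t
    return np.clip(clear, 0.0, 1.0)

# A pixel of true value 0.5 seen through 80% transmission, white airlight:
hazy = 0.5 * 0.8 + 1.0 * 0.2                    # observed value, ~0.6
print(remove_haze(np.array([hazy]), 1.0, 0.8))  # recovers ~[0.5]
```

Real dehazing tools also have to estimate `airlight` and per-pixel `transmission` from the footage itself, which is where the interesting work lies.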
March 2011, Volume 34, Number 2: COMPUTER GRAPHICS WORLD (USPS 665-250) (ISSN-0271-4159) is published monthly except in January and August (10 issues annually) by COP Communications, Inc. Corporate offices: 620 West Elk Avenue, Glendale, CA 91204, Tel: 818-291-1100; Fax: 818-291-1190; Web address: info@copprints.com. Periodicals Postage Paid at Glendale, CA, 91205 & additional mailing offices. COMPUTER GRAPHICS WORLD is distributed worldwide. Annual subscription prices are $72, USA; $98, Canada & Mexico; $150 International airfreight. To order subscriptions, call 847-559-7310.
© 2011 CGW by COP Communications, Inc. All rights reserved. No material may be reprinted without permission. Authorization to photocopy items for internal or personal use, or the internal or personal use of specific clients, is granted by Computer Graphics World, ISSN-0271-4159, provided that the appropriate fee is paid directly to Copyright Clearance Center Inc., 222 Rosewood Drive, Danvers, MA 01923 USA 508-750-8400. Prior to photocopying items for educational classroom use, please contact Copyright Clearance Center Inc., 222 Rosewood Drive, Danvers, MA 01923 USA 508-750-8400. For further information check Copyright Clearance Center Inc. online at: www.copyright.com. The COMPUTER GRAPHICS WORLD fee code for users of the Transactional Reporting Services is 0271-4159/96 $1.00 + .35.
POSTMASTER: Send change of address form to Computer Graphics World, P.O. Box 3296, Northbrook, IL 60065-3296.
For additional product news and information, visit CGW.com
SOFTWARE
Studio Update
pmG has released MessiahStudio5, the latest version of its CG visual effects software, enhanced with user-suggested improvements. The new Dynamic Render interactive renderer shows users fully rendered images of a scene as changes are made, while the new Sketch mode enables animators and directors to draw in 2D on top of each frame, sketch corrections and guides for animation feedback, or rough out a motion in traditional cel style before animating in 3D. UV Bake Render enables users to render complex materials created in Messiah's shader flow to a UV-based texture map for export to game engines or models for sale. MessiahStudio5 CG production software is available in two versions: Basic, priced at $499, and Professional, priced at $1,199. Both come complete with effects, animation, rendering, and a software development kit (SDK).
pmG Worldwide LLC;
www.projectmessiah.com
CG Combo
Tweak Software and The Foundry have announced the integration of the companies' RV and Nuke software tools. The integrated RV/Nuke package adds high-performance RV playback to the Nuke node-based compositor and enables a seamless visual effects workflow. Developed by Tweak Software with support from The Foundry, the package gives artists tools to play back, organize, compare, and track the history of their work in Nuke. Tweak Software's RV image and sequence viewer communicates with Nuke over a live connection to enable direct interoperation. The RV/Nuke package is available as a no-cost download to current users of Nuke and RV.
Tweak Software; www.tweaksoftware.com
The Foundry; www.thefoundry.co.uk
RENDERING
V-Ray Max
Chaos Group has released V-Ray 2.0 for 3ds Max, a rendering solution that includes the V-Ray RT interactive rendering system and V-Ray RT GPU, to meet the growing demand for 3D stereoscopic content. V-Ray 2.0 boasts new features and improvements designed to enable faster feedback and final results. Version 2.0 adds native stereoscopic support, as well as support for light dispersion. Added utilities include VRayLensEffects, for producing glow and glare with support for obstacle images and diffraction; VRayCarPaintMtl, a material with accurate flake simulation; VRayLightSelect, to extract the contribution of individual lights; VRayStereoRig, for easy stereoscopic setup; VRayLensAnalysis, for realistic distortion patterns for V-Ray physical cameras; VRayDistanceTex, for a range of effects based on the distance between objects in the scene; and VRayMultiSubTex, to assign different textures based on object ID. V-Ray 2.0 for 3ds Max is now available.
Chaos Group; www.chaosgroup.com
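At its core, a stereoscopic rig like the one VRayStereoRig sets up places two cameras offset along the rig's right axis by the interaxial separation. A back-of-the-envelope sketch of that geometry (a hypothetical helper for illustration, not Chaos Group's API):

```python
import numpy as np

def stereo_eye_positions(center, forward, up, interaxial=0.065):
    """Return (left, right) camera positions for a parallel stereo rig.

    The two eyes sit half the interaxial distance to either side of
    `center` along the rig's right vector; 0.065 scene units roughly
    matches average human interocular distance in meters.
    """
    fwd = np.asarray(forward, dtype=float)
    fwd /= np.linalg.norm(fwd)
    right = np.cross(fwd, np.asarray(up, dtype=float))
    right /= np.linalg.norm(right)
    half = interaxial / 2.0
    center = np.asarray(center, dtype=float)
    return center - right * half, center + right * half

# A rig at the origin looking down -Z with +Y up:
left_eye, right_eye = stereo_eye_positions([0, 0, 0], [0, 0, -1], [0, 1, 0])
print(left_eye, right_eye)  # [-0.0325 0. 0.] and [0.0325 0. 0.]
```

Production rigs layer convergence, zero-parallax distance, and per-eye render settings on top of this, but the eye offset is the foundation.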
VIDEO
3D Footage
Artbeats announces its burgeoning Stereoscopic 3D (S3D) stock footage library to meet the growing demand for 3D elements for television and film. Current offerings include a variety of nature shots, plus the first S3D aerial stock footage available to the royalty-free market. New 3D content will be added monthly, including pyrotechnic, city scene, establishment, winter scene, and additional aerial collections. Artbeats used Pictorvision's gyro-stabilized Eclipse 3D aerial rig and two RED MX cameras with Optimo 17-80mm lenses to capture aerials of downtown Los Angeles, Hollywood, Santa Monica, Long Beach, and the shores of Catalina. Artbeats' 3D equipment will expand to include RED Epic cameras and a beam-splitter rig. Artbeats' S3D stock footage is offered as QuickTime movie files in formats that include HD stereo pairs, separate HD right and left views, and 4K versions. Artbeats provides metadata specific to each S3D clip, an S3D Video Guide, and a free downloadable full-HD-resolution S3D clip. Royalty-free pricing for S3D HD footage is $449 to $799 per clip. Artbeats' FootageHub rights-managed S3D HD clips range in price from $499 to $1,999.
Artbeats; www.artbeats.com/S3D
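Stereo-pair deliverables like these pack the left and right views into a single frame, so unpacking them is a simple slice once the frame is decoded. A minimal sketch (assuming frames arrive as NumPy arrays, for example after decoding the QuickTime files):

```python
import numpy as np

def split_side_by_side(frame):
    """Split a side-by-side stereo frame (H x 2W x C) into left/right views."""
    height, width, channels = frame.shape
    half = width // 2
    return frame[:, :half], frame[:, half:]

# A tiny 2 x 4 "frame": left half white, right half black
frame = np.zeros((2, 4, 3))
frame[:, :2] = 1.0
left_view, right_view = split_side_by_side(frame)
print(left_view.shape, right_view.shape)  # (2, 2, 3) (2, 2, 3)
```

Note that side-by-side packing often squeezes each eye to half horizontal resolution, so a real pipeline would rescale each view back to full width after splitting.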
VIDEO
Quick Check
DSC Labs, a developer of products for image-quality improvement, introduces the Hawk Quick Check Chart (QCC), designed by senior video and broadcast engineer Gary Hawkins. Hawk QCC provides a simple way to match cameras on location. For a single- or multi-camera shoot, the Hawk QCC records critical information from an original scene setup using a primary chart; it then provides data for camera evaluation and scene matching. Hawk QCC measures 11x9.5 inches and features white patches for white balancing; gamma at 18 percent gray for accurate iris or lighting adjustment; black patches for scene black and flare checking; flesh tones for checking consistency of flesh tones; and mid-gray strips top and bottom to monitor evenness of lighting. The new Hawk QCC is now available for $130.
DSC Labs; www.dsclabs.com
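The 18-percent gray reference works because standard video transfer functions map that patch to a predictable signal level: under the Rec.709 OETF, middle gray encodes to roughly 41 percent signal, so an operator can adjust iris or lighting until the patch reads there. A sketch of that transfer function (Rec.709 is chosen here purely for illustration; the chart itself works with whatever gamma the camera applies):

```python
def encode_rec709(linear):
    """Rec.709 OETF: map scene-linear light (0-1) to video signal (0-1)."""
    if linear < 0.018:
        return 4.5 * linear          # linear toe near black
    return 1.099 * linear ** 0.45 - 0.099

# Middle gray (18% reflectance) lands near 41% signal:
print(round(encode_rec709(0.18), 3))  # 0.409
```

Reading the encoded gray patch on a waveform monitor and comparing it against this expected level is essentially what a chart-based exposure check automates by eye.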