
3D Movies And 2D-3D Animation

3D Movies
A 3-D (three-dimensional) film or S3D (stereoscopic 3D) film is a motion picture that enhances the illusion of depth perception. Derived from stereoscopic photography, a regular motion picture camera system is used to record the images as seen from two perspectives (or computer-generated imagery generates the two perspectives in postproduction), and special projection hardware and/or eyewear are used to provide the illusion of depth when viewing the film.

3D Technologies
Stereoscopic motion pictures can be produced through a variety of different methods.

1. Anaglyph Technology
Anaglyph is a type of stereo 3D image created from two photographs taken approximately 2.5 inches apart, roughly the distance between human eyes. The red color field of the left photo is combined with the complementary field of the right photo in such a way as to create the illusion of depth. In America, red/blue glasses are usually worn to view the effect; in Europe, the red/green combination is common. Anaglyph images were the earliest method of presenting theatrical 3-D, and the one most commonly associated with stereoscopy by the public at large, mostly because of non-theatrical 3D media such as comic books and 3D TV broadcasts, where polarization is not practical. They were made popular by the ease of their production and exhibition.

In an anaglyph, the two images are superimposed in an additive light setting through two filters, one red and one cyan. In a subtractive light setting, the two images are printed in the same complementary colors on white paper. Glasses with colored filters in each eye separate the appropriate images by cancelling the filter color out and rendering the complementary color black. The anaglyph 3-D system was the earliest system used in theatrical presentations and requires less specialized hardware. Anaglyph is also used in printed materials and in 3D TV broadcasts where polarization is not practical.

How to Make A 3D Anaglyph Movie


Several different cameras can be used for making an anaglyph movie; a digital camera is most common because it makes it easy to get the images into the computer. Take two separate pictures, moving the camera about 1 to 2.5 inches between shots, for each set of pictures. A set consists of both a right and a left image. These are then brought into the computer and manipulated in image editing software (such as Corel Photo-Paint or Photoshop) in order to blend the two images and create the red/blue shift in the color fields. The result, when viewed on a computer monitor with red/cyan glasses, is quite striking.
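The channel mixing that the editing software performs can be sketched in pure Python, assuming each image is just a list of (R, G, B) tuples; this is a simplified sketch of the principle, while real workflows operate on full image files in an editor:

```python
def anaglyph_pixel(left_rgb, right_rgb):
    # Red channel from the left-eye image; green and blue (cyan) from the right
    return (left_rgb[0], right_rgb[1], right_rgb[2])

# Two tiny one-row "images" as lists of (R, G, B) tuples
left = [(200, 10, 10), (50, 60, 70)]
right = [(90, 100, 110), (5, 200, 250)]
anaglyph = [anaglyph_pixel(l, r) for l, r in zip(left, right)]
```

Viewed through red/cyan glasses, the left eye sees only the red channel (from the left photo) and the right eye only the green/blue channels (from the right photo), which recreates the two perspectives.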

Types of Glasses used in anaglyph technology


Red/Cyan anaglyph glasses: the red retinal focus differs from the image through the cyan filter, which dominates the eye's focusing.
Green/Magenta anaglyph glasses: the green retinal focus differs from the image through the magenta filter, which dominates the eye's focusing.
Blue/Amber anaglyph glasses: the blue retinal focus differs from the image through the amber filter, which dominates the eye's focusing.

2. Polarization Technology
The polarization 3-D system has been the standard for theatrical presentations. It has better color fidelity and less ghosting than the anaglyph system. 3D polarized TVs and other displays are also available from several manufacturers. To present a stereoscopic motion picture, two images are projected superimposed onto the same screen through different polarizing filters. The viewer wears low-cost eyeglasses which also contain a pair of polarizing filters oriented differently (clockwise/counterclockwise with circular polarization, or at 90-degree angles, usually 45 and 135 degrees, with linear polarization). As each filter passes only the light which is similarly polarized and blocks the light polarized differently, each eye sees a different image. This produces a three-dimensional effect by presenting the same scene to both eyes, but depicted from slightly different perspectives. Since no head tracking is involved, the entire audience can view the stereoscopic images at the same time. Resembling sunglasses, RealD circular polarized glasses are now the standard for theatrical releases and theme park attractions.

Types of Glasses used in polarized technology


3D glasses of this kind are technically called polarized glasses. The illusion of 3D is created because the polarized lenses prevent some light waves from reaching each eye: each lens blocks the light that does not match its polarization and passes only light oriented in a specific direction.

3. Parallax barrier (3D-TV)


A parallax barrier is a device placed in front of an image source, such as a liquid crystal display (LCD), to allow it to show a stereoscopic image without the need for the viewer to wear 3D glasses. Placed in front of the normal LCD, it consists of a layer of material with a series of precision slits, allowing each eye to see a different set of pixels, so creating a sense of depth through parallax in an effect similar to what lenticular printing produces for printed products.

4. 3D Broadcasting (3D Channels)


DirecTV became the first service provider to offer a 3D channel package. DirecTV Cinema in 3D is the first movie channel fully in the third dimension. Its line-up is currently limited, but if demand continues to grow, new programming will likely keep pace with its 2D counterparts.

Animation
Mickey Mouse, Donald Duck, Aladdin - these are favourite cartoons of kids as well as adults. All these cartoon characters are creations of the wonderful art of animation, which captivates our eyes and filled our childhood days with fun. How are these cartoons brought to the television screen? At its most basic, animation is built from single key frames. Animation is a presentation of various displays and movements, which adds liveliness to a site or film. Internet users are usually fond of browsing a website that is well animated with good graphics. In simple words, basic animation is the illusion of different movements, linked together in a proper way so that viewers get the effect of seeing a well-coordinated set of actions.

Basic Types of Animation


The basic types of animation are the primary foundation for animation effects. The three basic types of animation are cel, stop and computer animation.

1. Cel Animation
Cel animation refers to the traditional way of animating with a set of hand drawings. In this process, various pictures are created which are slightly different but progressive in nature, to depict certain actions. These drawings are traced onto a transparent sheet, known as a cel, which serves as the medium for drawing frames. Outlines are drawn for the images and then coloured on the back of the cel. The cel is an effective technique that helps to save time by combining characters and backgrounds: previous drawings can be placed over other backgrounds or cels whenever required, so the same picture need not be drawn again. Colouring a background may be a more difficult task than a single drawing, as it covers the whole picture; a background requires shading and lighting and will be viewed for a longer duration. The drawings are then photographed with a camera. Today, cel animations are made more attractive by combining the drawings with music, matching sound effects and careful timing for each effect. For example, to display a cartoon show, 10-12 frames are played in rapid succession per second to give the impression of movement in a cel animation.

2. Stop Animation
Stop animation, or stop motion animation, is a technique that makes objects appear to move on their own. Objects or figures are posed in slightly different positions and photographed separately, one frame at a time. Puppetry is one of the most widely used frame-to-frame animation types.

3. Computer Animation
Computer animation is the latest technique of animation and includes 2D and 3D animation. These animations not only enhance hand-drawn characters but also make them appear more real than the animations mentioned above.
a. 2D Animation: It is commonly used in PowerPoint and Flash animations. Though its features are similar to cel animation, 2D animation has become popular due to the simple application of scanned drawings into the computer, as in a cartoon film.
b. 3D Animation: It is used in film making where unusual objects or characters that are not easy to display are required. 3D animation can create, for example, a crowd of people in a disaster such as an earthquake, flood or war. With the support of mathematical code, the shapes, actions and colors it can display are striking, as if copied from an actual picture.

Computer 2D-3D Animation


Computer animation is the process used for generating animated images by means of computer graphics. It refers to moving images produced by exploiting the persistence of vision to make a series of images look animated. Given that an image lasts for about one twenty-fifth of a second on the retina, fast image replacement creates the illusion of movement. Modern computer animation usually uses 3D computer graphics, although 2D computer graphics are still used for stylistic, low-bandwidth, and faster real-time renderings. Computer animation is essentially a digital successor to the stop motion techniques of traditional animation, using 3D models and frame-by-frame animation of 2D illustrations.

Computer-generated animations are more controllable than other, more physically based processes, such as constructing miniatures for effects shots or hiring extras for crowd scenes, and they allow the creation of images that would not be feasible using any other technology. They can also allow a single graphic artist to produce such content without the use of actors, expensive set pieces, or props.

To create the illusion of movement, an image is displayed on the computer screen and repeatedly replaced by a new image that is similar to the previous image but advanced slightly in time (usually at a rate of 24 or 30 frames per second). This technique is identical to how the illusion of movement is achieved with television and motion pictures. For 3D animations, objects (models) are built on the computer (modelled) and 3D figures are rigged with a virtual skeleton. For 2D figure animations, separate objects (illustrations) and separate transparent layers are used, with or without a virtual skeleton. Then the limbs, eyes, mouth, clothes, etc. of the figure are moved by the animator on key frames. The differences in appearance between key frames are automatically calculated by the computer in a process known as tweening.
Finally, the animation is rendered. For 3D animations, all frames must be rendered after modelling is complete. For 2D vector animations, the rendering process is the key frame illustration process, while tweened frames are rendered as needed. For pre-recorded presentations, the rendered frames are transferred to a different format or medium such as film or digital video. The frames may also be rendered in real time as they are presented to the end-user audience. Low-bandwidth animations transmitted via the internet (e.g. 2D Flash, X3D) often use software on the end-user's computer to render in real time, as an alternative to streaming or pre-loaded high-bandwidth animations.
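Tweening as described above can be illustrated with a minimal sketch, assuming a pose is simply a dictionary of animation channels; the names key1, key2 and the channel names are invented for illustration:

```python
def tween(pose_a, pose_b, t):
    # Linear interpolation of every animation channel; t runs from 0.0 to 1.0
    return {k: pose_a[k] + (pose_b[k] - pose_a[k]) * t for k in pose_a}

# Two key frames set by the animator: the figure slides right while rotating
key1 = {"x": 0.0, "y": 100.0, "rotation": 0.0}
key2 = {"x": 200.0, "y": 100.0, "rotation": 90.0}

# The computer fills in the frames in between (here, three in-betweens)
frames = [tween(key1, key2, i / 4) for i in range(5)]
```

Production systems use smoother interpolation curves than this straight line, but the idea is the same: the animator sets the keys, the computer calculates the differences.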

An Example of Animation

Imagine a tennis ball drawn at the top-left of the screen. Next, the screen shows the same background and ball, but the ball is redrawn slightly down and to the right of its original position. This process is repeated, each time moving the ball a little further, and when the ball reaches the bottom edge it is drawn moving up and away from its previous position. If this process is repeated fast enough, the ball appears to bounce smoothly across the screen. This basic procedure is used for all moving pictures in films and television.

The moving ball is an example of shifting the location of an object. More complex transformations of object properties such as size, shape, and lighting effects often require calculations and computer rendering instead of simple redrawing or duplication.

To trick the eye and brain into thinking they are seeing a smoothly moving object, the pictures should be drawn at around 15 frames per second (frame/s) or faster (a frame is one complete image). At rates below 15 frames per second most people can detect jerkiness associated with the drawing of new images, which detracts from the illusion of realistic movement. Conventional hand-drawn cartoon animation often uses 15 frames per second in order to save on the number of drawings needed, and this is usually accepted because of the stylized nature of cartoons. Because it produces more realistic imagery, computer animation demands higher frame rates to reinforce this realism. The reason no jerkiness is seen at higher speeds is persistence of vision: from moment to moment, the eye and brain working together store whatever one looks at for a fraction of a second, and automatically "smooth out" minor jumps. Movie film seen in theatres runs at 24 frames per second, which is sufficient to create this illusion of continuous movement.
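The bouncing-ball procedure can be sketched as a short loop that computes the ball's position for each frame, reversing direction at the screen edges; the screen size and per-frame step sizes below are arbitrary illustration values:

```python
def ball_positions(n_frames, width, height, dx=10, dy=8):
    # Redraw the ball slightly displaced each frame, reflecting at the edges
    x = y = 0
    positions = []
    for _ in range(n_frames):
        positions.append((x, y))
        if not 0 <= x + dx <= width:   # would leave the screen horizontally
            dx = -dx
        if not 0 <= y + dy <= height:  # would leave the screen vertically
            dy = -dy
        x += dx
        y += dy
    return positions

path = ball_positions(6, width=40, height=24)
```

Drawing the ball at each of these positions in rapid succession, 15 or more times per second, is exactly the frame-by-frame redrawing the text describes.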

Methods of animating virtual characters


In most 3D computer animation systems, an animator creates a simplified representation of a character's anatomy, analogous to a skeleton or stick figure. The position of each segment of the skeletal model is defined by animation variables, or Avars. In human and animal characters, many parts of the skeletal model correspond to actual bones, but skeletal animation is also used to animate other things. The computer does not usually render the skeletal model directly (it is invisible), but uses it to compute the exact position and orientation of the character, which is eventually rendered into an image. Thus, by changing the values of Avars over time, the animator creates motion by making the character move from frame to frame.

There are several methods for generating Avar values to obtain realistic motion. Traditionally, animators manipulate the Avars directly. Rather than set Avars for every frame, they usually set Avars at strategic points (frames) in time and let the computer interpolate or 'tween' between them, a process called key framing. Key framing puts control in the hands of the animator and has roots in hand-drawn traditional animation.

In contrast, a newer method called motion capture makes use of live action. When computer animation is driven by motion capture, a real performer acts out the scene as if they were the character to be animated. His or her motion is recorded to a computer using video cameras and markers, and that performance is then applied to the animated character. Each method has its advantages; games and films use either or both of these methods in production. Key frame animation can produce motions that would be difficult or impossible to act out, while motion capture can reproduce the subtleties of a particular actor. For example, in the 2006 film Pirates of the Caribbean: Dead Man's Chest, actor Bill Nighy provided the performance for the character Davy Jones. Even though Nighy himself does not appear in the film, the movie benefited from his performance by recording the nuances of his body language, posture, facial expressions, etc. Thus motion capture is appropriate in situations where believable, realistic behaviour and action are required, but the types of characters required exceed what can be done through conventional costuming.
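Key framing as described above - setting an Avar only at strategic frames and letting the computer tween between them - can be sketched as follows; the Avar name elbow_bend and its key values are invented for illustration:

```python
def avar_at(keys, frame):
    # keys: (frame, value) pairs the animator set, sorted by frame number;
    # the computer tweens linearly between the two neighbouring keys
    for (f0, v0), (f1, v1) in zip(keys, keys[1:]):
        if f0 <= frame <= f1:
            t = (frame - f0) / (f1 - f0)
            return v0 + (v1 - v0) * t
    return keys[-1][1]  # hold the last key after the final key frame

# The animator sets only three keys (frame, bend angle in degrees)...
elbow_bend = [(0, 0.0), (12, 45.0), (24, 10.0)]

# ...and the computer supplies the value for any frame in between
mid_value = avar_at(elbow_bend, 6)
```

A rig has many such Avars, one per controllable degree of freedom, each with its own set of keys.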

Creating characters and objects on a computer


3D computer animation combines 3D models of objects and programmed or hand "key framed" movement. Models are constructed out of geometrical vertices, faces, and edges in a 3D coordinate system. Objects are sculpted much like real clay or plaster, working from general forms to specific details with various sculpting tools. A bone/joint animation system is set up to deform the CGI model (e.g., to make a humanoid model walk). In a process called rigging, the graphics software is given various controllers and handles for controlling movement. Animation data can be created using motion capture or key framing by a human animator, or a combination of the two. 3D models rigged for animation may contain thousands of control points - for example, the character "Woody" in Pixar's movie Toy Story uses 700 specialized animation controllers. Rhythm and Hues Studios laboured for two years to create Aslan in the movie The Chronicles of Narnia: The Lion, the Witch and the Wardrobe, which had about 1,851 controllers, 742 in the face alone. In the 2004 film The Day After Tomorrow, designers had to create the forces of extreme weather with the help of video references and accurate meteorological data.

Computer animation development equipment


Computer animation can be created with a computer and animation software. Some impressive animation can be achieved even with basic programs; however, the rendering can take a lot of time on an ordinary home computer. Because of this, video game animators tend to use low-resolution, low-polygon-count renders, so that the graphics can be rendered in real time on a home computer. Professional animators of movies, television, and video sequences in computer games make photorealistic animation with high detail. This level of quality for movie animation would take tens to hundreds of years to create on a home computer, so many powerful workstation computers are used instead. Graphics workstations use two to four processors, are far more powerful than a home computer, and are specialized for rendering. A large number of workstations (known as a render farm) are networked together to effectively act as one giant computer. The result is a computer-animated movie that can be completed in about one to five years. A workstation typically costs $2,000 to $16,000, with the more expensive stations able to render much faster due to the more technologically advanced hardware they contain. Pixar's RenderMan is rendering software which is widely used as the movie animation industry standard. It can be bought at the official Pixar website for about $3,500. It works on Linux, Mac OS X, and Microsoft Windows based graphics workstations, along with an animation program such as Maya or MAXON Cinema 4D. Professionals also use digital movie cameras, motion capture or performance capture, film editing software, props, and other tools for movie animation.
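A rough back-of-envelope calculation shows why render farms are needed; the per-frame render time and farm size below are assumed illustration values, not figures from the text:

```python
frames = 90 * 60 * 24          # a 90-minute film at 24 frames per second
hours_per_frame = 4            # assumed average render time for one frame
machines = 500                 # assumed number of render-farm nodes

total_hours = frames * hours_per_frame        # work for a single machine
farm_days = total_hours / machines / 24       # wall-clock days on the farm
```

A single machine would need total_hours of computing (here over 59 years of non-stop work), while the farm finishes the same workload in weeks, which is why studios network hundreds of workstations together.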


Computer facial animation


Computer facial animation is primarily an area of computer graphics that encapsulates models and techniques for generating and animating images of the human head and face. Due to its subject and output type, it is also related to many other scientific and artistic fields, from psychology to traditional animation. The importance of human faces in verbal and non-verbal communication, together with advances in computer graphics hardware and software, has generated considerable scientific, technological, and artistic interest in computer facial animation. Computer facial animation includes a variety of techniques, from morphing to three-dimensional modelling and rendering. It has become well known and popular through animated feature films and computer games, but its applications include many more areas, such as communication, education and scientific simulation.

Techniques of Computer facial animation


1. 2D Animation
Two-dimensional facial animation is commonly based upon the transformation of images, including both still photographs and sequences of video. Image morphing is a technique which allows in-between transitional images to be generated between a pair of target still images or between frames from sequences of video. These morphing techniques usually consist of a combination of a geometric deformation technique, which aligns the target images, and a cross-fade, which creates the smooth transition in the image texture. This animation technique typically generates animations of only the lower part of the face; these are then composited with video of the original actor to produce the final animation.

2. 3D Animation
Three-dimensional head models provide the most powerful means of generating computer facial animation. A typical early model was a mesh of 3D points controlled by a set of conformation and expression parameters. The former group controls the relative location of facial feature points such as eye and lip corners; changing these parameters can re-shape a base model to create new heads. The latter group of parameters (expression) are facial actions that can be performed on the face, such as stretching the lips or closing the eyes. This model was extended by other researchers to include more facial features and add more flexibility. Animation is done by changing the parameters over time. Facial animation is approached in different ways; traditional techniques include:


A. Shapes/morph targets
B. Skeleton-muscle systems
C. Bones/cages
D. Motion capture on points on the face
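Of the 2D techniques described earlier, the cross-fade at the heart of image morphing is easy to sketch; this minimal version blends two grayscale images stored as lists of rows (real morphing systems also warp the geometry before fading):

```python
def cross_fade(img_a, img_b, t):
    # Blend two equal-sized grayscale images; t=0 gives img_a, t=1 gives img_b
    return [[round(a + (b - a) * t) for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(img_a, img_b)]

# Two tiny 2x2 "face" images with pixel values 0-255 (illustrative data)
face_a = [[0, 100], [200, 255]]
face_b = [[100, 100], [0, 55]]
midway = cross_fade(face_a, face_b, 0.5)
```

Generating a sequence of frames with t sweeping from 0 to 1 produces the in-between transitional images the text describes.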

A. Shape-based systems offer fast playback as well as a high degree of fidelity of expression. The technique involves modelling portions of the face mesh to approximate expressions and visemes and then blending the different sub-meshes, known as morph targets or shapes.
B. Skeletal-muscle systems, physically-based head models, form another approach to modelling the head and face. Here the physical and anatomical characteristics of bones, tissues, and skin are simulated to provide a realistic appearance. Such methods can be very powerful for creating realism, but the complexity of facial structures makes them computationally expensive and difficult to create.
C. 'Envelope bones' or 'cages' are commonly used in games. They produce simple and fast models, but are limited in portraying subtlety.
D. Motion capture uses cameras placed around a subject. The subject is generally fitted either with reflectors (passive motion capture) or sources (active motion capture) that precisely determine the subject's position in space. The data recorded by the cameras is then digitized and converted into a three-dimensional computer model of the subject. Until recently, the size of the detectors/sources used by motion capture systems made the technology inappropriate for facial capture.
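The morph-target blending in (A) can be sketched as follows: each target stores an alternative position for every vertex, and the final pose adds each target's offset from the base mesh, scaled by a weight. The tiny three-vertex "face" here is purely illustrative:

```python
# Neutral face and two morph targets, each as a list of (x, y) vertices
base = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]
smile = [(0.0, 0.1), (1.0, 0.1), (0.5, 1.0)]       # mouth corners raised
jaw_open = [(0.0, -0.2), (1.0, -0.2), (0.5, 1.0)]  # mouth corners lowered

def blend(base, targets, weights):
    # Each target contributes its offset from the base, scaled by its weight
    out = []
    for i, (bx, by) in enumerate(base):
        dx = sum(w * (t[i][0] - bx) for t, w in zip(targets, weights))
        dy = sum(w * (t[i][1] - by) for t, w in zip(targets, weights))
        out.append((bx + dx, by + dy))
    return out

pose = blend(base, [smile, jaw_open], [0.5, 0.5])
```

Animating the weights over time, rather than the vertices themselves, is what makes shape-based systems fast to play back.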

Speech Animation
Speech is usually treated differently from the animation of facial expressions, because simple key-frame-based approaches to animation typically provide a poor approximation of real speech dynamics. Often visemes are used to represent the key poses in observed speech (i.e. the position of the lips, jaw and tongue when producing a particular phoneme); however, there is a great deal of variation in the realisation of visemes during the production of natural speech. One of the most common approaches to speech animation is the use of dominance functions. Each dominance function represents the influence over time that a viseme has on a speech utterance. Finally, some models directly generate speech animations from audio. These systems typically use hidden Markov models or neural networks to transform audio parameters into a stream of control parameters for a facial model. The advantage of this method is its handling of voice context, natural rhythm, tempo, emotion and dynamics without complex approximation algorithms.
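A dominance-function approach can be sketched as below, with a bell-shaped curve standing in for each viseme's influence over time; the curve shape and all numbers are illustrative assumptions, not taken from any particular system:

```python
import math

def dominance(t, center, magnitude, width):
    # Bell-shaped curve: how strongly one viseme influences the face at time t
    return magnitude * math.exp(-((t - center) / width) ** 2)

def lip_opening(t, visemes):
    # Weighted average of the visemes' target values, weighted by dominance,
    # so neighbouring visemes blend into each other (coarticulation)
    weights = [dominance(t, c, m, w) for c, m, w, _ in visemes]
    values = [v for _, _, _, v in visemes]
    return sum(w * v for w, v in zip(weights, values)) / sum(weights)

# Each viseme: (center time in s, magnitude, width, target lip opening)
seq = [(0.0, 1.0, 0.1, 0.8),   # an open-mouth viseme, e.g. "ah"
       (0.2, 1.0, 0.1, 0.1)]   # a closed-mouth viseme, e.g. "m"
```

Near t = 0.0 the first viseme dominates and the lips are almost fully open; midway between the two centers, both contribute equally and the opening is halfway between the targets.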

Face Animation Languages


Many face animation languages are used to describe the content of facial animation. They can be input to compatible "player" software which then creates the requested actions. Due to the popularity and effectiveness of XML as a data representation mechanism, most face animation languages are XML-based. For instance, this is a sample from Virtual Human Markup Language (VHML):

<vhml>
  <person disposition="angry">
    First I speak with an angry voice and look very angry,
    <surprised intensity="50">
      But suddenly I change to look more surprised.
    </surprised>
  </person>
</vhml>
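Since VHML is XML-based, a "player" can read such a document with any standard XML parser; a minimal sketch using Python's standard library to pull the dispositions out of the sample above might look like this:

```python
import xml.etree.ElementTree as ET

vhml = """<vhml>
  <person disposition="angry">
    First I speak with an angry voice and look very angry,
    <surprised intensity="50">
      But suddenly I change to look more surprised.
    </surprised>
  </person>
</vhml>"""

root = ET.fromstring(vhml)
person = root.find("person")
disposition = person.get("disposition")                  # starting mood
intensity = person.find("surprised").get("intensity")    # mid-utterance change
```

A real player would walk the whole tree, mapping each element and attribute to facial-model control parameters.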

Disposition: Surprise

Disposition: Angry


3D computer graphics
3D computer graphics (in contrast to 2D computer graphics) are graphics that use a three-dimensional representation of geometric data (often Cartesian) stored in the computer for the purposes of performing calculations and rendering 2D images. Such images may be stored for viewing later or displayed in real time. 3D computer graphics rely on many of the same algorithms as 2D computer vector graphics in the wire-frame model and 2D computer raster graphics in the final rendered display. In computer graphics software, the distinction between 2D and 3D is occasionally blurred: 2D applications may use 3D techniques to achieve effects such as lighting, and 3D applications may use 2D rendering techniques. A 3D model is the mathematical representation of any three-dimensional object. A model is not technically a graphic until it is displayed, and thanks to 3D printing, 3D models are no longer confined to virtual space. A model can be displayed visually as a two-dimensional image through a process called 3D rendering, or used in non-graphical computer simulations and calculations.
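Rendering a 2D image from 3D data ultimately rests on projecting 3D points onto an image plane; a minimal perspective projection can be sketched as follows (the focal length and sample point are arbitrary illustration values):

```python
def project(point, focal=1.0):
    # Perspective projection of a 3D point onto the image plane at z = focal:
    # distant points (large z) land closer to the image centre
    x, y, z = point
    return (focal * x / z, focal * y / z)

corner = project((2.0, 1.0, 4.0))
near = project((2.0, 1.0, 2.0))  # same direction, half the distance
```

Doubling an object's distance halves its projected size, which is exactly the depth cue a 2D rendering of a 3D scene relies on.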

The process of creating 3D computer graphics


The process of creating 3D computer graphics can be divided sequentially into three basic phases:
1. 3D modeling, which describes the process of forming the shape of an object.
2. Layout and animation, which describes the motion and placement of objects within a scene.
3. 3D rendering, which produces an image of an object.

1. 3D modeling
In 3D computer graphics, 3D modeling (also known as meshing) is the process of developing a mathematical representation of any three-dimensional surface of an object via specialized software. The product is called a 3D model.
Scene setup: Scene setup involves arranging virtual objects, lights, cameras and other entities in a scene which will later be used to produce a still image or an animation.
Lighting: Lighting is an important aspect of scene setup. As in real-world scene arrangement, lighting is a significant contributing factor to the resulting aesthetic and visual quality of the finished work.
Texturing: The texturing process is used to add realism to 3D models.
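A mesh of the kind described above is simply lists of vertices and faces; the sketch below builds the simplest closed triangle mesh, a tetrahedron, and checks Euler's formula V - E + F = 2, which holds for any watertight mesh of this kind:

```python
# Four vertices in a 3D coordinate system
vertices = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0),
            (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]

# Four triangular faces, each a triple of vertex indices
faces = [(0, 2, 1), (0, 1, 3), (0, 3, 2), (1, 2, 3)]

def edges(faces):
    # Collect each edge once, regardless of direction
    e = set()
    for a, b, c in faces:
        for u, v in ((a, b), (b, c), (c, a)):
            e.add((min(u, v), max(u, v)))
    return e

euler = len(vertices) - len(edges(faces)) + len(faces)
```

Modelling software manipulates exactly these kinds of vertex/face lists, just with many thousands of elements and richer per-vertex data (normals, texture coordinates).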


3D Modeling

2. Layout and animation


Before objects are rendered, they must be placed (laid out) within a scene. This is what defines the spatial relationships between objects in a scene including location and size. Animation refers to the temporal description of an object, i.e., how it moves and deforms over time. If used for animation, this phase usually makes use of a technique called "key framing", which facilitates creation of complicated movement in the scene.

3D Layout


3. 3D Rendering
Rendering is the final process of creating the actual 2D image or animation from the prepared scene. It can be compared to taking a photo or filming a scene after the setup is finished in real life. Several different, and often specialized, rendering methods have been developed. These range from the distinctly non-realistic wireframe rendering, through polygon-based rendering, to more advanced techniques such as scanline rendering and ray tracing. Rendering may take from fractions of a second to days for a single image/frame. In general, different methods are better suited either for photo-realistic rendering or for real-time rendering.
A. Real-time: Rendering for interactive media, such as games and simulations, is calculated and displayed in real time, at rates of approximately 20 to 120 frames per second. In real-time rendering, the goal is to show as much information as the eye can process in a fraction of a second.
B. Non-real-time: Animations for non-interactive media, such as feature films and video, are rendered much more slowly. Rendering times for individual frames may vary from a few seconds to several days for complex scenes. Rendered frames are stored on a hard disk and can then be transferred to other media such as motion picture film or optical disc. These frames are then displayed sequentially at high frame rates, typically 24, 25, or 30 frames per second, to achieve the illusion of movement.
C. Photo-realism: When the goal is photo-realism, techniques such as ray tracing or radiosity are employed. This is the basic method employed in digital media and artistic works. Techniques have also been developed to simulate other naturally occurring effects, such as the interaction of light with various forms of matter.
Examples of such techniques include particle systems (which can simulate rain, smoke, or fire), volumetric sampling (to simulate fog, dust and other spatial atmospheric effects), caustics (to simulate light focusing by uneven light-refracting surfaces, such as the light ripples seen on the bottom of a swimming pool), and subsurface scattering (to simulate light reflecting inside the volumes of solid objects such as human skin). The rendering process is computationally expensive, given the complex variety of physical processes being simulated. Computer processing power has increased rapidly over the years, allowing for a progressively higher degree of realistic rendering. Film studios that produce computer-generated animations typically make use of a render farm to generate images in a timely manner. However, falling hardware costs mean that it is entirely possible to create small amounts of 3D animation on a home computer system. The output of the renderer is often used as only one small part of a completed motion-picture scene. Many layers of material may be rendered separately and integrated into the final shot using compositing software.

Rendering of 3D Model

Final 3D image after rendering of the 3D model


An example of making a 3D computer animation effect in MAXON Cinema 4D: creating a spill effect


Final Product


Bibliography
Aggarwal, Hitesh. 3D Movies And 2D-3D Animation. Sri Ganganagar, Rajasthan: 8th Semester Seminar Presentation, 2011.

References
1. Google
2. Wikipedia
3. MAXON's Cinema 4D Software Website
4. Google Images

