
OpenFX Tutorials

http://www.openfx.org/resources/tutorials/index.html

This page contains a list of tutorials, each giving simple step-by-step instructions with corresponding screenshots. A wide range of design and animation techniques is built up in this section, which should culminate in a comprehensive 'manual of excellence' :) These tutorials are now available in more languages than just English - a list of available languages is displayed at the top of each tutorial.

Developer Documentation

Details of how to write plugins are described here.

Contributing

If you feel you can explain a method of using OpenFX to create a building-block technique that will be useful to others, please submit screenshots and brief step-by-step instructions of your tutorial to Stuart.

Animator tutorials

Using paths to define actor movement

Step 1

In Designer, make a path using the pen tool. In the File menu go to Export Model > Path and save the path object as a closed path.

Step 2

Step 3

Make an object that will move along the path and save it.

Step 4

Now in Animator, go to the Add Actor menu and select Path. Click on File... and choose the path you made earlier in Designer.

Step 5

In the Add Actor menu select object and choose the follow object you created earlier.

Step 6 Set your animation length with Ctrl+L or Frame > Set Animation Length in the menu bar. For this example, use 30 frames. Note: for paths to work, your animation must have more than 3 frames.

Step 7 Select the follow object and double-click on the position timeline for the object at frame 1. This will open the Movement Timeline for the Model1 actor.

Step 8 In the dialog box press Select in the Follow Information box and select the path object. Back in the main dialog box make sure the 'follow' check box is checked (see opposite). That's it. Have fun. Download resource files here

Tutorial written by Andrew Heyworth


Animating a skeletal structure

Step 1 Make a tube object in Designer. This will be a basic arm which will have a skeleton applied to it, so scale and proportion it to look something like an arm. To help with modelling, use the subdivide tool (Ctrl+V) to add more detail to selected vertices.

Step 2 Now to add the joints of a basic skeletal frame. The skeleton is just a set of joints, attached to selected vertices, which will modify the mesh when moved.

Step 3

With nothing selected, select Edit > Skeleton > Build Joints. This will open a floating toolbar. Hovering over each of the buttons in the floating menu will give you a brief description. Select Move and click on the root joint (this is the initial joint; it's the small yellow box). Reposition it at the top of the arm mesh. Think of the root as the shoulder, which in this exercise does not need to move.

Step 4 Now select Build in the floating toolbar. Click on the root and, with the left mouse button still held down, drag the pointer to the middle of the mesh and let go of the mouse button. A dialog box will open prompting you for a name for the new joint you just made. Call it elbow.

Step 5

Do the same as above but click on the elbow joint and drag down to the end of the mesh. Call this joint hand.

Step 6 With the joints made and in position, we can attach the separate joints to the necessary vertices. Close the Build Joints menu and open the Hierarchy Joints floating menu. Select the elbow joint in the hierarchy window. In the rear window, lasso the top three rows of vertices (see opposite). Go back to the hierarchy menu and press Attach. This will assign the selected vertices to the elbow joint. You can now deselect these vertices by pressing Deselect.

Step 7 Do the same for the hand joint, but this time select the bottom two rows of vertices (see image). Save your newly made arm and go into Animator.

Step 8 Set animation length to 30 frames and load the newly created object as a robot.

Step 9 With the robot loaded, insert a keyframe in the robot's costume timeline at frame 15. Make sure the keyframer window is open so that you can see the timeline (press 'k' to open the keyframer). In the camera view press the Go To Frame button and enter 15. This is the newly created keyframe for your arm, which is required before you can modify the mesh with your skeleton. Remember: a costume keyframe has to be made, and Go To Frame has to be on that frame, before you can use the skeleton.

Step 10 Select the robot actor by pressing 'a' and clicking on robot1. Press 'v'; this will display the robot view. In the robot hierarchy window select the hand joint and rotate it about the Y axis in any viewport; this will create a keyframe pose for your object. To create the effect of the object bending, insert a keyframe at frame 2 without repositioning any of the joints and follow steps 9-10. Support files are available here. Tutorial written by Andrew Heyworth

Robot Export This tutorial shows how to save a template motion for a robot actor so that it can be used and applied in compound character animations.

Step 1 We will begin with a model of a man that can be made to move using the built-in skeleton. If you look at the image you will see the skeleton inside the arms and legs etc. Note also that the skeleton extends beyond the rear of the figure. The purpose of this skeleton is to allow the figure to turn, pivot, bend, rise and fall. In order to use this model as a robot in the animation we will need to be able to move it in these ways. In this tutorial we will make a very short animated sequence that represents the figure taking one step as part of a walking animation. Once we have the sequence we will save it as an animated sequence that can be imported and combined.

Step 2 We start by opening the Animator and inserting our figure as a robot actor. We are just going to use this animation to set up the repeat sequence required to make the man take one step. To do this we will move the camera and the camera target so that we can see the whole figure from the side. It also might be a good idea to add a second camera at the front of the man, watching how he moves from side to side. In building a walking sequence there are two aspects that we will need to consider: firstly, the movement of the man's arms, legs and body; and secondly, how the whole man moves forward as he takes his step. He will also rise up a little as he takes his steps. To achieve this we will need to create keyframes for the movement of the man and for the change of pose of the man. The walking sequence will require 30 frames per step. To go through the step will require 5 keyframes and 5 poses. We will also use 5 keyframes in the movement timeline.

Step 3 An important part of the walking action is that the forward movement of the figure as he takes his step matches the movement of his legs. (So that he doesn't look like he is moonwalking!) We find that adding a ground plane and several Target actors can help in this process. The image in this step illustrates the keyframer with the keyframes for pose and for position. The main task now is to "Do the Walk". This will take a bit of work and adjustment and a little artistic skill in 'posing the robot' and positioning it. To help with this, the tutorial includes pose files for each keyframe in the sequence. These files are then used in turn: p1.pze, p2.pze, p3.pze, p4.pze, p1.pze (the last pose is the same as the first, to allow for cyclic animation).

Step 4 To load the poses, select the Robot actor (click on its hold point) and engage the vertex tool; this will show the 3D Pose window and the Skeleton Hierarchy view. Using the hierarchy view, the pose for the robot can be loaded for each keyframe in turn. Step through each keyframe using the buttons below the camera window; the pose is loaded into the active keyframe (i.e. the frame indicated below the camera window).

Step 5 To get the required movement takes a bit of time and some trial and error. Once the character's poses look correct, you can move the character in the direction of walk to get a good looking relationship between the start of the step and the end.

Step 6 Now that the animation has been created we have to save the sequence so that it can be used in as many animations as we need this man to walk in. We shall see in other tutorials that we can load this sequence into longer animations and tell the man to walk along paths of any shape. (Making character animation much easier to achieve.) N.B. Do not confuse these types of sequences with the sequences that OpenFX can export to a special OFX program that builds animations for the "Virtools" virtual reality simulator software. Double-click on the Robot actor's root node (the hold point). This is highlighted in green on the enlarged snapshot. The Costume Information dialog will pop up. In this dialog click on the Export button and save the walking sequence as an .RSQ file (a Robot SeQuence). We will be able to use this robot sequence in any animation where the man is walking.

Step 7 If you want a complete spectrum of actions for this man you can build up a library of .RSQs: for example, going from standing to walk start, a walk step to standing transition, or perhaps getting up out of a chair. With a library like this, a very rich set of actions can be called on to animate a wide range of scenes. Finally: the .RSQ files are just basic text files and they can be easily edited in Notepad to apply to a different model. So if you have another man with the same skeleton (and all humans should be built with the same skeleton) then you can have any bipedal figure walking through your animations. This tutorial is still under construction.... Download resource files here. Tutorial written by Stuart

Insert Skeleton This tutorial shows how to use an exported robot sequence.

Step 1 This tutorial follows on from the Robot Export tutorial. If you have not tried the export tutorial you will need to load the START.OFX file from the TUTORIAL.ZIP file and export the .RSQ file (following the instructions in the Robot Export tutorial). Because the .RSQ file contains a full path specification for the robot actor's MFX file, it is important that you keep the MFX file in the same place that you used when exporting. In the .RSQ file included with this tutorial I have assumed that the model file is called male_large.mfx and is in the folder \data\tutorial\. The .RSQ files are basically text files and, if necessary, the path can be edited to match your location. For example, the first few lines in the .RSQ file will look like:

    5 1 30 30
    c:\data\tutorial\male_large.mfx
    111 26
    1.000000 0.000000 0.000000 0.000000
    0.000000 1.000000 0.000000 0.000000
    ...

The line "c:\data\tutorial\male_large.mfx" is an absolute file path reference and can be edited using Notepad if you need to use the .RSQ file in a different folder. To use the robot actor .RSQ file it has to be inserted in an animation. Open the Animator, or use the example included in the resources (file scene.ofx, with some objects and a path that will be used in the Robot Path tutorial).
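Because the .RSQ file is plain text with the model path on a line of its own, retargeting can even be scripted. The sketch below is a minimal illustration based on the layout shown above; the file names are hypothetical and nothing here is part of OpenFX itself.

```python
# Retarget the model path inside a Robot SeQuence (.RSQ) file.
# A minimal sketch assuming the layout shown above: plain text, with one
# line holding an absolute path ending in ".mfx". File names are
# hypothetical; adjust them to your own folders.

def retarget_rsq(rsq_in, rsq_out, new_mfx_path):
    with open(rsq_in) as f:
        lines = f.read().splitlines()
    for i, line in enumerate(lines):
        if line.strip().lower().endswith(".mfx"):
            lines[i] = new_mfx_path   # swap in the new absolute path
            break
    with open(rsq_out, "w") as f:
        f.write("\n".join(lines) + "\n")

retarget_rsq("walk.rsq", "walk_fixed.rsq",
             r"d:\projects\tutorial\male_large.mfx")
```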

Step 2

We are now ready to insert our robot sequence. However, it is important to appreciate that the .RSQ file defines ONE cycle of our walking animation - in this specific example, a sequence of 30 frames. For those 30 frames it contains skeleton poses and position keyframes, with a reference position taken from the first position keyframe in the sequence. The animation we are going to insert the "skeleton sequence" into may not contain 30 frames (the example here is 150 frames), so we have the option of inserting multiple copies of the sequence (5 copies for our 150 frame animation). We need to insert a Robot actor into the scene. This can be done in several ways: click on the Insert icon from the toolbar and select a Robot actor. The robot actor selection dialog will pop up. You can choose either to insert a normal robot model (from an .MFX file) or you can select a robot sequence (.RSQ file). Note the extra line along the bottom of the selection dialog! This extra dialog will allow you to insert copies of the sequence. The insertion process will use the position information in the sequence to ensure that sequences after the first are moved so that the action, in the second and subsequent sequences, starts where the last sequence left off. As we have a 30 frame sequence and a 150 frame animation we will choose to insert 5 copies of the sequence.
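The offsetting that the insertion process performs can be pictured with a small sketch: each copy of the cycle is shifted in time by the cycle length, and in space by the cycle's net displacement. This is an illustration of the idea only, not OpenFX's internal code, and the keyframe data is invented.

```python
# Sketch of the offsetting described above: when a 30-frame walk cycle is
# inserted several times into a longer animation, each copy's position
# keyframes are shifted so the action starts where the last copy ended.
# Keyframes are (frame, (x, y, z)) tuples; the data is illustrative only.

def tile_cycle(cycle_keys, copies, frames_per_cycle):
    start = cycle_keys[0][1]
    end = cycle_keys[-1][1]
    # net displacement of one full cycle
    step = tuple(e - s for s, e in zip(start, end))
    out = []
    for c in range(copies):
        offset = tuple(c * d for d in step)
        for frame, pos in cycle_keys:
            out.append((frame + c * frames_per_cycle,
                        tuple(p + o for p, o in zip(pos, offset))))
    return out

one_step = [(1, (0.0, 0.0, 0.0)), (30, (0.0, -1.2, 0.0))]  # walks -1.2 in Y
print(tile_cycle(one_step, 5, 30))  # 150 frames, ending at y = -6.0
```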

Step 3

The Robot is inserted to begin its walk at the current position of the 3D cursor and will perform its action until the number of frames allotted have passed. Adjustments to what happens can be made to the actor in any of its 'pose' or 'position' keyframes. For a walking figure it is a good idea to follow up loading the sequence by making it follow a path. (See the Robot Path tutorial.) Download resource files here. Tutorial written by Stuart

Robot Path This tutorial shows how to move a robot sequence so that a robot can be made to follow a path, to give the appearance of walking, running etc.

Step 1 This tutorial follows on from the Insert Skeleton tutorial, so we shall start with that scene. The animation is set to extend for 150 frames and 5 cycles of the walking sequence apply to the walking man. Note in the scene the robot (man) figure that walks in the negative Y direction for 150 frames. This is made up from 5 copies of the walk cycle. During the walk cycle, as well as moving the limbs and body of the figure, the whole figure moves a distance of one step. As each cycle repeats, the figure begins the cycle at the location where the last cycle finished. This gives the appearance of a seamless walk in the -Y direction. Note that there are a lot of keyframes in the costume and position timelines. We could change the direction the figure walks by moving the object in each keyframe, but that is a lot of work. Instead, OpenFX has a feature that will move this figure so that it walks along a specified path. When OpenFX moves the figure to follow a path it uses the figure's stride length to make sure that the figure walks in a natural way along the path. If the figure reaches the end of the path before the allotted duration it will stop at the end but continue to perform its walk cycle; should this occur, any timeline segments can be trimmed from the end of the costume timeline. If the figure does not reach the end of the path in the number of frames then the figure will stop short of the end.
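The idea of placing keyframes at stride-length intervals along the path can be sketched as follows. This is only an illustration of the concept, in 2D, with an invented polyline path; it is not OpenFX's implementation.

```python
# Sketch of the idea behind "Robot Follow Path": position keyframes are
# placed at stride-length intervals along the path, and a heading is
# derived at each point so the figure faces its direction of travel.
import math

def keyframes_along_path(path, stride):
    """path: list of (x, y) points; returns (x, y, heading_degrees) per step."""
    keys, leftover = [], 0.0
    for (x0, y0), (x1, y1) in zip(path, path[1:]):
        seg = math.hypot(x1 - x0, y1 - y0)
        heading = math.degrees(math.atan2(y1 - y0, x1 - x0))
        d = stride - leftover          # distance into this segment to walk
        while d <= seg:
            t = d / seg
            keys.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0), heading))
            d += stride
        leftover = seg - (d - stride)  # carry the remainder to the next segment
    return keys

corner = [(0, 0), (10, 0), (10, 10)]   # an L-shaped path, invented for the demo
print(keyframes_along_path(corner, 1.2))
```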

Step 2 Now we can instruct the robot actor to walk along the path. Note that this is NOT the same as telling an actor to follow a path. This special function is designed only for character animation, and it moves all the keyframe positions to lie along the path. If you later move the path, the figure will NOT follow it; it will stay where it was. Of course, you could later tell the robot actor to follow the same (or another) path, and that will move all the position keyframes to their new locations. Also note that telling the robot to move along a path will require the introduction of a number of rotation keyframes, so that it moves in the direction of the path and does not just glide along. To instruct the robot actor to follow the path: select the robot actor by double-clicking on the actor/object hold point (indicated 1 in the image). You could also double-click on any of the actor's costume timeline keyframes (marked 2). This will bring up the actor information dialog; at the bottom there is a button (3) that says "Robot Follow Path". Click this button and select the "Path2" actor to follow.

Step 3 That is about all there is to do! There may be some tidying up, or it may be necessary to undo the step, change the path length or move the path, and then re-apply the robot to follow it. If other actors are added, this path can be moved (e.g. to walk parallel to the first actor) and the "Robot Follow Path" applied again. In the image you can see that the figure now moves along the path. Note: it is possible that you built the walking sequence with the figure moving along the +Y axis and not the -Y axis. In such cases the figure will appear to walk backwards. To rectify the situation, hold down the CTRL key while selecting the path and OpenFX will rotate the figure by 180 degrees so that it walks forwards. Download resource files here. When available. Tutorial written by Stuart

Particles This tutorial demonstrates how to set up animator particle effects and the great flexibility that can be achieved with the particle actor.

Step 1 This tutorial does not build step by step; a different particle setup is illustrated in each step. Inserting a particle actor is done in the same way as all other actors are inserted. Particle actors can also have position, orientation and scale timelines. In the Animator, particle actors are represented as cubic wireframe boxes. The cube illustrates where the particles are located in the scene and gives an approximate scale. The scale is only approximate because the particles have quite a degree of variability.

To get the greatest benefit from the use of the OpenFX particle systems it is worth knowing a little more about the concepts that underlie them. They are based on an idea from an interactive program that normally runs in real-time (using OpenGL for its rendering) and shows the particle system moving and evolving dynamically. This concept has had to be modified to allow for the reproducibility required by OpenFX, i.e. the ability to render individual frames in any order and at any instant of time (because of this, OpenFX's particle system does not quite render in real time). Each particle system actor represents a single particle system, and within a particle system there may be anything from as few as 1 to as many as 10,000 particles. Like all the other actors in OpenFX, the particle system exists in the scene for the duration of its costume timeline. It is positioned, sized and orientated with appropriate timelines and keyframes in the same way as all the other actors. In the case of a particle system, the UP (vertical or Z) direction has a particular significance.

A particle system consists of a set of one or more particles. These particles are the visible components of a particle system. They are dynamic and behave like a set of projectiles that are fired off from the centre of the particle system. One good way to initially think of a particle system is as a sort of exploding firework: the bright sparks in the firework are the particles, and each firework itself is a particle system. Of course our particle system can do more than look like a bursting firework. We can arrange for a continuous stream of particles to be ejected from the centre of the particle system. We can arrange for the particles to remain active (alive) for longer; we can arrange for the particles to be ejected with high speed or low speed. They can be affected by pseudo-gravitational forces and appear to be affected by a degree of randomness in their ejected direction or speed, or anything else. By carefully choosing these settings we can model the appearance of a wide range of phenomena.
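The reproducibility requirement mentioned above has a useful consequence: a particle's state must be computable directly from the frame number, rather than by stepping a simulation forward frame by frame. A minimal sketch of that idea, with invented parameter values (none of this is OpenFX code):

```python
# Sketch of the reproducibility idea described above: each particle's
# position is computed directly from the elapsed time and a deterministic
# per-particle seed, so any frame can be rendered in any order and give
# the same result. All parameters and numbers are illustrative only.
import random

def particle_at(frame, pid, fps=30, rate=50, lifetime=2.0,
                speed=3.0, gravity=(0.0, 0.0, -9.8)):
    birth = pid / rate                      # emission: `rate` particles/sec
    t = frame / fps - birth                 # this particle's age in seconds
    if t < 0 or t > lifetime:
        return None                         # not yet born, or already dead
    rng = random.Random(pid)                # deterministic per-particle seed
    # random ejection direction, scaled by the emission speed
    v = (rng.uniform(-1, 1), rng.uniform(-1, 1), rng.uniform(0.5, 1))
    return tuple(vi * speed * t + 0.5 * g * t * t
                 for vi, g in zip(v, gravity))

# The same call always yields the same answer, whatever order frames render in:
print(particle_at(45, pid=7))
print(particle_at(45, pid=7))
```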

Since the particle system is dynamic it evolves over the time period of its costume timeline. The rate of change is designed to look realistic when the rendered movie is viewed. (For example, fireworks that should go off at 1 second intervals will go off at 1 second intervals; flames that flicker will appear to flicker in a natural way, etc.) If you want a particle effect to last longer you will need to increase the length of its timeline; a 10 second effect will need to be set for 300 frames.

At the centre of each particle system is the particle emitter; this is the location in the scene at which the particles in a particle system are created (it is also the location from where the particles are projected). The bounding box that appears in the Animator's window is intended to give a rough guide to the size of the particle system once it reaches a steady state. We talk of a 'steady state' because normally the particle system has to go through a transient phase, for example while a flame builds up. In addition to the major properties of a particle system (lifetime etc.) the realism is increased by adding some random fluctuations to each of the particle's parameters. For example, in the firework effect above, the direction in which the particles are emitted is totally random. (In the case where no randomness is given to the direction of particle emission, they are ejected vertically.) In OpenFX the vertical direction is local to the particle actor. Thus, if you wish to eject a stream of particles in a horizontal direction you would simply use the particle actor's orientation timeline and keyframes to point it in a different direction.

One last point concerns the particles themselves: are they just points of colour? NO! To add realism the particles are actually little pictures (monochrome bitmaps). The pixels in the bitmaps coloured white are replaced with the colour of the particle, and the bitmap's alpha channel is used to set the particle's transparency (white=opaque, black=transparent).

So the controls of a particle system consist of settings for: emission rate (the number of particles generated per second); speed of emission; lifetime; colour; alpha (how transparent a particle is); acceleration (gravity), which may speed the particle up or slow it down; variation in direction of ejection; and variations in all the other properties (speed, lifetime, colour).

Step 2

In this first example we build a simple fire inside a small object. (Click on the illustrations to reveal more information and show the particle dialog that achieves the effect.)

Step 3 This shows a fireworks effect. (Animations of all these effects can be seen in the Particles gallery.)

Step 4 A particle fountain. (Click on the illustrations to show the particle dialog that achieves the effects.)

Step 5 A glow like fire.

Step 6 A blue/white flame moving in front of the camera.

Step 7 Fluid flowing out of a pipe: use a droplet texture and a LOT of particles, with no variability and the same particle size at the start and end of their lifetime.

Step 8 A smoky fire: use darker colours. Gravity is reversed at the end so that the smoke forms a cloud before fading away.

Step 9 A starfield drifting past the camera. Simulated by a strong gravity force towards the camera. The particles do not change colour.

Download resource files here. When available.

Tutorial written by Stuart

Designer tutorials
Boolean This is a tutorial aimed at beginners to OpenFX and 3D modelling in general. It explains how to cut one object out of another using the Boolean action in the OpenFX Designer.

Step 1 Start Designer, select 'Insert Primitive' from the toolbar and create an object.

Step 2 It's always a good idea to colour each piece of geometry to help with identifying it later. To access the material attributes press F8 or go to Attributes > Materials Settings in the menu bar. Then deselect the newly created geometry by pressing Ctrl+D.

Step 3

Create another object and move it, by pressing M or selecting the move tool on the toolbar, so that it intersects the first object you made. This new object will be the tool object for the Boolean process. Don't forget to give it a new colour in the materials editor. Leave this object selected for the time being.

Step 4

In the Actions menu select Boolean. For a subtractive Boolean leave all the settings as they are and press OK.

Step 5

Step 6 Now that the Boolean operation is complete you have four pieces of geometry: the tool, the workpiece, and two cut objects formed from the intersection of the two primary objects.

Step 7 Select none (Ctrl+D) and then object-select any vertex within the new Booleaned objects. This will highlight the object's vertices, turning them yellow. Now delete this object.

Step 8 What you have left is the original tool object and a new cut away of the workpiece.

Step 9 With the two objects selected, go to the Actions menu and perform a Weld command (leave the settings as they are). This will combine the two objects into one single object. Have fun.

Tutorial written by Andrew Heyworth

Simple Skeleton This tutorial is for beginners who want to make a model that can be posed easily using the skeletal structure. Updated: Revised version available (02.03.2001)

Step 1 We are starting off by making a simple structure to animate: a tank with a swinging turret. Make a box for the base and a cylinder for the barrel. I have placed the start of the barrel directly above the centroid of the box to make things easier.

Step 2 Click any Selector tool. I find that the Hierarchy Joints menu is sometimes unavailable if you are not using a Selector. Go into Edit > Hierarchy Joints. In the Hierarchy Joints menu hit the "+". Click on the 1 that appears in the menu; wait a second and it should go into rename mode. Type in "Base" then click anywhere in the pop-up except for the button area. This joint will be used to rotate the model relative to the model centre. Since the barrel is going to be rotating relative to the base, it needs to be in a subsection of the base. Click on the joint Base if it isn't currently selected. To make things easier on yourself, you might want to reposition the blue crosshair in the 2D views before making each joint. Hit the "+" and rename the 2 to Shoulder. Make another joint off Shoulder and rename it Barrel. The Shoulder is there so that there will be a joint at the start and end of the limb (barrel). Renamed joints are an asset because they stop you from asking yourself questions like: was joint 5 a left arm or was it a right arm? If you distribute your model, renamed joints are appreciated. Select the box on your model. Click on "Base" in the Hierarchy Joints menu. One of the buttons in the pop-up corresponds to Attach; it is the sixth button from the left. Click it. The cube is now known to OpenFX as "Base". Deselect the cube and select the cylinder. Click on "Barrel", then Attach. Close the Hierarchy Joints menu.

Step 3 Go into Edit > Skeleton > Reposition Joints. You should see a light blue square and three circles appear with lines connecting them. If you didn't reposition the crosshair before making the joints, you may have trouble spotting the circles: all of them will be where you left the blue crosshair. Press Shift+A to locate any lost circles, remembering that some circles may be stacked on one point. Move the square so it is in the middle of the cube. Move the first circle so it is above the square. Move the next circle to the start of the gun barrel. Move the last circle to the other end of the gun barrel. You have finished making the skeleton for your tank! This is a very good point to save your work. You may never be able to get your model straight again, so this save should be your base, unposed work. Save your model as something else from here on in.

Step 4 Go into Edit > Skeleton > Pose or hit Shift+A to go into Pose mode. Your tank should appear as two boxes now, with the skeleton visible in it. Click on the square. Dragging will rotate the entire model around the square. Press Tab to move the yellow axis indicator to a new axis and a new rotation plane. Click on the first circle. This will rotate the base and everything below it in the hierarchy (like the barrel). This is different from the square: if you added an extra shape to your model, the square would rotate it but the first circle wouldn't. You should note at this time that the OpenGL window that displays your model will not be updated into the new position until you use the pan tool to move one of the other windows around. (V1.0, possible bug) The final circle rotates the barrel and anything that would be below it in the hierarchy (nothing on this model is below it). Press Tab a few times and you will see that the axis indicator has three positions on the previous circle and one on the current circle. This fourth position is the spin axis. Note that repositioning the joints will cause the rotation axes to move from their current vectors to other, more confusing vectors.
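The way a joint's rotation propagates down the hierarchy, but never up, can be sketched in a few lines. This is a 2D illustration of the concept only, using the tank's joint names; it is not OpenFX code and the coordinates are invented.

```python
# Minimal 2D sketch of the hierarchy behaviour described above: rotating a
# joint carries every joint below it in the hierarchy along with it, while
# joints above it are unaffected.
import math

# joint -> (position, parent); a tiny hierarchy like the tutorial's tank
joints = {"Base": ((0.0, 0.0), None),
          "Shoulder": ((0.0, 1.0), "Base"),
          "Barrel": ((2.0, 1.0), "Shoulder")}

def rotate_about(point, pivot, degrees):
    a = math.radians(degrees)
    dx, dy = point[0] - pivot[0], point[1] - pivot[1]
    return (pivot[0] + dx * math.cos(a) - dy * math.sin(a),
            pivot[1] + dx * math.sin(a) + dy * math.cos(a))

def subtree(name):
    """All joints at or below `name` in the hierarchy."""
    out = [name]
    for child in [j for j, (_, p) in joints.items() if p == name]:
        out += subtree(child)
    return out

def rotate_joint(name, degrees):
    pivot = joints[name][0]
    for j in subtree(name):
        if j != name:                       # the pivot joint itself stays put
            pos, parent = joints[j]
            joints[j] = (rotate_about(pos, pivot, degrees), parent)

rotate_joint("Shoulder", 90)   # swings the Barrel; Base is unaffected
print(joints["Barrel"])        # -> approximately (0.0, 3.0)
```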

Step 5 If you were to add another cylinder to the end of the barrel, you could adapt the barrel into an arm. Create another joint below barrel in the Joint Hierarchy and attach the new cylinder to it (If you are smart you may want to move the blue crosshair to the end of the new cylinder before making the new joint). Reposition the joint if necessary. This will leave you with an arm with an elbow joint. Try making a second arm on the model as your homework. Have Fun making posable models. Download model here. Tutorial written by Keith Kelly

Shampoo Bottle This demonstrates the use of the Lathe tool in the designer and texture mapping. It also shows how to use the animator to create a simple static scene.

Step 1 Using the pen tool create the three main components of the bottle: lid, body and liquid. To help later on, it's better to construct the profiles about the Z axis (see opposite). Try to leave a small gap between all the components so that no geometry intersects any other component; it will look better when finally rendered.

Step 2

Selecting each profile separately, apply the Lathe tool with a setting of 14 segments and 360 degrees. As you lathe each profile, assign a material in the Attributes menu (or press F8). The materials are: LID, BODY and LIQUID. Use a refraction index setting of 1.50 (glass) for the body material and 1.333 (water) for the liquid. Remember: when rendering a refractive surface, turn on trace shadows and trace reflections. Hint: to lathe accurately about the centre, open the Display Co-ordinates dialog box in the View menu, click anywhere in the top view and change X, Y, Z in the Display Co-ordinates box to 0,0,0. Then Actions > Lathe or Ctrl+L.

Step 3

Now to select the faces that will have a decal mapping applied to them, for the label. Using the selector tool from the toolbar, hold down Shift and lasso the front 5 vertices, top and bottom. With these selected go into Attributes > Materials Settings and open the Maps tab. Here select Set and choose <new>. Amend the menu to look like this example. Hint: selecting the required vertices couldn't be easier; just remember that whatever you want to zoom in on must be visible in all three view ports before you can select it.

Step 4 Some sections of the label image are not required; these sections can be made transparent by telling OpenFX which colour isn't required in the Decal Colour section of the Image Map Settings dialog box. In this dialog box we must also select the label which is to be applied to the surface as a still image.

Step 5 Still in the Image Map Settings dialog box, press the Move button. A small floating menu will appear. In the rear view window click on the top left-hand vertex where the map will go and press the T/L button, then click on the top right vertex and press T/R, click on the bottom left vertex and press B/L, and finally press Done. The decal is now positioned. Save the file as bottle.

Step 6 From the original file we'll produce two: one for the bottle and lid, and the other for the liquid. Copy the original file and open it, delete the lid and bottle objects and select the liquid. In the Actions menu select Duplicate and click OK. Now we have a copy of the liquid object. Double-click on the scale icon and reduce it in all axes to 0.99, clicking on scale vertices and scale from centre. Now flip the normals using Actions > Reverse and change the material to this example. Save this file as liquid. Open the original file, delete the liquid and save the file as bottle.

Step 7 Create a new file for the backdrop object, using a single extruded polygon, and assign the Plate plugin as a texture. Set the scale for X, Y, Z to 3.0, don't select dirty plate, and set colour variation to 0.0. Save this as wall.

Step 8 In Animator load the bottle and liquid models. If they appear in different places, select one at a time, double-click the move icon and set the X, Y, Z to 0,0,0. Load the wall object and position it as opposite. This scene has been lit with a spotlight for the refractive surfaces and a general omni light. A ground object forms the floor, and a sky object is set to a gradient of pale blue / light yellow. Play with the lighting and sky to get the best balance of colour and shadows.

Step 9 Render the scene with trace reflections, trace shadows and trace glass on. Be warned: this does take a long time, so go make yourself a cup of tea and sit back. Download resource files here.

Tutorial written by Andrew Heyworth

Rubber Bones in Designer This shows how to use the rubber bones mechanism to distort objects in the designer.

Step 1 Two spheres and two bones. The extent of the range is shown for each bone; it is drawn as a radius around each end of the bone. The range of bone 1 extends over the whole of sphere 1 but does not influence sphere 2. The range of bone 2 is larger and does extend to cover the sphere around bone 1.

Step 2 When bone 2 is elongated (by moving node 2) there is a small distortion of the top part of the sphere around bone 1.

Step 3 When we rotate bone 1, the sphere around it is distorted but the influence does not affect the sphere at the end of bone 2 as it is too far away.

Step 4 In this example we see the effect of distorting a sphere when all vertices lie within the range of all the bones. (Start)

Step 5 In this example we see the effect of distorting a sphere when all vertices lie within the range of all the bones: move vertices 1 and 2 (at the ends of the bones) together.

Step 6 In this example we see the effect of distorting a sphere when all vertices (bone nodes) lie within the range of all the bones: move vertex 4.

Step 7 This example is a variation of the previous example, where the bone joining nodes 3 and 4 lies too far away from the vertices in the sphere to influence it.

Step 8 The result of moving vertices 1&2 and 3&4 together is illustrated. (Moving 3&4 has no effect.)

Step 9 Moving 1&2 causes an 'on-block' move of the vertices, because all the bones influencing the vertices move by the same amount at the same time.
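The behaviour in these steps can be summarised as distance-weighted blending. The sketch below is an illustration under stated assumptions: the linear falloff and weight normalisation are guesses at plausible behaviour, not OpenFX's actual formula. It does, however, reproduce the qualitative results above, including the 'on-block' move of Step 9.

```python
# Sketch of the rubber-bones idea: a vertex is displaced by every bone
# whose range covers it, weighted by a distance falloff. When all the
# bones influencing a vertex move by the same amount, the vertex moves
# rigidly ("on-block"). The falloff and weighting are assumptions.
import math

def deform(vertex, bones):
    """bones: list of (node_position, range, displacement) tuples."""
    total_w, dx, dy, dz = 0.0, 0.0, 0.0, 0.0
    for node, rng, disp in bones:
        d = math.dist(vertex, node)
        if d < rng:
            w = 1.0 - d / rng            # simple linear falloff (assumed)
            total_w += w
            dx += w * disp[0]; dy += w * disp[1]; dz += w * disp[2]
    if total_w == 0.0:
        return vertex                    # outside every bone's range
    return (vertex[0] + dx / total_w,
            vertex[1] + dy / total_w,
            vertex[2] + dz / total_w)

v = (0.5, 0.0, 0.0)
bones = [((0, 0, 0), 2.0, (0, 0, 1)), ((1, 0, 0), 2.0, (0, 0, 1))]
print(deform(v, bones))   # both bones move +1 in Z, so the vertex does too
```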

Download resource files here.

Tutorial written by Stuart

Shrink Wrap This tutorial shows how to use the shrink wrap feature (based on the marching cubes concept) to build models with a regular mesh around any arbitrary shape.

Step 1 In this tutorial we will demonstrate the Shrink Wrapping plug-in action. This action uses the Visualisation Toolkit library to provide the marching cubes algorithm for rendering an iso-surface in a potential field. The field is generated by using OpenFX's mesh topology to define the potential field; this can be done using either a vertex, an edge or a face. The idea of the action is to place a volume-filling cubic array of points (a point cloud) around all the selected vertices and then to use this array to build a surface mesh that lies within this volume of points, at the place where the specified equipotential takes the value zero. The shape of this surface is defined by the choice of element (vertex, edge, face) to be wrapped and the distance from the element that is defined to be the location of the zero potential value. The action requires that this distance is specified in terms of the 'units' of scale in the model, so the first thing that any user of this tool must do is make sure they are aware of the scaling in use. The best way to see how this tool works is to start with a few simple shapes and look at the results of the application of the tool.

Step 2 We will start with two vertices, and link them together with an edge so that we can see the effect of shrinking around vertices and around edges. The vertex on the left is circled in red; the coordinates dialog is used to set the coordinate origin at the position of this vertex. The 3D cursor (circled in yellow) is then placed at a distance along the X axis, and the coordinates dialog used to make the distance between vertex and 3D cursor (current location) 1 (one) unit. The hold point for this model is indicated at the yellow cross. The second vertex is located approximately 1 unit to the right of the first vertex. (This action requires at least two vertices to be selected.)

Step 3 We can now try the shrink wrap tool. Bring up the dialog with Actions->PlugIn_Actions->Shrink_Wrap. Click on the button that says "Round Vertices". The default maximum sampling distance and surface sample distance will suffice for this first step. Note that at the bottom of the dialog are 3 boxes to enter the sampling resolutions. These 3 numbers dictate the coarseness (or fineness) of the mesh. To understand the sampling resolution, think of the X (horizontal) direction: the default is for 25 mesh points to be spaced out along this dimension to enclose the whole span of the selected vertices (in the X direction) plus an additional distance of the "Maximum Sampling Distance From Feature". The location of the surface will be drawn at a distance of the "Surface Sampled at Distance From Feature". A fair choice of these parameters will depend on the dimensions of the model itself, but a sampled distance of about 25% of the maximum sampling distance will give a nice smooth surface around the vertices.
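To see what the dialog's numbers mean, here is a sketch of the underlying idea: sample a distance field on a regular grid around the selected vertices and extract the iso-surface at the chosen surface distance. OpenFX uses the Visualisation Toolkit internally; the sketch below uses scikit-image's marching_cubes instead, purely to illustrate the concept with the tutorial's settings (25 samples, range 1.0, surface distance 0.1).

```python
# Illustration of the sampling idea described above: evaluate a distance
# field from the selected vertices on a regular grid, then extract the
# surface where the field equals the "surface distance". Not OpenFX code.
import numpy as np
from skimage.measure import marching_cubes

verts_in = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])  # the two vertices
max_dist, surf_dist, res = 1.0, 0.1, 25                   # dialog settings

# Grid spans the selected vertices plus max_dist padding on every side.
lo = verts_in.min(axis=0) - max_dist
hi = verts_in.max(axis=0) + max_dist
axes = [np.linspace(lo[i], hi[i], res) for i in range(3)]
gx, gy, gz = np.meshgrid(*axes, indexing="ij")
pts = np.stack([gx, gy, gz], axis=-1)

# Potential field = distance to the nearest selected vertex.
field = np.min(np.linalg.norm(
    pts[..., None, :] - verts_in[None, None, None, :, :], axis=-1), axis=-1)

# Iso-surface where the field equals the surface distance.
mesh_verts, faces, _, _ = marching_cubes(field, level=surf_dist)
print(len(mesh_verts), "vertices,", len(faces), "faces")
```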

Step 4 Back to the example. With the "Round Vertex" action selected - Click OK. The resulting mesh has been created around the two vertices. It is not very smooth because the 25x25x25 grid points cover a very large region of space out to a range of 1.0. If we repeat the process using a surface distance of 0.7 the mesh surface will be built at a much greater distance.

Step 5 This shows the result when we set a range of 1.0 and a surface distance of 0.7 instead of 0.1 (again using the "Round Vertex" option). You will see that the surface is drawn much further away from the vertices and it looks like two spheres intersecting. However, if you check out the mesh in detail you will see it is a single surface. Essentially it is a mesh that maps out the value of a potential field emanating from the two vertices.

Step 6 If we now re-load the original two-vertex model and repeat the shrink wrap example, but this time with the "Round Edges" option selected, we will get a cylinder with rounded-off edges. This is because we are considering the field emanating from the edge, and it does not "fall off" from just the vertices.

Step 7 Again, changing the surface distance to 0.7 (leaving the max set to 1.0) will produce a surface that is further away from the edge used by the tool.

Step 8 It takes a little experimentation to get a good feel for the adjustment of the parameters of the Shrink Wrap function. It should not be necessary to change the default sample resolution. Begin by trying to get the form of the shape that you want by choosing values for the sampling distance and the maximum sampling distance. Remember that the significance of these depends on the unit distances. I suggest that you start by assigning the "Sampling Distance": use the "Units" floating dialog to visually decide how far away from the shape you want the surface to lie. Then choose the "Maximum Sampling Distance" so that it is about 2 to 3 times as far away. (This allows the surface to have a fairly smooth appearance and maximises the use of the sample resolution. The UNDO function is a useful way to experiment.) When you have a good shape you can always repeat the step with a larger sample resolution. If you make the maximum sampling distance too large, or you make the surface distance too large, then the mesh will come up against the internal boundary that surrounds all the selected vertices, giving a result like this.

Step 9 The Shrink Wrap action can also wrap around polygons; for example, if you shrink wrap around a cube a nice "dice"-type object can be created. The images for this step illustrate the effect of shrink wrapping around a point cloud, a 3D curve and a cube. Lots of interesting shapes can be built using this tool alone. It can even be used to build a more regular mesh around some shapes that have a poor mesh.

Download resource files here.

Tutorial written by Stuart

Paint Map This tutorial shows how to use the enhanced map painting dialog (also called the Texture Coordinates Editor) to fine-tune image maps to achieve a good match between the image map and the way it covers the painted facets.

Step 1 This utility is specifically designed to allow a single image map to contain a collection of sub-images. For example, it can be used to design the textures that apply to a character: face, clothing etc. By setting the mapping coordinates in this editor, the mesh for the face can be aligned with the part of the picture representing the face, the torso mesh mapping coordinates can be aligned with the part of the image representing the torso, and so on.

Step 2 To use this editor, begin by selecting all the faces for which the mapping coordinates are to be changed. It is necessary to place some form of initial mapping onto these faces, and they all must have the same image mapping applied. (The initial mapping is necessary because, if no mapping coordinates are applied before the map coordinates editor is started, all the vertices in the mesh will be superimposed.) Important: the mesh that appears in this editor is NOT the mesh that makes up the shape of the object being textured; this mesh is a mesh of surface mapping coordinates. We can start with this arrangement of planar polygons that have an arbitrary map applied and select the indicated vertices. These apply to the rectangle highlighted in the 3D window.

Step 3 Once the faces (vertices) and map have been selected, the mapping coordinates editor appears as a floating window above OpenFX (on a two-screen desktop it can be dragged off onto a second screen). The editor will show the map image and superimpose on it a mesh representing the location of the image map coordinates of the vertices within the selected faces of the original model mesh. (Remember, the model structure mesh is three dimensional, whereas the mapping coordinate mesh is two dimensional; it covers the surface of the selected faces.) Here we see the picture used for the map, with the mapping coordinates of the vertices of the selected mesh drawn in red. The mapping coordinate editor has a single menu with a number of commands that represent the tools and functions of the editor; the most commonly used ones also have keyboard equivalents. By moving the position of the vertices in the texture mesh map, the part of the map that covers any particular part of the object may be altered.

Step 4 If we reposition the mesh in RED as shown, then the rectangle is painted by that part of the map image that lies inside the rectangle (as shown). We can use the "Update" command to update the mapping and the Select commands to select and de-select the vertices in the mesh. (Note that this selection only applies to vertices in the Paint Mapper; it has nothing to do with selected/deselected/hidden vertices in the normal Designer!)

Update - Update the mapping coordinates in the model from the data in the Paint Map Coordinates editor, also called the Texture Coordinate Editor (TCE). (After this, the design module and renderer can be used to inspect the effect of any mapping changes without leaving the TCE.)

Selection - This menu offers the option to select all or deselect all the vertices in the editor (selected vertices are drawn as larger rectangles, deselected vertices as small rectangles). The following keyboard equivalents may be used: Select All (Ctrl+A), Deselect All (Ctrl+D).

Select (keyboard key S) - When this tool is active you can click and drag a selection box to enclose one or more vertices. Clicking on a single vertex (without dragging) will select it (or deselect it, if it is already selected).

Step 5

By rotating, scaling and moving (one vertex or several selected vertices) we can obtain the mapping shown here.

Move (keyboard key M) - Move the selected vertices by clicking and dragging the mouse. If no vertices are selected you can move a single vertex by clicking and dragging it.

Rotate (keyboard key R) - Rotate the selected vertices. Click in the image and drag the mouse (LMB held down). Rotation is around the point of first click.

Scale (keyboard key X) - Scale the selected vertices. Click and drag the mouse. Scaling takes place about the point of first click.
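These three tools are ordinary 2D affine transforms applied to the selected (u, v) mapping coordinates. A minimal sketch of the same operations, where the pivot plays the role of the 'point of first click'; the coordinates are invented and this is not OpenFX code:

```python
# Move/Rotate/Scale as plain 2D transforms on (u, v) texture coordinates.
import math

def move(uvs, du, dv):
    return [(u + du, v + dv) for u, v in uvs]

def rotate(uvs, pivot, degrees):
    a = math.radians(degrees)
    px, py = pivot
    return [(px + (u - px) * math.cos(a) - (v - py) * math.sin(a),
             py + (u - px) * math.sin(a) + (v - py) * math.cos(a))
            for u, v in uvs]

def scale(uvs, pivot, factor):
    px, py = pivot
    return [(px + (u - px) * factor, py + (v - py) * factor) for u, v in uvs]

quad = [(0.1, 0.1), (0.4, 0.1), (0.4, 0.3), (0.1, 0.3)]  # a mapped rectangle
print(scale(rotate(quad, (0.25, 0.2), 90), (0.25, 0.2), 2.0))
```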

Step 6 After re-alignment the map rectangle surrounds the upper row of characters. This is applied to the indicated rectangle with the result pointed to in the 3D window.

Step 7 When we select all the vertices and reposition the image map coordinates, the result will be like this. To finish we use the "Exit" command.

Exit - Return to the OpenFX design module.

We can change the pen colour in which the mesh and vertices are drawn using the menu command:

Pen - This submenu offers a selection of colours in which to draw the map mesh. Choose the one that offers the best contrast with the background image.

To look at a part of the image/mesh in greater detail, the Pan and Zoom tools can be used:

Pan (keyboard key P) - When the window is showing an enlarged image (i.e. when zoomed in), this tool will move the image so that those parts of it that lie outside the visible window will be moved into view.

Zoom (keyboard key Z) - Use this tool to enlarge part of the image, for fine mesh adjustment.

Download resource files here. When available. Tutorial written by Stuart

Renderer tutorials
Renderer Settings How to use the renderer settings - especially the Ray Tracing settings.

Step 1 Select the desired rendering settings in the Control, Options, Settings, and Tracing tab pages of the Renderer dialog. Press one of the Render buttons on the right of the Renderer dialog to start rendering. By default the scanline (software) renderer is used to produce the images. Various levels of ray tracing can be enabled in the Tracing tab page. Phong rendering is suggested in most cases, since ray tracing is notoriously memory and time intensive. If your hardware can support it, the GPU Render button will be un-grayed and you can use it. The software renderer is multi-threaded and will benefit greatly if additional processors/cores are installed in the system; rendering on a two-processor system will be almost twice as fast as with one processor.

Step 2 The GPU renderer uses the same control settings as the scanline renderer. Because this renderer relies on the graphics adapter's processor hardware, it is important to appreciate that some limitations apply: only the first 8 lights are used; no spotlight shadows are available; no light distance attenuation applies; not all image post-processors or plug-in textures are available; and ray tracing is disabled.

This renderer works much faster than the software renderer - up to 32 times as fast. When rendering some animations it will work almost in real-time at large frame sizes and high settings of anti-aliasing. It works most efficiently when rendering animations, because all the models and image files are loaded only once at the start of the sequence and do not have to be loaded at every frame. When rendering a single frame, all the models and images in use for that frame have to be loaded.

Step 3: Renderer Control Tab Page The Control page of the Renderer offers the most significant control parameters. The most important buttons are on the right and control whether the scanline software renderer or the hardware rendering engine (GPU rendering) is used. The hardware rendering engine will be much faster for most images and movies (up to 30 times faster) but it will not work on older hardware, and some graphics cards may not be compatible. (You must have a graphics adapter that supports at least OpenGL 2.0.)

Resolution - Choose the resolution to which the image is rendered. Choose one of the presets or enter the number of horizontal and/or vertical pixels in the edit controls. The number of pixels in the image is the horizontal resolution multiplied by the vertical resolution. The higher the resolution, the longer it takes and the more memory it needs to render the image.

Anti-Aliasing - Choose the level of anti-aliasing. In low resolution modes the appearance of an image can be improved if it is anti-aliased. Basically, anti-aliasing smoothes out some of the 'jaggies' that are visible at the edges of colored regions in the image. Anti-aliasing is very time consuming: it will take a little under 4 times as long to render a Good anti-aliased image as an image with no anti-aliasing. Using the Best level of anti-aliasing, the image will take approximately 9 times longer to render than a non anti-aliased one.

Accelerated Anti-Aliasing - Accelerate the anti-aliasing process, with a small quality reduction. Normally anti-aliasing an image increases the rendering time quite significantly, up to 9 or 10 times for a setting of Best. The main effect of anti-aliasing is seen near the edges of models or where faces with different colors join; pixels in the interior of faces benefit very little from anti-aliasing. When this option is selected, anti-aliasing is applied near the edges of faces and not in the center. For most models this is as good as full anti-aliasing and it can be up to 3 times faster. Images with models that have image maps or shaders, where there is quite a bit of detail in the interior of faces, will not benefit as much from having their anti-aliasing accelerated.

Image Format - Select the file format into which the image or images will be stored. This automatically determines the color resolution in the image. For a single image or multiple still images, the GIF format uses an 8 bit palette, while the TGA and TIF formats are either HighColor or TrueColor (16 bit or 24 bit).

Image Name - Name the image and the directory to which it is to be rendered. The default image directory is "Render".

Movie Name - Name the animation and the directory to which it is to be rendered and compressed. The default animation directory is "Movies". Movies are written in the AVI file format and can be written using any installed codec, or in an uncompressed format for high quality postprocessing.

Current Frame - Specify which frame of an animation is to be rendered.

Range - Specify a range of frames to be rendered.

Step 4: Renderer Options Tab Page The Options page of the Renderer offers additional control parameters.

Show Image While Rendering - Displays a preview image in the Rendering Status Window during the rendering.

Startup Preferences - Enables preference settings to override the defaults.

Hardware Renderer Settings - These apply to the GPU renderer:

Output directly to Screen - If you use this option you will be limited in the resolution of the output to the size of your desktop window, and it must draw on top of your desktop, so you will not be able to continue working while rendering.

Render to Pixel Buffer (P-buffer) - This is the best option.

Render to Frame Buffer Object (FBO) - Render to an offscreen buffer. This should be the fastest option, but it behaves differently on different adapter cards.

If you tick the Apply Software Post..... button then the images created by the OpenGL renderer will be passed to the software processor functions (external DLLs). This is useful since not all the postprocessors have been re-written for use with the OpenGL renderer. If you tick the Copy Hardware Depth... button then the hardware depth buffer is copied to the software depth buffer and you can use depth dependent software post-processors (like fog).

Step 5: Renderer Settings Tab Page The Settings page of the renderer offers additional control parameters. Note: the smoothing angle, ambient light, shadow density and spotlight buffer settings ONLY apply to the scanline software renderer.

Animation Speed - Set the speed value built into the FLI/FLC files. The value is in jiffies; zero implies the animation will play as fast as possible.

Smoothing Angle - Set the maximum Phong smoothing angle. If the angle between any adjacent faces exceeds this value, the Phong smoothing is automatically turned off for those faces. This switch can help eliminate artefacts that sometimes appear in an image if the smoothing in a model has been set inappropriately.

Ambient Lighting % - Overrides the ambient light settings usually specified in the Sky actor. A maximum of 30% works quite well; values greater than 70% produce images that have very little contrast.

Shadow Density % - Set the darkness of shadows cast on the ground. 50% is the default, 100% gives completely black shadows, 0% gives no shadows at all.

Spotlight Buffer - Alter the resolution of the shadows cast by spotlights. The shadows created by the spotlights require a block of memory. A setting of 320 reserves a block of memory of 256k for each spotlight. The size of the block dictates the smoothness of the shadow. A value of zero implies that a spotlight will not cast a shadow. You can calculate optimum settings for best results by multiplying the number of pixels by 4 (e.g. 320x200x4 = 256,000 bytes required).
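The spotlight buffer rule of thumb is simple enough to check by hand; a tiny sketch reproducing the figure quoted above (the 320 x 200 buffer dimensions are taken from the tutorial's own example):

```python
# Spotlight shadow-buffer memory rule of thumb from above:
# bytes per spotlight = number of buffer pixels x 4.
def spotlight_buffer_bytes(width, height):
    return width * height * 4

print(spotlight_buffer_bytes(320, 200))  # 256000 bytes, i.e. ~256k per spotlight
```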

Motion Blur - Higher values generate motion blur. The normal range is 0-10; when a number outside this range is entered, the full integrated motion blur comes into play.

Digits in Filenames - Specify the number of leading zeros in a rendered filename.

JPEG Picture Quality - Specify the JPEG compression quality. Higher quality creates larger file sizes.

Step 6: Renderer Ray-Tracing Tab Page The Tracing page of the Renderer enables ray-trace rendering to be used. Turning on any of the Tracing options instructs the Renderer to use ray tracing where appropriate. Note: these options and settings only apply to the Software Renderer.

Trace Shadows - All shadows will be ray traced; this increases the rendering time significantly.

Trace Reflections - All reflective surfaces will be ray traced; this also increases the rendering time.

Trace Glass - All glass surfaces will be ray traced; this includes normal glass, like windows, as well as refractive glass.

Algorithm Parameters - Ray tracing a scene with tens of thousands of polygons requires some optimisation to be undertaken before the scene is rendered. This is explained more fully in the section on "Setting the ray tracing parameters". There are no 'hard and fast' rules for choosing which approach to use. If you have a very complex scene then you may wish to increase the "Polygons per Voxel" setting; up to 1000 polygons per voxel will still be acceptable if you have a scene with several hundred thousand polygons. It would be unwise to increase the recursive depth beyond about 10, as this will hugely increase the amount of time it takes to optimise the scene before ray-tracing begins, and will also increase the amount of memory used to store the data. To understand how to use the ray-tracing settings we need to explain a little of how ray tracing is optimised; depending on what the scene is, one of these algorithmic approaches may work better.

Step 7: Status Information The reported information in the status line of the renderer window can be very useful in making subsequent renderings of the same scene go faster.

F = number of faces in the scene
V = number of vertices in the scene
Dp = maximum octree depth used
Th = target number of faces to be assigned per voxel
F/V = the maximum number of faces assigned by the optimisation algorithm per voxel
Size = the maximum (and minimum) dimension, in internal units, of any voxel created by the optimisation
Vox = number of voxels created by the optimisation
Fc = total number of face assignments made to voxels in the spatial subdivision

Some of the terms used above may be unfamiliar to you, but in order to set the optimisation parameters you will need to understand these terms, which arise in the standard ray tracing algorithm and in how it is optimised to enable scenes with very large numbers of polygons (faces, in the jargon) to be rendered.

Step 8: Ray Tracing Basics Ray tracing may be a slow procedure but it does give superb photographic quality images. There are many books that describe the theory of ray tracing in detail. A full description of a complete package is given in [1], and there are also many freeware and shareware ray tracing programs available. One of the most comprehensive is POV-Ray, which is available from a large number of WWW locations. The idea on which ray tracing is based seems almost too simple to be true, but here it is: follow the paths taken by particles of light (photons) on their journey from light source through the scene (as they are scattered back and forth between objects) and on until they are either captured by the camera or head off to infinity. In practice this is a hopeless task, because of the huge number of photons emitted by a light source and the fact that only a minute fraction of them will ever be scattered into the camera or eye. So ray tracing does it in reverse: it sends feelers, or rays, along the paths of the photons from the camera out into the scene. If the feeler rays find anything, they work back towards the sources of illumination and give us a path for the photons to follow from source to photographic plate. Sometimes these feeler rays may encounter reflective surfaces, and when that happens they follow a new path and continue their journey. There are lots of other things that can happen to feeler rays; sometimes they may divide, with each sub-ray following a separate path. The way in which the ray tracing algorithm handles such situations, and the sophistication of the mathematical models it uses for light/surface interaction, governs the quality of the images produced. The standard algorithm for ray tracing can be summarised in five steps:

1. Load the data describing the objects in the scene into memory.
2. Move each model into its appointed position.
3. Apply the viewing transformation.
4. Calculate the direction of a feeler ray from the viewpoint so that it passes through the point in the projection plane equivalent to a pixel in the display raster.
5. Follow (trace) the feeler ray out into the scene until it intersects a polygon. Once the point of intersection is located, the surface properties can be determined and the pixel set to the appropriate colour value. If the ray is not totally absorbed when it first hits something in the scene, continue to trace its path until it reaches a final destination or is so attenuated that there is nothing to see.
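Steps 4 and 5 translate almost directly into code. Below is a minimal, illustrative Python sketch of the tracing loop - it is not OpenFX's implementation, and every function and variable name in it is our own. It fires one primary ray per pixel from a camera at the origin and tests it against every triangle, keeping the nearest hit:

```python
# Minimal brute-force ray caster (illustrative sketch, not OpenFX code).

def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def intersect(orig, d, tri, eps=1e-9):
    """Moller-Trumbore ray/triangle test: distance along the ray, or None."""
    v0, v1, v2 = tri
    e1, e2 = sub(v1, v0), sub(v2, v0)
    p = cross(d, e2)
    det = dot(e1, p)
    if abs(det) < eps:                  # ray parallel to the triangle's plane
        return None
    inv = 1.0 / det
    tv = sub(orig, v0)
    u = dot(tv, p) * inv
    if u < 0.0 or u > 1.0:
        return None
    q = cross(tv, e1)
    v = dot(d, q) * inv
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(e2, q) * inv
    return t if t > eps else None

def render(triangles, width, height):
    """Steps 4 and 5: one feeler ray per pixel, tested against EVERY polygon."""
    image = []
    for j in range(height):
        row = []
        for i in range(width):
            # Direction through this pixel's point on the projection plane.
            x = 2.0 * (i + 0.5) / width - 1.0
            y = 1.0 - 2.0 * (j + 0.5) / height
            d = (x, y, 1.0)
            hits = [t for tri in triangles
                    if (t := intersect((0.0, 0.0, 0.0), d, tri)) is not None]
            row.append(min(hits) if hits else None)    # nearest hit wins
        image.append(row)
    return image
```

For example, render([((-1.0, -1.0, 3.0), (1.0, -1.0, 3.0), (0.0, 1.0, 3.0))], 64, 64) returns a 64 by 64 grid of hit distances (None where a ray misses). A real renderer would shade each hit and spawn further rays for reflections and refractions, exactly as described above.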

So that's the basic algorithm, but if you have ever experimented with ray-tracing software you probably found that at most you could use about 100 primitive shapes, and your main observation was probably the hours it took the computer to trace even an image of moderate resolution. What is worse, the rendering time increases as the number of models in the scene increases; so if you need to render a scene with models made up from thousands of polygons, a basic ray-tracing renderer is virtually useless. So the main question is: why is ray tracing so much slower? The answer is in the algorithm, in particular the step: for each pixel, test every polygon and find the one closest to the origin of the ray. This is potentially an enormous number of operations. For example, to synthesise an image of 640 by 480 pixels using a 2 by 2 anti-aliasing supersample with a scene of 20,000 polygons requires 640 x 480 x 4 x 20,000 = 24,576,000,000, roughly 24.6 billion, intersection tests. To reduce the time it takes to render a scene with a ray tracer, some attempt must be made to cut down this enormous number of calculations: a technique we call optimisation.

Step 9: Ray Tracing Optimisation

It is the job of an optimisation routine to divide the scene up in some way so that every ray does not have to be tested against every polygon. Optimisation usually requires two modifications to the ray-tracing algorithm:

1. A scene-evaluation step that takes place before the scene is rendered.
2. As each ray is traced, the scene-evaluation data is used to reduce the number of polygons that must be tested against it as the ray passes from point to point.

Two approaches have received much investigation: bounding volumes and spatial subdivision.

The renderer uses the spatial subdivision method, so that is the one we will look at. The idea is to break up space into a number of small boxes so that only a few polygons actually lie in each little box. Then, as the rays are traced through the scene, we follow them from box to box and check whether a ray hits any of the polygons lying inside each box. If there are only a few polygons in each box we will have considerably reduced the number of calculations, and hence the time it takes to render a scene! Ideally, putting only one or two polygons in each box (in technical terms the "boxes" are known as "voxels") should be the goal of the first part of the optimisation scheme. A two-dimensional illustration of this principle is given below: it shows a collection of objects and a series of rectangles that divide up the region in which the objects lie so that only one object is in each little rectangle.

It is the job of the first optimisation step (scene evaluation) to divide the scene into the boxes to be used in the later tracing step. There are conflicting goals here: each voxel should have as few polygons (faces) inside it as possible, which implies the voxels should be small; but if there are too many boxes, the tracing step will not be very fast, because as the ray is followed it will have to travel through a very large number of boxes. This is a balancing act - the technical term is "optimisation". It requires quite a bit of processing to find the best arrangement of the voxels, but since many hours of processor time might be saved during rendering it is usually well worth the effort.

One comment worth making: the time it takes to optimise a scene depends on the number of polygons in the scene, not on the number of pixels in the output picture. So it takes just as long to optimise a scene when rendering a 60 by 40 image as it does for a 1024 by 768 image; however, it takes much, much longer to trace the image for the larger raster. Here again is a very profitable reason to do the best possible job of optimising the scene before tracing any rays.
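To see why this pays off, here is a deliberately simplified Python sketch of spatial subdivision - a uniform grid rather than OpenFX's octree, with all names our own invention. Triangles are binned into cells by bounding box, and a traced ray then only needs to consider the polygon lists of the cells it actually visits:

```python
# Simplified spatial subdivision: a uniform n x n x n grid of "voxels".
# Illustrative only; OpenFX uses an adaptive octree, described below.

def build_grid(triangles, n, lo, hi):
    """Assign each triangle (by index) to every cell its bounding box overlaps."""
    size = [(hi[k] - lo[k]) / n for k in range(3)]
    cells = {}
    for ti, tri in enumerate(triangles):
        bmin = [min(v[k] for v in tri) for k in range(3)]
        bmax = [max(v[k] for v in tri) for k in range(3)]
        lo_i = [max(0, int((bmin[k] - lo[k]) / size[k])) for k in range(3)]
        hi_i = [min(n - 1, int((bmax[k] - lo[k]) / size[k])) for k in range(3)]
        for i in range(lo_i[0], hi_i[0] + 1):
            for j in range(lo_i[1], hi_i[1] + 1):
                for k in range(lo_i[2], hi_i[2] + 1):
                    cells.setdefault((i, j, k), []).append(ti)
    return cells, size

def candidates(cells, size, n, lo, orig, d, t_max, step):
    """March along the ray and gather the triangle lists of visited cells.
    (A real tracer steps exactly from cell to cell; fixed-size steps,
    smaller than a cell, keep this sketch short.)"""
    seen, cand = set(), set()
    t = 0.0
    while t < t_max:
        p = [orig[k] + t * d[k] for k in range(3)]
        idx = tuple(int((p[k] - lo[k]) / size[k]) for k in range(3))
        if all(0 <= idx[k] < n for k in range(3)) and idx not in seen:
            seen.add(idx)
            cand.update(cells.get(idx, ()))
        t += step
    return cand    # usually far fewer triangles than the whole scene
```

Only the triangles returned by candidates() need the full ray/triangle test, which is where the speed-up over the brute-force loop in Step 8 comes from.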

Step 10: Octrees

The optimisation step starts by calculating the volume of space occupied by all the faces in the scene, creating a bounding box that encloses them all (the so-called root node of the octree). If this box contains more than the minimum number of faces it is divided into 8 smaller volumes. These child volumes are checked, and any faces assigned to the parent that also fall inside a child are put into a list for that child. If any child box contains more than the minimum number of faces it is further subdivided into 8 sub-volumes, just as its parent's box was. This process is recursive and gives us a tree-like structure called an octree, because at each branch there are 8 subdivisions.

This subdivision process for a cubic region of space is illustrated: the big cube has been divided into 8, and one child node is further divided at each stage. In hierarchical terms the octree resembles a tree in which the numbers in the circles represent the number of faces inside the box associated with that node. You can see that as the subdivision is refined, the number of faces in each box decreases. In this particular example the subdivision process stops when the number of faces in each box falls below 10.

Once the octree is fully built, ray tracing can proceed. When the ray-tracing part of the algorithm follows a ray through the scene it moves from box to box, and at most 10 faces have to be tested in each box: a big saving in computation time. The tree structure of the octree makes the task of following a ray through the scene quite quick. There is a bit of work involved in calculating the movement of the ray and in getting a fairly quick test of which box the current position of the traced ray lies in, but we don't need to worry about this to understand the setting of the optimisation parameters.

Step 11: Polygon Splitting

The figure of the man illustrates how the division into voxels works and how polygons are split across adjacent voxels. In (a) we see a model of a man made up of about 25,000 triangular polygons. In (b) the subdivided polygons are shown in false colour. In (c) the voxels and their subdivisions around the figure are shown; you can easily see that the larger voxels lie at the edge, as they contain no polygons and thus do not have to be subdivided. In (d) a close-up of the man's hand is shown with a tracing ray passing through it. In (e) the same view as in (d) is shown, but with the voxels also included.
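The recursive subdivision of Step 10 fits in a few lines of code. The sketch below is our own simplified Python illustration, not OpenFX's implementation: faces are represented only by their bounding boxes, and the constants MAX_DEPTH and TARGET mirror the renderer's Recursive Depth and Polygons per Voxel settings:

```python
# Recursive octree build (simplified sketch). A face is assigned to every
# child whose box its bounding box overlaps; names here are illustrative.

MAX_DEPTH = 8    # cf. the Recursive Depth setting
TARGET = 10      # cf. the Polygons per Voxel setting

def overlaps(face, lo, hi):
    """Axis-aligned bounding-box overlap test; face = (fmin, fmax)."""
    fmin, fmax = face
    return all(fmin[k] <= hi[k] and fmax[k] >= lo[k] for k in range(3))

class Node:
    def __init__(self, lo, hi, faces, depth=0):
        self.lo, self.hi = lo, hi
        self.faces, self.children = faces, []
        if depth < MAX_DEPTH and len(faces) > TARGET:
            mid = tuple((lo[k] + hi[k]) / 2 for k in range(3))
            for octant in range(8):          # one child per octant
                clo = tuple(lo[k] if (octant >> k) & 1 == 0 else mid[k]
                            for k in range(3))
                chi = tuple(mid[k] if (octant >> k) & 1 == 0 else hi[k]
                            for k in range(3))
                inside = [f for f in faces if overlaps(f, clo, chi)]
                if inside:
                    self.children.append(Node(clo, chi, inside, depth + 1))
            self.faces = []                  # the parent voxel keeps no polygons
```

Subdivision stops either when a voxel holds no more than TARGET faces (the "falls below 10" rule in the example above) or when MAX_DEPTH is reached, whichever comes first.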

As the ray is traced through the scene it passes in and out of the voxels. When it enters a voxel that contains any polygons (or parts of polygons), the voxel's list of polygons is checked for intersection with the ray. Hopefully the number of polygons in each voxel is small; and hopefully, when there are polygons in a voxel, an intersection will occur and there will be no need to trace the ray through any further voxels.

Before actually looking at the parameters there is one complicating issue we must mention. All this looks great in theory until we come to consider a model made from a connected network of polygonal facets; then we have a bit of a dilemma! A network of connected facets cannot be separated into individual faces. No matter how hard we try (however many times we subdivide), there is no way to separate the polygons so that each box in the subdivision contains only one polygon: the fact that the polygons are connected is responsible for that. The best we can usually do is associate some polygons with more than one voxel simultaneously. We are going to have to accept that in practice a one-polygon-to-one-voxel goal is unattainable, so we must set a more realistic limit on the number of polygons we are prepared to accept in each voxel.

How close the renderer gets to the 1-to-1 ideal can be determined from the information reported in the status bar. The (F) value is the number of faces in the scene; the (Fc) value reports how many face assignments have been made to voxels. Thus if Fc = F we have a perfect 1-to-1 set-up and the ray tracing should be very fast. The (F/V) value is the maximum number of faces assigned to any one voxel; ideally this should be equal to the desired value (which is set by the Threshold parameter). In most cases we rarely have either Fc = F or F/V = Threshold; any scene in which Fc < 5F should be regarded as acceptable. In other scenes it is probably worth experimenting with the parameter settings.
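A quick worked example with purely illustrative numbers: if the status reports F = 20,000 faces and Fc = 32,000 face assignments, then Fc/F = 1.6, comfortably inside the Fc < 5F guideline, so the subdivision is doing its job. If instead Fc came out at 150,000, then Fc/F = 7.5, and it would be worth raising Polygons per Voxel or trying the other partitioning mode (Step 12) to cut down the duplicated assignments.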

Step 12: Setting the Ray Tracing Parameters

OpenFX provides two alternative methods of partitioning the triangles into voxels. In 'Partition' mode NO additional polygons are created in the model as the faces are divided up among the voxels; instead, each voxel records a list of every face that passes through its volume. When a voxel is divided (the dividing plane being chosen at the boundary of at least one face), faces lying completely on one side of the dividing plane are assigned to the child on that side, and a polygon that crosses the dividing plane is assigned to both sides. (By choosing the dividing plane so that it passes through a vertex we can be sure that eventually the voxels at the bottom of the tree will contain only the specified number of polygons.) In 'Split' mode the polygons in the model are split along the boundary of the voxel: those that lie on one side of the dividing plane are assigned to the child voxel on that side, and those on the other side are assigned to the voxel on that side. In both cases the parent voxel retains NO polygons; all polygons lie in a voxel at the bottom of the hierarchy.

Note: To make sure that no polygons are missed, and that neither too many nor too few voxels are made, OpenFX uses integer coordinates internally for its vertex representations while carrying out the optimisation.
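The difference between the two modes comes down to what happens to a face that straddles the dividing plane. Here is a minimal sketch of the Partition-mode decision in Python; the function and its name are our own illustration, not OpenFX code:

```python
def assign_partition(tri, axis, plane):
    """Partition mode: no new geometry is created. The triangle is listed
    in the child below the plane, the child above it, or BOTH if it
    straddles the dividing plane."""
    below = any(v[axis] <= plane for v in tri)
    above = any(v[axis] >= plane for v in tri)
    return below, above    # (goes in lower child, goes in upper child)

# Example: a triangle spanning x = 1.0 lands in both children.
lo_side, hi_side = assign_partition(
    ((0.0, 0.0, 0.0), (2.0, 0.0, 0.0), (1.0, 1.0, 0.0)), 0, 1.0)
# lo_side == True and hi_side == True
```

In Split mode the straddling triangle would instead be clipped along the plane, with each resulting piece handed to exactly one child, so no face is ever listed twice - at the cost of creating extra polygons.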

This switch parameter gives you the option to choose which of these methods of model partitioning best suits your scene. The Recursive Depth setting tells the renderer the maximum number of subdivisions the octree may use: OpenFX won't go on subdividing beyond this depth even if the other settings would allow or require it. This parameter is needed because in some scenes too much memory would be needed and the optimisation step would take too long to complete. Each additional subdivision (where required) takes 8 times as much memory and 8 times as long to compute; not all voxels will require subdivision, but the number of voxels can become enormous. For example, the default setting of 8 subdivisions could require more than 16 million voxels, and twelve subdivisions more than 68 billion voxels, about 6.8 TB of memory.

The Polygons per Voxel setting is the desired number of faces we would like to see assigned to each voxel (all this is discussed above). Ideally one or two, but for larger models 9 or 10, or even 100 to 150, are possible values. It just depends on the number of polygons in the scene, and only experimentation will tell whether you are heading in the right direction. Again, the (F/V) and (Fc) values reported in the status line should help you refine your settings.
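The growth figures quoted above are simply powers of 8; a two-line check (illustrative worst case, where every voxel at every level subdivides):

```python
# Worst-case voxel count after d subdivisions is 8**d.
for d in (8, 10, 12):
    print(d, f"{8**d:,}")
# 8  -> 16,777,216       (the "more than 16 million" above)
# 10 -> 1,073,741,824
# 12 -> 68,719,476,736   (the "more than 68 billion" above)
```

At roughly 100 bytes of bookkeeping per voxel, 68.7 billion voxels is the ~6.8 TB mentioned above, which is why the Recursive Depth limit exists.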

Tutorial written by Stuart
