Welcome to SynthEyes, our camera-tracking system and match-moving application. With SynthEyes, you can process your film or video shot to determine the motion and field of view of the camera taking the shot, or track an object moving within it. You can combine feature locations to produce an object mesh, then extract a texture map for that object. After reviewing the match-moved shot, inserting some sample 3-D objects, and viewing the RAM playback, you can output the camera and/or object motions to any of a large number of popular 3-D animation programs. Once in your 3-D animation program, you can add 3-D effects to the live-action shot, ranging from invisible alterations, such as virtual set extensions, to virtual product placements, to over-the-top creature insertions. With the right setup, you can capture the motion of an actor's body or face.

SynthEyes can also help you stabilize your shots, taking full advantage of its two- and three-dimensional tracking capabilities to help you generate rock-solid moving-camera shots. The comprehensive stabilization feature set gives you full directorial control over stabilization. If you work with film images, especially for TV, the stabilization system can also help you avoid some common mistakes in film workflows that compromise 3-D tracking, rendering, and effects work. And if you are working on a 3-D stereo movie (two cameras), SynthEyes can not only help you add 3-D effects, but it can help you match the two shots together to reduce eyestrain.

Unless you are using the demo version, you will need to follow the registration and authorization procedure described towards the end of this document and in the online tutorial. To help provide the best user experience, SynthEyes has a Customer Care center with automatic updates, messages from the factory, feature suggestions, a forum, and more. Be sure to take advantage of these capabilities, available from SynthEyes's help menu.
Be sure to check out the many video tutorials on the web site. We know many of you are visual learners, and the subjects covered in this manual are inherently 3-D, so a quick look at the tutorials, and playing along in the program as you read, can make the text much more accessible.

DON'T PANIC. SynthEyes is designed to integrate into the most sophisticated visual effects workflows on the planet. So it is only natural that you will probably see some things in this manual you do not understand. Even if you don't understand 3-D color LUTs or the half floating-point image format, you'll be fine! (hello, google!) We have worked hard to make sure that when you do not need that functionality, everything still works, nice and simple. As you learn more and do more, you'll likely discover that when you want to do something, some control that hadn't made sense to you before will suddenly be just what you need. So jump on in, the water's fine!
Contents
Quick Start: Automatic Tracking
Quick Start: Supervised Tracking
Quick Start: Stabilization
Shooting Requirements for 3-D Effects
Basic Operation
Opening the Shot
Automatic Tracking
Supervised Tracking
Fine-Tuning the Trackers
Checking the Trackers
Nodal Tripod-Mode Shots
Lenses and Distortion
Running the 3-D Solver
3-D Review
Cleaning Up Trackers Quickly
Setting Up a Coordinate System
Zero-Weighted Trackers
Perspective Window
Exporting to Your Animation Package
Building Meshes
Texture Extraction
Optimizing for Real-Time Playback
Troubleshooting
Combining Automatic and Supervised Tracking
Stabilization
Rotoscoping and Alpha-Channel Mattes
Object Tracking
Joint Camera and Object Tracking
Multi-Shot Tracking
Stereo Movies
Motion Capture and Face Tracking
Finding Light Positions
Curve Tracking and Analysis in 3-D
Merging Files and Tracks
Batch File Processing
Reference Material:
System Requirements
Installation and Registration
Customer Care Features and Automatic Update
Menu Reference
Control Panel Reference
Additional Dialogs Reference
Viewport Features Reference
Perspective Window Reference
Overview of Standard Tool Scripts
Preferences and Scene Settings Reference
Keyboard Reference
Viewport Layout Manager
Support
Start SynthEyes from the shortcut on your desktop, and on the main menu select File/New or File/Import/Shot. In the file-selection dialog that opens, select the first frame, FLYOVER0000.JPG, of the shot from the folder you have downloaded it to. The shot settings panel will appear. Screenshots in this manual are from a PC with the light-colored user interface option so that they print better; the OS X and/or dark-colored interfaces are slightly different in appearance but not function.
On the summary panel, click Full Automatic. (An additional step or two will later let you automatically fine-tune the trackers to further improve accuracy.)
Once you see "Finished solving," hit OK to close this final dialog box. SynthEyes will switch to a quad-viewport configuration (experienced users can disable switching with a preferences setting). Each tracker now has a small x (tracker point) to show its location in 3-D space. You can zoom in on any of the views, including the camera view, using the middle mouse scroll and middle-mouse pan to see more detail. (You can also right-drag for a smooth zoom.)

Mac OS X: SynthEyes uses middle-mouse-drag to pan, but OS X may display the Dashboard instead. To fix that, open the Exposé and Spaces controls in the OS X System Preferences panel, and change the middle mouse button from Dashboard to "-" (nothing). You'll still be able to access Dashboard via F12.

The status line will show the zoom ratio in the camera view, or the world-units size of any 3-D viewport. You can Control-HOME to re-center all four viewports. See the Window Viewport Reference for more such information.
In the main viewports, look at the Left view in the lower left quadrant. The green marks show the 3-D location of features that SynthEyes located. In the Left view, they fall on a diagonal running top-left to lower-right. Since most of these points are on the ground in the scene, we'd like them to fall on the ground plane of the animation environment. SynthEyes provides tools to let you eyeball it into place, but there's a much better way. Switch to the Coordinate System control panel using the toolbar button or the Window menu (or F8). The coordinate system panel is used to align and scale the solved scenes. Refer to the picture below for the location of the 3 trackers labeled in red. We will precisely align these 3 trackers to become the ground plane. Note that the details of which trackers are present may change somewhat from version to version.
Begin by clicking the *3 button at top right of the coordinate system panel. Next, click on the tracker labeled 1 (above) in the viewport. On the control panel, the tracker will automatically change from Unlocked to Origin. In this example, we will use trackers (1 and 2) aligned front to back. The coordinate system mini-wizard (*3 button) handles points aligned left to right or front to back. By default, it is at LR, so click the *3 button, which currently reads LR, to change it to FB. Click the tracker labeled 2, causing it to change to Lock Point. The Y field above it will change to 20. The full-screen capture (above) showed SynthEyes right after completing this step. Select the tracker labeled 3, slightly right of center. It will change from Unlocked to On XY Plane (i.e. the ground plane).

Why are we doing all this? The choice of trackers to use, the overall size (determined by the 20 value above), and the choice of axes are arbitrary, up to you to make your subsequent effects easier. See Setting Up the Coordinate System for more details on why and how to set up a coordinate system. Note that SynthEyes's scene settings and preferences allow you to change how the axes are oriented to match other programs such as Maya or Lightwave: i.e. a Z-up or Y-up mode. This manual's examples are in Z-up mode unless otherwise noted; the corresponding choices for one of the Y-up modes should be fairly evident.

After you click the third tracker, you will be prompted (Apply coordinate system?) to determine whether the scene should be re-solved to apply your new settings. Select Yes. Hit Go! and SynthEyes will recalculate the tracker and camera positions in a flash. To do this SynthEyes changed the solving mode (on the Solver control panel) from Automatic to Refine, so that it will update the match-move, rather than recalculating from scratch. Afterwards, the 3 trackers will be flat on the ground plane (XY plane) and the camera path adjusted to match, as shown (after control-home):
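The effect of these three tracker constraints can be pictured with a little vector math. The sketch below is an illustrative reimplementation with hypothetical names, not SynthEyes's actual solver code: it builds the transform that sends the origin tracker to (0,0,0), puts the front-to-back tracker on the +Y axis at the chosen distance (the 20 above), and drops the third tracker onto the Z=0 ground plane.

```python
import math

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])
def norm(a): return math.sqrt(dot(a, a))
def unit(a):
    n = norm(a)
    return tuple(x / n for x in a)

def align_to_ground(origin, fb_point, plane_point, fb_distance=20.0):
    """Return a function mapping solved 3-D points into a coordinate system
    where `origin` lands at (0,0,0), `fb_point` sits on +Y at y=fb_distance,
    and `plane_point` lies on the Z=0 ground plane (Z-up convention).
    Hypothetical sketch only -- not SynthEyes's actual algorithm."""
    fb = sub(fb_point, origin)
    scale = fb_distance / norm(fb)      # overall size set by the Y=20 lock
    y = unit(fb)                        # new front-to-back (+Y) axis
    v = sub(plane_point, origin)
    z = unit(cross(y, v))               # up axis, perpendicular to both
                                        # (sign depends on which side v lies)
    x = cross(y, z)                     # completes the right-handed frame
    def transform(p):
        d = sub(p, origin)
        return tuple(scale * dot(axis, d) for axis in (x, y, z))
    return transform
```

Any three non-collinear, non-bunched points determine such a frame, which is why the next paragraph's caveat about tracker choice matters.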
You could have selected any three points to define the coordinate system this way, as long as they aren't in a straight line or all bunched together. The points you select should be based on how you want the scene to line up in your animation package.

Switch to the 3-D control panel. Select the magic wand tool on the panel. Change the mesh type drop-down at left of the wand to create a Pyramid instead of a Box. Zoom in the Top viewport window so the tracker points are spread out. In the Top viewport, drag out the base of a rectangular pyramid. Then click again and drag to set its height. Use the move, rotate, and scale tools to make the box into a small pyramid located in the vacant field. Click on the color swatch under the wand, and select a sandy pyramid color. Click somewhere empty in the viewport to unselect the pyramid (bright red causes lag on LCDs). On the View menu, turn off Show Trackers and Show 3-D Points, and switch to the camera viewport. You can do that by changing the selector on the toolbar to Camera, or by clicking the tab at top right of the camera view itself.
Hit Play. Note that there will appear to be some jitter because drawing is not anti-aliased. It won't be present when you render in your 3-D application. SynthEyes is not intended to be a rendering or modeling system; it operates in conjunction with a separate 3-D animation application. (You can create antialiased preview movies from the Perspective window.) Hit Stop. If you see a delayed reaction to the stop button, open Edit/Preferences and turn on the Enhance Tablet Responsiveness checkbox. Rewind to the beginning of the shot (say with shift-A).

By far, the most common cause of sliding of an inserted object is that the object has not been placed at the right altitude over the imagery. You should compare the location of your insert to that of other nearby trackers, in 3-D, adding a tracker at key locations if necessary. You will also think you have sliding if you place a flat object onto a surface that is not truly flat. Normally we would place the pyramid more carefully.

To make a preview movie, switch to the Perspective window. Right-click and select Lock to Current Cam. Right-click again and select Preview Movie.
Click the button at upper right and change the saved file type to QuickTime Movie. Enter a file name for the output movie in QuickTime format, typically in a temporary scratch location. (If you don't have QuickTime installed, use one of the sequenced file types and later use SynthEyes as a video playback program.) Click on Compression Settings, and select Sorenson Video 3 at High Quality, 29.97 frames per second; leave the Key Frames checkbox on, and turn off the Limit data rate checkbox. Click OK to close the compression settings. Back on the Preview Movie Settings, turn off Show Grid, and hit Start. The preview movie will be produced and played back in the QuickTime Player.

You can export to your animation package at this time, from the File/Export menu item. Select the exporter for your animation package from the (long) menu list. SynthEyes will prompt for the export location and file name; by default, a file with the same name as the currently-open file (flyover in this case), but with an appropriate file extension, such as .ma for a Maya ASCII scene file. Note that the SynthEyes demo version does not export; you can get the exported version of this file for many target packages from the "Sample Files" section of the website (under the Learning tab).

This completes this initial example, which is the quickest, though not necessarily always the best, way to go. You'll notice that SynthEyes presents many additional views, controls, and displays for detecting and removing tracking glitches, navigating in 3-D, handling temporarily obscured trackers, moving objects and multiple shots, etc. In particular, after auto-tracking and before exporting, you should always check up on the trackers, especially using Clean Up Trackers and the graph editor, to correct any glitches in tracking (which can result in little glitches in the final output).
Tip: You can create a tracker at any time by holding down the C key and left-clicking in the camera view.

Begin creating trackers at the locations in the image below, by putting the cursor over the location, pushing and holding the left mouse button, and adjusting the tracker position while the mouse button is down, looking at the tracker insides window on the control panel to put the point of the feature at the center. Look for distinctive white or black spots in the indicated locations.

Tip: the name of each supervised tracker will be shown in the camera, perspective, and 3-D viewports, by default. You can have none of the names shown, or the names shown for all trackers, using the controls on the View menu.

After creating the first tracker, click the green swatch under the mini tracker view window and change the color to a bright yellow to be more visible. Or, to do this after creating trackers, control-A to select them all, then click the swatch.

Tip: there are two layouts for the tracking control panel, a small one recommended for laptops, and a larger one recommended for high-resolution displays, selected by the Wider tracker-view panel checkbox in the preferences. In between, take your pick!
Once the eleven trackers are placed, type control-A (command-A on Mac) to select all the trackers. On the tracker control panel, find the Key Every spinner. Raise it from zero to 20. This says you wish to automatically re-key the tracker every 20 frames to accommodate changes in the pattern.

Tip: you should have at least six trackers visible on each frame of the shot, with substantial amounts of overlap if they do not last through the shot. For good results, keep a dozen or so visible at all times, spread out throughout the image, not bunched together. If the shot moves around a lot, you may need many more trackers to maintain satisfactory coverage.

Hit the Play button, and SynthEyes will track through the entire shot.
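The six-trackers-per-frame rule of thumb can be checked mechanically. Here is a small sketch (a hypothetical helper for illustration, not a SynthEyes feature) that verifies coverage given each tracker's lifetime:

```python
def coverage_ok(tracker_spans, num_frames, minimum=6):
    """Check that at least `minimum` trackers are visible on every frame.
    `tracker_spans` holds each tracker's (first_frame, last_frame) lifetime,
    inclusive. Returns (ok, list_of_underpopulated_frames).
    Hypothetical helper, just to illustrate the coverage rule of thumb."""
    counts = [0] * num_frames
    for first, last in tracker_spans:
        for f in range(max(first, 0), min(last, num_frames - 1) + 1):
            counts[f] += 1
    bad = [f for f, c in enumerate(counts) if c < minimum]
    return len(bad) == 0, bad
```

For example, six trackers spanning the whole shot pass, while a shot whose trackers all die at the midpoint with too few replacements would report the uncovered frames.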
On this example, the trackers should stay on their features throughout the entire shot without further intervention. You will notice that one has gone offscreen and been shut down automatically. (Advanced feature hint: when the image has black edges, you can adjust the Region-of-interest on the image preprocessing panel to save storage and ensure that the trackers turn off when they reach the out-of-bounds portion.) If necessary, you can reposition a tracker on any frame, setting a key and teaching the tracker to follow the image from that location subsequently. After tracking, with all the trackers still selected (or hit Control/Command-A), click the Lock button to lock them, so they will not re-track as you play around (or get messed up).
This is a similar guide picture to that from auto-tracking, though the trackers are in different locations. Click the *3 button, then click on tracker #1. Click the *3 button, now reading LR, to change it to FB. Click tracker #2. Click tracker #3. Now switch to the Solve control panel. Hit the Go! button. A display panel will pop up, and after about 3 seconds, it should say Finished solving. Hit OK to close the popup. You could add some objects from the 3-D panel at this time, as in the automatic tracking example.

You can add some additional trackers now to increase accuracy. Use the end-of-shot button (or shift-F) to go to the end of the shot, and change to backward tracking by clicking the big direction button on the main toolbar; it will change to the backwards direction. On the Tracker control panel, turn on the Create button.

Hint: When you play the scene, SynthEyes updates the tracking data for trackers that are set to the same direction as the playback itself.

Create additional trackers spread through the scene, for example only on white spots. Switch their tracker type from a match tracker to a white-spot tracker.
Hit Play to track them out. The tracker on the rock pile gets off-track in the middle; you can correct it by dragging and re-tracking, but for this one it will be easiest to simply keep it as a match-type tracker. Scrub through the shot to verify they have stayed on track, then control-A to select them all, and turn on the lock.
Switch to the Solver control panel, change the top drop-down box, the solving mode, from Automatic to Refine, and hit Go! again. Go to the 3-D Panel, click on the create wand, change the object type from Box (or Pyramid) to Earthling, then drag in the Top view to place an earthling to menace this setting. Click a second time to set its size. In the following example, a tracker on the concrete pad was used to adjust the height of the Earthling statue to prevent sliding. You can use pan-to-follow mode (hit the 5 key to turn it on or off) to zoom in on the tracker (and nearby feet) to monitor their positioning as you scrub. The final scene is available from the web site as flyover_sup.sni.
Typically, supervised tracking is performed more carefully, tracking a single tracker at a time and monitoring it directly. SynthEyes generates keys every 20 frames with the settings shown; normally such automatically-generated keyframes are adjusted manually to prevent drifting. If you look at the individual trackers in this example, you will see that some have drifted by the end of the shot. Normally they are corrected, hence the term supervised tracking. For more detailed information on supervised tracking, read the manual's later write-up of supervised tracking, and see the online tutorial Care and Feeding of Supervised Trackers.
Click the Full Automatic button on the summary panel to track and solve the shot. If we wanted, we could track without solving, and stick with 2-D tracks, but we'll use the more stable and useful 3-D results here. Select the Shot/Image Preparation menu item (or hit the P key). In the image prep viewport, drag a lasso around the half-dozen trackers in the field near the parking lot at left (see the Lasso controls on the Edit menu for rectangular lassos). We could stabilize using all the trackers, but for illustration we'll stabilize this particular group, which would be typical if we were adding a building into the field. Click the stabilization tab, change the Translation stabilization axis drop-down to Peg, and the Rotation drop-down to Filter. Reduce the Cut Frequency spinner to 0.5 Hz. This will attenuate rotation instability, without eliminating it. You should have something like this:
The image prep window is showing the stabilized output, and large black bands are present at the bottom and left of the image, because the image has been shifted (in a 3-D way) so that it will be stable. To eliminate the bands, we must effectively zoom in a bit, expanding the pixels. Hit the Auto-Scale button and that is done, expanding by almost 30% and eliminating the black bars. This expansion is what reduces image quality, and it should always be minimized to the extent possible.

Use the horizontal spinner to the right of the frame number at bottom center to scrub through the shot. The shot is stabilized around the purple point of interest at left center. You can see some remaining rotation. You may not always want to make a shot completely stone solid; a little motion gives it some life. In this case, merely attenuating the jitter frequency becomes ineffective because the shot is not that long.

To better show what we're going to do next, click the Final button at right, turning it to Padded mode. Increase the Margin spinner, below it, to 0.125. Instead of showing the final image, we're showing where the final image (the red outline) is coming from within the original image. Scrub through the shot a little, then go to the end (frame 178). Now, change the Rotation mode to Peg also. Instead of low-pass-filtering the rotation, we have locked the original rotation in place for the length of the shot. But now, by the end of the shot the red rectangle has gone well off the original imagery. If you temporarily click Padded to get back to the Final image, there are two large black missing portions.
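To picture what the Filter mode's cut frequency does, here is a rough stand-in: a first-order exponential smoother, not SynthEyes's actual filter, applied to a rotation channel sampled at the shot's frame rate. Motion slower than the cut frequency passes through; faster jitter is attenuated.

```python
import math

def lowpass(track, cut_hz, fps=29.97):
    """First-order low-pass filter over a per-frame track of values.
    The time constant is derived from the cut frequency, so lowering
    cut_hz smooths more aggressively. Illustrative sketch only --
    not SynthEyes's actual stabilization filter."""
    rc = 1.0 / (2.0 * math.pi * cut_hz)   # time constant from cut frequency
    dt = 1.0 / fps                        # seconds per frame
    alpha = dt / (rc + dt)                # per-frame blending factor
    out = [track[0]]
    for v in track[1:]:
        out.append(out[-1] + alpha * (v - out[-1]))
    return out
```

A perfectly steady track comes through unchanged, while frame-to-frame jitter is flattened toward its average; this is why, on a short shot, lowering the cut frequency eventually stops helping and Peg mode becomes the better tool.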
Scrub through the shot, and you'll see that the red rectangle stays completely within the source image, which is good: there won't be any missing parts. In fact, you can Auto-Scale again and drop the zoom to about 27%. Click Padded to switch back to the Final display mode, and scrub through to verify the shot again. Note that the black and white dashed box is the boundary of the original image in Final mode.

You can reduce the slight blurring caused by resampling the image to zoom it: click the Rez tab, and switch the Interpolation method from Bi-Linear to Lanczos-2. You can see the effect of this especially in the parking lot.

Tip: the interpolation method gives you a trade-off between a sharper image and more artifacts, especially if the image is noisy. Bi-Linear produces a softer image with fewer artifacts, Mitchell-Netravali is a little sharper, and then come Lanczos-2 and, sharpest of all, Lanczos-3.
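For the curious, these interpolation names refer to standard resampling kernels. Assuming the textbook definitions (SynthEyes's exact implementation may differ), a 1-D Lanczos resampler looks like this:

```python
import math

def lanczos(x, a):
    """Lanczos-a kernel: sinc(x) windowed by sinc(x/a), zero for |x| >= a.
    Larger a (2 vs 3) preserves more sharpness but can ring on noisy images."""
    if x == 0.0:
        return 1.0
    if abs(x) >= a:
        return 0.0
    px = math.pi * x
    return a * math.sin(px) * math.sin(px / a) / (px * px)

def resample_1d(samples, t, a=2):
    """Evaluate a 1-D signal at fractional position t via Lanczos-a,
    normalizing the weights near the boundaries."""
    lo = int(math.floor(t)) - a + 1
    total = wsum = 0.0
    for i in range(lo, lo + 2 * a):
        if 0 <= i < len(samples):
            w = lanczos(t - i, a)
            total += w * samples[i]
            wsum += w
    return total / wsum if wsum else 0.0
```

Bi-linear interpolation uses a simple tent kernel instead, which is why it is softer but artifact-free; the negative lobes of the Lanczos sinc are what restore sharpness (and what can ring).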
By adding the hand-animated directorial component of the stabilization, we were able to achieve a very nice result, without requiring an excessive amount of zoom. [By intentionally moving the point of interest, the required zoom can be reduced further, to under 15%.]
If you look carefully at the shot, you will notice some occasional strangeness where things seem to go out of focus temporarily. This is the motion blur due to the camera's motion during shooting. Important: To minimize motion blur when shooting footage that will be stabilized, keep the camera's shutter time as small as possible (a small shutter angle for film cameras).

Doubtless you would now like to save the sequence out for later compositing with final effects (or maybe a stabilized shot is all you needed). Hit P to bring the image prep dialog back up, and select the Output tab. Click the Save Sequence button.
Click the button to select the output file type and name. Note that for image sequences, you should include the number of zeroes and starting frame number that you want in the first image sequence file name: seq001 or seq0000, for example. After setting any compression options, hit Start, and the sequence will be saved.

There are a number of things which have happened behind the scenes during this quick start, where SynthEyes has taken advantage of the 3-D solve's field of view and many trackers to produce better results than traditional stabilizing software. SynthEyes has plenty of additional controls affording you directorial control, and the ability to combine some workflow operations that normally would be separate, improving final image quality in the process. These are described later in the Stabilization section of the manual.
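The padding convention can be sketched in a few lines. Given the first frame's name, the rest of the sequence names follow; this is a hypothetical helper, just to illustrate the seq0000 naming scheme, not part of SynthEyes:

```python
import os
import re

def sequence_names(first_name, count):
    """Expand a first-frame name like 'seq0000.png' into the full list of
    frame names, preserving the digit padding and starting frame number."""
    stem, ext = os.path.splitext(first_name)
    m = re.search(r'(\d+)$', stem)
    if not m:
        raise ValueError("first frame name must end in a frame number")
    start = int(m.group(1))          # starting frame number, e.g. 0 or 1
    pad = len(m.group(1))            # digit count sets the zero padding
    prefix = stem[:m.start(1)]
    return [f"{prefix}{start + i:0{pad}d}{ext}" for i in range(count)]
```

So "seq001" yields seq001, seq002, ... while "seq0000" yields seq0000, seq0001, ..., exactly as the file-name convention above implies.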
Look in the Top and Left views and notice how all of the trackers are located a fixed distance away from the camera. SynthEyes must do that because in a tripod shot, there is no perspective available to estimate the distance. You can easily insert 3-D objects and get them to stick, but aligning them will be more difficult. You can use SynthEyes's single-frame alignment capability to help do that. For illustration now, go to the 3-D control panel and use the create tool to create a cylinder, box, or earthling in the top view. No matter where you create it, it will stick if you scrub through the shot. You can reposition it using the other 3-D tools, move, rotate, and scale, however you like. You can change the number of segments in the primitive meshes with the # button and spinners on the 3-D panel, immediately after creation or later. Once you finish playing, delete all the meshes you have created.
Hit Play. You now have much smoother camera work, without being overly robotic. Use the output tab on the image preprocessor to write the sequence back out if you wish.
If you want to insert an object into the stabilized shot, you need to update the trackers and then the camera solution. On the Image Preprocessor's Output tab, click Apply to Trkers once. Close the image preprocessor, then go to the Solver panel, make sure the solver is in Refine Tripod mode, and click Go!
For this second group of cases, if the camera spins around on a tripod, it is IMPOSSIBLE, even in theory, to determine how far away anything is. This is not a bug. SynthEyes's tripod tracking mode will help you insert 3-D objects in such tripod shots anyway. The axis alignment system will help you place 3-D objects in the scene correctly. It can also solve pure lock-off shots. If the camera was on a tripod, but shoots a single moving object, such as a bus driving by, you may be able to recover the camera pan/tilt plus the 3-D motion of the bus relative to the camera. This would let you insert a beast clawing into the top of the bus, for example. For visual examples, see the Tutorials section of our web site.
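The impossibility claim is easy to verify with a toy pinhole camera: if the camera only rotates about its own center, scaling every point's distance along its ray leaves the image unchanged, so no depth information ever reaches the tracker. A minimal demonstration (illustrative math, not SynthEyes code):

```python
import math

def project(point, pan_deg, focal=1.0):
    """Pinhole camera at the origin, rotated by pan_deg about the vertical
    axis. Returns the 2-D image coordinates of a 3-D point (x, y, z)."""
    x, y, z = point
    c = math.cos(math.radians(pan_deg))
    s = math.sin(math.radians(pan_deg))
    cx = c * x - s * z                 # rotate the point into camera space
    cz = s * x + c * z
    return (focal * cx / cz, focal * y / cz)

# A point, and the same point 100x farther along the same ray:
near = (1.0, 0.5, 4.0)
far = tuple(100.0 * v for v in near)

# Under any pan, both project to the same pixel -- depth is unobservable.
for pan in (0.0, 5.0, 10.0):
    pn, pf = project(near, pan), project(far, pan)
    assert abs(pn[0] - pf[0]) < 1e-12 and abs(pn[1] - pf[1]) < 1e-12
```

Translate the camera even slightly, though, and the near and far points separate on screen; that parallax is exactly what the ordinary (non-tripod) solver uses to recover depth.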
Basic Operation
Before describing the match-moving process in more detail, here is an overview of the elements of the user interface, beginning with an annotated image. Details on each element can be found in the reference sections.
Color Scheme
SynthEyes offers two default color schemes, a light version (shown) and a dark version. The light version generally matches the operating system defaults (and so is somewhat different on a PC and Mac), intended for a brighter office-style environment. The darker user-interface scheme matches programs such as Combustion, Fusion, Shake, etc., which are designed to be used in a darker studio environment. To switch schemes, select the Edit/Reset Preferences menu item and you will be given a choice. You can change virtually all of the colors in the user interface individually, if you like. For example, you can change the default tracker color from green to blue, if you are constantly handling green-screen shots. See Keeping Track of the Trackers for more information.
Tool Bar
The tool bar runs across the top of the application, including normal Windows icons, buttons to switch among the control panels, and several viewport controls. SynthEyes includes full undo and redo support. Three buttons at right control Customer Care Center functions such as messages and upgrades.
Control Panels
At any time, one of the control panels is displayed in the control panel area, as selected by the toolbar buttons or some menu items. The control panel can be floated by the Window/Float One Panel menu item. You can use a control panel with any viewport.

SynthEyes uses control panels as a way to organize all the many individual controls. Each control panel corresponds to a particular task, and while that control panel is open, the mouse actions in the viewports, and the keyboard accelerator keys, adapt to help accomplish that task. The buttons on the toolbar are arranged so that you can start at the left and work to the right. This approach has been used in a variety of older applications, and is making a comeback in new applications as well because of its organizational ability.

By contrast, Adobe programs such as Photoshop and Illustrator have many different palettes that can appear and disappear individually. If you are more familiar with this style, or it is more convenient for a particular task, you can select the Window/Many Floating Panels option, and have any number of panels open at once. Keep in mind that only one panel is still primarily in charge, and there may be unwanted interactions between panels in some combinations. The primary panel is still marked in hot yellow, while the other panels are a cooler blue.
Note that a moving object can be active, but not selected, and vice versa. Similarly, a mesh object can be selected but not the edit mesh, and vice versa.
Play Bar
The play bar normally appears at the top of most control panel selections, and features play, stop, frame forward, etc. controls as well as the frame number display. Frames are numbered from 0 unless you adjust the preferences. You can move the playbar to the toolbar, if you have a high-resolution monitor, using a setting on the preferences panel. This is especially useful if you are using the Float Many Panels mode. You can also float the playbar by itself, using the Window/Float Playbar menu item.
Viewports
The main display area can show a single viewport, such as a Top or Camera View, or several independent viewports simultaneously as part of a layout, such as Quad. Viewports grab keyboard and mouse focus immediately when the mouse enters the field (except if you are entering text, see below), so you can move the mouse into a view and start clicking or mouse-wheel-zooming or using accelerator keys immediately.
Layouts
A layout consists of one or more viewports, shown simultaneously in a particular arrangement. Select a layout with the drop-down list on the toolbar. You can change any pane of a layout to a different type by clicking the tab just above the upper left corner of the pane, creating a Custom layout. The single Custom layout is changed each time you do this. A tab at top right of each pane brings it full size, or back if it already is. You can adjust the relative sizing of each pane by dragging the gutters between panes. To name your custom layouts and create different pane arrangements, use the layout manager (see the Window menu). Some viewport types can appear only once in a particular layout: you can't have two camera viewports in one layout.
Your layouts are stored in the SynthEyes file; you can also set up your own default configurations (preferences) using the layout manager.
Coordinate Systems
SynthEyes can operate in any of several different coordinate system alignments, such as Z up, Y up, or Y up left-handed (Lightwave). The coordinate axis setting is controlled from Edit/Scene Settings; the default setting is controlled from the Edit Preferences. The viewports show the directions of each coordinate axis: X in red, Y in green, Z in blue. One axis is out of the plane of the screen, and is labeled as t (towards) or a (away). For example, in the Top view in Z-up mode, the Z axis is labeled Zt.

SynthEyes automatically adjusts the scene and user interface when you change the coordinate system setting. If a point is at X/Y/Z = 0,0,10 in Z-up mode, then if you change to Y-up mode, the point will be at 0,10,0. Effectively, SynthEyes preserves the view from each direction: Top, Front, Left, etc., so that the view from each direction never changes as you change the coordinate system setting. The axes shift, and with them the coordinates of the points and cameras. Consequently, you can change the scene coordinate axis setting whenever you like, and some exporters do it temporarily to match the target application.
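The view-preserving remapping described above can be written down directly. Assuming the usual right-handed conventions (X right, Y forward, Z up, versus Maya-style X right, Y up, Z toward the viewer), a sketch of the conversion, not SynthEyes internals:

```python
def zup_to_yup(p):
    """Remap a Z-up point (X right, Y forward, Z up) into Y-up coordinates
    (X right, Y up, Z toward the viewer). The old up axis becomes the new
    Y, and the old forward axis becomes -Z, keeping the frame right-handed.
    Illustrative of the view-preserving convention, not SynthEyes code."""
    x, y, z = p
    return (x, z, -y)
```

This reproduces the example in the text: the Z-up point (0,0,10) becomes (0,10,0) in Y-up mode, and the Top, Front, and Left views are unchanged.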
Spinners
Spinners are the plus/minus button things next to the numeric edit fields. You can drag upwards and downwards from within the spinner to rapidly adjust the value, or click the plus or minus to change a little at a time.

The text field of a spinner has some additional processing to make experimentation easier. As you enter a new value for a spinner, the keystrokes do not take effect immediately, as that tends to make things fly around inconveniently. When you have finished entering a value, hit the enter key. This will enter the value into SynthEyes, and re-select the entire field. This enables you to experimentally enter several different values for a parameter without having to re-select the text field with the mouse each time. To exit back to normal mouse-driven keyboard focus, simply hit the enter key a second time, without changing the value at all. This behavior seems complicated to explain but is simple and efficient in practice.

Spinners show keyed frames with a red outline. You can remove a key or reset a spinner to a default or initial value by right-clicking it. If you shift-right-click a key, all following keys are truncated. If you control-right-click a key, all keys are removed from the track.
Tooltips
Tooltips are helpful little boxes of text that pop up when you put the mouse over an item for a little while. There are tooltips for the controls, to help explain their function, and tooltips in the viewports to identify tracker and object names. The tooltip of a tracker has a background color that shows whether it is an automatically-generated tracker (lead gray), or supervised tracker (gold).
Status Line
Some mouse operations display current position information on the status line at the bottom of the overall SynthEyes window, depending on which window the mouse is in and whether it is dragging. For example, zooming in the camera view shows a relative zoom percentage, while zooming in a 3-D viewport shows the viewport's width and height in 3-D units.
Keyboard Accelerators
SynthEyes offers keyboard accelerators for menu entries, various control buttons, and Sizzle scripts, as described in the keyboard manager reference section. You can change the keyboard accelerators from the keyboard manager, initiated with Edit/Edit Keyboard Map. Note that the tracker-related commands will work only from within the camera view, so that you do not inadvertently corrupt a tracker. On a PC, you can also use Windows's ALT-key acceleration to access the menu bar, such as ALT-F-X to exit.
Menus
When you see something like Shot/Edit Shot in this manual, it is referring to the Edit Shot menu item within the Shot section of the main menu.
SynthEyes also has right-click menus that appear when you click the right mouse button within a viewport. The menu that appears will depend on the viewport you click in. The menus also show the keyboard equivalent of each menu item, if one is defined.
Click-on/Click-off Mode
Tracking can involve substantial sustained effort by your hands and wrists, so proper ergonomics are important to your workstation setup, and you should take regular breaks. As another potential aid, SynthEyes offers click-on/click-off mode, which replaces the usual dragging of items with a click-on/move/click-off approach. In this mode, you do not have to hold the mouse buttons down as much, especially while moving, so there should be less strain (though we cannot offer a medical opinion on this; use at your own risk and discretion). You can set click-on/click-off mode as a preference, and can switch it on and off whenever convenient from the Window menu. Click-on/click-off mode affects only the camera view, mini-tracker view, 3-D viewports, perspective window, and spinners, and affects only the left and middle mouse buttons, never the right. This captures the common needs without requiring an excess of clicking in other scenarios.
Scripts
SynthEyes has a scripting language, Sizzle, and uses Sizzle scripts to implement exporters, some importers, and tool functions. While many scripts are supplied with SynthEyes, you can change them as you see fit, or write new ones to interface to your studio workflow. You can find the importers on the File/Importers menu, exporters on the File/Exporters menu, and tool scripts on the main Script menu. On your machine, scripts are stored in two places: a central folder for all SynthEyes users, and a personal folder for your own scripts. Two menu items at the top of the Script menu will quickly open either folder. SynthEyes mirrors the folder structure to produce a matching sub-menu structure. You can create your own My Scripts folder in your personal area and place all your own scripts there, to be able to quickly find your scripts and distinguish them from the standard system scripts. Similarly, a studio might have an Our Shared Scripts folder in the shared SynthEyes scripts folder.
Script Bars
Script bars are small floating windows with a column of buttons, one for each script or menu item you would like to access quickly. Use the Script Bar Manager to create and modify the script bars, and the Script bar submenu of the Scripts menu to open them. Script bars are automatically re-opened and re-positioned each time you start SynthEyes, if they were open when SynthEyes last closed. Important: script bars are stored and deployed as small text files. When you create a new script bar, you must store it in either one of the system-wide script folders or your user script folder. To see the proper locations, use the Script/User script folder or Script/System script folder menu items to open the File Explorer or Finder to the corresponding folder. The Perspective view has its own kind of floating toolbars as well, though they are limited to existing built-in operations and there is no manager for them.
Start Frame, End Frame: the range of frames to be examined. You can adjust this from this panel, or by shift-dragging the end of the frame range in the time bar.
Stereo Off/Left/Right: sequences through the three choices to control the setup for stereo shots. Leave at Off for normal (monocular) shots; change to Left when opening the first (left) shot of a stereo pair. See the section on Stereoscopic shots for more information.
Frame rate: usually 24, 24.98, or 29.97 frames per second. NTSC is used in the US and Japan, PAL in Europe. Film is generally 24 fps, but you can use the spinner for over- or under-cranked shots or multimedia projects at other rates. Some software may have generated, or may require, the rounded 25 or 30 fps; SynthEyes does not care whether you use the exact or approximate values.
Interlacing: No for film or progressive-scan DV. Yes to stay at 25/30 fps, skipping every other field; this minimizes the amount of tracking required, with some loss of ability to track rapid jitter. Use Yes, But for the same thing, but keeping only the other (odd) field. Use Starting Odd or Starting Even for interlaced video, depending on the correct first field. Guessing is fine: once you have opened the shot, step through a few frames; if they go two steps forward, one back, select the Shot/Edit Shot menu item and correct the setting. Use Yes or No for source video compressed with a non-field-savvy codec such as sequenced JPEG.
After Loading
After you hit OK to load the shot, the image prefetch system begins bringing it into your computer's RAM for quick access. You can use the playbar and timebar to play and scrub through the shot. Note: image prefetch puts a severe load on your processor by design: it rushes to load everything as fast as possible, taking advantage of high-throughput devices such as RAID disks. However, if the footage is located on a low-bandwidth remote drive, prefetch may cause your machine to be temporarily unresponsive as the operating system tries to acquire the data. If you need to avoid this, turn on the Read 1f at a time option on the Shot menu; it is a sticky preference. If that does not help enough, turn off prefetch on the Shot menu, or turn off the prefetch preference to disable prefetch automatically at each startup. You can use the Image Preprocessing stage to help fit the imagery into RAM, as will be described shortly. Even if the shot does not fit in RAM, you can get RAM playback of portions of the shot using the little green and red playback markers in the timebar: drag them to the portion you want to loop. Sometimes you will want to open an entire shot, but track and solve only a portion of it. You can shift-drag the start or end of the shot in the timebar (you may want to middle-drag the whole timebar left or right first to see the boundary). Select the proper coordinate system type (for MAX, Maya, Lightwave, etc.) at this time. Adjust the scene settings (Edit/Edit Scene Settings), or the preference setting, if desired.
You can activate the image preprocessing panel either from the Open Shot dialog, or from the Shot menu directly. The individual controls of the image preprocessor are spread among several tabbed subpanels, much like the main SynthEyes window. These include Rez, Levels, Cropping, Stabilize, Lens, Adjust, Output, and ROI.
As you modify the image preprocessing controls, you can use the frame spinner and assorted buttons to move through the shot to verify that the settings are appropriate throughout it. Fetching and preprocessing the images can take a while, especially with film-resolution images. You can control whether or not the image updates as you change the frame# spinner, using the control button on the right-hand side of the image preprocessor. The image preprocessing engine affects the shots as they are read from disk, before they are stored in RAM for tracking and playback. The preprocessing engine can change the image resolution, aspect ratio, and overall geometry. Accordingly, you must take care if you change the image format: if you change the image geometry, you may need to use the Apply to Trackers button on the Output tab, or you will have to delete the trackers and redo them, since their positions will no longer match the image currently being supplied by the preprocessing engine. The image preprocessor allows you to create presets within a scene, so that you can use one preset for the entire scene, and a separate preset for a small region around a moving object, for example.
Image Adjustments
As mentioned, the image adjustments allow you to fix up the image a bit to make it easier for you and SynthEyes to see the features to be tracked. The preprocessor's image adjustments encompass 3-D LUTs, saturation and hue, level adjustments, and channel selection and/or bit depth. Rez Tab: you can change the processing and storage formats or reduce image resolution here to save memory. Floating point format provides the most accuracy, but takes much more time and space. Float processing with Half or 16-
Floating-Point Images
SynthEyes can handle floating-point images from EXR, TIFF, and DPX image formats. Floating-point images offer the greatest accuracy and dynamic range, at the expense of substantially greater memory requirements and processing time. The 64-bit SynthEyes version is recommended for handling floating-point images due to their large size. DPX images will offer the highest performance. Floating-point images may use 32-bit floats, or the 16-bit half format. The half format does not have as much dynamic range, but it is almost always enough for practical work, even with High-Dynamic-Range images. The good news is that half-floats are half the size, only 16 bits. The bad news is that it takes a substantial amount of time to translate between the half format and an 8-bit, 16-bit, or float format you can track or display. Accordingly, SynthEyes offers separate bit-depth selections for processing and for storage. If you need the extended range of a float (or 16-bit integer) format, you can use that for processing (especially gamma correction and 3-D LUTs) to reduce banding, then select a smaller storage format: Half, 16-bit, or 8-bit. But keep in mind that additional processing time will be required. Though a floating-point image (float or half) provides accuracy and dynamic range, to track or display it, it must be converted to a standard 8-bit or 16-bit form, albeit temporarily. To understand the necessary controls, here are a few details on how that is done (industry-wide). Eight- and sixteen-bit (unsigned) integers are normally considered to range from 0 to 255 or 65535. But to convert back and forth, the numbers are treated as spanning a nominal 0.0 to 1.0 floating-point range; floating-point values outside that range carry the under- and over-range (HDR) data.
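As a sketch of that industry-wide convention (an illustration, not SynthEyes's actual converter), here is how a single normalized sample would round-trip between floating point and 8-bit, with out-of-range HDR values clamped on the way down:

```python
# Standard normalization between float and 8-bit pixel values:
# integer 0..255 maps to float 0.0..1.0. HDR float values above 1.0
# (or below 0.0) must be clamped when converting down for tracking
# or display.

def float_to_8bit(v):
    """Convert a normalized float sample to an 8-bit integer,
    clamping under/over-range (HDR) values."""
    v = min(max(v, 0.0), 1.0)      # clamp out-of-range data
    return int(round(v * 255.0))

def eightbit_to_float(i):
    """Convert an 8-bit integer sample back to a normalized float."""
    return i / 255.0

print(float_to_8bit(0.5))       # 128
print(float_to_8bit(3.7))       # HDR value clamps to 255
print(eightbit_to_float(255))   # 1.0
```

The clamping step is where HDR information is lost in the temporary 8-/16-bit form, which is why SynthEyes exposes level controls to choose which part of the range survives the conversion.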
Minimizing Grain
The grain in film images can perturb tracking somewhat. Use the Blur setting on the image preparation panel to slightly filter the image, minimizing the grain. This tactic can be effective for compression artifacts as well. SynthEyes can stabilize the images, re-size them, or correct for lens distortion. As it does that, it interpolates between the existing pixels. There are several interpolation modes available. You can produce a sharper image when you are re-sampling using the more advanced modes, but you increase the grain as you do so.
Memory Reduction
It is much faster to track, and to check tracking, when the shot is entirely in the computer's RAM, as fetching each image from disk, and possibly decompressing it, takes an appreciable amount of time. This is especially true for film-resolution images, which take up more RAM and take longer to load from disk. SynthEyes offers several ways to control RAM consumption, ranging from blunt to scalpel-sharp. Starting from the basic Open Shot dialog, if your source images have 16-bit data, you can elect to reduce them to 8 bits for storage by unchecking the 16-bit checkbox, reducing memory by a factor of two. Of course, this doesn't help if the image is already 8-bit. If you have a 2K or 4K resolution film image, you might be able to track at a lower resolution. The DeRez control offers reduced image-resolution selections. If you reduce resolution by 1/2, the storage required drops to 1/4 the previous level, and a reduction by 1/4 reduces the storage to 1/16th the prior amount, since the resolution reduction affects both horizontal and vertical directions. Note that by reducing the incoming image resolution, your tracks will have a higher noise level, which may be unacceptable; this is your decision. If you can track using only a single channel, such as R, G, or luma, you obtain an easy factor-of-3 reduction in the storage required. The most precise storage-reduction tool is the Region Of Interest (ROI), which preserves only a moving portion of the image that you specify, and makes the rest black. The black portion does not require any RAM storage, so if the ROI is only 1/8th the width and height of the image, storage is reduced to 1/64th. The region of interest is very useful with object-type shots, such as tracking a face or head, a chestplate, or a car driving by, where the interesting part is comparatively small.
The ROI is also very useful in supervised tracking, where the ROI can be set up for a region of trackers; once that region is tracked, a different ROI can be configured for the next group. This saves time even though the next group will require an image-sequence reload. (See the section on presets, below, to be able to save such configurations.) The ROI is controlled by dragging it with the left mouse button in the Image Preprocessing dialog's viewport. Dragging the size-control box at the lower right of the ROI changes the ROI's size.
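The storage arithmetic above can be sketched as follows. This is illustrative Python, not SynthEyes internals; the shot dimensions and frame count are hypothetical:

```python
# Rough RAM estimate for a shot, showing how the reduction factors
# discussed above (DeRez, single channel, ROI) multiply together.

def shot_ram_bytes(width, height, frames, bytes_per_channel=1,
                   channels=3, derez=1.0, roi_fraction=1.0):
    """derez: linear resolution factor (0.5 = half resolution in each
    axis, so 1/4 the pixels); roi_fraction: fraction of each frame
    kept by the region of interest (the rest is black, costing no RAM)."""
    per_frame = (width * derez) * (height * derez) * channels * bytes_per_channel
    return int(per_frame * roi_fraction * frames)

full = shot_ram_bytes(2048, 1556, 300)              # 2K scan, RGB 8-bit: ~2.7 GB
half = shot_ram_bytes(2048, 1556, 300, derez=0.5)   # 1/2 res each axis: 1/4 the RAM
mono = shot_ram_bytes(2048, 1556, 300, channels=1)  # one channel: 1/3 the RAM
print(full, half, mono)
```

Because the factors multiply, combining them (say, half resolution plus a single channel) compounds the savings: 1/4 times 1/3 is 1/12th the original storage.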
Disabling Prefetch
SynthEyes reads your images into RAM using a sophisticated multi-threaded prefetch engine, which runs autonomously much of the time when nothing else is going on. If you have a smaller machine, or are trying to run renders in the background, you can turn off the Shot/Enable prefetch setting on the main menu.
Image Centering
The camera's optic axis is the point about which the image expands or contracts as objects move closer or further away. Lens distortion is also centered about this point. By convention of SynthEyes and most animation and compositing software, this point must fall at the exact center of the image. Usually, the exact optic center location does not greatly affect the 3-D solving results; for this reason, the optic center location is notoriously difficult to determine from tracking data without a laboratory-grade camera and lens calibration. Assuming that the optic axis falls in the center is good enough. There are two primary exceptions: when an image has been cropped off-center, or when the shot contains a lot of camera roll. If the camera rolls a lot, it would be wise to make sure the optic axis is centered. Images can be cropped off-center during the first stages of the editorial process (when a 4:3 image is cropped to a usable 16:9 window), or when a film camera places the optic axis to allow for a sound channel and there is none, or vice versa (none is allowed for, but there is one).
Automatic Tracking
Overall process
The automatic tracking process can be launched from the Summary panel (Full Automatic or Run Auto-tracker), by the batch file processor, or controlled manually. By breaking the overall process down into sub-steps, you can partially re-run it with different settings, saving time. Though normally you can launch the entire process with one click, the following write-up breaks it down for your education, and sometimes you will want to run or re-run the steps yourself. The automatic tracking process has four primary stages, as controlled by the Feature panel:
1. Finding potential trackable points, called blips
2. Linking blips together to form paths
3. Selecting some blip paths to convert to trackers
4. Running the solving process to find the 3-D coordinates of the trackers, as well as the camera path and field of view.
You can optionally include a Step 3.5: fine-tuning the trackers, which re-analyzes the trackers using supervised tracking techniques to improve their accuracy. (After the automatic tracking process runs, you will still be cleaning up trackers, setting up a coordinate system, and exporting, but those topics are discussed separately and are the same for automatic and supervised tracking.) Typically, blips are computed for the entire shot length with the Blips all frames button. They can be (re)computed for a particular range by adjusting the playback range and computing blips over just that range. Or, the blips may be computed for a single frame, to see what blips result before tracking all the frames, or when changing blip parameters. As the blips are calculated, they are linked to form paths from frame to frame. Finally, complete automatic tracking by clicking Peel All, which will select the best blip paths and create trackers for them. Only the blip paths of these trackers will be used for the final camera/object solution.
You can tweak the automatic tracking process using the controls on the Advanced Features panel, a floating dialog launched from the Feature control panel. You can delete bad automatically-generated trackers the same as you would a supervised tracker; convert specific blip paths to trackers; or add
additional supervised trackers. See Combining Automatic and Supervised Tracking for more information on this subject. If you wish to completely redo the automated tracking process, first click the Delete Leaden button to remove all automatic trackers (i.e., those with lead-gray tooltip backgrounds), then the Clear all blips button. After changes to the Roto splines, you may also need to click Link Frames; in most cases you will be prompted for that. Note that the calculated blips can require megabytes of disk space to store. After blips have been calculated and converted to trackers, you may wish to clear them to minimize storage space. The Clean Up Trackers dialog encourages this. (There is also a preferences option to compress SynthEyes scene files, though this takes some additional time when opening or saving files.)
Motion Profiles
SynthEyes offers a motion profile setting that allows a trade-off between processing speed and the range of image motions (per frame) that can be accommodated. If the image is changing little per frame, there is no point searching all over the image for each feature. Additionally, a larger search area increases the potential for a false match to a similar portion of the image. The motion profile may be set from the Summary or Feature panels. Presently, three primary settings are available:
Normal Motion. A wider search, taking longer.
Crash Pan. Use for rapidly panning shots, such as tripod shots. Not only a broader search, but allows for shorter-lived trackers that spin rapidly across the image.
Low Detail. Use for green-screen shots where much of the image has very little trackable detail.
There are several other modes from earlier SynthEyes versions which may be useful on occasion.
Green-Screen Shots
Although SynthEyes is perfectly capable of tracking shots with no artificial tracking marks, you may need to track blue- or green-screen shots, where the monochromatic background must be replaced with a virtual set. The plain background is often so clean that it has no trackable features at all. To prevent that, green-screen shots requiring 3-D tracking must be shot with tracking marks added onto the screen. Often, such marks take the form of an X or + made of electrical or gaffing tape. However, a dot or small square is actually more useful to SynthEyes over a wide range of angles. With a little searching, you can often locate tape that is a somewhat different hue or brightness than the background: just different enough to be trackable, but sufficiently similar that it does not interfere with keying the background. You can tell SynthEyes to look for trackers only within the green- or blue-screen region (or any other color, for that matter). By doing this, you will avoid having to tell SynthEyes specifically how to avoid tracking the actors. You can launch the green-screen control dialog from the Summary control panel, using the Green Screen button.
When this dialog is active, the main camera view will show all keyed (trackable) green-screen areas, with the selected areas set to the inverse of the key color, making them easy to see. [You can also see this view from the Feature panel's Advanced Feature Control dialog by selecting B/G Screen as the Camera View Type.] Upon opening this dialog, SynthEyes will analyze the current image to detect the most common hue. You may want to scrub through the shot to a frame with a lot of the key color before opening the dialog. Or, use the Scrub Frame
control at lower right, and hit the Auto button (next to the Average Key Color swatch) as needed. After the Hue is set, you may need to adjust the Brightness and Chrominance so that the entire keyed region is covered. Scrub through the shot a little to verify the settings will be satisfactory for the entire shot. The radius and coverage values should usually be satisfactory. The radius reflects the minimum distance from a feature to the edge of the green screen (or actor), in pixels. The coverage is the amount of the area within the radius that must match the keyed color. If you are trying to match solid non-key disks that go as close as possible to an actor, you might want to reduce the radius and coverage, for example. You should use the Low Detail motion hint setting at the top of the Summary panel when tracking green-screen shots (it normally reads Normal). SynthEyes's normal analysis looks for the motion of details in the imagery, but if most of the image is a featureless screen, that process can break down. With Low Detail selected, SynthEyes uses an alternate approach. SynthEyes will configure the motion setting automatically the first time you open the green-screen panel, as it turns on the green-screen enable. See also the technique for altering the auto-tracker parameters to help green-screen shots. The green-screen settings will be applied when the auto-track runs. Note that it is undesirable to have all of the trackers on a distant flat back wall; you need some trackers out in front to develop perspective. You might achieve this with tracking marks on the floor or (stationary) props, or by rigidly hanging trackable items from the ceiling or light stands. In these cases, you will want to use supervised tracking for these additional non-keyed trackers.
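As a rough illustration of the kind of test the Hue, Brightness, and Chrominance controls describe, here is a hypothetical keyed-pixel check in Python. This is not SynthEyes's actual keyer, and the threshold names and values are invented for the sketch; it only shows the idea of accepting pixels whose hue is near the key hue and whose saturation and brightness are high enough:

```python
import colorsys

# Hypothetical sketch of a key test: a pixel counts as "keyed"
# (trackable screen area) if its hue is close to the detected key hue
# and it is sufficiently saturated and bright.

def is_keyed(rgb, key_hue, hue_tol=0.05, min_sat=0.3, min_val=0.2):
    """rgb: (r, g, b) components in 0..1; key_hue in 0..1 (HSV hue)."""
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    dh = abs(h - key_hue)
    dh = min(dh, 1.0 - dh)          # hue wraps around the color circle
    return dh <= hue_tol and s >= min_sat and v >= min_val

GREEN_HUE = 1 / 3                   # pure green's hue in HSV
print(is_keyed((0.1, 0.8, 0.2), GREEN_HUE))   # screen pixel -> True
print(is_keyed((0.8, 0.6, 0.5), GREEN_HUE))   # skin-toned pixel -> False
```

Widening the hue tolerance or lowering the brightness/chrominance floors corresponds to the manual's advice to adjust Brightness and Chrominance until the entire keyed region is covered.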
Since the trackers default to a green color, if you are handling actual green-screen shots (rather than blue), you will probably want to change the tracker default color, or change the color of the trackers manually. See Keeping Track of the Trackers for more information. After green-screen tracking, you will often have several individual trackers for a given tracking mark, due to frequent occlusions by the actors. As well as being inconvenient, this does not give SynthEyes as much information as it would have if they were combined. You can use the Coalesce Nearby Trackers dialog to join them together; be sure to see the Overall Strategy subsection. The Projection Screen Creator script creates mesh geometry in the scene that is textured with the current shot imagery. As a special feature, you can tell it to matte out the screen color, leaving only the un-keyed actors and set visible in the 3-D perspective view. Or, use an existing alpha channel; be sure to turn on Keep Alpha when opening the shot. You can use this to place the actors at their appropriate location in the 3-D world as an aid to creating your composite. Use the Camera to Tracker Distance script to determine the distance: the value in parentheses, i.e., along the camera axis, is the correct value to give to the Projection Screen Creator script.
You can write the green-screen key as an alpha-channel or RGB image using the image preprocessor. Any roto-splines will be factored in as well. With a little setup, you can use the roto-splines as garbage mattes, and use small roto dots to repair the output matte to cover up tracking marks.
You can turn on the Peel button, then click on a blip, converting it to a full tracker. Repeat as necessary.
Alternatively, you can use the Add Many Trackers dialog to do just that in an intelligent fashion, after an initial shot solution has been obtained.
You can set a tracker's color using the color swatch on the Tracker panel, or by double-clicking the miniature swatch at the left of the tracker name in the graph editor. If you have already created the trackers, lasso-select the group, and shift-click to add to it (see the Lasso controls on the Edit menu for rectangular lassos). Then click the color swatch on the Tracker panel to set the color. In the graph editor panel, if you have several trackers selected, double-click the swatch to set the color of all the trackers in the group. Right-clicking the Tracker panel swatch will set the color back to the default. If you are creating a sequence of supervised trackers, once you set a color, the same color will be used for each succeeding new tracker, until you select an existing tracker with a different color, or right-click the swatch to get back to the default. You will almost certainly want to change the defaults, or set the colors manually, if you are handling green-screen shots. You will see the tracker colors in the camera view, perspective view, and 3-D viewports, as well as in the miniature swatch in the graph editor. If you have set up a group of trackers with a shared color, you can select the entire group easily: select any tracker in the group, then click the Edit/Select same color menu item or control-click the swatch in the graph editor. Each tracker has two possible colors: its primary color and a secondary color. The secondary color is used for each tracker when the View menu's Use alternate color menu item is checked. The second color is usually set up by the Set Color by RMS Color script; having done that, you can switch back and forth between color selections using the menu item. To aid visibility, you can select the Thicker trackers option on the preferences panel. This is particularly relevant for high-resolution displays, where the pixel pitch may be quite small. The Thicker trackers option is on by default for monitors over 1300 pixels horizontal resolution.
Note that there are some additional rules that may occasionally override the color and width settings, with the aim of improving clarity and reducing clutter.
Skip-Frame Track
The Feature panel contains a skip-frame checkbox that causes a particular frame to be ignored for automatic tracking and solving. Check it if the frame is subject to short-duration extreme motion blur (a camera bump), an explosion or strobe light, or if an actor suddenly blocks the camera. The skip-frame checkbox must be applied to each individual frame to be skipped. You should not skip more than 2-3 frames in a row, or too many frames overall, or you can make it more difficult to determine a camera solution, or at least create a temporary slide.
You should set up the skip-frame track before auto-tracking. There is some support for changing the skipped frames after blipping and before linking, but this is not recommended; you may have to re-run the auto-tracking step.
Supervised Tracking
Solving for the 3-D positions of your camera and elements of the scene requires a collection of trackers tracked through some or all of the shot. Depending on what happens in your shot, 7 or 8 may be sufficient (at least 6), but a complex shot, with trackers becoming blocked or going off the edge of the frame, can require substantially more. If the automated tracker is unable to produce satisfactory trackers, you will need to add trackers directly. Or, you can use the techniques here to improve automatically-generated ones. Specific supervised trackers can be especially valuable as references for inserting objects, or for aligning the coordinate system as desired. WARNING: tracking, especially supervised tracking, can be stressful to body parts such as your fingers, hands, wrists, eyes, and back, like any other detail-oriented computer activity. Be sure to use an ergonomically sound workstation setup and schedule frequent rest breaks. See Click-on/Click-off mode. To begin supervised tracking, select the Tracker control panel and turn on the Create button.
Tip: you can create a tracker at any time by holding down the C key and left-clicking in the camera view. Or, right-click in the camera view and select the Create Trackers item. In either case you will be switched to the Tracker control panel. Rewind to the beginning of the shot.
Locate a feature to track: a corner or small spot in the image that you could reach in and put your finger on. Do not select a reflective highlight that moves depending on camera location. Left-click on the center of your feature, and while the button is down, position the tracker accurately using the view window on the command panel. The gain and brightness spinners located next to the mini-tracker-view can make shadowed or blown-out features more visible. Adjust the tracker size and aspect ratio to enclose the feature, and a little of the region around it, using either the spinner or inner handle. Adjust the Search size spinner or outer handle based on how uncertainly the tracker moves each frame. This is a matter of experience. A smooth shot permits a small search size even if the tracker accelerates to a higher rate. Create any number of trackers before tracking them through the shot. It is easier to do either one or 3-6 at a time. To track them through the shot, hit the Play or frame forward button or use the mouse scroll wheel inside the mini-tracker-view (scrubbing the time bar does not cause tracking). Watch the trackers as you move through the shot. If any get off track, back up a frame or two, and drag them in the image back to
the right location. The Play button will stop automatically if a tracker misbehaves; the tracker is left selected for easy correction.
moved to the mouse location immediately when the button is depressed. In z-drop-lock mode, if you click over a mesh, it will be ignored. You can click a different tracker to select it, or use other usual left-mouse functionality, without issue. The status line will show ZDROP-LOCK when the mouse is in the camera view. You can keep an eye on a tracker or a few trackers by turning on the Pan to Follow item on the Track menu (keyboard: 5 key) and zooming in a bit on the tracker, so you can see the surrounding context. When Pan to Follow is turned on, dragging the tracker drags the image instead, so that the tracker remains centered. Or, the number-pad-5 key centers the selected tracker whenever you hit it.
supervised tracking, as discussed here. Please see the reference page for details of operating SimulTrack.
The SimulTrack view shows each frame of a tracker with a position key, and allows you to adjust the position key location: essentially, SimulTrack is an entire collection of mini tracker views, each corresponding to a different frame. To take advantage of this, you can set up a tracker for smooth keying, as in the prior section, open a SimulTrack view (either floating or as part of a viewport configuration), and track the tracker. You'll then see all automatically-generated keyframes simultaneously, and you can adjust any of them directly in the SimulTrack view, without having to change frames if you do not want to. Make sure Track/Smooth after keying is on, and the adjacent frames will automatically be updated to reflect the changed key. Note: if your machine's performance doesn't allow adequately rapid updates with Smooth after keying turned on, turn it off and re-run the track from the beginning after you have adjusted all the keys. After you have an initial solve for the shot, you have an exciting option available: you can have SynthEyes generate the entire set of auto-keys automatically, acting as if the tracker is a zero-weighted tracker. Set the first position key at the beginning of the lifetime of the tracker. Then click to a much later frame where the tracked feature is still visible, and set another key using ZDrop (hold down the 'z' key and click in the camera view). The two keys enable SynthEyes to estimate the tracker's 3-D location, then generate a position key at each appropriate frame (determined by its Key automatically setting on the Tracker Control Panel). All of those automatically-generated keys will pop up in the SimulTrack view. The locations will be approximate, based on the accuracy of your keys and the existing solve. Then, use the SimulTrack view to tweak the positioning of each of the generated key locations, which will trigger the Smooth after keying functionality.
You can use the strobe functionality (click on the 'S') to check each key location for consistency with its neighbors; go ahead and adjust it even while strobing! After each key has been adjusted, you'll have an accurate supervised track for the feature.
SUPERVISED TRACKING
Note that this is one potential workflow, not a required one. On a simple shot, allowing supervised trackers to track through the entire shot by themselves may be faster. You can then still use the SimulTrack view to monitor the results. We provide the tools; you decide the best way to use them.
Offset Tracking
Sometimes you want to track something that cannot be seen or tracked very well, but there is something nearby that can be tracked easily. This arises particularly at the corners of objects, when a very variable background is moving behind the object, which would make for time-consuming tracking. You may also want to get approximate tracks for nearby detailed features for 3-D modeling, after completing one solid tracker.
To track in this situation, use the offset channel of trackers. After you have completed basic 2-D tracking of the easy feature, either turn on the Offset checkbox to begin setting up an offset on the current tracker; click the New+ button on the Tracker control panel to make (clone) a copy of the selected tracker and turn on the offset channel; or begin adjusting the Offset spinners on the Tracker control panel.

WARNING: you should have plenty of experience with supervised tracking before considering offset tracking. The offset trackers can only be as good as the original supervised tracks, and the offset trackers will be trickier to think about!

The point of the offset channel is that it enables you to track in one place, but have the point of interest, the location whose 3-D position will be determined, be located somewhere else (typically a short distance away). When the Offset button is turned on, the offset channel's values are added to the 2-D tracker's position. The tracker view on the Tracker control panel will show the image at the final offset tracker position.

Once the Offset button is turned on, you can no longer change the 2-D tracker itself. You will see only a small crosshair that you can move to the desired location. You will set keys on the offset channel throughout the shot, so that it stays accurately on the point you are tracking. The crosshair graphic shows sidebars on frames with offset-channel keys. Similarly, the timing bar shows the location of offset-channel keys, not 2-D tracker keys.

If the offset tracker winds up in the wrong 3-D location compared to the (solved) 2-D tracker, it is because you have not set up the offset channel accurately! The relative 3-D location is ENTIRELY determined by what keys you set. An offset tracker does not contain as much information as a normal tracker. You can turn the Offset button on and off repeatedly as necessary.
As you do, you will see the tracker jump back and forth between the normal 2-D location and the offset location. When the Offset button is on, you can edit the offset tracker location. When it is off, you can edit the underlying 2-D tracker. Once you have set up an offset channel, if you turn the Offset button back off, you will see an additional crosshair in the camera view and tracker view that you can move around to create new offset keys. For this to happen, there must be only one tracker selected, it must not be locked, and the Tracker control panel must be open.

You can use the New+ button repeatedly on either the original 2-D tracker or any resulting offset tracker to create additional offset trackers. This is useful for detailed 3-D modeling. Important: if you need to change the 2-D tracker, you will need to do it for EVERY one of the cloned offset trackers. So it is important to get the 2-D tracker correct before beginning to clone offset trackers from it!
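Conceptually, the offset channel works like this sketch (illustrative Python, not SynthEyes code or its actual interpolation; the function names and linear interpolation are assumptions): the keyed offset is evaluated at the current frame and added to the tracked 2-D position to give the final point of interest.

```python
# Illustrative sketch: how an offset channel conceptually combines with
# the underlying 2-D track. Keys on the offset channel are interpolated
# (linear here, as an assumption) and added to the tracked position.

def interpolate_offset(offset_keys, frame):
    """Interpolate the keyed offset channel at `frame`.
    offset_keys: sorted list of (frame, (du, dv)) tuples."""
    if frame <= offset_keys[0][0]:
        return offset_keys[0][1]
    if frame >= offset_keys[-1][0]:
        return offset_keys[-1][1]
    for (f0, o0), (f1, o1) in zip(offset_keys, offset_keys[1:]):
        if f0 <= frame <= f1:
            t = (frame - f0) / (f1 - f0)
            return (o0[0] + t * (o1[0] - o0[0]),
                    o0[1] + t * (o1[1] - o0[1]))

def offset_position(tracked_uv, offset_keys, frame):
    """Final tracker position = tracked 2-D position + offset."""
    du, dv = interpolate_offset(offset_keys, frame)
    u, v = tracked_uv
    return (u + du, v + dv)

keys = [(0, (0.05, 0.00)), (10, (0.05, 0.02))]
print(offset_position((0.40, 0.30), keys, 5))   # halfway between keys
```

This also illustrates why the relative 3-D location is entirely determined by the keys you set: the solver only ever sees the summed position.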
Offset tracking is best for simple camera motions, such as a left-to-right dolly, even one that bounces quite a bit, rather than a complex motion-control move that would require many different offset keys to get right.
Combining Trackers
You might discover that you have two or more trackers tracking the same feature in different parts of the shot, or trackers that are extremely close together, that you would like to consolidate into a single tracker. Select the trackers, using a lasso-select or by shift-selecting them in the camera view or graph editor (see the Lasso controls on the Edit menu for rectangular lassos). Then select the Track/Combine trackers menu item, or press Shift-7 (ampersand, &). All selected trackers will be combined, preserving associated constraint information.

If several of the trackers being combined are valid on the same frame, their 2-D positions are averaged. Any data flagged as suspect is ignored, unless it is the only data available. Similarly, the solved 3-D positions are averaged. There is a small amount of intelligence to maintain the name and configuration of the most-developed tracker. When you combine trackers that have offsets, the offset channel data is lost: it is baked into the combined tracker position.

Note: the camera view's lasso-select will select only trackers enabled on the current frame, not the 3-D point of a tracker that is disabled on the present frame. This is by design for the usual case when editing trackers. Control-lasso to lasso both the 2-D trackers and the 3-D points, or shift-click to select 3-D points.
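The per-frame averaging rule described above can be sketched as follows (a hypothetical illustration, not the SynthEyes implementation; the function name and data layout are assumptions):

```python
# Sketch of the combine rule: positions valid on the same frame are
# averaged, and data flagged as suspect is ignored unless it is the
# only data available.

def combine_frame(samples):
    """samples: list of (u, v, suspect_flag) for one frame."""
    good = [(u, v) for u, v, suspect in samples if not suspect]
    use = good if good else [(u, v) for u, v, _ in samples]
    n = len(use)
    return (sum(u for u, _ in use) / n, sum(v for _, v in use) / n)

print(combine_frame([(0.2, 0.4, False), (0.4, 0.6, False)]))  # averaged
print(combine_frame([(0.2, 0.4, True), (0.4, 0.6, False)]))   # suspect ignored
```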
Pan To Follow
While tracking, it can be convenient to engage the automatic Pan To Follow mode on the Track menu, which centers the selected tracker(s) in the camera view, so you can zoom in to see some local context without having to constantly adjust the viewport positioning. When Pan To Follow is turned on and you start to drag a tracker, the image will be moved instead, so that the tracker remains centered. This may be surprising at first. Once you complete a tracker, you can scrub through the shot and see the tracker crisply centered while the surroundings move around a bit. This is the best way to review the stability of a track.
Skip-Frame Track
If a few frames are untrackable due to rapid camera motion, an explosion, a strobe, or an actor blocking the camera, you can engage the Skip Frame checkbox on the feature panel to cause the frame to be skipped. You should only skip a few frames in a row, and not many overall. The Skip Frame setting does not affect supervised tracking, but it does affect solving: all trackers are ignored on skipped frames. After solving, the camera will have a spline-interpolated motion on the resulting unsolved frames. If you have a mixture of supervised and automatic tracking, see the section on the Skip-Frame track in Automated Tracking, as changing the track after automated tracking can have adverse effects.
You can turn on the Fine-tune during auto-track checkbox on the Fine-tune trackers dialog or the Summary panel to have fine-tuning done during auto-tracking. Or, you can do an automatic track and solve, then decide to fine-tune and refine the solve later: the workflow is up to you.
Controlling Fine-Tuning
When you fine-tune, SynthEyes will modify each auto-tracker so that there is only one key every 8 frames (by default), then run the supervised tracker at all the intermediate frames. There are several options you can control when starting fine-tuning:
- The spacing between keys
- The size of the trackers
- The aspect ratio of the trackers (usually square, 1.0)
- The horizontal and vertical search sizes
- The shot's current supervised-tracker filter interpolation mode
- Whether all auto-trackers will be tuned, or only those that are currently selected (whether they are automatic, or a previously-unlocked automatic tracker, which would not otherwise be processed)
- Whether you want the trackers to remain auto-trackers, or be changed to be considered gold supervised trackers
Usage Suggestions
The fine-tuning process is not necessary on all shots. The automatic tracker produces excellent results, and fine-tuning may produce results that are indistinguishable from the original. Shots with a slow camera motion or with busy, repeating content (such as woods and shrubbery) may deserve special attention. You can do a quick test by selecting and fine-tuning a single tracker, then comparing its track (using tracker trails) before and after fine-tuning using Undo and Redo. (See the online tutorial.) If the fine-tuning is beneficial, then fine-tune the remaining trackers.

After fine-tuning, be sure to check the tracker graphs in the graph editor and look for isolated spikes. Occasional spikes are typical when a tracker is in a region with a lot of repeating fine detail, such as a picket fence. Keep in mind that though fine-tuning can help give you a very smooth track, often there are other factors at play as well, especially film grain, compression artifacts, or interlacing.
4. Turn on View/Show tracker trails on the main or right-click menu.
5. Scrub through the shot using the time bar. Look for funny hooks at the beginning or end of a track, especially at the edges of the image, and for zig-zag discontinuities in the trails.
Your mind is good at analyzing motion paths without the images, perhaps even better because it is not distracted by the images themselves. This process is helpful in determining the nature of problematic shots, such as shots with low perspective, shots that unexpectedly turn out to have been shot on a tripod, tripod shots with some dolly or sway, and shots that have a little zoom somewhere in them.
After you open the graph editor, make sure it is in the tracks view, if you've been playing with it earlier. If the shot is supervised tracking, you may want to click the sort order button to change from sort-alphabetic to sort-by-time. If you have also resized the window, you may want to reset the horizontal scaling.
Next, click on the two buttons at the lower right of the panel to select squish mode, with no keys, and with the tracker-count background visible (it starts out visible). The graph editor on one example shot looks like this:
Each bar corresponds to one of the trackers; Tracker4 is selected and drawn thicker. The color-coded background indicates that the number of trackers is problematic at left in the yellow area, OK in the middle, and safe on the right. You can configure the safe level in the preferences. Above this limit (default 12), the background will be white (gray for the dark UI setting); below the safe limit, the background will be the safe color (configurable as a standard preference), which is typically a light shade of green: the number of trackers is OK, but not high enough to hit your desired safe limit.

This squished view gives an excellent quick look at how trackers are distributed throughout the shot. The color coding varies for tripod-mode shots and for shots with hold regions. Zero-weighted trackers do not count.

Hint: When the graph editor is in graph mode, you can look at a direct graph of the number of valid trackers on each frame by turning on the #Normal channel of the Active Trackers node.
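As a rough sketch, the background classification described above might look like the following (illustrative only; the lower "problematic" threshold is an assumption, since the text only gives the safe level's default of 12):

```python
# Hypothetical classification of the tracker-count background colors.
# Only the safe level's default (12) comes from the text; `minimum`
# is an assumed threshold for the problematic (yellow) zone.

def count_color(n_trackers, minimum=6, safe=12):
    if n_trackers < minimum:
        return "yellow"   # problematic: too few trackers
    if n_trackers < safe:
        return "green"    # OK, but below your desired safe limit
    return "white"        # at or above the safe limit (gray in dark UI)

print(count_color(3))    # yellow
print(count_color(15))   # white
```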
The graph editor is divided into three main areas: a hierarchy area at top left, a canvas area at top right, and a tool area at the bottom. You can change the width of the hierarchy area by sliding the gutter on its right. You can partially or completely close the tool area with the toolbox at left. A minimal view is particularly handy when the graph editor is embedded in a viewport layout.

In the hierarchy area, you can select trackers by clicking their line. You can control-click to toggle selections, or shift-drag to select a range. The scrollbar at left scrolls the hierarchy area. You can also select trackers in the canvas area in squish mode, using the same mouse operations as in the hierarchy area.

The icons next to the tracker name provide quick control over the tracker visibility, color, lock status, and enable. Warning: you cannot change the enable, or much else, of a tracker while it is locked! The small green swatch shows the display color of a tracker or mesh. Double-clicking brings up the color selection dialog so you can change the display color. You can shift-click a color to add all trackers of that color to the current selection, control-click the swatch of an unselected tracker to select only trackers of that color, or control-click the swatch of a selected tracker to unselect the trackers of that color.

Jumping ahead, the graph editor hierarchy also shows any coordinate-system lock settings for each tracker: x, y, and z for the respective axis constraints; l (lower-case L) when there is a linked tracker on the same object; i for a linked tracker on a different object (an indirect link); d for a distance constraint; 0 for a zero-weighted tracker; p for a pegged tracker; F for a tracker you specified to be far; f for a tracker not requested to be far, but solved as far for cause.
In this example, a tracker suddenly started jumping along fence posts, from pole to pole, on three consecutive frames. The red curve is the horizontal U velocity, the green is the vertical V velocity, and the purple curve is the tracker figure-of-merit (for supervised trackers). You can see the channels listed under Tracker15 at left. The green circles show which channels are shown; zoom, pan, and color controls are adjacent. Double-clicking will turn all the related channels on or off.

There are a variety of different curves available, not only for the trackers but for other node types within SynthEyes. The graph editor is a multi-curve editor: any number of completely different kinds of curves can be displayed simultaneously. There is no single set of coordinate values in the vertical direction because the zoom and pan can be different for each kind of channel. To determine the numeric value at any particular point on a curve, put the mouse over it and a tooltip will pop up with the set of values.

The graph editor displays curves for each node that is exposed (its channels are displayed: Enable, U. Vel, V. Vel, etc. above). The graph editor also displays curves for all selected nodes (trackers, cameras, or moving objects) as long as the Draw Curves for Selected Nodes button is turned on. This gives you quite a bit of quick control over what is drawn, and enables you to compare a single tracker's or camera's curves to any other tracker as you run through them all, for example.
In this example, two trackers have been supervised-tracked with a Key Every setting of 20 frames (but starting at different frames). The tracker Figure of Merit (FOM) curve measures the amount of difference between the tracker's reference pattern and what is found in the image. You see it drop down to zero each time there is a key, because then the reference and image are the same. One tracker has a small FOM value that stays mostly constant. The other tracker has a much larger FOM, and in part of the shot it is larger still. In a supervised shot, the reason for that should be investigated. You can use this curve to help decide how often to place a key automatically. The 20-frame value shown above is plenty for those features. If you see the following, you should reduce the spacing between keys.
You'll also be able to see the effect of the Key Smooth setting: the key smoothing will flatten a steadily increasing curve into a gently rounded hump, which will reduce spikes in the final camera path.
Velocity Spikes
Here's an example of a velocity curve from the graph editor:
At frame 217, the tracker jumped about 3 pixels right, to a very similar feature. At frame 218, it jumped back, resulting in the distinctive sawtooth pattern the U velocity curve exhibits. If left as-is, this spike will result in a small glitch in the camera path on frame 217.

You can repair it using the Tracker control panel in the main user interface by going to frame 217. Jiggle back and forth a few frames with the S and D keys to see what's happening, then unlock the tracker and drop down a new key or two. Step around to re-track the surrounding frames with the new keys (or rewind and play through the entire sequence, which is most reliable).

DeGlitch Mode

You can also repair the glitch by switching to the Deglitch mode of the graph editor, then clicking on the first (positive) peak of the U velocity at frame 217. SynthEyes will compute a new tracker location that is the average of the prior and following locations. For most shots, this will eliminate the spike. If you see a velocity spike in one direction only, it will be more difficult to correct: it means that the tracker has jumped to a nearby feature and not come back. You will have to put it back in its correct location and then play (track) through the rest of the shot. The deglitch tool can also chop off the first or last frame of a tracker, which can be affected when an object moves in front, or when a feature is moving offscreen.
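The deglitch repair described above is simple to express: the spike frame's position is replaced by the average of the prior and following frames' positions. A minimal illustrative sketch (names and data layout are assumptions, not the SynthEyes implementation):

```python
# Sketch of the deglitch averaging rule: the new tracker location on
# the spike frame is the average of its two neighboring frames.

def deglitch(track, frame):
    """track: dict frame -> (u, v). Replaces the spike frame with
    the average of its neighbors and returns the new location."""
    (u0, v0), (u1, v1) = track[frame - 1], track[frame + 1]
    track[frame] = ((u0 + u1) / 2.0, (v0 + v1) / 2.0)
    return track[frame]

track = {216: (0.50, 0.40), 217: (0.53, 0.40), 218: (0.50, 0.40)}
print(deglitch(track, 217))   # spike at 217 pulled back to (0.5, 0.4)
```

Note that this only works for the two-sided sawtooth case; a one-direction spike means the tracker never came back, so averaging neighbors would not put it on the correct feature.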
Introducing Holds
Some shots are more complex, however: they contain both sections where the camera translates substantially, and sections where the camera pans substantially without translating. For example, the camera dollies down a track, looking to the left, reaches the end of the track, spins 180 degrees, then returns down the track while looking to the right. Such a shot is complex because none of the trackers visible in the first section of the shot are visible in the third portion. During the second, panning tripod portion, all the trackers must be Far and can have no depths, because the camera never translates during their lifetime. Taken literally, and of course we're talking computers here, mathematically there is no way for SynthEyes to tell what happened between the first and third sections of the shot: the camera could have translated from here to Mars during the second section, and since the Far points are infinitely far away, the tracking data would be the same. Instead, we need to tell SynthEyes that the camera is not translating during the second section of the shot. We call this a hold, and there is a button
for this on the Summary and Solver control panels. By animating the Hold button, you can tell SynthEyes during which range(s) of frames the camera is panning but not translating. SynthEyes calculates a single XYZ camera position for each section of frames where the Hold button is continuously on, though it continues to calculate separate pan, tilt, and roll (and optionally zoom) for each frame. (Note: you do not have to set up a hold region if the camera comes to a stop and pans only a little, so that most of the trackers are visible both before and after the hold region. That can still be handled as a regular shot.)

The Hold button can be animated on and off in any pattern: off-on-off as above; off-on, a shot with a final panning section; on-off, a shot with an initial panning section followed by a translation; on-off-on, a pan at each end of a dolly; off-on-off-on, a translation, a pan, another translation, and a final pan; etc. There is no requirement on what happens during each pan or each translation; they can all be different. In effect, you are building a path with beads on it, where each bead is a panning hold section.
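Since the solver computes one XYZ position per contiguous run of hold frames, the animated Hold button effectively partitions the shot into sections. A small sketch of that partitioning (illustrative only; SynthEyes stores hold state as animated keys, not a per-frame list):

```python
# Sketch: group an animated on/off hold flag into contiguous hold
# sections, each of which would get a single XYZ camera position.

def hold_sections(hold_flags):
    """hold_flags: list of bools, one per frame. Returns a list of
    (start, end) inclusive frame ranges where the hold is on."""
    sections, start = [], None
    for f, on in enumerate(hold_flags):
        if on and start is None:
            start = f                       # hold section begins
        elif not on and start is not None:
            sections.append((start, f - 1)) # hold section ended
            start = None
    if start is not None:                   # hold runs to the last frame
        sections.append((start, len(hold_flags) - 1))
    return sections

# off-on-on-off-on pattern: two hold "beads" on the path
print(hold_sections([False, True, True, False, True]))  # [(1, 2), (4, 4)]
```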
The tool operates in a variety of modes. Only automatically-generated tracks are affected. Golden and Far trackers are not affected.
Usage Hints
You should play with the Hold Preparation tool a bit, setting up a few fake hold regions, so you can see what the different modes do. The Undo button on the Hold Preparation Tool is there for a reason! It will be easier to see what is happening if you select a single tracker and switch to Selected mode, instead of changing all the trackers. After running the Hold Preparation operation (Apply button), you may want to switch to the Sort by Time option in the graph editor.
If you need to change the hold region late in your workflow, it helps if all of the original tracking data is still available. If you have run a Truncate, the tracking data for the interior of the hold regions will be gone and will have to be re-tracked. For that reason, the Truncate operation should be used sparingly, perhaps only when first learning. If you have done some tracker preparation, then other things, then need to redo the preparation, use the Select By Type item on the Script menu to select the Far trackers, then delete them. Make sure not to delete any Far trackers you have created specially.
If you are unsure whether the camera zoomed or not, try the fixed-lens setting first, and switch to zoom only if warranted. Generally, if you solve a zoom shot with the fixed-lens setting, you will be able to see the zoom's effect on the camera path: the camera will suddenly push back or in when it seems unlikely that the real camera made that motion. Sometimes, this may be your only clue that the lens zoomed a little bit.

Important: Never use Known mode solely because someone wrote down the lens setting during shooting. Like the turn signal of an oncoming car, it is only a guess, not something you can count on. Do not set a Known focal length unless it is truly necessary.

You may have the scribbled lens focal length from on-set production. If you also know the plate size, you can use the Fixed, with Estimate setting to speed up the beginning of solving a bit, and sometimes to help prevent spurious incorrect solutions if the tracking data is marginal. The mode is also useful when you are solving several shots in a row that have the same lens setting: you can use the field of view value without worrying about plate size. In either case, you should rewind to the beginning of the shot and either reset any existing solution, or select View/Show Seed Path, then set the lens field of view or focal length to the correct estimated value. SynthEyes will compute a more accurate value during solving.

It can be worthwhile to use an estimated lens setting as a known lens setting when the shot has very little perspective to begin with, as it will be difficult to determine the exact lens setting. This is especially true of object-mode tracking when the objects are small.

The Known lens mode lets you animate the field of view to accommodate a known, zooming lens, though this will be rare. For the more common case where the lens value is fixed, be sure to rewind to the beginning of the shot, so that your lens FOV key applies to the entire shot.
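Converting an on-set focal length plus plate size into an estimated field of view uses the standard pinhole relationship FOV = 2·atan(plate / 2f). A short sketch (the example numbers are illustrative, not SynthEyes defaults):

```python
# Standard focal-length-to-horizontal-FOV conversion, useful when
# turning a scribbled on-set focal length and a known back plate
# width into an estimated field of view.

import math

def fov_degrees(focal_length_mm, plate_width_mm):
    """Horizontal field of view: FOV = 2 * atan(plate / (2 * f))."""
    return math.degrees(
        2.0 * math.atan(plate_width_mm / (2.0 * focal_length_mm)))

# e.g. a 35 mm lens on a roughly 24.9 mm-wide plate (values illustrative)
print(round(fov_degrees(35.0, 24.9), 1))
```

Note that the result is only as good as the plate width you use, which for film is set by the scanner, as discussed in the de-centering section.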
When a zoom occurs only during a portion of a shot, you may wish to use the Filter Lens F.O.V. script to flatten out the field of view during the non-zooming portions, then lock it. This eliminates the zoom/translation coupling that causes noisier camera paths on zoom shots. See the online tutorial for more details. You can also set up animated filter controls using the post-solve filtering to selectively filter more during the stationary, non-zooming portions.
The early approach, in image prep, is controlled from the Lens tab of the image preparation dialog. It lets you set distortion coefficients to remove the distortion from the source imagery (or later add it back). But you must already know the coefficients, or fiddle to find them. The image preprocessor can also accept lens presets, if you have pre-calibrated the lens or obtained a preset elsewhere. The presets can specify fisheye lenses or other complex distortion patterns.

The late approach to lens distortion, during solving, allows the solving engine to determine a most-likely distortion value. This approach uses only a single distortion parameter, appropriate only for moderate distortion, not severe moustache distortion or fisheye lenses. The imagery you see will be the distorted (original source) images, with the tracker locations made to match up in the camera view, but not in the perspective view. Usually you will want to produce some undistorted footage once you determine the distortion, at least for temporary use.
Lens De-Centering
What we would call a lens these days, whether it is a zoom lens, a prime lens, or a fisheye lens, typically consists of 7-11 individual optical lens elements. Each of those elements has been precisely manufactured and aligned, and by the nature of this, they are all very round in order to work properly. They are then stacked up in a tube, which is again very round (along with gears and other mechanisms), to form the kind of lens we buy in the store.

The important part of this explanation is that a lens is very round and symmetric and has a single well-defined center right down the middle. You can picture a laser beam going right down the exact center of each individual element of the overall lens, shooting in the front and out the back towards the sensor chip or film.

With an ideal camera, the center beam of the lens falls exactly in the middle of the sensor chip or film. When that happens, parallel lines converge at infinity at the exact center of the image, and as objects get farther away, they gravitate towards the center of the image. While that seems obvious, in fact it is rarely true. If you center something at the exact center of the image, then zoom in, you'll find that the center goes sliding off to a different location! This is a result of lens de-centering.

In a video camera, de-centering results when the sensor chip is slightly off-center. That can be a result of the manufacturer's design, but also because the sensor chip can be assembled in slightly different positions within its mounting holes and sockets. In a film camera, the centering (and image plate size) are determined solely by the film scanner! So the details of the scanning process are important (and should be kept repeatable).

De-centering creates systematic errors in the match-move when left uncorrected. The errors will result in geometric distortion, or sliding. Most rendering packages cannot render images with a matching de-centering, guaranteeing problems.
And like the zooming example earlier, a de-centered lens can result in footage that doesn't look right. It is fairly easy to determine the position of the lens center using a zoom lens; see the de-centering tutorial on the web site. Even if you will shoot with a prime lens, you can use a zoom lens to locate the lens center, since the lenses are repeatable and the error is determined by the sensor/film scan.
If a shot lines up in SynthEyes, but not in your compositing or animation software, checking these items is your first step. Since SynthEyes preprocesses the images, or mathematically distorts the tracker locations, the post-tracking software will generally not receive matching imagery unless care is taken, as described below, to generate matching imagery. SynthEyes has a script that simplifies the workflow when dealing with distorted imagery. It uses a simple approach to the setup. You can do whatever it does manually, or via your own (modified) script, if you need to do something different.
If you need to do something different, or want to do more of the steps manually, here is what is happening behind the scenes. The Lens Workflow script performs the following actions for you: transfers any calculated distortion from the lens panel to the image preprocessor, turns off the distortion calculation for future solves, changes the scale adjustment on the image prep Adjust tab to remove black pixels, selects Lanczos interpolation, updates the tracker locations (i.e., Apply to Trackers on the Output tab), adjusts the field of view, and adjusts the back plate width (so the focal length will be unchanged).

When you do a Change Shot Images with the Switch to saved footage option, SynthEyes resets the image preprocessor to do nothing: if the lens distortion and other corrections have already been applied to the modified images, you do not want to perform them a second time once you switch to the already-corrected images.

Delivering Distorted Imagery

In this workflow option, you create and track undistorted imagery, generate CG effects, re-distort the effects, then composite the distorted version back onto the original imagery.
1. Determine the lens distortion via checklines, or by solving with Calculate Distortion turned on.
2. Save, then Save As to create a new version of the file (recommended).
3. Click Lens Workflow on the Summary panel (or start the Lens/Lens Workflow script).
4. Select the Final output option Redistorted(2) and hit OK. The script will pad the image so that the output contains every input
Obviously this is a more complex workflow than delivering undistorted images, but it is a consequence of the end product desired.
Solving Modes
Switch to the Solve control panel. Select the solver mode as follows:
- Auto: the normal automatic 3-D mode for a moving camera or a moving object.
- Refine: after a successful Auto solution, use this to rapidly update the solution after making minor changes to the trackers or coordinate-system settings.
- Tripod: the camera was on a tripod; track pan/tilt/roll(/zoom) only.
- Refine Tripod: same as Refine, but for Tripod-mode tracking.
- From Seed Points: use six or more known 3-D tracker positions per frame to begin solving (typically, when most trackers have existing coordinates from a 3-D scan or architectural plan). You can use Place mode in the perspective view to put seed points on the surface of an imported mesh. Turn on the Seed button on the coordinate system panel for such trackers. You will often make them locks as well.
- From Path: when the camera path has previously been tracked, estimated, or imported from a motion-controlled camera. The seed position, orientation, and field of view of the camera must be approximately correct.
- Indirect: to estimate based on trackers linked to another shot, for example, a narrow-angle DV shot linked to wide-angle digital camera stills. See Multi-shot tracking.
- Individual: when the trackers are all individual objects buzzing around; used for motion and facial capture with multiple cameras.
- Disabled: when the camera is stationary, and an object viewed through it will be tracked.
The solving mode mostly controls how the solving process is started: what data is considered to be valid, and what is not. The solving process then proceeds pretty much the same way after that, subject to whatever constraints have been set up.
World Size
Adjust the World Size on the solver panel to a value comparable to the overall size of the 3-D set being tracked, including the position of the camera. The exact value isn't important. If you are shooting in a room 20 feet across, with trackers widely dispersed in it, use 20. But if you are only shooting items on a desktop from a few feet away, you might drop down to 10.

Important: the world size does not control the size of the scene; that is the job of the coordinate system setup. The world size is used to stabilize some internal mathematics during solving: essentially, all the coordinates are divided by it internally, so that the coordinates stay near 1 even when raised to a large power. Then, after the calculation, the world size is multiplied back in. This process improves your computer's accuracy.

Choose your coordinate system to keep the entire scene near the origin, as measured in multiples of the world size. If all your trackers will be 1000 world sizes from the origin (for example, near [1000000,0,0] with a world size of 1000), accuracy might be affected. The Shift Constraints tool can help move them all if needed.

As you see, the world size does not affect the calculation directly at all. Yet a poorly chosen world size can sabotage a solution. If you have a marginal solve, sometimes changing the world size a little can produce a different solution, maybe even the right one. The world size is also used to control the size of some things in the 3-D views and during export: we might set the size of an object representing a tracker to be 2% of the world size, for example.
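The divide-then-multiply-back idea can be sketched as follows (a toy illustration of the numerical-conditioning trick described above, not SynthEyes internals; the function names and the identity "solver" are assumptions):

```python
# Sketch of world-size normalization: coordinates are divided by the
# world size so internal math works on values near 1, then the world
# size is multiplied back in afterwards.

def solve_normalized(points, world_size, internal_solver):
    """points: list of (x, y, z). internal_solver operates on the
    well-conditioned, scaled coordinates."""
    scaled = [(x / world_size, y / world_size, z / world_size)
              for x, y, z in points]
    result = internal_solver(scaled)      # values stay near 1 here
    return [(x * world_size, y * world_size, z * world_size)
            for x, y, z in result]

# Trivial identity "solver", just to show the round trip:
pts = [(12.0, -4.0, 7.5)]
print(solve_normalized(pts, 20.0, lambda p: p))
```

This also makes it clear why a scene sitting 1000 world sizes from the origin is a problem: after division, the coordinates are still around 1000, defeating the conditioning.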
Go!
You're ready, set, so hit Go! on the Solver panel. SynthEyes will pop up a monitor window and begin calculating. Note that if you have multiple cameras and objects tracked, they will all be solved simultaneously, taking inter-object links into account. If you want to solve only one at a time, disable the others.
During Solving
If you are solving a lengthier shot where trackers come and go, and where there may be some tracking issues, you can monitor the quality of the solving from the messages displayed. As it solves, SynthEyes is continually adjusting its tentative solution to become better and better (iterating). As it iterates, SynthEyes displays the field of view and total error on the main (longest) shot. You can monitor this information to determine if success is likely, or if you should stop the iterations and look for problems.

SynthEyes will also display the range of frames it is adding to the solution as it goes along. This is invaluable when you are working on longer shots: if you see the error suddenly increase when a range of frames is added, you can stop the solve and check the tracking in that range of frames, then resume.

You can monitor the field of view to see if it is comparable to what you expect, whether from an eyeballed guess or from data supplied by an on-set supervisor. If it does not seem good to start, you can turn on Slow but sure and try again. Also, you can watch for a common situation where the field of view decreases more and more until it gets down to one or two degrees. This can happen if there are some very distant trackers which should be labeled Far, or if there are trackers on moving features, such as a highlight, actor, or automobile.

If the error suddenly increases, this usually indicates that the solver has just begun solving a new range of frames that is problematic. Your processor utilization is another source of information. When the tracking data is ambiguous, usually only on long shots, you will see the message "Warning: not a crisp solution, using safer algorithm" appear in the solving monitor window.
After Solving
Though having a solution might seem to be the end of the process, in fact, it's only the middle. Here's a quick preview of things to do after solving, which will be discussed in more detail in further sections:

Check the overall errors
Look for spikes in tracker errors and the camera or object path
Examine the 3-D tracker positioning to ensure it corresponds to the cinematic reality
Add, modify, and delete trackers to improve the solution
Add or modify the coordinate system alignment
Add and track additional moving objects in the shot
Insert 3-D primitives into the scene for checking or later use
Determine the position or direction of lights
Convert computed tracker positions into meshes
Export to your animation or compositing package
Once you have an initial camera solution, you can approximately solve additional trackers as you track them, using Zero-Weighted Trackers (ZWTs).
RMS Errors
The solver control panel displays the root-mean-square (RMS) error for the selected camera or object, which is how many pixels, on average, each tracker is from where it should be in the image. [In more detail, the RMS average is computed by taking a bunch of error numbers, squaring them, dividing by the number of numbers to get the average square, then taking the square root of that average. It's the usual way to measure how big errors are when the error can be both positive and negative; a regular average might come out to zero even if there was a lot of error!]

The RMS error should be under 1 pixel, preferably under 0.5 for well-tracked features. Note that during solving, the popup will show an RMS error that can be larger, because it contains contributions from any constraints that have errors. Also, the error during solving is for ALL of the cameras and objects combined; it is converted from internal format to human-readable pixel error.
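The bracketed description of the RMS average can be written out directly. A minimal sketch (the per-tracker error values are made up for illustration):

```python
import math

# RMS (root-mean-square) average as described above: square each error,
# average the squares, then take the square root of that average.
def rms(errors):
    return math.sqrt(sum(e * e for e in errors) / len(errors))

# Signed errors that would cancel in a plain average still register here.
per_tracker_errors = [0.4, -0.3, 0.5, -0.4]   # hypothetical pixel errors
print(rms(per_tracker_errors))                 # ~0.406 pixels
print(sum(per_tracker_errors) / 4)             # plain average: ~0.05
```

Note how the plain average understates the error by nearly a factor of ten here, which is exactly why RMS is used.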
Solving Issues
If you encounter the message "Can't find suitable initial frames", it means that there is limited perspective in the shot, or that the Constrain button is on but the constrained trackers are not simultaneously valid. Turn on the checkboxes next to the Begin and End frames on the Solver panel, and select two frames with many trackers in common, where the camera or object rotates around 30 degrees between the two frames. You will see the number of trackers in common between the two frames; you want this to be as high as possible. Make sure the two frames have a large perspective change as well: a large number of trackers will do no good if they do not also exhibit a perspective change. Also, it is a good idea to turn on the "Slow but sure" checkbox.

You may encounter "size constraint hasn't been set up" under various circumstances. If the solving process stops immediately, probably you have no trackers set up for the camera or object cited. Note that if you are doing a moving object shot, you need to set the camera's solving mode to Disabled if you are not tracking it also, or you will get this message.

When you are tracking both a moving camera and a moving object, you need to have a size constraint for the camera (one way or another), and a size constraint for the object (one way or another). So you need TWO size constraints. It isn't immediately obvious to many people why TWO size constraints are needed. This is related to a well-known optical illusion, relied on in shooting movies such as "Honey, I Shrunk the Kids". Basically, you can't tell the difference between a little thing moving around a little, up close, and a big thing moving around a lot, farther away. You need the two size constraints to set the relative proportions of the foreground (object) and background (camera).

The related message "Had to add a size constraint, none provided" is informational, and does not indicate a problem.
If you have SynthEyes scenes with multiple cameras linked to one another, you should keep the solver panel's Constrain button turned on to maintain proper common alignment. See also the Troubleshooting section.
3-D Review
After SynthEyes has solved your scene, you'll want to check out the paths in 3-D, and see what an inserted object looks like. SynthEyes offers several ways to do this: traditional fixed 3-D views, including a Quad orthogonal view, camera-view overlays, a user-controlled 3-D perspective window, preview movies, and velocity vs. time curves.
Quad View
If you are not already in Quad view, switch to it now using the toolbar. You will see the camera/object path and 3-D tracker locations in each view. You can zoom and pan around using the middle mouse button and scroll wheel. You can scrub or play the shot back in real time (in sections, if there is insufficient RAM). See the View menu for playback rate settings.
Frequently these trackers are distant horizon points that should be changed to Far, or should be corrected, or deleted if they are on a moving object or are the result of some image artifact. Such points can also arise when a tracker is visible for only a short time while the camera is not moving. The Clean up trackers dialog can do this automatically.

Note: the too-far-away test can cause trouble if you have a small world size setting but are using measured GPS coordinates. You should offset the scene towards the origin using the Shift Constraints script.

You should also look for trackers that are behind the camera, which can occur with points that should be labeled Far, or when the tracking data is incorrect or insufficient for a meaningful answer. After repairing, deleting, or changing too-far-away or behind-camera trackers, you should use the Refine mode on the Solver panel to update the solution, or solve it from scratch. Eliminating such trackers will frequently provide major improvements in scene geometry.
This glitch was easy to pick out: so large that the U and V velocities had to be moved out of the way to keep them clearly visible. The deglitch tool fixes it easily.
You can look at the overall error for a tracker from the Coordinate System panel. This is easiest after setting the main menu's View/Sort by Error, unselecting all the trackers (control/command-D), then clicking the down arrow on your keyboard to sequence through the trackers from worst towards best.

In addition to the curves in the graph editor, you can see the numeric error at the bottom of the tracker panel: both the total error, and the error on the current frame. You can watch the current error update as you move the tracker, or set it to zero with the Exact button.

For comparison, following is a tracker graph that has a fairly large error; it tracks a very low-contrast feature with a faint moving highlight and changing geometry during its lifespan. It never has a very large peak error or velocity, but maintains a high error level during much of its lifespan, with some clearly visible trends indicating the systematic errors it represents.
The vertical scale is the same in these last three graphs. (Note that in the 3rd one, the current time is to the left, before frame 160 or so, hence the blue arrow.)
You can sort the trackers within the graph editor's Active Trackers node by changing Sort Alphabetic to Sort By Error.
The SimulTrack view can also be helpful in quickly looking at many or all of the trackers, especially in Sort By Error mode. The error curve for the tracker is shown in each tile (as long as right-click/Show Error in the SimulTrack view is on). Do not blindly correct apparent tracking errors. A spike suggesting a tracking error might actually be due to a larger error on a different tracker that has grossly thrown off the camera position, so look around.
There's a spike around frames 215-220. To find it, expose the Active Trackers, select them all (control/command-A), and use Isolate mode around that range of frames. The result:
We've found the tracker that causes the spike, and can use the deglitch tool, or switch back to the tracker control panel and camera viewport, unlock the tracker, correct it, then re-lock it.
Tip: In the capture above, the selected tracker is not visible in the hierarchy view. You can see where it is, though: it is located at the white spot inside the hierarchy view's scroll bar. Clicking at that spot on the scroll bar will pan the hierarchy view to show the selected tracker.

If that is the last glitch to be fixed, switch to the Solve control panel and re-solve the scene using Refine mode.
You can also use the Finalize tool on the tracker control panel to smooth one or more trackers, though significant smoothing can cause sliding. If
your trackers are very noisy, check whether film grain or compression artifacts are at fault (which can be addressed by image-preprocessor blur), verify that the interlace setting is correct, or see if you should fine-tune the trackers.

Alternatively, you can fix glitches in the object path by using the deglitch tool directly on the camera or moving object's curves, because it works on any changeable channel. You can also reposition the object on the offending frame using the 3-D viewports and the tools on the 3-D control panel.
Warning: If you fix the camera path, instead of the tracker data, then later re-solve the scene, corrections made to the camera path will be lost, and have to be repeated. It is always better to fix the cause of a problem, not the result.
Path Filtering
If you have worked on the trackers to reduce jitter, but still need a smoother path (after checking in your animation package), you can filter the computed camera or object path using the Path Filtering dialog, launched from the Window menu or the Solver panel.

Warning: filtering the path increases the real error, and causes sliding. Remember that your objective is to produce a clean insert in the image, not an artificially smooth camera trajectory that works poorly.

Path filtering is legitimately useful in several difficult situations: first, in object tracking, when the object is small and may have only a few trackers; second, when there is little perspective, or few or noisy trackers, such that there is ambiguity between camera position and orientation; and third, when objects must be inserted that come very close to the camera, where jitter in the solution is very visible.

The path filtering controls permit you to animate the frequency and strength of the filtering applied, and apply it selectively to only some axes. If the camera was on a dolly, you can apply it selectively to the camera height (Z or Y). For object tracking, distance filtering is often the way to go, as it is the object/camera distance that winds up with the most jitter, and that is what is filtered (or locked on the locking control panel).

When path filtering is set up, it is (and must be) re-applied after each solve, whether the solve is "from scratch" or a refine operation.

For a more subtle workflow, do the following:
Solve the shot with no filtering in place,
Open the Path Filtering dialog,
Configure filtering for the translation, distance, or height axis only,
Select the "To Seed Path" option,
Click the Apply Filter Now button,
Turn off filtering on all axes,
Open the Solver Locking panel,
Turn on locks for the axes that you just filtered above,
Run a Refine solver cycle.
The advantage of this workflow is that it applies filtering to one or more axes, then gives you the best solution for the remaining, unfiltered axes, taking into account the results on the filtered axes.

The filter frequency and strength are animated parameters. When you configure filtering, you should animate the parameters to reduce filtering (increase the frequency) at times when the path is changing rapidly, or hits actual bumps. You should increase filtering (reduce the frequency) when the camera or object is nearly stationary, or is moving smoothly. You can animate the strength to blend in and out of filtering, or to make the filtering less effective.
This dialog has a generally systematic organization, with a few exceptions. Each category of trackers has a horizontal row of controls, and the number of trackers in that category is in parentheses after the category name. A tracker can be a member of several categories.

Down the left edge, a column of checkboxes controls whether or not the category of trackers will be fixed. Mostly, trackers are fixed by deleting them, but after you have identified them, you can also adjust them manually if that is appropriate.

When clicked, the Select buttons in the middle select that category of trackers in the viewport. They flash as they are selected, making them easier to find.

At the top of the panel, notice that the Clean-up dialog can work on all the trackers, or only the selected ones. It records the selected trackers as you open the panel, and they are not affected by selecting trackers with these buttons.

At right is a column of spinners that determine the thresholds for whether a tracker is considered to be far-ish, short-lived, etc. The initial values of these thresholds are good starting points, but not the last word.
active. Each bad frame will be marked in red. If you click the bad-frames Select button, the trackers with bad frames are selected in the viewport. This makes the tracks thicker in the squish view, which is also helpful.

If you turn on the Delete checkbox for Bad Frames, there are two choices for how to handle them: Disable and Clear. The Clear option does what happens during Show: it clears out the tracking results so it looks like the frame was tracked, but the feature was not found, resulting in the red section in the squish view. The Disable option re-keys the tracker's enable track, so that there is no attempt to track on those previously-bad frames. There will be no track at all on those frames in the squish view. As a result, the Clear choice is better when you want to see where there were problems, and potentially go back and fix the problems. The Disable option is better when you want to permanently shut down the trackers on those spots.

Be aware that though a frame of a tracker may be bad, and you are better off without it and the glitch it causes, having a missing frame can also create glitches: the higher the overall error (and the poorer the fit), the larger the glitch caused by a missing frame. A missing frame on a fairly unique tracker close to the camera will cause a bigger glitch than a missing frame on a tracker far from the camera that is one of many similar trackers. Manually repairing the bad frames will always produce the best results.

Far-ish Trackers

The tracker clean-up dialog detects trackers with too little parallax for an adequate distance measurement. Consider a tracker 1 meter from the camera, and the camera moving 10 cm to its right. The position of the tracker in the two images from the camera will be different, tens or hundreds of pixels apart. What if the tracker is 1 kilometer from the camera, and the camera moved the same 10 cm? The image position would shift by only a tiny fraction of a pixel, far too little to establish the distance.
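The 1 meter vs. 1 kilometer comparison can be checked with a rough small-angle estimate. This is a back-of-the-envelope sketch, not SynthEyes code; the 1920-pixel width and 50-degree field of view are assumed example values:

```python
import math

# Rough parallax estimate (small-angle approximation): the image shift of a
# point when the camera translates sideways is about
#     focal_length_in_pixels * baseline / distance.
def parallax_pixels(image_width_px, hfov_degrees, baseline, distance):
    # Pinhole-model focal length in pixels from the horizontal field of view.
    focal_px = (image_width_px / 2) / math.tan(math.radians(hfov_degrees) / 2)
    return focal_px * baseline / distance

# 1920-pixel-wide frame, 50-degree horizontal FOV, 10 cm camera move:
print(parallax_pixels(1920, 50.0, 0.10, 1.0))     # ~206 px at 1 m
print(parallax_pixels(1920, 50.0, 0.10, 1000.0))  # ~0.2 px at 1 km
```

Parallax falls off linearly with distance, so the kilometer-away point shifts 1000 times less than the meter-away one: comfortably within tracking noise, hence "far-ish".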
The percentage threshold appears to the right of the high-error trackers line as usual. The hpix error threshold appears underneath it, to the right of the Unsolved/Behind category, an otherwise empty space because that category requires no thresholds.

As an example of the first criterion, consider a tracker that is visible for 20 frames. However, 8 of those frames are bad frames as defined for that category. The percentage of bad frames is 8 out of 20, or 40%, and at the standard threshold of 30% the tracker would be considered high-error and eligible for deletion. Typically such trackers have switched to an adjacent nearby feature for a substantial portion of their lifespan.

Unlocking the User Interface

The clean-up trackers dialog is modal, meaning you cannot adjust any other controls while the dialog is displayed. However, it is often helpful to adjust the user interface with the dialog open, for example to configure the graph editor or to locate a tracker in the viewports.
Back to Reality
Though our story is far-fetched, it is quite a bit more accurate than you might think. Though we'll skip the hundred marching feet, you will be telling SynthEyes exactly how to position the model within the coordinate system. And importantly, if you don't give SynthEyes enough information about how to position the model, SynthEyes will take advantage of the lack of information: it will do whatever it finds convenient, which rarely will be convenient for you. If you give SynthEyes conflicting information, you will get an averaged answer; but if the information is sufficiently conflicting, it might take a long time to provide a result, or even throw up its hands and generate a result that does not satisfy any of the constraints very well.
The alignment line approach is used for tripod-mode and even single-frame lock-off shots. The constrained camera path methods (for experts!) are used when you have prior knowledge of how the shot was obtained from on-set measurements.

One last point: you might wonder which trackers get selected to be constrained: Tracker37, Tracker19, etc. You will pick the trackers to create the coordinate system that you want to see in the animation/compositing package. You must decide what you want! If the shot has a floor and you have trackers on the floor, you probably want those trackers to be on the floor in your chosen coordinate system. Your choice will depend on what you are planning to do later in your animation or compositing package.

It is very important to realize: the coordinate system is what YOU want, to make your job easier. There is no correct answer; there is no coordinate system that SynthEyes should be picking if only it was somehow smarter. They are all the same. The coordinate measuring machine is happy to measure your scene for you, no matter where you put it! You don't need to set a coordinate system up, if you don't want to, and SynthEyes will plough ahead happily. But picking one will usually make inserting effects later on easier. You can do it either after tracking and before solving, or after solving.

Hint: if you will be exporting to a compositing package, note that compositors often measure everything, including 3-D coordinates, in terms of pixels, not inches, meters, etc. Be sure to pick sizes for the scene that will work well in pixels. While you might scale a scene for an actor 2 m tall, if you export to a compositor and the actor is two pixels tall, that will rarely make sense.
Three-Point Method
Here's the simplest and most widely applicable way to set up a coordinate system. It is strongly recommended unless there is a compelling reason for an alternative. SynthEyes has a special button to help make it easy. We'll describe how to use it, and what it is doing, so that you can understand it and modify its settings as needed.
Switch to the Coordinate System control panel. Click the *3 button; it will now read Or.

Pick one tracker to be the coordinate system origin (i.e., at X=0, Y=0, Z=0). Select it in the camera view, 3-D viewport, or perspective window. On the coordinate system panel, it will automatically be changed from Unlocked to Origin. Again, any tracker can be made the origin, but some will make more sense and be more convenient than others.

The *3 button will now read LR (for left/right). Pick a second tracker to fall along the X axis, and select it. It will automatically be changed from Unlocked to Lock Point; after the solution it will have the X/Y/Z coordinates listed in the three spinners. Decide how far you want it to be from the origin tracker, depending on how big you want the final scene to be. Again, this size is arbitrary as far as SynthEyes is concerned. If you have a measurement from the set, and want a physically-accurate scene, this might be the place to use the measurement. One way or another, decide on the X axis position. You can guess if you want, or you can use the default value, 20% of the world size from the Solver panel. Enter the chosen X-axis coordinate into the X coordinate field on the control panel.

The *3 button now reads Pl. Pick a third point that should be on the ground plane. Again, it could be any other tracker, except one on the line between the origin and the X-axis tracker. Select the tracker, and it will be changed from Unlocked to On XY Plane (if you are using a Z-Up coordinate system, or On XZ Plane for Y-up coordinates). This completes the coordinate system setup, so the *3 button will turn off.

The sequence above places the second point along the X axis, running from left to right in the scene. If you wish to use two trackers aligned stage front to stage back, you can click the button from LR (left/right) to FB (front/back) before clicking the second tracker.
In this case, you will adjust the Y or Z coordinate value, depending on the coordinate system setting.

To provide the most accurate alignment, you should select trackers spread out across the scene, not lumped in a particular corner. Depending on your desired coordinate system, you might select other axis and plane settings. You can align to a back wall, for example. For the more complex setups, you will adjust the settings manually, instead of using *3.

You can lock multiple trackers to the floor or a wall, say if there are tracking marks on a green-screen wall. This is especially helpful in long traveling shots. If you are tracking objects on the floor, track the point where the object meets the floor; otherwise you'll be tracking objects at different heights from the floor (more on this in a little).
Size Constraints
As well as the position and orientation of your scene, you need to control the size of the reconstructed scene. There are four general ways to do this:
Suppose you have three trackers that you want to define the back wall (Z-up coordinate system).
1) Go to the coordinate system control panel.
2) If the three trackers are A, B, and C, select B, then hold down ALT (Mac: Command) and click A.
This has nailed down the translation, but rotation only partially (the feet will be busy). You also need to specify another rotation, since B and C can spin freely around A so far (or around the Y axis about any point in the plane). You might have two other trackers, D and E, that should stack up vertically. Select E, Alt/Command-click tracker D, and set it to Parallel to Z Axis (or X axis if they should be horizontal).

Details of Lock Modes

There are quite a few different constraint (lock) modes that can be selected from the drop-down list. Despite the fair number of different cases, they all can be broken down to answering two simple questions: (1) which coordinates (X, Y, and/or Z) of the tracker should be locked, and (2) to what values.

The first question can have one of eight different answers: all the combinations of whether or not each of the three coordinate axes is locked, ranging from none (Unlocked) to all (Lock Point). Rather than listing each of the combinations of which axes are locked, the list really talks about which axis is NOT locked. For example, an X Axis lock really locks Y and Z, leaving X unlocked. Locking to the XZ plane actually locks only Y. The naming addresses WHAT you want to do, not HOW SynthEyes will achieve it.

The second question has three possible answers: (a) to zero, (b) to the corresponding Seed and Lock spinner, or (c) to the corresponding coordinate from the tracker assigned as the Target Point. Answer (c) is automatically selected if a target point is present, while (a) is selected for On lock types, and (b) for Any lock types. Use the Any modes when you have some particular coordinates you want to lock a tracker to, for example, if a tracker is to be placed 2 units above the ground plane.

Watch Out! If you select several trackers, some with targets, some without, the lock type list will be empty. Either select fewer trackers, or right-click the Target button to clear the target tracker setting from all selected trackers.
Here's the total list:
Lock Mode     Axes Locked    To What
Unlocked      None           Nothing
Lock Point    X, Y, Z        Spinners
Origin        X, Y, Z        Zero
On X Axis     Y, Z           Zero
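The two-question breakdown above (which axes are locked, and to what) can be summarized as a small lookup. This is a hypothetical summary for illustration only; the mode names are from the table, and "On XZ Plane" is included per the text's note that it locks only Y:

```python
# Hypothetical summary of the lock-mode semantics described above.
# Each entry records: (axes that ARE locked, what they are locked to).
LOCK_MODES = {
    "Unlocked":    ((),                None),
    "Lock Point":  (("X", "Y", "Z"),  "spinners"),
    "Origin":      (("X", "Y", "Z"),  "zero"),
    "On X Axis":   (("Y", "Z"),       "zero"),   # X itself stays free
    "On XZ Plane": (("Y",),           "zero"),   # locks only Y
}

def locked_axes(mode):
    """Return which coordinate axes a given lock mode constrains."""
    axes, _target = LOCK_MODES[mode]
    return axes

print(locked_axes("On X Axis"))   # ('Y', 'Z')
```

This mirrors the naming convention the manual points out: "On X Axis" names the axis left free, not the ones being locked.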
Configuring Constraints for Tripod-Mode Shots

When the camera is configured in tripod mode, a simpler coordinate-system setup can be used. In tripod mode, no overall sizing is required, and no origin is required or allowed. The calculated scene must only be aligned, though even that is not always necessary. The simplest tripod alignment scheme relies on finding two trackers on the horizon, or at least two that you'd like to make the horizon. Of the two, you assign one to be the X axis, say, by setting it up as a Lock to the coordinates X=100,
This is the view with the recommended (front/back-variant) constraint setup in Z-Up coordinates, as applied to a typical shot, after solving. Only trackers with constraints are listed, along with what they are locked to (coordinates or another tracker). The solved position is shown, along with the 3-D error of the constraint. For example, if a tracker is located at (1,0,0) but is locked to (0,0,0), the 3-D error will be 1. It will have a completely different 2-D error in hpix on the coordinate system panel.

The constrained points view lets you check your constraints after solving, giving you the resulting 3-D errors, or check your setup before solving, without any error available yet. You can select the trackers directly from this view and tweak them with the coordinate system panel displayed.

Upside-down Cameras: Selecting the Desired Solution

Many coordinate system setups can be satisfied in two or more different ways: completely different camera and tracker positions that are camera matches, and satisfy the constraints. To review, the most basic 3-point setup consists of a point locked to the origin and a point locked to a specific coordinate on the X axis, plus a third point locked to be somewhere on the ground plane (XY plane for Z-up). This setup can be satisfied two different ways. If you start from one solution, you can get the other by rotating the entire scene 180 degrees around the X axis. If the camera is upright in the first solution, it will be upside-down in the second. The third point
For a more useful example, consider a collection of trackers for features on the set floor. You can apply constraints telling SynthEyes that the trackers should be on the floor, but some may be paint spots, and some may be pieces of trash a small distance above the floor.

With the Constrain box off, SynthEyes solves the scene, ignoring the constraints, then applies them at the end, only by moving, rotating, and scaling the scene. In the example of trackers on a floor, the trackers are brought onto an average floor plane, without affecting their relative positions. The model is fundamentally not changed by the constraints.

On the other hand, with the Constrain checkbox on, the constraints are applied to each individual tracker during the solving process. Applied to trackers on a floor, the vertical coordinate will be driven towards zero for each and every such tracker, possibly causing internal conflict within the solving process. If you have tracked 3 shadows on the floor, and the center of one tennis ball sitting on the floor, you have a problem. The shadows really are on the floor, but the ball is above it. If all four height values are crunched towards zero, they will be in conflict with the image-based tracking data, which will be attempting to place the tennis ball above the shadows.

You can add poorly chosen locks, or so many locks, that solving becomes slower, due to the additional iterations required, and may even make solving impossible, especially with lens distortion or poor tracking. By definition, there will always be larger apparent errors as you add more locks, because you are telling SynthEyes that a tracker is in the wrong place. Not only are the tracker positions affected, but the camera path and field of view as well, trying to satisfy the constraints. So don't add locks unless they are really necessary.

Generally, it will be safer to leave the Constrain checkbox off, so that solving is not compromised by incorrectly configured constraints.
You will want to turn the checkbox on when using multiple-shot setups with the Indirectly solving method, or if you are working from extensive on-set measurements. It must be on to match a single frame.

Pegged Constraints

With the constraints checkbox on, SynthEyes attempts to force the coordinate values to the desired values. It can sometimes be helpful to force the coordinates to be exactly the specified value, by turning on the Peg button on the tracker's Coordinate system panel.
There are parallel lines under the eaves and window, configured to be parallel to the X axis. Vertical (Z) lines delineate edges of the house and door frame. The selected line by the door has been given a length to set the overall scale. The alignment tool gives you camera placements and FOV for completely locked-off shots, even a single still photograph such as this.

What Lines Do I Need?

The alignment solver can be used after a shot has been solved and a lens field of view (FOV) determined; it might be used without a solve, with a known FOV; or it might be used to determine the lens FOV. In each case it will determine the camera placement as well.

If the FOV is known, either from a solve or an on-set measurement, you will need to set up at least two lines, which must be parallel to two different coordinate axes in 3-D (X, Y, or Z). This means they must not be parallel to each other (because then they would be parallel to the same axis). You may have any number of additional lines.

When the FOV is not known, you must define at least three lines. Two of them must be parallel to each other and to a coordinate-system axis. The third line must be parallel to a different coordinate system axis. You may have additional lines parallel to any of the three coordinate system axes.

Note: SynthEyes permits unoriented lines to be used to help find the lens distortion. Unoriented lines do not have to be aligned with any of the desired
Go to a frame in your sequence that nicely shows the lines you plan to use for alignment. All the lines must be present on this single frame, and this frame number will be recorded in the At nnnf button at the lower-left of the lens panel. You can later return to this frame just by clicking the button. If you later play with some lines on a different frame, and need to change the recorded frame number, right-click the button to set the frame number to the current frame.
Manual Alignment
You can manually align the camera and solved tracker locations if you like. This technique is most useful for tripod-mode shots; it is generally better to set up an accurate coordinate system using the methods above for normal shots.

To align manually, switch to the 3-D control panel and the Quad or Quad Perspective view. Click on the camera (typically Camera01) in one of the viewports to select it, so that it is listed in the dropdown on the 3-D control panel. It will be easiest, though not strictly necessary, to turn on the selection-lock button right underneath the dropdown. Turn on the Whole button on the 3-D control panel, then use the move, rotate, and scale tools to reposition the camera using the viewports. As you do this, not only the camera will move, but its entire trajectory and the tracker locations.

By default, meshes will not be carried along, so that you can import a 3-D model (such as a new building), then reposition the camera and trackers relative to the building's (fixed) position. However, you can turn on Whole affects meshes, on the 3-D viewport or perspective-view right-click menus, and meshes will be moved.

You can use the same technique for moving-object shots, discussed later. In that case, you will usually click the World button to change to Object coordinates; you can then re-align the object's coordinate system relative to the object's trackers (much like you move the pivot point in a 3-D model). As you do this, the object path will change correspondingly to maintain the overall match.
Coord button on the Summary panel, select the 2nd (on-axis) tracker, and in the Coordinate panel, change it from Lock Point (at 20,0,0) to On X Axis or On Y Axis. 5. Solve with Go! on the Solver panel. Note that you can use whatever more complex setup you like in step 4, as long as it completely constrains both the translation and rotation, but not the size.

WARNING: You might be tempted to think "Hmmm, the camera is on a dolly, so the entire path must be exactly 43 inches off the floor, let me set that up!" (by not turning U/D back off). But this is almost always a bad idea! The obvious problem is that the dolly track is never really completely flat and free of bumps. If the vertical field of view is 2 meters, and you are shooting 1080i/p HDTV, then roughly your track must be perfectly flat to 1 millimeter or so to have a sub-pixel impact. If your track is that flat, congratulations. The conceptually more subtle, but bigger-impact, problem is this: a normal tripod head puts the camera lens very far from the center of rotation of the head, roughly 1 foot or 0.25 meter. As you tilt the head, the height of the camera rises and falls by up to that much! Unless your camera does not tilt during the shot, or you have an extra-special nodal-pan head, the camera height will change dramatically during the shot.

A Straight Dolly Track Setup

If your camera rides a straight dolly track, you can use the length of that track to set the scale, and almost the entire coordinate system if desired. While the camera-height measurement setup discussed above is simpler, it is appropriate mainly for a studio environment with a flat floor. The dolly track setup here is useful when a dolly track is set up outdoors in an environment with no clearly-defined ground plane, in front of a hillside, say. For this setup, you should measure the distance traveled by the camera head down the track, by a consistent point on the camera or tripod.
For example, if you have a 20' track, the camera might travel only 16' or so, because there will be a 2' dead zone at each end due to the width of the tripod and dolly. Measure the starting/ending position of the right front wheel, say. Next, clear any solved path (or click View/Show seed path), and animate the camera motion, for example moving from 0,0,0 at the beginning of the shot to 16,0,0 at the end (or wherever it reaches the maximum, if it comes back). You now have two main options: A) mostly tracker-based coordinate setup, or B) mostly dolly-based coordinate setup, for side-of-hillside shots.
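The figures in the height warning above are easy to verify with a little arithmetic. The numbers come from the text; the specific tilt angles below are just illustrative examples:

```python
import math

# Numbers from the text: ~0.25 m from the tilt axis to the lens, and a
# 2 m vertical field of view filmed at 1080-line HD.
lens_offset_m = 0.25
vertical_fov_m = 2.0
lines = 1080

# One scanline corresponds to this much scene height, so bumps in the
# dolly track must stay below roughly this amount to remain sub-pixel:
per_line_mm = vertical_fov_m / lines * 1000
print(f"{per_line_mm:.2f} mm of scene height per scanline")  # ~1.85 mm

# Tilting the head moves the lens vertically by about offset * sin(tilt):
for tilt_deg in (5, 10, 20):
    drop_mm = lens_offset_m * math.sin(math.radians(tilt_deg)) * 1000
    print(f"{tilt_deg:2d} deg tilt -> lens height changes ~{drop_mm:.0f} mm")
```

Even a modest 5-degree tilt moves the lens by roughly 20 mm, an order of magnitude more than the millimeter-level flatness a sub-pixel constraint assumes, which is why locking the whole path to one height is usually a bad idea.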
Both hard and soft locks operate at full effect all the time, regardless of the state of the Constrain checkbox on the solver panel. As with path constraints, field-of-view constraints affect the solution as a whole. If you have a spike in the field of view track on a particular frame, adding a constraint on that single frame will not do what you probably expect. All the trackers' locations will be affected, and you will have the same spike, but in a slightly different location. This is not a bug. Instead, you need to also key surrounding frames. In all cases, identifying and correcting the cause of the spike will be a better approach if possible. If the lens zooms intermittently, you can determine an average zoom value for each stationary portion of the shot, and lock the field of view to that value.
In order for a tracker's seed point to be dragged and used for spinal alignment, it must be set to Lock Point mode.

Spinal Align Mode

In Align mode, your path is moved around as the coordinate system is repeatedly updated, but the shape of the path and the relationship to the trackers is not affected. The RMS error of the solve is unchanged. This can be a nice way to help get that specific coordinate-system alignment you want; it allows a mixture of object and tracker constraints. You can use a combination of locks on the camera and on trackers in the scene. As you drag the camera or tracker, the alignment will be repeatedly recalculated. Use the figure-of-merit value to keep track of whether you have an overconstrained setup: the value is normally very small, such as 0.000002. If it rises much above that, you don't have a minimal set of constraints (typically it reaches 0.020 to 0.050). That is not a problem, unless you begin solving with the Constrain checkbox on. Note that all camera and object locks are treated as hard locks by the alignment software.

Spinal Solve Mode

In Solve mode, you are changing the solve itself, generally by adding constraints on the camera path, then re-solving. The RMS error will always get worse! But it lets you interactively repair weak spots in your solve. The spinal solve performs a Refine operation on your existing solution, meaning that it makes small changes to that solution. If the constraints you add after the initial solve, either directly or by dragging with the spinal solve mode, change the solution too much, then you will get a solution that is the best solution near the old solution, rather than the best overall solution, which you would obtain by starting the solve from scratch (i.e., Automatic solving mode). To maintain interactive response rates, the spinal solve panel allows you to terminate the refine operation early; while dragging, you're just going to be changing things again anyway.
When you stop dragging, SynthEyes will perform a last refine cycle to allow the refine to complete, although you can also keep it from taking too long. After you've been moving around for a bit, especially if your solves are not completing all the way, you can click the Finish button to launch a final normal Refine cycle (Finish is the same as the Go button on the solver panel). Spinal editing might be used in especially subtle ways on long shots. Match-moving inherently produces local rate-of-change measurements, and small random errors (often amplified by small systematic effects such as lens
Zero-Weighted Trackers
Suppose you had a visual feature you were so unsure of, you didn't want it to affect the camera (or object) path and field of view at all. But you wanted to track it anyway, and see what you got. You might have a whole bunch of leaves on a tree, say, and hope to get a rough cloud for it.

You could take your tracker, and try bringing its Weight in the solution down to zero. But that would fail, because the weight has a lower limit of 0.05. As the weight drops and the tracker has less and less effect, there are some undesirable side effects, so SynthEyes prevents it. Instead, you can click the zero-weighted-tracker (ZWT) button on the tracker panel, which will (internally) set the weight to zero. The undesirable side effects will be side-stepped, and a new capability emerges.

ZWTs do not affect the solution (camera or object path and field of view, and normal tracker locations), and cannot be solved until after an initial solution has been obtained. ZWTs are solved to produce their 3-D position at the completion of normal solving.

Tip: There is a separate preference color for ZWTs. Though it is normally the same color as other trackers, you can change it if you want ZWTs to stand out automatically.

Importantly, ZWTs are automatically re-solved whenever you change their 2-D tracking, the camera (or object) path, or the field of view. This is possible because the ZWT solution will not affect the overall solution. It makes possible a new post-solving workflow.
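Conceptually, solving a ZWT once the camera path and field of view are already known reduces to intersecting the per-frame sight rays in 3-D. The sketch below (plain NumPy; an invented function, not SynthEyes code) finds the least-squares point nearest a set of rays:

```python
import numpy as np

def triangulate(origins, directions):
    """Least-squares 3-D point closest to a set of camera rays.

    Illustrates the kind of computation a post-solve ZWT calculation
    performs: each frame's 2-D measurement defines a ray from the known
    camera position, and the tracker's 3-D position is the point
    nearest all of them. Purely illustrative.
    """
    a = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = np.asarray(d, float)
        d = d / np.linalg.norm(d)
        # Projector onto the plane perpendicular to this ray's direction.
        p = np.eye(3) - np.outer(d, d)
        a += p
        b += p @ np.asarray(o, float)
    # Solve the stacked normal equations for the nearest point.
    return np.linalg.solve(a, b)

# Two rays that intersect at (1, 1, 0):
pt = triangulate([(0, 0, 0), (2, 0, 0)], [(1, 1, 0), (-1, 1, 0)])
print(np.round(pt, 6))  # [1. 1. 0.]
```

Because this computation only reads the camera path and FOV, and never writes them, it can be redone cheaply whenever the 2-D track changes, which is exactly why ZWTs can be re-solved automatically without disturbing the main solution.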
you were able to use the ZWT capability to help do the tracking better and quicker.
Juicy Details
ZWTs don't have to be only on a camera; they can be attached to a moving object as well. You can also configure Far ZWTs. The ZWT calculation respects the coordinate system constraints: you can constrain Z=0 (with On XY Plane) to force a ZWT onto the floor in Z-up mode. A ZWT can be partially linked to another tracker on the same camera or object. It doesn't make sense to link to a tracker on a different object, since such links are always in all 3 axes, overriding the ZWT calculation. Distance constraints are ignored by ZWT processing.

If you have a long shot and a lot of ZWTs and must recalculate them often (say, by interactively editing the camera path), it is conceivable that the ZWT recalculation might bog down the interactive update rate. You can temporarily disable ZWT recalculation by turning off the Track/ZWT auto-calculation menu item. They will all be recalculated when you turn it back on.
Stereo ZWTs
A stereo pair of trackers can be made into a stereo ZWT by changing either tracker to a ZWT; the other member will be changed automatically. A stereo ZWT pair can produce a position from as little as a single frame in each camera. After a solve has been produced, for true moving-camera shots, you can track in one camera, make the pair into a ZWT, and then have an excellent idea where the tracker will be in the other camera, potentially simplifying tracking.
The Add many trackers dialog, below, provides a wide range of controls to allow the best and most useful trackers to be created. You can run the dialog repeatedly to address different issues. You can also use the Coalesce Nearby Trackers dialog to join multiple disjointed tracks together: the sum is greater than the parts!
When the dialog is launched from the Track menu, it may spend several seconds busily calculating all the trackers that could be created, and it saves that list in a temporary store. The number of prospective trackers is listed as the Available number, 2754 above. By adjusting the controls on the dialog, you control which of these prospective trackers are added to the scene when you push the Add button. At most, the Desired number of trackers will be added.

Basic Tracker Requirements

The prospective trackers must meet several basic requirements, as described in the requirements section of the panel. These include a minimum length (measured in frames) and an amplitude, plus average and peak errors. The amplitude is a value between zero and one, describing the change in brightness between the tracker center and background. Larger values will require more pronounced trackers. The error numbers measure the distance between the 2-D tracker position and the computed 3-D position of the tracker, mapped back into the image. The average error limits the noisiness and jitter in the trackers, while the
peak error limits the largest glitch error. Notice that these controls do not change any trackers, but instead select which of the prospective trackers are actually selected for addition.

To a Range of Frames

To add trackers in a specific range of frames in the shot, set up that region in the Frame-Range Controls: from a starting frame to an ending frame. Then, set a minimum overlap: how many frames each prospective tracker must be valid within this range of frames. For example, if you have only a limited number of trackers between frames 130 and 155, you would set up those two as the limits, and set the minimum overlap to 25 at most, perhaps 20.

To an Area in 3-D Space

To add trackers in a particular 3-D area of the scene, open the camera view, and go to a frame that makes the region needing trackers clearly visible. Lasso the region of interest; it does not matter if there are any trackers there already or not. The lassoed region will be saved. (Fine point: the frame number is also saved, so it does not matter if you change frames afterwards.) Open the Add many trackers dialog, and turn on the Only within last Lasso checkbox. The only trackers selected will be those where the 3-D point falls within the lassoed area, on the frame at which the lasso occurred.

Zero-Weighted vs Regular Trackers

Once all the criteria have been evaluated, and a suitable set of trackers determined, hitting Add will add them into the scene. There are several options to control this (which should be configured before hitting Add). The most important decision to make is whether you want a ZWT or a regular tracker. Intrinsically, the Add many trackers dialog produces ZWTs, since it has already computed the XYZ coordinates as part of its sanity-checking process. By using ZWTs, you can add many more trackers without appreciably affecting the re-solve time if you later need to change the shot.
So using ZWTs is computationally very efficient, and is an easy way to go if you need more trackers to build a mesh from. On the other hand, if you need additional trackers to improve the quality of the track, by adding more trackers in an under-populated region of 3-space or range of frames, then adding ZWTs will not help, since they do not affect the overall camera solution. Instead, check the Regular checkbox, and ordinary trackers will be created, still pre-solved with their XYZ coordinates. You can solve again using Refine mode, and the camera path will be updated, taking into account the new trackers. If you add hundreds or thousands of regular trackers, the solve time will increase substantially. Designed for the best camera tracking, SynthEyes is most efficient for long shots, not for thousands of trackers. To see why this choice was made, note that even if all the added trackers are of equal quality, the solution
accuracy increases much more slowly than the rate at which trackers are added. You can use some of the trackers for the solve, and keep the rest as ZWTs.

Other New Tracker Properties

Normally, you will want the trackers to be selected after they are added, as that makes it easy to change them, see which were added, etc. If you do not want this, you can turn off the Selected checkbox. Finally, you can specify a display color for the trackers being added by selecting it with the color swatch and turning on the Set color checkbox. That will help you identify the newly-added trackers, and you can re-select them all again later using the Select same color item on the Edit menu. It may take several seconds to add the trackers, depending on the number and length of trackers. Afterwards, you are free to add additional trackers to address other issues if you like; the ones already added will not be duplicated.
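The requirement controls described above amount to a filter over the list of prospective trackers. Here is a hedged sketch of that selection logic; all field names, thresholds, and the tie-breaking order are invented for illustration and are not SynthEyes' actual criteria:

```python
from dataclasses import dataclass

@dataclass
class Prospect:
    """Hypothetical record for one prospective tracker."""
    length: int       # frames tracked
    amplitude: float  # 0..1 contrast against the background
    avg_err: float    # mean 2-D reprojection error, in pixels
    peak_err: float   # worst single-frame error, in pixels

def select_trackers(prospects, min_length=10, min_amp=0.1,
                    max_avg=0.5, max_peak=2.0, desired=100):
    """Keep only prospects meeting the basic requirements, capped at
    the Desired count (a sketch of the dialog's behavior)."""
    ok = [p for p in prospects
          if p.length >= min_length and p.amplitude >= min_amp
          and p.avg_err <= max_avg and p.peak_err <= max_peak]
    # Prefer the longest, lowest-error candidates up to the Desired count.
    ok.sort(key=lambda p: (-p.length, p.avg_err))
    return ok[:desired]

pool = [Prospect(40, 0.3, 0.2, 1.0),   # passes every requirement
        Prospect(5, 0.4, 0.1, 0.5),    # too short
        Prospect(30, 0.05, 0.2, 1.0),  # too faint
        Prospect(25, 0.2, 0.9, 3.5)]   # too noisy
print(len(select_trackers(pool)))  # 1
```

As in the dialog, nothing here modifies the trackers themselves; the controls only decide which prospects make it into the scene.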
When you open the dialog, you can adjust the controls (described shortly) and then click the Examine button. SynthEyes will evaluate the trackers and select those to be coalesced, so that you can see them in the viewports. The text field, reading (click Examine) in the screen capture above, will display the number of trackers to be eliminated and coalesced into other trackers. At this point, you have several main possibilities: 1. click Coalesce to perform the operation and close the panel; 2. adjust the controls further, and Examine again; 3. close the dialog box with the close box (X) at top right (circle at top left on Mac), then examine the to-be-coalesced trackers in more detail in the viewports; or 4. Cancel the dialog, restoring the previous tracker selection set.

If you are unsure of the best control settings to use, option 3 will let you examine the trackers to be coalesced carefully, zooming into the viewports. You can then open the Coalesce Nearby Trackers dialog again, and either adjust the parameters further, or simply click Coalesce if the settings are satisfactory.

What Does Nearby Mean?

The Distance, Sharpness, and Consistency controls all factor into the decision whether two trackers are close enough to coalesce. It is a fairly complex decision, taking into account both 2-D and 3-D locations, and is not particularly amenable to human second-guessing. The controls are pretty straightforward, though. As an aside, it might seem that all that is needed is to measure the 3-D distance between the computed tracker points, and coalesce them if the points are within a certain distance measured in 3-D (not in pixels). However, this simplistic approach would perform remarkably poorly, because the depth uncertainty of a tracker is often much larger than the uncertainty in its horizontal
image-plane position. If the distance was large enough to coalesce the desired trackers, it would be large enough to incorrectly coalesce other trackers. Instead, SynthEyes uses a more sophisticated and compute-intensive approach, which is evaluated over all the active frames of the trackers.

The first and most important parameter is the Distance, measured in horizontal pixels. It is the maximum distance between two trackers that can be considered for coalescing. If they are further apart than this in all frames, they will definitely not be coalesced. If they are closer some of the time, they may be coalesced, increasingly likely the closer they are.

The second most important parameter, the Consistency, controls how much of the time the trackers must be sufficiently close, compared to their overall lifetime. So, very roughly, at 0.7 the trackers must be within the given distance on 70% of the frames. If a track is already geometrically accurate, the consistency can be made higher, but if the solution is marginal, the consistency can be reduced to permit matches even if the two trackers slide past one another.

The third parameter, Sharpness, controls the extent to which the exact distance between trackers affects the result, versus the fact that they are within the required Distance at all. If Sharpness is zero, the exact distance will not matter at all, while at a sharpness of one (the maximum), if the trackers are at almost the maximum distance, they might as well be past it. Sharpness can be used to trade off some computer time versus quality of result: a small distance and low sharpness will give a faster but less precise result. Settings with a larger distance and larger sharpness will take longer to run but produce a more carefully-thought-out result, though the two sets of results may be very similar most of the time, because the larger sharpness will make the larger distance nearly equivalent to the smaller distance and low sharpness.
If you are handling a shot with a lot of jitter in the trackers, due to large film grain or severe compression artifacts, you should decrease the sharpness, because those small differences in distance are in fact meaningless.

What Trackers Should Be Coalesced?

Three checkboxes on the coalesce panel control what types of trackers are eligible to be coalesced. First, you can request that Only selected trackers be coalesced. This allows you to lasso-select a region where coalescing is required. (Note: if you only need 2 particular trackers coalesced, for sure, use Track/Combine Trackers instead.) Second, frequently you will only want to coalesce auto-trackers, or trackers created by the Add Many Trackers dialog. By default, supervised non-zero-weighted trackers are not eligible to be coalesced. This prevents your carefully-constructed supervised trackers from inadvertently being changed. However, you can turn on the Include supervised non-ZWT trackers checkbox to make them eligible.
SynthEyes will also generally coalesce only trackers that are not simultaneously active: for example, it might coalesce two trackers that are valid on frames 0-10 and 15-25, respectively, but not two trackers that are valid on frames 0-10 and 5-15. If both are autotrackers and simultaneously active, they cannot be tracking the same thing. The exception to this is if they are a large autotracker and a small one, or an autotracker and a supervised tracker. To combine overlapping trackers, turn off the Only with non-overlapping frame ranges checkbox. A satisfactory approach might be to coalesce once with the checkbox on, as is the default, then open the dialog again, turn the checkbox off, and Examine the results to see if something worth coalescing turns up.

An Overall Strategy

Although we have talked as if SynthEyes only combines two trackers, in fact SynthEyes considers all the trackers simultaneously, and can merge three or more trackers into a single result in one pass. It is possible that immediately coalescing a second time may produce additional results, but this is probably sufficiently rare to make it unnecessary in routine use. However, after you coalesce trackers, it will often be helpful to do a Refine solving cycle, then coalesce again. After the first coalesce, the refine cycle will have an improved geometric accuracy due to the longer tracker lifetimes. With the improved geometry, additional trackers may now be stable enough to be determined to be tracking the same feature, permitting a coalesce operation to combine them, and the cycle to repeat. Viewing this pattern in reverse, observe that a broader distance specification will be required initially, when trackers on the same feature may be calculated at different 3-D positions.
This is particularly relevant to green-screen shots, where the comparatively small number of trackable features and their frequently short lifetimes, due to occlusion by the actors, can result in higher-than-usual initial geometric inaccuracy. Because the green-screen tracking marks are generally widely separated, there is little harm in increasing the allowable coalesce Distance. The features can then be coalesced properly, and the Refine cycle will then rectify the geometry. The process can be repeated as necessary. If you are using Add Many Trackers and then coalescing and refining, you should turn on the Regular, not ZWT checkbox on the Add Many dialog, so that the added trackers will affect the Refine solution.
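To make the Distance/Consistency/Sharpness interplay concrete, here is a toy version of such a closeness test. The scoring function is invented for illustration and is not SynthEyes' actual computation:

```python
def close_enough(track_a, track_b, distance=3.0, consistency=0.7,
                 sharpness=0.5):
    """Decide whether two 2-D tracks are candidates for coalescing.

    A loose sketch of the Distance/Consistency/Sharpness idea: on each
    frame where both tracks exist, score how close they are (sharpness
    shapes the falloff), then require the average score over the common
    frames to beat the consistency threshold.
    Tracks are dicts mapping frame -> (x, y) in pixels.
    """
    common = set(track_a) & set(track_b)
    if not common:
        return False
    total = 0.0
    for f in common:
        ax, ay = track_a[f]
        bx, by = track_b[f]
        d = ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5
        if d >= distance:
            score = 0.0
        else:
            # sharpness = 0: any distance under the limit counts fully;
            # sharpness = 1: the score fades to zero at the limit.
            score = 1.0 - sharpness * (d / distance)
        total += score
    return total / len(common) >= consistency

a = {f: (f * 1.0, 0.0) for f in range(10)}
b = {f: (f * 1.0 + 0.5, 0.0) for f in range(10)}  # always 0.5 px away
print(close_enough(a, b))  # True
```

Note how this captures the text's point about jittery footage: lowering the sharpness makes sub-threshold distance differences matter less, so grain-induced jitter no longer penalizes a genuine match.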
Perspective Window
The perspective window is an essential tool for checking a solved scene, building meshes, and extracting textures. The perspective window allows you to go into the scene to view it from any direction. Or, you can lock the perspective view to the tracked camera view. You can build a collection of test or stand-in objects to evaluate the tracking. Later, we'll see that it enables you to assemble tracker locations into object models as well. The perspective window is controlled by a large right-click menu, where many different mouse modes can be selected, as well as various one-shot tools. The middle mouse button can always be used for general navigation. The left mouse button may be used instead, with the same variations, when the Navigation mode is selected.
Freeze Frame
Each perspective view can be independently disconnected from the main user interface time slider, frozen on a particular frame. This can be useful to view a shot from two different frames simultaneously (to link trackers from different parts of the same shot), or to view two shots with different lengths
simultaneously and with some independent control. That is especially helpful for multi-shot tracking, where the reference shot is only a few frames long. See View/Freeze on this frame on the right-click menu. Using the normal A, s, d, F, period, and comma accelerator keys within a frozen perspective window will change the frozen frame, not the main user interface time. To update the main user interface time from within the perspective window, use the left and right arrows (or move outside the perspective window!). To re-set the frozen time to the current time, hit View/Freeze on this frame again. To unfreeze, use View/Unfreeze. The Scrub mouse mode will scrub the frozen frame number, or the normal frame number, depending on whether or not the frame is frozen.
Stereo Display
SynthEyes can display anaglyph stereo images; on the right-click menu, select View/Stereo Display. If it is a stereo shot, both images will be displayed if the image is enabled. If it is not a stereo shot, SynthEyes will artificially create two views for the stereo display. See the settings on View/Perspective View Settings. They include the inter-ocular distance and vergence distance, plus the type of glasses you have. (You should look for glasses that strongly reject the unwanted colors; some paper glasses are best!) The normal anaglyph views still show the colors in the original images, though if you select the Luma versions, you will get a gray-scale version of the scene for each eye; some people prefer this for assessing depth.
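A basic red/cyan anaglyph is built by taking the red channel from the left eye's image and the green and blue channels from the right eye's. This classic mix (not necessarily the exact one SynthEyes uses) can be sketched with NumPy:

```python
import numpy as np

def anaglyph(left, right):
    """Compose a simple red/cyan anaglyph from left/right RGB images.

    Illustrative only: red comes from the left eye, green and blue
    from the right. Inputs are HxWx3 arrays of matching shape.
    """
    out = right.copy()
    out[..., 0] = left[..., 0]   # red channel from the left eye
    return out

left = np.zeros((2, 2, 3), dtype=np.uint8)
left[..., 0] = 200               # a reddish left image
right = np.zeros((2, 2, 3), dtype=np.uint8)
right[..., 2] = 150              # a bluish right image
combo = anaglyph(left, right)
print(combo[0, 0])  # [200   0 150]
```

Viewed through red/cyan glasses, the red filter passes mostly the left image and the cyan filter mostly the right, which is why glasses that strongly reject the unwanted colors give the cleanest separation.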
Navigation
SynthEyes offers four basic ways to reposition the camera within the 3-D environment: pan (truck), look, orbit, and dolly in/out. You can select which motion you want in several different ways: by using the control and ALT keys, by clicking one of the navigation mode buttons on the Mouse toolbar overlay first, or by dragging within one of the mouse buttons. You can navigate at any time by using the middle mouse button, or with the left mouse button when the navigation mode is selected.

Navigating with the Mouse Toolbar

The simplest form of navigation uses the Pan, Look, Orbit, and Dolly buttons on the Navigate toolbar, which you can re-open if needed by right-clicking and selecting Toolbars/Mouse. You can click on one of the four buttons (Pan, Look, Orbit, and Dolly) to select that mode. It will turn light blue, as the active mouse mode. Left-clicking and dragging in the viewport will then create the corresponding camera motion. Alternatively, you can drag within the button itself to create the corresponding motion without permanently changing the mode. This is good for quickly doing something other than the current mode, without changing the mode, for example, a quick Look while you have been panning.
You can also navigate using the middle mouse button. When the mouse mode is set to something OTHER than Navigate, one of the Pan, Look, Orbit, and Dolly buttons will still remain lit in a tan color. That color indicates the action the middle mouse button will create, when depressed in the rest of the window. Similarly, you can middle-click one of the buttons to change the action of the middle mouse button, or middle-drag within the button to temporarily create a motion without changing the mode.

Navigating with Control and ALT

The idea behind using Control and ALT is that you can change the motion at any time, as you are doing it, by pressing or releasing a key, while keeping the mouse moving. That makes it a lot faster than switching back and forth between various tools and then moving the camera. When neither control nor ALT is pressed, dragging will pan the display (truck the camera sideways or up and down). Control-dragging will cause the camera to look around in different directions, without translating. Control-ALT-dragging will dolly the camera forwards or backwards. ALT-dragging will cause the camera to orbit.

More Navigation Details

The center of the orbit will be selected from the first (lowest-numbered) applicable item in the following list: 1. the net center of all selected trackers, or in place mode, about their seed point; 2. the net center of all selected meshes; 3. the center of any selected object or light; 4. the net center of all selected vertices in the current edit mesh (more on that later); or 5. a point in space directly ahead of the camera.

HINT: if you are trying to orbit around selected vertices and it is not working as expected, do a control-D/command-D to clear the current (tracker and mesh) selection. You can see which motion will happen by looking at the text display at the top-left of the perspective window. The shift key will create a finer motion of any of these types.
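The orbit-center priority list is a straightforward first-match rule. The sketch below illustrates it; the argument names and the fallback "ahead" distance are invented for this example:

```python
def orbit_center(selected_trackers, selected_meshes, selected_object,
                 selected_vertices, camera_pos, camera_fwd, ahead=5.0):
    """Pick an orbit pivot using a first-match priority rule: selected
    trackers, then meshes, then a single object, then edit-mesh
    vertices, then a point ahead of the camera (a sketch only)."""
    def mean(points):
        n = len(points)
        return tuple(sum(p[i] for p in points) / n for i in range(3))

    if selected_trackers:
        return mean(selected_trackers)
    if selected_meshes:
        return mean(selected_meshes)
    if selected_object:
        return selected_object
    if selected_vertices:
        return mean(selected_vertices)
    # Fall back to a point in space directly ahead of the camera.
    return tuple(camera_pos[i] + ahead * camera_fwd[i] for i in range(3))

# Nothing selected except two edit-mesh vertices -> orbit about their center:
print(orbit_center([], [], None, [(0, 0, 0), (2, 0, 0)],
                   (0, 0, 0), (0, 1, 0)))  # (1.0, 0.0, 0.0)
```

This first-match structure also explains the hint above: leftover tracker or mesh selections sit higher in the list than edit-mesh vertices, so they win until you clear them.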
The mouse's scroll wheel will dolly in and out if the perspective window is not locked to the camera; if it is locked, it will change the current time, and shift-scrolling will zoom the time bar. If you hold down the Z or ' (apostrophe/double-quote) key when left-clicking, the mouse mode will temporarily change to be navigation mode; the mode will switch back when the mouse button is released. You can also switch to
navigate mode using the N key. So it is always easy and quick to navigate, without having to move the mouse to select a different mode.

Zooming and Panning in 2-D

The navigation modes described above correspond to physically moving or rotating the camera. Sometimes we wish to zoom into the image itself, as we can do with the camera view, which does not correspond to any camera motion at all. You can zoom into the image by selecting the Zoom mouse mode, then dragging in the viewport (or in the button itself). Or, use right-drag to zoom as in the camera view. The Pan 2D mode then allows you to pan within the zoomed-in image. A 2-D pan and a 3-D pan can be very difficult to distinguish, and confusing. Whenever the image has been zoomed into, SynthEyes displays a wide pinkish border around the entire image, and the normal Pan, Look, Orbit, and Dolly buttons are disabled. Clicking any of those buttons will reset the zoom so that the entire image is displayed. You can also reset the zoom by right-clicking the zoom button.

Other Mouse Toolbar Buttons

The mouse toolbar also contains a Scrub mouse mode button and a Lock button. The Scrub mode allows you to move rapidly through the entire shot, independent of the main toolbar, even if Freeze Frame is engaged in this window. The lock button provides a quick way to turn the lock (background image) on and off. Note that you may see a slight rotation as you turn it off, as an unlocked perspective view always keeps the camera rotation angle at zero.
Most items are described as part of the perspective right-click menus. The mouse mode items have additional functionality, as described above. The Paint toolbar contains a few extra slots that are the parameters of the paint operations, such as the width and opacity function of the brush. Using the Save as Defaults item on the Toolbars submenu will store the current toolbar positions and status for use as defaults for following SynthEyes runs in the per-user pertool11.xml file. Knowledgeable users can hand-edit the pertool11.xml file to alter or add new toolbars if they follow the format carefully.
Creating Objects
Create objects on the perspective window grid with the Create mesh object mode. Use the 3-D panel or right-click menu to control what kind of object is created. Selecting an object type from the right-click menu launches the creation mode immediately. If the SynthEyes user interface is set so that a moving object is active on the Shot menu, the created object will be attached to that object. Use the # button on the 3-D panel to adjust the number of segments of the selected standard geometric object (cubes, cylinders, etc). The Duplicate Mesh script on the Script menu can clone and offset a mesh, so you can create a row of fence posts quickly, for example. The Duplicate Mesh on Trackers script can copy a tree mesh onto each tracker on a hillside, for example.
Importing Objects
You can import OBJ or DXF meshes into SynthEyes using File/Import Mesh. You can later discover the original source of the meshes using File/Scene Information (meshes subsection). If you later change the source mesh file, you can reload it inside the SynthEyes scene, without having to re-establish its other settings, by selecting it and clicking the Reload button on the 3-D Panel.
Opacity
It can be helpful to make one or more meshes partially transparent; this can be achieved using the opacity spinner on the 3-D Panel, which ranges from an opacity of zero (fully transparent and invisible) to an opacity of one (fully opaque; the default). The opacity setting affects the mesh in the perspective view and in the camera view only if the OpenGL camera view is used. See View/OpenGL Camera View and the preference Start with OpenGL camera view. The OpenGL camera view is the default on Mac OS X.
Note that while in an ideal world the opacity setting would simulate turning a solid mesh into an attenuating solid, in reality opacity is simulated using some fast but surface-based alpha compositing features in OpenGL. Depending on the situation, including other objects in the scene, the transparent view may differ substantially from what a true attenuating solid would produce, but generally the effect generated should be quite satisfactory for helping understand the scene.
Grid Operations
The perspective window's grid is used for object creation and mesh editing. It can be aligned with any of the walls of the set: floor, back, ceiling, etc.
A move-grid mode translates the grid, while maintaining the same orientation, to give you a grid 1 meter above the floor, say. A shared custom grid position can be matched to the location of several vertices or trackers using the right-click|Grid|To Facets/Verts/Trackers menu item. If 3 or more trackers (or vertices) are selected, the grid is moved into the plane defined by the best-fitting approximation to them (ignoring outliers). If two are selected, the grid is rotated to align the side-to-side axis along the two. If one is selected, the grid slides to put that tracker at the origin.

To take advantage of this, you should first select 3 or more vertices or trackers, and align once to set the overall plane. Then select only one of them, and align again to set it to be the origin. Select two of them, and align again to spin the plane so the X axis is parallel to the two. (You can do it in the order 3+, 1, 2 or 3+, 2, 1 ... the 3+ should always be first!)

You can easily create an object on the plane defined by any 3 (or more) trackers by selecting them, aligning the grid to the trackers, then creating the object, which will be on the grid. You can toggle the display of the grid using the Grid/Show Grid menu item, or the G key.
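The plane alignment described above is ordinary best-fit plane fitting. As a rough illustration of the underlying math only (not SynthEyes' actual algorithm, which also rejects outliers), a least-squares plane through three or more points can be computed with a singular value decomposition:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through 3+ points: returns (centroid, unit normal)."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # The plane normal is the right singular vector associated with the
    # smallest singular value of the centered point cloud.
    _, _, vt = np.linalg.svd(pts - centroid)
    return centroid, vt[-1]

# Four trackers on the ground plane: the normal comes out as (0, 0, +/-1).
c, n = fit_plane([(0, 0, 0), (2, 0, 0), (0, 2, 0), (2, 2, 0)])
```

The centroid and normal fully determine the grid plane; the two-tracker and one-tracker alignment cases then only rotate and translate within that plane.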
Shadows
The perspective window generates shadows to help show tracking quality and preview how rendered shots will ultimately appear. The 3-D panel includes control boxes for Cast Shadows and Catch Shadows. Most objects (except for the Plane) will cast shadows by default when they are created.

If there are no shadow-catching objects, shadows will be cast onto the ground plane. This may be more or less useful, depending on your ground plane; if the ground is very irregular or non-existent, this will be confusing. If there are shadow-catching objects defined, shadows will be cast from shadow-casting objects onto the shadow-catching objects. This can preview complex effects such as a shadow cast onto a rough terrain.

Shadows may be disabled from the main View menu, and the shadow black level may be set from the Preferences color settings. The shadow enable status is sticky from one run to the next, so that if you do not usually use it, you will not have to turn it off each time you start SynthEyes.

Note that as with most OpenGL fast-shadow algorithms, there can be shadow artifacts in some cases. Final shadowing should be generated in your 3-D rendering application. Note that the camera viewport does not display shadows by design.
Edit Mesh
The perspective window allows meshes to be constructed and edited, as discussed in Building Meshes from Tracker Positions. One mesh can be selected as the edit mesh at any time: select a mesh, then right-click Set Edit Mesh or hit the M key. A mesh's status as the edit mesh is independent of its status as a selected mesh. To have no edit mesh, clear the overall selection by clicking in empty space, then right-click Set Edit Mesh (i.e., setting it to nothing).
Preview Movie
After you solve and add a few test objects, you can render a test Quicktime movie (except on Win64) or a BMP, Cineon, DPX, JPEG, OpenEXR, PNG, SGI, or Targa sequence (also TIFF on Mac). While the RAM-based playback is limited by the amount of RAM, and has a simplified drawing scheme to save time, the preview movie supports anti-aliasing, and its playback can later run at the full rate regardless of length.

Right-click in the perspective window to bring up the menu and select the Preview Movie item; a dialog allows the output file name, compression settings, and various display control settings to be set. Usually you will want to select square-pixel output for playback on computer monitors in Quicktime; it will convert 720x480 source to 640x480, for example, so that the preview will not be stretched horizontally.

If you are making a Quicktime movie, be sure to bring up the compression settings and select a codec; Quicktime has no default and may crash if you do not select something. Also, different codecs have their own parameters and requirements. Important tip: the H.264 codec requires that the Key every N frames checkbox be off, and the limit data-rate to 90 kb/sec checkbox be off; otherwise there will be only one frame. Similarly, image files used in a sequence may have their own settings dialog.

Note that image sequences written from the Preview Movie are always 8 bits/channel with no alpha. You can re-write image sequences at 16 bits, including an alpha channel, using the Image Preprocessor (again depending on details of the source and output file format).
Technical Controls
The Scene Settings dialog contains many numeric settings for the perspective view, such as near and far camera planes, tracker and camera icon sizes, etc. You can access the dialog either from the main Edit menu, or from the perspective window's right-click menu. By default, these items are sized proportionate to the current world size on the solver control panel. Before you go nuts changing the perspective window settings, consider whether it really means that you need to adjust your world size instead!
SynthEyes offers a scripting language, SIZZLE, that makes it easy to modify the exported files, or even add your own export type. See the separate SIZZLE User Manual for more information. New export types are being added all the time; check the export list in SynthEyes and the support site for the latest packages or beta versions of forthcoming exporters.
General Procedures
You should already have saved the scene as a SynthEyes file before exporting. Select the appropriate export from the list in the File/Exports area. SynthEyes also keeps a list of the last 3 exporters used at the top level of the File menu.

Hint: SynthEyes has many exports. To simplify the list, click Script/System Script Folder, create a new folder Unused in it, and move all the scripts for applications you do not use into that folder. You will have to repeat this process when you later install new builds, however.

There is also an export-again option, which repeats the last export performed by this particular scene file, with the most-recently-used export options, without bringing up the export-options dialog again, to save time on repeated exports.

When you export, SynthEyes uses the file name, with the appropriate file extension, as the initial file name. By default, the exported file will be placed in a default export folder (as set using the preferences dialog). In most cases, you can either open the exported file directly, or if it is a script, run the script from your animation package. For your convenience, SynthEyes puts the exported file name onto the clipboard, where you can paste it (via control-V or command-V) into the open-file dialog of your application, if you want. (You can disable this from the preferences panel.)

Note that the detailed capabilities of each exporter can vary somewhat. Some scripts offer popup export-control dialogs when they start, or small internal settings at the beginning of each Sizzle script. For example, 3ds max did not offer a way to set the units from a script before version 6, and the render settings are different, so there are slightly different versions for 3ds max 5 and 6+. Settings in the Maya script control the re-mapping of the file name to make it more suitable for Maya on Linux machines.
If you edit the scripts, using a text editor such as Windows Notepad, you may want to write down any changes, as they must be re-applied to subsequent upgraded versions.

Be aware that not all packages support all frame rates. Sometimes a package may interpret a rate such as 23.98 as 24 fps, causing mismatches in timing later in the shot. Or one package may use 29.96 where another uses 29.97. Use image sequences and frame counts rather than AVIs, QTs, frame times, or drop-frame time codes wherever possible.

The Coordinate System control panel offers an Exportable checkbox that can be set for each tracker. By default, all trackers will be exported, but in some cases, especially for compositors, it may be more convenient to export only a few of the trackers. In this case, select the trackers you wish to export, hit control-I to invert the selection, then turn off the checkbox. Note that particular export scripts can choose to ignore this checkbox.
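The timing mismatch warned about above accumulates quickly. A minimal, purely illustrative calculation (not part of any SynthEyes export) shows the drift when a package plays 23.976 fps footage back as exactly 24 fps:

```python
# Illustrative only: drift when footage at true_fps is interpreted as assumed_fps.
def drift_seconds(frames, true_fps=24000 / 1001, assumed_fps=24.0):
    """Timing error, in seconds, accumulated after the given number of frames."""
    return frames / true_fps - frames / assumed_fps

# After 1000 frames the error is already a full 24 fps frame (1/24 s),
# which is why frame counts are safer than frame times or time codes.
err = drift_seconds(1000)
```

This is why the text recommends working with image sequences and frame counts: a frame number is exact, while a frame time depends on which rate each package assumed.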
Before exporting to one of these packages, you should verify your units setting. Alternatively, if you observe that your imported scene has different values than in SynthEyes, you should check the units setting in SynthEyes. Hint: compositing packages often measure everything, including 3-D coordinates, in terms of pixels, not inches or meters. Be sure to pick sizes for the scene that will work well in pixels. While you might scale a scene for an actor 2 m tall, if you export to a compositor and the actor is two pixels tall, that will rarely make sense.
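To make the pixels-versus-meters point concrete, here is a hypothetical scale helper; the `pixels_per_meter` value is a scale you choose for your scene, not a SynthEyes or compositor setting:

```python
# Hypothetical helper: pick a world scale that keeps compositor coordinates usable.
def to_pixels(meters, pixels_per_meter=100.0):
    """Convert a scene measurement in meters to compositor pixels."""
    return meters * pixels_per_meter

# With a scale of 100 px/m, a 2 m actor is a workable 200 pixels tall,
# rather than an unusable 2 pixels.
height = to_pixels(2.0)
```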
Image Sequences
Different software packages have different conventions and requirements regarding the numbering of image sequences: whether they start at 0 or 1, whether there are leading zeroes in the image number, and whether they handle sequences that start at other numbers flexibly.
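The numbering conventions above can be sketched with a small, hypothetical naming helper; the padding width and starting frame are exactly the points where packages disagree:

```python
# Hypothetical helper illustrating image-sequence naming conventions.
def frame_name(prefix, frame, digits=4, ext="jpg"):
    """Zero-padded sequence file name, e.g. shot.0001.jpg."""
    return f"{prefix}.{frame:0{digits}d}.{ext}"

first = frame_name("shot", 0)   # a 0-based package expects shot.0000.jpg
second = frame_name("shot", 1)  # a 1-based package expects shot.0001.jpg
```

When a target package is inflexible about the start number or padding, renaming the sequence to its convention before import avoids off-by-one-frame sync errors.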
COLLADA Exports
The COLLADA export ("DAE" extension) is an industry-standard format that is very capable, and currently the only way to directly export full set-reconstruction information, i.e., all object positions, meshes, and texturing. Unfortunately it is not necessarily read effectively by all programs that read COLLADA files. Programs that read Filmbox (FBX) files should also be able to read COLLADA files; you will have to see what the capabilities are of the programs you use. In some software, such as 3ds max and Maya, there may be three different ways to import COLLADA files: a built-in COLLADA importer, via the Filmbox importer, or via various open-source COLLADA importers. The capabilities of each of these can be expected to change over time. We will be working with other vendors to encourage them to be able to read all the information that SynthEyes produces! Ultimately we expect that this format will produce the best results with the least effort for everyone.
Note that you may have to create the folder. The folder is specific to your userid; other users of the same machine, if it is shared, will have to do the same thing. AfterEffects will notice the filters when it next starts up. We have provided a matching "redistort" filter, which allows you to redistort footage to match, and be re-composited with, the original footage. You can read more about Lens Workflow in this manual. The AfterEffects distortion filters perform only bi-linear interpolation, which softens the images slightly compared to the Lanczos and Mitchell-Netravali options available in SynthEyes. So you may still find it desirable to use SynthEyes for better filtering and its additional preprocessing options, but using AfterEffects makes for a nice approach.
Bentley MicroStation
You can export to Bentley's MicroStation V8 XM Edition by following these directions.

Exporting from SynthEyes

1. MicroStation requires that animated backgrounds consist of a consecutive sequence of numbered images, such as JPEG or Targa images. If necessary, the Preview Movie capability in SynthEyes's Perspective window can be used to convert AVIs or MOVs to image sequences.
Blender Directions
Blender has a tendency to change frequently, so the details of these directions might best be viewed more as a guide than as the last word. Unlike with commercial software, new versions of Blender do not always support previous scripts, so you may need to stick with an older version of Blender until new Blender features have stabilized and been compensated for. When working with image sequences and Blender, it is a good idea to ensure that the overall frame number is the same as the number in the image file name. Although you can adjust the offset, Blender incorrectly eliminates a frame number of zero.

1. In SynthEyes, export to Blender (Python).
2. Start Blender.
3. Delete the default cube and light.
4. Change one of the views to the Blender Text Editor.
5. In the text editor, open the Blender script you exported in step 1.
6. Hit ALT-P to run the script.
7. Select the camera (usually Camera01) in the 3-D Viewport.
8. In a 3-D view, select Camera on the View menu to look through the imported, animated SynthEyes camera.
9. Select View/Background image.
10. Click Use Background Image.
11. Select your image sequence or movie from the selection list.
12. Adjust the background image settings to match your image. Make sure the shot length is adequate, and that Auto Refresh is on. If the images and animation do not seem to be synced correctly, you probably have to adjust the offset.
13. Decrease the blend value to zero, or you can go without the background, and set up compositing within Blender.
Cinema 4D Procedure
1. Export from SynthEyes in Lightwave Scene format (.lws); see below.
2. Start C4D, import the .lws file, yielding camera and tracking points.
3. To set up the background, add a Background using the Objects menu.
4. Create a new Texture with File/New down below.
5. At right, click next to the file name for the texture.
6. Select your source file (jpeg sequence or movie).
7. Click on the right-facing triangle button next to the file name, select Edit.
8. Select the Animation panel.
9. Click the Calculate button at the bottom.
10. Drag the new texture from the texture editor onto the Background in the object list. The Background now appears in the viewport.
DotXSI Procedure
1. In SynthEyes, after completing tracking, do File/Export/dotXSI to create a .xsi file somewhere.
2. Start Softimage, or do a File/New.
3. File/Import/dotXSI... the new .xsi file from SynthEyes. The options may vary with the XSI version, but you want to import everything.
4. Set the camera to Scene1.Camera01 (or whatever you called it in SynthEyes).
5. Open the camera properties.
6. In the camera rotoscopy section, select New from Source and then the source shot.
7. Make sure Set Pixel Ratio to 1.0 is on.
8. Set Use pixel ratio to Camera Pixel Ratio (should be the default).
9. In the Camera section, make sure that Field of View is set to Horizontal.
10. Make sure that the Pixel Aspect Ratio is correct. In SynthEyes, select Shot/Edit Shot to see the pixel aspect ratio. Make sure that XSI has the exact same value: 0.9 is not a substitute for 0.889, so fix it! Back story: XSI does not have a setting for 720x480 DV, and 720x486 D1 causes errors!
11. Close the camera properties page.
12. On the display mode control (Wireframe, etc.), turn on Rotoscope.
ElectricImage
The ElectricImage importer relies on a somewhat higher level of user activity than normal, in the absence of a scripting language for EI. You can export either a camera or object path, and its associated trackers.

1. After you have completed tracking in SynthEyes, select the camera/object you wish to export from the Shots menu, then select File/Export/Electric
2.
3. 4. 5.
Fusion 5
There are several Fusion-compatible exporters. The main exporter is the Fusion 5 composition export, which can be opened directly in Fusion. The Tracker 2-D Paths export can write all the exportable trackers to a text file, which can then be read in Fusion with the Import SynthEyes Trackers script and assigned to any Point-type input on a node. Select a node and start the Import script from its right-click menu. At present, it appears that you should animate the desired control before importing, then tell the script to proceed anyway when it notices that the control is already animated. There is also a generic 2-D path exporter for Fusion.
Houdini Instructions:
1. File/New, unless you are adding to your existing scene.
2. Open the Textport.
3. Type source "c:/shots/scenes/flyover.cmd" or the equivalent.
Lightwave
The Lightwave exporter produces a Lightwave scene file (.lws) with several options, one of them crucial to maintaining proper synchronization.

As mentioned earlier, Lightwave requires a units setting when exporting from SynthEyes. The SynthEyes numbers are unitless: by changing the units setting in the Lightwave exporter as you export, you can make 24 in SynthEyes mean 24 inches, 24 feet, 24 meters, etc. This is different than in Lightwave, where changing the units from 24 inches would yield 2 feet, 0.61 meters, etc. This is the main setting that you may want to change from scene to scene.

Lightwave has an obscure preferences-like setting on its Compositing panel (on the Windows menu) named Synchronize Image to Frame. The available options are zero and one. Selecting one shifts the imagery one frame later in time, and this is the Lightwave default. However, for SynthEyes, a setting of zero will generally be more useful (unless the SynthEyes preference First Frame is 1 is turned on). The Lightwave exporter from SynthEyes allows you to select either 0 or 1. We recommend selecting zero, and adjusting Lightwave to match. You will only have to do this once; Lightwave remembers it subsequently. In all cases, you must have a matching value in the exporter UI and in Lightwave, or you will cause a subtle velocity-dependent error in your camera matches in Lightwave that will drive you nuts until you fix the setting.

The exporter also has a checkbox for using DirectShow. This checkbox applies only to AVIs, and should be on for most AVIs that contain advanced codecs such as DV or HD. If an AVI uses an older codec and is not opened automatically within Lightwave, export again with this checkbox turned off.
Modo
The modo exporter handles normal shots, tripod shots, object shots, zooms, etc. It transfers any meshes you've made, including the UV coordinates if you've frozen a UV map onto a tracker mesh. The UI includes the units (you can override the SynthEyes preferences setting) and the scaling of the tracker widgets in modo; this is a percentage value, adjust to suit. There is also an overall scaling value you can tweak if you want to (better to set up the coordinates right instead).

Limitations

1. Only image sequences can be transferred to and displayed by modo; modo does not support AVI or Quicktime backdrops.
2. Image sequences in modo MUST have a fixed number of digits: the first and last frames must have the same number of digits (may require leading
Nuke
The Nuke exporter produces a Nuke file you can open directly. Be sure to select the exporter appropriate to your version of Nuke; the files are notably different between Nuke versions. The 5.0 exporters are substantially more feature-rich than the Nuke 4 exporter, handling a wide variety of scene types. The pop-up parameter panel lets you control a number of features.

The Nuke exporter will change SynthEyes meshes to Nuke built-ins where possible, such as for boxes and spheres. It can export non-primitive meshes as OBJ files and link them in automatically. If the other meshes are not exported, they are changed to bounding boxes in Nuke. Note that SynthEyes meshes can be scaled asymmetrically; you can either burn the scaling into the OBJ file (especially useful if you wish to use the OBJ elsewhere), or you can have the scaling duplicated by the Nuke scene.

You can indicate if you have a slate frame at the start of the shot, or select renderable or non-rendering tracker marks. The renderable marks are better for tracking, the non-rendering marks better for adding objects within Nuke's 3-D view. The size of the renderable tracker marks (spheres) can be controlled by a knob on the enclosing group. You can ask for a sticky note showing the SynthEyes scene file, or a popup message with the frame and tracker count.

Note that Nuke 5.1 and earlier require INTEGER frame rates throughout. SynthEyes will force the value appropriately, but you may need to pay attention throughout your pipeline if you are using Nuke on 23.976 fps shots (nominally 24 fps from an HD/HDV camera).
Poser
Poser struggles a little to handle a match-moved camera, so the process is a bit involved. Hopefully Curious Labs will improve the situation in future releases.

The shot must have square pixels to be used properly by Poser; it doesn't understand pixel aspect ratios. So if you have a 720x480 DV source, say, you need to resample it in SynthEyes, AfterEffects, or something similar to 640x480. Also, the shot has to have a frame rate of exactly 30 fps. This is a drag, since normal video is 29.97 fps, and Poser thinks it is 29.00 fps, and trouble ensues. One way to get the frame rate conversion without actually mucking up any of the frames is to store the shot out as a frame sequence, then read it back into your favorite tool as a 30 fps sequence. Then you can save the 640x480 or other square-pixel size. Note that you can start with a nice 720x480 29.97 DV shot, track it in SynthEyes, convert it as above for Poser, do your Poser animation, render a sequence out of Poser, then composite it back into the original 720x480.

One other thing you need to establish at this time: exactly how many frames there are in your shot. If the shot ranges from 0 to 100, there are 101; from 10 to 223, there are 214.

1. After completing tracking in SynthEyes, export using the Poser Python exporter.
2. Start Poser.
3. Set the number of frames of animation, at bottom center of the Poser interface, to the correct number of frames. It is essential that you do this now, before reading the Python script.
4. File/Run Python Script on the Python script output from SynthEyes.
5. The Poser Dolly camera will be selected and have the SynthEyes camera animation on it. There are little objects for each tracker, and SynthEyes boxes, cones, etc. are also brought over into Poser.

Open question: how to render out of Poser with the animated movie background? The best approach appears to be to render against black with an alpha channel, then composite over the original shot externally.
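The inclusive frame-count arithmetic above trips people up often enough that it is worth writing down explicitly; a trivial sketch:

```python
# Inclusive frame count for a shot spanning frames first..last.
def frame_count(first, last):
    """Number of frames when both endpoints are included."""
    return last - first + 1

# The examples from the text:
assert frame_count(0, 100) == 101   # frames 0 through 100
assert frame_count(10, 223) == 214  # frames 10 through 223
```

The "+ 1" is the whole point: subtracting the endpoints alone undercounts by one, which in Poser would truncate the last frame of animation.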
Shake
SynthEyes offers three specific exporters for Shake, plus one generic one:

1. MatchMove Node
2. Tracker Node
3. Tracking File format
4. 3-D export via the AfterFX, .ma, or Maya ASCII exports
The first two formats (Sizzle export scripts) produce shake scripts (.shk files); the third format is a text file. The fourth option produces Maya scene files that Shake reads and builds into a scene using its 3-D camera.
trueSpace Directions:
Warning: trueSpace has sometimes had problems executing the exported script correctly. Hopefully Caligari will fix this soon.

1. In SynthEyes, export to trueSpace Python.
2. Open trueSpace.
3. Right-click the Play button in the trueSpace animation controls.
4. Set the correct BaseRate/PlayRate in the animation parameters to match your source shot.
5. Open the Script Editor.
6. From inside the Script Editor, Open/Assign the Python script you created within SynthEyes.
7. Click Play (Time On) in the Script Manager.
8. When the Play button turns off, close the Script Manager.
9. Open the Object Info panel.
Vue 5 Infinite
The export to Vue Infinite requires a fair number of manual steps, pending further Vue enhancements. But with a little practice, they should only take a minute or two.

1. Export from SynthEyes using the Vue 5 Infinite setting. The options can be left at their default settings unless you wish to change them. You can save the resulting Python script into any convenient location.
2. Start Vue Infinite, or do a File/New in it.
3. Select the Main Camera.
4. On its properties, turn OFF "Always keep level".
5. Go to the animation menu, turn ON the auto-keyframe option.
6. Select the Python/Run python script menu item, select the script exported from SynthEyes, and run it.
7. In the main camera view, select the "Camera01 Screen" object (or the equivalent, if the SynthEyes camera was renamed).
8. In the material preview, right-click and select Edit Material.
9. The material editor appears; select Advanced Material Editor if it is not already selected.
Vue 6 Infinite
1. Export from SynthEyes using the Vue 6 Infinite option, producing a maxscript file.
2. Import the maxscript file in Vue 6 Infinite.
3. Adjust the aspect ratio of the backdrop to the correct overall aspect ratio for your shot. This is important since Vue assumes square pixels, and if they aren't (as for all DV, say), the camera match will be badly off.
Building Meshes
SynthEyes has the tools to aid you in the task of set reconstruction: building a digital model of the as-filmed motion-picture or television set. Set reconstruction involves two main tasks: building meshes to represent the geometry, and extracting textures to place upon the meshes. The meshes may be simple cards, with an alpha channel for detail, or complex digital models. Set models can serve to catch or cast shadows, act as front-projection targets, provide surfaces for vehicles or characters to move upon, serve as a basis for extensions to the set, etc. These uses can be previewed within SynthEyes before moving to your compositing or animation package to produce finished work.

We will begin by considering how to build mesh geometry. There are three basic approaches: creating simple planar 'cards'; converting tracker locations into a mesh; or setting up geometric primitives (boxes, cylinders, etc.) to match the scene. Using cards is easy for relatively simple camera motions; converting trackers works best with irregular natural geometry; and using primitives works best for man-made objects and sets.

Set reconstruction activities happen nearly exclusively within the perspective window(s), via the right-click menu or perspective-window toolbars. You should almost always complete the tracking of the scene before beginning set modeling activities, as changes to the tracking will allow the entire scene to shift around in ways that will force you to manually update your set model. You should always have a clearly defined and well-thought-out coordinate system set up first!
BUILDING MESHES
Cards may be parented to moving objects: simply have the moving object active on the main toolbar when you create the card. The card will inherit and be aligned with respect to the object's coordinate system, rather than the world coordinate system.

Creating the Card

You can create cards using the "Add Cards" mouse mode of the perspective window, found in the Other Modes section of the right-click menu, or on the mesh toolbar. Note that cards are simply planes, positioned and configured easily: you can place a plane manually and texture it, and the result is just as much a card.

With the Add Cards mode active, you can lasso-select within the perspective view. The trackers that fall within the lasso are examined and used to determine the plane of the card, in 3-D. If there are a few trackers that are much further away than the rest, and do not fit well on the plane of the others, they will be ignored. The bounding box of the lassoed area determines the size of the plane. You can move the mouse around as you are lassoing to bump up the size of the plane. You might notice the plane jumping around a bit if the trackers you have lassoed don't form a particularly flat plane: keep moving until you get the plane you want!

Alternatively, you can pre-select the trackers to be used to locate the plane, using any method of tracker selection you want. While the Add Card mode is active, use ALT-drag (command-drag) to do a simple lasso of the trackers, without creating the plane yet. This makes it easier to navigate around in 3-D and verify that you have selected the trackers you want, and that they are reasonably planar, before creating the card. If there are already trackers selected as you start to add a card, those trackers are used to locate it, and instead of a lasso, the locations where you first push the mouse button and where you release it are used as the corners of the plane; it is a simple rectangular drag (in the plane defined by the trackers).
Preselecting the trackers is more convenient when you want to carefully locate the edges of the plane. As you create a new card, its texturing parameters are copied from the previous card, if any, so you can configure the first one as you like, then create additional cards quickly. You may want to choose the resolution specifically for each card, smaller cards needing less and bigger cards more, to maintain an adequate but not excessive amount of oversampling of the actual pixels.

Once you have created a card, it will be selected in the viewports, so that you can work on the texturing. If you would like to compare the position of the plane to the trackers used to align it, click Undo once. This will show the created, but unselected, card, plus the selected trackers used to align it. If you did not preselect the trackers, instead lassoing them within the tool, only the trackers actually used to locate the card will be selected, not any outliers. You can unlock from the camera and orbit about to get a better idea of what you have, then click Redo to re-select the card.

Moving a Card

If you want to reposition the card along its own axes, be sure to switch the perspective window to View/Local coordinate handles, so that you will be sliding the plane along its own plane, not the coordinate axes. But be sure to read the next paragraph!

If you re-solve the scene, for example after adding trackers and hitting refine, or after changing the coordinate system setup, the position of the card can be updated based on the new positions of the trackers originally used to create it. Run the Linking/Align Via Links dialog on the perspective window's right-click menu. Since the trackers may shift around arbitrarily, or if you moved the card after creating it, the card may no longer be in some exact location you wanted, and you will need to manually adjust it.

Texturing

For the full details of texture extraction, see the texture extraction chapter. Here is a quick preview. SynthEyes will pull an average texture from the scene and write it to a file. You have control over its resolution, format, and other parameters, as set from the Texture Control Panel (opened from the Window menu).

When you create a card using Add Cards, the texturing parameters will be preconfigured to create a texture and write it to disk. The file name is based on the card name and the overall saved SynthEyes scene file name: you must have already saved the file at least once for this to work. The texture will not be produced until you click the Run or Run All buttons on the Texture Control Panel, or until you solve, if the Run all after solve checkbox is turned on. Once the texture is produced, you can paint on its alpha channel in SynthEyes or externally. If you do so externally, you should turn off the Create checkbox so the texture is not later accidentally overwritten.
Tricks

Cards can exactly represent only perfectly flat surfaces, since they are themselves flat. It is relatively common to use flat cards in 3-D compositing setups when the amount of perspective shift that would occur, if the item on the card were more accurately modeled, is not discernible. You can extract and build up composites with multiple levels of cards with extracted textures, if you build appropriate alpha channels for them. If the camera moves a lot, a single card can start to present an inaccurate view, one that shows its essential flatness. You can create multiple cards, oriented differently to match different parts of the shot, compute the texture
based on the correspondingly limited portion of the shot, and fade them in and out over the duration of the shot (in your composition or 3-D animation app).
the object-creation type on the 3-D panel, and create a cylinder in the middle of the mesh object (it will be created on the ground plane). You will see the
shadow on the tracker mesh. Use the cylinder's handles to drag it around, and the shadow will move across the mesh appropriately. For more fun, right-click Place mode and move the cylinder around on the mesh.

In your 3-D application, you will probably want to subdivide the mesh into a smoother form, unless you already have many trackers. A smoother mesh will prevent shadows from showing sharp bends due to the underlying mesh.

In practice, you will want to exercise much finer control over the building of the mesh: what it is built from, and how. The mesh built from the flyover trackers winds up with a lot of bumpiness due to the trees and the sparsity of sampling. SynthEyes provides tools for building models more selectively. If you are following along, keep SynthEyes open at this point, as we'll continue from here in the section on Front Projection, right before we start with Texture Extraction.

Adding Vertices

To produce more accurate geometry, especially with natural ground surfaces, you can increase the mesh density with the Track menu's Add Many Trackers dialog, rapidly creating additional trackers after an initial auto-track and solve has been performed, but before using Convert To Mesh. Especially for man-made objects, there may not be a tracker where you need one to accurately represent the geometry. SynthEyes uses spot tracking, which favors the interiors of objects over their corners; the corners are less reliable due to the changing background behind them. So even if you used auto-tracking and the Add Many Trackers dialog, you will probably want to add additional supervised trackers for particular locations.

To produce additional detail trackers, especially at the corners of objects, the offset-tracking capability, an advanced form of supervised tracking, can be very helpful. With offset tracking, you can use existing supervised trackers to add new nearby trackers on corners and fine details without much additional time.
You can clone off a number of offset trackers to handle small details in a building, for example. But beware! The accuracy of an offset tracker is entirely determined by the quality of the offsets you build; if you do too quick a job, the offset tracker will not be accurate in 3-D. Offset tracking works better when the camera motion is simple, even if it is bumpy: a dolly to the right, for example. Whether you use Add Many Trackers, create new supervised trackers, or clone off new offset trackers, you can use Convert to Mesh to add them to the existing edit mesh if you are auto-triangulating.

Controlling Auto-Triangulation

The convert-to-mesh and triangulate tools operate only on selected trackers or vertices, respectively. Usually you will want to select only a subset of the trackers to triangulate. After doing so, you may find that you want to take out
some facets and re-triangulate them differently to better reflect the actual world geometry or your planned use. You can accomplish that by deleting the offending facets (after selecting all their vertices), and then selectively re-triangulating.

The triangulation depends on the direction from which the perspective-view camera observes the trackers: it is an essentially 2-D process that works on the view as seen from the camera. Make sure the camera does not view the group of trackers edge-on, or an unusable triangulation will result. For trackers on a ground plane, the camera should look down from above.

Using the Assemble Mesh Mouse Mode

You can manually create the triangulation of the mesh using the Assemble Mesh mode of the perspective window. Obviously that takes more time, at a bit over one click per facet, but it should not be too bad, and it allows the very specific control necessary for objects such as detailed building models.

To use this mode, go to the perspective window and select it from the Mesh Operations submenu of the right-click menu. Do not use the Convert to Mesh item on any trackers. Instead, begin clicking on the three trackers you want to form the first facet. As you click on the third, the facet will be created. Click on a fourth tracker, and a new facet will be created from two of the prior three and the new one, in a reasonably intelligent fashion. As you click on each additional tracker, a new facet will be created. You can hit Undo if the triangle created isn't the one you want. To get the triangle you want, click on the vertex you do not want, to deselect it; with only two vertices then selected, clicking on another vertex will create a facet from it and the two selected. To start a new triangle in a different location, hold down the control key as you click a tracker: the previously selected vertices will be deselected, leaving only the new vertex selected.
Clicking two more trackers will produce the first new facet.

Mesh Editing

Often an outlying tracker may need to be removed from the mesh, for example, the top of a phone pole that creates a tent in an otherwise mostly flat landscape. You can select that vertex and right-click Remove and Repair. Removed vertices are not deleted, to give you the opportunity to reconnect them; use the Delete Unused Vertices operation to finally remove them.

Long triangles cause display problems in all animation packages, as interpolation across them does not work accurately. SynthEyes allows you to subdivide facets by placing a vertex at the center and converting the facet to three
new ones, or subdivide the edges by putting a vertex at the center of each edge and converting each facet to four new ones.

You can also add vertices directly using the Add Vertices tool, or move them around with the Move tool. Both of these rely on the grid to establish the basic positioning, typically set with the Grid menu's Align to Trackers/Vertices option. You can then add vertices on the grid, move them along it, or move them perpendicular to it by shift-dragging. You can move multiple vertices by lasso-selecting them, or shift-clicking them from Move mode. Important: if you move vertices or add new ones, they will not update when you update the mesh after a new solver operation; see the next section.

After we get into object tracking, you will see that you can use the mesh-construction process to generate starting points for object-modeling efforts as well.

What Happens If I Refine?

After you have built a mesh from tracker locations, you may need to update the tracking and solution. The tracker locations will change; without further action, the mesh vertices will not, since the tracker locations are stored in the mesh as it is generated. However, SynthEyes contains a Linking subsystem that can record the original tracker producing each vertex. This subsystem stores the linkages automatically when you use the Convert to Mesh and Assemble Mesh operations of the perspective window. After a solve, you can update the mesh using the Linking/Update meshes using links operation on the right-click menu. This does not happen automatically, as there are several possibilities involved with linking, as you will see below.

Important: the links cover only the vertices generated from trackers. If you create additional vertices, either manually or by subdividing, those vertices cannot and will not be updated.

Links apply to a specific shot, i.e., the shot the tracker is a part of. A given vertex can be linked to different trackers on different shots!
When you use Update meshes using links, the currently-active shot determines which links are used to update the vertices.
The perspective view's linking tools make it easier to align primitive meshes accurately. Exact alignment is especially important when a texture will be extracted for the mesh, as even a little 'sliding', caused by the mesh not being placed accurately, will blur the extracted texture.

Three-Point Alignment

In the standard coordinate system setup, you click on 3 trackers to set up a coordinate system: an origin, an X-axis, and a point on the ground plane. You can do something very similar to align a mesh to three trackers. In this case, you set up three links, each between a tracker and the corresponding vertex on the mesh. You must set the mesh as the edit mesh so that its vertices are exposed.

Select the first vertex and the first tracker; both can be selected at once, using the corresponding lasso operations for vertices and trackers. It can be convenient to zoom the perspective window, or to use two different perspective or 3-D viewports. Select the Add link and align mesh item on the Linking submenu, and a new link will be created and the mesh moved to match them up. Select the second vertex and second tracker, and use Add link and align mesh again; both vertices will now match, achieved by moving, rotating, and scaling the mesh.

Repeat for the third vertex and tracker. These do not have to be an exact match! This time, the mesh will rotate about the 3-D line between the first two trackers/vertices so that the third vertex and tracker fall on the same plane, just like in the coordinate system setup. The first two links are "lock points," whereas the third pair is like "on plane." So the third link is of a different type than the first two: the tracker is linked to a position relative to the three vertices.

If you change the solve, you should update the position of the primitive using the Linking/Align Via Links dialog, described next.
Align Via Links Dialog

The three-point alignment method produces exact tracker/vertex matches for a small number of points. Alternatively, you can use the Align Via Links dialog (launched from the Linking submenu) to align a mesh as well as possible to any number of links.
You should establish all your links first, using the Add Links to selected menu item, then launch the dialog. You can align the mesh to the trackers, or cause the camera and all the trackers to be moved to match the mesh, leaving it right where it is! This second option is useful when you have been given an entire existing 3-D model of the set that you must match the solve to.

On the dialog, you'll notice that you can control whether or not the mesh is allowed to scale to match the links, and whether that scaling is separate on each axis or uniform. When you have a complex existing mesh to match, you'll certainly want uniform scaling. But if you are matching a box to some trackers and do not know the correct relative sizes of the axes, you should use non-uniform scaling.

You can also cause the tracker locations to be locked to the mesh locations, which is handy when you have aligned the world to the mesh position; the created links will cause later solves to reproduce this same alignment with the mesh, without having to re-run this dialog.

Summary of Ways to Create Links

As a recap, here are the different operations that create links, found mostly on the Linking submenu of the perspective right-click menu:

The Add Card operation creates a link for each tracker used to determine the plane of the card; each link is a composite link between the tracker and three corners of the card.

Assemble Mesh and Convert to Mesh create a simple link between each tracker and its corresponding vertex.

Add Link and Align adds a simple link for each of the first two trackers/vertices, and adds a composite link between the third tracker and the three vertices.
Add Links to Selected creates a simple link between the selected tracker and vertex. This operation comes in three flavors. If one tracker and several vertices are selected, a link is created between the tracker and each vertex. If several trackers and no vertices are selected, a link is created from each tracker to each vertex that is already at the same location, as seen in the image. If only vertices are selected, each one is matched with the nearest tracker that encloses it, and if the tracker is solved, the vertex location is updated to that exact location. The second and third methods are intended for re-establishing links. Note that each apparent vertex often consists of several vertices with different normals or UV coordinates at the same location.

Other Helpful Operations

When you are working on an existing primitive, it can be helpful to add a few specific 'detail' trackers to it once it is already in place. Select the trackers, then use the Punch In Trackers operation on the Mesh Operations submenu. This operation depends on the viewing direction of the camera: as seen in the image, the facets containing each tracker will be removed and replaced with new facets that include the new tracker at their apex.

To aid your linking operations, you can also remove links, or show the trackers with links, using operations on the Linking submenu. When you show (flash) the trackers with links, if there are selected vertices, only the trackers linked to the selected vertices will be flashed.
Front Projection

Go back to the Ground Reconstruction example. With the cylinder casting an interesting shadow on an irregular surface, right-click Texturing/Rolling Front Projection. The mesh apparently disappears, but the irregular shadow remains, even if you scrub through the shot. In short, the image has been front-projected onto the mesh so that the mesh appears invisible, but it continues to serve as a shadow catcher.

In this Rolling Front Projection mode, new U,V coordinates are calculated on each frame to match the camera angle, and the current image is projected, ensuring invisibility. Alternatively, the Frozen Front Projection mode calculates U,V coordinates only once, when the mode is applied; the image from that frame continues to be applied for the rest of the frames. This kind of configuration is often used for 3-D fix-it applications where a good frame is used to patch up some other ones, where a truck drives by, for example. Because the image is projected onto a 3-D surface, some parallax can safely develop as the shot evolves, often hiding the essentially 2-D nature of the fix. If the mesh geometry is accurate enough, this amounts to texture-mapping it with a live frame. The U,V coordinates of the mesh can be exported and used in other animation software, along with the source-image frame as a texture, in the rare event that it does not support camera mapping.

Frozen Front Projection is the prelude to the texture extraction capabilities described next.
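The per-frame U,V generation behind front projection can be pictured as projecting each mesh vertex through a pinhole camera into normalized image coordinates. This is a minimal sketch under that simple pinhole assumption (the function and its parameters are hypothetical, not the SynthEyes API):

```python
def project_to_uv(vertex, cam_pos, cam_rot, focal):
    """Project a 3-D vertex through a simple pinhole camera into
    normalized image coordinates (U, V), with (0.5, 0.5) at the image
    center. cam_rot is a world-to-camera rotation given as three row
    tuples. Hypothetical illustration of front-projection UV
    generation, not the SynthEyes implementation."""
    # Transform the vertex into camera space.
    rel = tuple(v - c for v, c in zip(vertex, cam_pos))
    x, y, z = (sum(r * p for r, p in zip(row, rel)) for row in cam_rot)
    # Perspective divide: z is the depth along the view axis.
    return 0.5 + focal * x / z, 0.5 + focal * y / z

IDENTITY = ((1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0))
# Rolling Front Projection recomputes these UVs on every frame using
# that frame's camera; Frozen Front Projection computes them once and
# keeps both the UVs and that frame's image thereafter.
```

A vertex straight ahead of the camera lands at the image center (0.5, 0.5); because each frame's image is projected with that frame's camera, the mesh renders as an exact copy of the plate and so appears invisible.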
Texture Extraction
In the previous section, we described how to use the camera-mapping-style texture coordinate generation capabilities of SynthEyes to use the existing shot images as textures for meshes that model the set. Simple camera mapping is not ideal for set reconstruction, as the images have limited resolution, are subject to compression noise, require mesh texture coordinate values determined by the camera view, and may be partially obscured by moving objects.
The texture extraction system uses many frames to produce a more accurate texture at any practical resolution, and can use the mesh's existing texture coordinate values. With a suitably accurate mesh, the extracted texture can be good enough that you will have to watch that it does not look "too good": you will have to add noise to renders to match the original shot.

With these capabilities, you can generate textures not only for objects visible and modeled in the scene, but also for far background imagery, such as mountains or clouds. Especially when the shot is a nodal pan, you can create a panoramic background larger than any of the individual frames, by using a large mesh that the camera sweeps across.

Warning #1: as you go to generate that 16K x 16K texture, keep in mind that the amount of viable information in it depends on the images fed into the process; you cannot produce a high-resolution texture from a single 720x480 input image. You can easily produce a high-resolution image with many blurry pixels!

Warning #2: texture generation requires large amounts of memory, many times more than a regular image of the same resolution, especially once 'undo' is considered. So although the 32-bit version contains the same extraction capabilities, a substantial 64-bit machine and a 64-bit SynthEyes license are recommended for realistic commercial HD and film work with texture extraction. If you are using a 32-bit system, please do not complain about slow-downs or even crashes as image sequences are kicked out of RAM to solve large textures; you have been warned!

Warning #3: the extracted textures will have low noise, which is an advantage of texture extraction over using individual frames. Clean images make a good starting point for further work; however, you will usually need to add noise, changing on each frame, as part of your compositing to match the original shot.
Set reconstruction involves both meshes and texturing. In the mesh-building section, we described how you can build models of geometry in the set. When the set involves complex details, especially 'natural' elements that are not man-made, it can be hard to build precise meshes.
Instead, you can use painting to create the geometry: the texture extraction system facilitates creating and painting alpha channels for the textures, which can handle the fine details. However, as a 2-D approach, these painted details cannot be viewed from other angles as effectively.

The texture display and extraction capabilities are controlled from the Texture Control Panel, launched from the Window menu or from the button at the bottom of the 3-D Panel, which shows a small subset of the texture controls for information. Note that SynthEyes is a match-moving and set reconstruction tool, so it is not designed for more complex material specifications, repeating textures, lighting controls, and the like, which are used downstream for CGI creation.
Texture Display
SynthEyes can display a texture on meshes, whether the texture was generated by SynthEyes or by a different CG package. With the mesh selected, the Texture Control Panel open, and the Create checkbox off, click Set and open the texture image.
If you have problems with the orientation or the left/right or top/bottom handedness of the image, use the top (orientation) drop-down to obtain the correct orientation of the map on the mesh.

While the mesh is selected, it will show the red selection color in the viewport blended with the texture, making the texture harder to see. You can suppress this by turning on the Hide mesh selection checkbox on the Texture Control Panel. You'll still see the drag handles, but the texture will be shown as-is. This can be a little tricky: the effect turns off when you close the Texture Control Panel.

For help when you are painting alpha channels on textures, or having them computed, you can have the mesh display only the alpha channel by turning on the Show only texture alpha button. It can be handy to repeatedly flip the two checkboxes on and off to better understand the texture and alpha channel.
Blocking
If there is an actor or other object moving continuously in front of the mesh, shutting down extraction for the entire time may not be an option. Instead, SynthEyes offers two methods to prevent portions of the image from being used for extraction: blocking meshes and garbage mattes. These may not be necessary if the disturbance is small and short-lived and there are enough other frames: the image may not be materially affected.

Blocking Meshes

The idea behind blocking meshes is simple, direct, and physically based; it is applicable to static portions of the set, rather than actors. If you have modeled a wall and a desk in front of it, then the mesh for the desk will block the portions of the wall behind it, preventing those sections from participating in the wall's texture extraction. If the camera moves enough that the portion behind the desk is exposed elsewhere in the shot, then the wall's entire texture can be computed. If not, the wall texture will have a blank spot, one that cannot be seen, as it is behind the desk!

A complex scene may have many meshes in it: some used for texture extraction, some used for blocking, some neither. To allow performance to be optimized, the Texture Control Panel contains a control over whether a particular mesh should be tested to see if it blocks any texture extractions. It may be set to Blocking or Non-blocking, and defaults to Non-blocking; usually only a few meshes must be set to Blocking, if any. The blocking control can and should be adjusted as needed, whether or not a mesh is having its own texture extracted. There is a notable calculation overhead for blocking meshes, as something similar to a render must be performed on each frame for each blocking mesh.
Garbage Mattes

If an actor or a hard-to-model object is moving in front of the mesh being extracted, you can instead set up an animated garbage matte to exclude that area. The garbage mattes are set up with SynthEyes's roto system, which is normally used to control the automatic tracker. By the time you get to texture extraction, the time for that use is well past, and you can add additional garbage mattes or remove existing ones without any ill effect.

To make the texture extraction for a particular mesh sensitive to the garbage splines, turn on the Blocked by Garbage Splines checkbox. (The control defaults to off to prevent many needless examinations of the complex animated spline shapes.) As with their normal use, the garbage mattes set up for texture extraction do not have to be particularly exact; they can be quick and dirty.
Tilt Control
Consider a vertical cylinder with a camera orbiting it in a horizontal plane. At any point in time, the camera gets a good view of the part of the cylinder facing it, but the portions seen edge-on cannot be seen well, as the texture is foreshortened, with the pixels crunched together. As the camera orbits, the portions that can be seen well, and not seen well, change continuously.

To ensure that texture extraction proceeds on the portions that can be accurately extracted, and ignores the portions that cannot, there is a Tilt control. With the Tilt control at zero, all grazing angles are considered. At one, extraction proceeds only on the portions that precisely face the camera, with intermediate cutoff angles at values in between.

The tilt-angle check is performed on a facet-by-facet basis, so meshes where this calculation is necessary should have adequate segmentation. If there are only 12 facets around, for example, then there will be only 3 facets between straight-on and edge-on, and roughly 30 degrees at a time will be turned off or on. A segment count of 48 might be a better choice.
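The Tilt cutoff can be pictured as a threshold on how squarely a facet faces the camera. A minimal sketch of such a test, assuming a simple dot-product comparison (the function and the exact mapping from Tilt to cutoff angle are hypothetical, not SynthEyes's internal formula):

```python
import math

def facet_visible(facet_normal, view_dir, tilt):
    """Decide whether a facet participates in texture extraction.

    tilt=0 accepts all grazing angles; tilt=1 accepts only facets that
    face the camera head-on; intermediate values cut off in between.
    Hypothetical sketch, not the SynthEyes implementation."""
    # Cosine of the angle between the facet normal and the direction
    # back toward the camera: 1 = head-on, 0 = edge-on.
    dot = -sum(n * v for n, v in zip(facet_normal, view_dir))
    mag = (math.sqrt(sum(n * n for n in facet_normal))
           * math.sqrt(sum(v * v for v in view_dir)))
    facing = dot / mag
    return facing >= tilt
```

Because the test is per facet, a 12-sided cylinder switches extraction on and off in coarse 30-degree steps, while 48 sides give 7.5-degree steps, which is why the text recommends adequate segmentation.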
Run Control
The texture extraction process can run under manual or automatic control. You can run it on only the selected mesh(es) using the Run button, or on all meshes using the Run All button. In both cases, of course, only meshes set up for extraction or blocking will be processed. It takes less time in total to process all meshes simultaneously than to process each mesh separately.

You can also have the texture extraction process run automatically at the end of each solve operation (normal or refine). Use this option judiciously, as extraction can take quite some time, and you will not want to repeat it over and over while you are working on unrelated tracking or solving issues.
or of course, when you click the Save button on the Texture Control Panel. When you are painting an alpha channel, or creating one with the alpha spinners, you will need to save the affected texture manually when you are done painting (unless you re-extract it or change one of the format controls listed above).
Alpha Generation
When a mesh covers only the interior of an object, the pixels underneath it repeat reliably on every frame: a side of a building, or a poster on a wall, for example. However, the edge of a castle, or a natural scene with rocks or trees, will have an irregular border that is tough to model with mesh geometry. If the geometry extends past the edge of the object, those pixels will vary over time, depending on what is behind the object. As the camera moves, the pixels that are not over a part of the object sweep across different parts of the background, potentially producing a broad spread of pixel values.

We can exploit that to produce an alpha channel for the mesh that is opaque where it covers the object of interest, and transparent over the background. Such meshes and textures can then readily be used in traditional compositing, especially if the mesh is a flat plane.

SynthEyes measures the RMS error (repeatability) of the pixels, and offers the Alpha Control section of the Texture Control Panel. To generate an alpha channel, turn on the Create Alpha checkbox. The Low error spinner sets the level below which the alpha channel is fully opaque. The High error spinner sets the level above which the alpha channel is fully transparent. The Sharpness spinner controls what happens in between those limits, much like a gamma control. You can increase the Low limit until portions of the alpha channel that should stay solid start to drop out, and decrease the High limit to the point where the background is fully clear. The alpha channel updates immediately as you do this, without the texture having to be recalculated.
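The spinner behavior can be summarized as a mapping from per-pixel RMS error to alpha. This sketch shows the general shape; the Low and High limits behave as described in the text, while the exact gamma-like curve between them is an assumption here:

```python
def error_to_alpha(err, low, high, sharpness=1.0):
    """Map per-pixel RMS repeatability error to an alpha value:
    fully opaque (1.0) below the Low limit, fully transparent (0.0)
    above the High limit, with a gamma-like ramp in between.
    The in-between curve is an illustrative assumption, not
    SynthEyes's exact formula."""
    if err <= low:
        return 1.0          # pixel repeats reliably: part of the object
    if err >= high:
        return 0.0          # pixel varies a lot: swept-over background
    t = (high - err) / (high - low)   # 1 near Low, 0 near High
    return t ** sharpness             # sharpness reshapes the ramp
```

Raising Low widens the fully opaque region; lowering High clears more background, matching the tuning procedure described above.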
Important: though the texture is updated in the viewports immediately as you adjust the alpha spinner controls, you should click the Save button when you are done, to re-write the modified textures to disk.

You should not expect this process to be perfect; it depends strongly on what the background behind the object is, and how much variability there is in the background itself. For example, a green-screen background will always stick with the foreground, because it never varies!

To make it easier to see what you have in the alpha channel, you can use the Show only texture alpha checkbox, and of course the Hide mesh selection checkbox. You can also use your operating system's image-preview tools to look at the texture images stored on disk. To clean up the alpha channel, or create one from scratch, you can paint in it directly, as described in the next section.
Alpha Painting
SynthEyes offers a painting system that allows you to paint directly in the alpha channel of extracted textures. You can paint fine detail into textures, especially cards, rather than trying to create extremely detailed geometry, and you can better capture natural elements.

Painting is controlled entirely from the Paint toolbar, accessed from the perspective view's right-click menu. There must be exactly one mesh selected; it should not be the edit mesh. The toolbar has convenience buttons to show only the alpha channel, hide the selection, and hide the mesh completely.

There are four mouse modes for painting, all of which use the same three settings: Size, Sharpness, and Density. These settings are adjusted by dragging vertically, starting within the respective box on the Paint toolbar.

Fine point: the settings affect the last stroke drawn (so you can change it), as well as the next stroke you draw. To change the parameters before starting a new stroke, without affecting the old stroke, either click one of the drawing-mode buttons again, or right-click one of the setting buttons. While the setting buttons are attached to an existing stroke, there is an asterisk (*) after the name of each button. You can re-attach to the last stroke by double-clicking one of the setting buttons.

The Size setting controls the size of the brush, in pixels. The Sharpness setting controls the type of fall-off away from the center of the brush. The Density setting ranges from -1 to +1: at -1, painting makes the pixels immediately transparent; at +1, painting makes them immediately opaque. In between, pixels are made only somewhat more transparent or opaque. The Transparent and Opaque buttons on the toolbar set the density quickly to the respective value.

The Paint Alpha mode is for 'scribbling' on a mesh while holding down the (left) mouse button, turning the texture extracted at those pixels transparent or
opaque, as controlled by the settings. You can paint away extra pieces of geometry where the texture is blurry background, adjust and soften edges to match the desired portion of the texture, and so on. Note: you must paint on the mesh; you really are painting on the geometry. If you click off the edge of the mesh, thinking the size of the pen is going to affect the mesh, nothing will happen at all.

The Paint Alpha Loop mode is a scribble-type mode, but it creates filled regions, to rapidly fill a slightly noisy interior in an automatically created alpha channel, or to knock a hole around some unwanted texture.

The Pen Z Alpha mode produces straight-line segments between endpoints, with one endpoint created for each click of the mouse. The "Z" in "Pen Z" refers to the shape of the paths created, not any particular meaning. Use Pen Z mode to create clean straight lines along edges, to mask the edge of a building, for example.

The Pen S Alpha mode produces curved spline-based paths between endpoints, with an endpoint created per mouse click. Again, the "S" refers to the shape produced, though you can think of it as spline or smooth as well. Use Pen S mode to create smoother curved edges.

In addition to Undo, the Paint toolbar contains buttons to delete the last stroke (and then the one before that, and before that, ...) and to delete all the strokes. After finishing painting, click the Save button on the texture panel to re-write the altered texture(s) to disk.

Your paint strokes are recorded, so that if you later re-calculate the texture, the strokes will be re-applied to the new version. If you have changed the mesh or solve substantially, you may need to re-paint or touch up the alpha.
Far Meshes
You may want to create background textures for large spherical or planar background meshes, i.e., sky maps. This can be inconvenient, as to work properly the sky map or distant backdrop must be very large and very far away, i.e., several thousand times farther away than the maximum distance of the camera motion. To simplify this, SynthEyes allows you to create "Far Meshes," similar to far trackers. Far meshes automatically translate along with the camera, allowing a conveniently sized small mesh to masquerade as a large one. Set the mesh to Far using the button on the 3-D Control Panel. Afterwards, you will see it move with the camera; do not be alarmed!
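The "translates along with the camera" behavior amounts to keeping a fixed offset from the camera position. A tiny conceptual sketch (the function and parameter names are hypothetical, not the SynthEyes implementation):

```python
def far_mesh_position(mesh_offset, camera_position):
    """Effective position of a Far mesh: it keeps a fixed offset from
    wherever the camera currently is, so a small nearby mesh stands in
    for a huge, distant backdrop. Hypothetical sketch of the behavior
    described in the text, not the SynthEyes internals."""
    return tuple(c + o for c, o in zip(camera_position, mesh_offset))
```

Because the offset never changes as the camera translates, the mesh shows no parallax, exactly as a backdrop thousands of units away would behave.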
Image Storage
First, you want to get the shot into RAM. Clearly, having a lot of RAM will help. If you are using a 32-bit system (XP/Vista-32 or OS X), you can only cache about 2.5 GB of imagery in RAM at a time, regardless of how much RAM is in your system, due to the nature of 32-bit addressing. In SynthEyes-64, running on XP/Vista-64, you can use your entire RAM, except for about 1.5 GB. If your shot does not fit, you have two primary options: use the small playback-range markers on the SynthEyes time bar to play back a limited range of the shot at a time, or reduce the amount of memory required by down-sampling the images in the SynthEyes image preprocessor (or perhaps dropping to black and white). If you have 4K film or RED scans and are playing back on a 2K monitor, you might as well down-sample by 2x anyway. If you have a RAID array on your computer, SynthEyes's sophisticated image prefetch system should let you pull large sequences rapidly from disk.
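As a rough illustration of the arithmetic involved, here is a sketch in Python. The 2.5 GB figure is the 32-bit cache limit from the text; the frame sizes and the assumption of uncompressed 8-bit RGB storage are illustrative, not SynthEyes internals:

```python
# Rough sketch of the shot-fits-in-RAM arithmetic. Internal overheads that
# SynthEyes adds per cached frame are ignored here.

def frame_bytes(width, height, channels=3, bytes_per_channel=1):
    """Uncompressed size of one frame in bytes."""
    return width * height * channels * bytes_per_channel

def frames_that_fit(cache_bytes, width, height, **kw):
    """How many whole frames fit in the cache."""
    return cache_bytes // frame_bytes(width, height, **kw)

CACHE_32BIT = int(2.5 * 1024**3)  # ~2.5 GB usable on a 32-bit system

full_4k = frames_that_fit(CACHE_32BIT, 4096, 3112)  # hypothetical 4K film scan
half_2k = frames_that_fit(CACHE_32BIT, 2048, 1556)  # same shot down-sampled 2x

# Down-sampling by 2x in each dimension quarters the memory per frame,
# so four times as many frames fit in the same cache.
assert half_2k == 4 * full_4k
```

This is why down-sampling 4K material for playback on a 2K monitor costs nothing visually but quadruples the length of shot you can hold in RAM.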
Actual-Speed Playback
Once you have your shot playing back as rapidly as possible, you probably want it to play at the desired rate, typically 24, 25, or 29.97 fps.
Safe Areas
You can enable one or more safe-area overlays from the safe area submenu of the View menu.
Troubleshooting
Sliding. This is what you see when an object appears to be moving, instead of sitting stationary on a floor, for example. This is a user error, not a software error, typically due to object placement: almost always, the inserted object has not been located in exactly the right spot, rather than there being a tracking problem. Often, an object is inserted an inch or two above a floor. Be sure you have tracked the right spot: to determine floor level, track marks on the floor, not tennis balls sitting on it, which are effectively an inch or two higher. If you have to work from the tennis balls, set up the floor coordinate system taking the ball radius into account, or place the object the corresponding amount below the apparent floor. Also, place trackers near the location of the inserted object whenever possible. Another common cause of sliding: a tracker that jumps from one spot to another at some frame during the track.

It lines up in SynthEyes, but not XXX. The export scripts do what they can to ensure that everything lines up just as nicely in your post-tracking application as in SynthEyes, but life is never simple. There are preferences that may be set differently; maybe you're integrating into an existing setup; maybe you didn't think hitting xxx would matter; etc. The main causes of this problem have been a mangled field of view (especially when people worry about focal length instead, and have the wrong back plate width), and a post-tracking application that turns out to be using a slightly different timing for the images: one frame earlier or later, 29.97 vs 30 fps, with or without some cropping, etc.

Camera01: No trackers, please fix or set camera to disabled. You have created a scene with more than one camera, by opening a new shot into an existing file, and one camera has no trackers. The message is 100% correct. You need to select the original camera on the Shot menu, then Shot/Remove object.
Can't locate satisfactory initial frame when solving. When the Constrain checkbox is on (Solver panel), the constrained trackers need to be active on the begin and end frames. Consequently, keeping Constrain off is preferable. Alternatively, the shot may lack sufficient parallax. Try setting the Solver panel's Begin and/or End frames manually. For example, set the range to the entire shot, or to a long run of frames with many trackers in common. However, keep the range short enough that the camera motion from beginning to end stays around 30 degrees maximum rotation about any axis.

I tried Tripod mode, and now nothing works; you get "Can't locate satisfactory initial frame" or another error message. Tripod mode turns all the trackers to Far, since they have no distance data in tripod mode. Select all the trackers, and turn Far back off (on the coordinate system control panel).
Bad Solution, very small field of view. Sometimes the final solution will be very small, with a small field of view. Often this means there is a problem with one or more trackers, such as a tracker that switches from one feature to a different one, which then follows a different trajectory. It might also mean an impossible set of constraints, or sometimes an incomplete set of rotation constraints. You might also consider flipping on the Slow but sure box, or giving a hint for a specific camera motion, such as Left or Up. Eliminate inconsistent constraints as a possibility by turning off the Constrain checkbox.

Object Mode Track Looks Good, but Path is Huge. If you've got an object-mode track that looks good (the tracker points are right on the tracker boxes) but the object path is very large and flying all over the place, usually you haven't set up the object's coordinate system, so by default it is the camera position, far from the object itself. Select one tracker to be the object origin, and use two or more additional ones to set up a coordinate system, as if it were a normal camera track.

Master Reset Does Not Work. By design, the master reset does not affect objects or cameras in Refine or Refine Tripod mode: they would have to be set back to their primary mode anyway, and this prevents inadvertent resets.

Can't open an image file or movie. Image file formats leave room for interpretation, and from time to time a particular program may output an image in a way that SynthEyes is not prepared to read. SynthEyes is intended for RGB formats with 8 or more bits per channel; legacy or black-and-white formats will probably not read. If you find a file you think should read, but does not, please forward it to SynthEyes support. Such problems are generally quick to rectify once the problematic file can be examined in detail.
In the meantime, try a different file format or different save options in the originating program, if possible, or use a file format converter if available. Also, make sure you can read the image in a different program, preferably not the one that created it: some images that SynthEyes couldn't read have turned out to have been corrupted previously.

Can't delete a key on a tracker (i.e., by right-clicking in the tracker view window, or right-clicking the Now button). If the tracker is set to automatically key every 12 frames, and this is one of those keys, deleting it will work, but SynthEyes will immediately add a new key! Usually you want to back up a few frames and add a correct key; then you can delete or correct the original one. Or, increase the auto-key setting. Also, you cannot delete a key if the tracker is locked.
Crashes
By far the largest source of SynthEyes crashes is running your machine out of memory. Auto-tracked HD scenes will do that easily on 32-bit systems. If you suspect that may be a problem, or SynthEyes crashes, reduce the queue length down to 10 on the shot setup dialog when you open the shot (or by doing a Shot/Edit Shot). It is also a good idea to re-open SynthEyes if you have auto-tracked the same shot several times, or turn down the undo setting, because the amount of data per undo can be very large.

In the event that SynthEyes detects an internal error, it will pop up an Imminent Crash dialog box asking if you wish to save a crash file. You should take a screen capture with Print Screen on your keyboard, then respond Yes. SynthEyes will save the current file to a special crash location, then pop up another dialog box that tells you that location (within your Documents and Settings folder). You should then open a paint program such as Photoshop, Microsoft Paint, Paint Shop Pro, etc., and paste in the screen capture. Save the image to a file, then e-mail the screen capture, the crash save file, and a short description of what you were doing right before the crash to SynthEyes technical support for diagnosis, so that the problem can be fixed in future releases. If you have Microsoft's Dr. Watson turned on, forwarding that file would also be helpful.

The crash save file is your SynthEyes scene, right before it began the operation that resulted in the crash. You should often be able to continue using this file, especially if the crash occurred during solving. It is conceivable that the file might be corrupted, so if you had saved the file recently, you may wish to go back to that file for safety.
Guide Trackers
Guide Trackers are supervised trackers, added before automated tracking. Pre-existing trackers are automatically used by the automated tracking system to re-register frames as they move. With this guidance, the automated tracking system can accommodate more, or crazier, motions than it would normally expect. Unless the overall feature motion is very slow, you should always add multiple guide trackers distributed throughout the image, so that at any location in the image, the closest guide tracker has a similar motion. [The main exception: if you have a jittery hand-held shot where, if it were stabilized, the image features would actually move rather slowly, you can use only a single guide tracker.] Note: guide trackers are rarely necessary, and are processed differently than in previous versions of SynthEyes.
Stabilization
In this section, we'll go into SynthEyes's stabilization system in depth, and describe some of the nifty things that can be done with it. If we wanted, we could have a single button, Stabilize this!, that would quickly and reliably do a bad job almost all the time. If that's what you're looking for, there are some other software packages that will be happy to oblige. In SynthEyes, we have provided a rich toolset to get outstanding results in a wide variety of situations. You might wonder why we've buried such a wonderful and significant capability quite so far into the manual. The answer is simple: in the hopes that you've actually read some of the manual, because effectively using the stabilizer requires that you know a number of SynthEyes concepts, and how to use the SynthEyes tracking capabilities. If this is the first section of the manual that you're reading, great, thanks for reading this, but you'll probably need to check out some of the other sections too. At the least, you should read the Stabilization quick-start. Also, be sure to check the web site for the latest tutorials on stabilization. We apologize in advance for some of the rant content of the following sections, but it's really in your best interest!
Most imaging software, including you, expects the optic center of an image to fall at the center of that image. Otherwise, it looks weird: the fundamental camera geometry is broken. The optic center might also be called the vanishing point, center of perspective, back focal point, or center of lens distortion. For example, think of shooting some footage out of the front of your car as you drive down a highway. Now cut off the right quarter of all the images and look at the sequence. It will be 4:3 footage, but it's going to look strange: the optic center is off to the side. If you combine off-center footage with additional rendered elements, which have the optic axis at their center, the mismatched centers will make the result look even worse. So when you stabilize by translating an image in 2-D (and usually zooming a little), you've now got an optic center moving all over the place. Right at the point you've stabilized, the image looks fine, but the corners will be flying all over the place. It's a very strange effect, it looks funny, and you can't track it right. If you don't know what it is, you'll look at it and think it looks funny, but not know what has hit you. Recommendation: if you are going to be adding effects to a shot, ask to be the one to stabilize or pan/scan it as well. We've given you the tools to do it well and avoid mishap. That's always better than having someone else mangle it, and having to explain later why the shot has problems, or why you really need the original un-stabilized source by yesterday.
In-Camera Stabilization
Many cameras now feature built-in stabilization, using a variety of operating principles. These stabilizers, while fine for shooting baby's first steps, may not be fine at all for visual effects work. Electronic stabilization uses additional rows and columns of pixels, then shifts the image in 2-D, just like the simple but flawed 2-D compositing approach. These are clearly problematic. One type of optical stabilizer apparently works by putting the camera imaging CCD chip on a little platform with motors, zipping the chip around rapidly so that it catches the right photons. As amazing as this is, it is clearly just the 2-D compositing approach. Another optical stabilizer type adds a small moving lens in the middle of the collection of simple lenses comprising the overall zoom lens. Most likely, the result is equivalent to a 2-D shift in the image plane. A third type uses prismatic elements at the front of the lens. This is more likely to be equivalent to re-aiming the camera, and thus less hazardous to the image geometry.
Doubtless additional types are in use and will appear, and it is difficult to know their exact properties. Some stabilizers seem to have a tendency to jump intermittently when confronted with smooth motions. One mitigating factor for in-camera stabilizers, especially electronic ones, is that the total amount of offset they can accommodate is small: the less they can correct, the less they can mess up. Recommendation: it is probably safest to keep camera stabilization off when possible, and keep the shutter time (angle) short to avoid blur, except when the amount of light is limited. Electronic stabilizers have trouble with limited light, so that type might have to be off anyway.
3-D Stabilization
To stabilize correctly, you need 3-D stabilization that performs keystone correction (like a projector does), re-imaging the source at an angle. In effect, your source image is projected onto a screen, then re-shot by a new camera looking in a somewhat different direction with a smaller field of view. Using a new camera keeps the optic center at the center of the image. In order to do this correctly, you always have to know the field of view of the original camera. Fortunately, SynthEyes can tell us that.
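The re-imaging idea can be sketched with standard pinhole-camera math. This is an illustration of the keystone-correction principle in general, not SynthEyes code; all function names here are hypothetical, and the math assumes the optic center sits at the image center:

```python
import numpy as np

def intrinsics(fov_deg, width, height):
    """Pinhole intrinsic matrix with the optic center at the image center."""
    f = (width / 2) / np.tan(np.radians(fov_deg) / 2)
    return np.array([[f, 0.0, width / 2],
                     [0.0, f, height / 2],
                     [0.0, 0.0, 1.0]])

def restab_homography(K_src, K_dst, R):
    """Map source pixels into a re-aimed camera: project the source image
    onto a screen, then re-shoot it with a new camera rotated by R
    (typically with a smaller field of view, i.e., a different K_dst)."""
    return K_dst @ R @ np.linalg.inv(K_src)

def warp_point(H, x, y):
    p = H @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]

# Sanity check: with no re-aiming (identity rotation) and the same lens,
# every point maps back to itself.
K = intrinsics(60, 1920, 1080)
H = restab_homography(K, K, np.eye(3))
assert np.allclose(warp_point(H, 960, 540), (960, 540))
```

Because the new camera has its own centered intrinsics, the output's optic center stays at the output's center, which is exactly what 2-D translation-based stabilization fails to preserve.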
Stabilization Concepts
Point of Interest (POI). The point of interest is the fixed point being stabilized. If you are pegging a shot, the point of interest is the one point on the image that never moves.

POI Deltas (Adjust tab). These values allow you to intentionally move the POI around, either to help reduce the amount of zoom required, or to achieve a particular framing effect. If you create a rotation, the image rotates around the POI.

Stabilization Track. This is roughly the path the POI took: it is a direction in 3-D space, described by pan/tilt/roll angles, basically where the camera (POI) was looking (except that the POI isn't necessarily at the center of the image).

Reference Track. This is the path in 3-D we want the POI to take. If the shot is pegged, this track is just a single set of values, repeated for the duration of the shot.

Separate Field of View Track. The image preparation system has its own field of view track. The image prep's FOV will be larger than the main FOV, because the image prep system sees the entire input image, while the main tracking and solving works only on the smaller stabilized sub-window output by image prep. Note that an image prep FOV is needed only for stabilization, not for pixel-level adjustments, downsampling, etc. The Get Solver FOV button transfers the main FOV track to the stabilizer.
Separate Distortion Track. Similarly, there is a separate lens distortion track. The image prep's distortion can be animated, while the main distortion cannot. Either the image prep distortion or the main distortion should always be zero; they should never both be nonzero simultaneously. The Get Solver Distort button transfers the main distortion value (from solving or the Lens-panel alignment lines) to the stabilizer, and begs you to let it clear the main distortion value afterwards.

Stabilization Zoom. The output window can only be a portion of the size of the input image. The more jiggle, the smaller the output portion must be, to be sure that it does not run off the edge of the input (see the Padded mode of the image prep window to see this in action). The zoom factor reflects the ratio of the input and output sizes, and also what is happening to the size of a pixel. At a zoom ratio of 1, the input and output windows and pixels are the same size. At a zoom ratio of 2, the output is half the size of the input, and each incoming pixel has to be stretched to become two pixels in the output, which will look fairly blurry. Accordingly, you want to keep the zoom value down in the 1.1-1.3 region. After an Auto-Scale, you can see the required zoom on the Adjust panel.

Re-sampling. There's nothing that says we have to produce the same size image going out as coming in. The Output tab lets you create a different output format, though you will have to consider what effect it has on image quality. Re-sampling 3K down to HD sounds good, but re-sampling DV up to HD will come out blurry, because the original picture detail is not there.

Interpolation Filter. SynthEyes has to create new pixels in between the existing ones. It can do so with different kinds of filtering to prevent aliasing, ranging from the default Bi-Linear, through 2-Mitchell, to the most complex 3-Lanczos. The bi-linear filter is fastest but produces the softest image.
The Lanczos filters take longer, but are sharper, although this can be a drawback if the image is noisy.

Tracker Paths. One or more trackers are combined to form the stabilization track. The trackers' 2-D paths follow the original footage. After stabilization, they will not match the new stabilized footage. There is a button, Apply to Trkers, that adjusts the tracker paths to match the new footage; but then they match that particular footage, and they must be restored to match the original footage (with Remove f/Trkers) before making any later changes to the stabilization. If you mess up, you either have to return to an earlier saved file, or re-track.
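For reference, the Lanczos family is the standard windowed-sinc resampling kernel; this sketch shows the textbook formula (SynthEyes' exact implementation may differ, and the "2"/"3" in 2-Mitchell and 3-Lanczos refer to the kernel support):

```python
import math

def sinc(x):
    """Normalized sinc: sin(pi x) / (pi x)."""
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def lanczos(x, a=3):
    """Textbook Lanczos kernel: sinc(x) windowed by sinc(x/a), support |x| < a."""
    if abs(x) >= a:
        return 0.0
    return sinc(x) * sinc(x / a)

# The kernel is 1 at the sample itself and ~0 at every other integer sample,
# which is why it interpolates (passes through) the original pixels.
assert lanczos(0) == 1.0
assert abs(lanczos(1)) < 1e-9 and abs(lanczos(2)) < 1e-9
# Bilinear, by contrast, is the "tent" kernel max(0, 1 - |x|): cheaper, softer.
```

The wider support and negative lobes are what make Lanczos sharper than bilinear, and also what lets it ring on noisy footage, as noted above.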
Overall Process
We're ready to walk through the stabilization process. You may want to refer to the Image Preprocessor Reference. Track the features required for stabilization: either a full auto-track, supervised tracking of particular features to be stabilized, or a combination.
If possible, solve the shot either for full 3-D or as a tripod shot, even if it is not truly nodal. The resulting 3-D point locations will make the stabilization more accurate, and it is the best way to get an accurate field of view. If you have not solved the shot, manually set the Lens FOV on the Image Preprocessor's Lens tab (not the main Lens panel) to the best available value. If you did set up the main lens FOV, you can import it to the Lens tab.

On the Stabilization tab, select a stabilization mode for translation and/or rotation. This will build the stabilization track automatically if there isn't one already (as if the Get Tracks button was hit), and import the lens FOV if the shot is solved. Adjust the frequency spinner as desired.

Hit the Auto-Scale button to find the required stabilization zoom. Check the zoom on the Adjust tab; using the Padded view, make any additional adjustments to the stabilization activity to minimize the required zoom, or achieve the desired shot framing.

Output the shot. If only stabilized footage is required, you are done. Otherwise, update the scene to use the new imagery, and either re-track or update the trackers to account for the stabilization. Get a final 3-D or tripod solve, and export to your animation or compositing package for further effects work.
There are two main kinds of shots and stabilization for them: shots focusing on a subject, which is to remain in the frame, and traveling shots, where the content of the image changes as new features are revealed.
Stabilizing on a Subject
Often a shot focuses on a single subject, which we want to stabilize in the frame, despite the shaky motion of the camera. Example shots of this type include: the camera person walking towards a mark on the ground, to be turned into a cliff edge for a reveal; a job site to receive a new building, shot from a helicopter orbiting overhead; or a camera car driving by a house, focusing on the house.
To stabilize these shots, you will identify or create several trackers in the vicinity of the subject, and with them selected, select the Peg mode on the Translation list on the Stabilize tab. This will cause the point of interest to remain stationary in the image for the duration of the shot.
You may also stabilize and peg the image rotation. Almost always, you will want to stabilize rotation; it may or may not be pegged. You may find it helpful to animate the stabilized position of the point of interest, in order to minimize the zoom required (see below), and also to enliven a shot somewhat. Some car commercials are shot from a rig that shows both the car and the surrounding countryside as the car drives: they look a bit surreal because the car is completely stationary, having been pegged exactly in place. No real camera rig is that perfect!
In traveling shots, there is no single feature to stabilize. Select the Filter mode for the stabilization of translation, and perhaps rotation. The result is similar to the stabilization done in-camera, though in SynthEyes you can control it and have keystone correction. When the stabilizer is filtering, the Cut Frequency spinner is active. Any vibratory motion below that frequency (in cycles per second) is preserved, and vibratory motion above that frequency is greatly reduced or eliminated. You should adjust the spinner based on the type of motion present and the degree of stabilization required. A camera mounted on a car with a rigid mount, such as a StickyPod, will have only higher-frequency residual vibration, and a larger value can be used. A hand-held shot will often need a frequency around 0.5 Hz to be smooth. Note: when using filter-mode stabilization, the length of the shot matters. If the shot is too short, it is not possible to accurately control the frequency and distinguish between vibration and the desired motion, especially at the beginning and end of the shot. Using a longer version of the take will allow more control, even if much of the stabilized shot is cut after stabilization.
Minimizing Zoom
The more zoom required to stabilize a shot, the less image quality will result, which is clearly bad. Can we minimize the zoom, and maximize image quality? Of course, and SynthEyes provides the controllability to do so. Stabilizing a shot has considerable flexibility: the shot can be stable in lots of different ways, with different amounts of zoom required. We want a shot that
everyone agrees is stable, but minimizes the effect on quality. Fortunately, we have the benefit of foresight, so we can correct a problem in the middle of a shot, anticipating it long before it occurs, and provide an apparently stable result.

Animating the POI. The basic technique is to animate the position of the point of interest within the frame. If the shot bumps left suddenly, there are fewer pixels available on the left side of the point of interest to maintain its relative position in the output image, and a higher zoom will be required. If we have already moved the point of interest to the left, fewer pixels are required, and less zoom is needed. Earlier, in the Stabilization Quick Start, we remarked that the 28% zoom factor obtained by animating the rotation could be reduced further. We'll continue that example here to show how. Re-do the quick start to completion; go to frame 178, with the Adjust tab open, in Padded display mode, with the make-key button turned on. From the display, you can see that the red output-area rectangle is almost at the edge of the image. Grab the purple point-of-interest crosshair, and drag the red rectangle up into the middle of the image. Now everything is a lot safer. If you switch to the Stabilize tab and hit Auto-Scale, the red rectangle enlarges; there is less zoom, as the Adjust tab shows. Only 15% zoom is now required. By dragging the POI/red rectangle, we reduced the zoom. You can see that what we did amounted to moving the POI. Hit Undo twice, and switch to the Final view. Drag the POI down to the left, until the Delta U/V values are approximately 0.045 and -0.035. Switch back to the Padded view, and you'll see you've done the same thing as before. The advantage of the padded view is that you can more easily see what you are doing, though you can get a similar effect in the Final view by increasing the margin to about 0.25, where you can see the dashed outline of the source image.
If you close the Image Prep dialog and play the shot, you will see the effect of moving the POI: a very stable shot, though the apparent subject changes over time. It can make for a more interesting shot and more creative decisions.

Too Much of a Good Thing? To be most useful, you can scrub through your shot and look for the worst frame, where the output rectangle has the most area missing, and adjust the POI position on that frame. After you do that, there will be some other frame that is now the worst frame. You can go and adjust that too, if you want. As you do this, the required zoom will get less and less.
There is a downside: as you do this, you are creating more of the shakiness you are trying to get rid of. If you keep going, you could get back to no zoom required, but all the original shakiness, which is of course senseless. Usually, you will only want to create two or three keys at most, unless the shot is very long. Exactly where you stop is a creative decision based on the allowable shakiness and quality impact.

Auto-Scale Capabilities. The Auto-Scale button can automate the adjustment process for you, as controlled by the Animate listbox and Maximum auto-zoom settings. With Animate set to Neither, Auto-Scale will pick the smallest zoom required to avoid missing pieces on the output image sequence, up to the specified maximum value. If that maximum is reached, there will be missing sections. If you change the Animate setting to Translate, though, Auto-Scale will automatically add Delta U/V keys, animating the POI position, any time the zoom would otherwise have to exceed the maximum. Rewind to the beginning of the shot, and control-right-click the Delta-U spinner, clearing all the position keys. Change the Animate setting to Translate, reduce the Maximum auto-zoom to 1.1, then click Auto-Scale. SynthEyes adds several keys to achieve the maximum 10% zoom. If you play back the sequence, you will see the shot shifting around a bit; 10% is probably too low, given the amount of jitter in the shot to begin with. The Auto-Scale button can also animate the zoom track, if enabled with the Animate setting. The result is equivalent to a zooming camera lens, and you must be sure to note that in the main Lens panel setting if you will 3-D solve the shot later. This is probably only useful when there is a lot of resolution available to begin with, and the point of interest approaches the boundary of the image at the end of the shot. Keep in mind that the Auto-Scale functionality is relatively simple.
By considering the purpose of the shot as well as the nature of any problems in it, you should often be able to do better.
Suppose you want to stabilize at the location of a single tracker, but you want to stabilize the rotation as well. With a single tracker, rotation cannot be stabilized. If you select two trackers, you can stabilize the rotation, but without further action, the point of interest will sit between the two trackers, not at the location of the one you care about. To fix this, select the desired POI tracker in the main viewport, and increase its weight value to the maximum (currently 10). Then select the other tracker(s), and reduce the weight to the minimum (0.050). This will put the POI very close to your main tracker. If you play with the weights a bit, you can make the POI go anywhere within a polygon formed by the trackers. But do not be surprised if the resulting POI seems to slide on the image: the POI is really a 3-D location, and usually the combination of the trackers will not be on the surface (unless they are all in the same plane). If this is a problem for what you want to do, create a supervised tracker at the desired POI location and use that instead. If you have adjusted the weights, and later want to re-solve the scene, you should set the weights back to 1.0 before solving. (Select them all, then set the weight to 1.)
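The effect of the weights can be sketched as a weighted average. The real POI is a 3-D combination, as noted above; this 2-D sketch, with hypothetical tracker coordinates, just shows how the panel's weight limits pull the POI around:

```python
def weighted_poi(points, weights):
    """Weighted average of tracker positions: heavily-weighted trackers
    pull the point of interest toward themselves."""
    total = sum(weights)
    return (sum(w * x for (x, _), w in zip(points, weights)) / total,
            sum(w * y for (_, y), w in zip(points, weights)) / total)

trackers = [(0.30, 0.40), (0.70, 0.60)]  # illustrative 2-D positions

# Equal weights: the POI sits midway between the two trackers.
px, py = weighted_poi(trackers, [1.0, 1.0])
assert abs(px - 0.50) < 1e-9 and abs(py - 0.50) < 1e-9

# Maximum vs. minimum weight (10 vs 0.050, per the panel limits):
# the POI lands almost on top of the first tracker.
x, y = weighted_poi(trackers, [10.0, 0.05])
assert abs(x - 0.30) < 0.01 and abs(y - 0.40) < 0.01
```

A weighted average can only reach points inside the convex hull of the trackers, which is why the text says the POI can go anywhere within the polygon they form, but no further.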
If you are resampling from a larger resolution to a smaller one, you should use the Blur setting to minimize aliasing effects (moiré bands). Before blurring, consider how much of the source image you are actually using. If you have a zoom factor of 2 into a 3K shot, the effective pixel count being used is only 1.5K, so you probably would not blur if you are producing 1920x1080p HD. Due to the nature of SynthEyes's integrated image preparation system, the re-sampling, keystone correction, and lens un-distortion all occur simultaneously in the same pass. This is a vastly improved situation compared to a typical node-based compositor, where the image is resampled and degraded at each stage.
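The effective-resolution arithmetic above can be written out explicitly. This is a sketch of the rule of thumb, not a SynthEyes setting; "3K" is assumed to mean 3072 pixels wide for illustration:

```python
def effective_width(source_width, zoom):
    """Width of the source region actually used at a given stabilization zoom."""
    return source_width / zoom

def should_blur(source_width, zoom, output_width):
    """Blur before downsampling only if the used region still out-resolves
    the output (the anti-aliasing rule of thumb from the text)."""
    return effective_width(source_width, zoom) > output_width

# Zoom of 2 into a 3K plate: only 1.5K of real picture is used,
# so no blur is needed when producing 1920-wide HD.
assert effective_width(3072, 2) == 1536
assert not should_blur(3072, 2, 1920)

# A straight 1:1 resample of the full 3K plate down to HD: blur first.
assert should_blur(3072, 1, 1920)
```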
Note that within the image preparation subsystem, some animated tracks are animated by the field, and some by the frame.

Frame: levels, color/hue, distortion/scale, ROI
Field: FOV, cut frequency, Delta U/V, Delta Rot, Delta Zoom

When you are animating a frame-animated item on an interlaced shot, if you set a key on one field (say 10), you will see the same key on the other field (say 11). This simplifies the situation, at least for these items, if you change a shot from interlaced to progressive or "yes" mode, or back.
After Stabilizing
Once you've finished stabilizing the shot, you should write it back out to disk using the Save Sequence button on the Output tab. It is also possible to save the sequence through the Perspective window's Preview Movie capability. Each method has its advantages, but the Save Sequence button will generally be better for this purpose: it is faster, does less to the images, allows you to write a 16-bit version, and allows you to write the alpha channel. However, it does not overlay inserted test objects the way the Preview Movie does. You can use the stabilized footage you write in downstream applications such as 3dsmax and Maya. But before you export the camera path and trackers from SynthEyes, you have a little more work to do. The tracker and camera paths in SynthEyes correspond to the original footage, not the stabilized footage, and they are substantially different. Once you close the Image Preparation dialog, you'll see that the trackers are doing one thing, and the now-stable image is doing something else.
You should always save the stabilizing SynthEyes scene file at this point, for future use in the event of changes. You can then do a File/New, open the stabilized footage, track it, and export the 3-D scene matching the stabilized footage. But if you have already done a full 3-D track on the original footage, you can save time. Click the Apply to Trkers button on the Output tab. This will apply the stabilization data to the existing trackers. When you close the Image Prep, the 2-D tracker locations will line up correctly, though the 3-D Xs will not yet. Go to the solver panel and re-solve the shot (Go!), and the 3-D positions and camera path will line up correctly again. (If you really wanted to, you could probably use Seed Points mode to speed up this re-solve.) Important: if you later decide to change the stabilization parameters without re-tracking, you must not have cleared the stabilizer. Hit the Remove f/Trkers button BEFORE making any changes, to get back to the original tracking data. Otherwise, if you Apply twice, or Remove after changes, you will just create a mess. Also, the Blip data is not changed by the Apply or Remove buttons, and it is not possible to Peel any blip trails, which correspond to the original image coordinates, after completing stabilization and hitting Apply. So you must either do all peeling first; remove, peel, and re-apply the stabilization; or re-track later if necessary.
Flexible Workflows
Suppose you have written out a stabilized shot, and adjusted the tracker positions to match the new shot. You can solve the shot, export it, and play around with it in general. If you need to, you can pop the stabilization back off the trackers, adjust the stabilization, fix the trackers back up, and re-solve, all without going back to earlier scene files and thus losing later work. That's the kind of flexibility we like. There's only one slight drawback: each time you save and close the file, then reopen it, you're going to have to wait while the image prep system recomputes the stabilized image. That might be only a few seconds, or it might be quite a while for a long film shot. It's pretty silly, when you consider that you've already written the complete stabilized shot to disk! Approach 1: do a Shot/Change Shot Images to the saved stabilized shot, and reset the image prep system from the Preset Manager. This lets you work quickly from the saved version, but be sure to save this scene file separately, in case you need to change the stabilization later for some reason. And of course, going back to that saved file would mean losing later work.
Approach 2: Create an image prep preset (stab) holding the full stabilizer settings. Create another image prep preset (quick), and reset it. Do the Shot/Change Shot Images. Now you've got it both ways: fast loading, and if you need to go back and change the stabilization, switch back to the first (stab) preset, remove the stabilization from the trackers, change the shot imagery back to the original footage, then make your stabilization changes. You'll then need to re-write the new stabilized footage, re-apply it to the trackers, etc. Approach 1 is clearly simpler and should suffice for most simple situations. But if you need the flexibility, Approach 2 will give it to you.
In these situations, the automatic tracker needs to be told, for each frame, which parts of the image should be used to match-move the camera and each object (and, for the remainder, which portions of the image should be ignored entirely). Hint: often you can let the autotracker run, then manually delete the unwanted trackers. This can be a lot quicker than setting up mattes. To help find the undesirable trackers, turn on Tracker Trails on the Edit menu. SynthEyes provides two methods to control where the autotracker tracks: animated splines and alpha-channel mattes. Both can be used in one shot. To create the alpha-channel mattes, you need to use an external compositing program, typically by some variation of painting the matte. If you've no idea what that last sentence said, you can skip the entire alpha-channel discussion and concentrate on animated splines, which do not require any other programs.
This listbox always contains a Garbage item. If you assign Garbage to a spline, that spline is a garbage matte and any blips within it are ignored. If a blip isn't covered by any spline, the alpha channel determines which object the blip is assigned to.
Spline Workflow
When you create a shot, SynthEyes creates an initial static full-screen rectangle spline that assigns all blips to the shot's camera. You might add additional splines for garbage-matte areas or moving objects you want to track. Or you might delete the rectangle and add only a new animated spline, if you are tracking a full-screen moving object. Ideally, you should add splines before running the autotracker the first time; that is simplest. However, if you run the autotracker and then decide to add or modify the splines (using the Roto panel), you can use the Features panel to create a new set of trackers: delete the existing trackers using control-A and delete, or the Delete Leaden button on the Features panel; click the Link Frames button, which updates the possible tracker paths based on your modified splines (don't worry, you will be prompted for this if you forget, in almost all cases); then click the Peel All button to make new trackers.
The separate Link step is required to accommodate workflows with manual peeling using the Peel mode button. (You may also be prompted to Link when entering Peel mode.)
Animated Splines
Animated splines are created and manipulated in the camera viewport, and only while the rotoscope control panel is open. At the top of the rotoscope panel, a chart shows what the left and right mouse buttons do, depending on the state of the Shift key. Each spline has a center handle, a rotate/scale handle, and three or more vertex control handles. Splines can be animated on and off over the duration of the shot, using the stop-light enable button.
To create an animated spline, turn on the magic wand tool, go to the spline's first frame, and left-click the spline's desired center point. Then click a series of points around the edge of the region to be rotoscoped. Too many points will make later animation more time-consuming. You can switch back and forth between smooth and corner vertex points by double-clicking as you create. After you create the last desired vertex, right-click to exit the mode. You can also turn on and use the create-rectangle and create-circle spline creation modes, which allow you to drag out the respective shape.

After creating a spline, go to the last frame and drag the control points to reposition them on the edge. Where possible, adjust the spline center and rotation/scale handle to avoid having to adjust each control point. Then go to the middle of the shot and readjust. Go one quarter of the way in, readjust. Go to the three-quarter mark, readjust. Continue in this fashion, subdividing each unkeyed section until the spline is already in the correct location, which generally won't take too long. This approach is much more effective than proceeding from beginning to end.

You may find it helpful to create keys on all the control points whenever you change any of them. This can make the spline animation more predictable in some circumstances (or suit your style). To do this, turn on the Key all CPs if any checkbox on the roto panel. Note that the splines don't have to be accurate. They are not being used to matte the objects in and out of the shot, only to control blips, which occur relatively far apart.

Right-click a control point to remove its key for that frame. Shift-right-click to remove the control point completely. Shift-left-click the curve to create a new control point along the existing curve. As you build up a collection of splines in the viewport, you may wish to hide some or all of them using the Show this spline checkbox on the roto control panel.
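The subdivision order described above (both ends first, then the midpoint of each remaining unkeyed span) can be sketched as a small breadth-first routine. The function name and frame range are illustrative, not part of SynthEyes:

```python
from collections import deque

def keyframe_order(first, last):
    """Order in which to place spline keys: both ends first, then the
    midpoint of each remaining unkeyed interval, breadth-first."""
    order = [first, last]
    queue = deque([(first, last)])
    while queue:
        a, b = queue.popleft()
        if b - a < 2:
            continue  # no unkeyed frame left between a and b
        mid = (a + b) // 2
        order.append(mid)
        queue.append((a, mid))
        queue.append((mid, b))
    return order

print(keyframe_order(0, 100))  # 0, 100, 50, 25, 75, ...
```

Each new key lands in the largest remaining unkeyed span, which is why this converges so much faster than keying frame by frame from the start.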
The View menu contains an Only selected splines item; with it enabled, only the spline selected in the roto panels list will appear in the viewport.
Object Tracking
Here's how to do an object-tracking shot, using the example shot lazysue.avi, which shows a revolving kitchen storage tray (called a Lazy Susan in the U.S., for some reason). This shot provides a number of educational opportunities. It can be tracked either automatically or under manual supervision, so both will be described. The basic point of object tracking is that the shot contains an object whose motion is to be determined so that effects can be added. The camera might also be moving; that motion might also be determined if possible, or the object's motion can be determined with respect to the moving camera, without concern for the camera's actual motion. The object being tracked must exhibit perspective effects during the shot. If the object occupies only a small portion of the image, this is unlikely. A film or HD source will help provide enough accuracy for perspective shifts to be detected. For object tracking, all the features being tracked must remain rigidly positioned with respect to one another. For example, if a head is to be tracked, feature points must be selected away from the mouth or eyes, which move with respect to one another. If the expression of a face is to be tracked for character animation, see the section on Motion Capture. Moving-object tracking is substantially simpler than motion capture, and requires only a single shot and no special on-set preparation during shooting. IMPORTANT: You need to have six or more different trackers visible pretty much all of the time (technically it can get down to 3 for short periods, but with very low accuracy). Generally you should plan for at least 8-10, to allow for short-term problems in some of them. More trackers means more accuracy and less jitter. Fine print: no matter how many trackers you have, if they are all on the same plane (the floor, a piece of paper, a tablet, etc.), they only count as four, and as the rule says, you must have six!
If the object being tracked is flat, you will have to use a known (flat) mesh as a reference, entering known coordinates for each tracker. Hint: if the object occupies only a portion of the image, it will not supply enough perspective shift to permit the field of view of the lens to be estimated accurately. You must either also do a camera track (even a tripod solve will do), or you must determine a lens field of view by a different method (a different shot, say), and enter it as a Known Field of View.
Automatic Tracking
Open the lazysue.avi shot, using the default settings.
On the Solver panel, set the camera's solving mode to Disabled.
On the Shot menu, select Add Moving Object. You will see the object at the origin as a diamond-shaped null object. Switch to the Roto panel, with the camera viewport selected.
Scrub through the shot to familiarize yourself with it, then rewind back to the beginning. Click the create-spline (magic wand) button on the Roto panel.
Click roughly in the center of the image to establish the center point. Click counterclockwise about the moving region of the shot, inset somewhat from the stationary portion of the cabinetry and inset from the bottom edge of the tray. Right-click after the last point. [The shape is shown below.] Click the create-spline (magic wand) button again to turn it off.
Double-click the vertices as necessary to change them to corners. In the spline list on the Roto panel, select Spline1 and hit the delete key. In the object setting underneath the spline list, change the setting from Garbage to Object01. Your screen should look something like this:
Go to the Feature panel. Hit Blips all frames. Hit Peel All. Go to the end of the shot. Verify that the five dots on the flat floor of the lazy susan have associated trackers: a green diamond on each. If you need to add a tracker to a tracking mark, turn on the Peel button on the Feature panel. Scrub around to locate a long track on each untracked spot, then click on the small blip to convert it to a tracker. Turn off Peel mode when you are done. Switch to the Coordinate System panel. Go to frame 65. Change the tracker on the floor that is closest to the central axis to be the origin. Set the front center floor tracker to be a Lock Point, locked to 10,0,0. Set the front right tracker to XY Plane (or XZ Plane for a Y-Up axis mode). Switch to the Solver panel.
Make sure the Constrain checkbox is off. Hit Go! Then go to the After Tracking section, below.
Supervised Tracking
The shot is best tracked backwards: the trackers can start from the easiest spots and be tracked as long as possible into the more difficult portion at the beginning of the shot. Tracking backwards is suggested for features that are coming towards the camera, for example, shots from a vehicle. Open the lazysue.avi shot, using the default settings. On the Solver panel, set the camera's solving mode to Disabled.
On the Shot menu, select Add Moving Object. You will see the object at the origin as a diamond-shaped null object.
On the Tracker panel, turn on Create. The trackers will be associated with the moving object, not the camera.
Switch to the Camera viewport, to bring the image full frame. Click the To End button on the play bar, and set the playback direction to backwards.
Create a tracker on one of the dots on the shelf. Decrease the tracker size to approximately 0.015, and increase the horizontal search size to 0.03. Create a tracker on each spot on the shelf. Track each as far as possible back towards the beginning of the shot. Use the tracker interior view to scroll through the frames and reposition as needed. As the spots go into the shadow, you can continue to track them using the tracker gain spinner. When a tracker becomes untrackable, turn off Enable, and Lock the tracker. Right-click the gain spinner to reset it for the next tracker.
Continue adding trackers from the end of the shot.
Begin tracking from the beginning, by rewinding, changing the playback direction to forward, then adding additional trackers. You will need these additional trackers to achieve coverage early in the shot, when the primary region of interest is still blocked by the large storage container.
Switch to the graph editor in graph mode, sorted by error. Use the mouse to sweep through and select the different trackers. Or, select Sort by error on the main View menu, and use the up and down arrows on the keyboard to sequence through the trackers. Look for spikes in the tracker velocity curves (solid red and green). Switch back to the camera view as needed for remedial work. Switch to the Coordinate System control panel and camera viewport, at the end of the shot.
Select the tracker at center back on the surface of the shelf; change it to an Origin lock. Select the tracker at bottom left on the shelf; change it to a Lock Point with coordinate X=10. Select the tracker at front right; change it to an On XY Plane lock (or On XZ if you use Y-axis up, for Maya or Lightwave). Switch to the Solver control panel.
Switch to the Quad view; zoom back out on the Camera viewport. Hit Go! After solving completes in a few seconds, hit OK. Continue to the After Tracking section, below.
After Tracking
Switch to the 3-D Objects panel, with the Quad viewport layout selected.
Click the World button, changing it to Object. Turn on the Magic Wand tool and select the Cone object.
In the top view, draw a cone in the top-right quadrant, just above and right of the diamond-shaped object marker. Hint: it can be easier to adjust the cone's position in the Perspective view, locked to the camera, with View/Local coordinate handles turned on. Scrub the timeline to see the inserted cone. In your animation package, a small amount of camera-mapped stand-in geometry would be used to make the large container occlude the inserted cone and reveal it correctly as the shelf spins. Advanced techniques: use Coalesce Trackers and Clean Up Trackers.
Difficult Situations
When an object occupies only a relatively small portion of the frame, there are few trackers, and/or the object moves so that trackers frequently leave view, object tracking can be difficult. You may wind up creating a situation where the mathematically best solution does not correspond to reality, but to some impossible tracker or camera configuration. It is an example of the old adage, Garbage In, Garbage Out (please don't be offended, gentle reader).

Goosing the Solver

Small changes in the initial configuration may allow the solver to, essentially randomly, pick a more favorable solution. Be sure to try the Slow but sure checkbox and all the different possibilities of the Rough camera motion selection, both on the solver panel. Trying a variety of manually-selected seed frames is also suggested. Small changes in trackers, or adding additional trackers, especially at different depths, may also help in obtaining the desired solution.

Inverting Perspective

Sometimes, in a low-perspective object track, you may see a situation where the object model and motion seem almost correct, except that some things that should be far away are too close, and the object rotates the wrong way. This is a result of low, no, or conflicting perspective information. If you cannot improve the trackers or convince the solver to pick a different solution, read on. The Invert Perspective script on the Script menu will invert the object and hopefully let you recover from this situation quickly. It flips the solved trackers about their center of gravity on the current frame, changes them to seed trackers (this will mess up any coordinate system), and changes the solving mode to From Seed Points. You can then re-solve the scene with this solution, and hopefully get an updated, and better, path and object points. You should then switch back to Refine mode for further tracking work!

Using a 3-D Model

You might also encounter situations where you have a 3-D model of the object to be tracked. If SynthEyes knows the 3-D coordinates of each tracker, or at least 6-10 of them, it will be much easier to get a successful 3-D track. You can import the 3-D model into SynthEyes, then use the Perspective window's Place mode to locate the seed point of each tracker on the mesh at the correct location. Or, carefully position the mesh to match one of the frames in the shot, select the group of trackers over top of it, and use the Track menu's Drop onto mesh to place all the seed points onto the mesh at once. Turn on the Seed checkbox for each (if necessary; usually done automatically), and switch to the From Seed Points solving mode on the solver panel.
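Geometrically, the Invert Perspective flip described above amounts to reflecting each solved tracker position through the group's center of gravity, p' = 2c - p. A minimal pure-Python sketch (the point values are made up for illustration; this is not SynthEyes's actual code):

```python
def invert_perspective(points):
    """Reflect each (x, y, z) point through the centroid of the set."""
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    cz = sum(p[2] for p in points) / n
    return [(2*cx - x, 2*cy - y, 2*cz - z) for (x, y, z) in points]

# Illustrative solved-tracker positions (not from a real solve)
pts = [(1.0, 2.0, 3.0), (4.0, 0.0, 1.0), (0.0, 1.0, 2.0)]
flipped = invert_perspective(pts)
```

Reflecting through the centroid preserves the centroid itself, which is why the object's overall placement survives while near/far relationships swap; applying the flip twice recovers the original points (up to rounding).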
If you have determined the 3-D coordinates of your trackers externally (such as from a survey or an animation package), construct a small text file containing the x, y, and z coordinates, followed by the tracker name. Use File/Import/Tracker Locations to set these coordinates as the seed locations, then use the From Seed Points solver option. If a named tracker doesn't exist, it will be created (using the defaults from the Tracker panel, if open), so you can import your particular points first and track them second, if desired, though tracking first is usually easier. The seed points will help SynthEyes select the desired (though suboptimal) starting configuration. In extreme situations, you may want to lock the trackers to these coordinates, which can be done easily by setting all the imported trackers to Lock Points on the Coordinate System panel. To make this easy, all the affected trackers are selected after an Import/Tracker Locations operation.

Overall Distance Constraints

When an object track has little perspective, it can jitter large amounts in depth, i.e., the distance from the camera to the object. SynthEyes allows you to create an overall distance constraint to control and de-jitter that distance, using the solver locking panel. You should track and solve the shot, then set up a distance constraint using the computed distance as a guide: the graph editor and Get 1f buttons on the solver locking panel make this easy to see and control. Then, when you Refine the solution, the distance will match what you have set up, i.e., a nice smooth curve. Since such shots contain little perspective to start with, extreme precision in depth is not required.
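The Import/Tracker Locations text file described above can be generated with a few lines of code. Whitespace separation and the sample names and coordinates here are assumptions for illustration; check what your SynthEyes version accepts:

```python
# Hypothetical survey coordinates, keyed by tracker name
survey = {
    "Tracker1": (10.0, 0.0, 0.0),
    "Tracker2": (0.0, 50.0, 0.0),
    "Tracker3": (12.5, 3.0, 1.75),
}

# One line per tracker: x, y, z, then the tracker name
with open("tracker_locations.txt", "w") as f:
    for name, (x, y, z) in survey.items():
        f.write(f"{x} {y} {z} {name}\n")
```

Trackers named in the file that don't exist yet will be created on import, so a file like this can also serve as a tracker "shopping list" before any tracking is done.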
When you use an overall distance constraint, the distance is measured from the camera to the origin of the object (or, for plain camera solves, from the camera to the overall coordinate system origin). For moving-object tracks with distance constraints, you should pay some attention to the location of the origin. You must be certain to set up a coordinate system for the object! Set up the origin either at the natural center of rotation of the object (for example, at the neck for a head), or at a point located within the object's point cloud. If the origin is well outside the object, any small jitter in the object's orientation will have unwarranted effects on its position as well.
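The warning about origin placement can be quantified with the small-angle lever-arm rule: position error is roughly the lever arm times the orientation jitter in radians. The jitter and distances below are made-up illustrative values, not measurements:

```python
import math

jitter_deg = 0.5                          # assumed orientation jitter
theta = math.radians(jitter_deg)          # convert to radians

# Lever arm = distance from the object's origin to the object itself
for lever_arm_cm in (5.0, 50.0, 500.0):
    pos_error_cm = lever_arm_cm * theta   # small-angle approximation
    print(f"lever arm {lever_arm_cm:6.1f} cm -> ~{pos_error_cm:.2f} cm position error")
```

An origin a few centimeters inside the point cloud turns a half-degree wobble into a fraction of a millimeter of drift, while an origin five meters outside the object amplifies the same wobble to several centimeters.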
Multi-Shot Tracking
SynthEyes includes the powerful capability of allowing multiple shots to be loaded simultaneously, tracked, linked together, and solved jointly to find the best tracker, camera, and (if present) object positions. With this capability, you can use an easily-trackable overview shot to nail down basic locations for trackable features, then track a real shot with a narrow field of view, few trackable features, or other complications, using the first shot as a guide. Or, you might use left and right camera shots to track a shot-in-3-D feature. If you don't mind some large scene files, you can load all the shots from a given set into a single scene file and track them together to a common set of points, so that each shot can share the same common 3-D geometry for the set. In this section, we'll demonstrate how to use a collection of digital stills as a road map for a difficult-to-track shot: in this case, a tripod shot for which no 3-D recovery would otherwise be possible. A scenario such as this requires supervised tracking, because of the scatter-shot nature of the stills. The tripod shot could be automatically tracked, but there's not much point: you must already perform supervised tracking to match the stills, and not much is gained by adding a lot more trackers to a tripod shot. This example takes around two hours to perform, and is intentionally complex to illustrate a more involved scenario. The required files can be found at http://www.ssontech.com/download.htm: both land2dv.avi and DCP_103x.zip are required. The zip file contains a series of digital stills, and should be unpacked into the same working folder as the AVI. You can also download multix.zip, which contains the .sni scene files for reference. Prerequisites: you need to be able to do supervised tracking and handle coordinate-system setup; this is not a beginner-level description.
Start with the digital stills, which are 9 pictures taken with a digital still camera, each 2160 by 1440. Start SynthEyes and do a File/New. Select DCP_1031.JPG. Use the default settings, including an aspect ratio of 1.5.
Create trackers for each of the balls: six at the top of the poles, six near ground level on top of the cones. Create each tracker, and track it through the entire (nine-frame) shot. Because each camera position is much different from its predecessor, you will have to manually position the tracker in each frame. It will be helpful to turn on the Track/Hand-held Sticky menu setting. You can use control-drag to make final positioning easier on the high-resolution stills. Create the trackers in a consistent order, for example, from back left to front left, then back right to front right. After completing each track, Lock the tracker. The manual tracking stage will take around an hour. The resulting file is available as multi1.sni. Set up a coordinate system using the ground-level (cone) trackers. Set the front-left tracker as the Origin, the back-left tracker as a Lock Point at X=0, Y=50, Z=0, and the front-right tracker as an XY Plane tracker. You can solve this shot now: switch to the Solver panel and hit Go! You should obtain a satisfactory solution for the ball locations, and a rather erratic and spaced-out camera path, since the camera was walked from place to place. (multi2.sni) It is time for the second shot. On the Shot menu, select Add Shot (or File/Import/Shot). Select the land2dv.avi shot. Set Interlacing to No; the shot was taken with a Canon Optura Pi in progressive-scan mode. Bring the camera view full-screen, go to the Tracker panel, and begin tracking the same ball positions in this shot with bright-spot trackers. Set the Key spinner to 8, as the exposure ramps substantially during the shot. The balls provide low contrast, so some trackers are easiest to control from within the tracker view window on the Tracker panel. The back-right ground-level ball is occluded by the front-left above-ground ball, so you do not have to track it. It will be easiest to create the trackers in the same order as in the first shot.
(multi3.sni) Next, create links between the two sets of trackers, to tell SynthEyes which trackers were tracking the same feature. You will need a bare minimum of six (6) links between the shots. Switch to the Coordinate System panel, and the Quad view. Move far enough into the shot that all trackers are in frame.
Camera/Viewport Matching
To assign links, select a tracker from the AVI in the camera view. Go to the top view and zoom in to find the matching 3-D point from the first shot, and ALT-click it (Mac: Command-click). Select the next tracker in the camera view, and ALT-click the corresponding point in the top view; repeat until all are assigned. If you created the trackers consistently, you can sequence through them in order.
Match By Name
Another approach is to give each tracker a meaningful name. In this case, clicking the Target Point button will be helpful: it brings up a list of trackers to choose from. A more subtle approach is to have matching names, then use the Track/Cross Link By Name menu item. Having truly identical names makes things confusing, so the cross link command ignores the first character of each name. You can then name the trackers lWindowBL and rWindowBL and have them automatically linked. After setting up a number of matching trackers, select the trackers on the video clip, and select the Cross Link By Name menu item. Links will be created from the selected trackers to the matching trackers on the reference shot. Notes on links: a shot with links should have links to only a single other shot, which should not have any links to other shots. You can have several shots link to a single reference.
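The ignore-the-first-character rule used by Cross Link By Name can be sketched as a dictionary lookup. The tracker names below follow the lWindowBL/rWindowBL example above and are otherwise illustrative:

```python
# Trackers on the shot being linked, and on the reference shot
left  = ["lWindowBL", "lWindowTR", "lDoorKnob"]
right = ["rWindowBL", "rWindowTR", "rChimney"]

# Index the reference trackers by their name minus the first character
by_suffix = {name[1:]: name for name in right}

# Pair up any tracker whose remaining name matches
links = [(l, by_suffix[l[1:]]) for l in left if l[1:] in by_suffix]
print(links)  # pairs that would be cross-linked
```

Unmatched names (lDoorKnob, rChimney here) are simply skipped, so only the deliberately paired trackers acquire links.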
Ready to Solve
After completing the links, switch to the Solver panel. Change the solver mode to Indirect, because this camera's solution will be based on the solution initially obtained from the first shot. (multi4.sni) Make sure Constrain is off at this time. Hit Go! SynthEyes will solve the two shots jointly, that is, find the point positions that match both shots best. Each tracker will still have its own position; trackers linked together will be very close to one another.
In the example, you should be able to see that the second (tripod) shot was taken from roughly the location of the second still. Even if the positions were identical, differences between cameras and the exact features being tracked will result in imperfect matches. However, the pixel positions will match satisfactorily for effect insertion. The final result is multi5.sni.
Stereoscopic Movies
3-D movies have had a long and intermittent history, but recently they have made a comeback as the technology improves, with polarized or frame-sequential projectors, and better understanding and control over convergence to reduce eyestrain. 3-D movies may be a major selling point to bring larger audiences back to theaters. Filmmakers would like to use the entire arsenal of digital techniques to help produce more compelling and contemporary films. SynthEyes can be, and has been, used to handle stereo shots using a variety of techniques based on its single-camera workflow, but there are now extensive specialized features to support stereo filmmaking. SynthEyes is designed to help you make stereo movies. The stereo capabilities range from tracking and cross-camera linking to solving, plus a variety of user-interface tweaks to simplify handling stereo shots. A special Stereo Geometry panel allows constraints to be applied between the cameras to achieve specific relative positions and produce smoother results. Additional stereo capabilities will be added as the stereo market develops. Stereo filmmaking requires a wide variety of techniques from throughout SynthEyes: the image preprocessor, tracking, solving, coordinate-system setup, etc. The material here builds upon that earlier material; it is not repeated here because it would be exactly that, a repetition. If this is the first part of the manual you are reading, expect to need to read the rest of it. You will need to know a fair amount about 3-D movie-making to be able to produce watchable 3-D movies. 3-D is a bleeding-edge field, and you should allow lots of time for experimentation. SynthEyes technical support is necessarily limited to SynthEyes; please consult other training resources for general stereoscopic movie theory and workflow issues in other applications.
SynthEyes's perspective view can display anaglyph (red/cyan is a good combination) or interlaced views from both cameras simultaneously, controlled by the right-click menu's View/Stereo Display item and the Scene Settings panel. You can select either normal color-preserving or gray-scale versions of the anaglyph display. When using an interlaced display, you should probably have more than one display and float the interlaced perspective to that monitor. Only the actual perspective view will be interlaced, not the entire user interface.
24/7 Perspective

With stereo, there are two different views all the time, and even a single frame from each camera is enough to produce a 3-D solve. At least in theory, you never have to worry about tripod shots that do not produce 3-D. Every shot can produce 3-D. Every stereo shot can also be used in a motion-capture setup to produce a separate path for even a single moving feature. That's clearly good news. But before you get too excited, recall that in a stereo camera rig, the cameras are usually under 10 cm apart. Compare that to a dolly shot or crane shot with several meters of motion to produce perspective. And each of the hundreds of frames in a typical moving-camera shot produces additional data to help produce a more accurate solution. So, even though you can produce 3-D from a very short stereo shot, the information will not be very accurate (that's the math, not a software issue), and longer shots with a moving camera will always help produce better-quality 3-D data. On a short shot with no camera-rig translation (with the rig on a tripod), you can get 3-D solves for features near the camera(s). Features that are far from the cameras must still be configured as Far in SynthEyes, meaning that no 3-D depth can be determined. Similarly, for motion-capture points, accuracy in depth will degrade as the points move away from the camera. The exact definition of far depends on the resolution and field of view of the cameras; you might consider something far if it is several hundred times the inter-ocular distance from the camera.

Easier Sizing

If we know the inter-ocular distance (and we should always have a measurement for the beginning or end of the shot), then we know the coordinate-system sizing immediately. There is no need for distance measurements from the set, and no problem with consistency between shots. That makes coordinate-system setup much simpler.
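As a rough worked example of the several-hundred-times rule of thumb above (the factor of 300 and the 65 mm spacing are assumptions for illustration; the real cutoff depends on resolution and field of view):

```python
# Rule-of-thumb "Far" threshold for a stereo rig
interocular_m = 0.065        # typical rig spacing, ~65 mm (assumed)
far_factor = 300             # "several hundred" (assumed value)

far_threshold_m = interocular_m * far_factor
print(f"Features beyond ~{far_threshold_m:.1f} m should likely be marked Far")
```

So with a hand-width rig, anything past roughly twenty meters contributes essentially no depth information; compare that to the several-meter baselines a dolly or crane move provides.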
On a stereo shot, when an inter-ocular distance is set up, the *3 coordinate system tool generates a somewhat different set of constraints, one that aligns the axes but does not impose its own size, allowing the inter-ocular distance to take effect. Keep in mind that the sizing is only as good as the measurement. If the measurement is 68 +/- 1 mm, that is over 1% uncertainty. If you have some other measurement that you expect to come out at 6000 mm and it comes out at 6055, you shouldn't be at all surprised. Some scenes with little perspective will not vary much depending on inter-ocular distance, so the inter-ocular distance may not size the scene accurately. If you have a crucial sizing requirement, you should use a direct scene measurement; it will be more accurate. (In that case, switch to a Fixed inter-ocular distance, instead of Known.)
Most of these should be fairly obvious. Many can be manipulated in post, and SynthEyes is designed to help you achieve the required matching, even from very low-tech rigs. Even the simplest rig will require matching work in post-production. It is not possible to bolt two cameras together, even with some kind of mechanical alignment feature, and have the cameras be optically aligned. Cameras are not manufactured to be repeatable in this way; the circuit-board and chip-in-socket alignment within the camera is not sufficiently accurate or repeatable between cameras to be directly useful.

Synchronization

The cameras should be synchronized so that they take pictures at exactly the same instant. Otherwise, when you do the tracking and solving, you will by definition have some very subtle geometric distortions and errors: basically you can't triangulate, because the subject is at two different locations, corresponding to the two different times. To make life interesting, if the film will be projected using a frame-sequential projector (or LCD glasses), then the two cameras should be synchronized but 180 degrees out of phase. But that will mean you cannot track; it is the worst possible synchronization error. Instead, for provable accuracy you should film and track at twice the final rate (e.g. 48 or 60 fps progressive), then have the projectors show only every other frame from each final image stream. If circumstances warrant that you shoot unsynchronized or anti-synchronized footage, you must be aware that you (and the audience) will be subject to motion-related problems.
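The double-rate recommendation above can be illustrated with frame indices: shoot both synchronized streams at 48 fps, then have the projector show every other frame from each stream, 180 degrees out of phase, yielding 24 fps per eye. A sketch with illustrative frame numbering:

```python
# Frames captured by each (synchronized) camera at 48 fps
frames = list(range(12))          # a short capture, frames 0..11

# Frame-sequential projection: even frames from the left-eye stream,
# odd frames from the right-eye stream (180 degrees out of phase)
left_eye_shown = frames[0::2]
right_eye_shown = frames[1::2]

print(left_eye_shown)             # each eye gets half the frames: 24 fps
print(right_eye_shown)
```

Tracking is done on the full 48 fps streams, where corresponding left/right frames are truly simultaneous; only projection drops to alternating half-rate frames.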
CMOS cameras are also subject to the rolling shutter problem, which affects monocular projects as well as stereoscopic ones. The rolling shutter will also result in geometric errors, depending on the amount of motion in the imagery. To address a common misconception, this problem is not reduced by a short shutter time. If at all possible, use (synchronized) CCD or film cameras.

One-Toe vs. Two-Toe Camera Rigs

Ideally, a camera rig has two cameras next to each other, perfectly aligned. If both camera viewing axes are perfectly parallel, they are said to be converged at infinity, and this is a particularly simple case for manipulation. Usually, one or both cameras toe in slightly to converge at some point closer to the camera, just as our eyes converge to follow an approaching object. Mechanically, this may be accomplished directly, or by moving a mirror. We refer to the total inwards angle of the cameras as the vergence angle. It might seem that there is no difference between one camera toeing in or two, but there is. Consider the line between the cameras. With both cameras properly aligned and converged at infinity, the viewing direction is precisely perpendicular to the line between the cameras. If one camera toes in, the other remains at right angles to the line between them. If both cameras toe in, they both toe in an equal amount, with respect to the line between them. If you consider an object approaching the rig along the centerline from infinity, the two-toe rig remains stationary with both cameras toeing in. The one-toe rig moves backwards and rotates slightly, in order to keep one camera at right angles to the line between the camera centers. SynthEyes works with either kind of rig. Though the one-toe rigs seem a little unnatural (effectively they make the audience turn their heads), the motions are very small and not really an issue for people, except for those who are trying to do their tracking to sub-pixel accuracy!
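For a symmetric (two-toe) rig, the vergence angle and the inter-ocular distance together determine where the two optical axes cross. This little sketch (our own illustration, not SynthEyes code) shows the basic trigonometry:

```python
import math

# Illustration only: for a symmetric two-toe rig, each camera toes in by
# half the total vergence, so the optical axes cross at a distance of
# (IOD/2) / tan(vergence/2) from the baseline between the nodal points.
def convergence_distance(iod_mm, vergence_deg):
    half = math.radians(vergence_deg / 2.0)
    return (iod_mm / 2.0) / math.tan(half)

print(f"{convergence_distance(65.0, 2.0):.0f} mm")  # about 1862 mm at 2 degrees
```

Note how quickly small vergence angles push the convergence point out; this is why even small tracking errors in the vergence matter.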
The one-toe rigs are mechanically simpler and seem more likely to actually produce the motion they are supposed to (are the two-toe rigs really moving through exactly matching angles? Are the axes parallel? Maybe, maybe not).

From Where to Where?

The inter-ocular distance is a very important number in stereo moviemaking: it is the distance between the eyes, or the cameras, with a typical value around 65 mm. It is frequently manipulated by filmmakers, however; more on that in a minute. Although you can measure the distance between your buddy's eyes within a few millimeters pretty easily, when we start talking about cameras it is a little less obvious where to measure. It turns out that this question is much more significant than you might think as soon as you allow the camera vergence to change: if the cameras are tilted inwards towards each other, the point at which you measure will have a dramatic
effect. Depending on where you measure, the distance will change more or less, or not at all. The proper point to consider is what we call the nodal point, as used for tripod-mode shots and panoramic photos. It's not technically a nodal point for opticians: it is the center of the camera aperture, as seen from the outside of the camera. See this article on the pivot point for panoramic photography for more details. The inter-ocular distance (IOD) is the distance between the nodal points of the cameras.

Dynamic Rigs

Though the simplest rigs bolt the two cameras together at a fixed location, more sophisticated rigs allow the cameras to move during a shot. The simplest and most useful motion may not be what you think: it is to change the inter-ocular distance on the fly. This preserves the proper 3-D sensation, while avoiding extreme vergence angles that make it difficult to keep everything on-screen in the movie theater. The more complex effect is to change the vergence angle on the fly. This must be done with extreme caution: unless the rig is very carefully built, changing the vergence angle may also change the inter-ocular distance, or even change the direction between the cameras as well. If a rig is to change the vergence angle, it must be constructed to locate the camera nodal point exactly at the center of the vergence rotation. A rig that changes only the inter-ocular distance does not have to be calibrated as carefully. A changing IOD should always be exactly parallel to the line between the camera nodal points, which in turn means that on a one-toe rig, the non-moving camera must be perpendicular to the translation axis, and a two-toe rig must have equal toe-in angles relative to the translation axis. The penalty for a rig that does not maintain a well-defined relationship between the cameras is simple: it must be treated as two separate cameras.
The most dangerous shots and rigs are those with changing vergence, either with mirrors or directly, where the center of rotation does not exactly match the nodal point. Unless you have calibrated, it will be wrong. You will be in the same boat as people who shoot green-screen with no tracking markers, and that boat has a hole.
and 3 angles (for example Pan, Tilt, and Roll, or Roll, Pitch, and Yaw). The same six numbers are used for the basic position and orientation of any object. Those particular six numbers are not convenient for describing the relationship between the two cameras in a stereo pair, however! In the real world, there is only one real position measurement that can be made accurately, the inter-ocular distance, and it controls the scaling of everything. Accordingly, SynthEyes uses spherical coordinates, which have only a single distance measurement, to describe the relationship between the cameras. Of the two cameras, we'll refer to one as the dominant camera (the one we want to think about the most, typically the right), and the other as the secondary camera. The camera parameters describe the relationship of the secondary (child) camera to the dominant (parent) camera. Which camera is dominant is controlled on the Stereo Geometry panel. In each case, when we talk about the position of a camera, we are talking about the position of its nodal point (inside the front of the lens), not of the base of the camera, which doesn't matter. You can think about the stereo parameters in the coordinate space of the dominant camera. The dominant camera has a ground plane consisting of its side vector, which flies out the right side from the nodal point, and its look vector, which flies forward from the nodal point towards what it is looking at. The camera also has an up vector, which points in the direction of the top of the camera image. All of these are relative to the camera body, so if you turn the camera upside down, the camera's up vector is now pointing down! Here are the camera parameters. They have been chosen to be as human-friendly as possible. Most of the time, you should be concerned mainly with the Distance and Vergence; SynthEyes will tell you what the other values are, and they shouldn't be messed with much.

Distance. The inter-ocular distance between the cameras. Note that this value is measured in the same units as the main 3-D workspace. So if you want the overall scene to be measured in feet, the inter-ocular distance should be measured in feet as well. Centimeters is a reasonable overall choice.

Direction. This is the direction (angle) towards the nodal point of the secondary camera from the dominant camera, in the ground plane. If the secondary camera is directly next to the dominant camera, in the most usual configuration, the direction value is zero. The Direction angle increases if the secondary camera moves forward, so that at 90 degrees, the secondary camera is in front of the primary camera (ignoring relative elevation). See the additional considerations in Two-Toe, Revisited, below.

Elevation. This is the elevation angle above the dominant camera's ground plane. At zero, the secondary camera is on the dominant camera's ground plane. At 90 degrees, the secondary camera is above the dominant camera, on its up axis.
Vergence. This is the total toe-in angle by which the two cameras point in towards each other. At zero, the look directions of the cameras are parallel; they are converged at infinity. At 90 degrees, the look directions are at right angles. See Two-Toe, Revisited, below.

Tilt. At a tilt of zero, the secondary camera is looking in the same ground plane as the dominant camera. At positive angles, the secondary camera is looking increasingly upwards, relative to the dominant camera. At a tilt of 90 degrees, the secondary camera is looking along the dominant camera's up axis, perpendicular to the dominant camera's viewing direction (they aren't looking at the same things at all!).

Roll. The roll of the secondary camera relative to the dominant. At a roll angle of zero, the cameras aren't twisted with respect to one another at all; both cameras' up vectors point in the same direction. As the roll angle increases, the secondary camera rolls counter-clockwise with respect to the dominant camera, as seen from the back.

You can experiment with the stereo parameters by opening a stereo shot, opening the Stereo Geometry panel, clicking More, and then one of the Live buttons. Adjusting the spinners will then cause the selected camera to update appropriately with respect to the other camera.
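As a way to internalize the Distance/Direction/Elevation description above, here is a hedged geometric sketch (our reading of the parameter definitions, not SynthEyes internals; the axis conventions and signs are assumptions) that places the secondary camera's nodal point in the dominant camera's coordinate frame:

```python
import math

# Sketch only: assumed axes in the dominant camera's frame are
# side = +X, up = +Y, look = +Z. The choice of +X for the secondary
# camera at Direction=0 is an assumption for illustration.
def secondary_position(distance, direction_deg, elevation_deg):
    d = math.radians(direction_deg)
    e = math.radians(elevation_deg)
    x = distance * math.cos(e) * math.cos(d)   # along the side vector
    z = distance * math.cos(e) * math.sin(d)   # forward at Direction=90
    y = distance * math.sin(e)                 # up at Elevation=90
    return (x, y, z)

print(secondary_position(6.5, 0.0, 0.0))   # (6.5, 0.0, 0.0): directly beside
print(secondary_position(6.5, 90.0, 0.0))  # secondary directly in front
```

This matches the text's checkpoints: Direction=90 puts the secondary camera in front of the dominant one, and Elevation=90 puts it on the up axis.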
Two-Toe, Revisited
The camera parameters, as described above, describe the situation for single-toe camera rigs, where only one camera (the secondary) rotates for vergence. The situation is a little more complex for two-toe rigs, where both cameras toe inwards for vergence. These modes are Center-Left and Center-Right in the Stereo Geometry panel's dominance selection. The dominant camera never moves during two-toed vergence, yet we still achieve the effect of both cameras tilting in evenly. How is that possible? Consider a vergence angle of 90 degrees. With a one-toe rig, the secondary camera has turned 90 degrees in place without moving, and is now looking directly at the primary camera. With a two-toe rig at a vergence of 90 degrees, the secondary has turned 90 degrees so it is looking at right angles to the look direction of the dominant camera. But, and this is the key thing, at the same time the secondary camera has swung forward to what would otherwise be Direction=45 degrees, even though the Direction is still at zero. As a result, the secondary camera has tilted in 45 degrees from the nominal look direction, and the dominant camera is also 45 degrees from the nominal look direction, which is the perpendicular to the line between the two cameras. The thing to keep in mind is that the line between the two cameras (nodal points) forms the baseline; the nominal overall rig look direction is 90 degrees
from that. SynthEyes changes the baseline in centered mode to maintain the proper even vergence for the two cameras; it does that by changing the definition of where the zero Direction is. The Direction value is offset by one-half the vergence in centered mode. If you put the stereo pair into one of the Centered modes and use Live mode, you'll see the camera swinging forward and backward in response to changes in the vergence. Once you understand it, it should make sense. If it seems a bit more complex and demanding than single-toe rigs, you're right!
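The centered-mode rule above can be summarized in one line (an illustrative sketch with a hypothetical function name, and the sign of the offset is our assumption): the effective direction of the secondary camera is the stored Direction plus half the vergence.

```python
# Sketch of the centered-mode rule: the zero-Direction reference swings
# by half the vergence, so the stored Direction can stay at zero while
# the secondary camera physically moves forward.
def effective_direction(direction_deg, vergence_deg, centered=True):
    return direction_deg + (vergence_deg / 2.0 if centered else 0.0)

print(effective_direction(0.0, 90.0))  # 45.0: matches the 90-degree example
```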
The third case is the riskiest, and is subject to the details of each shot: it may not always be possible to determine the camera parameters accurately. We recommend this approach only for rescuing un-calibrated shots at present.
Electronic Calibration
To electronically calibrate, print out the calibration grid from the web site using a large-format black-and-white (drafting) printer, which can be done at printing shops such as FedEx Kinko's. Attach the grid to a convenient wall, perhaps with something like Joe's Sticky Stuff from Cinetools. Position the rig on a tripod in front of the wall, as close as it can get with the entire outer frame visible in both cameras (zoom lenses wide open). Adjust the height of the rig so that the nodal points of the cameras are at the same height as the center point of the grid. Re-aim the cameras as necessary to center them on the grid. This will converge them at that distance to the wall; you may want to offset them slightly outwards or inwards to achieve a different convergence distance, depending on what you want. Shoot a little footage of this static setup. Record the distance from the cameras to the wall, and the width of the visible grid pattern (48" on our standard grid at 100% size). For camcorders with zoom lenses, you should shoot a sequence, zooming in a bit at a time in each camera. You can use one remote control to control both cameras simultaneously. This sequence will allow the optic center of the lens to be determined; camcorder lenses are often far off-center. Once you open the shots in SynthEyes, create a full-width checkline and use the Camera Field of View Calculator script to determine the overall field of view. Use the Adjust tools on the image preprocessor to adjust each shot to have the same size and rotation angle. Use the lens distortion controls to remove any distortion. Correct any mirroring with this pass as well; see the mirror settings on the image preprocessor's Rez tab. Use the cropping and re-sampling controls to remove lens off-centering. A small Delta Zoom value will equalize the zoom. See the tutorial for an overview of this process.
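The distance-to-wall and grid-width measurements recorded above are what make the field-of-view calculation possible. This back-of-envelope sketch shows the underlying trigonometry (it is our illustration of the principle, not the actual Camera Field of View Calculator script):

```python
import math

# Sketch only: horizontal field of view from the visible grid width and
# the camera-to-wall distance, both in the same units (e.g. inches).
def horizontal_fov_deg(grid_width, distance):
    return math.degrees(2.0 * math.atan((grid_width / 2.0) / distance))

print(f"{horizontal_fov_deg(48.0, 60.0):.1f} degrees")  # -> 43.6 degrees
```

For example, the standard 48" grid filling the frame from 60 inches away implies a horizontal field of view of about 43.6 degrees.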
Your objective is to produce a set of settings that take the two different images and make them look exactly the same, as if your camera rig were perfect. Once you've done that, you can record all of the relevant settings (see the Export/Stereo/Export Stereo Settings script), and re-use them on each of your shots (see Import/Stereo/Import Stereo Settings) to make the actual images match up properly. Then, you should save a modified version of each sequence out to disk for subsequent tracking, compositing, and delivery to the audience. Obviously this process requires that your stereo rig stay rigid from shot to shot (or that periodic calibrations be performed). The better the shots match, the less image quality and field of view will be lost in making them match.
Opening Shots
SynthEyes uses a control on the shot parameters panel to identify shots that need stereo processing. Open the left shot, and on the shot settings panel, click the Stereo control (initially Off) until it says Left. After you adjust any other parameters and click OK, SynthEyes will immediately prompt you to open the right shot. Any settings, including image preprocessor settings, will be copied over to the right shot to save time. If you do not configure the stereo setting when you initially open the shot, you can do so later using the shot settings dialog. You can turn it on or off as your needs warrant. To get stereo processing, you must open the left shot first and the right shot second, and set the first shot to Left and the second to Right. Both shots must have the same shot-start and -end frame values. Stereo rigs that include mirrors will produce reversed images. If the camera was mechanically calibrated, use the Mirror Left/Right or Mirror Top/Bottom checkboxes on the Rez tab of the image preprocessor to remove the mirroring. (If the cameras are electronically calibrated using the image preprocessor, you should remove mirroring then, as described above.) Note that you can use the Stereo view in the Perspective view to show both images simultaneously as an anaglyph.
Stereoscopic Tracking
In a stereo setup, there are links from trackers on the secondary camera to the corresponding trackers on the dominant camera; they tell SynthEyes which features are tracked in both cameras. These links are always treated as peg-type locks, regardless of the state of the Constrain checkbox. Automatic If you use automatic tracking, SynthEyes will track both shots simultaneously and automatically link trackers between the two shots. For this to work well, your two shots need to be properly matched, both in overall geometric alignment, and in color and brightness grading. A quick fix, if needed, is to turn on high-pass filtering in the image preprocessor. Be sure to set the proper dominant camera before auto-tracking, as SynthEyes will examine that to determine in which direction to place the links (from secondary trackers to primary camera trackers, which can be left to right or right to left). Supervised If you use supervised tracking, you will need to set up the links manually between the corresponding left- and right-camera trackers. There are a number of special features to make handling stereo tracker pairs easier (for automatic trackers also). It is easiest to do supervised stereo tracking with a perspective window containing one eye, and a camera view containing the other eye, typically the Camera+Perspective viewport setup or a custom configuration. With the right
camera active, click Lock to Current Cam on the perspective view's right-click menu. Then make the left camera active. Use control-Home to vertically center the camera view to match the perspective view. Use the Shot menu's Activate Other Eye (default accelerator key: minus sign) to flip-flop the eyes shown in the camera and perspective views. In the perspective view, the right-click View/Show only locked setting can be helpful: it causes the perspective view to show only the trackers on the camera that the perspective view is locked to. The following discussion largely requires this setup, with the perspective view showing the opposite eye of the camera view.

Creating Stereo Pairs, Easy Way

This is the fast and easy way; it allows you to create many tracker pairs quickly. After you have created them, you will track them separately (the SimulTrack view can help there). Set the tracker control panel to Create Tracker mode. Set the perspective view to Add Stereo 2nd mode (on the Other Modes submenu). Click and drag in the camera view to create a tracker on one camera, and adjust whatever settings you need to. Then click and drag in the perspective view to create the matching tracker as a clone of the first. The two trackers will be linked together, and the names adjusted to match. If you entered a name for the first tracker, that name will be used, with L and R appended. TIP: If you only need to create a single tracker pair, you can do that quickly, without having to adjust the mode of either the camera or perspective views: hold down the C key and click in the camera view, and with C held down, click in the perspective view. The C key acts as an accelerator in both views.

Creating Stereo Pairs, Manually

To create stereo pairs, you should create a few trackers on one eye, switch to the other eye (typically using the minus-sign key), create the matching trackers, then link them together.
To link each pair, click the tracker in the camera view to select it, then ALT-click the matching tracker in the perspective view. SynthEyes will automatically link the trackers in the correct direction, depending on the camera dominance setting. You can do some tracking on the trackers as you are doing this. Note: while linking trackers normally requires that the Coordinate System panel be open (so that you can see the results), the panel does not have to be open to link trackers in a stereo shot using the perspective view; you can link them while keeping the Tracker panel open continuously, to save time.

Stereo Pair Selection Feature

When you have stereo pairs configured and are using the camera/perspective view setup, if you select a tracker in the camera view, the perspective view will show the matching tracker in the other camera in a different color (yellow by default, "Persp, Opposite sel" in the preferences). This makes it easy to check the matching.
ACHTUNG! Stay awake, this next one is tricky: if you click on a tracker in the perspective view, that tracker will not be selected, because it is not on the currently-active camera; instead, the matching tracker on the other camera (in the camera view) will be selected. That will in turn make the tracker you just clicked (in the perspective window) turn yellow, because its matching tracker is now selected. Again, this makes it easy to see which tracker goes with which. It sounds complicated, but should be clear when you try it for real. If you want to select and edit a tracker displayed in the perspective view (on the opposite eye), you should switch the views with the minus sign; the camera view is the place to do that.

Stereo ZWTs

You can use zero-weighted trackers (ZWTs) to help you track: once you set up a pair, and if the shot is already solved, any two frames of valid data, on the same camera or on opposing cameras, are enough to determine the tracker's 3-D location.

Checking Stereo Correspondence

In a stereo tracking setup, the trackers on each eye may be tracked perfectly, but the track can be a disaster if the trackers do not track the same feature on each side. The SimulTrack view can help identify mismatches. Open two SimulTrack views (either as part of a layout or floating, it doesn't matter). On one of them, turn on Stereo Spouses on the right-click menu. That SimulTrack will now show the trackers from the inactive side of the stereo view. You can select trackers on the active side, and see the active side's tracker in one SimulTrack, and the other side's tracker in the other SimulTrack (with Stereo Spouses). The ordering will be the same (driven by the active side), so you can look from one view to the other to identify mismatches. After you have a reasonable solve, you can also take a look at the error on the trackers to identify mismatches, using the Sort by Error option and the SimulTrack or graph editor.
If you hit the big green Auto button, or the Run Auto-tracker button, SynthEyes will prompt you to determine the desired mode of these three. If you are working manually or later need to change the modes, you can do so individually for each camera. Be sure to keep Indirectly, if present, on the secondary camera. Note that far trackers can be an issue with stereo shots that are on a tripod: once a tracker is many times the inter-ocular distance from the cameras, its distance can no longer be determined, and the stereo solve goes from a nice stereo tripod situation to a combination of two tripod shots; see the section on Stereo Tripod Shots.

Stereo Solving Mode

The stereo mode uses the trackers that are linked between cameras to get things going. It does not rely on any camera motion at all: the camera can be stationary, and even a single still from each camera can be used, as long as there are enough nearby features (compared to the inter-ocular distance). Important: the Begin and End frames on the Solver panel should be configured directly for Stereo solving mode, somewhat differently than for a usual automatic solve, so please keep reading. To begin with, the checkboxes should be checked so that the values can be set manually. The stereo solve will literally start with the Begin frame from both cameras; it should be chosen at a frame with many trackers in common between the two cameras. However, this offers a limited pool of data with which to get started. A much larger, and thus more reliable, pool of data is considered when the End frame is set as well. The stereo solver startup considers all the frames between the Begin and End frames as source data. The one caveat: none of the camera parameters may change between the Begin and End frames, including the distance or vergence, even if they were marked as changing (see the next section).
If any of the camera parameters are constantly changing throughout the shot, or it cannot be determined that they do not change, then you must set the End frame and Begin frame to the same frame, and forgo having any additional data for startup. Such a frame should have as many trackers as possible, and they should be carefully examined to reduce errors. If you do not select the Begin/End frames manually (leave them in automatic mode), then SynthEyes will select a single starting/ending frame that has as many trackers in common as possible. But as described, supplying a range is a better idea. Note that you might be able to use the entire shot as the range, though this will probably increase run time, and a shorter period may produce equivalent results.
Automatic/Indirectly Solving Mode

The stereo mode effectively uses the inter-ocular distance as the baseline for triangulating to initially find the tracking points. If the camera is moving, a larger portion of the motion can be used to get solving started, producing a more accurate starting configuration. To do that, use the normal Automatic solving mode for the dominant camera, and the Indirectly mode for the secondary camera. Assuming the moving camera path is reasonable, SynthEyes will solve for the dominant camera path, then for the secondary path, applying the selected camera/camera constraints at that time. This approach will probably work better on shots where most of the trackers are fairly far from the cameras, and the camera moves a substantial distance, thus establishing a baseline for triangulation. If the camera moves (translates) little, you should use the Stereo solving mode.

Tripod/Indirectly Solving Mode

With two cameras, nodal tripod shots are less of an issue, because distances and 3-D coordinates can be determined if there are enough nearby features. However, you may encounter shots that are nodal by virtue of not having anything nearby; call them "all-far" shots. For example, consider a camera on the top of a mountain, which must be attacked by CG birds. With no nearby features, the shot will be nodal, and there will be no way to determine the inter-ocular distance. Any inter-ocular distance can be used, with no way to tell if it is right or wrong. Like a (monocular) tripod shot, no 3-D solve is possible, only what amounts to two linked tripod solves. Use the Tripod/Indirectly setup (Tripod mode on the dominant camera, Indirectly on the secondary). When refining, use Refine Tripod mode for both cameras. On the stereo geometry panel (see below), you should set up your best estimate of the inter-ocular distance, either from on-set measurements or from other shots. You can animate it if you have the information to do so.
Set the Direction and Elevation numbers to zero, or to known values from other shots. SynthEyes will solve the shot to produce two synchronized tripod solves. Then, it will compute adjusted camera paths, based on the inter-ocular distance and the pointing direction of the camera, as if the camera had been on a tripod. These will typically be small arc-like paths. If you need to later adjust the inter-ocular distance, Refine (Tripod) the shot to have the paths recalculated. As a result, you will have two matching camera paths so that you can add CG effects that come close to the camera. Since SynthEyes has regenerated the camera paths at a correct inter-ocular distance, even though all the tracked
features are far, you will still be able to add effects nearby and have them come out OK.
Setting Up Constraints
The Stereo Geometry panel can be used to set up constraints between the two cameras. If you will be using the inter-ocular distance to set the overall scale of the scene, then you should do that initially, before setting up a coordinate system using the *3 tool. The *3 tool will recognize the inter-ocular distance constraint, and generate a modified set of tracker constraints to avoid creating a conflict with it. The left-most column on the Stereo Geometry panel sets the solving mode for each of the six stereo parameters; they can be configured individually and often will be. The default As-Is setting causes no constraint to be generated for that parameter. To constrain the Distance, change its mode to Known, and set the Lock-To value to the desired value. The Lock-To value can be animated, under control of the Make Key button at the top left of the panel. With Make Key off, the lock value shown and animated is that at the beginning of the shot. Beware: this can hide any additional keys you have already created. Usually it will be best to solve a shot once first, with at most a Distance constraint, and examine the resulting camera parameters. The stereo parameters can be viewed in the graph editor under the node Stereo Pairs. The colors of the parameters are shown on the stereo panel for convenience. Sudden jumps in a parameter will usually indicate a tracking problem, which should be addressed directly. The error is like an air bubble under plastic: you can move it around, but not eliminate it. The stereo locks are all soft, and cannot necessarily overcome an arbitrarily large error. If you do not fix the underlying errors in the tracking data, then even if you force the stereo parameters to the values you wish, the error will appear in other channels or in the tracker locations. Usually, the other four stereo parameters (other than distance and vergence) are constant at an unknown value.
Use the Fixed mode to tell SynthEyes to determine the best unknown value (like the Fixed, Unknown lens-solving mode). If you are very confident of your calibration, or wish to have the best solve for a specific set of parameters, you can use the Known mode for them also. In the Varying solving mode, you can create constraints for specific desired ranges of frames, by animating the respective Lock button on or off. The parameter will be locked to the Lock-To value for those specific frames. The Hold button may also be activated (for vergence and distance); see the following section on Handling Shots with Changing IOD or Vergence.
Note that usually you should keep solving from scratch after changing the stereo constraint parameters, rather than switching to Refine mode. Usually after a change the desired solution will be too far away from the current one to be determined without re-running the solve.
Weights
Each constraint has an animated weight track. Weights range from 0 to 120, with 60 being nominal. The value is in decibels, meaning that 20 units changes the weight by a factor of 10. Thus, the total weight range is 0.001 to 1000.0. Excessively large weight values can de-stabilize the equations, producing a less-accurate result. We advise sticking with the default values to begin with, and only increasing a weight if needed to reduce difference values after a solve has been obtained. On a difficult solve where there is much contradictory information, it may be more helpful to reduce the weight, to make the equations more flexible and better able to find a decent solution.
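The decibel scale above maps to a linear weight factor as follows (a sketch of the stated rule, with a function name of our own choosing):

```python
# Sketch of the manual's decibel weight scale: 60 is nominal (factor 1.0),
# and every 20 units multiplies the effective weight by 10.
def weight_factor(weight_db):
    return 10.0 ** ((weight_db - 60.0) / 20.0)

print(weight_factor(60))   # 1.0    (nominal)
print(weight_factor(0))    # 0.001  (minimum)
print(weight_factor(120))  # 1000.0 (maximum)
```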
Lock) controls. You can animate the respective Hold button to be on during the frames when the respective parameter is fixed, and keep the Hold off when the parameter is changing. (It is better to Hold on too few frames than too many.) After solving a shot, you may want to create a specific IOD or vergence curve. You can animate it directly, or create it with Approximate Keys in the graph editor. Use Vary mode and animate a Lock to lock only some frames, or use Known mode to lock the entire shot if the whole curve is known.
Post-Stereo-Solve Checkup
After a stereo solve with constraints, you should verify that they have been satisfied correctly, using the graph editor. If the tracking data is wrong, or calls for a stereo relationship too different from the constraints, they may not be satisfied, and adjusting the constraints or tracking data may be necessary. IMPORTANT! On some marginal shots, the constraints may not be satisfied on the very first frame due to some mathematical details, creating a one-frame glitch in the camera/camera relationship. In this case, once you have the tracking fairly complete, and have set up a coordinate system on the dominant camera, switch to Refine mode, turn on the Constrain checkbox, and hit Go!
Object Tracking
SynthEyes can perform stereo object tracking, where the same rigid object is tracked from both cameras. (It can also do motion capture from stereo imagery, where the objects do not have to be rigid, and each feature is tracked separately.) Or you can do single-eye object tracking on a stereo shot, if the object moves enough for that to work well. To set up stereo moving-object tracking, do a Shot/Add Moving Object while a stereo shot is active. You will be asked whether to create a regular or stereo moving object. The latter is similar to adding a moving object twice, once with each camera (left and right) selected as the main active object (the currently-selected camera is used as the parent when a new object is created), except that SynthEyes records that the two are linked together. Each object will have the Stereo solving mode. An object can only be in one place at a time: there should be only a single path for the object in world space, the same path as seen from the left and right cameras, just as each 3-D tracker pair has only a single location. The world path depends on the camera path, though! This creates challenging "what comes first" issues. Object solves always start with an initial camera-type estimate before being converted to an object-type solve, and stereo object solves are no exception. Stereo object solves start out with two separate motion paths, one for the left object and one for the right object. As the camera path becomes available,
additional constraints are applied that force the left and right paths, in world space, to become identical by adjusting the camera and object positioning. Because of this inherent interaction, it is wise to work on the camera solve first, before proceeding to the objects. Helpful hint: in addition to using the "Activate Other Eye" menu item or its minus-key accelerator, it can be helpful to right-click the active camera/object button on the toolbar to rotate backwards through the collection of objects and cameras when multiple cameras and objects are present. To perform motion-capture-style tracking in a stereo shot, after completing the camera tracking, do an Add Moving Object, then set the solver mode of the object(s) to Individual Mocap. Add stereo tracking pairs on the moving object, and each pair will be solved to create a separate, independent path. See the online tutorial for an example, and the Motion Capture section of the manual.
achieve the proper inter-ocular distance. You can solve the shot as a moving camera, store the stereo parameters on the Known tracks, reset the cameras, transfer the stereo parameters to the secondary camera via Set 1f or Set All, then change the setup to moving-object (see the tutorial) and solve as a moving object. Moving Object locking: you cannot lock the coordinates of a moving object to world coordinates. Exporting: exports will contain the separate cameras and objects. Some exports may export only one camera at a time. In the future, we will probably have modified stereo exporters that parent the cameras to assemble a small rig. In any case, if you want to use some particular rig in your animation package, you will need to use some tool to convert the path information to drive your particular rig.
Introduction
To perform motion capture of faces or bodies, you will need at least two cameras trained on the performer from different angles. Since the performer's head or limbs are rotating, the tracking features may rotate out of view of the first two cameras, so you may need additional cameras to shoot more views from behind the actor. The fields of view of the cameras must be large enough to encompass the entire motion that the actor will perform, without the cameras tracking the performer (OK, experts can use SynthEyes for motion capture even when the cameras move, but only with care). You will need to perform a calibration process ahead of time, to determine the exact position and orientation of the cameras with respect to one another (assuming they are not moving). We'll show you one way to achieve this, using some specialized but inexpensive gear. Very important: you'll have to ensure that nobody knocks the cameras out of calibration while you shoot calibration or live-action footage, or between takes. You'll need to be able to resynchronize the footage of all the cameras in post; we'll tell you one way to do that. Generally the performer will have tracker markers attached, to ensure the best possible and most reliable data capture. The exception to this would be if one of the camera views must also be used as part of the final shot, for example, a talking head that will have an extreme helmet added. In this case, markers can be used where they will be hidden by the added effect, and in locations not permitting trackers, either natural facial features can be used (HD or film source!), or markers can be used and removed as an additional effect. After you solve the calibration and tracking in SynthEyes, you will wind up with a collection of trajectories showing the path through space of each individual feature. When you do moving-object tracking, the trackers are all rigidly connected to one another, but in motion capture, each tracker follows its own individual path.
You will bring all these individual paths into your animation package, and will need to set up a rigging system that makes your character move in response to the tracker paths. That rigging might consist of expressions, Look At controllers, etc.; it's up to you and your animation package.
Camera Types
Since each camera's field of view must encompass the entire performance (unless there are many overlapping cameras), at any time the actor usually occupies a small portion of the frame. For this reason, progressive DV, HD, or film source material is strongly recommended.
Camera Placement
The camera placements must balance two opposing factors: the cameras should be far apart, to produce a large parallax disparity with good depth perception, yet close together, so that they can simultaneously observe as many trackers as possible. You'll probably need to experiment with placement to gain experience, keeping in mind the performance to be delivered. Cameras do not have to be placed in any discernible pattern. If the performance warrants it, you might want coverage from up above, or down below. If any cameras will move during the performance, they will need a visible set of stationary tracking markers, to recover their trajectory in the usual fashion. This will reduce accuracy compared to a carefully calibrated stationary camera.
Lighting
Lighting should be sufficient to keep the markers well illuminated, avoiding shadowing, and bright enough to keep the shutter time of the cameras as short as possible, consistent with good image quality.
The porcupine is hung by a support wire in the location of the performer's head, then rotated as it is recorded simultaneously from each camera. The porcupine's colored pom-poms can be viewed virtually all the time, even as they spin around to the back, except for the occasional occlusion. Similar fixtures can be built for larger motion capture scenarios, perhaps using dolly track to carry a wire frame. It is important that the individual trackable features on the fixture not move with respect to one another: their rigidity is required for the standard object tracking. The path of the calibration fixture does not particularly matter.
Camera Synchronization
The timing relationship between the different cameras must be established. Ideally, all the cameras would be gen-locked together, snapping each image at exactly the same time. Instead, there are a variety of possibilities which can be arranged and communicated to SynthEyes during the setup process.
If you have a collection of video cameras, they will all take pictures at exactly the same (crystal-controlled) rate. However, one camera may always be taking pictures a bit before another, and a third camera may always be taking pictures at yet a different time than the other two. This option is Crystal Sync. If you have a film camera, it might run a little more or a little less than 24 fps, not particularly synchronized to anything. This will be referred to as Loose Sync. In a capture setup with multiple cameras, one can always be considered to be Sync Locked, and serve as a reference. If it is a video camera, other video cameras are in Crystal Sync, and any film camera would be Loose Sync. If you have a film camera that will be used in the final shot, it should be considered the sync reference, with Sync Locked, and any other cameras are in Loose Sync. The beginning and end of each camera's view of the calibration sequence and the performance sequence must be identified to the nearest frame. This can be achieved with a clapper board or electronic slate. The low-budget approach is to use a flashlight or laser-pointer flash to mark the beginning and end of the shot.
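The arithmetic behind resynchronizing a loosely-synced camera can be sketched as follows. This is a hypothetical helper, not part of SynthEyes: it assumes the flash marks at the beginning and end of each camera's take correspond to the same two real-world instants, and linearly maps one camera's frame numbers onto the reference camera's timeline (for Crystal Sync cameras the computed rate will be 1.0 and only the offset matters).

```python
def sync_map(ref_start, ref_end, oth_start, oth_end):
    """Return a function mapping the other camera's frame numbers onto
    the reference camera's timeline, given the flash-mark frames at the
    start and end of each camera's footage."""
    # Loose sync: the cameras may run at slightly different rates,
    # so scale by the ratio of the two flash-to-flash durations.
    rate = (ref_end - ref_start) / (oth_end - oth_start)
    return lambda frame: ref_start + (frame - oth_start) * rate

# e.g. the other camera's flashes at frames 5 and 105 line up with the
# reference camera's flashes at frames 10 and 110
to_ref = sync_map(10, 110, 5, 105)
```

A fractional result indicates the other camera exposed its frame between two reference frames, which is exactly the sub-frame offset Crystal Sync describes.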
Note: the initial position of Camera01 is going to stay fixed, controlling the overall positions of all the cameras. If you want it in some particular location, you can remove the constraints from it, reset its path from the 3-D panel, then move it around to the desired location. Solve the shot, and you have two orbiting cameras remaining at a fixed relative orientation as they orbit. Run the Motion Capture Camera Calibration script from the Script menu, and the orbits will be squished down to single locations. Camera01 will be stationary at its initial location, and Camera02 will be jittering around another location, showing the stability of the offset between the two. The first frame of Camera02's position is actually an average relative position over the entire shot; it is this location we will later use. You should save this calibration scene file (porcupine.sni); it will be the starting point for tracking the real footage. The calibration script also produces a script_output.txt file in a user-specific folder that lists the calibration data.
Two-Dimensional Tracking
You can now track both shots, creating the trackers into Object01 and Object02 for the respective shots. If you don't track all the markers, at least be sure to track a given marker either in both shots or in neither, as a half-tracked marker will not help. The Hand-Held: Use Others mode may be helpful here for the rapid facial motions. Frequent keying will be necessary when the motion causes motion blur to appear and disappear (plenty of uniform light and a short shutter time will minimize this).
Solving
You're ready to solve, and the Solve step should be very routine, producing paths for each of the linked trackers. The final file is facetrk.sni. Fine point! Normally SynthEyes will produce a position for each tracker that has an equal amount of error as seen from each camera. That's the best choice for general motion capture. However, if you have one primary camera where you want an exact match, and several secondary cameras to produce 3-D, you can adjust the Overall Weight of the reference camera, on the Solver panel, to 10.0 or a similarly large value. Adjust the Overall Weight of the secondary cameras down to 0.1, for example. The resulting solution will be much more accurate for Camera01, and less so for the secondary cameras. Afterwards, you can start checking on the trackers. You can scrub through the shot in the perspective window, orbiting around the face. You can check the error curves and XYZ paths in the graph editor. By switching to Sort by Error mode, you can sequence through the trackers starting from those with the highest error.
Modeling
You can use the calculated point locations to build models. However, the animation of the vertices will not be carried forward into the meshes you build. Instead, when you do a Convert to Mesh or Assemble Mesh operation in the perspective window, the current tracker locations are frozen on that frame. If desired, you can repeat the object-building process on different frames to build up a collection of morph-target meshes. Since the tracker/vertex linkage information is stored, you can use the MDD Motion Designer Mesh Animation export script to export animated meshes. You must export the mesh itself, typically in obj format. Then in your target application, you will read the mesh and apply the MDD data as an animated deformer. Note that it is crucial that the same exact obj mesh model be used as was used to generate the mdd file.
Note that it is very important to re-export the mesh if you use method #2: you must always add the MDD deformation to exactly the mesh that was used in SynthEyes, since otherwise the vertex numbering will not match exactly. SynthEyes does renumber and adjust vertices as needed when it reads OBJs, to match its internal vertex-processing pipeline, especially if there is normal or texture coordinate data. You may also find it helpful to export the 3-D path of a tracker by itself, using the Tracker 3D on Mesh exporter. You can use that data to drive bones or other rigging if desired.
One tracks the spout of a teacup, the other tracks the spout's shadow on the table. After solving the scene, we have the 3-D position of both. The procedure to locate the light in this situation is as follows. Switch to the Lighting Control Panel. Click the New Light button, then the New Ray button. In the camera view, click on the spout tracker, then on the tracker for the spout's shadow. We could turn on the Far-away light checkbox if the light were the sun, so that the direction of the light is the same everywhere in the scene. Instead, we'll leave the checkbox off and set the distance spinner to 100, moving the light that distance away from the target.
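The geometry of that ray construction can be sketched as follows. This is a hypothetical illustration (using NumPy), not the actual SynthEyes code: the light must lie on the line through the shadow and the spout, and the distance spinner places it a chosen distance from the target along that line.

```python
import numpy as np

def place_light(source, target, distance):
    """Place the light 'distance' units from the target, along the ray
    running from the target (the shadow) through the source (the spout).
    With distance larger than the spout-shadow separation, the light
    ends up beyond the spout, on the far side from the shadow."""
    source = np.asarray(source, dtype=float)
    target = np.asarray(target, dtype=float)
    direction = source - target
    direction /= np.linalg.norm(direction)   # unit vector toward the source
    return target + direction * distance

# spout 1 unit above its shadow at the origin: the light sits on the
# vertical line through both, 100 units up
light = place_light((0, 0, 1), (0, 0, 0), 100)
```

The Far-away light checkbox corresponds to keeping only the direction vector and discarding the distance entirely.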
Set up a coordinate system as shown above: the tracker at lower left is the origin, the one at lower right is on the left/right axis at 11.75, and the tracker at center left is on the floor plane. Solve the shot. [Note: there is no need to convert units; the 11.75 could be cm, meters, etc.] Create symmetric supervised trackers for the two primary light reflections at center top of the ball and track them through the shot. Change them both to zero-weighted trackers (ZWT) on the tracker panel; we don't want them to affect the 3-D solution. To calculate the reflection from the ball, SynthEyes requires matching geometry. Create a sphere. Set its height coordinate to 3 and its size to 12.25. Slide it around in the top view until the mesh matches up with the image of the ball. You can zoom in on the top view for finer positioning, and into the camera view for a more accurate comparison. The lighting calculations can be more accurate when vertex normals are available. In your own shots, you may want to import a known mesh, for example, from a scan. In this case, be sure to supply a mesh that has vertex normals, or at least use the Create Smooth Normals command of the Perspective window. On the lighting control panel, add a new light, click the New Ray button, then click one of the two highlight trackers twice in succession, setting that tracker as both the Source and Target. The target button will change to read (highlight). Raise the Distance spinner to 48, which is an estimated value (not needed for Far-away lights). From the quad view, you'll see the light hanging in
In favorable circumstances, you will not need an approximate light height or distance. The calculation SynthEyes is making with All or Lock selected is more than just an average: it is able to triangulate to find an exact light position. As it turns out, often, as in this example shot, the geometry of the lights, mesh, and camera does not permit doing that accurately, because the shift in highlight position as the camera moves is generally quite small. (You can test this by turning the distance constraint down to zero and hitting Lock again.) But it may be possible if the camera is moving extensively, for example, dollying along the side of a car, when a good mesh for the car is available.
Terminology
There's a bit of new terminology to define here, since there are both 2-D and 3-D curves being considered.
Curve. A spline-like 2-D curve. It always lives on one particular shot's images, and is animated with a different location on each frame.
Flex. A spline-like 3-D curve. A flex resides in 3-D, though it may be attached to a moving object. One or more curves will be attached to the flex; those curves will be analyzed to determine the 3-D shape of the flex.
Rough-in. Placing control-point keys periodically and approximately.
Tuning a curve. Adjusting a curve so it matches edges exactly.
Overview
Here's the overall process for using the curve and flex system to determine a 3-D curve. The quick synopsis is that we will get the 2-D curves positioned exactly on each frame throughout the shot, then run a 3-D solving stage. Note that the ordering of the steps can be changed around a bit, and additional wrinkles added, once you know what you are doing; this is the simplest version and the easiest to explain.
1. Open the shot in SynthEyes.
2. Obtain a 3-D camera solution, using automatic or supervised tracking.
3. At the beginning of the shot, create a (2-D) curve corresponding to the flex-to-be.
4. Rough in the path of the curve, with control-point animation keys throughout the shot. There is a tool that can help do this, using the existing point trackers.
5. Tune the curve to precisely match the underlying edges (manual or automatic).
Click the New Curve button, then, in the Camera View, click along the section of curve to be tracked, creating control points as you go. Place additional control points in areas of rapid curvature, and at extremal points of the curve. Avoid areas where there is no trackable edge, if possible. When you have finished with the last control point, right-click to exit the curve-creation mode.
The first field asks how many trackers must be valid for the roughing process to continue. In this case, 5 trackers were selected to start. As shown, it will continue even if only one is valid. If the value is raised to 5, the process will stop once any tracker becomes invalid. If only a few trackers are valid (especially fewer than 4), less useful predictions of the curve shape can be made. The Key every N frames setting controls how often the curve is keyed. At the default setting of 1, a key will be placed at every frame, which is suitable for a hand-held shot, but less convenient to subsequently refine. For a smooth shot, a value of 10-20 might be more appropriate. The Rough Curve Importer will start at the current frame, and begin creating keys at the specified interval. It will stop if it reaches the end of the shot, if there are too few trackers still valid, or if it passes by any existing key on the curve. You can take advantage of this last point to fill in keys selectively as needed, using different sets of trackers at different times, for example. After you've used the Rough Curve Import tool, you should scrub through the shot to look for any places where additional manual tweaking is required. The curve may go offscreen or be obscured. If this happens, you can use the curve Enable checkbox to disable the curve. Note that it is OK if the curve goes partly offscreen, as long as there is enough information to locate it while it is onscreen.
Curve Tuning
Once the curve has been roughed into place, you're ready to tune it to place it more accurately along the edge. Of course, you can do this all by hand, and in adverse conditions that may be necessary. But it is much better to use the automated Tune tool. You can tune either a single frame, with the Tune button, or all of the frames using, of course, the All button. When a curve is tuned on a frame, the curve control points will latch onto the nearby edge. For this reason, before you begin tuning, you may wish to create additional control points along the curve, by shift-clicking it.
You can adjust to edges of different widths, control the distance within which the edge is searched, and alter the trade-off between a large distant edge, and a smaller nearby one. Clearly, it is going to be easier to track edges with no nearby edges of similar magnitude. The control panel allows you to tune all frames (potentially just those within the animation playback range), only the frames that already have keys (to tune your roughed-in frames), or only the frames that do not have keys (to preserve your previously-keyed frames). You can also tell the tracking dialog to use the tuned locations as it estimates (using splining) where the curve is in subsequent frames, by turning on the Continuous Update checkbox. If you have a simple curve well-separated from confounding factors, you can use this feature to track a curve through a shot without roughing it in first. The drawback of doing this is that if the curve does get off course, you can wind up with many bad keys that must be repaired or replaced. [You can remove erroneous keys using Truncate.] With the Continuous Update box off, the tuning process is more predictable, relying solely on your roughed-in animation.
Flex Creation
With your curve(s) complete, you can now create a flex, which is the 3-D splined curve that will be made to match the curve animation. The flex will be created in 3-D in a position that approximately matches its actual position and shape. It is usually most convenient to open the Quad view, so that you can see the camera view at the same time you create the flex in one of the 3-D views (such as the Top view).
Flex Endpoints
The flex's endpoints must be nailed down so that the flex cannot simply shrivel up along the length of the curve, or pour off the end. The ends are controlled by one of several different means:
1. the end of the flex can stay even with its initial position,
2. the end of the flex can stay even with a specific tracker, or
3. the end of the flex can exactly match the position of a tracker.
The first method is the default. The last method is possible only if there is a tracker at the desired location; this arises most often when several lines intersect. You can track the intersection, then force all of the flexes to meet at the same 3-D location. To set the starting or ending tracker location for a flex, click the Start Pt or End Pt button, then click on the desired tracker. Note that the current 3-D location of the tracker will be saved, so if you re-track or re-solve, you will need to reset the endpoint. The flex will end even with the specified point, meaning that the point is perpendicular to the end of the flex. To match the position exactly, turn on the Exact button.
Flex Solving
Now that you've got the curve and flex set up, you are ready to solve. This is very easy: click the Solve button (or Solve All if you have several flexes ready to be solved). After you solve a flex, the control points will no longer be visible; they are replaced by a more densely sampled sequence of non-editable points. If you want to get back to the original control points to adjust the initial configuration, you can click Clear.
Flex Exports
Once you have solved the flex, you can export it. At present, there are two principal export paths. The flexes are not currently exported as part of regular tracker exports.
File/Merge
After you start File/Merge and select a file to merge, you will be asked whether or not to rename the trackers as necessary, to make them unique. If the current scene has Camera01 with trackers Tracker01 to Tracker05, and the scene being merged also has Camera01 with trackers Tracker01 to Tracker05, then answering yes will result in Camera01 with Tracker01 to Tracker05 and Camera02 with Tracker06 to Tracker10. If you answer no, Camera01 will have Tracker01 to Tracker05 and Camera02 will also have (different) Tracker01 to Tracker05, which is more confusing to people than machines. As that example shows indirectly, cameras, objects, meshes, and lights are always renamed to be unique. Renaming is always done by appending a number: if the incoming and current scenes both have a TrashCan, the incoming one will be renamed to TrashCan1. If you are combining a shot with a previously-tracked reference, you will probably want to keep the existing tracker names, to make it easiest to find matching ones. Otherwise, renaming them with yes is probably the least confusing, unless you have particular knowledge of the TrackerNN assignments (in which case, giving them actual names such as Scuff1 is probably best). You might occasionally track one portion of a shot in one scene file, and track a different portion of the same shot in a separate file. You can combine the scene files onto a single camera as follows:
1. Open the first shot.
2. File/Merge the second shot.
3. Answer yes to make tracker names unique (important!).
4. Select Camera02 from the Shot menu.
5. Hit control-A to select all its trackers.
6. Go to the Coordinate System Panel.
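The append-a-number renaming rule used for cameras, objects, meshes, and lights (TrashCan becoming TrashCan1) can be sketched as follows. This is a hypothetical helper, not the SynthEyes implementation; note that trackers are instead renumbered into the existing sequence (Tracker06 to Tracker10), which this simple suffixing does not reproduce.

```python
def make_unique(existing, incoming):
    """Rename incoming items that clash with existing names by
    appending the lowest number that makes the result unique:
    a second TrashCan becomes TrashCan1, a third TrashCan2."""
    taken = set(existing)
    result = []
    for name in incoming:
        candidate, n = name, 1
        while candidate in taken:
            candidate = "%s%d" % (name, n)
            n += 1
        taken.add(candidate)
        result.append(candidate)
    return result

make_unique(["TrashCan"], ["TrashCan", "Lamp"])   # -> ['TrashCan1', 'Lamp']
```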
The import runs roughly in reverse of the export. The frame offset is applied to the frame numbers in the file, and only those within the selected first and last frames are stored. The scale and offset can be adjusted; by default they are 1 and 0 respectively. The values of 2 and -1 shown undo the effect of the 0.5/0.5 in the example export panel. If you are importing several different tracker data files into a single moving object or camera, you may have several different trackers all named Tracker1, for example, and after combining the files this would be undesirable. Instead, by turning on Force unique names, each will be assigned a new unique name. Of course, if you have done supervised tracking in several different files to combine, you might well leave it off, to combine the paths together. If the input data file contains data only for frames where a tracker has been found, the tracker will still be enabled past the last valid frame. By turning on Truncate enables after last, the enable will be turned off after the last valid frame. After each tracker is read, it is locked. You can unlock and modify it as necessary. The tracking data file contains only the basic path data, so you will probably want to adjust the tracker size, search size, etc. If you will be writing your own tracker data file for this script to import, note that the lines must be sorted so that the lines for each specific tracker are contiguous, in order of ascending frame number. This convention makes everyone's scripts easier. Also, note that the tracker names in the file never contain spaces; spaces will have been changed to underscores.
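If you are generating such a file yourself, the conventions above (underscored names, contiguous per-tracker lines in ascending frame order, and the value-times-scale-plus-offset import mapping) can be sketched like this. The four-column layout and helper names are a hypothetical illustration, not the exact SynthEyes file format.

```python
def prepare_lines(records):
    """records: (tracker_name, frame, u, v) tuples.
    Emit lines with spaces in names replaced by underscores, grouped
    per tracker, with frames ascending within each tracker."""
    rows = [(n.replace(" ", "_"), f, u, v) for n, f, u, v in records]
    rows.sort(key=lambda r: (r[0], r[1]))
    return ["%s %d %g %g" % r for r in rows]

def import_value(v, scale=1.0, offset=0.0):
    """Per-coordinate import mapping. A scale of 2 and offset of -1
    undo an export that used 0.5/0.5."""
    return v * scale + offset
```

For example, a coordinate exported through the 0.5/0.5 mapping round-trips back to its original value when imported with scale 2 and offset -1.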
Details
SynthEyes uses several folders for batch file processing: an input folder, an export folder, and an output folder. Submit for Batch places scene files into the input folder; exports are written to the export folder, completed files are written to the output folder, and the input file is removed. You can set the location of the input, export, and output folders from the Preferences panel.
System Requirements
PC
Intel or AMD x86 processor with SSE2, such as Pentium 4, Athlon 64, Opteron, or Core/Core Duo. Note: SSE2 support is a requirement for SynthEyes 2007; Pentium 3 and Athlon XP/MP processors are not supported.
Windows Vista or XP. Supports XP's 3GB mode. The 32-bit version runs under Windows XP 64 Pro or 64-bit Vista; a separate 64-bit SynthEyes version is available.
1 GB RAM typical. 512 MB suggested minimum. 2+ GB suggested for pro, HD, and film users; 4+ GB strongly suggested for 8-core machines.
Mouse with middle scroll wheel/button. See the viewport reference section for help using a trackball.
1024x768 or larger display, 32-bit color, with OpenGL support. Large multi-head configurations require graphics cards with sufficient memory.
DirectX 8.x or later recommended; required for DV and usually MPEG.
QuickTime 5 or later recommended; required to read .mov files.
A supported 3-D animation or compositing package to export paths and points to. It can be on a different machine, even a different operating system, depending on the target package.
A user familiar with general 3-D animation techniques such as keyframing.
Mac OS X
Intel Mac, G5 Mac, or G4 Mac (marginal).
1 GB RAM typical. 512 MB RAM suggested minimum. 2+ GB suggested for pro, HD, and film users; 4+ GB strongly suggested for 8-core machines.
3-button mouse with scroll wheel. See the viewport reference section for help using a trackball or the Microsoft Intellipoint mouse driver.
1024x768 or larger display, 32-bit color, with OpenGL support. Large multi-head configurations require graphics cards with sufficient memory.
Mac OS 10.4 or 10.5.
A supported 3-D animation or compositing package to export paths and points to. It can be on a different machine, even a different operating system, depending on the target package.
A user familiar with general 3-D animation techniques such as keyframing.
Interchange
The Mac OS X versions can read SynthEyes files created on Windows and vice versa. Note that Windows, 64-bit Windows, and OS X licenses must be purchased separately; licenses are not cross-platform.
PC Installation
Please uninstall the SynthEyes Demo before installing the actual product. To install a downloaded SynthEyes, run the installer Syn11Installer.msi (Syn1164Installer.msi for 64-bit), or insert the CD. You can install to the default location, or any convenient location. The installer will create shortcuts on the desktop for the SynthEyes program and HTML documentation. If you have a trackball or tablet, you may wish to turn on the No middle-mouse preference setting to make alternate mouse modes available. See the viewport reference section. You should turn on Enhance Tablet Response if you have trouble stopping playback or tracking (Wacom appears to have fixed the underlying issue in recent drivers, so getting a new tablet driver may be another option). Proceed to the Registration section below.
PC Fine Print
If you receive the error message "Error 1327. Invalid Drive E:\" (or another drive), Windows Installer wants to check something on that drive. This can occur if you have a Firewire, network, or flash drive with a program installed to it, or an important folder such as My Documents placed on it, and the drive is not turned on or connected. The easiest cure is to turn the device on or reconnect it. This behavior is part of Windows; see http://support.installshield.com/kb/view.asp?articleid=q107921 http://support.microsoft.com/default.aspx?scid=kb;en-us;282183
PC - DirectX
SynthEyes requires Microsoft's DirectX 8 or later to be able to read DV and MPEG shots. DirectX is a free download from Microsoft and is already a component of many current games and applications. You may be able to verify that you already have it by searching for the DirectX diagnostic tool dxdiag.exe, located in \windows\system or \winnt\system32. If you run it, the System tab shows the DirectX version number at the bottom of the system information. To download and install DirectX, go to http://www.microsoft.com and search for DirectX. Select a DirectX Runtime download for your operating system.
PC - QuickTime
If you have shots contained in QuickTime (Apple) movies (i.e., .mov files), you must have Apple's QuickTime installed on your computer. If you use a capture card that produces QuickTime movies, you will already have QuickTime installed. SynthEyes can also produce preview movies in QuickTime format. You can download QuickTime from http://www.apple.com/quicktime/download/ QuickTime Pro is not required for SynthEyes to read or write files. Note that at present Apple does not offer a 64-bit version of QuickTime, so QuickTime support is not available for 64-bit SynthEyes.
Mac OS X Installation
1. Download the Syn11MT.dmg or Syn1164MT.dmg file to a convenient location on the Mac.
2. Double-click it to open it and expose the SynthEyes installation package.
3. Double-click the installer to run it.
4. Proceed through a normal install; you will need root permissions.
5. Eject the .dmg file from the Finder; it will be deleted.
6. Start SynthEyes from your Applications folder. You can create a shortcut on your desktop if you wish.
7. Proceed to the Registration directions below.
Note that the pictures throughout this document are based on the PC version; the Mac version will be very similar. In places where an ALT-click is called for on the PC, a Command-click should be used on the Mac; these are indicated in this manual where applicable. If you have a trackball or Microsoft's IntelliPoint mouse driver, you may wish to turn on the "No middle-mouse" preference setting to make alternate mouse modes available; see the viewport reference section. You should turn on "Enhance Tablet Response" if you have trouble stopping playback or tracking (Wacom appears to have fixed the underlying issue in recent drivers, so getting a new tablet driver may be another option).
Registration
When you first start SynthEyes, a form will appear for you to enter registration information. Alternatively, if you've entered the temporary authorization data first, you can access the registration dialog from the Help/Register menu item.
Proceed as follows:
1. Use copy and paste to transfer the entire serial number (starting with SN- on the PC, S6- on Win64, or IM- on OS X) from the email confirmation of your purchase to the form.
2. Fill out the remainder of the form. Sorry if this seems redundant with the original order form, but it is necessary. This data should
Authorization
1. View the email containing the authorization data.
2. In your e-mail program, highlight the authorization information, everything from the left parenthesis ( to the right parenthesis ) and including both parentheses, and select Edit/Copy in your mail program. Note: the serial number (SN-, IM-, etc.) is not part of the authorization data; it is included above it only for reference, especially for multiple licenses.
3. Start SynthEyes. If the registration dialog box appears, click the Use License on Clipboard button. If your temporary registration is still active, the registration dialog will not appear, so click Help/Authorize instead.
PC Uninstallation
As with other Windows programs, use the Add/Remove Programs tool in the Windows Control Panel to uninstall SynthEyes.
Mac Uninstallation
Delete the folder /Applications/SynthEyes and, if you also want to remove your preferences etc., /Users/YourName/Library/Application Support/SynthEyes.
Automatic Downloads
SynthEyes checks to determine the latest available build on the web site. If the latest build is more current than its own, SynthEyes begins downloading the new version. The download takes place in the background as you use SynthEyes; the D/L button is yellow during the download. Once the download is complete, the D/L button turns green. When you have reached a convenient time to install the new version, click the D/L button or select the Help/Install Updated menu item. After making sure your work is saved and that you are ready to proceed, SynthEyes closes and opens the folder containing the new installer. Depending on your system and security settings, the installer may or may not start automatically; if it does not, click it to begin installation. The same process occurs when you check for updates manually by clicking the D/L button, with a few more explanatory messages.
Suggestions
We maintain a feature-suggestion system to help bring you the most useful and best-performing software possible. Click the Sug button on the toolbar, or the Help/Suggest a Feature menu item. This miniature forum not only lets you submit requests, but also comment and vote on existing feature suggestions. (This is not the place for technical support questions, however; please don't clog it up with them.) Demo version customers: this area is not available; send email to support instead. Past experience has shown that most suggestions from demo customers are already in SynthEyes, so be sure to check the manual first!
Web Links
The Help menu contains a number of items that bring up web pages from the www.ssontech.com web site for your convenience, including the main home page, the tutorials page, and the forum.
E-Mail Links
The Help/Tech Support Mail item brings up an email composition window pre-addressed to technical support. Please investigate matters thoroughly before resorting to this, consulting the manual, tutorials, support site, and forum. If you do have to send mail, please include the following:
- Your name and organization
- An accurate subject line summarizing the issue
- A detailed description of your question or problem, including the information necessary to duplicate it, preferably starting from File/New
- Screen captures, if possible, showing all of SynthEyes
- A .sni scene file, after Clear All Blips, and ZIPped up (not RAR)
The better you describe what is happening, the quicker your issue can be resolved.
Help/Report a Credit brings up a pre-addressed email composition window so that you can let us know about projects that you have tracked using SynthEyes, so we can add them to our As Seen On web page. If you were wondering why your great new project isn't listed there, this is the cure.
Menu Reference
File Menu
Many entries are Windows-standard. For example, File/New clears the scene and also opens the Shot/Add Shot dialog.
File/Merge. Merges a previously-written SynthEyes .sni scene file with the currently-open one, including shots, objects, trackers, meshes, etc. Most elements are automatically assigned unique names to avoid conflicts, but a dialog box lets you select whether or not trackers are assigned unique names.
File/Import/Shot. Clears the scene and opens the Shot/Add Shot dialog if there are no existing shots, or adds an additional shot if one or more shots are already present.
File/Import/Mesh. Imports a DXF or Alias/Wavefront OBJ mesh as a test object.
File/Import/Reload mesh. Reloads the selected mesh, if any. If the original file is no longer accessible, allows a new location to be selected.
File/Import/Tracker Locations. Imports a text file composed of lines of the form "x_value y_value z_value Tracker_name". For each line, if there is an existing tracker with that name, its seed position is set to the coordinates given; if there is no tracker with that name, a new one is created with the specified seed coordinates. Use this to import a set of seed locations from a pre-existing object model or set measurements, for example. New trackers use settings from the tracker panel, if it is open. See the section on merging files.
File/Import/Extra Points. Imports a text file consisting of lines with x, y, and z values, each line optionally preceded or followed by an optional point name. A helper point is created for each line. The points might have been determined from on-set surveying, for example; this option allows them to be viewed for comparison. See the section on merging files.
Export Again. Redoes the last export, saving time when you are exporting repeatedly to your CG application.
Find New Scripts. Causes SynthEyes to locate any new scripts that have been placed in the script folder since SynthEyes started, making them available to be run.
File Info. Shows the full file name of the current file, its creation and last-written times, full file names for all loaded shots, file names for all imported meshes (and the time they were imported), and file names for mesh textures (being extracted or only displayed). Also allows you to add your own descriptive information to be stored in the file.
User Data Folder. Opens the folder containing preferences, the batch, script, downloads, and preview movie folders, etc.
Submit for Batch. The current scene is submitted for batch processing by writing it into the queue area. It will not be processed until the Batch Processor is running and there are no jobs before it.
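The Tracker Locations format described above (one "x_value y_value z_value Tracker_name" line per tracker) is simple enough to generate or parse with a small script, for example when transferring survey measurements from another tool. A minimal sketch in Python; the function names and file name are illustrative, not part of SynthEyes:

```python
def write_tracker_locations(path, points):
    """Write a tracker-locations text file: one 'x y z TrackerName' line
    per tracker, as described for File/Import/Tracker Locations."""
    with open(path, "w") as f:
        for name, (x, y, z) in points.items():
            f.write(f"{x} {y} {z} {name}\n")

def read_tracker_locations(path):
    """Parse 'x y z name' lines back into a dict of seed coordinates."""
    points = {}
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) < 4:
                continue  # skip blank or malformed lines
            x, y, z = map(float, parts[:3])
            points[" ".join(parts[3:])] = (x, y, z)
    return points
```

A file written this way can then be imported via File/Import/Tracker Locations to set seed positions, or (without names) serve as the basis of an Extra Points file.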
Submit for Render. The current scene is submitted for batch processing: the Save Sequence process will be run on the active shot to write out the reprocessed image sequence to disk as a batch task. Use the Save Sequence dialog to set up the output file and compression settings first, close it without saving, then Submit for Batch. You will be asked whether or not to output both image sequences simultaneously for stereo. Other multiple-shot renderings can be obtained by Sizzle scripting, or by submitting the same file several times with different shots active.
Batch Process. SynthEyes opens the batch processing window and begins processing any jobs in the queue.
Batch Input Queue. Opens a Windows Explorer window on the batch input queue folder, so that the queue can be examined, and possibly jobs removed or added.
Batch Output Queue. Opens a Windows Explorer window on the batch output queue folder, where completed jobs can be examined or moved to their final destinations.
Exporter Outputs. Opens a Windows Explorer window on the default exporter folder.
Edit Menu
Undo. Undoes the last operation; the menu text changes to show what, such as Undo Select Tracker.
Redo. Re-does an operation previously performed, then undone.
Select same color. Selects all the (un-hidden) trackers with the same color as the one(s) already selected.
Select All, etc., affect the tracker selections, not objects in the 3-D viewports.
Invert Selection. Selects unselected trackers, unselects selected trackers.
Clear Selection. Unselects all trackers.
Lock Selection. Locks the selection so it cannot be changed.
Delete. Deletes selected objects and trackers.
Hide unselected. Hides the unselected trackers.
Hide selected. Hides the selected trackers.
Reveal selected. Reveals (un-hides) the selected trackers (typically from the lifetimes panel).
Reveal nnn trackers. Reveals (un-hides) all the trackers currently hidden, i.e. nnn of them.
Flash selected. Flashes all selected trackers in the viewports, making them easier to find.
Polygonal Selections. Lassos follow the mouse motion to create irregular shapes.
Rectangular Selections. Lassos sweep out a rectangular area.
Lasso Trackers. The lasso selects trackers.
Lasso Meshes Instead. The lasso selects meshes.
Update Textures Now. All mesh textures will be re-extracted from the shot.
Redo Textures at Solve. Enable control; when on, texture calculations will rerun whenever the scene is solved.
Spinal aligning. Sets the spinal adjustment mode to alignment.
Spinal solving. Sets the spinal adjustment mode to solving.
Edit Scene Settings. Affects the current scene only.
Edit Preferences. Contains some of the same settings; these do not affect the current scene, but are used only when new scenes are created.
Reset Preferences. Sets all preferences back to the initial factory values. Gives you a choice of presets for a light- or dark-colored user interface, appropriate for office or studio use, respectively.
Edit Keyboard Map. Brings up a dialog allowing key assignments to be altered.
View Menu
Reset View. Resets the camera view so the image fills its viewport.
Expand to Fit. Same as Reset View.
Reset Time Bar. Makes the active frame range exactly fill the displayable area.
Rewind. Set the current time to the first active frame.
To End. Set the current time to the last active frame.
Play in Reverse. When set, replay or tracking proceeds from the current frame towards the beginning.
Frame by Frame. Displays each frame, then the next, as rapidly as possible.
Quarter Speed. Play back at one quarter of normal speed.
Half Speed. Play back at one half of normal speed.
Normal Speed. Play back at normal speed (i.e. the rated frames-per-second value), dropping frames if necessary. Note: when the Tracker panel is selected, playback is always frame-by-frame, to avoid skipping frames in the track.
Double Speed. Play back at twice normal speed, dropping frames if necessary.
Show Image. Turns the main image display in the camera view on and off.
Show Trackers. Turns on or off the tracker rectangles in the camera view.
Only Camera01's trackers. Show only the trackers of the currently-selected camera or object. When checked, trackers from other objects/cameras are hidden. The camera/object name changes each time you change the currently-selected object/camera on the Shot menu.
The next few items occur in the Tracker Appearance submenu.
Show All Tracker Names. When turned on, tracker names will be displayed for all trackers in the camera, perspective, and 3-D viewports.
Show Supervised Names. When turned on, tracker names will be displayed for (only) the supervised trackers, in the camera, perspective, and 3-D viewports.
Show Selected Names. The tracker names are displayed for each selected tracker.
Show Names in Viewport. The tracker names (as controlled by the above) are also displayed in the 3-D viewports.
Use alternate colors. Each tracker has two different colors. The alternate set is displayed and editable when this menu item is checked, generally under control of the Set Color by RMS Error script.
With Central Dot. Trackers are shown with a central dot in the camera view, even for auto-trackers, offset trackers, and locked trackers, where the dot would not normally be shown.
Show Only as Dots. All locked trackers are shown solely as a dot in the camera view, reducing clutter.
Show as Dots in 3D Views. Trackers are shown as dots, not X's, in the 3-D viewports.
Show as Dots in Perspective. Trackers are shown as dots, not X's, in the Perspective view.
Show Tracker Trails. When on, trackers show a trail into the future (red) and past (blue).
Show 3-D Points. Controls the display of the solved position marks (X's).
Show 3-D Seeds. Controls the display of the seed position marks (+'s).
Show Tracker Radar. Visualization tool that shows a circle at each tracker reflecting the tracker's error on the current frame.
Show Seed Paths. When on, values for the seed path and field of view/focal length of the camera and moving objects will be shown and edited. These are used for Use Seed Paths mode and for camera constraints. When off, the solved values are displayed.
Show Meshes. Controls display of object meshes in the camera viewport. Meshes are always displayed in the 3-D viewports.
Solid Meshes. When on, meshes are solid in the camera viewport; when off, wire-frame. Meshes are always wire-frame in the 3-D viewports.
Outline Solid Meshes. Solid meshes have the wire frame drawn over top, to better show facet locations.
Cartoon Wireframe Meshes. A special wireframe mode where only the outer boundary and any internal creases are visible, intended to help align set and object models.
Only texture alpha. When on, meshes will display the alpha channel of their texture, instead of the texture itself, simplifying over-painting.
Shadows. Show ground-plane or on-object shadows in the perspective window. This setting is sticky from SynthEyes run to run.
Show Lens Grid. Controls the display of the lens distortion grid (only when the Lens control panel is open).
OpenGL Camera View. When enabled, the OpenGL camera view is used, which is faster on Macs and when there are large meshes present (50,000 vertices/faces and up). Keep off on PCs when there are no complex meshes.
OpenGL 3-D Viewports. When enabled, the OpenGL version of the 3-D viewports is used, which is faster on Macs and when there are large meshes present (50,000 vertices/faces and up). Keep off on PCs when there are no complex meshes.
Double Buffer. Slightly slower but non-flickery graphics. Turn off only when maximal playback speed is required.
Sort Alphabetic. Trackers are sorted alphabetically, mainly for the up/down arrow keys. Updated when you change the setting in the graph editor.
Sort by Error. Trackers are sorted from high error to low error.
Sort by Time. Trackers are sorted from early in the shot to later in the shot.
Sort by Lifetime. Trackers are sorted from shortest-lived to longest-lived.
Group by Color. In the sort order, all trackers with colors assigned will come first, with each color grouped together, each sorted by the specified order, followed by trackers at the default color. When this is off, trackers are not grouped together; the order is determined solely by the sort order.
Only Selected Splines. When checked, the selected spline, and only the selected spline, will be shown, regardless of its Show This Spline status.
Safe Areas. This is a submenu with checkboxes for a variety of safe areas you can turn on and off individually (you can turn on both 90% and 80% at once, for example). Safe areas are defined in the file safe11.ini in the main SynthEyes folder; you can add your own safe11.ini to add your own personal safe-area definitions. Change the color via the preferences.
Track Menu
Add Many Trackers. After a shot is auto-tracked and solved, additional trackers can be added efficiently using this dialog.
Clean Up Trackers.
Coalesce Nearby Trackers. Brings up a dialog that searches for, and coalesces, multiple trackers that are tracking the same feature at different times in the shot.
Combine Trackers. Combines all the selected trackers into a single tracker, and deletes the originals.
Cross Link by Name. The selected trackers are linked to trackers with the same name, except for the first character, on other objects. If the tracker's object is solved Indirectly, it will not link to another Indirectly-solved object. It also will not link to a disabled object.
Drop onto mesh. If a mesh is positioned appropriately in the camera viewport, drops all selected trackers onto the mesh, setting their seed coordinates. Similar to the Place mode of the Perspective window.
Fine-tune Trackers. Brings up a dialog to automatically re-track automatic trackers using supervised tracking. Reduces jitter on some scenes.
Selected Only. When checked, only selected trackers are run while tracking. Normally, any tracker which is not Locked is processed.
Stop on auto-key. Causes tracking to stop whenever a key is added as a result of the Key spinner, making it easy to manually tweak the added key locations.
Preroll by Key Smooth. When tracking starts from a frame with a tracker key, SynthEyes backs up by the number of Key Smooth frames, and re-tracks those frames to smooth out any jump caused by the key.
Do not auto-generate keys. Do not auto-generate keys (see the next two entries).
Auto-generate for ZWTs. Auto-generate keys only for zero-weighted trackers (ZWTs). (See the next entry.)
Auto-generate for all. Auto-generate keys every Key (every) frames (from the tracker control panel), based on a rough 3-D location, when the second key is added to a tracker, if the camera/object is already solved.
Smooth after keying. When a key is added or changed on a supervised tracker, updates the relevant adjacent non-keyed frames, based on the Key Smooth parameter.
Pan to Follow. The camera view pans automatically to keep selected trackers centered. This makes it easy to see the broader context of a tracker.
Pan to Follow 3D. This variant keeps the solved 3-D point of the tracker centered, which can be better for looking for systematic solve biases.
ZWT auto-calculation. The 3-D position of each zero-weighted tracker is recomputed whenever it may have changed. With many ZWTs and long tracks, this might slow interactive response; use this item to temporarily disable recalculation if desired.
Lock Z-Drop on. Mimics holding down the 'Z' key in the camera view, so that the Z-Drop feature is engaged: a selected tracker is immediately dropped at a clicked-on location, rather than having to be dragged there. Saves wear and tear on the pinky finger. For convenience, meshes will not be selected if you click on them when this control is engaged. The status bar will show if this control is on.
Steady Camera. Predicts the next location of the tracker based on the last several frames. Use for smooth and steady shots from cranes, dollies, and steadicams.
Hand-Held: Sticky. Use for very irregular features poorly correlated to the other trackers. The tracker is looked for at its previous location. With both hand-held modes off, trackers are assumed to follow fairly smooth paths.
Hand-Held: Use others. Uses previously-tracked trackers as a guide to predict where a tracker will next appear, facilitating tracking of jittery hand-held shots.
Re-track at existing. Use this mode to re-track an already-tracked tracker. The search will be centered at the previously-determined location, preventing large jumps in position. Used for fine-tuning trackers, for example.
No resampling. Supervised tracking works at the original image resolution.
Linear x 4. Supervised tracking runs at 4 times the original image resolution, with linear interpolation between pixels. The default setting, suitable for typical DV and prosumer cameras.
Mitchell2 x 4. Tracking runs at 4x resolution with B=C=1/3 Mitchell-Netravali filtering, which produces sharper images than bilinear but less sharp than Lanczos: an intermediate setting if there are too many noise artifacts with Lanczos2.
Lanczos2 x 4. Tracking runs at 4x resolution with N=2 Lanczos filtering, which produces sharper images, sharpening both the underlying image and the noise. Suitable primarily for clean uncompressed source footage. Takes longer than Linear x 4.
Lanczos3 x 4. Tracks at 4x with N=3 Lanczos, which is even sharper, but takes longer too.
Linear x 8. Supervised tracking runs at 8x the original resolution. Not necessarily any better than running at 4x.
Mitchell2 x 8. Tracking runs at 8x resolution with B=C=1/3 Mitchell-Netravali filtering, which produces sharper images than bilinear but less sharp than Lanczos: an intermediate setting if there are too many noise artifacts with Lanczos2 x 8.
Lanczos2 x 8. Tracks at 8x with N=2 Lanczos.
Lanczos3 x 8. Tracks at 8x with N=3 Lanczos.
(Tool Scripts). Tool scripts were listed at the end of the Track menu in earlier versions of SynthEyes. They now have their own Script menu.
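For reference, the resampling filters named above have standard textbook definitions: the Lanczos-N kernel is sinc(x)·sinc(x/N) windowed to |x| < N, and Mitchell-Netravali is a piecewise cubic parameterized by B and C (B = C = 1/3 here). The sketch below illustrates those general kernels and how normalized tap weights could be computed; it shows the filter families, not SynthEyes's exact implementation:

```python
import math

def lanczos(x, a=2):
    """Lanczos kernel with support a (the manual's N=2 or N=3)."""
    if x == 0.0:
        return 1.0
    if abs(x) >= a:
        return 0.0
    px = math.pi * x
    # sinc(x) * sinc(x/a), with sinc(x) = sin(pi x)/(pi x)
    return a * math.sin(px) * math.sin(px / a) / (px * px)

def mitchell(x, B=1/3, C=1/3):
    """Mitchell-Netravali cubic; B = C = 1/3 is the setting named above."""
    x = abs(x)
    if x < 1:
        return ((12 - 9*B - 6*C) * x**3 + (-18 + 12*B + 6*C) * x**2 + (6 - 2*B)) / 6
    if x < 2:
        return ((-B - 6*C) * x**3 + (6*B + 30*C) * x**2 + (-12*B - 48*C) * x + (8*B + 24*C)) / 6
    return 0.0

def resample_weights(frac, kernel, support):
    """Normalized tap weights for a sample located frac pixels past tap 0."""
    taps = range(-support + 1, support + 1)
    w = [kernel(t - frac) for t in taps]
    s = sum(w)
    return [wi / s for wi in w]
```

The negative lobes of the Lanczos kernel are what sharpen both the image and the noise; the Mitchell parameters trade that sharpening against ringing, which matches the manual's description of Mitchell2 as an intermediate setting.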
Shot Menu
Add Shot. Adds a new shot and camera to the current workspace. This is different than File/New, which deletes the old workspace and starts a new one! SynthEyes will solve all the shots at the same time when you later hit Go, taking links between trackers into account. Use the camera and object list at the end of the Shot menu to switch between shots.
Edit Shot. Brings up the shot settings dialog box (the same as when adding a shot) so that you can modify settings. Switching from interlaced to non-interlaced or vice versa will require re-tracking the trackers.
Change Shot Images. Allows you to select a new movie or image sequence to replace the one already set up for the present shot. Useful to bring in a higher- or lower-resolution version, or one with color or exposure adjustments. Warning: changes to the shot length or aspect ratio will adversely affect previously-done work.
Image Preparation. Brings up the image preparation dialog (also accessed from the shot setup dialog), for image preparation adjustments, such as region-of-interest control, as well as image stabilization.
Enable Prefetch. Turns the image prefetch on and off. When off, the cache status in the timebar will not be updated as accurately.
Read 1f at a time. Preference! Tells SynthEyes to read only one frame at a time, but continue to pre-process frames in parallel. This option can improve performance when images are coming from a disk or network that performs poorly when given many tasks at once.
Activate other eye. When the camera view is showing one of the views from a stereo pair, switches to the other eye. Additionally, if there is a perspective window locked to the other (now-displayed) eye, it is switched to show the original camera view, swapping the two views.
Stereo Geometry. Brings up the Stereo Geometry control panel.
Add Moving Object. Adds a new moving object for the current shot. Add trackers to this object and SynthEyes will solve for its trajectory. The moving object shows as a diamond-shaped null in the 3-D workspace.
Remove Moving Object. Removes the current object. If it is a camera, it must not have any attached objects; if it is removed, the whole shot goes with it.
(Camera and Object List). This list of cameras and objects appears at the end of the Shot menu, showing the current object or camera, and allowing you to switch to a different object or camera. Selecting an object here is different than selecting an object in a 3-D viewport.
Script Menu
User Script Folder. Opens your personal folder of custom scripts in the Explorer or Finder. Handy for making or modifying your own. SynthEyes will mirror the subfolder structure to produce a submenu tree, so you can keep yours separate, for example.
System Script Folder. Opens SynthEyes's folder of factory scripts. Helpful for quickly installing new script releases. SynthEyes will mirror the subfolder structure to produce a submenu tree, so you can put all the unused scripts into a common folder to simplify the view, for example.
(Tool Scripts). Any tool scripts will appear here; selecting one will execute it. Such scripts can reach into the current scene to act as scripted importers, gather statistics, produce output files, or make changes. Standard scripts include Filter Lens F.O.V., Invert Perspective, Select by type, Motion capture calibrate, Shift constraints, etc. Note that importers and exporters have their own submenus on the File menu. See the Sizzle reference manual for information on writing scripts.
Window Menu
(Control Panel List). Allows you to change the control panel using standard Windows menu accelerator keystrokes.
No floating panels. The current active panel is docked on the left edge of the main application window.
Float One Panel. The active panel floats in a small carrier window and can be repositioned. If the active panel is changed, the carrier switches to the new panel. This may make better use of your screen space, especially with larger images or multiple-monitor configurations.
Many Floating Panels. Each panel can be floated individually and simultaneously. Clicking each panel's button either makes it open or, if it is already open, closes it. Only one panel is the official active panel. Important note: mouse, display, and keyboard operations can depend on which panels are open, or which panel is active. These combinations may not make sense, or may interact in undesirable ways without warning. If in doubt, keep only a single panel open.
No Panel. Closes all open floating panels, or removes the fixed panel. Note that one panel is still active for control purposes, even though it is not visible. Useful to get the most display space, and minimize redraw time, when using SynthEyes for RAM playback.
Graph editor. Opens the graph editor.
Hold Region Tracker Prep. Launches the Hold Tracker Preparation dialog, used to handle shots with a mix of translation and tripod-type nodal pans.
Solver Locking. Launches the solver's lock control dialog, used to constrain the camera path directly.
Path Filtering. Launches the solver's object path (and FOV) filtering dialog, which controls any post-solve filtering.
Spinal Editing. Launches the spinal editing control dialog, for real-time updates of solves.
Texture panel. Opens the texture extraction control panel.
Floating Camera. Click to float the camera view independently. The camera view will be empty in the standard viewport configurations.
Floating Perspective. Click to open a new floating perspective window.
Floating SimulTrack. Click to open a new floating SimulTrack window.
Float Playbar. Floats the playbar (play, frame forward, rewind, etc.) as a separate movable window. See also "Playbar on toolbar" in the preferences.
Show Top Time Bar. Turns the time-bar at the top of the main window on or off; for example, if you are using a graph editor's time bar on a second monitor, you can turn off the time bar on the main display.
Viewport Manager. Starts the viewport layout manager, which allows you to change and add viewport configurations to match your working style and display-system geometry.
Click-on/Click-off. Quick toggle for the click-on/click-off ergonomic mode; see the discussion in the Preferences panel.
Help Menu
Commands labeled with an asterisk (*) require a working internet connection; those with a plus sign (+) require a properly-configured support login as well. An internet connection is not required for normal SynthEyes operation, only for acquiring updates, support, etc.
Help PDF. Opens the help PDF file. Be sure to use the PDF's bookmarks as an extended table of contents, and the search function to help find things.
Sizzle PDF. Opens the Sizzle scripting language manual.
Read Messages+. Opens the web browser to a special message page containing current support information, such as the availability of new scripts, updates, etc. This page is monitored automatically; this item is equivalent to the Msg button on the toolbar.
Suggest Features+. Opens the feature-suggestion page for SynthEyes, allowing you to submit suggestions, as well as read other suggestions and comment and vote on them. (Not available in the demo version: send mail to support with questions/comments/suggestions.)
Tech Support Site*. Opens the technical support page of the web site.
Tech Support Mail*. Opens an email to technical support. Be sure to include a good Subject line! (Email support is available for one year after purchase.)
Report a credit*. Hey, we all want to know! Drop us a line to let us know what projects SynthEyes has been used in.
Website/Home*. Opens the SynthEyes home page for current SynthEyes news.
Website/Tutorials*. Opens the tutorials page.
Website/Forum*. Opens the SynthEyes forum.
Register. Launches a form to enter the information required to request SynthEyes authorization. The information is placed on the Windows clipboard. See the registration and authorization tutorial on the web site.
Authorize. After receiving new authorization information, copy it to the Windows clipboard, then select Authorize to load the new information.
Set Update Info. Allows you to update your support-site login, and control how often SynthEyes checks for new builds and messages.
Check for Updates+. Manually tells SynthEyes to go look for new builds and messages. Use this periodically if you have dialup and set the automatic-check strategy to "never." Similar to the D/L button on the toolbar.
Install Updated. If SynthEyes has successfully downloaded an updated build (the D/L button is green), this item will launch the installation.
About. Current version information.
The Graph Editor icon appears in the toolbar area to indicate a nominal workflow, but it launches a floating window. Additional panels are described below:
Add Many Trackers Dialog
Advanced Features
Clean Up Trackers
Coalesce Nearby Trackers
Curve tracking control
Finalize Trackers
Fine-Tuning Panel
Green-screen control
Hard and Soft Lock Controls
Hold Tracker Preparation Tool
Image Preparation
Spinal Editing Control
The shot-setup dialog is described in the section "Opening the Shot."
Spinners
SynthEyes uses spinners, the stacked triangles at the right edge of a numeric field, to permit easy adjustment of numeric fields on the control panels. The spinner control provides the following features:
- Click either triangle to increase or decrease the value in steps.
- Drag within the control to smoothly increase and decrease the value.
- Turns red on key frames.
Tool Bar
New, Open, Save, Undo, Redo. Buttons. Standard Windows functions. Wait for the tooltips, or use the Undo/Redo menu items, to see what function will be undone or redone.
(Control Panel buttons). Change the active control panel.
Forward/Backward. Button. Changes the current playback and tracking direction.
Reset Time. Button. Resets the timebar so that the entire shot is visible.
Fill. Button. Resets the camera viewport so that the entire image becomes visible. Shift-Fill sets the zoom to 1:1 horizontally.
Viewport Configuration Select. List box. Selects the viewport configuration. Use the viewport manager on the Window menu to modify or add configurations.
Camera01 (active camera/object). Left-click to cycle forward through the cameras and objects, right-click to cycle backwards.
Play Bar
Rewind. Button. Go to the first frame of the shot.
Back Key. Button. Go backwards to the previous key of the selected tracker or object.
Frame Number. Numeric field. Sequential frame number, starting at zero, or at 1 if so selected in the preferences.
Frame Backwards. Button. Go backwards one frame. Auto-repeats.
Play/Stop. Button. Begin playing the shot, forwards or backwards, at the rate specified on the View menu.
Frame Forward. Button. Go forwards one frame. Auto-repeats.
Forward Key. Button. Go forward to the next key of the selected tracker or object.
To End. Button. Go to the last frame of the shot.
Summary Panel
Auto. (The big green one.) Runs the entire match-move process: create features (blips), generate trackers, and solve. If no shot has been set up yet, you will be prompted for that first, so this is truly a one-stop button. See also Submit for Batch. Motion Profile. Select one of several profiles reflecting the kind of motion the image makes. Use Crash Pan when the camera spins quickly, for example, to be able to keep up; or use Gentle Motion for faster processing when the camera/image moves only slightly each frame. Green Screen. Brings up the green-screen control dialog. Zoom Lens. Check this box if the camera zooms. On Tripod. Check this box if the camera was on a tripod. Hold. Animated button. Use to create hold regions to handle shots with a mix of normal and tripod-mode sections. Fine-tune. Performs an extra stage of re-tracking between the initial feature tracking and the solve. This fine-tuning pass can improve the sub-pixel stability of the trackers on some shots. Settings. Launches the settings panel for fine-tuning. Run Auto-tracker. Runs the automatic tracking stage, then stops. Solve. Runs the solver.
Spline/Object List. An ordered list of splines and the camera or object they are assigned to. The default Spline1 is a rectangle containing the entire image. A feature is automatically assigned to the camera/object of the last
Move Up. Push button. Moves the selected spline up in the Spline/Object List, making it lower priority. Move Down. Push button. Moves the selected spline down in the Spline/Object List, making it higher priority. Shot Alpha Levels. Integer spinner. Sets the number of levels in the alpha channel for the shot. For example, select 2 for an alpha channel containing only 0 or 1 (255), which you can then assign to a camera or moving object. Object Alpha Level. Spinner. Sets the alpha level assigned to the current camera or object. For example, with 2 alpha levels, you might assign level 0 to the camera, and 1 to a moving object. The alpha channel is used to assign a feature only if the feature is not contained in any of the splines. Import Tracker to CP. Button. When activated, select a tracker, then click on a spline control point. The tracker's path will be imported as keys onto the control point.
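The alpha-level scheme above can be illustrated with a small sketch (a hypothetical reconstruction for illustration, not SynthEyes code): with 2 levels, an 8-bit alpha value is quantized to level 0 or 1, and each level maps to a camera or moving object.

```python
def quantize_alpha(alpha_8bit, levels):
    """Quantize an 8-bit alpha value (0-255) into one of `levels` bins.

    With levels=2, values below the midpoint become level 0 and the
    rest become level 1, matching the 0-or-1(255) example in the text.
    """
    # Scale 0..255 into 0..levels-1, rounding to the nearest bin.
    return round(alpha_8bit / 255 * (levels - 1))

# Hypothetical assignment table: which camera/object owns each level.
assignments = {0: "Camera01", 1: "Object01"}

print(assignments[quantize_alpha(10, 2)])   # near-transparent: Camera01
print(assignments[quantize_alpha(250, 2)])  # near-opaque: Object01
```

With more levels (say 4), the same quantization yields levels 0 through 3, each assignable to a different camera or moving object.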
Motion Profile. Select one of several profiles reflecting the kind of motion the image makes. Use Crash Pan when the camera spins quickly, for example, to be able to keep up; or use Gentle Motion for faster processing when the camera/image moves only slightly each frame. Clear all blips. Clears the blips from all frames. Use to save disk space after blips have been peeled to trackers. Blips this frame. Push button. Calculates features (blips) for this frame. Blips playback range. Push button. Calculates features for the playback range of frames. Blips all frames. Push button. Calculates features for the entire shot. Displays the frame number while calculating. Delete. Button. Clears the skip-frame channel from this frame to the end of the shot, or the entire shot if Shift is down when clicked. Skip Frame. Checkbox. When set, this frame will be ignored during automatic tracking and solving. Use (sparingly) for occasional bad frames, such as during explosions or when actors block the entire view. Camera paths are spline-interpolated on skipped frames. Advanced. Push button. Brings up a panel with additional control parameters. Link frames. Push button. Blips from each frame in the shot are linked to those on the prior frame (depending on tracking direction). Useful after changes in splines or alpha channels. Peel. Mode button. When on, clicking on a blip adds a matching tracker, which will be utilized by the solving process. Use on needed features that were not selected by the automatic tracking system.
Tracker Interior View. Shows the tracker's interior: the inner box of the tracker. Left Mouse: Drag the tracker location. Middle Scroll: Advance the current frame, tracking as you go. Right Mouse: Add or remove a position key at the current frame.
Tracker Type. Button. Toggles the tracker type among normal match-mode, dark spot, bright spot, or symmetric spot. Right-click to sequence through the types in reverse order. Direction. Button. Configures the tracker for backwards tracking: it will only track when playing or stepping backwards. Enable. Button. Animated control that turns the tracker on or off. Turn it off when the tracker gets blocked by something; turn it back on when the feature becomes visible again. Contrast. Number-less spinner. Enhances contrast in the Tracker Interior View window. Bright. Number-less spinner. Turns up the Tracker Interior View brightness. Color. Rectangular swatch. Sets the display color of the tracker for the camera, perspective, and 3-D views. Now. Button. Adds a tracker position key at the present location and frame. Right-click to remove a position key. Shift-right-click to truncate, removing all following keys. Key. Spinner. Tells SynthEyes to automatically add a key after this many frames, to keep the tracker on track. Key Smooth. Spinner. The tracker's path will be smoothed for this many frames before each key, so there is no glitch due to re-setting a key. Name. Edit field. Adjust the tracker's name to describe what it is tracking. Pos. H and V spinners. The tracker's horizontal and vertical position, from -1 to +1. You can delete a key (border is red) by right-clicking. Shift-right-clicking will truncate the tracker after this frame. Size. Size and aspect spinners. Size and aspect ratio (horizontal divided by vertical size) of the interior portion of the tracker. Search. H and V spinners. Horizontal and vertical size of the region (excluding the actual interior) that SynthEyes will search for the tracker, around its position in the preceding frame. Preceding means lower-numbered for forward tracking, higher-numbered for backward tracking. Offset. Button. When on, the offset channels will be added to the tracked location to determine the final tracker location, that net position will be editable, and the 2-D tracker will be locked. When off, the 2-D tracker is
Field of View. Spinner. Field of view, in degrees, on this frame. Focal Length. Spinner. Focal length, computed using the current Back Plate Width on Scene Settings. Provided for illustration only. Add/Remove Key. Button. Add or remove a key on the field of view (focal length) track at this frame. Known. Radio button. The field of view is already known (typically from an earlier run) and is taken from the field-of-view seed track. May be fixed or zooming. You will be asked if you want to copy the solved FOV track to the seed FOV track; do that if you want to lock down the solved FOV. Fixed, Unknown. Radio button. The field of view is unknown, but did not zoom during the shot. Fixed, with Estimate. Radio button. The camera did not zoom, and a reasonable estimate of the field of view is available and has been set into the beginning of the lens seed track. This mode can make solving slightly faster and more robust. Important: verify that you know, and have entered, the correct plate size before using any on-set focal length values. A correct on-set focal length with an incorrect plate size makes the focal length useless, and this setting harmful. Zooming, Unknown. Radio button. The field of view zoomed during the shot. Identical Lens Weight. Spinner. A 0-120 solver weight for stereo shots; when non-zero, it forces the two lens FOVs toward being identical. Use with care and only in special circumstances; lenses are rarely identical!
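The field-of-view/focal-length relationship above, and the warning about plate size, both follow from the standard pinhole-camera formula: plate width = 2 × focal × tan(FOV/2). This sketch (an illustration of the formula, not SynthEyes code) shows why a correct on-set focal length is useless when paired with a wrong plate width:

```python
import math

def focal_from_fov(fov_deg, plate_width_mm):
    """Focal length from horizontal FOV via the pinhole relation:
    plate_width = 2 * focal * tan(fov / 2)."""
    return plate_width_mm / (2.0 * math.tan(math.radians(fov_deg) / 2.0))

def fov_from_focal(focal_mm, plate_width_mm):
    """Inverse: horizontal FOV in degrees from focal length."""
    return math.degrees(2.0 * math.atan(plate_width_mm / (2.0 * focal_mm)))

# A 35 mm lens on a 36 mm-wide plate implies one FOV...
print(round(fov_from_focal(35.0, 36.0), 1))
# ...but the same focal length with a wrong plate width (e.g. 24.9 mm)
# implies a very different FOV -- the hazard the text warns about.
print(round(fov_from_focal(35.0, 24.9), 1))
```

The two printed FOVs differ by roughly fifteen degrees, which is why entering an on-set focal length without verifying the plate size can be worse than leaving the field of view unknown.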
Go! Button. Starts the solving process, after tracking is complete. Master Reset. Button. Resets all cameras/objects and the trackers on them, though all Disabled cameras/objects are left untouched. Control-click to clear the seed path, and optionally the seed FOV (after confirmation). Error. Number display. Root-mean-square error, in horizontal pixels, of all trackers associated with this camera or object. Seeding Method. Upper drop-down list controlling the way the solver begins its solving process, chosen from the following methods: Auto. List item. Selects the automatic seeding (initial estimation) process, for a camera that physically moves during the shot. Refine. List item. Resumes a previous solving cycle, generally after changes in trackers or coordinate systems. Tripod. List item. Use when the camera pans, tilts, and zooms, but does not move. Refine Tripod. List item. Resumes a previous solving cycle, but indicates that the camera was mounted on a tripod. Indirect. List item. Use for cameras/objects which will be seeded from links to other cameras/objects, for example, a DV shot indirectly seeded from digital-camera stills.
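The Error figure above can be understood with a small sketch (hypothetical, for illustration only): each tracker contributes one residual per valid frame, the distance between its 2-D tracked position and the reprojection of its solved 3-D position, expressed in horizontal-pixel units, and the display reports the root mean square over all of them.

```python
import math

def rms_error_hpix(residuals):
    """Root-mean-square of per-frame tracker reprojection residuals.

    `residuals` is a flat list of (dx, dy) errors, already scaled to
    horizontal-pixel units as described in the text.
    """
    if not residuals:
        return 0.0
    total = sum(dx * dx + dy * dy for dx, dy in residuals)
    return math.sqrt(total / len(residuals))

# Three sample residuals, in horizontal pixels.
print(rms_error_hpix([(0.3, 0.4), (0.0, 0.5), (0.5, 0.0)]))  # 0.5
```

Because the measure is a root mean square, a handful of large residuals can dominate many small ones, which is why cleaning up a few bad trackers often drops the displayed error noticeably.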
Tracker Name. Edit. Shows the name of the selected tracker; change it to describe what it is tracking. Camera/Object. Drop-down list. Shows what object or camera the tracker is associated with; change it to move the tracker to a different object or camera on the same shot (or, you can clone it there for special situations). Entries beginning with an asterisk (*) are on a different shot with the same aspect and length; trackers may be moved there, though this may adversely affect constraints, lights, etc. *3. Button. Starts and controls three-point coordinate setup mode. Click it once to begin, then click on the origin, on-axis, and on-plane trackers in the camera view, 3-D viewports, or perspective window. The button will sequence through Or, LR, FB, and Pl to indicate which tracker should be clicked next. Click this button to skip from LR (left/right) to FB (front/back), or to skip setting other trackers. After the third tracker, you will have the opportunity to re-solve the scene to apply the new settings.
Creation Mesh Type. Drop-down. Selects the type of object created by the Create Tool. Create Tool. Mode button. Clicking in a 3-D viewport creates the mesh object listed on the creation mesh type list, such as a pyramid or Earthling.
New Light. Button. Click to create a new light in the scene. Delete Light. Button. Deletes the light in the selected-light drop-down list. Selected Light. Drop-down list. Shows the selected light, and lets you change its name, or select a different one. Far-away light. When checked, the light is a distant, directional light. When off, the light is a nearby spotlight or omnidirectional (point) light. Compute over frames: This, All, Lock. In the (normal) This mode, the light's position is computed for each frame independently. In the All or Lock mode, the light's position is averaged over all the frames in the sequence. In the All mode, this calculation is performed repeatedly for live updates. In the Lock mode, the calculation occurs only when clicking the Lock button. New Ray. Button. Creates a new ray on the selected light. Delete Ray. Button. Deletes the selected ray. Previous Ray (<). Button. Switch to the previous, lower-numbered ray on the selected light. Ray Number. Text field. Shows something like 1/3 to indicate ray 1 of 3 for this light. Next Ray (>). Button. Switch to the next higher ray on the selected light. Selected Ray Source. Mode button. When lit up, click a tracker in the camera view or any 3-D view to mark it as one point on the ray. Target. Mode button. When lit up, click a tracker in the camera view or any 3-D view to mark it as one point on the ray. If the source and target trackers are the same, it is a reflected-highlight tracking setup, and the Target
The flex/curve control panel handles both object types, which are used to determine the 3-D position/shape of a curve, even if it has no discernible point features. If you select a curve, the parameters of its parent flex (if any) will be shown in the flex section of the dialog. New Flex. Creates and selects a new flex. Left-click successively in a 3-D view or the perspective view to lay down a series of control points. Right-click to end.
This dialog, launched from the Trackers menu, allows you to add many more trackers after you have successfully auto-tracked and solved the shot. Use it to improve accuracy in a problematic area of the shot, or to produce additional trackers to use as vertices for a tracker mesh. Note: it may take several seconds between launching the dialog and its appearance. During this time your processors will be very busy. Tracker Requirements Min #Frames. Spinner. The minimum number of valid frames for any tracker added. Min Amplitude. Spinner. The minimum average amplitude of the blip path, between zero and one. A larger value will require a more visible tracker. Max Avg Err. Spinner. The maximum allowable average error, in horizontal pixels, of the prospective tracker. The error is measured in 2-D between the 2-D tracker position and the 3-D position of the prospective tracker. Max Peak Err. Spinner. The maximum allowable error, in horizontal pixels, on any single frame. Whereas the average error above measures the overall
Advanced Features
This floating panel can be launched from the Feature control panel; it affects the details of how blips are placed and accumulated to form trackers. Feature Size (small). Spinner. Size in pixels for smaller blips. Feature Size (big). Spinner. Size in pixels for larger blips, which are used for alignment as well as tracking. Density/1K. Spinner for each of big and small. Gives a suggested blip density in terms of blips per thousand pixels. Minimum Track Length. Spinner. The path of a given blip must be at least this many frames long to have a chance to become a tracker. Minimum Trackers/Frame. Spinner. SynthEyes will try to promote blips until there are at least this many trackers on each frame, including pre-existing guide trackers. Maximum Tracker Count. Spinner. Only this many trackers will be produced for the object, unless even more are required to meet the minimum trackers/frame. Camera View Type. Drop-down list. Shows black-and-white filtered versions of the image, so the effect of the feature sizes can be assessed. Can also show the image's alpha channel, and the blue/green-screen check image, even if the screen control dialog is not displayed. Auto Re-blip. Checkbox. When checked, new blips will be calculated whenever any of the controls on the advanced features panel are changed. Keep off for large images/slow computers.
Align Mesh to Tracker Positions. The mesh will move to meet the trackers. Align World to Mesh Position. The entire solve, camera path and trackers, will move to meet the mesh, which will not move. Allow Uniform scaling, all axes the same. The mesh will be stretched the same amount along each axis to match the trackers as well as possible. Allow Non-uniform scaling, each axis separate. The mesh can be stretched separately along each axis to match, most usually for boxes where the exact dimensions are not known. Store resulting locations as tracker constraints. After alignment, the locations of the vertices will be burned into the trackers as Locks, so that the solve will reproduce this match again later, particularly for Align World to Mesh Position.
The panel is organized systematically, with a line for each category of tracker problem. A tracker can be counted in several different categories. There are Select toggle buttons for each category; each Select button selects and flashes the trackers in that category in the main viewports. Click the button a second time to turn it off and de-select the trackers. After cleaning up the trackers (Fix), you should re-solve or refine the solution. All trackers. Radio button. All trackers are affected. Selected trackers. Radio button. Only the trackers already selected when the dialog is opened are affected. (Delete) Bad Frames. Checkbox. When checked, bad frames are deleted when the Fix button is clicked. Note that the number of trackers in the category is shown in parentheses. Show. Toggle button. Bad frames are shown in the user interface, by temporarily invalidating them. The graph editor should be open in Squish mode to see them. Threshold. Spinner. This is the threshold for a frame to be bad, as determined by comparing the tracker's 2-D location on a frame to its predicted 3-D location. The value is either a percentage of the total number of frames (i.e., the worst 2%), or a value in horizontal pixels, as controlled by the radio buttons below. %. Radio button. The bad-frame threshold is measured in percentage; the worst N% of the frames are considered to be bad. Hpix. Radio button. The bad-frame threshold is a horizontal-pixel value. Disable. Radio button. When Fix is clicked, bad frames are disabled by adjusting the tracker's enable track.
Trackers, especially automatic trackers, can wind up tracking the same feature in different parts of the shot. This panel finds them and coalesces them together into a single overall tracker. Coalesce. Button. Runs the algorithm and coalesces trackers, closing the panel. Cancel. Button. Removes any tracker selection done by Examine, then closes the dialog without saving the current parameter settings. Close. Button on title bar. The close button on the title bar will close the dialog, saving the tracker selection and parameter settings, making it easy to examine the trackers and then re-do and complete the coalesce. Examine. Button. Examines the scene with the current parameter settings to determine which trackers will be coalesced and how many trackers will be eliminated. The trackers to be coalesced will be selected in the viewports. # to be eliminated. Display area with text. Shows how many trackers will be eliminated by the current settings. Example: SynthEyes found two pairs of trackers to be coalesced. Four trackers are involved; two will be eliminated, and two will be saved (and enlarged). The display will show 2 trackers to be eliminated. Defaults. Button. Restores all controls to their factory default settings. Distance (hpix). Spinner. Sets the maximum consistent distance between two trackers to be coalesced. Measured in horizontal pixels. Sharpness. Spinner. Sets the sensitivity within the allowable distance. If zero, trackers at the maximum distance are as likely to be coalesced as trackers at the same location. If one, trackers at the maximum distance are considered unlikely. Consistency. Spinner. The fraction of the frames two trackers must be nearby to be merged. Only selected trackers. Checkbox. When checked, only pre-selected trackers might be coalesced. Normally, all trackers on the current camera/object are eligible to be coalesced. Include supervised non-ZWT trackers. Checkbox. When off, supervised (golden) trackers that are not zero-weighted trackers (ZWTs) are not eligible for coalescing, so that you do not inadvertently affect hand-tuned trackers.
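One plausible reading of the Distance, Sharpness, and Consistency controls above (a sketch of the idea only, not SynthEyes's actual algorithm; all names are illustrative) is a per-frame proximity weight that falls off with distance, with a pair coalesced when the average weight over their shared frames reaches the Consistency setting:

```python
def should_coalesce(dist_per_frame, max_dist, sharpness, consistency):
    """Decide whether two trackers merge, given their per-frame
    distances in horizontal pixels (None where they don't overlap).

    Each shared frame gets a weight: 1.0 at zero separation, falling
    to (1 - sharpness) at max_dist; frames beyond max_dist count 0.
    The pair merges if the mean weight reaches `consistency`.
    """
    weights = []
    for d in dist_per_frame:
        if d is None:
            continue                      # no overlap on this frame
        if d > max_dist:
            weights.append(0.0)
        else:
            weights.append(1.0 - sharpness * (d / max_dist))
    if not weights:
        return False
    return sum(weights) / len(weights) >= consistency

# Two trackers about 1 hpix apart on most shared frames:
dists = [0.8, 1.1, 0.9, None, 1.0, 7.5]
print(should_coalesce(dists, max_dist=3.0, sharpness=1.0, consistency=0.5))
```

Under this reading, Sharpness=0 makes any pair within the distance limit equally mergeable, while Sharpness=1 makes pairs near the limit unlikely to merge, matching the descriptions of those spinners above.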
Launched by the All button on the Flex/Curve control panel. Filter Size. Edge-detection filter size, in pixels. Use larger values to accurately locate wide edges, smaller values for thinner edges. Search Width. Pixels. Size of the search region for the edge. Larger values mean a roughed-in location can be further from the actual location, but might also mean that a different edge is detected instead. Adjacency Sharpness. 0..1. This is the portion of the search region in which the edge detector is most sensitive. With a smaller value, edges nearest the roughed-in location will be favored. Adjacency Rejection. 0..1. The worst weight an edge far from the roughed-in location can receive. Do all curves. When checked, all curves will be tuned, not just the selected one. Animation range only. When checked, tuning will occur over the animation playback range, rather than the entire shot. Continuous Update. Normally, as a range of frames is tuned, the tuning result from any frame does not affect where any other frame is searched: the searched-for location is based solely on the earlier curve animation that was roughed in. With this box checked, the tuning result for each frame immediately updates the curve control points, and the next frame
With one or more trackers selected, launch this panel with the Finalize button on the Tracker control panel, then adjust it to automatically close gaps in a tracker (where an actor briefly obscures a tracker, say), and to filter (smooth) the trajectory of the selected trackers. The Finalize dialog affects only trackers which are not Locked (i.e., their Lock button is unlocked). When the dialog is closed via OK, affected trackers are Locked. If you need to later change a Finalized tracker, you should unlock it, then rerun the tracker from start to finish (this is generally fairly quick, since you've already got all the necessary keys in place). Filter Frames. The number of frames that are considered to produce the filtered version of a particular frame. Filter Strength. A zero-to-one value controlling how strongly the filter is applied. At the default value of one, the filter is applied fully. Max Gap Frames. The number of missing frames (gap) that can be filled in by the gap-filling process. Gap Window. The number of frames before the gap, and after the gap, used to fill frames inside the gap. Begin. The first frame to which filtering is applied. End. The last frame to which filtering is applied.
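The Filter Frames/Filter Strength behavior described above amounts to blending each raw sample toward a locally smoothed value. This sketch uses a simple centered moving average as a stand-in for whatever filter SynthEyes actually applies (the function and its details are illustrative assumptions):

```python
def finalize_filter(samples, frames, strength):
    """Blend each sample toward a centered moving average.

    `frames` plays the role of Filter Frames (window size);
    `strength` in [0, 1] interpolates between the raw value (0)
    and the fully filtered value (1). A plain moving average
    stands in for the real filter here.
    """
    half = frames // 2
    out = []
    for i, raw in enumerate(samples):
        window = samples[max(0, i - half): i + half + 1]
        smoothed = sum(window) / len(window)
        out.append((1.0 - strength) * raw + strength * smoothed)
    return out

noisy = [0.0, 1.0, 0.0, 1.0, 0.0]
print(finalize_filter(noisy, frames=3, strength=1.0))  # fully filtered
print(finalize_filter(noisy, frames=3, strength=0.0))  # unchanged
```

At strength 0 the trajectory passes through untouched; at the default strength of 1 each frame is replaced by its windowed average, which is the "applied fully" case in the text.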
Fine-Tuning Panel
Launched from the Track menu.
Fine-tune during auto-track. Checkbox. If checked, the fine-tuning process will run automatically after auto-tracking. Key Spacing. Spinner. Requests that there be a key every this many frames after fine-tuning. Tracker Size. Spinner. The size of the trackers during and after fine-tuning. The tracker size and search values are the same as on the Tracker panel. Tracker Aspect. Spinner. The aspect ratio of the trackers during and after fine-tuning. U Search Size. Spinner. U (horizontal) search-area size. Note that because the fine-tuning starts from the previously-tracked location, the search sizes can be very small, equivalent to a few pixels. V Search Size. Spinner. V (vertical) search-area size. Reset. Button. Restores the current settings of the panel to the factory values (not the preferences). Does not change the preferences; to reset the preferences to the factory values, click Reset then Set Prefs. Get Prefs. Button. Sets the current settings to the values stored as preferences.
Green-Screen Control
Launched from the Summary control panel; causes auto-tracking to look only within the keyed area for trackers. The key can also be written as an alpha channel or RGB image by the image preprocessor. Enable Green Screen Mode. Turns the green-screen mode on or off. Turns on automatically when the dialog is first launched. Reset to Defaults. Resets the dialog to the initial default values.
Master Controls All. Button. Turn on or off all of the position and rotation locks. Shift-right-click to truncate keys past the current frame. Control-right-click to clear all keys, leaving the object un-locked. Master weight. Spinner. Set keys on all position and rotation soft-lock weights. Shift-right-click to truncate keys past the current frame. Control-right-click to clear all keys (any locked frames will be hard locks). Back Key. Button. Skip backwards to the previous frame with a lock-enable or weight key (but not a seed path key). Forward Key. Button. Skip forward to the next frame with a lock-enable or weight key (but not a seed path key). Show. Button. When on, the seed path is shown in the main viewports, not the solved path. Also, the seed field of view/focal length is shown on the Lens Control panel, instead of the solved value. Translation Weights Pos. Button. Turn on or off all position lock enables. Position Weight. Spinner. Set all position weights. L/R. Button. Left/right lock enable. L/R Weight. Spinner. Left/right weight. F/B. Button. Front/back lock enable. F/B Weight. Spinner. Front/back weight. U/D. Button. Up/down lock enable.
Apply. Button. The preparation operation is performed. Undo. Button. Undoes the last operation (of any kind). Preparation Mode Truncate. Button. Affected trackers are shut down in the interior of any hold region. Make Far. Button. Affected trackers are converted to Far, and shut down outside the hold region, plus the specified overlap. Clone to Far. Button. Default. Affected trackers are cloned, and the clone converted to Far with a reduced range as in Make Far. Convert Some. Button. A specified percentage of trackers is randomly selected and converted to Far. Percentage. Spinner. The percentage of trackers converted in Convert Some mode. Affected Trackers Selected. Button. Only selected trackers are affected by the operation. All. Button. All trackers are affected. (In both options, only automatic, non-Far trackers are considered.)
Like the main SynthEyes user interface, the image preparation dialog has several tabs, each bringing up a different set of controls. The Stabilize tab is active in the illustration above. With the left button pushed, you can review all the tabs quickly. For more information on this panel, see the Image Preparation and Stabilization sections. Warning: you should be sure to set up the cropping and distortion/scale values before beginning tracking or creating rotosplines. The splines and trackers
Redo. Button. Redoes the last change undone. Add (checkline). Button. When on, drag in the view to create checklines. Delete (checkline). Button. Deletes the selected checkline. Final. Button. Reads either Final or Padded: the two display modes of the viewport. The Final view shows the final image coming from the image preparation subsection. The Padded view shows the image after padding and lens undistortion, but before stabilization or resampling. Both. Button. Reads either Both, Neither, or ImgPrep, indicating whether the image prep and/or main SynthEyes display windows are updated simultaneously as you change the image prep controls. Neither mode saves time if you do not need to see what you are doing. Both mode allows you to show the Padded view and the Final view (in the main camera view) simultaneously. Margin. Spinner. Creates an extra off-screen border around the image in the image prep view. Makes it easier to see and understand what the stabilizer is doing, in particular. Show. Button. When enabled, trackers are shown in the image prep view. Image Prep View. Image display. Shows either the final image produced by the image prep subsystem (Final mode), or the image obtained after padding the image and undistorting it (Padded mode). You can drag the region-of-interest (ROI) and point-of-interest (POI) around, plus you can click to select trackers, or lasso-select by dragging. Playbar (at bottom) Preset Manager. Drop-down. Lets you create and control presets for the image prep system, for example, different presets for the entire shot and for each moving object in the shot. Preset Mgr. Disconnects from the current preset; further changes on the panel will not affect the preset. New preset. Creates and attaches to a new preset. You will be prompted for the name of the new preset.
Back Frame. Button. Go back one frame; with Control down, back one key; with Shift down, back to the beginning of the shot. Auto-repeats. Frame. Spinner. The frame to be displayed in the viewport, and to set keys for. Note that the image does not update while the spinner drags, because that would require fetching all the intermediate frames from disk, which is largely what we're trying to avoid. Forward Frame. Button. Go forward one frame; with Control down, forward one key; with Shift down, forward to the end of the shot. Auto-repeats. Forward Key. Button. Go forward to the next frame with a ROI or Levels key. To End. Button. Go to the end of the shot.
Make Keys. Checkbox. When off, any changes to the levels or region of interest create keys at frame zero (for when they are not animated). With the checkbox on, keys are created at the current frame. Enable. Button (stoplight). Allows you to temporarily disable levels, color, blur, downsampling, channels, and ROI, but not padding or distortion. Use it to find a lost ROI, for example. Effective only within image prep. Rez Tab Blur. Spinner. Causes a Gaussian blur with the specified radius, typically to minimize the effect of grain in film. Applied before down-sampling, so it can eliminate artifacts. Hi-Pass. Spinner. When non-zero, creates a high-pass filter using a Gaussian blur of this radius. Use to handle footage with highly variable lighting, such as explosions and strobes. The radius is usually much larger than typical blur compensations. Applied before down-sampling. DownRez. Drop-down list: None, By 1/2, By 1/4. Causes the image from disk to be reduced in resolution by the specified amount, saving RAM and time for large film images, but reducing accuracy as well. Interpolation. Drop-down list: Bi-Linear, 2-Lanczos, 3-Lanczos, 2-Mitchell. The bi-linear method is fastest but softens the image slightly. If the shot has a lot of noise, that can be a good thing. The 2-Lanczos filter provides a sharper result, after a longer time. The 3-Lanczos filter is even sharper, with more time, and of course the noise is made sharper also. 2-Mitchell is between bi-linear and 2-Lanczos in sharpness.
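The Hi-Pass control above is the classic blur-and-subtract construction: a large-radius Gaussian blur estimates the slowly varying lighting, and subtracting it leaves the high-frequency detail that trackers latch onto. A 1-D sketch of the idea (illustrative only; SynthEyes's exact implementation is not shown here):

```python
import math

def gaussian_kernel(radius):
    """Normalized discrete Gaussian weights, sigma tied to the radius."""
    sigma = max(radius / 2.0, 1e-6)
    ks = [math.exp(-(x * x) / (2 * sigma * sigma))
          for x in range(-radius, radius + 1)]
    s = sum(ks)
    return [k / s for k in ks]

def high_pass(signal, radius, mid=0.5):
    """Subtract the Gaussian-blurred signal, re-centered around `mid`.

    Removes slow lighting changes (strobes, explosions) while keeping
    local detail -- the use case described in the text.
    """
    kern = gaussian_kernel(radius)
    n = len(signal)
    out = []
    for i in range(n):
        low = sum(kern[j + radius] * signal[min(max(i + j, 0), n - 1)]
                  for j in range(-radius, radius + 1))
        out.append(signal[i] - low + mid)
    return out

# A small bright feature riding on a bright-to-dark lighting ramp.
ramp = [0.9 - 0.05 * i for i in range(10)]
ramp[5] += 0.2
detail = high_pass(ramp, radius=3)
print(max(detail) == detail[5])  # the feature survives; the ramp does not
```

This is also why the text notes the high-pass radius is usually much larger than a blur radius: the blur must be wide enough to capture only the lighting trend, not the features themselves.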
Warning: Path and FOV filtering CAUSE sliding, because they move the camera path AWAY from the position that produces the best, locked-on results. Filtering is best used as part of a workflow where you filter only a few axes, lock them after filtering, then refine the solution to accommodate the effect of the filtering. The selection of solve or seed path, and of whole shot or playback range, is available only interactively; the solving process always filters the whole shot into the solve path (if enabled).
Frequency. Animated spinner. Cutoff frequency controlling how quickly the parameter is allowed to change, in cycles per second (Hz), ranging up to at most 1/2 the frame rate. Strength. Animated spinner. Controls how strongly the filtering is applied, ranging from 0 (none) to 1 (completely filtered at the given frequency). X/Y/Z. Checkboxes. One checkbox for each translational axis, controlling whether or not it will be filtered. Rotation. Checkbox. Controls whether rotation is filtered. Note that there are no separate channels to filter or not for rotation. Distance. Checkbox. Controls whether the camera/origin (camera tracks) or object/camera (object tracks) distance is filtered or not. Primarily intended for difficult object tracks where most error is in the direction towards or away from the camera. FOV. Checkbox. Controls whether or not the field of view is filtered. To Solve Path. The filtered path (and/or FOV) is written into the solve tracks (normal default). To Seed Path. The filtered path (and/or FOV) is written into the seed tracks, which is available only interactively and is intended to generate data for hard or soft locking the axes. Whole Shot. The entire shot is filtered (normal default). Playback range. Only the portion of the shot between the green and red playback range markers on the timebar will be filtered. For interactive use (only), this can be quicker and easier than setting up an animated strength value to adjust a portion of a shot. The filtering will blend in and out at each end of the playback range.
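The Frequency and Strength controls above describe a blended low-pass filter on the path channels. This sketch uses a simple one-pole low-pass as a stand-in for whatever filter SynthEyes actually applies (the function and its coefficients are illustrative assumptions):

```python
import math

def lowpass_blend(samples, cutoff_hz, frame_rate, strength):
    """One-pole low-pass at `cutoff_hz`, blended in by `strength`.

    strength=0 returns the raw path; strength=1 returns the fully
    filtered path. The cutoff is capped at the Nyquist rate (half
    the frame rate), as the text notes.
    """
    cutoff_hz = min(cutoff_hz, frame_rate / 2.0)
    # Standard RC low-pass coefficient for the given cutoff.
    dt = 1.0 / frame_rate
    rc = 1.0 / (2.0 * math.pi * cutoff_hz)
    alpha = dt / (rc + dt)
    out, state = [], samples[0]
    for raw in samples:
        state += alpha * (raw - state)              # filtered value
        out.append((1.0 - strength) * raw + strength * state)
    return out

# A jittery camera X channel: strong filtering damps the jumps.
path = [0.0, 1.0, 0.0, 1.0, 0.0, 1.0]
smooth = lowpass_blend(path, cutoff_hz=1.0, frame_rate=24.0, strength=1.0)
print(max(smooth) - min(smooth) < 0.5)  # jitter amplitude reduced
```

Note how this also illustrates the sliding warning above: the filtered samples no longer equal the raw solved positions, so any filtered axis should be locked and the solve refined afterwards.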
Off/Align/Solve. Button. Controls the mode in which the spinal editing features run, if at all. In Align mode, the scene is re-aligned after a change. In Solve mode, a refine solve cycle is run after a change. Finish. Button. Used to finish a refine solve cycle that was truncated to maintain response time. Equivalent to the Go button on the solver control panel. Lock Weight. Spinner. This weight is applied to create a soft-lock key on each applicable channel when the camera or object is moved or rotated. When this spinner is dragged, the solver will run in Solve mode, so you can interactively adjust the key weight. Drag time (sec). Spinner. (Solve mode only.) Refine cycles will automatically be stopped after this duration, to maintain an interactive response rate. If zero, there will be no refine cycles during drags. Time at release. Spinner. (Solve mode only.) An additional refine operation will run at the completion of a drag, lasting for up to this duration. If zero, there will not be a solve cycle at the completion of dragging (i.e., if the drag time is long enough for a complete solve already). Update ZWTs, lights, etc on drag. Checkbox. If enabled, ZWTs, lights, etc. will be updated as the camera is dragged, instead of only at the end. Message area. Text. A text area displays the results of a solve cycle, including the number of iterations, whether it completed or was stopped, and the RMS error. In Align mode, a total figure of merit is shown, reflecting the extent to which the constraints could be satisfied; the value will be very small, unless the constraints are contradictory. Preferences Controls The spinal settings are stored in the scene file. When a new scene is created, the spinal settings are initialized from a set of preferences. These preferences are controlled directly from this panel, not from the preferences panel.
Make Keys. Button. When on, keys are created and shown at the current frame. When off, the value and status on the first frame in the shot are shown for non-animated fixed parameters.
Back to Key. Button. Moves back to the nearest previous frame with any stereo-related key.
Forward to Key. Button. Moves forward to the next following frame with any stereo-related key.
(file name). Static text field. Shows the base file name of the texture on the selected mesh; this file is either read, if Create Texture is off, or written, if Create Texture is on.
Set. Button. Brings up the file browser to set the file name. Important: be sure to set Create Texture appropriately before clicking Set, so that the correct File Open or File Save dialog can be displayed.
Clear. Button. Removes the texture file name.
Options. Button. Allows the compression options for the selected file type to be changed.
Save. Button. All selected meshes with extracted textures are re-saved to disk. Use after painting in the alpha channel, for example.
Create Texture. Checkbox. When set, this mesh will have a texture computed for it on demand (via Run, Run All, or after a solve), which will then be written to the designated file. When the checkbox is clear, the specified texture will just be shown in the viewport.
Timing Bar
The timing bar shows valid regions and keys for trackers, roto masks, etc, depending on what is currently selected and the active panel. Hold regions are shown with magenta bars at the top of the frames.
Green triangle: start of replay loop. Left-drag.
Red triangle: end of replay loop. Left-drag.
Green bar: the Begin frame on the solver panel. Display only.
Red bar: the End frame on the solver panel. Display only.
Left Mouse: Click or drag the current frame. Drag the start and end of the replay loop. Shift-drag to change the overall starting or ending frame. Control-shift-drag to change the end frame, even past the end of the shot (useful when the shot is no longer available).
Middle Mouse: Drag to pan the time bar left and right.
Middle Scroll: Scroll the current time. Shift-scroll to zoom the time bar.
Right Mouse: Horizontal drag to pan the time bar, vertical drag to zoom it. Or, right-click cancels an ongoing left- or middle-mouse operation.
Camera Window
The camera view can be floated with the Window/Floating camera menu item.
Left Mouse: Click to select and drag a tracker, or create a tracker if the Tracker panel's create button is lit. Shift-click to include or exclude a tracker from the existing selection set. Drag to lasso 2-D trackers, control-drag to lasso both the 2-D trackers and any 3-D points, shift-drag to lasso additional trackers. Lassos meshes instead if "Edit/Lasso meshes instead" is turned on. ALT-left-click (Mac: Command-left-click) to link to a tracker, when the Tracker 3-D panel is displayed. Click the marker for a tracker on a different object to switch to that object. Drag a Lens panel alignment line.
3-D Viewport
Left Mouse: Click and drag repeatedly to create an object, when the 3-D Panel's Create button is lit. ALT-left-click (Mac: Command-left-click) to link to a tracker, when the Tracker 3-D panel is displayed. Drag a lasso to select multiple trackers, shift-drag to lasso additional trackers. Lassos meshes when "Edit/Lasso meshes instead" is selected. Or, move, rotate, or scale an object, depending on the tool last selected on the 3-D Panel.
Middle Mouse: Drag to pan the viewport. (See the mouse preferences discussion above.)
Middle Scroll: Zoom the viewport.
Right Mouse: Drag vertically to zoom the viewport. Or, cancel an ongoing left- or middle-mouse operation.
Tracker 7 is unlocked and selected in the main user interface, and a selection of keys from trackers 6, 7, and 9 are selected in the graph editor. While the other trackers are automatic, #7 is now supervised and tracks in the forward direction (note the directionality in the key markers). The current frame # is off to the left, before frame 35.
The capture shows a graph display of Camera01. The red, green, and blue traces are solved camera X, Y, and Z velocities, though you would have to expose the solved velocity node to know that. The magenta trace with key marks on every frame is a field-of-view curve from a zoom shot. The time area is in scroll mode, the graph shows frames 62 to 130, and we are on frame 117.
Hint. This panel does a lot of different things. If you only read this, you will probably not understand exactly what everything does or why. We could go on and on trying to describe everything exactly, to no purpose. Keep alert for what SynthEyes can do, and give it a try inside SynthEyes; you will understand a lot better.
Alpha, Error, Time Sort. Sorts trackers in a modified alphabetical order, by the error after solving, by time, or by tracker lifetime. The button sequences through these four modes in order; right-click to go in reverse order.
Left Hierarchy Scroll
This is the scroll bar along the left edge of the graph editor in both graph and tracks modes. In the hierarchy scroll: left-drag inside the knob to move it and pan the hierarchy vertically, left-click outside the knob to page up or down, right-click to HOME to the top, or double-click to center on that location.
The interior of the entire height of the scroll bar shows where nodes are selected or open, even if they are not currently displayed. You can rapidly see any of those open nodes by clicking at that spot on the scroll bar.
Hierarchy/Canvas Gutter
A small gutter area between the hierarchy and canvas areas lets you expand the hierarchy area to show longer tracker names, or compress it down until it cannot be seen at all, to save space if the graph editor is embedded in a complex layout. Note that the gutter itself cannot be seen directly; it starts at the right edge of the white border behind selected hierarchy nodes, and the cursor will change shape to a left/right drag cursor over it.
Tracks Mode
Hierarchy Area
The middle-mouse scroll wheel scrolls the hierarchy area vertically.
Disclosure Triangle. Click to expose or hide the node/nodes/tracks under this node.
Visibility.
Tracker Name. Selected nodes have a white background. Only some types of nodes can be selected, corresponding to what can be selected in SynthEyes's viewports. In the following list, keep in mind that only one of most objects can be selected at a time; only trackers can be multi-selected. Click or drag to select one node (updating all the other views), control-click or drag to toggle the selection, control-shift-drag to clear a range of selections, shift-click to select an additional tracker, shift-click an already-selected tracker to select the range of trackers from this one to the nearest selected one, or double-click to change the name of a node (if allowed).
Include in Composite. When on (as shown), keys on this track are included in the composite track of its parent (and possibly in the grandparent, great-grandparent, etc). The off key of an enable track is never included on a composite track.
Mouse Modes
The mouse mode buttons at the bottom center control what the mouse buttons do in the canvas area. Common operations shared by all modes: middle-mouse pan; middle-scroll to change the current frame and pan if needed; shift-middle-scroll to zoom the time axis; right-drag to zoom or pan the time axis (like the main timebar); right-click to bring up the track mode's canvas menu.
Select Keys. The shared operations plus: left-click a key to select it, left-drag a box to select all the keys in the box, shift-left-click or drag to add to the selected key set, control-left-click or drag to remove from the selected key set.
Left-click a key to select it, left-drag a box to select all the keys in the box, left-drag selected keys to re-time them (shift them in time), control-left-drag to clone the selected keys and drag them to a new frame, alt-left-drag to include keys on all tracks sharing keys. Double-click keys to bring up the Set Key Values dialog.
Add Keys. The shared operations plus:
Left-click a key to select it, left-click a location where there is no key to add one, left-drag a box to add keys at all possible key locations within the box (the value will be determined by interpolating the existing curve at the time the key is added), shift-left-click to add to the selected key set. Double-click keys to bring up the Set Key Values dialog.
Delete Keys. The shared operations plus:
Left-click a key to delete it; left-drag a region, and all keys inside it that can be deleted will be deleted.
Squish Mode. This mode activates automatically when you select Squish mode with the keys not shown (see Shared Features, above). With no keys shown, the key manipulation modes do not make sense. Instead the following mode, modified from the hierarchy's name area, is in effect: click or drag to select and flash one node, control-click or drag to toggle the selection, control-shift-drag to clear a range of selections, shift-click to select an additional tracker, shift-click an already-selected tracker to select the range of trackers from this one to the nearest selected one.
Hierarchy Menu (Tracks mode)
This menu appears when you right-click in the hierarchy area. Note that some menu items pay attention to the mouse location when you right-click.
Home. Scrolls the hierarchy up to the top.
End. Scrolls the hierarchy to the end.
Close except this. Closes all the other nodes except the right-clicked one.
Close all. Closes all nodes except the top-level Scene.
Graphs Mode
Hierarchy Area
Disclosure Triangle. Click to expose or hide the node/nodes/tracks under this node.
Visibility.
Tracker Name. Selected nodes have a white background. Only some types of nodes can be selected, corresponding to what can be selected in SynthEyes's viewports. In the following list, keep in mind that only one of most objects can be selected at a time; only trackers can be multi-selected. Click or drag to select one node (updating all the other views), control-click or drag to toggle the selection, control-shift-drag to clear a range of selections, shift-click to select an additional tracker, shift-click an already-selected tracker to select the range of trackers from this one to the nearest selected one, or double-click to change the name of a node (if allowed).
Show Channel(s). When on (as shown), the channel's graph is drawn in the canvas area. On a node, controls all the channels of the node, and the control may have the on state shown, a partially-shown state (fainter with no middle dot), or may be off (hollow, no green or dot).
Zoom Channel. Controls the vertical zoom of this channel, and all others of the same type: they are always zoomed the same to keep the values comparable. Left-click to see all related channels (their zoom icons will light up), see the zero level of the channel in the canvas area, and see the range of values displayed on the status line. Left-drag to change the scale. It will change the offset to keep the data visible; hold the ALT key to keep the data visible over the entire length of the shot. Right-click to reset the zoom and offsets to their initial values. Double-click to auto-zoom each channel in the same group so that they have the same scale and same offsets. Compare to double-clicking the pan icon. Shift-double-click auto-zooms all displayed channels, not just this group.
Left-click a key to select it, left-drag a box to select all the keys in the box, left-drag selected keys to re-time them (shift them in time), control-left-drag to clone the selected keys and drag them to a new frame, alt-left-drag to include keys on all tracks sharing keys with the selected ones. Double-click keys to bring up the Set Key Values dialog.
Add Keys. The shared operations at top plus:
Left-click a key to select it, shift-left-click on a key to add to the selected key set, control-left-click on a key to remove it from the selected key set. Left-click on a curve to add a key at that location. Left-drag a box in empty space to add keys at all possible key locations within the box; the value will be determined by interpolating the existing curve at the time the key is added. Double-click keys to bring up the Set Key Values dialog.
Delete Keys. The shared operations at top plus:
Left-click a key to delete it; left-drag a region, and all keys inside it that can be deleted will be deleted.
Deglitch. The shared operations at top plus:
Left-click a curve or key to fix a glitch by averaging, or by truncating if it is at the beginning or end of the curve. Warning: do not try to deglitch the first frame of a velocity curve; it is the second frame of the actual data. Turn on the position curve instead. Control-left-drag to isolate on the curve under the mouse cursor. (Temporarily enters isolate mode.)
Isolate. Intended to be used when all trackers are selected and displayed. The shared operations at top plus:
Zoom.
Left-drag an area, then release; the channel zooms and offsets are changed to display only the dragged region. This simulates zooming the canvas, but it is the zoom and pan of the individual channels that is changing. Right-click on the zoom button resets the pans and zooms, even if the zoom button is not active.
Hierarchy Menu (Graph mode)
This menu appears when you right-click in the hierarchy area. Note that some menu items pay attention to the mouse location when you right-click.
Home. Scrolls the hierarchy up to the top.
End. Scrolls the hierarchy to the end.
Hide these curves. Turns off the display of all data channels of the node that was right-clicked.
Close except this. Closes all the other nodes except the right-clicked one.
Close all. Closes all nodes except the top-level Scene.
Expose recursive. Exposes the clicked-on node and all its children.
Close recursive. Closes the clicked-on node and all its children.
Expose selected. Exposes all selected nodes.
Close selected. Closes all selected nodes.
Copy Selected Keys. Copies the selected keys onto the shared graph-editor clipboard.
Cut Selected Keys. Copies the selected keys onto the shared graph-editor clipboard, then deletes them.
Paste Keys into this. Pastes keys from the shared graph-editor clipboard into the node or channel you right-clicked on. Keys may be moved between channels of the same underlying type: for example, Y position keys can be moved to the X position channel, but not to the Pan rotation angle. SynthEyes is at least slightly clever about figuring out what you are trying to do when moving different kinds of keys into different places; try first, ask questions later. The menu item will be grayed out if the transfer cannot be made. Note that pasting into a locked tracker will not have any effect.
Delete clicked. Deletes the node you right-clicked on. Note: the delete key (on the keyboard) deletes keys, not nodes, in both the canvas and hierarchy areas.
Activated by double-clicking a key in the graph or tracks views to change one or more keys to new values, specified numerically. If multiple keys are selected when the dialog is activated, the values can all be set to the same value, or all be offset by the same amount, as selected by the radio buttons at the bottom of the panel. The value is controlled by the spinner, but also by up and down buttons for each digit. You can add 0.1 to the value by clicking the + button immediately to the right of and below the decimal point. The buttons add or subtract from the overall value, not from only a specific digit. Right-clicking an up or down button clears that digit and all lower digits to zero, rounding the overall value. The values update into the rest of the scene as you adjust them. When you are finished, click OK to keep the change, or Cancel to cancel it.
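The digit-button arithmetic above can be sketched numerically. These helper names are hypothetical (this is an illustration of the described behavior, not SynthEyes code); `digit_place` is the power of ten of the clicked digit, e.g. -1 for the tenths button:

```python
def bump(value, digit_place, direction=+1):
    """Up/down buttons add or subtract one unit at that digit's
    place, affecting the overall value (so carries propagate)."""
    return value + direction * 10.0 ** digit_place

def clear_below(value, digit_place):
    """Right-click behavior: zero this digit and all lower digits,
    rounding the overall value rather than simply truncating."""
    step = 10.0 ** (digit_place + 1)   # smallest unit that survives
    return round(value / step) * step
```

For example, bumping 3.46 at the tenths place gives 3.56, and right-clicking the tenths button on 3.46 rounds the value to 3.0.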
View Submenu
Local coordinate handles. The handles on cameras, objects, or meshes can be oriented along either the global coordinate axes or the axes of the item itself; this menu check item controls which is displayed.
Path-relative handles. The handles are positioned using the camera path: slide the camera along the path, inwards with respect to curvature, or upwards from the curvature. This option applies only to cameras and objects.
Stereo Display. If in a stereo shot, selects a stereo display from both cameras. See Perspective View Settings to configure.
Whole path. Moves a camera or object and its trackers simultaneously. See 3-D Control Panel.
Whole affects meshes. Controls whether or not the Whole button affects meshes as it moves a scene. Turn on if you have already placed the
Texturing Submenu
Frozen Front Projection. The current frame is frozen to form a texture map for every other frame in the shot. The object disappears in this frame; in other frames you can see geometric distortion as the mesh (with this image applied) is viewed from other directions.
Rolling Front Projection. The edit mesh will have the shot applied to it as a texture, but the image applied will always be the current one.
Remove Front Projection. Texture-mapping front projection is removed from the edit mesh.
Assign Texture Coordinates. Assigns UV texture coordinates using camera mapping, then crops them to use the entire range.
Crop Texture Coords. Adjusts the UV coordinates of the edit mesh so that they use the entire 0..1 range. Use this after a camera map or heavy edit of a mesh, to utilize more of the possible texture map's pixels.
Clear Texture Coords. Any UV texture coordinates are cleared from the edit mesh, whether they are due to front projection or importing.
Create Smooth Normals. Creates a normal vector at each vertex of the edit mesh, averaging over the attached facets. The smooth normals are used to provide a smooth perspective display of the mesh.
Clear Normals. The per-vertex normals are cleared, so face normals will be used subsequently.
Open Texture Panel. Opens the texture control panel, so you can apply an existing texture to a mesh, or calculate a new one.
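Conceptually, Crop Texture Coords rescales the UV bounding box to fill 0..1. A minimal sketch of that rescaling (an assumed interpretation, for illustration; the function name is hypothetical):

```python
def crop_uvs(uvs):
    """Rescale (u, v) pairs so their bounding box spans the full
    0..1 range in each axis, using more of the texture's pixels.
    Sketch of the described behavior, not the SynthEyes code."""
    us = [u for u, v in uvs]
    vs = [v for u, v in uvs]
    du = (max(us) - min(us)) or 1.0   # guard degenerate (flat) ranges
    dv = (max(vs) - min(vs)) or 1.0
    return [((u - min(us)) / du, (v - min(vs)) / dv) for u, v in uvs]
```

After a camera map, UVs typically occupy only the small region of the frame the mesh covered; this remap stretches them to the whole texture.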
Linking Submenu
Align via Links dialog. Brings up a dialog that uses existing links either to align a mesh to the location of the trackers it is linked to, or to align the entire world (shot) to match the mesh. The latter option is useful when you have a mesh model and want the matchmove to match your existing model; it is a form of Coordinate System Alignment.
Grid Submenu
Show Grid. Toggle. Turns grid display on and off in this perspective window. Keyboard: G key.
Move Grid. Mouse mode. Left-dragging will slide the grid along its normal, for example allowing you to raise or lower a floor grid.
Floor Grid, Back Grid, Left Side Grid, Ceiling Grid, Front Grid, Right Side Grid. Puts the grid on the corresponding wall of a virtual room (stage), normally viewed from the front. The grids are described this way so that they are not affected by the current coordinate system selection.
To Facet/Verts/Trkrs. Aligns the grid using an edit-mesh facet, 1 to 3 edit-mesh vertices if a mesh is open for editing, or 1 to 3 trackers otherwise. This is
File name. Selects the output file name to which the movie should be written. A QuickTime movie, BMP, Cineon, DPX, JPEG, OpenEXR, PNG, SGI, Targa, or TIFF (Mac only) file sequence can be produced. For image sequences, the file name given is that of the first frame; this is your chance to specify how many digits are needed and the starting value, for example, prev1.bmp or prevu0030.exr.
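The first-frame name fixes both the digit count and the starting frame number for the rest of the sequence. A sketch of how such a pattern expands (the helper is hypothetical, shown only to illustrate the naming rule):

```python
import re

def sequence_names(first_frame_name, count):
    """Expand an image-sequence pattern from the first frame's name:
    the trailing digit run sets the start number and zero-padding."""
    m = re.match(r'^(.*?)(\d+)(\.[^.]+)$', first_frame_name)
    prefix, digits, ext = m.group(1), m.group(2), m.group(3)
    start, width = int(digits), len(digits)
    return [f"{prefix}{start + i:0{width}d}{ext}" for i in range(count)]
```

So "prevu0030.exr" yields prevu0030.exr, prevu0031.exr, ... with four-digit padding, while "prev1.bmp" yields prev1.bmp, prev2.bmp, ... with no padding.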
SimulTrack Reference
The SimulTrack view shows the interior of multiple trackers at multiple frames throughout the shot simultaneously, allowing you to get a quick overview of a track and modify it quickly. It can be used not only for checking up on trackers, but for additional supervised-tracking workflows.
Basics
The SimulTrack view contains any number of tiles laid out in a grid pattern. Tiles are shown for selected trackers on their keyed frames, and on the current frame.
This tile corresponds to frame 142 of Tracker88. Clicking on the frame number will send the SynthEyes user interface to frame 142. The tracker name is listed at the bottom of the pane; clicking the name will select (only) this tracker. Shift-clicking the name will un-select the tracker, removing it from the SimulTrack display (useful when many are selected). Either way, clicking on the tracker name will also flash the tracker in the other viewports, to make it easier to find elsewhere. The parentheses "()" around the tracker name indicate that it is locked; use the right-click menu to change that. The underline below the tracker name shows the specific color that has been assigned to this tracker, if any. The wide rim indicates that there is a tracker position key on this frame, and the blue color means that this frame (142) is the current active frame in the main user interface. Normally only frames with keys are shown in the SimulTrack view (this can be a lot for auto-tracked trackers), so that the user-created keys can quickly be examined and modified during supervised tracking. The space between keys can be expanded to show intervening unkeyed frames by clicking on the gutter, or by using various right-click menu commands. Tip: clicking in the gutter or using a right-click expand menu operation makes a difference only on keyed tiles. The light and dark blue curves overlaid on the tile show the figure-of-merit (FOM) and 3-D error curves of the tracker between this key and the next. The curves can be enabled or disabled from the right-click menu. Dragging the interior of a key, or dragging the offset marker, has the same effect as it does within the mini-tracker view of the Tracker panel, setting a
position or offset key. Use control and shift to slow down or accelerate the movement of the tracker for more accurate or faster repositioning. Similar to, but not identical to, the mini-tracker view: shift-right-click within a tile to add or remove a position key on that frame. (In SimulTrack, a plain right-click brings up the right-click menu, so shift-right-click is needed; the mini-tracker view has no right-click menu, so a plain right-click is used there.) Clicking on the "S" at the upper-right of a tile will turn the strobe setting for that frame on or off. When strobing is enabled, the tile will sequence rapidly between the image of the prior key, the current frame, and the following key. Hovering over a tile will bring up a tooltip with statistics on the tracker.
Display Modes
The overall SimulTrack window shows many tiles simultaneously, in one of three different configurations, depending on the number of trackers selected in the SynthEyes user interface. Use the middle-mouse button or the scroll bar to pan the entire SimulTrack view. Use ALT-left (Command-Left) to scrub through the shot from inside the SimulTrack view. The middle-mouse scroll wheel will step through the frames of the shot. Use shift-scroll (command-scroll on Macs) to scroll the tiles instead.
Grid Mode
The SimulTrack view is in Grid mode whenever there are more selected trackers than can be shown in Rows mode. Only a single tile is shown for each tracker: the tile for the current frame. In the view above, notice that trackers 27R, 11R, 23R, 26R, 39R, and 44R are disabled on the current frame. All the other trackers are valid and keyed (they are auto-trackers and keyed on each frame). The sort order of the trackers in the SimulTrack view is determined by the Sort settings on the main View menu. There are 6 pink and 4 blue trackers at the beginning of the list, grouped together because the overall View/Group by Color option is turned on.
Row Mode
Here, the SimulTrack view is in Rows mode, with exactly 5 trackers selected. The light-blue background shows where the current tile is being displayed; panning the view can move that blue region. Trackers 41 and 88 are valid on the current frame, which is frame 142 as indicated by the blue rim on Tracker88 at center. Trackers 129, 159, and 198 have tiles in the middle section but they end or begin after the current frame. These trackers have been fine-tuned with keys every 8 frames, as can be seen.
Single Mode
The SimulTrack view is showing a single selected tracker, in this case a fine-tuned one. This image was captured as a new file was being opened, so that you can see the waiting graphic, which appears while the relevant shot image is being fetched (it takes 31 different images to display this single SimulTrack view). Normally, with adequate RAM on the machine, the wait graphic disappears rapidly as the frames are fetched. If the machine does not have enough RAM, however, you may not be able to see all the images simultaneously.
Right-Click Menu
Important: in several cases, the results of a menu operation depend on which tile is right-clicked to open the menu.
Home. Scrolls the tiled frame display to the top of the page.
To End. Scrolls the tiled frame display to the bottom of the page.
Show FOM. Shows the figure-of-merit curve for match-type supervised trackers within each tile: how well the reference pattern matches the image on a given frame. Larger values indicate problems.
Show Error. Shows the error curve within each tile: how far the 2-D tracker location is from the 3-D tracker location. Larger values indicate problems.
Select only valid. Starting from the set of selected and displayed trackers, unselects those that are not valid on the current frame, to reduce clutter.
Select same color. Selects all other trackers on the same camera/object that have the same color as the clicked-on tracker.
Stereo spouses. Instead of showing the selected trackers, the SimulTrack view shows the matching tracker on the other camera of the stereo pair. Open two SimulTrack views simultaneously to see both sides at once.
(Locked). Shows whether the clicked-on tracker is locked or not (though you can already tell if its name is enclosed in parentheses, i.e. "(Tracker1)"), and allows you to unlock or relock it.
Lock All. Locks all currently-selected trackers.
Unlock All. Unlocks all currently-selected trackers.
Is ZWT. Shows whether the clicked-on tracker is a zero-weighted tracker, and toggles that status.
Exactify. Sets a key on this frame of the clicked-on tracker, exactly at its solved 3-D location (as seen in the image).
Generate autokeys. Fills out additional keys at a spacing determined by the Key (every) setting of the tracker panel, based on the 3-D location, computed as if the tracker were a zero-weighted tracker (which it may be). Adjust these keys to refine the track.
Strobing this. Shows and toggles whether or not the clicked-on tracker is strobing at this frame.
Unstrobe all. Stops all trackers from strobing, on all frames.
Expanded this. Shows and toggles whether or not the clicked-on tracker is expanded (showing all the intervening non-keyed, tracked frames between this key and the next).
Close all. Closes (un-expands) all key frames on all trackers.
Remove menu ghosts. Some OpenGL cards do not redraw correctly after a pop-up menu has appeared; this control forces a delayed redraw to remove the ghost. On by default and harmless, but this lets you disable it. This setting is shared throughout SynthEyes and saved as a preference.
Strobe Submenu
Strobe all frames. Begins strobing on all displayed frames of the clicked-on tracker.
Unstrobe all frames. Stops strobing on all frames of the clicked-on tracker.
Strobe all on this frame.
Begins strobing on this frame of all selected trackers.
Unstrobe all on this frame. Stops strobing on this frame of all selected trackers.
Expand Submenu
Expand All. Expands all frames on all selected trackers.
Expand all frames. Expands all frames on the clicked-on tracker.
Close all frames. Closes all frames on the clicked-on tracker.
Expand all on this frame. Expands this frame on all selected trackers.
Close all on this frame. Closes this frame on all selected trackers.
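Operations such as Exactify, which keys a tracker at the image position of its solved 3-D location, depend on projecting a 3-D point into the 2-D image. The sketch below is only an illustration of that idea using a basic pinhole model; SynthEyes' actual conventions (axis handedness, image coordinate normalization, lens distortion handling) are assumptions here, and the function is hypothetical:

```python
import math

def project(point_cam, fov_deg, aspect=16 / 9):
    """Pinhole projection of a camera-space 3-D point to 2-D image
    coordinates spanning -1..1 across the horizontal field of view.
    Illustrative sketch only, not the SynthEyes camera model."""
    x, y, z = point_cam                 # z = distance along the view axis
    half = math.tan(math.radians(fov_deg) / 2.0)
    u = (x / z) / half                  # horizontal: -1..1 at the frame edge
    v = (y / z) / (half / aspect)       # vertical, scaled by the aspect ratio
    return u, v
```

With a 90-degree horizontal FOV, a point one unit to the side at one unit of depth lands exactly on the frame edge (u = 1).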
out" supervised trackers, stepping into the as-yet-untracked portion and typically using z-drop and z-drop-lock.
Preferences
Preferences apply to the user interface as a whole. Some preferences that are also found on the scene settings dialog, such as the coordinate axis setting, take effect only as a new scene is created; subsequently the setting can be adjusted for that scene alone with the scene settings panel. Other preferences are set directly from the dialog that uses them, for example, the spinal editing preferences. Apologies in advance: We concede that there are too many controls on this panel.
16 bit/channel (if available). Stores all 16 bits per channel from a file, producing a more accurate image, but consuming more storage.
After min. Spinner. The calculation-complete sound will be played if the calculation takes longer than this number of minutes.
Anti-alias curves. Checkbox. Enables anti-aliasing and thicker lines for curves displayed by the graph editor. Easier to read, but turn it off if it is too slow for less-powerful OpenGL cards.
Auto-switch to quad. Controls whether SynthEyes switches automatically to the quad viewport configuration after solving. Switching is handy for beginners but can be cumbersome in some situations for experts, so you can turn it off.
Axis Setting. Selects the coordinate system to be used.
Back Plate Width. Width of the camera's active image plane, such as the film or imager.
Scene Settings
The scene settings, accessed through Edit/Edit Scene Settings, apply to the current scene (file). The perspective-window sizing controls are found here. Normally, SynthEyes bases the perspective-window sizes on the world size of the active camera or object. The resulting actual value of the size will be shown in the spinner, and no key will be indicated (a red frame around the spinner). If you change the spinner, a key frame will be indicated (though it does not animate). After you change a value, and the key frame marker appears, it will no longer change with the world size. You can reset an individual control to the factory default by right-clicking the spinner. There are several buttons that transfer the sizing controls back and forth to the preferences: there is no separate user interface for these controls on the Preferences panel. If a value has not been changed, that value will be saved in the preferences, so that when the preferences are applied (to a new scene, or recalled to the current scene), unchanged values will be the default factory values, computed from the current world size.
Axis Setting. Selects the coordinate system to be used.
Camera Size. 3-D size of the camera icon in the perspective view.
Far Clip. Far clip distance in the perspective view.
Inter-ocular. Spinner. Sets the inter-ocular distance (in the unitless numbers used in SynthEyes). Used when the perspective view is not locked to the camera pair.
Key Mark Size. Size of the key marks on camera/object seed paths.
Light Size. Size of the light icon in the perspective view.
Load from Prefs. Loads the settings from the preferences (the same as what happens when a new scene is created).
Mesh Vertex Size. Size of the vertex markers in the perspective view, in pixels, unlike the other controls here.
Near Clip. Near clipping plane distance.
Object Size. Size of the moving-object icon in the perspective view.
Orbit Distance. The distance out in front of the camera about which the camera orbits, on a camera rotation when no object or mesh is selected.
Keyboard Reference
SynthEyes has a user-assignable keyboard map, accessed through the Edit/Edit Keyboard Map menu item. (Preview: use the Listing button to see them all.) The keyboard manager lets you set up assignments of keys to menu items, various button operations, and Sizzle scripts such as tools, importers, and exporters.
The first list box shows a context (see the next section), the second a key, and the third shows the action assigned to that key (there is a NONE entry also). The Shift, Control, and Alt (Mac: Command) checkboxes are checked if the corresponding key must also be down; the panel shown here indicates that a Select All operation will result from Control-A in the Main context. Because several keys can be mapped to the same action, if you want to change Select All from Control-A to Control-T, say, you should set Control-A back to NONE, and when configuring Control-T, select the T, then the Control checkbox, and finally change the action to Select All.
Time-Saving Hint: after opening any of the drop-down lists (for context, key, or action), hit a key to move to that part of the list quickly.
The Change to button sets the current key combination to the action shown, which is the last significant action performed before opening the keyboard manager. In the example, it would be Reset Preferences. Change to makes it easy to set up a key code: perform the action, open the keyboard manager, select the desired key combination, then hit Change to. The Change to button may not always pick up a desired action, especially if it is a button; use the equivalent menu operation instead. You can quickly remove the action for a key combination using the NONE button. Changes are temporary for this run of SynthEyes unless the Save button is clicked. The Factory button resets the keyboard assignments to their factory defaults. Listing shows the current key assignments; see the Default Key Assignments section below.
Key Contexts
SynthEyes allows keys to have different functions in different places; they are context-dependent. The contexts include:
- The main window/menu
- The camera view
- Any perspective view
- Any 3-D viewport
- Any command panel

There is a separate context for each command panel. Each context has a different set of applicable operations; for example, the perspective window has different navigation modes, whereas trackers can only be created in the camera window. When you select a context on the keyboard manager panel, only the operations available in that context will be listed. Here comes the tricky part: when you hit any key, several different contexts might apply. SynthEyes checks the contexts in a particular order, and the first context that provides an action for that key is the context and action that is applied. In order, SynthEyes checks:
1. The selected command panel context
2. The context of the window in which the key was struck
3. The main window/menu context
4. The context of the camera window, if it is visible, even if the cursor was not in the camera window
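The priority order above is a straightforward first-match lookup. A minimal Python sketch, with hypothetical context and action names chosen purely for illustration:

```python
# Hypothetical sketch of the context-priority lookup described above.
# Context names and bindings are illustrative, not SynthEyes's actual ones.
def resolve_action(key, bindings, panel_ctx, window_ctx, camera_visible):
    """Return (context, action) from the first context providing a binding,
    checked in SynthEyes's documented order."""
    order = [panel_ctx, window_ctx, "Main"]
    if camera_visible:
        order.append("Camera")      # rule 4: camera view, even at a distance
    for ctx in order:
        action = bindings.get((ctx, key))
        if action is not None:
            return ctx, action
    return None, None

bindings = {("Camera", "g"): "Create Tracker", ("Main", "g"): "Go To Frame"}

# Key struck in a perspective view while the solver panel is open:
# no solver or perspective binding, so the main context wins.
assert resolve_action("g", bindings, "Solver", "Perspective", True) == \
    ("Main", "Go To Frame")

# Without a main binding, the visible camera view supplies the action,
# even though the key was struck in the perspective view.
del bindings[("Main", "g")]
assert resolve_action("g", bindings, "Solver", "Perspective", True) == \
    ("Camera", "Create Tracker")
```

The second assertion demonstrates the "action at a distance" behavior of the 4th rule: the camera window's binding fires even though the cursor was elsewhere.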
This is a bit complex but should allow you to produce many useful effects. Note that the 4th rule has an action-at-a-distance flavor that might surprise you on occasion, though it is generally useful. You may notice that some operations appear in both the main context and the camera, viewport, or perspective contexts. This is because the operation appears on the main menu and also on the corresponding right-click menu. Generally you will want the main context. Keys in the command-panel contexts can only be executed when that command panel is open. You cannot access a button on the solver panel when the tracker panel is open, say. The solver panel's context is not active, so the key will not even be detected; the solver panel functionality is unavailable when it isn't open, and changing settings on hidden panels makes for tricky user interfaces (though there are some actions that basically do this).
The listing shows the current assignments sorted both by action name and by key, so you can find the key for a given action, or see which keys are unused. The listing also shows the available actions, so you can see what functions you can assign a key to. All menu actions can be assigned, as can all buttons, check boxes, and radio boxes on the main control panels, plus a variety of special actions. You will see the current key assignment listed after menu items and in the tooltips of most buttons, checkboxes, and radio buttons on command panels. These update automatically when you close the keyboard manager.
Fine Print
Do not assign a function to plain Z or to the apostrophe/double-quote key. These keys are used as an extra click-to-place shift key in the camera view, and any Z or apostrophe keyboard operation would be performed over and over while the key is held down for click-to-place. The Reset Zoom action does two somewhat different things: with no shift key, it resets the camera view so the image fills the view. When the shift key is depressed, it resets the camera view so that the image and display pixels are 1:1 in the horizontal direction, i.e. the image is full size. Consequently, you need to set up your key assignments so that the fill operation is un-shifted, and the 1:1 operation is shifted. The same applies to other buttons whose functionality depends on the mouse button: if you shift-click a button to do something, the function performed will still depend on the shift setting of the keyboard accelerator key. There may be other gotchas scattered through the possible actions; be sure to verify their function in testing before trying them in your big important scene file. You can check the undo button to verify the function performed, for example. The My Layout action sets the viewport configuration to one named My Layout so that you can quickly access your own favorite layout.
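The difference between the two Reset Zoom modes comes down to how the horizontal zoom factor is computed. A small sketch under assumed conventions (a zoom factor of 1.0 meaning one image pixel per display pixel; the real implementation is not documented here):

```python
# Hypothetical sketch of the two Reset Zoom behaviors described above.
# Assumes zoom 1.0 == one image pixel per display pixel (not SynthEyes's
# actual internal convention, which is undocumented in this manual).
def reset_zoom(view_width_px, image_width_px, shifted):
    """Return the horizontal zoom factor for the camera view."""
    if shifted:
        return 1.0                            # shifted: 1:1, image full size
    return view_width_px / image_width_px     # un-shifted: scale image to fill view

# A 1920-pixel-wide image in a 960-pixel-wide view:
assert reset_zoom(960, 1920, shifted=False) == 0.5   # half size, fills the view
assert reset_zoom(960, 1920, shifted=True) == 1.0    # full size, 1:1 pixels
```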
You can quickly access this folder from SynthEyes's File/User Data Folder menu item.
The preferences data and viewport layouts are also stored in the prefs11.dat and layout11.ini files in this folder. Note that the Application Data folder may be hidden by Windows Explorer; there is a Folder Option to make it visible.
To add a new viewport configuration, do the following: open the manager, and select an existing similar configuration in the drop-down list. Hit the Duplicate button, and give your new configuration a name. If you created a new Custom layout in the main user interface by changing the panes, and you'd like to keep that layout for future use, you can give it a name here, so that it is not overwritten by your next Custom layout creation. Tip: in the main user interface, the 7 key automatically selects a layout called My Layout, so you can reach it quickly if you use that name. Inside the view manager, you can resize the viewports as in the main display, by dragging the borders (gutters). If you hold down shift while dragging a border, you disconnect that section of the border from the other sections in the same row or column. Try this on a quad viewport configuration and it will make sense. If you double-click a viewport, you can change its type. You can split a viewport into two, either horizontally or vertically, by clicking in it and then the
The selector at top left selects the script bar being edited; its file name is shown immediately below the script name, with the list of script buttons listed under that. Use the New button to create a new script bar; you will enter a name for your script bar, then select a file name for it within your personal scripts folder. You can also use the Save As button to duplicate a script bar with a new name, use Chg. Name to change the human-readable name (not the file name), or you can Delete a script bar, which deletes the script bar's file from disk (but not any of the scripts). Each button has a short name, shown in the list, in addition to the longer full script name and file name, both of which are shown when an individual button is selected in the list. You can double-click a button to change its short name, use the Move Up and Move Down buttons to change the order, or click Remove to remove a button from the script bar (this does NOT delete the script from disk). To add a button to a script bar, select the script name or menu command in the selector at the bottom, then click the add button. You will be able to select or adjust the short name as the button is added.
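The relationships described above (a bar with a human-readable name and its own file, holding an ordered list of buttons, each carrying a short name plus the underlying script's full name and file) can be pictured as a small data model. A Python sketch, with all file names and field names hypothetical:

```python
# Hypothetical data model mirroring the script-bar description above.
# Field and file names are illustrative, not SynthEyes's actual format.
from dataclasses import dataclass, field

@dataclass
class ScriptButton:
    short_name: str    # shown on the toolbar button (double-click to change)
    script_name: str   # the script's full human-readable name
    file_name: str     # the script file on disk (never deleted by the bar)

@dataclass
class ScriptBar:
    name: str                                       # Chg. Name edits this
    file_name: str                                  # bar's file; Delete removes it
    buttons: list = field(default_factory=list)     # ordered, Move Up/Down reorders

    def add(self, btn):
        """Add a button to the end of the bar."""
        self.buttons.append(btn)

    def remove(self, short_name):
        """Remove a button from the bar; the script file itself is untouched."""
        self.buttons = [b for b in self.buttons if b.short_name != short_name]

bar = ScriptBar("My Tools", "mytools.ini")          # hypothetical file name
bar.add(ScriptButton("Lens", "Lens Workflow", "lens_workflow.szl"))
bar.remove("Lens")
assert bar.buttons == []
```

Note the asymmetry the manual calls out: deleting a bar removes the bar's own file, while removing a button never touches the script file it points to.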
Support
Technical support is available through techsupport@ssontech.com. A response should generally be received within 24 hours except on weekends. SynthEyes is written, supported, and copyright 2003-2010 by Andersson Technologies LLC.

This software is based in part on the work of the Independent JPEG Group, http://www.ijg.org. Based in part on the TIFF library, http://www.libtiff.org, Copyright 1988-1997 Sam Leffler and Copyright 1991-1997 Silicon Graphics, Inc. Also based in part on the LibPNG library, by Glenn Randers-Pehrson and various contributing authors. OpenEXR library Copyright (c) 2004, Industrial Light & Magic, a division of Lucasfilm Entertainment Company Ltd. Portions contributed and copyright held by others as indicated. All rights reserved. Neither the name of Industrial Light & Magic nor the names of any other contributors to this software may be used to endorse or promote products derived from this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

All of the contributors' efforts are greatly appreciated.