
LaVision

We count on Photons
Product-Manual
Imaging Tools
Item-Number(s): 1005xxx
Product-Manual for DaVis 8.2
LaVision GmbH, Anna-Vandenhoeck-Ring 19, D-37081 Göttingen
Produced by LaVision GmbH, Göttingen
Printed in Germany
Göttingen, September 23, 2013
Document name: 1003012_ImagingTools_D82.pdf
Contents
1 Introduction
2 Scaling
2.1 Introduction
2.2 Scaling Dialog
2.3 Basic layout
2.4 Define scale
3 Calibration
3.1 Introduction
3.2 Calibration dialog
3.3 Basic layout
3.4 1 camera or 2 cameras (mapped, e.g. stereo)
3.5 Limitations and requirements for pinhole / polynomial mapping function
3.6 Define Origin, Maintain Calibration
3.7 Troubleshooting
3.8 Stitching in a side-to-side camera setup
3.8.1 Preparation
3.8.2 Calibration
3.8.3 Stitching images
3.8.4 Stitching vector fields
4 Mask Definition
5 Overlay Editor
6 Color Palettes
6.1 Edit Palette
6.2 Adapt existing palette
7 Camera Setup Wizards
7.1 Focal length calculator
7.2 dt / velocity calculator
7.3 Synthetic image generator 2D
8 Image correction and distortion
9 Time-super-sampling
10 Correlation Map
11 X/Y Plot
11.1 Window Manager
11.2 Plot mode
11.3 Vector fields
11.3.1 Y-axis
11.3.2 X-axis
11.4 Images
11.4.1 Y-axis
11.4.2 X-axis
11.5 Store mode
11.6 Processing
12 About Filter Operations
12.1 Filter Operations
12.2 Smoothing Filter
12.2.1 Smoothing 3x3
12.2.2 Gaussian LowPass
12.2.3 Median Filter
12.3 Sobel Filter
12.4 Compass Filter
12.5 Laplace Filter
12.6 Sharpening Filter
12.7 Other Special Filters
12.7.1 Erosion and Dilation Filter
12.8 Fourier Transform (FFT)
12.8.1 Basic FFT and Inverse FFT on Images
12.8.2 FFT-Filter
12.8.3 Using CL to design your own filter
12.8.4 The Art of Filter Design
12.8.5 Mathematical Operation on FFTs
12.8.6 FFT Data Storage
12.9 Correlation and Convolution
12.9.1 Autocorrelation
12.9.2 Convolution
12.9.3 Cyclic versus Non-Cyclic Computations
12.9.4 Literature
12.10 Nonlinear Filter
12.10.1 The Math behind NonlinearFilter
12.11 Dialog for FFT
12.12 Dialog for Linear Filter
12.12.1 User Filter
Index
1 Introduction
The Imaging Tools manual describes different dialogs that are available in the basic DaVis software package.
The largest part of this manual concerns perspective image correction. The Calibration wizard chapter explains how to generate a calibration. The Image correction and distortion chapter describes how to apply the calculated calibration to a single image. This functionality is also available in the Processing dialog.
Masks are often used to restrict the area of interest during processing. They are defined in the Mask dialog. Overlays are painted above an image or vector view. The X/Y Plot dialog allows profiles to be extracted from a single vector field or from multiple vector fields.
The Filter operations chapter explains the different possibilities of image processing using linear or non-linear filters.
2 Scaling
2.1 Introduction
This tutorial is based on the DaVis 8.0 software package and describes the Scaling dialog. In all cases the term scaling refers to the calibration process. Doing a calibration is important because the results should be shown in scaled units representing the true (world) dimensions. So the image scale in pixel/mm should be determined.
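The role of such a scale can be sketched in a few lines of Python. This is an illustration only, not DaVis code; the function name and all numbers are hypothetical:

```python
# Illustrative sketch (not part of DaVis): a pixel/mm scale plus an origin
# is all that is needed to map pixel positions to world (mm) coordinates.

def pixel_to_mm(px, py, scale_mm_per_px, origin_px):
    """Map a pixel position to world coordinates in mm."""
    ox, oy = origin_px
    return ((px - ox) * scale_mm_per_px, (py - oy) * scale_mm_per_px)

# Hypothetical example: 0.05 mm/pixel, origin at pixel (512, 512).
# A point 100 pixels right of the origin lies 5 mm from it in x.
x_mm, y_mm = pixel_to_mm(612, 512, 0.05, (512, 512))
print(x_mm, y_mm)  # 5.0 0.0
```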
2.2 Scaling Dialog
A successful scaling procedure will store all acquired data and the scaling information under the Properties folder of the project in the subfolder Scaling. The Scaling button in the project toolbar opens the scaling dialog.
2.3 Basic layout
The scaling dialog includes the following elements:
1 In the tool bar the scaling process can be aborted and the dialog can be closed. With the buttons next or previous the scaling step can be changed. This adapts the currently active window (zoom factor, color mapping, color palette etc.). When all steps are finished, the next button will change to become a finish button.
2 The center dialog area on the left gives access to setting check boxes,
entering numbers or action buttons.
3 The center dialog area on the right displays the hardware settings.
4 Below that dialog the window manager is located with selection boxes
to show the image for a specific camera / view combination in the
working window or the reference window. The reference window
might be switched on and off.
5 One or two image windows on the bottom show the camera images for a specific camera / view combination. The image shown in the working window is used for all user interactions. The reference window is only for convenience, to compare two images for different cameras and/or views, and might be switched off without any drawbacks.
To get a better view of the images the dialog elements can be minimized by pressing the Maximize / Minimize button.
2.4 Define scale
The dialog looks like the following screenshot:
The goal of the current step is to conveniently define the mm / pixel scale of the image. This dialog requires two different actions:
1. image acquisition or import,
2. the interactive definition of the mm / pixel scale by entering numbers
directly or by definition with mouse clicks into the image.
Image acquisition
For the image acquisition the connected cameras are read out. This is started with the buttons Take image and Grab.
The cameras to be used for image acquisition must be selected in the general device interface (GDI) dialog on the right. This dialog is visible only in the minimized images / maximized dialog setting - see above. The GDI is also used to set up the exposure times, external trigger, (laser) light sources, step motor positions and other hardware settings important to the image acquisition.
Changing the average value in the text box on the left to any value other than 1 will result in the averaging of the specified number of images, which improves image quality under poor lighting conditions. Please make sure that neither the camera nor the calibration sample is moving!
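The benefit of averaging can be sketched with a few lines of Python (an illustration only, not DaVis code): uncorrelated noise in the average of N frames shrinks roughly with the square root of N.

```python
import random
import statistics

# Simulate one pixel with Gaussian noise over many frames and average it.
# All values here are hypothetical.
random.seed(0)
true_value = 100.0   # "true" intensity of the pixel
noise_sigma = 10.0   # per-frame noise level
n_frames = 100

average = statistics.fmean(
    random.gauss(true_value, noise_sigma) for _ in range(n_frames)
)

# The residual error is typically around noise_sigma / sqrt(n_frames) = 1,
# an order of magnitude better than a single frame.
print(abs(average - true_value))
```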
Import
One or more previously recorded files can be imported from hard disk by
pressing the Load button. Either load single images or whole DaVis data
sets. This opens a modal dialog that shows a preview of the image in
question (see screenshot below).
Here the image frames to import must be selected. For images originating from DaVis the check box can be left unchecked. This reads out the default camera number which is stored in each image (and shown for the current image). For imported bitmaps (BMP or TIF files) the camera to which the image corresponds must be defined. If the stored camera number should be overwritten, another camera number might be given. The selection box will turn red (instead of green) if the program recognizes a discrepancy.
Furthermore a mode must be specified: append the selected file(s) to the selection of images, replace previously imported frames for the selected camera number, or start all over, deleting any previously present images and replacing them with the new selection.
Definition of mm/pixel scaling
Click the left button and follow the instructions for interactive selection with the mouse: click two reference points in the image whose real distance is known.
The dialog consists of three parts:
The top group box shows the clicked pixel coordinates.
In the center group box a mode has to be selected: Just give the
horizontal distance between the two points, just give the vertical dis-
tance, or define the (diagonal) point to point distance in mm.
The bottom group box contains the text items to enter the (known)
distance in mm.
After pressing the return key or OK the pixel / mm scaling is calculated and
any present dewarping function is reset to unity (no dewarping).
In any case the first clicked point will become the new origin of the coor-
dinate system or the specified reference coordinate point. When the right
Scale button has been clicked, a dialog appears where the scaling can be
defined directly:
Either enter the already known pixel / mm scale or give the position of
two reference points and the point to point distance in mm between them.
Again after accepting the new scale any previously defined image dewarp-
ing function will be reset to no dewarping.
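The computation behind both variants can be sketched as follows (an illustration only; the function name and numbers are not from DaVis):

```python
import math

def scale_from_points(p1_px, p2_px, known_distance_mm):
    """mm-per-pixel scale from two reference points a known distance apart."""
    return known_distance_mm / math.dist(p1_px, p2_px)

# Hypothetical example: two marks clicked 400 pixels apart that are known
# to be 20 mm apart on the calibration target.
print(scale_from_points((100, 100), (500, 100), 20.0))  # 0.05 mm/pixel
```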
Definition of new origin / reference point
If the calibration is fine but the origin is not, a new origin can be redefined
for the (already calibrated) image. Or the value of any other reference point
(not necessarily the origin) can be set.
Click the left button and follow the instructions for interactive selection with the mouse. Click into the image to specify the new origin or reference point.
Or enter numbers directly to define the position and coordinates in the
image of the new origin / reference point by clicking on the right button
shown above. This opens a dialog to specify a pixel position and the (new)
corresponding mm coordinates.
At the end of both procedures the pixel / mm scaling is maintained and the
new origin or reference point is changed to the new value. Any present
dewarping function is set to unity (no dewarping).
Reset calibration to pixel scaling
With the button Reset the new scaling can be removed and replaced by the old settings. The pixel/mm scaling is removed as well as any present dewarping function, and the origin returns to the upper left (default).
When the achieved scaling is fine, it has to be accepted with the button Finish in the tool bar. The image dewarping function and the pixel/mm scaling are stored in the project. The following dialog opens:
Select the first choice to open a new project with the new scaling, leaving the current project untouched.
Select the second option to overwrite the scaling of the currently active project. Be aware that previously recorded images and their derived results might not be consistent with results that will be calculated with the new scaling.
Select the third choice to overwrite the scaling of the currently active project and scale all previous recordings present in the root level of the project to be consistent with the new calibration. Be aware that results which have been derived before the changed calibration might not be consistent with results that will be calculated with the new calibration.
Select the fourth choice if everything should be discarded. Then the old scaling or calibration will be restored. Note that the whole scaling procedure must be repeated.
The scaling dialog closes at the end and DaVis returns to the project
manager.
3 Calibration
3.1 Introduction
This tutorial is based on the DaVis 8.0 software package (or later versions) and describes the Calibration Wizard dialog. In all cases the term calibration refers to the image calibration process. A calibration is important for all processing steps because:
The results should be shown in scaled units representing the true (world) dimensions. So the image scale in pixel/mm should be determined.
For oblique viewing setups or views through curved glass windows the images show some image distortions due to perspective projection (and possibly inherent camera lens distortions). This can be corrected via the appropriate calibration.
For stereo measurements an internal representation of the geometrical setup of both cameras relative to the sample is needed. This is determined during the calibration.
There are some prerequisites for the calibration:
Knowledge to operate the hardware in order to collect the calibration images must be available.
If the camera(s) are focussed on the sample surface, it is a simple task to move the calibration plate into the focal plane - and when it is removed, you are immediately ready to start acquiring images. If the calibration is done first without being careful about the plate position, the system might be unfocussed on the sample, necessitating refocussing and recalibration.
A calibration can be performed after the sample images have been acquired (provided the camera system has not been altered).
3.2 Calibration dialog
A successful calibration procedure will store all acquired data and the cali-
bration information under the Properties folder of your project in the folder
Calibration.
A click on the calibration entry in the project manager shows the parameters of the dewarping function, including the RMS of the fit, to check the quality of the calibration.
A new calibration (imported or created with the wizard) or any additional adaptation of the calibration (e.g. setting a new origin) will store the previously active calibration in the folder Properties/Calibration History before overwriting the Properties/Calibration files. So it is possible to restore any previous calibration done in your project: importing a calibration is possible via a right mouse click on the project manager tree item of the calibration. The Calibration History folder can be deleted at any time to save hard disk space (right mouse click: delete), but it will not be possible to restore old calibrations afterwards. This is not possible for the active Calibration folder in the project manager; the Calibration folder of a project should never be deleted manually!
The calibration button in the project toolbar opens the calibration dialog.
3.3 Basic layout
The example below gives the basic outline of the calibration dialog.
It consists of the following elements:
1 In the tool bar the calibration process can be aborted and the dialog can be closed. With the buttons next or previous the calibration step can be changed. This adapts the currently active window (zoom factor, color mapping, color palette etc.). When all steps are finished, the next button will change to become a finish button.
2 Alternatively the corresponding step button in the list on the right can
be used to go to the corresponding step directly (list labeled You are
here:). Please note that it might not be possible to go to a later step
because you may have to make additional selections or perform some
actions (e.g. clicking of reference marks) in an earlier step.
3 The center dialog area on the left where all interactions are done by
setting check boxes, entering numbers or pressing buttons.
4 The center dialog area on the right showing all necessary calibration steps to be done to create a new calibration. It also indicates the actual step and gives easy access to all other steps, e.g. going back several steps or skipping various already done steps.
5 Below that dialog the window manager gives some selection boxes to show the image for a specific camera / view combination in the working window or the reference window. The reference window might be switched on and off.
6 One or two image windows on the bottom display camera images for a specific camera / view combination. The image shown in the working window is used for all user interactions. The reference window is only for convenience, to compare two images for different cameras and/or views. It might be switched off without any drawbacks.
The descriptions in the following two sections refer to (a) a single camera StrainMaster Portable 2D setup viewing through air only, and perpendicular to the sample face. In this case a simple geometric scaling can be made. The second section shows the common steps for a single camera viewing obliquely or through distorting media, or a 2 camera stereo system like the StrainMaster Portable 3D. The actual walkthrough shows the steps for a 2 camera stereo setup.
3.4 1 camera or 2 cameras (mapped, e.g. stereo)
This section shows the common steps for a single camera viewing obliquely or through distorting media, or a 2 camera stereo system like the StrainMaster Portable 3D. The actual walkthrough shows the steps for a 2 camera stereo setup, but the procedure is the same for both situations.
Step 1 : Define experimental setup
For stereo systems the fifth option needs to be selected. Press the Next
button in the toolbar to go to step 2.
Step 2 : Define coordinate system
In this step define the coordinate system and the number of views of your calibration plate in different places. It is recommended to choose 3 views for higher accuracy (although 1 view will suffice in general).
If it is not possible to put the calibration plate into the place where the
sample under study will be located, the offset might be entered here (per-
pendicular to the calibration plate). This offset will be used to redefine the
origin of the calibration later.
Press the Next button in the toolbar to go to step 3.
Step 3 : Select used calibration plate
In this step the calibration plate has to be selected. If a calibration plate
from LaVision is used, it is enough to select the plate type labeled on
the plate e.g. Type 10. This will set all plate parameters to the correct
values (see first screenshot above).
In case of a custom calibration plate a number of parameters (shape, size and distance between marks, distance between planes) has to be provided. If the cameras are set to look at the front and the back side of the calibration plate, switch on the use front and backside of plate checkbox. Step 4 specifies the sides of the calibration plate at which the cameras are looking.
Press the Next button in the toolbar to go to step 4.
Step 4 : Image acquisition
In the fourth step the necessary images of the calibration plate for all cam-
eras and all views have to be acquired. Only then proceeding to the next
21
3 Calibration
steps is allowed. Previously recorded images might be imported from hard
disk via button Import. New images might be directly acquired using the
connected hardware. This is done via buttons Take image or Grab in the
left part of the dialog .
To the left of those buttons the number of images must be specified. Any value greater than 1 will result in the averaging of the given number of images to improve image quality (noise reduction!) under poor lighting conditions. Please make sure that neither the camera nor the calibration sample is moving during the averaging process!
In the area on the right hardware settings can be changed in the general
device interface such as switching on/off cameras or the light source,
changing exposure time / light pulse duration or the trigger mode between
internal trigger or external random trigger (which allows the use of the
manual user trigger).
For views that already contain images the selection box entries view 1 (of 3) are marked with (ok). If a camera contains valid images for all views, that selection box turns green. If all view entries of all coordinate systems are green, then the next step button will be enabled. Pressing the button with the red cross will delete the camera image of the selected view. Press the Next button in the toolbar to go to step 5.
To get a better view of the images the dialog elements might be minimized by pressing the Maximize / Minimize button. This switches the dialog to the mode shown in the following figure:
If front and back side of the plate are used (see step 3), the viewing direction has to be specified here for each camera. Otherwise this option is disabled.
Press the Next button in the toolbar to go to step 5.
Step 5 : Mark definition
In the fifth step three start marks must be defined as a seed for the mark
search. To start the process either
click All cameras/views, which automatically will show the images for all camera / view combinations one after another to (re)define the start marks for all those combinations,
press the button Shown camera/views to (re)define the start marks in the currently visible image in the working window,
press a table entry in the table to the right corresponding to a camera / view combination to (re)define the start marks for that combination.
Instructions will appear in the Instructions field of the dialog to guide you through the clicking process:
1. click a start reference mark (the same in all camera images and
views!),
2. click the neighbor mark to the right,
3. click the mark to the top of the first selected mark.
Please make sure that the selected marks are located on the front plane
of a 3D 2-level calibration plate!
After clicking all 3 marks successfully the table entry for that camera/view combination will switch to okay. The clicking process may be stopped at any time by pressing the Stop button in the tool bar or by pressing the ESC key on the keyboard. If all marks were defined successfully in all images (all table entries show an okay), press the Next button in the toolbar to go to step 6.
Step 6 : Finding all marks
In the sixth step the software finds all other marks.
Start this automatic process by pressing the button Start search. Depending on the image quality and the speed of the computer this might require several minutes. The progress of the search (number of marks found in each camera/view) is shown periodically in the table to the right. Usually it is enough if 20-30 marks could be identified. Please be aware that due to the nature of the polynomial mapping functions the fitted marks have to cover the whole camera image. For the pinhole calibration (which should normally be used for StrainMaster Portable) it is enough if only parts of the image are covered with marks. To restrict the search area the user can define a polygon mask by pressing the relevant button in the dialog. Note that the key "c" must be pressed to close the polygon. If rogue marks are detected, these can be deleted individually.
Press the Next button in the toolbar to go to step 7.
Step 7 : Fit mapping function
In the seventh step the calibration mapping function is fitted to the marks found in each image. Available functions are pinhole calibration and polynomial 3rd order calibration. Please note that in case of more than one view, if the views are not coplanar and equidistant, the polynomial calibration is not available.
To start the calculation process press the button Start calibration. The
calibration process is automatic and requires no user interaction. The av-
erage deviation of the dewarped mark positions to the ideal regular grid is
shown in the table to the right for each image. For a 2 Megapixel camera a
value lower than 1 pixel is good. A value lower than 0.3 pixel is excellent.
Values higher than 2 pixel are questionable.
Cameras with a higher number of pixels will produce higher errors: e.g. a
camera with twice as many pixels in x and y (this would be an 8 Megapixel
camera) will see the same absolute mm deviation of the pinhole model
mapping to the identified mark positions twice as large in terms of its pixels.
So larger errors are acceptable here.
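This scaling rule can be stated compactly in Python (an illustrative sketch, not DaVis code; the threshold values are the ones quoted above for a 2 Megapixel camera):

```python
import math

# Fit-error thresholds (in pixels) quoted above for a 2 Megapixel camera.
REF_MEGAPIXELS = 2.0
REF_THRESHOLDS_PX = {"excellent": 0.3, "good": 1.0, "questionable": 2.0}

def scaled_thresholds(megapixels):
    """Scale the per-pixel thresholds to another sensor resolution.

    Twice as many pixels in x and y means 4x the megapixels, so the same
    absolute mm deviation appears sqrt(ratio) times larger in pixels.
    """
    factor = math.sqrt(megapixels / REF_MEGAPIXELS)
    return {name: t * factor for name, t in REF_THRESHOLDS_PX.items()}

# 8 Megapixel camera: all thresholds double, e.g. "good" becomes < 2 pixels.
print(scaled_thresholds(8.0))
```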
Press the Next button in the toolbar to go to step 8.
Step 8 : Evaluation of corrected images
This last step is used to evaluate the fit results visually.
For that purpose all camera / view / z-level combinations are dewarped. An image is also created in which all corrected images are overlaid (summed up) to check that all corrected images coincide. In addition the ideal regular grid of marks is overlaid in red. This grid should pass through the center of all marks.
The result can be checked:
in single camera images, whether the marks fall onto the ideal regular grid
whether the marks in the summed up images of different cameras (and one view) coincide for the z=0 mm position
whether the marks in the summed up images of different views for one camera coincide for the z=0 mm position
whether the marks in the summed up images of all views for all cameras coincide for the z=0 mm position
In case of a 3D 2-level calibration plate the same can be checked for the position of the second plane, too. Please note that if the marks coincide for one level, the marks for the other level will be mismatched when superimposing different cameras, due to their different viewing directions (parallax effect).
This parallax can be used to check the 3D resolution of your experiment later. Let the distance dz between the 2 levels be 1 mm, and observe the mismatch in pixels for the mismatched plane - which we will assume here to be 5 pixels. As the deformation calculation can resolve movements of 0.05 pixel on the CCD images, shifts are detectable as low as 0.05 pixel / 5 pixel x 1 mm = 10 µm in z-direction for the current camera setup. You can increase this resolution by increasing the angular separation of your system.
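This estimate can be written as a one-line calculation (illustrative only; the numbers are the example values used above):

```python
def z_resolution_mm(min_shift_px, parallax_px, dz_mm):
    """Smallest detectable z-movement given the observed level parallax."""
    return min_shift_px / parallax_px * dz_mm

# 0.05 pixel detectable in-plane shift, 5 pixel parallax between the two
# plate levels, 1 mm level spacing -> about 0.01 mm = 10 µm in z.
print(z_resolution_mm(0.05, 5.0, 1.0))
```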
The table Fit results to the right summarizes the most important fit parameters.
In case of a pinhole calibration the Show 3D button opens the following window. This window gives a 3D representation of the camera positions relative to the calibration plate. This is a consistency check, so please check whether this resembles the physical camera setup (orientation, angles and rough distances).
Finish calibration
When the achieved calibration is fine, press the Finish button in the tool bar (see image aside) to accept the calibration. Now the following dialog opens:
Select the first choice to open a new project with the new calibration, leaving the current project untouched.
Select the second option to overwrite the calibration of the currently active project. Be aware that previously recorded images and their derived results might not be consistent with results that will be calculated with the new calibration.
Select the third choice to overwrite the calibration of the currently active project and scale all previous recordings present in the root level of your project to be consistent with the new calibration. Be aware that results that have been derived before the changed calibration might not be consistent with results that will be calculated with the new calibration.
Select the fourth choice if everything should be discarded. This reactivates the old calibration and removes all steps of the calibration procedure.
The calibration dialog will close and DaVis returns to the project manager.
3.5 Limitations and requirements for pinhole / poly-
nomial mapping function
A pinhole calibration covers the whole volume visible to the cameras, but needs undisturbed optical access through air. Furthermore it is not necessary to place the calibration plate so that it covers the whole image. The pinhole calibration works with a single plane of marks, but accuracy greatly increases if you use a 2-level plate (2 planes separated by a known distance dz, like the one provided with the LaVision systems). It is recommended to have 3 views with tilted calibration planes to minimize mapping errors over the volume.
The generic polynomial 3rd order function works with arbitrary distortions contained in the images, e.g. because the optical access to the experiment is blocked by a glass window that adds extra distortion. It is only possible to calibrate a volume with the generic polynomial 3rd order function using 2 or more equidistant coplanar planes. The LaVision 2-level plate will suffice. But as the distortion outside the volume covered by the calibration plate is extrapolated, extra errors in the results, dependent on the distance to the calibration plate, are possible. If using multiple views, the calibration plate must be put in equidistant and coplanar positions. It is advisable to use a translation stage to do this.
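The form of such a polynomial mapping function can be sketched as follows. This is an illustration only: the coefficient values are purely hypothetical, and DaVis determines its own coefficients by fitting to the detected marks.

```python
def poly3_map(px, py, coeffs):
    """Evaluate a generic polynomial of total order 3 at pixel (px, py).

    coeffs maps exponent pairs (i, j) with i + j <= 3 to coefficients;
    the result is one world coordinate, the sum of c_ij * px**i * py**j.
    """
    return sum(c * px**i * py**j for (i, j), c in coeffs.items())

# Hypothetical x-mapping: a 0.05 mm/pixel linear scale plus a tiny cubic
# distortion term such as a glass window might introduce.
coeffs_x = {(0, 0): 0.0, (1, 0): 0.05, (3, 0): 1e-9}
print(poly3_map(1000, 0, coeffs_x))  # 50 mm linear part + 1 mm distortion
```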
3.6 Define Origin, Maintain Calibration
After the calibration procedure has finished, a new origin might be defined
or the value for any other reference point might be set. This is done with
two steps in the calibration dialog:
Step 1: Experimental Setup
In the first step the define origin, maintain calibration mode must be selected in the experimental setup dialog. When done, click the Next button.
Step 2: Define Origin, Maintain Calibration
The following options are available:
1 Load a Frame from the Camera.
2 Grab Frames from the Camera until the Stop button is pressed.
3 Import Image, see section 3.6 below.
4 Maximize or minimize the current working window.
5 Place new origin in the left upper/lower edge, in the right upper/lower
edge or in the middle (like the cross on the buttons).
6 Definition of new origin, see section 3.6 below.
Import Image
Another option is to import one or more previously recorded files from
hard disk by pressing the Load button. Either load single images or whole
DaVis sets. This opens a modal dialog that shows a preview of the image
in question (see screenshot below).
Here an image frame must be selected for import. For images originating
from DaVis the check box can be left unchecked. Then the camera number
which is stored in each image (and shown for the current image) is read
out. For imported bitmaps (BMP or TIF files) the camera to which the image
corresponds must be defined. If the stored camera number should be
overwritten, another camera number can be given. The selection
box will turn red (instead of green) if the program recognizes a discrepancy.
Furthermore a mode must be specified: append the selected file(s) to the
selection of images, replace previously imported frames for the selected
camera number, or start all over, deleting any previously present images
and replacing them with the new selection.
Definition of new origin / reference point
If the calibration is fine but the origin is not, a new origin can be redefined
for the (already calibrated) image. Or the value of any other reference point
(not necessarily the origin) can be set.
Click the left button and follow the instructions for interactive selection
with the mouse. Click into the image to specify the new origin or reference
point.
Alternatively, enter numbers directly to define the position and coordinates
of the new origin / reference point by clicking on the right button
shown above. This opens a dialog to specify a pixel position and the (new)
corresponding mm coordinates.
At the end of both procedures the pixel / mm scaling is maintained and the
origin or reference point is changed to the new value. Any present
dewarping function is set to unity (no dewarping).
After a new origin is defined, it can be used just for the current camera
or for all cameras.
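The arithmetic behind this can be sketched in one line: the mm-per-pixel scale is maintained and only the reference pair (pixel position, mm value) changes. A 1D sketch with hypothetical values:

```python
def world_coordinate(px, ref_px, ref_mm, mm_per_px):
    """Pixel -> mm with a linear scale; redefining the origin only
    changes the reference pair, the mm/px scale is maintained.
    (Illustrative 1D sketch, not the DaVis implementation.)"""
    return (px - ref_px) * mm_per_px + ref_mm

# Original origin: pixel 0 is 0 mm, scale 0.05 mm/px:
print(world_coordinate(512, 0, 0.0, 0.05))      # 25.6 mm
# New reference point: pixel 512 is now defined as 200.0 mm:
print(world_coordinate(512, 512, 200.0, 0.05))  # 200.0 mm
```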
Finish calibration
When the achieved calibration is fine, the calibration has to be accepted
with the Finish button in the tool bar. The following dialog opens:
Select the first choice to open a new project with the new calibration
leaving the current project untouched.
Select the second option to overwrite the calibration of the currently
active project. Be aware that previously recorded images and their
derived results might not be consistent with results that will be calcu-
lated with the new calibration.
Select the third choice to overwrite the calibration of the currently
active project and scale all previous recordings present in the root
level of the project to be consistent with the new calibration. Be aware
that results which have been derived before the changed calibration
might not be consistent with results that will be calculated with the
new calibration.
Select the fourth choice if everything should be discarded. Then the
old calibration will be restored. Note that the whole calibration proce-
dure must be repeated.
Before the calibration dialog closes, a final question is asked to make sure
that the old calibration should be removed and replaced by the new calibra-
tion. After accepting this security question, DaVis returns to the project
manager.
3.7 Troubleshooting
Step 4

The go to next step button is not enabled: Check that images for all
camera and view combinations have been taken. The selection boxes
stay non-green while more images are required; check their entries
for missing (ok) texts. Otherwise go back and reduce the number of
views. Be aware that when taking additional images for a certain view,
all cameras in that coordinate system need to see the calibration plate in
the same location.
Camera images are dark: Check the exposure time in the GDI. For systems
with a laser or any other triggered light source: for safety reasons the
light sources are switched off when the calibration dialog is entered. So
either switch them on again or choose a higher exposure time to take
images with daylight.
Camera images are still too dark: If not enough light is available, several
images can be added to improve the signal-to-noise ratio further. Make
sure the calibration plate is not moving during the image acquisition!
Step 5

The go to next step button is not enabled: Check that you have clicked on
all images. All table entries on the right should hold an okay entry. After
clicking an entry that does not hold an okay entry, the corresponding
image is shown; click again, following the instructions shown in the
dialog. Or just click the All cameras/views or the Shown camera/view
button.
Step 6

The go to next step button is not enabled: Check that you have clicked on
all images. All table entries on the right should hold the number of found
marks in the form a or a+b. After adapting the search parameters and
checking the next two tips below, click on the table entry with the missing
number of marks and the search will be redone for this camera/view. Or
just click the Start search button to find the marks in the images for all
cameras/views.
Not all marks could be found: Before changing the mark search parameters,
make sure that you have specified the correct calibration plate type (step 3).
A wrong type or wrong entries for a user-defined plate are a common source
of problems. Then check that you really clicked on the start marks on the
top plane (of a 3D plate) and close enough to the center of the marks on
the plate. Only if all of this fails, adapt the search parameters.
Too many marks have been found: The search can be limited by defining a
mask, or just delete single marks from the list of found marks. Only if all
of this fails should the search parameters be adapted.
Step 7

The go to next step button is not enabled: Check that you have clicked on
all images. All table entries on the right should hold the RMS of the fit (a
number well below 2).
The RMS of the fit is still very high: Check that you really clicked the start
marks on the top plane of a 3D plate. Then check that all marks have been
found. Another common problem is that the camera images for a certain
view do not show the calibration plate at the same position. This can
happen easily when retaking images (step 4) for only one of several
cameras in a coordinate system while the calibration plate is in another
position (the calibration plate was moving in the meantime). Or it might
have happened when mixing up single images while importing them from
file (e.g. two identical images are provided for views one and two for
camera 1, and two different images for camera 2).
Furthermore there might be the case that the pinhole model has been
selected but a non-radial distortion is present in the images (e.g. when
looking through a glass window). Then the polynomial fit should be
tried.
Step 8

The red grid does not meet the corrected mark positions: This is the case if
the RMS of the fit is too high. Please see the troubleshooting tips for step 7.
In the sum of all cameras / sum of all views display the corrected camera
images do not match: Unlike with the pinhole model fit, with the polynomial
fit the user has to be aware that the correction is optimized only for the
places where marks have been found. So if, for example, marks are seen
only on the left side of the image, the right side might show huge
discrepancies. Retake the images with a full-screen view of the calibration
plate and make sure that all marks are found, or use the pinhole model for
the fit.
3.8 Stitching in a side-to-side camera setup
3.8.1 Preparation
To stitch images or vector fields, the images of both cameras need an over-
lap of at least 10% of the image area. The zoom factors have to be
almost the same (see sketch below). It is not necessary that the cameras
look at the experiment from a perpendicular direction, although this helps to
keep the interpolation errors small (smoothing effect on the images). Refer
to the following setup. It is important, though, that a continuous calibration
plate is used that covers the area of both images. It is impossible to use
two different calibration plates that are arranged side by side in front of the
cameras.
3.8.2 Calibration
To apply image or vector field stitching it is important to do a proper cali-
bration. Choose the calibration 2 mapped cameras (e.g. stereo) (please
refer to chapter 3.4). In step 5 click the crosses which are at almost the same
apparent location in the camera images taken with cameras 1 and 2 (e.g.
the mark in the center of the image), as if the cameras were looking at
the same place of the calibration plate (like in a stereo system) - do NOT
choose the physically identical cross in the overlapping area! Proceed as
indicated in chapter 3.4 and Finish the last step.
Now reenter the dialog, choose the calibration define origin -
maintain calibration, select the cross you first clicked in camera im-
age 2 and specify its real coordinate (which is not 0,0 but maybe 200,0
if it lies 200 mm to the right of the first physical cross you clicked in camera 1).
Please refer to the setup of the previous section. Proceed and Finish the
last step.
Now you have successfully done the calibration necessary to stitch images
or vector fields.
3.8.3 Stitching images
To stitch images select the set with a series of double frame images and
apply the image correction (raw->world) batch processing operation in
the image mapping group. This takes out the image distortion due to
perspective distortion and camera lens errors of each camera.
Then apply the shift and intensity correction batch processing opera-
tion in the time series group, which stitches the images together. In the
card General Parameter deselect all options but shift correction and go
to the card Shift and rotation correction. In this card you define a com-
mon point in both images to help the computer find the shift automatically.
You might want to use a subpixel shift, which introduces another image
interpolation of camera image 2 but does not show stair artefacts in the
boundary region. Or deselect the subpixel option, which just copies camera
image 2 to the side of camera image 1 with the best integral pixel shift
without introducing any further smoothing artefacts. If the cameras are
fixed you should use Calculate shift = relative to first image.
If you then want to make further evaluations (e.g. vector calculation), keep
in mind that the images have already been corrected, so switch off any "use
image correction" switches if available.
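The integer-pixel variant of the shift correction can be illustrated with a small sketch: find the shift that minimizes the difference between the two images in their overlap. This is a 1D, sum-of-squared-differences sketch for illustration; the actual DaVis shift-correction algorithm is not documented here.

```python
def best_integer_shift(ref, img, max_shift):
    """Find the integer shift s that best aligns img[i + s] with
    ref[i], by minimizing the mean squared difference over the
    overlap (1D illustrative sketch, no subpixel interpolation)."""
    best, best_err = 0, float("inf")
    for s in range(-max_shift, max_shift + 1):
        overlap = [(ref[i], img[i + s]) for i in range(len(ref))
                   if 0 <= i + s < len(img)]
        err = sum((a - b) ** 2 for a, b in overlap) / len(overlap)
        if err < best_err:
            best, best_err = s, err
    return best

ref = [0, 0, 5, 9, 5, 0, 0, 0]
img = [0, 0, 0, 0, 5, 9, 5, 0]  # same pattern, shifted by 2 pixels
print(best_integer_shift(ref, img, 3))  # 2
```

Copying at an integer shift avoids any further interpolation, matching the behavior described for the deselected subpixel option.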
3.8.4 Stitching vector fields
As the vector field calculation implies image interpolation in the multi-pass
process, stitching images is a bad idea, as the information degrades through
subsequent interpolations. To avoid additional interpolation you should in-
terpolate vector fields instead. The recommended procedure is the follow-
ing: do NOT do image stitching, but select the sequential cross correlation
mode frame 0-1, 2-3, this time switching on "use image correction".
This combines the vector calculation with multi-pass window deformation
and the image correction interpolation in one step, so the image quality is
maintained. The result of this operation is a series of double frame vector
fields corresponding to cameras 1 and 2.
On these results apply the merge vector function in the vector map-
ping group, which has no parameters to set. The result is a stitched vector
field where, in the overlapping region, the vectors are interpolated to give a
smooth transition.
4 Mask Definition
A mask is used for specifying an arbitrary region in the image where some-
thing should be computed. A mask buffer may be an image or a vector
field. In both cases a pixel or a vector is only accepted or computed if
the value in the mask buffer is non-zero at the corresponding position.
In some projects it is possible to pass any displayed image to the mask
dialog via the context menu entry send to mask definition. If a vector
file is passed via send to, the vector length is extracted first and the
resulting scalar field is passed to the mask dialog. Please read the manual
or chapter about your project for the possibilities to use a mask.
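The masking rule itself (accept a value only where the mask buffer is non-zero) can be sketched in a few lines of plain Python, with nested lists standing in for image buffers:

```python
def apply_mask(image, mask):
    """Keep a pixel only where the mask buffer is non-zero at the
    corresponding position, set it to 0 otherwise (illustrative
    sketch of the masking rule, not the DaVis implementation)."""
    return [[px if m != 0 else 0 for px, m in zip(irow, mrow)]
            for irow, mrow in zip(image, mask)]

image = [[10, 20], [30, 40]]
mask  = [[1, 0], [0, 5]]
print(apply_mask(image, mask))  # [[10, 0], [0, 40]]
```

Note that any non-zero mask value (here 1 and 5) accepts the pixel; only exact zeros mask it out.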
In the Processing dialog some operations use a mask; their parameter
cards then include a button to open this dialog. In DaVis Classic the Mask
dialog can be opened from the Rectangles menu.
Figure 4.1: Dialog for mask definition.
The upper part of the mask dialog (see figure 4.1) shows the parameter set-
tings. The lower part displays the source image on the left and the masked
image on the right. The width of both image windows can be changed by
the slider below the images. To load another source image, please press
the load image button on the upper right of the dialog. The display
parameters can be easily changed after pressing the right mouse button in
the image window and selecting Display Properties in the context menu.
There are four possible operation modes to create a mask:
load fixed mask from file simply loads a previously calculated mask
from a file. The mask can be loaded by pressing the Load button.
user defined mask allows to define the masked area with the geomet-
rical elements rectangle, circle, ellipse and polygon. In this mode
the user can create a number of objects with the mouse and then
use either the inside of a single object or the outside as valid region.
The intersection of all these masked areas defines the final masked
area.
create algorithmic mask with operation pipeline takes the cur-
rent image and applies filter operations. All pixels with a value of
0 counts are masked out. A typical procedure would be to smooth
the image sufficiently before subtracting a threshold, which sets all
unwanted regions to zero. This can be done easily using the dialog
described further down in figure 4.3.
create user defined + algorithmic mask applies the geometri-
cal mask first to reduce the masked area to a certain region. Then
the remaining pixels can be sorted out using the algorithmic operation
pipeline.
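The typical smooth-then-threshold procedure of the algorithmic mode can be sketched as follows (1D and pure Python for brevity; the function names are illustrative, not DaVis operations):

```python
def smooth3(row):
    """Sliding average over a 3-pixel window (edges use a
    truncated window)."""
    n = len(row)
    out = []
    for i in range(n):
        win = row[max(0, i - 1):min(n, i + 2)]
        out.append(sum(win) / len(win))
    return out

def subtract_threshold(row, t):
    """Values at or below the threshold become 0 (masked out);
    values above keep their excess over the threshold."""
    return [v - t if v > t else 0 for v in row]

signal = [0, 1, 9, 10, 9, 1, 0]
mask_source = subtract_threshold(smooth3(signal), 4)
# Non-zero pixels form the valid mask region:
print([1 if v > 0 else 0 for v in mask_source])  # [0, 0, 1, 1, 1, 0, 0]
```

Smoothing first suppresses isolated noise pixels, so the threshold cleanly separates the wanted region from the background.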
When finishing the mask creation, the current mask can be stored into a file
via button save current mask to file as fixed mask. This mask file is
reloaded automatically at the next program start. Additionally the current
settings of the dialog can be stored or loaded by the buttons near
dialog settings. This is useful when testing several mask definitions
and later reloading good settings for the algorithmic operation pipeline.
Figure 4.2: Parameter settings for user defined mask.
To create a user defined mask, new elements must be added. When press-
ing one of the blue or red buttons representing the graphical elements
rectangle, circle, ellipse and polygon, the area can be defined with mouse
clicks into the left image. For a rectangle two opposite corners are given.
For circle and ellipse the center position and the radius have to be defined.
For a polygon any number of points can be added. Key c closes the polygon
and ends the point adding. Key d deletes the last entry and key q quits the
definition and discards the polygon.
With button Clear last item (of X) the last defined object in the list of X
items can be deleted. This can be done several times to undo all changes.
Button Clear all items removes the whole list. The text next to the buttons
gives information on the number of objects defined so far.
To create a mask in algorithmic mode a number of different operations
can be selected in the list on the right. Press button Recalculate now to
use the image on the left side as source for the calculation and display the
new mask on the right side.
Figure 4.3: Parameter settings for algorithmic mask.
Eliminate all 0 counts: It is strongly recommended to select this option
initially, so that zero pixels in the original image do not lead to unwanted
masked-out pixels. Pixels with intensity 0 are not used during an operation.
Invert image I(x,y) = Imax - I(x,y) + N and add an intensity offset N.
Smoothing over N x N pixels.
Sliding average/maximum/minimum filter over N x N pixels.
Median filter over N x N pixels. The median filter is a non-linear filter
that cannot be expressed as a simple convolution. The median filter takes
an N x N area and sorts all elements according to their intensity. Then the
middle element is stored back as the new center pixel. Just like the other
smoothing filters, it will eliminate high frequency noise. As an advantage
it does not eliminate sharp edges as much as the other filters. On the
other hand this filter is non-linear, i.e. the average intensity of the image
may change. The median filter works mathematically very similarly to the
erosion and dilatation filters discussed below, which have a very different
behavior.
Above/below threshold sets the intensity above or below a given
threshold to 0 counts.
Erosion: Just like the median filter, this filter sorts the pixels of a
3x3 area according to their intensity. But instead of taking the middle
element, the erosion takes the element before it. The effect is a kind
of erosion: valleys become deeper and finally the image is eroded
away toward the background level.
Dilate: The dilatation filter does exactly the opposite of the erosion.
It takes the next element after the middle one in the sorted list.
The effect is that edges become sharper while plateaus flatten and
become more even.
Binarize image to 0, 10000 counts: sets all pixels to either 0 if
they were 0 before or to 10000 otherwise. This option is useful to
distinguish low-count pixels from 0. The image may also be inverted
afterwards. Note that sometimes it is easier to define the areas which
must not be used, e.g. for vector calculation.
Local RMS: Each pixel value is replaced with the local RMS deviation
from the average of its neighbours. This is a useful filter operation to
mask out areas with high particle seeding, for example. Image ar-
eas without or with poor particle seeding show a homogeneous (low)
count distribution, while areas with many particles have high local
contrast (count value variations). The same applies to probes with a
speckle pattern.
Replace value to 0 counts: sets all pixels of the specified value to 0.
This option is useful to use masks with a mask-out value different
from 0. Note that sometimes it is easier to define the areas which
must not be used, e.g. for vector calculation.
User function: A macro command with the definition macro(int in-
putoutputbuffer, int parameter) can be specified. It is called with
the image in the currently active state of the operation pipeline and
the given parameter value. The macro should return the changed
buffer. This way it is possible to write user defined filters.
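Several of the filters above are rank or local-statistics operations. The 1D sketches below (illustrative only, not the DaVis implementation) show the sorting rule shared by median, erosion and dilation, and the Local RMS contrast measure:

```python
def rank_filter_1d(row, offset):
    """Sort each 3-pixel neighbourhood and pick the element at
    middle + offset: 0 = median, -1 = erosion, +1 = dilation
    (1D sketch of the 3x3 sorting described above)."""
    out = []
    for i in range(1, len(row) - 1):
        neighbourhood = sorted(row[i - 1:i + 2])
        out.append(neighbourhood[1 + offset])
    return out

def local_rms_1d(row):
    """Replace each interior pixel by the RMS deviation of its
    3-pixel neighbourhood from that neighbourhood's mean."""
    out = []
    for i in range(1, len(row) - 1):
        win = row[i - 1:i + 2]
        mean = sum(win) / 3
        out.append((sum((v - mean) ** 2 for v in win) / 3) ** 0.5)
    return out

row = [5, 1, 8, 2, 9]
print(rank_filter_1d(row, 0))    # median:  [5, 2, 8]
print(rank_filter_1d(row, -1))   # erosion: [1, 1, 2]
print(rank_filter_1d(row, 1))    # dilate:  [8, 8, 9]
print(local_rms_1d([10, 10, 10, 10]))  # [0.0, 0.0] -> uniform area
```

A uniform region yields a Local RMS of 0 everywhere and is therefore masked out, while a speckle or particle pattern yields clearly non-zero values.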
Sometimes an image already has very low count pixels. In the displayed
mask on the right these counts are very difficult to distinguish from masked
out pixels (which by definition have 0 counts). To make this easier the
image can be binarized, but this destroys the image structure. A better dis-
play of binarized images is achieved with the help of the check box add 100
counts offset to masked area. When selected, 100 counts are added to
the image in the non-masked areas, while the masked areas stay at 0 counts.
This way the masked-out area is clearly visible and the image structure is
still displayed (e.g. outline of probe, particle seeding, experiment setup).
Algorithmic mask parameters (thresholds) are always calculated from the
original image.
Automatic display update updates the mask every time a parameter
is changed. This may slow down the work flow severely, depending on the
selected mask creation mode. Recalculate now creates the mask just once
when pressed, to test the current parameters.
When using the mask in a processing dialog operation (like vector calcula-
tion), the Recalculation mode can be defined here:
The mask is calculated once for the first image and then used for
the rest of the set.
The mask is recreated for every image in the set (possible for the algo-
rithmic mask only).
The mask is created for the first image and then moved with the de-
forming object (useful only for time series images).
5 Overlay Editor
Overlays are user defined painting structures (lines, polygons, ellipses,
texts), which are connected to buffers or cameras and automatically painted
above the 2D-image display of a buffer. The overlay display can be enabled
or disabled in the Display Properties dialog in the section about Addons.
For information about programming overlays with CL macros please read
the Command Language manual.
Overlays are connected to a buffer as buffer attributes, and they are stored
together with the buffer's raw data in file types IM7, IMG, IMX, VEC and VC7.
The Overlay Editor dialog of figure 5.1 can be opened from the processing
function add overlay in group attributes when selecting mode Edit
Frame Overlays. In DaVis Classic the editor is available from menu
Buffer.
Figure 5.1: Overlay Editor dialog
An overlay is painted either in every frame of a buffer or in a single frame
only. In the upper left line of the dialog the active frame can be selected,
starting with index 0, or the all frames definition. This frame selection is
available for multi frame buffers only.
The list below includes all defined objects for the active frame. When se-
lecting an object, its type, position and other parameters are displayed
and the overlay object itself is painted in the buffer view on the right.
To paint all overlay objects of the active frame and get a test view, please
press the view all button. Button copy will create a complete copy of
the selected object, but changes the object's name. A single object can
be deleted, and all objects of the frame can be deleted with button
Delete All. Note: There is no possibility to undo the last changes!
To define a new object, press one of the bitmap buttons with the symbols
text, rectangle, polygon, ellipse, circle and line. After pressing the button,
the mouse cursor is activated to wait for the user input, and the status
line at the bottom of the DaVis main window gives information about the
requested input: for a text object the upper left corner of the text area
must be selected with the left mouse button. For a rectangle or line
the first and second position must be selected, for a circle or ellipse the
center and a radius position. For a polygon all positions must be selected,
and when finishing, the user has to press a key: q to quit the definition, c to
close the object or o to get an open polygon.
Each object parameter can be changed manually after definition: the name,
pencolor and fillcolor, the penwidth and the positions. With the Set button
each position can be redefined by a mouse click in the buffer view. For a
text object the text itself and some style settings are available.
By default each object is visible and painted on screen. If the mode is
disabled, the object is not painted but available for a later enabling without
deleting and recreating the parameters.
With the file items in the lower left part of the dialog, open file and save
file, the active overlay can be stored or reloaded. This is an easy way to
save an overlay and later use it again in the same or another buffer.
The overlay can be exported to the mask buffer, either the active overlay
or the complete buffer overlay. During the export the intensity values of the
pixels inside the objects and outside of all objects can be defined. The
export can also be done to another buffer or even to a camera: each camera
can use an individual overlay definition, which is automatically copied into
the buffer at the end of image acquisition.
Press the Back button in the toolbar to return to the last dialog or to the
DaVis screen in Classic mode.
6 Color Palettes
There are two ways to change color palettes: with the Palette Editor
existing palettes can be replaced by user defined palettes. The Palette
Adaption allows to add user defined colors at specific positions in existing
palettes.
6.1 Edit Palette
DaVis includes a comfortable way to create your own color palettes. The
Color Palette Editor is called by the Edit Palette entry of the menu
Window Set Palette Edit Palette. When the window first opens, the
currently selected palette is cleared and a single white entry is preselected
at the center of the palette (position 127). Values at up to 20 different (and
arbitrary) interpolation positions can be set in the palette at one time.
The different items in this window and their functions are:
The button Clear Palette sets the palette to its initial settings (only one
white interpolation point at the palette center).
The color palette at the bottom of the window shows the edited palette
at its current stage.
The Interpolate button to its right determines if the colors are interpolated
between neighbouring color palette entries.
The slider above it marks the current position of the palette for changes.
The text field to its right shows this position (166 in this case). A new po-
sition is chosen by entering a new position (0..255) in the text field (plus
pressing the RETURN key), changing the slider position with the mouse
or pressing the arrow buttons to move to the previous / next defined inter-
polation position.
By dragging the sliders R, G, B the red, green and blue components of a
color are selected and the intensity is set with the slider Lightn.; the re-
sult is shown in the colored rectangle below. E.g. red=255, green=255,
blue=0, lightness=200 will yield (dark) yellow. The position in color space
is shown in the color map on the left.
Figure 6.1: Color Palette Editor

A much easier way to select a color is clicking in the color map with your
mouse. While you hold down the button, the color below your mouse cursor
is shown on the sliders and in the colored rectangle. Please note that the
color map only contains saturated colors (where at least one component
is 255, that is lightness = 255). Colors such as dark yellow or grey have
to be derived from the saturated colors yellow or white and darkened by
setting the Lightness slider to lower values.
With the button Add/Replace Color you add the currently visible color in
the colored rectangle area to the palette, either adding a new interpolation
point to the palette or replacing the color of an old one, depending on the
palette slider position.
With the button Delete Color the interpolation point at the current palette
slider position is removed (if there was one).
Pressing the button equidistant creates a black-grey-white color palette
with the selected number of interpolation points, entered in the text field
to its left, at equidistant positions. This can be a good starting point to
create new color palettes: only the colors have to be substituted (using
the arrow buttons successively to position the slider above the set positions,
choosing a new color and replacing the old ones).
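The Interpolate mode described above can be illustrated with a small sketch that linearly interpolates RGB values between the defined interpolation positions (a simplified model for illustration, not the DaVis implementation):

```python
def interpolate_palette(points):
    """Build a 256-entry palette from a dict {position: (r, g, b)}
    of interpolation points, linearly interpolating in between;
    positions outside the outermost points repeat those points."""
    pos = sorted(points)
    pal = []
    for i in range(256):
        if i <= pos[0]:
            pal.append(points[pos[0]])
        elif i >= pos[-1]:
            pal.append(points[pos[-1]])
        else:
            lo = max(p for p in pos if p <= i)
            hi = min(p for p in pos if p >= i)
            t = 0.0 if lo == hi else (i - lo) / (hi - lo)
            pal.append(tuple(round(a + t * (b - a))
                             for a, b in zip(points[lo], points[hi])))
    return pal

# Black at position 0, white at position 255:
pal = interpolate_palette({0: (0, 0, 0), 255: (255, 255, 255)})
print(pal[127])  # (127, 127, 127) -- mid grey
```

Adding a colored point in the middle of such a dict is the analogue of Add/Replace Color at the current slider position.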
With the save and load buttons the modified palette is saved to and reloaded
from the file userpal_N.col, where N is the number of the active palette.
The file name is created automatically with the path of the DaVis startup
directory. If these color palette files are found in the DaVis directory at
startup, they are loaded automatically, overwriting the DaVis standard
palettes with the corresponding numbers. When deleting these files with
the Windows file explorer, the DaVis default palettes are visible again.
In this way, own palettes can be saved and made available for further
DaVis sessions. The files userpal_N.col have a simple ASCII file format.
Numbers can be changed with any text editor if necessary.
The button Color path shows the color palette entries in the color map
above (see figure 6.1). Please note that because only saturated
colors are shown, visually different colors that vary only in lightness (e.g.
black, grey and white, or dark yellow and light yellow) are shown at the
same position. As long as only the sliders R, G, B, Lightness or the palette
position are moved, or the mouse is moved across the color map, the color
path stays visible. This allows to select a position along the color path
and choose a new color for an intermediate interpolation point. But once a
mouse button is released the color path will be gone (it can be redrawn
again, e.g. after adding the selected color to the color palette).
6.2 Adapt existing palette
This wizard allows to adapt existing color lookup palettes by adding extra
color entries at specific positions to produce the effect of equipotential lines
in a scalar field. The adaption is not stored when shutting down DaVis!
At each start of the software the default palettes are available again.
The dialog of figure 6.2 is opened via menu Window Set Palette Adapt
Palette, which can be reached from any dialog at any time.
The source palette is the base of the adaptation. The palette index can be
changed in the item on top of the dialog using the spin buttons. When
pressing the button to the right of the palette index, the palette selection
dialog of figure 6.3 opens. There all palettes are displayed and a specific
one can be selected by pressing the button on top of the palette. The dialog
is cancelled with the X button in the title bar.
Destination palette is the palette that will store the result from source
palette colors plus some additional color entries. Again the palette index
can be changed in the top of the dialog.
Figure 6.2: Dialog to adapt existing Color Lookup Palettes.
Figure 6.3: Dialog to select a new palette
Items min/max give the minimum and maximum (arbitrary) range of the
intensity mapping to color index. E.g. 1 is used for the minimum color
(black) and 100 is used for the maximum color (white). Relative to this
mapping the extra color entries are placed inside the adapted palette to
reflect the specified intensities.
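This intensity-to-index mapping can be sketched as follows (a linear mapping is assumed for illustration; the exact DaVis mapping is not documented here):

```python
def color_index(intensity, vmin, vmax, n_colors=256):
    """Map an intensity in [vmin, vmax] to a palette index in
    [0, n_colors - 1]; out-of-range intensities are clamped.
    (Illustrative linear model, not the DaVis implementation.)"""
    t = (intensity - vmin) / (vmax - vmin)
    t = min(max(t, 0.0), 1.0)
    return round(t * (n_colors - 1))

# With min = 1 and max = 100, as in the example above:
print(color_index(1, 1, 100))    # 0   -> minimum color (e.g. black)
print(color_index(100, 1, 100))  # 255 -> maximum color (e.g. white)
```

An additional color entry specified at intensity 50 would thus be placed near the middle of the adapted palette.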
Up to 10 Additional color entries can be defined. They define a position
in the intensity mapping and a color. A preview of the new palette is given
on the right.
Each color entry can be enabled or disabled. The new color is shown and
can be changed via the right button in this row. In the color selection
dialog of figure 6.4 some predefined colors are selectable in the bottom row.
When another color is needed, the color component sliders for red, green
and blue can be moved. These free colors must be accepted with button
Slider, while button None cancels the selection.
Figure 6.4: Dialog to select a new color
Button Restore all original palettes (but current) removes all earlier
changes to all but the active palette. The changes of the current palette
are not deleted.
Button Restore all original palettes and close dialog undoes all changes
including the current destination palette and closes the dialog.
Button Apply changes in current palette adapts the current destination
palette only. The dialog stays open for further changes.
Button Apply all changes and close dialog applies all changes made since
the dialog was opened. Then the adaption dialog is closed.
7 Camera Setup Wizards
7.1 Focal length calculator
Prior to setting up the camera you may use the help menu to assist you in
choosing the optimal camera working distance for a desired field of view
(measurement area). Select menu item Help Wizard: focal length
calculator to open dialog of figure 7.1.
Figure 7.1: Focal length calculator
First select the camera type used, which defines the size of the CCD
sensor. Then enter the field of view to be monitored by the camera as
the size of the sample under study, and enter the distance between camera
and sample. Press button Calculate to find the focal length which fits the
sample optimally into the image.
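The calculation behind such a wizard is plausibly the thin-lens imaging relation. The following sketch is an illustrative approximation only; the exact model used by DaVis is not documented here, and the example numbers are hypothetical.

```python
def focal_length_mm(sensor_mm, field_of_view_mm, distance_mm):
    """Thin-lens estimate of the focal length that fits a field of
    view onto the sensor at a given object distance:
    magnification m = sensor / FOV, and f = distance * m / (1 + m).
    (Illustrative approximation, not the documented DaVis model.)"""
    m = sensor_mm / field_of_view_mm
    return distance_mm * m / (1.0 + m)

# Hypothetical numbers: 13.3 mm sensor, 200 mm field of view, 1 m away:
print(round(focal_length_mm(13.3, 200.0, 1000.0), 1))  # roughly 62 mm
```

As expected, a larger working distance at the same field of view calls for a longer focal length.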
7.2 dt / velocity calculator
The dialog of figure 7.2 is useful to find the time difference between two
exposures of double frame exposures (dt) for a given velocity (as expected
mean velocity of the flow) or to find the best velocity for a given dt. Select
one of both modes.
Select the type of the used camera; its sensor size is given in pixels. Enter
the field of view to be monitored and the pixel shift of the particles
that should result from the given dt or velocity. Press button Calculate
to find the solution.
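The underlying relation is a simple scaling between pixel shift, image scale and velocity; a minimal sketch (function name and example values are ours):

```python
def dt_for_velocity(field_of_view_mm, image_width_px, shift_px, velocity_m_s):
    # image scale in mm per pixel
    mm_per_px = field_of_view_mm / image_width_px
    # time for the flow to cover the desired pixel shift (mm -> m)
    return shift_px * mm_per_px / 1000.0 / velocity_m_s

# 100 mm field of view on 1024 px, 8 px shift at 10 m/s:
dt = dt_for_velocity(100.0, 1024, 8, 10.0)   # about 78 microseconds
```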
Figure 7.2: dt / velocity calculator
7.3 Synthetic image generator 2D
You open the dialog by selecting the entry Synthetic image generator
2D in the Help menu of the DaVis main dialog.
The dialog of figure 7.3 is useful to create a set of example images - both
single frame and double frame - if you have no hardware connected or no
experimental setup available. It simulates a camera and creates a series
of images with moving structures that behave over time as specified in
the dialog. Also the ground truth (the vector field describing the pattern
movement) is created with the images for later reference.
The lower part of the dialog shows the created images on the left and the
corresponding vector fields on the right.
In the upper part of the dialog there are four groups of parameters to be set:
Figure 7.3: Synthetic image generator 2D
Storage mode:
Here you specify the number of images to be created and where the result
will be saved. Pressing the open folder button will ask to select a new
destination for the image set. If the name of the set already exists, a suffix
is added (name.set becomes name_XX.set with an increasing number
XX) to prevent overwriting existing data. Below that set, a set named
vector.set is created where the corresponding vector fields are stored
as reference data.
Furthermore the minimum and maximum pattern Shift(t) over time is defined
for all images along the time series. For example: setting this to 10
and 10 using double frame images (refer to the next section) will produce a
series of double frame images, each containing a (max.) 10 pixel movement
of the pattern. Whereas setting this to 0 and 10 using single frame images
(refer to the next section) will produce a series of single frame images with
an increasing shift from 0 pixel movement (first image) to 10 pixel
movement (last image) of the pattern.
Pressing Create image and vector sets starts the creation and storage
of the data. In contrast Test settings creates just one image and one
vector field (using the specified maximum shift) temporarily to check the
parameter settings. This is not saved to your hard disk.
Simulated camera:
In this group of parameters the type of simulated camera is defined:
The Dynamic range (max. counts) defines the maximum number of
counts of the simulated camera before it saturates. There will be no higher
intensity in the image. The Data format of image is either word,
resembling a real digital camera holding only integer count values, or float
with interpolated intensity values. The Image size (width * height) is
the number of pixels of the images (camera chip size). Also you define if
you want to simulate a single frame or a double frame camera. For a
double frame camera a new random pattern is created for every (double
frame) image. For a single frame camera one random pattern is used for
the whole time series to simulate time resolved recordings.
Finally you can add simulated camera noise, if you check the add camera
noise checkbox. The parameters are the e-/count conversion factor,
which is needed to simulate the Poisson distributed photon statistics noise:
pixels holding an intensity of N counts that are equivalent to M electrons
will show an average noise of √M electrons, which is reconverted to
counts. This Poisson statistics distribution is approximated by a Gaussian
distribution. Furthermore the thermal noise level is defined, which produces
a Gaussian bias background level on all pixels. Typical values of real cameras
are 5 and 10 respectively.
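The noise model described above can be sketched as follows (a Gaussian approximation of the Poisson shot noise plus a Gaussian thermal background; the function name is ours and the defaults follow the typical values quoted, not any DaVis API):

```python
import numpy as np

def add_camera_noise(counts, e_per_count=5.0, thermal_sigma=10.0, rng=None):
    # Poisson photon statistics, approximated by a Gaussian: a pixel with
    # M electrons gets noise of standard deviation sqrt(M) electrons
    rng = np.random.default_rng() if rng is None else rng
    electrons = np.maximum(counts * e_per_count, 0.0)
    shot_counts = rng.normal(0.0, np.sqrt(electrons)) / e_per_count
    # Gaussian thermal background level on all pixels (in counts)
    thermal = rng.normal(0.0, thermal_sigma, np.shape(counts))
    return counts + shot_counts + thermal

img = np.full((64, 64), 400.0)                        # 400 counts everywhere
noisy = add_camera_noise(img, rng=np.random.default_rng(0))
```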
Particle parameter:
In this group box the pattern properties are defined:
In Distribution the spatial location of all particles is defined. It is either
totally random (which will lead to random clustering of particles), which is
more appropriate to simulate PIV seeding particles, or a blue noise
distribution. You can define a particle density (particles per pixel). A typical
value is 0.05 (for 3-4 pixel particle size). Larger particles require smaller
densities so that they do not overlap too much.
Blue noise distribution is a random pattern where particles / pattern dots
have a minimum distance to each other, which has to be given to prevent
overlapping of particles. Typically this value is a little bigger than the particle
size, e.g. 12 for particle size 8. These dots are nevertheless randomly
distributed so that there is no preferred direction.
The Particle size and Particle form specify how the particles / the pattern
look like: the size of each particle in pixels and the radial intensity
distribution inside the particle. This is either a step function (dot of
homogeneous intensity) with a relatively narrow fall off to background
intensity, or a Gaussian bell intensity profile with a standard deviation as
defined by the particle size. The latter is a close approximation to what PIV
seeding particles look like.
The Max. intensity is the highest intensity value of a single particle. Due to
overlapping of particles, intensities will add up until the camera saturation
is reached. You can avoid the summing of intensities of overlapping plateau
intensity distribution particles if you choose the same number of counts for
both the particle intensity and the Dynamic range (max. counts) of the
camera (see above). Then the plateau already saturates the camera and
dots will merge rather than add up.
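A minimal sketch of rendering Gaussian-profile particles at a given density, with overlaps adding up to the saturation limit (all names and values are illustrative, not the DaVis implementation):

```python
import numpy as np

def render_particles(width, height, density=0.05, size=3.0,
                     intensity=1000.0, max_counts=4095.0, seed=1):
    # random particle positions at the given density (particles per pixel)
    rng = np.random.default_rng(seed)
    n = int(density * width * height)
    xs, ys = rng.uniform(0, width, n), rng.uniform(0, height, n)
    yy, xx = np.mgrid[0:height, 0:width]
    img = np.zeros((height, width))
    # Gaussian intensity profile per particle; overlaps add up ...
    for x, y in zip(xs, ys):
        img += intensity * np.exp(-((xx - x) ** 2 + (yy - y) ** 2)
                                  / (2 * size ** 2))
    # ... until the camera dynamic range saturates
    return np.minimum(img, max_counts)

img = render_particles(64, 64)
```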
Vector Field parameter:
Here the pattern /particle shift over time is specified. You have the choice
of several distinctive behaviours:
For Pattern shifting there are the two modes Strain and PIV.
In PIV only the simulated particles are moved and they maintain their individual
intensity profile (like moving seeding particles in air or water). Furthermore,
in double frame mode the shift is symmetrically distributed: -V/2 in the first
frame and +V/2 in the second frame.
In Strain the pattern of each dot is smeared with the shifts defined by the
vector field and becomes distorted. In double frame mode the shift is only
present in the second frame.
In the Vector field selection the following choices are available (refer to
table 7.1), each of them with a parameter to be set:
Finally the Reference vector grid for the ground truth vector field is defined,
to be able to compare it to results calculated from the images more
easily.
Homogeneous deformation: A simulated expansion of a sample with a
homogeneous stretch in y-direction. The size of the undistorted region can
be set.
Crack expansion: A simulated crack opening with a nonlinearly growing
stretch in x-direction away from the image center. The size of the region
that reaches 10% to 90% of the maximum shift can be set.
Vertical shear zone: A simulated shear zone with a nonlinearly growing
shear in y-direction with the maximum in the image center. The size of the
region that reaches 10% to 90% of the maximum shift can be set.
Circular shear zone: The inner circular part performs a clockwise rigid
rotation, the outer part an anti-clockwise rigid rotation, with a smooth
(nonlinear) transition zone in between. The width of the region that reaches
10% to 90% of the final shift on either side can be set.
Oseen vortex: A simulated Oseen vortex, which models a line vortex that
decays due to viscosity. The approximate distance in pixels from the vortex
center where you will get maximum vorticity can be specified.
Rigid shift: A rigid shift. The parameter is the direction in degrees of that
shift.
Laminar flow: A simulated laminar flow through a pipe in x-direction with
a parabolic velocity profile in y-direction. The parameter specifies how far
outside of the image the wall of the simulated pipe is lying.
Rigid rotation: A rigid rotation. The parameter is the distance of the
rotation center in x-direction from the center of the image.
Table 7.1: Types of simulated vector fields
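The Oseen vortex entry in table 7.1 refers to the Lamb-Oseen model; its tangential velocity profile can be sketched as follows (circulation and core radius are arbitrary example values):

```python
import numpy as np

def oseen_vtheta(r, circulation=1000.0, r_core=20.0):
    # Lamb-Oseen tangential velocity: solid-body-like rotation near the
    # center, 1/r potential-vortex decay far away
    r = np.asarray(r, dtype=float)
    safe_r = np.maximum(r, 1e-12)          # avoid division by zero at r = 0
    v = circulation / (2 * np.pi * safe_r) * (1 - np.exp(-(safe_r / r_core) ** 2))
    return np.where(r > 0, v, 0.0)

r = np.linspace(0.0, 100.0, 201)
v = oseen_vtheta(r)        # velocity peaks near r = 1.12 * r_core
```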
8 Image correction and distortion
The Image correction and distortion dialog of figure 8.1 checks the
image dewarping function on images as well as the effect of the vector
preshift used in the vector multi-pass calculations. The example shows
a calibration plate image to illustrate the dewarping. If an image should
be used for dewarping, press the right mouse button on the image and select
Send to image correction and distortion.
Figure 8.1: Image Reconstruction dialog
The left image holds the source image which has been sent to the dialog.
The right image includes the result of the correction. At first select the
parameters as shown in figure 8.1. Then press button Apply correction
to start processing. The result might look like figure 8.2.
In this example all distortions due to the perspective distortion as well as
camera lens errors are taken out. The crosses are placed on a regular grid.
Figure 8.2: Image Reconstruction result
This can be done for the standard case of z = 0 mm height or for any
other fixed height above the calibration plate position of view 1 used in the
calibration process. Alternatively a height map can be provided, holding the
z-position for each pixel as used in the StrainMaster 3D or SurfaceFlow
package.
Another way is to provide a corrected image and distort it via button Apply
distortion.
Both the source and result images can be saved and loaded by pressing
the corresponding button.
In the dewarping process a bilinear interpolation is used to access image
information at subpixel positions. One optional mode is the Whittaker
reconstruction with a specified filter length (e.g. 7 pixels). This mode is
significantly slower but might provide sharper images.
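The bilinear lookup at a fractional pixel position can be sketched as follows (a minimal illustration of the concept, not the DaVis implementation):

```python
import numpy as np

def bilinear(img, x, y):
    # weight the four surrounding pixels by the fractional distances
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    fx, fy = x - x0, y - y0
    h, w = img.shape
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)   # clamp at the border
    return ((1 - fx) * (1 - fy) * img[y0, x0] + fx * (1 - fy) * img[y0, x1]
            + (1 - fx) * fy * img[y1, x0] + fx * fy * img[y1, x1])

img = np.array([[0.0, 10.0], [20.0, 30.0]])
bilinear(img, 0.5, 0.5)   # -> 15.0, the average of the four pixels
```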
A second optional mode is to shift the image contents as specified in a
reference vector field. To use this mode the flag use image correction
has to be disabled and use vector buffer has to be enabled. The result is
like the example of figure 8.3.
This second mode gives a possibility to check if the vector results really
describe well the shifts of the image. If two frames are present for a cam-
era, then another option is available: Either the whole shift can be applied
Figure 8.3: Image Reconstruction with vector shift
to the second frame only, as recommended for time series images. Or half
of the vector shift can be distributed evenly on both frames. For double
frame images this mode with opposite signs on the two frames is recommended.
After processing the reconstruction, the patterns of both frames should
match exactly when switching between both frames. If there are still shifts
present in the corrected image, this indicates that the vector field still does
not describe the shifts in those regions well enough. More (multi pass)
iterations and/or a finer grid should be tried.
For vector calculations with image correction, both use image correction
and use vector buffer can be combined.
Expert users can choose the option to correct/distort frames for a specific
camera only. Another expert option allows selecting all frames with existing
CameraNameX buffer attributes. The auto mode is recommended.
9 Time-super-sampling
This tool increases the temporal resolution of PIV vector field SETs derived
from high speed time series recorded in frame straddling mode.
To open the Time-super-sampling dialog of figure 9.1 use the right mouse
button on a (vector field) image and select the Send to time-super-
sampling option. The dialog operates on the SET the image belongs to.
Figure 9.1: Time-super-sampling dialog.
On top of the dialog the super-sampling-factor can be set, which defines the
increase of temporal resolution of the source SET. The number of generated
intermediate vector fields between two consecutive source vector fields is
equal to the super-sampling-factor minus 1. The dialog's data viewer can be
used to slide over the original source images and the generated intermediate
images to get an instant view on the results. In the groupbox Currently
displayed image the user can see whether the currently shown image is an
original image from the source SET or a calculated intermediate image. By
using the Save as new SET... button a time-super-sampled SET is stored
to disc using the current settings. Only the selected range in the data
viewer is saved to the new SET.
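The timing of the generated fields follows directly from the factor; note that the actual TSS algorithm advects the flow field rather than interpolating linearly, so this sketch only illustrates the time stamps of the intermediate fields:

```python
def intermediate_times(t0, t1, factor):
    # factor - 1 intermediate vector fields are generated between two
    # consecutive source fields at times t0 and t1
    dt = (t1 - t0) / factor
    return [t0 + k * dt for k in range(1, factor)]

intermediate_times(0.0, 1.0, 4)   # -> [0.25, 0.5, 0.75]
```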
To calculate the intermediate vector fields, the so-called time-super-sampling
(TSS) algorithm is used. This algorithm was described by Fulvio Scarano in
his paper An advection model to increase the time-resolution of PIV time-
series of July 2010. The current implementation can be used for both 2C
and 3C vector fields in 2D and 3D. The input vector fields should be derived
from high speed time series recorded in frame straddling mode. Tomo/3D
vector fields are only supported if the grid values in X, Y and Z are equal.
The TSS algorithm requires specific time stamps in the vector field buffers:
In every case, it is necessary that the "AcqTimeSeries0"-attribute is set
correctly. Additionally the attribute "FrameDt0" must be available. If this
attribute is missing, alternatively the X- and I-scales must be defined for
frame 0. The X-scale must have the unit "mm" and the I-scale must have
the unit "m/s". If these requirements on attributes are not fulfilled, a SET
cannot be used with the TSS algorithm. (If necessary, these attributes can
be copied to the vector field buffers from their image source buffers.)
This functionality is also available as a Processing operation, see 1003001_DaVis_D80.pdf
for details.
10 Correlation Map
To open the Correlation map dialog of figure 10.1 use the right mouse
button on an image and select the Send to Correlation map option.
Figure 10.1: Correlation Map dialog
On the top of the dialog the Correlation parameter and the Vector component
values are located. On the bottom left is the particle image used as
the source for the correlation, and on the bottom right the correlation
function at the position determined by the cursor on the particle
image.
While the cursor is moved on the particle image the correlation is recalculated
automatically. This works when the Switch mouse selection on/off
flag is off. The recalculation is done for the specified Correlation window
size at the Correlation window position, which is given in pixels and
determined by the cursor position on the particle image. The result of the
correlation is updated with every mouse move.
The left mouse button can be used to set the Switch mouse selection
on/off flag. Then the position of the correlation window is fixed. By moving
the cursor on the correlation map the correlation height is determined and
shown in the status line. The cursor position is displayed, too, either scaled
or in pixel coordinates when keeping the SHIFT button on the keyboard
pressed.
Switch mouse selection on/off: When the flag is off the correla-
tion map is updated automatically when the cursor is moved on the
particle image.
Correlation window position: Shows the center position of the in-
terrogation window depending on the cursor position on the particle
image.
Correlation window size: Determines the correlation window size
for the currently executed correlation.
Vector field was created using image correction: Determines
if the calculation of the correlation is taking image correction into
account or not.
Vector component values: Shows the position of the 1st, 2nd, 3rd
and 4th choice vector in [mm] for x- and y-component.
disabled: not used for correlation map.
11 X/Y Plot
The X/Y Plot dialog can be reached via the X/Y Plot button in the Project
Manager. This dialog creates a user defined x/y plot of an image or vector
data set. Depending on the type of the data set, different data sources for
the x- and y-axis are available.
Figure 11.1: XY Plot dialog.
The dialog of figure 11.1 is subdivided into a right part with the views of
the source and result, and a left part with the window manager on top and
the x/y plot parameters at the bottom.
11.1 Window Manager
The Window Manager is located in the top left corner. It is used to set up
the image display configuration. A suitable array of images (vertical x
horizontal) may be selected using the Layout textbox within the range 1x1
up to 2x2.
The following types can be displayed:
Source: The selected vector dataset which is used to create the x/y
plot.
X/Y Plot: The resulting x/y plot.
11.2 Plot mode
A Plot mode has to be defined before selecting the data for the x- and
y-axis. The following modes are available:
Figure 11.2: Plot modes of the XY plot dialog.
x value / y value (1 profile): This mode gets from each source
vector field 1 data point (the selected x and y value) to create one
profile with n (number of vector fields) data points.
Example: Average strain Eyy over 9 vector fields.
x line / y line (n profiles): This mode gets from each source vector
field 1 profile (the selected x data line over the y data line) to create
one plot with n (number of vector fields) profiles.
Example: Vector component Vx along a specified line for 9 vector
fields.
Note: Depending on the selected Plot mode the dialog supports different
groups/functions for the y- and x-axis.
11.3 Vector fields
11.3.1 Y - axis
Select the data source for the y-axis between a scalar field operation (all
available scalar field operations which are described in the Processing
chapter) and device data, which is used to store additional data like analog
values from an A/D converter.
extract scalar field: ...: Choose the group and function of the data
you want to use for the y axis. Depending on the Plot mode you
have different possibilities for the Data range: In the x value / y
value mode you can choose between the average value of the whole
vector field or the average of a user defined rectangle which can be
set with the Rect button. The x line / y line mode expects a data
line with n (>1) values. For this you can choose between:
arbitrary line: Select an arbitrary line in the source vector field.
The operation extracts the selected scalar values along this line.
horizontal line: The scalar values of the selected horizontal
line.
horizontal line n-m (average): Select a range of horizontal
lines. The operation calculates an average line between the se-
lected range.
vertical line: The scalar values of the selected vertical line.
vertical line n-m (average): Select a range of vertical lines.
The operation calculates an average line between the selected
range.
device data: Choose one of the available device data you want to
use for the y-axis. If the Device data box shows - no device data
- there are no data in the current source dataset. In the x value / y
value mode the operation extracts the average of the selected device
data for each vector field. In the x line / y line mode the operation
gets the complete device data line.
11.3.2 X - axis
Select the data for the x-axis. Depending on the selected Plot mode and
the data for the y-axis, one or several of the following groups are available:
extract scalar field: ...: Choose the group and function of the data
to be used for the x-axis. With extract scalar field... for the y- and
x-axis both axis are using the data range of the y-axis.
device data: Choose one of the available device data for the x-axis.
If the Device data box shows - no device data - there are no data in
the current source dataset. In mode x value / y value the operation
extracts the average of the selected device data for each vector field.
In mode x line / y line the operation gets the complete device data
line.
file number: Uses the file number of each source file for the x-axis
(only available for x value / y value mode).
time: Extracts from each source file the stored acquisition time (only
available for x value / y value mode).
values along line: This option is available for the x line / y line
mode only.
11.4 Images
11.4.1 Y - axis
Select the data source for the y-axis between image data and device data,
which is used to store additional data like analog values from an A/D
converter.
image data: ...: Choose the group and function of the data you want
to use for the y axis. Depending on the Plot mode you have different
possibilities for the Data range: In the x value / y value mode you
can choose between the average value of the image or the average
of a user defined rectangle which can be set with the Rect button.
The x line / y line mode expects a data line with n (>1) values. For
this you can choose between:
arbitrary line: Select an arbitrary line in the image. The opera-
tion extracts the values along this line.
horizontal line: The image data of the selected horizontal line.
horizontal line n-m (average): Select a range of horizontal
lines. The operation calculates an average line between the se-
lected range.
vertical line: The scalar values of the selected vertical line.
vertical line n-m (average): Select a range of vertical lines.
The operation calculates an average line between the selected
range.
device data: Choose one of the available device data you want to
use for the y-axis. If the Device data box shows - no device data
- there are no data in the current source dataset. In the x value
/ y value mode the operation extracts the average of the selected
device data for each image. In the x line / y line mode the operation
gets the complete device data line.
11.4.2 X - axis
Select the data for the x-axis. Depending on the selected Plot mode and
the data for the y-axis, one or several of the following groups are available:
device data: Choose one of the available device data for the x-axis.
If the Device data box shows - no device data - there are no data in
the current source dataset. In mode x value / y value the operation
extracts the average of the selected device data for each image. In
mode x line / y line the operation gets the complete device data line.
file number: Uses the file number of each source file for the x-axis
(only available for x value / y value mode).
time: Extracts from each source file the stored acquisition time (only
available for x value / y value mode).
values along line: This option is available for the x line / y line
mode only.
11.5 Store mode
The Store mode determines if and how the resulting X/Y Plot is stored:
off: The result is not stored in the current project (but temporary on
harddisk).
default: The result is stored in the current project with a default
name in the subfolder of the source dataset.
specify: The result is stored in the current project with the specified
name in the subfolder of the source dataset.
11.6 Processing
The Start Processing button creates the X/Y Plot using the selected file
range (Source groupbox), the Plot mode, Y-axis and X-axis parameter.
The result is automatically shown in the right part of the dialog.
12 About Filter Operations
12.1 Filter Operations
Filter operations play an important role in image processing. They are used
for smoothing, edge detection, texture or contrast enhancement, and many
other special tasks. There are numerous filters, especially the NxN filters,
which operate only on the close neighborhood of a pixel. On the other hand,
the filter operations done using the Fourier transform are nonlocal and
operate on frequencies, i.e. structures like sharp edges are analysed in the
realm of their frequency representation.
The FFT filters are described in an extra chapter. In this chapter the linear
and non-linear neighborhood filters are described.
Among them the most often used ones are those which calculate the
new value of a pixel from a linear combination of the surrounding 8 pixel
values and the value of the center pixel itself. Mathematically this can be
represented as a 3x3 matrix, which is moved over the image to
calculate a new center pixel each time it is applied to a 3x3 area:

$$f'_{i,j} = \left( a_1 f_{i-1,j-1} + a_2 f_{i,j-1} + a_3 f_{i+1,j-1} + b_1 f_{i-1,j} + b_2 f_{i,j} + b_3 f_{i+1,j} + c_1 f_{i-1,j+1} + c_2 f_{i,j+1} + c_3 f_{i+1,j+1} \right) / \mathrm{divisor}$$

where

$$\begin{pmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{pmatrix} / \mathrm{divisor}$$

is the filter matrix. divisor is the final division factor of the sum. Only
if divisor equals the sum over the filter matrix elements does the average
image intensity stay the same. For example, if all of the filter elements are 1
(= smoothing), then divisor is set to 9 in order not to change the average
image intensity.
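The 3x3 neighborhood filter described above can be written as a direct (unoptimized) implementation; this is a sketch, border handling in the real software may differ:

```python
import numpy as np

def filter3x3(img, kernel, divisor):
    # weighted sum of each 3x3 neighborhood, divided by `divisor`;
    # border pixels are left unchanged in this sketch
    out = img.astype(float).copy()
    h, w = img.shape
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            out[i, j] = (kernel * img[i - 1:i + 2, j - 1:j + 2]).sum() / divisor
    return out

smooth = np.ones((3, 3))                   # the 1-1-1 smoothing matrix
img = np.zeros((5, 5)); img[2, 2] = 9.0
res = filter3x3(img, smooth, 9)            # spreads the spike over 3x3
```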
12.2 Smoothing Filter
Smoothing filters are used to eliminate high frequency noise by taking an
average over the neighborhood of a pixel.
12.2.1 Smoothing 3x3

$$\begin{pmatrix} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \end{pmatrix} / 9$$
This is the simplest smoothing filter and it works well. Function smoothing
k x k is the same matrix but using up to 9x9 pixels. Of course, any
smoothing filter will also smooth out sharp edges which are not noise
but physically real.
Figure 12.1: Original image (left) and 9x9-smoothed image (right).
12.2.2 Gaussian LowPass
This is another good smoothing filter. It takes the center pixel more into
account than the simple smoothing-3x3 filter. Mathematically it is more
like a Gaussian curve. Viewed in the frequency realm, the Gaussian low-pass
filter is smoother than the simple 1-1-1 filter.

$$\begin{pmatrix} 1 & 2 & 1 \\ 2 & 4 & 2 \\ 1 & 2 & 1 \end{pmatrix} / 16$$
12.2.3 Median Filter
The median filter is a non-linear filter not working with the simple matrix
equation above. The median filter takes a k x k-area (k = 3, 5 or 7) and
Figure 12.2: Gaussian and Smoothing filters on an example image (panels: original image, 7x7 Gaussian low-pass, smoothing 3x3, smoothing 9x9).
sorts all elements according to their intensity. Then the middle element
is stored back as the new center pixel. Just like the other smoothing filters, it
will eliminate high frequency noise. As an advantage it does not eliminate
sharp edges as much as the other filters do. On the other hand this filter
is non-linear, i.e. the average intensity of the image may change. The
median filter works mathematically very similarly to the erosion and dilation
filters discussed below, which have a very different behaviour.
12.3 Sobel Filter
The sobel filter will enhance edges. There are four different ones, each one
preferring a different direction. The effect is one of introducing a pseudo-3D
representation of the data.
Figure 12.3: Median filter on 3 x 3 area and on 7 x 7 area.
All of the sobel filters do not use any divisor (=1). The pixels are simply
multiplied with the filter matrix and summed up.
Sobel 1 vertical:

$$\begin{pmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{pmatrix}$$

Sobel 2 horizontal:

$$\begin{pmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{pmatrix}$$

Sobel 3 diagonal:

$$\begin{pmatrix} 0 & 1 & 2 \\ -1 & 0 & 1 \\ -2 & -1 & 0 \end{pmatrix}$$

Sobel 4 diagonal:

$$\begin{pmatrix} -2 & -1 & 0 \\ -1 & 0 & 1 \\ 0 & 1 & 2 \end{pmatrix}$$
12.4 Compass Filter

The compass filters are also edge detection filters. They are named according
to their preferred direction N, NE, E, ... .
Compass-N:

$$\begin{pmatrix} 1 & 1 & 1 \\ 1 & -2 & 1 \\ -1 & -1 & -1 \end{pmatrix}$$
Figure 12.4: Example for Sobel-1 vertical filter.
Compass-NE:

$$\begin{pmatrix} 1 & 1 & 1 \\ -1 & -2 & 1 \\ -1 & -1 & 1 \end{pmatrix}$$

Compass-E:

$$\begin{pmatrix} -1 & 1 & 1 \\ -1 & -2 & 1 \\ -1 & 1 & 1 \end{pmatrix}$$

and so on...
12.5 Laplace Filter
Laplace filters will detect edges in any direction. Simply try out the three
different Laplacian filters. Laplace-3 is a Laplacian of a Gaussian filter, i.e.
it incorporates a second derivative to provide a noise-independent edge
detection filter.
Just like the sobel filters the divisor is 1.
Laplace-1:

$$\begin{pmatrix} 0 & -1 & 0 \\ -1 & 4 & -1 \\ 0 & -1 & 0 \end{pmatrix}$$

Laplace-2:

$$\begin{pmatrix} -1 & -1 & -1 \\ -1 & 8 & -1 \\ -1 & -1 & -1 \end{pmatrix}$$

Laplace-3:

$$\begin{pmatrix} 1 & -2 & 1 \\ -2 & 4 & -2 \\ 1 & -2 & 1 \end{pmatrix}$$
Figure 12.5: Example of the Laplace filter.
12.6 Sharpening Filter
These filters enhance the contrast of the image, i.e. the image is sharp-
ened. Mathematically this is done by adding the original image to the lapla-
cian. The factor a determines the addition of the edge-enhancing second
derivative to the image. For a = 1 this is a 50% addition.
Sharpness-1:

$$\begin{pmatrix} 0 & -1 & 0 \\ -1 & 4+a & -1 \\ 0 & -1 & 0 \end{pmatrix} / a$$

Sharpness-2:

$$\begin{pmatrix} -1 & -1 & -1 \\ -1 & 8+a & -1 \\ -1 & -1 & -1 \end{pmatrix} / a$$
Figure 12.6: Example of Sharpness filter.
12.7 Other Special Filters
12.7.1 Erosion and Dilation Filter
Just like the median filter, this filter sorts the pixels of a 3x3 area
according to their intensity. But instead of taking the middle element, the
erosion will take the element before it. The effect is a kind of erosion:
valleys become deeper and finally the image will be eroded away toward the
background level.
The dilation filter does exactly the opposite. It will take the next element
after the middle one in the sorting list. The effect is that edges become
sharper and plateaus flatten and become more even.
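Median, erosion and dilation can all be seen as rank filters on the sorted 3x3 neighborhood, following the text's description (middle element, one below it, one above); a sketch with an illustrative function name:

```python
import numpy as np

def rank_filter3x3(img, offset):
    # sort each 3x3 neighborhood; index 4 of the 9 sorted values is the
    # median, offset -1 the "erosion" of the text, +1 the "dilation";
    # border pixels are left unchanged in this sketch
    out = img.astype(float).copy()
    h, w = img.shape
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            flat = np.sort(img[i - 1:i + 2, j - 1:j + 2], axis=None)
            out[i, j] = flat[4 + offset]
    return out

img = np.arange(25.0).reshape(5, 5)
median = rank_filter3x3(img, 0)
```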
Figure 12.7: Examples of erosion (left) and dilation filter (right).
12.8 Fourier Transform (FFT)
The fourier transformation is a linear transformation from space or time to
the frequency domain. Instead of displaying the data in the way it was mea-
sured, after applying the fourier transform it is viewed as signal strengths
of certain frequencies. Since the base functions of the fourier transform
are orthogonal and complete, there is no information lost in the transfor-
mation. Applying the inverse fourier transformation will recover the original
data again.
The fourier transform (FT) is defined using integrals on continuous types
of data. On the other hand, a computer stores digitised values at regular
intervals in space. Here, the discrete fourier transform (DFT), defined as the
sum over discrete values, must be used. Finally, the fast fourier transform
(FFT) is just a fast algorithm for computing the DFT, which reduces the time
for calculation tremendously. Due to the fast FFT-algorithm the FFT became
so widely spread as a processing tool for all kinds of data in numerous
applications.
For N data points about N*N numerical operations are necessary
using the DFT. Using the FFT this is reduced to N*log2(N), which for typical
data sizes like 512 reduces the calculation time by a factor of 57!
The main applications of FFTs are in filtering techniques, since
in frequency space many filters reduce to a simple multiplication.
Most of the time a low-pass filter is used to eliminate high-frequency noise,
or a high-pass filter may suppress unwanted slow changes in the background.
FFTs are also used as a fast method for computing correlation between two
data ranges, or to compute convolution and deconvolution of signals.
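In the frequency domain, correlating two images reduces to multiplying one transform with the complex conjugate of the other; a sketch of FFT-based circular cross-correlation as used in PIV-style processing:

```python
import numpy as np

def cross_correlate(a, b):
    # multiply the conjugate of one spectrum with the other and
    # transform back: circular cross-correlation of a and b
    return np.real(np.fft.ifft2(np.conj(np.fft.fft2(a)) * np.fft.fft2(b)))

a = np.zeros((32, 32)); a[10, 12] = 1.0
b = np.roll(np.roll(a, 3, axis=0), 5, axis=1)        # shift b by (3, 5)
peak = np.unravel_index(np.argmax(cross_correlate(a, b)), a.shape)
# the correlation peak sits at the applied shift: peak == (3, 5)
```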
12.8.1 Basic FFT and Inverse FFT on Images
The LaVision Pro Package uses the basic butterfly algorithm enhanced by
the possibility of using arbitrarily long arrays of data. Instead of using the
standard 128, 256, 512, ... data lengths, any number of data points can be
used (it must be even, though) while still leading to a very fast algorithm
(Singleton).
The time of computation depends on how N (the number of data points)
can be separated into factors. If N is close to a prime number (e.g. 2 * 137)
the computation is not much faster than in the slow DFT way of
calculation. If N can be separated into low prime numbers (e.g. 2*2*3*3)
the algorithm is very fast. Fortunately, the number of pixels used in CCD
cameras is most of the time ideal for applying the FFT.
If you take a part of an image for the FFT, simply be aware that some
lengths of data points might require longer computation times.
The formula for calculating the FFT on a data array $x_j$ is:

$$F_k = \frac{1}{N} \sum_{j=0}^{N-1} x_j \, e^{-i\,2\pi j k/N} = \frac{1}{N} \sum_{j=0}^{N-1} x_j \left( \cos(2\pi j k/N) - i \sin(2\pi j k/N) \right) \tag{12.1}$$
where i is the imaginary unit (i² = -1). The inverse FFT is defined as:

    x_j = Σ_{k=0}^{N-1} F_k · e^(+i 2π j k / N)        (12.2)
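Equations 12.1 and 12.2 (forward transform with the 1/N factor, inverse without) can be transcribed directly into a few lines of Python. This is an illustration of the math only, not of the DaVis buffer layout:

```python
import numpy as np

def forward(x):
    # equation 12.1: F_k = (1/N) * sum_j x_j * exp(-i 2 pi j k / N)
    n = len(x)
    idx = np.arange(n)
    return np.array([(x * np.exp(-2j * np.pi * idx * k / n)).sum()
                     for k in range(n)]) / n

def inverse(f):
    # equation 12.2: x_j = sum_k F_k * exp(+i 2 pi j k / N)
    n = len(f)
    idx = np.arange(n)
    return np.array([(f * np.exp(2j * np.pi * idx * j / n)).sum()
                     for j in range(n)])

x = np.array([1.0, 5.0, 2.0, 4.0, 3.0, 0.0])
roundtrip = inverse(forward(x))
# the roundtrip reproduces the input up to rounding errors
```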
The FFT can be applied to 2D image data horizontally, vertically, or in
both directions, the latter being called a 2D-FFT.
The horizontal FFT takes each row of the image and calculates the FFT of
this data array. So, when applied to an image of size N*M (N columns and
M rows), the resulting image is also of size N*M, and the first row of the
FFT image is the Fourier transform of the first row of the original image.
All rows remain independent. In terms of frequencies, each row of the
original image is analysed for horizontal frequencies (which may be
visualised as horizontal arrows). The wavelengths corresponding to the
FFT frequencies span the range from 1 pixel up to N pixels. The vertical
FFT works the same way on the columns of the image.
Since the FFT is linear, both the horizontal- and the vertical-FFT can be
executed one after the other to yield the 2D-FFT, which is a measure of
frequencies in arbitrary directions. The formula for the 2D-FFT is:

    F_nm = (1/(N·M)) · Σ_{k=0}^{M-1} Σ_{j=0}^{N-1} x_jk · e^(-i 2π k m / M) · e^(-i 2π j n / N)        (12.3)
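The separability used here (rows first, then columns) can be checked with any generic FFT library. numpy uses the same e^(-i...) sign convention but omits the 1/(N·M) factor, so only the normalisation differs from equation 12.3:

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((6, 8))

# horizontal FFT (each row), then vertical FFT (each column)
rows_then_cols = np.fft.fft(np.fft.fft(img, axis=1), axis=0)

# direct 2D transform
direct_2d = np.fft.fft2(img)
# both ways give the same result, confirming the separability of eq. 12.3
```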
Since the FFT in principle works on complex numbers and almost always
returns complex numbers, even if the original data is real, we need to be
concerned about how the FFT data is stored in the buffers of the DaVis
program.
First of all, the FFT-operation only works on FLOAT-buffers and it always
stores the result as a FLOAT-buffer. But since the result of the FFT is com-
plex, we would need to store the result in two buffers, one for the real part
and one for the imaginary part of the data. This would be true in general,
but the software makes use of a feature which is true when the input data
is just real instead of the general complex case.
When the input data is real, the output data is symmetric in its real
part and anti-symmetric in its imaginary part. So only half the data needs
to be stored; the other half is simply the same or the negative. Exactly
how the software stores the result of the FFT in a buffer is explained in
an extra section, because it is not so simple to explain and, in general,
it is of no interest for you.
What is important, though, is where the software stores the low
frequencies and where it stores the high frequencies:
Rule: low frequencies are in the middle of the image,
high frequencies are at the outside.
This is different from what equation 12.1 suggests, where the lowest
frequencies (k = 0, 1, ...) are at the beginning of the output data array
and the higher frequencies are to the right. The reason for storing the
data according to the above rule becomes apparent when standard filters
are applied to the FFT images: the filters are then always symmetric
around the center, which makes them very easy to generate.
The lowest frequency (k=0 in eqn. 12.1), stored at the center, is the aver-
age over all pixels (cos(..) = 1, sin(...) = 0). This average value is stored
at the location N/2-1. For example, if N is 384, then the average is at pixel
384/2-1 = 191 (the first pixel is pixel 0). Or, if the 2D-FFT is used on an
image of size 384 x 286 pixel, then the center with the average value is at
position (191/142).
The inverse-FFT, of course, takes the storage details into account before
using equation 12.2. The result of the inverse-FFT is only real, since we
started with a real image. Even after applying symmetric filters to FFT-
data, the result of the inverse-FFT will always be real again.
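Generic FFT libraries offer a similar "low frequencies in the middle" view. As a point of comparison only (this is numpy's convention, not the DaVis one): numpy's fftshift places the k = 0 average term at index N//2, one pixel to the right of the DaVis location N/2 - 1:

```python
import numpy as np

n = 384
x = np.full(n, 7.0)              # constant signal: all energy in the average (k = 0)
spectrum = np.fft.fftshift(np.fft.fft(x))

center = int(np.argmax(np.abs(spectrum)))
# numpy's fftshift puts the k = 0 (average) term at index N//2 = 192,
# one pixel to the right of the DaVis convention of N/2 - 1 = 191
```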
Exercise
Let's use the DaVis program in Classic mode to try out the basic FFT and
IFFT functions:
1.) Load an image into buffer 1.
2.) Change buffer 1 into a FLOAT buffer: make buffer 1 active, click on it
again with the right mouse button to open the Data Properties dialog, go
to the Properties card, set the type of the buffer to FLOAT and click on
OK. Remember that the FFT only works on FLOAT buffers.
3.) Go to menu Compute FFT/IFFT.
4.) Set the calculation mode to 2D. You may also try horizontal or
vertical to see the differences in the output.
5.) Run FFT, use all pixels of the image (select Rectangle 0 as active
rectangle) and store the result in buffer 2.
6.) Display buffer 2. Probably you don't see much. Go to resolution 64 or
even multiply buffer 2 by 100 or 1000 with F7 and b2=b2*100 in order to
see more details.
Notice how most of the image energy is in the center at the low
frequencies and at the outer rim of the image, where the high-frequency
noise is stored.
7.) Run IFFT to convert the image back. Store the result in buffer 3. If
you have multiplied buffer 2 by 100 before, then the image intensity is
now 100 times higher.
Notice that the original image and buffer 3 are for all practical purposes
exactly the same. Differences only appear in some minor digit due to
rounding errors.
12.8.2 FFT-Filter
Once the data has been Fourier-transformed, it is easy to apply a standard
frequency filter by simply multiplying the Fourier data with an
equal-length vector, which suppresses or enhances certain frequencies:

    F'_k = F_k · C_k        k = 0, 1, ..., N-1        (12.4)

where C_k is a frequency-dependent filter function, usually in the range
of 0.0 to 1.0.
There are numerous ways of generating filters. For example, a low-pass
filter is 1.0 for low frequencies and goes to 0.0 for high frequencies. A
high-pass filter will be 0.0 for low frequencies and goes to 1.0 for high
frequencies.
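The multiplication in equation 12.4 is all there is to frequency filtering. This is a language-neutral sketch with plain numpy (zero frequency at the array start, as in equation 12.1); the cutoff values are arbitrary illustration choices:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 512
t = np.arange(n)
signal = np.sin(2 * np.pi * 4 * t / n)          # slow oscillation
noisy = signal + 0.5 * rng.standard_normal(n)   # plus high-frequency noise

freq = np.fft.fftfreq(n)                        # cycles per sample, in [-0.5, 0.5)
c = 1.0 / (1.0 + (freq / 0.02) ** 4)            # smooth low-pass filter C_k

filtered = np.fft.ifft(np.fft.fft(noisy) * c).real
noise_before = (noisy - signal).std()
noise_after = (filtered - signal).std()
# the residual noise drops substantially after low-pass filtering
```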
Translated to our way of storing the FFT-data with low frequencies in the
middle, this means, for example for a low-pass filter, that the value of the
filter function is 1.0 in the middle of the image and goes to 0.0 towards the
outer rim of the image. So a very crude low-pass filter will simply set all
pixels outside an inner rectangle to 0.0 (using the function Set Outside
Rect. = C).
A filter needs to be defined only over half the image length, since it is
symmetric around the center. For example, if the image size is 200 pixels
(in either dimension) then it is sufficient to define the filter up to pixel 100.
In case the filter function length does not correspond with the image size,
or in case a 2D-filter-function is applied to an image, where the width is not
the same as the height (the standard case), the filter function is extended
properly or cut to length if necessary.
For example, if the FFT has been computed in 2D mode, the image size is
384 x 286 pixels, and the filter length is two times 100 as in the above
example, then all pixels beyond the end of the filter (100) are set to
the last value of the filter function at point N/2, which is 0 in the
above example. This simply means that the low-pass filter is set to 0 for
the very high frequencies. To be more precise, the center of the image is
at 191/142; in the horizontal direction all pixels before pixel
191-100=91 and beyond pixel 191+100=291 are set to 0, and in the vertical
direction all pixels before 142-100=42 and beyond pixel 142+100=242 are
set to 0.
In the 2D-case the image is multiplied with the filter function vertically and
also horizontally. In the 1D-case, the image is only multiplied once either
horizontally or vertically.
If the filter function is longer than the image data array it is simply cut to
length. Of course, the program always centers the filter function correctly.
When applying a high-pass filter it is very important to use the correct
filter length, since cutting or extending the filter may produce unwanted
results. In the 2D case it becomes necessary to use two different filters,
vertical and horizontal, of different lengths to account for the fact
that the image height differs from the image width.
Function FFT Filter in the Fast Fourier Transform dialog (see page 103)
will use the first profile of the profile buffer. In the 1D-case the filter is only
applied horizontally or vertically, in the 2D-case it is done in both directions.
Finally you can open the Buffer Operations dialog and use functions Set
Inside Rect. = C and Set Outside Rect. = C for a fast low-pass or high-
pass filter. We advise not to use these functions, though, since very crude
filters will introduce extra noise into the source image as we show later on.
12.8.3 Using CL to design your own filter
You are not limited to the filter design menu to manipulate the FFT-data.
Since the FFT-data is a standard FLOAT-buffer, it can be processed in any
other way the program allows. Specifically you may design your own filter
function (stored in a profile) and then multiply the FFT-image with the filter
function. Or you may create a complete 2D-buffer to multiply the FFT-data
with.
All functions in the filter design menu are actually CL macros. The
filters are generated using subroutine MakeFFTFilter() (profile 19),
copied to profiles 17 (horizontal) and 18 (vertical) and processed
further to account for the actual image size. Finally, the FFT data is
multiplied with profiles 17 and 18, vertically, horizontally, or both.
Below is a short CL-programming sequence to show how to use CL to apply
a simple low-pass filter:
// simple horizontal low-pass filter
// let us assume the source image is in buffer 1
GetBufferSize( 1, nx, ny, type )
// check image size
FFT( 1, 0, 0, nx-1, ny-1, 2, 0 )
// horizontal FFT on whole image
// FFT is now in buffer 2
// here comes the definition of the filter function:
SetProfileBoundary( 19, 0, nx-1 )
// set length of profile 19
for ( i = nx/2-1; i<nx-1; i++ )
// right side of filter function
F[ 19, i ] = 1.0 - ( i - nx/2 + 1 ) / ( nx /2 );
// simple straight line to 0.0
for ( i = 0; i<nx/2-2; i++ )
// fill left side of profile symmetrically
F[ 19, i ] = F[ 19, nx - i - 2 ] // center = nx/2 - 1
// pixel 0 = pixel nx-2, 1 = nx-3, ...
B[2] = B[2] * P[19] // apply filter horizontally
IFFT( 2, 3, 0 ) // horizontal IFFT
The next example shows how to multiply a buffer vertically with a profile.
This is not so straightforward, since multiplication of a buffer with a
profile is always done horizontally (row by row). The trick is to
generate a column, which is then multiplied with the FFT data.
// Vertical multiplication of buffer 1 with profile 19.
// The result is stored in buffer 2.
B[2] = B[1] // copy buffer 1 to buffer 2
// the important side effect is that buffer 2
// has the same size as buffer 1.
C[2,0] = P[19] // copy profile 19 to column 0 of buffer 2
B[2] = B[1] * C[2,0] // this is the vertical multiplication
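For comparison, the same vertical multiplication is a one-liner in array languages with broadcasting. This numpy sketch mirrors the CL trick above (the variable names b1 and p19 just echo the CL buffer and profile numbers):

```python
import numpy as np

b1 = np.arange(12.0).reshape(4, 3)      # image buffer, 4 rows x 3 columns
p19 = np.array([1.0, 0.5, 0.25, 0.0])   # filter profile, one value per row

# multiply every column of b1 element-wise with the profile:
b2 = b1 * p19[:, None]                  # p19 reshaped to a column, then broadcast
```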
12.8.4 The Art of Filter Design
The important rule in defining FFT filters is that the filter function
should be as smooth as possible. There are only very few exceptions to
this rule, for example when it is necessary to filter out one well-known
frequency, and only this one.
The transition functions in the filter design menu are chosen so that there
is a smooth transition from one segment to the next.
To illustrate the effect of using a crude filter with sharp edges, a
sample image is taken and filtered both with a crude filter and with a
smooth filter. The sample image contains a rectangle of intensity 100
while the background is at 20 counts.
First of all we transform the image into Fourier space. Note that the
Fourier transform of a square has its highest value in the center and
decreases progressively towards the outside. A sharp edge is the
superposition of a series of progressively higher frequencies with
progressively decreasing amplitude. Smoothing is the elimination of the
higher frequencies, so that the transition at the edge becomes smoother.
Then a crude low-pass filter is applied with function Set Outside Rect. =
C in menu Buffer Operation, setting all pixels to zero which are roughly
more than 10 pixels away from the center. After transforming back with
IFFT we note an interesting effect in the final image. While the edges
have been smoothed, as we expected, there is also a reverberation of the
square in the rest of the image. The background does not stay constant,
but now swings around its former level. The reason for this is that the
low frequencies are no longer compensated by higher frequencies to
produce a stable background. This means that our crude FFT filter
actually introduced additional noise into the image!
Let us compare it with a smooth filter designed with the filter design menu.
Higher frequencies are not simply set to zero, but fade out towards zero.
The result is that the background is now much smoother than before. Note
that the top region of the square is also now much more constant at
intensity 100.
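The ringing described above (a hard cutoff turning a flat background into oscillations) is easy to reproduce. This is a generic 1D illustration with arbitrary sizes and cutoffs, not the DaVis sample image:

```python
import numpy as np

n = 256
signal = np.full(n, 20.0)
signal[96:160] = 100.0                      # rectangle on a flat background

spectrum = np.fft.fft(signal)
freq = np.abs(np.fft.fftfreq(n))

crude = spectrum * (freq < 0.05)            # hard cutoff: 1 inside, 0 outside
taper = np.clip((0.1 - freq) / 0.05, 0.0, 1.0)
smooth = spectrum * taper                   # same passband, linear fade to 0

crude_img = np.fft.ifft(crude).real
smooth_img = np.fft.ifft(smooth).real

# background ripple (away from the rectangle) is larger for the crude filter
crude_ripple = crude_img[10:60].std()
smooth_ripple = smooth_img[10:60].std()
```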
Figure 12.8: Original image with a small rectangle (top), the FFT (left)
and Power-FFT (right).
12.8.5 Mathematical Operation on FFTs
There are some useful functions in menu FFT math operations to work with
FFT data in more detail.
First of all, since the complex FFT data is stored in a scrambled way to
fit into a FLOAT buffer, it is useful to be able to separate the complex
FFT buffer into two buffers, one holding the amplitude of the complex
numbers and the other the phase. Mathematically, every complex number can
be expressed not only by a real and an imaginary part, but also in terms
of amplitude and phase, which for FFT data is a much more useful
representation of the real and imaginary values.
Any complex number F_k can be expressed as

    F_k = R_k + i·I_k = A_k · e^(i·P_k) = A_k · ( cos P_k + i · sin P_k )        (12.5)
with
Figure 12.9: Crude low-pass filter (left) versus smooth filter (right).
    A_k = sqrt( R_k·R_k + I_k·I_k )        (12.6)
and
    P_k = arctan( I_k / R_k )        (12.7)
where R_k and I_k are the real and imaginary part of F_k, and A_k and P_k
are the amplitude and the phase of F_k.
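Equations 12.5 to 12.7 translate directly into code. Note that a robust implementation uses the two-argument arctangent (atan2) rather than arctan(I/R), so that the phase lands in the correct quadrant:

```python
import numpy as np

f = np.fft.fft(np.array([1.0, 4.0, 2.0, 8.0, 5.0, 7.0]))

amplitude = np.sqrt(f.real ** 2 + f.imag ** 2)   # equation 12.6
phase = np.arctan2(f.imag, f.real)               # equation 12.7 (quadrant-safe)

recombined = amplitude * np.exp(1j * phase)      # equation 12.5
# recombining amplitude and phase recovers the complex FFT values exactly
```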
Function Amplitude computes the amplitude image of the FFT image;
function Phase computes the phase. Again the low frequencies are towards
the center of the image and the high frequencies towards the outside. The
section on data storage gives more detail on where exactly every F_k of
our fundamental Fourier equation 12.1 is stored. Function Combine
Amplitude and Phase recombines an amplitude and a phase image into an
FFT image, which can then be transformed back using IFFT. Note that these
three functions must know the type of the FFT data (horizontal, vertical
or 2D-FFT), because in each case the FFT data is stored differently. For
example, when the original image has been transformed using the 2D-FFT
mode, then all subsequent operations must also be done in 2D mode
(function Mode: must stay the same).
The separation of an FFT-image into amplitude and phase can be used to
do some special filtering on the phase. In the standard case, when we
apply a filter to the complete FFT-image, this is the same as if we apply
the filter to the amplitude image only. But there are some special
filtering techniques which require, for example, enhancing or changing
the phase separately. There are a number of good books on this topic; see
the literature list for references (page 98).
The most common application of the amplitude image is the computation of
the power spectrum, which is simply the square of the amplitude. Function
Power-FFT computes the amplitude of the FFT image and then squares the
buffer.
Mathematically squaring the amplitude is the same as multiplying a com-
plex number with its complex-conjugate counterpart. Complex conjugate
means that the imaginary part is changed in sign.
    F_k · F*_k = (R_k + i·I_k) · (R_k - i·I_k)
               = R_k·R_k - i·R_k·I_k + i·R_k·I_k - i²·I_k·I_k
               = R_k·R_k + I_k·I_k        (12.8)
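Equation 12.8 says the power spectrum can be computed either as the squared amplitude or as the product with the complex conjugate; both give the same real-valued result:

```python
import numpy as np

f = np.fft.fft(np.array([3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0]))

power_from_amplitude = np.abs(f) ** 2          # A_k squared
power_from_conjugate = (f * np.conj(f)).real   # R_k^2 + I_k^2
# the product F * conj(F) has no imaginary part, as equation 12.8 shows
```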
Generalising the complex multiplication to two different numbers,
function multiply A B* computes the complex multiplication between an
FFT image A and the complex conjugate of a second FFT image B. If B is
the same as A, this function computes the power spectrum according to the
above equation.
Similarly, function multiply A B does a complex multiplication without
taking the complex conjugate of B. These two functions are mainly used
for the computation of correlation and convolution functions.
There is no extra function for complex division, since it is rarely used
and can be constructed from the above functions. Defining complex
division as the inverse of the complex multiplication, it is best
visualised in terms of amplitude and phase. Here the complex
multiplication is simply the multiplication of the amplitudes while
adding the phases:
    F_k · G_k = ( A_k · e^(i·P_k) ) · ( B_k · e^(i·Q_k) )
              = ( A_k · B_k ) · e^(i·(P_k + Q_k))        (12.9)
Therefore complex division is the division of amplitudes while subtracting
the phases:
    F_k / G_k = ( A_k / B_k ) · e^(i·(P_k - Q_k))        (12.10)
In the same way, division by the complex conjugate G* is done by adding
the phases.
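The amplitude/phase rules in equations 12.9 and 12.10 can be verified numerically: multiplying amplitudes and adding phases reproduces the ordinary complex product, and dividing amplitudes while subtracting phases reproduces the quotient:

```python
import numpy as np

a = 3.0 * np.exp(1j * 0.7)    # amplitude 3, phase 0.7
b = 2.0 * np.exp(1j * 0.2)    # amplitude 2, phase 0.2

product = a * b               # amplitudes multiply, phases add
quotient = a / b              # amplitudes divide, phases subtract
```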
12.8.6 FFT Data Storage
The FFT data is stored in one FLOAT buffer, using the fact that the
Fourier transform of a real data array is symmetric in its real part and
anti-symmetric in its imaginary part. Using the notation of our basic
equations 12.1 and 12.5:
    F_k = R_k + i·I_k        with 0 ≤ k ≤ N-1        (12.11)
the symmetry is given by

    R_0 = data average
    R_1 = R_{N-1},  R_2 = R_{N-2},  ...,  R_{N/2}
    I_0 = 0,  I_1 = -I_{N-1},  I_2 = -I_{N-2},  ...,  I_{N/2} = 0        (12.12)
Stored in the FFT buffer are only R_0 up to R_{N/2} and I_1 up to
I_{N/2-1}, i.e. a total of N real numbers. The storage is done in such a
way that R_0 is stored at the center of the image, R_1 to R_{N/2} to the
right of it, and I_1 to I_{N/2-1} to the left. The center of the image is
at pixel location N/2-1. The highest frequency is actually N/2; indices
higher than that are equivalent to lower frequencies.
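The symmetry in equation 12.12 that makes this packing possible is easy to confirm with a generic FFT: for real input, the real part is symmetric, the imaginary part is anti-symmetric, and I_0 and I_{N/2} vanish:

```python
import numpy as np

x = np.array([2.0, 7.0, 1.0, 8.0, 2.0, 8.0, 1.0, 8.0, 2.0, 8.0])  # real data, N = 10
f = np.fft.fft(x)
n = len(x)

# real part symmetric: R_k = R_{N-k}; imaginary part anti-symmetric: I_k = -I_{N-k}
real_symmetric = np.allclose(f[1:].real, f[1:][::-1].real)
imag_antisymmetric = np.allclose(f[1:].imag, -f[1:][::-1].imag)
# I_0 = I_{N/2} = 0, so only N independent real numbers remain
ends_zero = abs(f[0].imag) < 1e-9 and abs(f[n // 2].imag) < 1e-9
```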
Example: N=10, horizontal FFT. The FFT is stored as:

    pix:   0         1         2         3         4     5        6        7        8        9
          I_4=-I_6  I_3=-I_7  I_2=-I_8  I_1=-I_9  R_0   R_1=R_9  R_2=R_8  R_3=R_7  R_4=R_6  R_5
Horizontal and vertical FFTs are computed independently. This means that
the 2D-FFT is done by first computing the horizontal FFT and storing the
result in the above form in one buffer, then computing the vertical FFT on
that buffer and again doing the rearrangement. Therefore in the 2D-case
the final storage arrangement is quite complicated. There is actually no
need to compute it, since it is much easier to use function Amplitude and
Phase, in case direct access to each individual FFT-data point is required.
Using equations 12.5 to 12.7 as the definition of the amplitude and
phase, functions Amplitude and Phase store the result in the following
way, again using our example of N=10:
    pix:   0        1        2        3        4     5        6        7        8        9
          A_6=A_4  A_7=A_3  A_8=A_2  A_9=A_1  A_0   A_1=A_9  A_2=A_8  A_3=A_7  A_4=A_6  A_5
and
    pix:   0         1         2         3         4      5         6         7         8         9
          P_6=-P_4  P_7=-P_3  P_8=-P_2  P_9=-P_1  P_0=0  P_1=-P_9  P_2=-P_8  P_3=-P_7  P_4=-P_6  P_5
Similarly, in the 2D case the storage is (N=8, M=6, values A_nm):

    A_54  A_64  A_74  A_04  A_14  A_24  A_34  A_44
    A_55  A_65  A_75  A_05  A_15  A_25  A_35  A_45
    A_50  A_60  A_70  A_00  A_10  A_20  A_30  A_40
    A_51  A_61  A_71  A_01  A_11  A_21  A_31  A_41
    A_52  A_62  A_72  A_02  A_12  A_22  A_32  A_42
    A_53  A_63  A_73  A_03  A_13  A_23  A_33  A_43
Since the power spectrum is the square of the amplitude, it is stored in
the same way as the amplitude or phase image.
2D-FFT filters must be designed in such a way that they are symmetric
around R_00 or A_00.
12.9 Correlation and Convolution
The correlation between two arrays of data f_i and g_i of length N is
defined as

    Corr_k = (1/N) · Σ_{i=0}^{N-1} f(i) · g(k+i)        (12.13)
The factor 1/N is sometimes omitted, but out of practical considerations
it should be included, since otherwise the numbers get much too big to be
displayed in any way.
Corr_k is a measure of how closely the two data arrays agree with each
other when they are shifted by k data points. The correlation therefore
tells whether a certain structure in one data array appears again in the
second data array, up to some displacement defined by k. Correlations
are, for example, used for computing flow or particle velocity fields,
where two images are taken shortly one after the other and the
correlation between the two images is computed. The correlation can be
computed on the whole image, giving a measure of the average flow speed
and direction, or it can be done on local sub-images to compute a 2D
velocity field. The sub-images, of course, must still be large enough so
that each one contains enough information to compute a correlation.
The correlation can be easily extended into 2D:
    Corr_kl = (1/(N·M)) · Σ_{i=0}^{N-1} Σ_{j=0}^{M-1} f(i,j) · g(k+i, l+j)        (12.14)
Mathematically it would be very time-consuming to compute the correlation
according to the above equation, especially a 2D correlation of two
images with a large number of pixels. But there is a mathematical
relation between the correlation and the FFT: the correlation is the
inverse FFT of the product of the Fourier transform of the first data
array with the complex-conjugate FFT of the second data array:

    Corr = IFFT( FFT(f) · FFT(g)* )        (12.15)
The time saving is more than a factor of 1000 for a 512 x 512 image.
In menu Correlation/convolution, function Correlation computes the
correlation according to the above recipe: first the FFTs of the two
source images are computed, then the complex multiplication is done, and
finally the result is transformed back.
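The recipe of equation 12.15 can be tried out generically: correlating a signal with a shifted copy of itself produces a peak at the lag that matches the shift. This is a 1D numpy sketch; note that the exact peak position depends on which input is conjugated:

```python
import numpy as np

rng = np.random.default_rng(2)
f = rng.standard_normal(128)
shift = 17
g = np.roll(f, shift)            # g is f displaced by 17 samples (cyclic)

# correlation via the FFT recipe: IFFT( FFT(f) * conj(FFT(g)) ), with 1/N
corr = np.fft.ifft(np.fft.fft(f) * np.conj(np.fft.fft(g))).real / len(f)

peak = int(np.argmax(corr))
# with this sign convention the peak sits at N - shift; conjugating the
# other input instead would move it to +shift
```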
12.9.1 Autocorrelation
The autocorrelation is the correlation of an image with itself.
Mathematically this is simply the inverse FFT of the power spectrum,
since the power spectrum is the result of the complex multiplication
F · F*. Of course, this can be computed with function Correlation by
simply entering the same buffer for both source images. The
autocorrelation always has its highest value at k=0, since correlating
the image with itself (without displacement) is a perfect correlation.
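As stated above, the autocorrelation is the inverse FFT of the power spectrum and always peaks at zero displacement; a minimal generic check:

```python
import numpy as np

x = np.random.default_rng(3).standard_normal(64)
spectrum = np.fft.fft(x)
power = (spectrum * np.conj(spectrum)).real   # power spectrum

autocorr = np.fft.ifft(power).real / len(x)
peak_position = int(np.argmax(autocorr))      # perfect match at zero displacement
```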
12.9.2 Convolution
The convolution between two arrays of data f_i and g_i of length N is
defined as

    Conv_k = (1/N) · Σ_{i=0}^{N-1} f(i) · g(k-i)        (12.16)
The convolution is nearly the same as the correlation, except that the sec-
ond data array has been flipped over. So, for example, it can be used to
detect similar structures in two different images, where the structure has
been reversed in the second image.
Similarly the convolution can be computed via FFT by using
    Conv = IFFT( FFT(f) · FFT(g) )        (12.17)

Note that the multiplication is done without taking the complex conjugate
of FFT(g).
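Equation 12.17 can be checked against a direct evaluation of equation 12.16; with cyclic indexing both give the same result:

```python
import numpy as np

n = 16
rng = np.random.default_rng(4)
f = rng.standard_normal(n)
g = rng.standard_normal(n)

# direct evaluation of equation 12.16 (indices taken cyclically)
direct = np.array([sum(f[i] * g[(k - i) % n] for i in range(n))
                   for k in range(n)]) / n

# FFT recipe of equation 12.17, with the same 1/N normalisation
via_fft = np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)).real / n
```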
12.9.3 Cyclic versus Non-Cyclic Computations
In the definitions of the correlation and the convolution, equations
12.13 and 12.16, you will notice that an index such as in g(k+i) runs
out of bounds for large k and i. There are two solutions for this problem:
Cyclic computation:
In this case it is assumed that the source data repeats itself
indefinitely to the right and left, i.e. the cyclic extension of the
data is

    g(N) = g(0),  g(N+1) = g(1),  g(N+2) = g(2),  ...        (12.18)
and similarly to the left in case of negative indices for computing the
convolution. The advantage of using the cyclic extension of the data is
that the computation of the correlation or convolution by using the fourier
transform can be done without further modifications, since the FFT is, by
definition, also cyclic, i.e. it also assumes that the data will repeat itself in
every dimension.
Note that in the cyclic case the result is symmetric, because a
displacement vector of k is the same as a displacement vector of N-k.
Therefore it is sufficient to look at, e.g., the two top quadrants of the
image (in the 2D case), because the other two quadrants do not contain
any additional information.
Figure 12.10: Cyclic autocorrelation
Non-Cyclic computation:
In this case it is assumed that the source data is 0 outside the
specified data range 0 to N-1, i.e. the extension of the data is

    g(N) = 0,  g(N+1) = 0,  g(N+2) = 0,  ...
and similarly to the left in case of negative indices for computing the con-
volution. Another way of understanding this case is that in the equations
12.13 and 12.16 we are only computing the sum for those data values
which actually exist. If any index is outside the bounds, it will not be added
to the sum.
This case is most often physically much more meaningful, because there is
usually no reason to assume that the data will repeat itself with just exactly
that period N we choose to sample the data with.
Figure 12.11: Non-Cyclic autocorrelation
Now the question is whether we can still compute the non-cyclic
correlation and convolution using the FFT. The answer is yes, as long as
we use a certain trick of zero-filling: double the source image and set
all new pixels to zero. Then the FFT is computed just like in the cyclic
case. We end up with a resulting image twice the size of the cyclic case,
which we then cut in half. Since we doubled N (and M in the 2D case) we
need to divide the image by a factor of 2 (1D) or 4 (2D).
Note that the result of the non-cyclic correlation is not symmetric.
This procedure is done automatically by functions non-cyclic correlation
and non-cyclic convolution. Since the FFTs and the IFFT are computed on
an image twice the size, it takes longer than in the cyclic case. Since a
number of intermediate temporary buffers are needed, make sure that you
have not used up all your memory yet. You might have to clean up your
buffers (set them to zero size) before using these functions.
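The zero-filling trick can be demonstrated in a generic 1D setting: padding both inputs to twice the length turns the cyclic FFT result into the non-cyclic (linear) one:

```python
import numpy as np

f = np.array([1.0, 2.0, 3.0, 4.0])
g = np.array([1.0, 1.0, 0.0, 0.0])
n = len(f)

# cyclic convolution: wrap-around mixes the ends of the data
cyclic = np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)).real

# non-cyclic: double the arrays with zeros first, then the same recipe
fz = np.concatenate([f, np.zeros(n)])
gz = np.concatenate([g, np.zeros(n)])
linear = np.fft.ifft(np.fft.fft(fz) * np.fft.fft(gz)).real[: 2 * n - 1]
# "linear" now matches a direct non-cyclic convolution, while "cyclic"
# differs wherever wrap-around contributed
```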
In some cases it is advantageous to fill only one image with zeros. For
example, when doing an autocorrelation on parts of an image, the best
results are achieved when a certain sub-image is picked out and stored in
a separate buffer, and an area twice as big, symmetrically surrounding
that sub-image, is taken as the counterpart for the correlation and
stored in another buffer. The buffer with the sub-image itself is then
doubled in size and filled with zeros. Finally the correlation is
performed.
It is fairly simple to do those steps inside the program. Picking out an area
of interest is done with menu data compression (compression factor = 1).
Changing buffer sizes is done by clicking on the active buffer to go to the
buffer parameter menu. When enlarging a buffer, the newly created areas
are automatically set to 0.
Here it is shown how to do this using CL:
// optimal non-cyclic AUTOCORRELATION of part of an image
// it is assumed that the relevant image is in buffer 1
SetRect( 1, 100, 100, 139, 119 );
// rectangle 1: 40 x 20 pixels in size = sub-image
// (if you let the user specify a rectangle, you
// must make sure that there is enough space around
// the selected rectangle)
SetRect( 2, 80, 90, 159, 129 );
// rectangle 2: 80 x 40 pixels in size
SetBufferSize( 2, 80, 40, 1 );
// make space for sub-image
B[2] = 0; // fill with zeros
SetBufferSize( 3, 80, 40, 1 )
// make space for counter-image
B[3] = 0; // fill with zeros
MoveRectangle( 1, 1, 2, 20, 10 );
// copy sub-image to center of buffer 2 !!
MoveRectangle( 1, 2, 3, 0, 0 )
// copy surrounding area
FFT( 2, 4, 2 ); // 2D-FFT of sub-image
FFT( 3, 5, 2 ); // 2D-FFT of surrounding area
MultiplyFFT( 4, 5, 6, 3+2 ); // 2D-complex mult. A * (B*)
IFFT( 6, 7, 2 ); // inverse FFT = correlation
SetBufferSize( 7, 40, 20, 1 ); // throw away symmetric parts
B[7] = B[7] / 4; // renormalize
12.9.4 Literature
Adrian RJ, Christensen KT, Liu ZC (2000) Analysis and interpretation of
instantaneous turbulent velocity fields, Exp. in Fluids, 29/3, p. 275-290
Ernst, Dr. E., Einführung in die digitale Bildverarbeitung, Franzis Verlag
Jähne, B., Digitale Bildverarbeitung, Springer-Verlag
Jaroslavskij, L.P., Einführung in die Digitale Bildverarbeitung, Hüthig
Verlag
Blahut, R.E., Fast algorithms for digital signal processing, Reading,
Mass., Addison-Wesley
12.10 Nonlinear Filter
The non-linear filters do not work with the simple matrix equation that
the linear filters use. Each non-linear filter has its specific
advantages, but on the other hand they are non-linear operations, i.e.
the average intensity of the image may change.
For the three filters Median, Erosion and Dilatation a k x k area (k =
3, 5, 7, 9 or 11) is taken and all elements are sorted according to their
intensities. Then one element is stored back following a specific rule
for each filter.
Median Filter
The middle element of the sorted k x k area is stored back as the new
center pixel. It eliminates high-frequency noise. As an advantage
compared to many linear filters, it does not eliminate sharp edges.
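Both properties (spike removal, edge preservation) show up already in a minimal 1D sketch of the median rule; this is a generic illustration, not the DaVis implementation:

```python
import statistics

def median_filter_1d(data, k=3):
    """Sliding median of width k (odd); border pixels keep their value."""
    half = k // 2
    out = list(data)
    for i in range(half, len(data) - half):
        out[i] = statistics.median(data[i - half : i + half + 1])
    return out

signal = [20, 20, 20, 200, 20, 20, 100, 100, 100, 100]  # spike at 3, edge at 5/6
cleaned = median_filter_1d(signal)
# the single-pixel spike is removed, but the sharp 20 -> 100 edge survives
```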
Erosion Filter
The element before the middle element of the sorted k x k area is
stored back as the new center pixel. The effect is a kind of erosion:
valleys
become deeper and finally, after having applied the filter several times, the
image will be eroded away towards the background level.
Dilatation Filter
It does exactly the opposite of the Erosion filter: the next element
after the middle element in the sorted k x k area is stored back as the
new center pixel. The effect is that edges become sharper and plateaus
flatten.
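The three sorting rules (Median, Erosion, Dilatation) differ only in which element of the sorted neighbourhood is written back; a generic 1D sketch with a hypothetical helper:

```python
def rank_filter_1d(data, k, offset):
    """Sort each k-wide window; write back the element at middle + offset.
    offset = 0 -> median, -1 -> erosion, +1 -> dilatation."""
    half = k // 2
    out = list(data)
    for i in range(half, len(data) - half):
        window = sorted(data[i - half : i + half + 1])
        out[i] = window[half + offset]
    return out

signal = [5, 9, 1, 7, 3, 8, 2]
eroded = rank_filter_1d(signal, 3, -1)    # valleys deepen
dilated = rank_filter_1d(signal, 3, +1)   # peaks widen
```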
Concentration Filter
Concentrates intensities on the "local maximum" pixel. First the filter
searches for pixels whose intensity is a local maximum. In addition, the
intensity has to be above a background level (set with the parameter
noise level). If such a pixel is found, the intensities of the
surrounding pixels are moved toward the center to concentrate all power
in the middle.
Intensity normalization (image-avg)/rms
The image normalization function subtracts the local average (over the
given filter length) and divides the result by the local intensity
standard deviation. The result is an image with zero local mean intensity
and a fixed dynamic range everywhere.
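A minimal 1D sketch of the (image - avg)/rms idea, using a simple sliding window (window width and the division-by-zero guard are illustration choices, not the DaVis parameters):

```python
import statistics

def normalize(data, k=5):
    """(value - local mean) / local standard deviation, window width k."""
    half = k // 2
    out = []
    for i in range(len(data)):
        window = data[max(0, i - half) : i + half + 1]
        mean = statistics.fmean(window)
        std = statistics.pstdev(window) or 1.0  # guard against flat windows
        out.append((data[i] - mean) / std)
    return out

data = [10.0, 12.0, 9.0, 11.0, 300.0, 310.0, 295.0, 305.0]
normed = normalize(data)
# bright and dark regions end up on a comparable scale around zero
```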
12.10.1 The Math behind NonlinearFilter
Sliding Average Filter:
This filter computes a local average over a specified scale length by
applying the following equation four times: going from left to right,
from right to left, from top to bottom and from bottom to top through the
image:

    S_avg(0) = I(0)

and

    S_avg(i) = (n-1)/n * S_avg(i-1) + 1/n * I(i)        for i ≥ 1

where I(i) is the pixel intensity at a certain pixel i, S_avg(i) is the
computed average at that pixel, and S_avg(i-1) is the computed average at
the previous pixel. n is the scale length (must be n ≥ 2), which
corresponds roughly to the distance over which the average is computed.
The effect is a sliding average, where each new pixel is added with a
small weighting factor. This type of filter is much faster than any
Fourier-type average filter or a linear n x n matrix filter, since very
few multiplications are necessary per pixel, independent of the scale
length. There is a small edge effect at the image borders, but by far not
as severe as when using an FFT filter. The only drawback is that the
image intensities are shifted a bit in the direction of processing (an
intensity peak becomes smoothed and the peak center is shifted by n/2).
But this is compensated by going not only from left to right, but also
from right to left.
The exact equations for the 4 passes are decribed below:
Going from left to right: For all rows y do:

S_avg(x,y) = (n-1)/n * S_avg(x-1,y) + 1/n * I(x,y)

Going from right to left: For all rows y do:

S_avg(x,y) = (n-1)/n * S_avg(x+1,y) + 1/n * S_avg(x,y)

Going from top to bottom: For all columns x do:

S_avg(x,y) = (n-1)/n * S_avg(x,y-1) + 1/n * S_avg(x,y)

Going from bottom to top: For all columns x do:

S_avg(x,y) = (n-1)/n * S_avg(x,y+1) + 1/n * S_avg(x,y)

For example, if the scale length is 10 pixels, then

S_avg(i) = 0.9 * S_avg(i-1) + 0.1 * I(i)
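The four passes can be sketched in Python as follows (a schematic illustration of the equations above, not the DaVis source):

```python
import numpy as np

def sliding_average(img, n):
    """4-pass recursive sliding average: left-to-right, right-to-left,
    top-to-bottom, bottom-to-top, following the recursions above."""
    s = img.astype(float).copy()
    a, b = (n - 1) / n, 1.0 / n
    # Pass 1, left to right: at column x, s[:, x] still holds I(x,y)
    for x in range(1, s.shape[1]):
        s[:, x] = a * s[:, x - 1] + b * s[:, x]
    # Pass 2, right to left, applied to the result of pass 1
    for x in range(s.shape[1] - 2, -1, -1):
        s[:, x] = a * s[:, x + 1] + b * s[:, x]
    # Pass 3, top to bottom
    for y in range(1, s.shape[0]):
        s[y, :] = a * s[y - 1, :] + b * s[y, :]
    # Pass 4, bottom to top
    for y in range(s.shape[0] - 2, -1, -1):
        s[y, :] = a * s[y + 1, :] + b * s[y, :]
    return s

# An isolated intensity peak is smoothed but, thanks to the opposite-direction
# passes, is not shifted to one side.
img = np.zeros((9, 9))
img[4, 4] = 100.0
out = sliding_average(img, 3)
```

A constant image passes through unchanged, since each recursion step is a weighted average of equal values.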
Sliding Minimum Filter:
This filter computes a local minimum over a specified scale length. The filter follows the same principle as the sliding average, but with slightly different equations:

With m = n/2 (half of the scale length), do

Step 1, going from left to right: For all rows y do:

fmin = S_avg(0,y) = I(0,y)

and then, for each following x:

if ( I(x,y) > fmin )
    fmin = (m-1)/m * fmin + 1/m * I(x,y)
else
    fmin = I(x,y)

S_avg(x,y) = (n-1)/n * S_avg(x-1,y) + 1/n * fmin

where fmin is an intermediate variable storing a sliding minimum over half the scale length, which again is smoothed over the full scale length.

Steps 2-4: same as for the sliding average.
Note that this filter can sometimes still produce pixel intensities that are slightly above the original values. The strict sliding minimum filter explained below stays strictly below the original intensities, but is not as smooth as this filter.
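Step 1 can be sketched as follows for a single row (our own illustration; DaVis applies it to all rows, followed by the remaining three sliding-average passes):

```python
import numpy as np

def sliding_min_pass(row, n):
    """Step 1 of the sliding minimum: left-to-right over one row.
    fmin tracks a sliding minimum over half the scale length,
    which is then smoothed over the full scale length."""
    m = max(n // 2, 1)
    s = row.astype(float).copy()
    fmin = s[0]
    for x in range(1, len(s)):
        if row[x] > fmin:
            fmin = (m - 1) / m * fmin + 1.0 / m * row[x]  # relax slowly upward
        else:
            fmin = row[x]                                  # snap down to a new minimum
        s[x] = (n - 1) / n * s[x - 1] + 1.0 / n * fmin
    return s

# A bright spike is strongly suppressed, while the background is preserved
row = np.array([5.0, 5.0, 50.0, 5.0, 5.0, 5.0])
out = sliding_min_pass(row, 4)
```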
Sliding Maximum Filter:
Same as sliding minimum, only < instead of > in the above equation in
step 1.
Strict Sliding Minimum Filter:
Same as the sliding minimum, except that none of the resulting pixel intensities will be above the intensity of the original pixel at that position. The equations are slightly different:

Going from left to right: For all rows y do:

fmin = S_avg(0,y) = I(0,y)

and then, for each following x:

if ( I(x,y) > fmin )
    fmin = (m-1)/m * fmin + 1/m * I(x,y)
else
    fmin = I(x,y)

S_avg(x,y) = min( fmin, (n-1)/n * S_avg(x-1,y) + 1/n * fmin )

Going from right to left: For all rows y do:

S_avg(x,y) = min( S_avg(x,y), (n-1)/n * S_avg(x+1,y) + 1/n * S_avg(x,y) )

Going from top to bottom: For all columns x do:

S_avg(x,y) = min( S_avg(x,y), (n-1)/n * S_avg(x,y-1) + 1/n * S_avg(x,y) )

Going from bottom to top: For all columns x do:

S_avg(x,y) = min( S_avg(x,y), (n-1)/n * S_avg(x,y+1) + 1/n * S_avg(x,y) )
Strict Sliding Maximum Filter:
Same as strict sliding minimum, only max instead of min, and < instead
of > in the above equation in step 1.
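Both strict variants hinge on the min()/max() clamp in each pass; a sketch of the left-to-right pass of the strict sliding minimum (our own illustration, not DaVis code):

```python
import numpy as np

def strict_sliding_min_pass(row, n):
    """Left-to-right pass of the strict sliding minimum.
    Since fmin never exceeds the current pixel, the min() clamp
    guarantees the result stays at or below the original intensity."""
    m = max(n // 2, 1)
    s = row.astype(float).copy()
    fmin = s[0]
    for x in range(1, len(s)):
        if row[x] > fmin:
            fmin = (m - 1) / m * fmin + 1.0 / m * row[x]
        else:
            fmin = row[x]
        s[x] = min(fmin, (n - 1) / n * s[x - 1] + 1.0 / n * fmin)
    return s

row = np.array([3.0, 9.0, 2.0, 8.0, 8.0, 1.0, 7.0])
out = strict_sliding_min_pass(row, 4)
```

The strict sliding maximum is obtained by swapping min for max and reversing the comparison, as described above.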
Sliding Average Filter (gaussian profile)
The algorithm follows the same pattern as the sliding average filter described above: making horizontal passes left to right and back again, and averaging the result with the vertical passes top to bottom and back again.
But instead of using only a sliding value and the current pixel, the gaussian profile filter uses the last 3 pixels in an iterative convolution to create an approximation of a 2D gaussian bell filter. The horizontal pass is described below as an example:

Set the first three destination pixels I_d(0,y), I_d(1,y) and I_d(2,y) to the average intensity of the three source pixels, (I_s(0,y) + I_s(1,y) + I_s(2,y)) / 3.

Then, starting from x = 3, calculate the next pixels I_d(x,y) iteratively, using the source pixel I_s(x,y) and the previous destination pixels I_d(x-1,y), I_d(x-2,y) and I_d(x-3,y):

I_d(x,y) = B * I_s(x,y) + ( B_1 * I_d(x-1,y) + B_2 * I_d(x-2,y) + B_3 * I_d(x-3,y) ) / B_0

with the following constants:

sigma = filter length / 2.0

Q = 0.98711 * sigma - 0.96330                       for sigma >= 2.5
Q = 3.97156 - 4.14554 * sqrt(1 - 0.26891 * sigma)   for sigma < 2.5

B_0 = 1.57825 + 2.44413 * Q + 1.4281 * Q^2 + 0.422205 * Q^3
B_1 = 2.44413 * Q + 2.85619 * Q^2 + 1.26661 * Q^3
B_2 = -(1.4281 * Q^2 + 1.26661 * Q^3)
B_3 = 0.422205 * Q^3
B = 1 - (B_1 + B_2 + B_3) / B_0
The same procedure is done backwards again. The vertical pass is equivalent, with x and y exchanged, moving from top to bottom and back again, working on the horizontally smoothed image.
Please note that the filter length of this filter is defined as a float value (rather than an integer value as for the sliding average filter), hence a finer adjustment of the smoothing level is possible.
Reference: A. Lukin, "Tips and Tricks: Fast Image Filtering Algorithms", in Proceedings of GraphiCon'2007, Moscow, Russia, June 2007, pp. 186-189.
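The recursion and constants can be sketched as follows (our own Python sketch of the left-to-right horizontal pass; function names are assumptions, not the DaVis API):

```python
import math
import numpy as np

def gauss_coefficients(filter_length):
    """Compute Q and the B-coefficients from the (float) filter length."""
    sigma = filter_length / 2.0
    if sigma >= 2.5:
        q = 0.98711 * sigma - 0.96330
    else:
        q = 3.97156 - 4.14554 * math.sqrt(1.0 - 0.26891 * sigma)
    b0 = 1.57825 + 2.44413 * q + 1.4281 * q**2 + 0.422205 * q**3
    b1 = 2.44413 * q + 2.85619 * q**2 + 1.26661 * q**3
    b2 = -(1.4281 * q**2 + 1.26661 * q**3)
    b3 = 0.422205 * q**3
    B = 1.0 - (b1 + b2 + b3) / b0   # ensures unit gain for constant input
    return B, b0, b1, b2, b3

def forward_pass(row, filter_length):
    """Left-to-right recursive pass of the gaussian-profile filter."""
    B, b0, b1, b2, b3 = gauss_coefficients(filter_length)
    src = np.asarray(row, dtype=float)
    dst = src.copy()
    dst[0] = dst[1] = dst[2] = src[:3].mean()   # seed the first three pixels
    for x in range(3, len(src)):
        dst[x] = B * src[x] + (b1 * dst[x - 1] + b2 * dst[x - 2] + b3 * dst[x - 3]) / b0
    return dst

out = forward_pass(np.full(20, 10.0), 5.0)
```

Because B is defined as 1 - (B_1 + B_2 + B_3)/B_0, a constant row is reproduced exactly, i.e. the filter does not change the average intensity level.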
Filtering horizontally
Same as the modes described above, but the filters are only applied horizontally, i.e. only steps 1 and 2 of the above equations are performed. If one would like the sliding filter applied only vertically, one needs to rotate the buffer by 90 degrees, apply the filter horizontally, and rotate it back by 90 degrees.
12.11 Dialog for FFT
The dialog for FFT (see figure 12.12) can be found in DaVis Classic in
menu Compute. Here the user can easily check how the Fast Fourier Trans-
form works on single images.
Figure 12.12: Fast Fourier Transform dialog
Enter the number of your source image in Image buffer on top of this
dialog. You will need three temporary buffers for the resulting FFT (FFT
buffer), the Amplitude and the Phase.
When you press button Do FFT you are asked to enter the number of a
rectangular area to calculate the Fast Fourier Transform of a part of your
source image. The result is shown in the FFT buffer. Enter rectangle
number 0 to use the whole source image. Use menu Rectangle Set to
define a rectangular area. A rectangular region has to be at least 4x4 pixels.
Calculate the Inverse FFT (button Do IFFT) from your FFT buffer and store
the result in the Image buffer.
Calculate the Amplitude and Phase part of the FFT buffer and show
them on screen.
Button Combine will use the amplitude and phase buffers to recalculate
the FFT buffer.
The FFT computation mode can be horizontal, vertical or 2-dimensional.
Press button Compute Power-FFT, then enter the buffer number of the
result, which will be shown on screen immediately.
Multiply FFT*FFT calculates the product of two FFT buffers. You have to enter the numbers of both source buffers and of the resulting buffer in the following question boxes.
Multiply FFT(FFT*) works like before, but uses the complex conjugate of the second FFT buffer.
FFT Filter: Apply the active filter (first profile in the profile buffer) to the
image buffer or to the FFT buffer.
Press button Correlation to open the following dialog:
Figure 12.13: Setting the Correlation and Convolution modes for the FFT
dialog.
Select the computation mode and press a button for Correlation or Con-
volution calculation. Button Close closes the dialog without calculation.
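The dialog's buffer operations correspond to standard FFT identities; as a rough numpy analogy (variable names and the test image are our own, not DaVis buffers):

```python
import numpy as np

img = np.add.outer(np.arange(8.0), np.arange(8.0))   # a simple test "image buffer"

F = np.fft.fft2(img)                           # Do FFT
amplitude = np.abs(F)                          # Amplitude buffer
phase = np.angle(F)                            # Phase buffer
F_combined = amplitude * np.exp(1j * phase)    # Combine: rebuild FFT from A and phi
restored = np.fft.ifft2(F_combined).real       # Do IFFT back to the image buffer

# Multiply FFT(FFT*): the conjugate-complex product implements the
# correlation theorem; with itself this yields the autocorrelation.
corr = np.fft.ifft2(F * np.conj(F)).real
```

Recombining amplitude and phase restores the original FFT buffer, so the round trip through Do FFT, Amplitude/Phase, Combine and Do IFFT reproduces the source image.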
12.12 Dialog for Linear Filter
The dialog for linear filters (see figure 12.14) can be found in DaVis Clas-
sic in menu Compute. Here the user can choose between predefined filters
and user filters that can be changed by the user.
Figure 12.14: Linear Filter dialog
Select a filter from the list by:
typing the first letter on the keyboard,
scrolling with the bar on the right, or
using the cursor keys of the keyboard.
In the info item at the bottom of this dialog an explanation of the chosen filter is displayed.
Button Edit opens a dialog box for the definition of User Filters.
Filter Matrix
Usually the filter operates as an NxN matrix, which is placed over the image. For the selected filter the matrix is always displayed at the bottom of the dialog box. The new value of a pixel ( P_i,j ) is calculated from a linear combination of the surrounding pixel values and the center pixel ( P_i,j ) itself. The result is normalized by a "divisor".

The 3x3 filter

    | a1 a2 a3 |
    | b1 b2 b3 |  / divisor
    | c1 c2 c3 |

leads to the new value P_i,j:

P_i,j = ( a1 * P_i-1,j-1 + a2 * P_i,j-1 + a3 * P_i+1,j-1 +
          b1 * P_i-1,j   + b2 * P_i,j   + b3 * P_i+1,j   +
          c1 * P_i-1,j+1 + c2 * P_i,j+1 + c3 * P_i+1,j+1 ) / divisor
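The matrix operation above can be sketched as follows (a simple reference implementation, not the optimized DaVis code; the rim is left unchanged, mirroring the dialog's default rim handling):

```python
import numpy as np

def apply_filter(img, kernel, divisor):
    """Apply an NxN linear filter with a divisor; pixels closer than
    N//2 to the border (the outer rim) are copied unchanged."""
    n = kernel.shape[0]
    r = n // 2
    out = img.astype(float).copy()
    for i in range(r, img.shape[0] - r):
        for j in range(r, img.shape[1] - r):
            win = img[i - r:i + r + 1, j - r:j + r + 1]
            out[i, j] = (kernel * win).sum() / divisor
    return out

smooth = np.ones((3, 3))    # 3x3 smoothing filter
img = np.full((5, 5), 8.0)
# divisor = sum of coefficients keeps the average intensity unchanged
res = apply_filter(img, smooth, divisor=smooth.sum())
```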
How to Perform Filter Operation
1. Select a filter from the list.
2. Click on Execute.
3. Enter the source and destination buffers.
4. The result is displayed on the screen.
1D-Vector Filter
There are two predefined vector filters for smoothing over 9 pixels in one dimension (horizontally or vertically). Other 1-dimensional filters can be freely defined as user filters.
Vector filters are very useful for analyzing, e.g., 1-dimensional spectra, which carry other information on the vertical axis of the image (e.g. a space coordinate or time). Often only the spatial axis should be smoothed, while the spectral resolution on the horizontal axis is kept as it is. In this case use the vertical smoothing filter.
12.12.1 User Filter
The user can freely define 10 different linear filters. The dialog box in which
all settings have to be made is shown below.
In order to define a new user filter, select it in the list of linear filters
(shown on the page before) and click on Edit to open the dialog box.
Choose the type of linear user filter.
Define the filter by entering the matrix or vector coefficients inside the
corresponding field directly.
Two further actions (divide by and set outer rim) are defined at the bot-
tom of the dialog box. They are performed before the result of the filter is
stored.
The description entered here will appear below the list of linear filters when the filter has been selected (see the page before).
Basically, there are three types of user filters.
NxN matrices: Usual NxN matrices (N = 3, 5, 7 or 9) that are applied in the same way as the predefined linear filters. They are defined by entering the matrix elements in the center field.
NxN separable matrices: These work much faster than the usual matrices. If a matrix is separable (i.e. it is the product of a vertical times a horizontal 1D vector), several steps of the full matrix operation are redundant. Therefore it can be applied in two steps of vector filter operations: first a horizontal vector, second a vertical vector. These filters are also defined by entering the elements directly into the vectors.
1D vectors: Vector filters in one dimension.
For separable matrices and vector filters enter the coefficients into the vectors (horizontal and/or vertical). Enter normal matrix filters inside the matrix.
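The speed-up of separable matrices rests on the separability identity; a sketch verifying that two 1D passes reproduce the full 2D filter (helper names are our own):

```python
import numpy as np

def filter2d(img, kernel):
    """Plain 2D linear filter over the valid region (rim ignored for brevity)."""
    kh, kw = kernel.shape
    h = img.shape[0] - kh + 1
    w = img.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = (kernel * img[i:i + kh, j:j + kw]).sum()
    return out

v = np.array([[1.0], [2.0], [1.0]])   # vertical 1D vector
hvec = np.array([[1.0, 2.0, 1.0]])    # horizontal 1D vector
K = v @ hvec                          # separable 3x3 matrix = product of the vectors

img = np.arange(36.0).reshape(6, 6)
full = filter2d(img, K)                        # one 2D pass: ~N*N multiplications per pixel
two_step = filter2d(filter2d(img, hvec), v)    # two 1D passes: ~2*N multiplications per pixel
```

For an NxN kernel this reduces the work per pixel from N*N to 2*N multiplications, which is why separable user filters run much faster.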
Negative or floating point values can only be stored in image buffers of the
type FLOAT. So it might be necessary to convert the output buffer to a
FLOAT buffer, in order not to lose significant digits of the data.
After performing the matrix (or vector) operation, the result buffer can be
divided by a specified value.
1 (off): No division is performed.
sum of coefficients: Choose this divisor in order not to change the
average intensity of the image buffer.
value: Any other divisor can also be defined.
At the rim of the buffer a filter will not work over the same number of pixels as in the middle. Therefore the result might differ drastically at the rim, especially if a gradient filter is used. Then it might be useful to set the whole outer rim equal to 0. In the default setting the rim of the result buffer is not changed.
Manual No 1003012 - September 23, 2013