
Guy Martin BEng MS Email: guyj.martin@gmail.com
White Paper: The Zooming Lens Model Basics
ABSTRACT
A direct extension of our camera model is the zooming lens model. We show its practical
use from our two-projection formulation, and why accurate use of a zooming lens
in intelligent systems requires a stereo pair. 90% of applications of a digital camera in
intelligent systems would require a zooming lens model.
Canadian Application Patent Serial No. 2,819,956, July 2nd 2013, PCT/CA2014/000534

Quick Overview of the Camera Model [7]


Figure 1 (Figure 1: Theoretical Camera Representation) shows the camera model graphic representation as it is commonly drawn. The image plane is assumed square with the lens axis ZC (the camera line of sight) originating from the focal point O.

For an ideal camera, the image scale is f in both x and y directions, and the image plane is located at distance f in front of focal point O. The image centre (CX, CY) is given where the lens axis pierces through the camera plane.

Because of lens and camera assembly tolerances, the lens axis is never square with the image plane. To compensate, currently known models apply a different scale to the x and y image coordinates, respectively a and b, and even tilt the image vertical axis y by a skew parameter s (referred to as the tilted axis assumption); multiple mathematical formulations are found throughout the literature. The parameter set a, b, s, (CX, CY) is the internal camera model and writes in matrix form as (1):

        [ a  s  CX ]
    K = [ 0  b  CY ]      (1)
        [ 0  0  1  ]

Our camera model seeks to recreate the a = b = f, s = 0 image, avoiding the perspective bias created by the all too common but wrong tilted axis assumption. We demonstrated [7] that it creates a systematic bias in every camera model parameter, especially the image centre (CX, CY), which shows up off position by as much as 3.5 pixels: basically, using the tilted axis assumption as in (1), you don't know where the camera is aiming. We replace it with a full 3D perspective model of the image plane, where the camera sensor is considered off-square with the camera line of sight ZC. We thereby recover the image scale f needed to model the zooming lens properly, and the geometry of the line of sight ZC along which the zooming occurs by a displacement of focal point O.

October 12 2015, Guy Martin BEng MS

1/6

In our internal camera model (Figure 2: Internal Camera Model Modification), we use two projection planes: one at f = 1, the entry of the lens system, and a second one at f, where the image plane is tilted with respect to the f = 1 true-scale 1:1 projection plane. The camera model from our IEEE TePRA 2015 conference paper is reproduced on the last page of this white paper. Working from two projections guarantees the 3D consistency of our model.

Lens distortion occurs between both planes and, from our model flow chart, is expressed in the f = 1 true-scale 1:1 projection plane, that is, at the entry of the lens system.

Internal Camera Model Inversion


The pinhole projection in the camera model is at the entry of the lens system on f = 1, and the image plane at f has to be projected back to the entry through the inverse internal camera model. For now we discard lens distortion and will discuss its impact later on. Our internal model is purely projective and assumes a (0, 0) image centre. Step 1 in inverting the internal model is therefore subtracting the image centre (CX, CY) from the camera image point coordinates (u, v).
The forward model is the matrix product of a scaling by f and a pure rotation matrix, so inverting is straightforward.
For Internal Model Parameters: (f, α, β), (CX, CY)

Forward Internal Model
Input Data: True scale f = 1 image point (x, y)
  u' = f (cos β x + sin α sin β y + sin β cos α)
  v' = f (cos α y - sin α)
  w' = -sin β x + cos β sin α y + cos β cos α
Scaling back to unity w = 1 and decentring:
  u = u'/w' + CX
  v = v'/w' + CY

Inverse Internal Model
Input Data: Camera scale image point (u, v)
  x' = (u - CX) / f
  y' = (v - CY) / f
  w' = sin β cos α x' - sin α y' + cos α cos β
  x = (cos β x' - sin β) / w'
  y = (sin α sin β x' + cos α y' + cos β sin α) / w'
Scaling back to unity w = 1

Our internal model is purely projective, and the last step of the perspective transformation always requires a scaling back to unity, dividing by the homogeneous 2D coordinate scale w. Fundamentally, our model seeks to always work in true 1:1 scale, since the lens entry plane is the only place in the model where the lens axis is square with a projection plane, and so holds the only pinhole image in the model.
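The forward and inverse formulas above round-trip exactly. As an illustration only (a minimal numpy sketch of the two formulas, not the published implementation; all parameter values are invented for the example):

```python
import numpy as np

def forward_internal(x, y, f, alpha, beta, cx, cy):
    """Project a true-scale (f = 1) point onto the tilted camera image plane."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    u = f * (cb * x + sa * sb * y + sb * ca)
    v = f * (ca * y - sa)
    w = -sb * x + cb * sa * y + cb * ca
    # Scale back to unity w = 1, then decentre
    return u / w + cx, v / w + cy

def inverse_internal(u, v, f, alpha, beta, cx, cy):
    """Project a camera image point back to the true-scale f = 1 plane."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    xp = (u - cx) / f          # step 1: decentre and undo the scaling by f
    yp = (v - cy) / f
    w = sb * ca * xp - sa * yp + ca * cb
    x = (cb * xp - sb) / w
    y = (sa * sb * xp + ca * yp + cb * sa) / w
    return x, y

# Round trip: inverse(forward(p)) recovers p (hypothetical parameter values)
f, alpha, beta, cx, cy = 800.0, 0.01, -0.02, 320.0, 240.0
u, v = forward_internal(0.1, -0.2, f, alpha, beta, cx, cy)
x, y = inverse_internal(u, v, f, alpha, beta, cx, cy)
print(round(x, 6), round(y, 6))  # 0.1 -0.2
```

The rotation in the forward model is a rotation by β about the y axis composed with a rotation by α about the x axis, which is why the inverse simply applies the transposed rotation before the final division by w.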

Static Camera
The camera image is essentially a homographic transformation of a true-scale projection (Figure 3: True Size Camera Image Projection). Zooming in and out introduces an image scale change by moving focal point O, along with the entry image plane at f = 1, closer to or away from the camera image plane at f along lens axis ZC, since the camera itself is static, bolted down to the experiment table. Motion of origin O while f changes therefore has to be accounted for in the Z object coordinates.
The f = 1 plane is infinite in dimension and square with lens axis ZC; the image boundaries on f = 1 are given by projecting the corners of the camera image plane in true scale, giving the area of the true world the camera is seeing.
At first, consider an object in front of a uniform background. Moving the object causes a true-scale f = 1 projection change, which in turn creates a camera image through homographic reprojection at f. Moving the object along the lens axis ZC is equivalent to zooming in and out by changing f, and the only way to know whether the object is moving is by measuring the scale change in the background image. There is a dual effect between f and Z object displacement: a projection is essentially proportional to f/Z.
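The f/Z duality can be seen numerically (a toy pinhole example with invented numbers): doubling f and doubling the object distance Z produce the same projected size, so a single zooming camera cannot tell the two apart.

```python
def projected_size(object_size, Z, f):
    """Pinhole projection: projected size is proportional to f / Z."""
    return object_size * f / Z

# Hypothetical values: a 1 m object seen at 5 m with image scale f = 800
size_a = projected_size(1.0, 5.0, 800.0)     # original configuration
size_b = projected_size(1.0, 10.0, 1600.0)   # Z doubled AND f doubled
print(size_a, size_b)  # 160.0 160.0: identical projections
```

This is why the background scale change (or a second, fixed-lens camera, as discussed below) is needed to separate object motion along ZC from a zoom change.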
Our model makes it possible to use a single scale f for the image, related to the camera plane distance from focal point O, and removes an image centre bias in calibration so that the camera line of sight ZC is known, giving the depth change in object position. Assembly tolerances in the lens mechanism will offset the image centre calibration as f changes, affecting the location of the line of sight ZC in space. For reasonably well built lenses, the line of sight drift will be small. Unfortunately, the smaller the lens, the more the tolerances will drift the image centre while zooming in and out from changing f.

Moving Camera
Since there is a dual behaviour between camera/object relative Z position along ZC and focal distance change, we need to isolate camera motion from zooming change. In a hybrid stereo pair built from a fixed lens camera and a zooming lens camera, from the fixed lens camera we can measure the rigid motion of the stereo camera assembly, and then isolate in the zooming lens camera the image scale change in f.
When assembly tolerances in the zooming lens assembly drift the image centre and camera line of sight, knowing the object's relative pose change with respect to the camera frame of reference reduces the problem to recalibrating the internal camera model. To the limit of accuracy provided by the camera motion estimate in the fixed lens camera, the hybrid stereo pair is also a solution to the assembly tolerance error in the zooming lens assembly.

Lens Distortion
On the last page we reproduce our camera calibration model as published earlier this year. Lens distortion is accurate only when modelled for an image plane perfectly square with the line of sight, where the full radial behaviour of lens distortion is accounted for. In our algorithm, we conveniently use the f = 1 plane.
Isolating RGB data in distortion parameter identification, and keeping the image centre (CX, CY) constant for all three colour channels, automatically handles chromatic distortion by making all three images match in true 1:1 scale at f = 1. In a Bayer pattern colour camera, we first model geometric distortion from the green channel data, giving the image centre (CX, CY), and then proceed to the red and blue raw camera signals keeping the centre (CX, CY) constant.
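The per-channel order can be sketched as follows. This is a synthetic stand-in, not the published identification algorithm: a single-term radial model r' = r + k1 r^3 is fitted per channel by least squares on invented radii, with the geometry (and hence the centre) shared across channels.

```python
import numpy as np

def fit_k1(r, r_distorted):
    """Least-squares fit of k1 in r' = r + k1 * r**3 (single-term model)."""
    basis = r ** 3
    return float(basis @ (r_distorted - r) / (basis @ basis))

rng = np.random.default_rng(0)
r = rng.uniform(0.0, 1.0, 200)  # synthetic radii on the f = 1 plane
# Invented per-channel coefficients: the red and blue channels differ
# slightly from green, which is the chromatic (colour-dependent) distortion.
true_k1 = {"green": -0.10, "red": -0.11, "blue": -0.09}
# Green fixes the geometry; red and blue reuse the same centre, so only
# their radial polynomials are refitted.
fits = {ch: fit_k1(r, r + k * r ** 3) for ch, k in true_k1.items()}
print({ch: round(k, 3) for ch, k in fits.items()})
```

With the centre held constant, matching the three channels in true 1:1 scale at f = 1 reduces chromatic correction to these per-channel radial fits.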
Since lens distortion is modelled on the f = 1 plane, parameter identification starts at the lens's lowest f value so that the full surface range of lens distortion is known. When increasing f, making the f = 1 projected image boundaries smaller, we will in fact be interpolating the lens distortion formulas.
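The interpolation claim can be illustrated with a small sketch (invented sensor dimensions; the plane tilt is ignored for simplicity): projecting the image corners back to the f = 1 plane shows that the covered radius shrinks as f grows, so distortion calibrated at the lowest f is only ever evaluated inside its known range at higher f.

```python
import numpy as np

def f1_corner_radius(width, height, cx, cy, f):
    """Largest radius reached on the f = 1 plane by the image corners
    (tilt angles ignored for this illustration)."""
    corners = np.array([[0, 0], [width, 0], [0, height], [width, height]], float)
    x = (corners[:, 0] - cx) / f
    y = (corners[:, 1] - cy) / f
    return float(np.max(np.hypot(x, y)))

# Hypothetical 640x480 sensor with a centred image centre
r_low = f1_corner_radius(640, 480, 320, 240, f=500.0)    # widest zoom
r_high = f1_corner_radius(640, 480, 320, 240, f=1500.0)  # zoomed in
print(r_low > r_high)  # True: higher f covers a smaller f = 1 region
```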

CONCLUSION
Our camera model, published earlier this year, introduces a paradigm change in digital imaging. The main change is a direct consequence of introducing two planes in the camera model, one at the entry and the second at the exit of the lens model. The camera image is essentially a homographic transformation of a true-scale projection, where lens distortion occurs in the optics between both planes. We therefore use projective geometry throughout.
It allowed us to model the camera using a single image scale factor f, while removing a systematic bias in the camera line of sight ZC since we are correcting the image centre position (CX, CY). A unique image scale f (instead of a, b and s as in the tilted axis assumption) is needed to properly model a zooming lens or chromatic distortion: while it is obvious for a zooming lens model, in chromatic distortion the colour splitting of the edges is essentially an image scale change with light colour.
The addition of a second plane in the model allows modelling of the zooming lens: the scale change is created when the f = 1 plane and focal point O move in a rigid combination, closer to or away from the image plane at f, along the lens axis ZC.
Those two image planes also allow modelling of lens distortion in a zooming lens camera. The proper distortion model is given for an image plane perfectly square to lens axis ZC, and for that purpose we conveniently use the f = 1 true-size lens entry plane. Identifying lens parameters starting from the lens's lowest f value ensures that we always interpolate the distortion model expressed on the f = 1 true-size plane.
For practical purposes, in order to account for lens mechanism assembly tolerances and camera motion, zooming lenses should be used in a pair with a fixed lens camera, creating a hybrid stereo pair.




Camera Forward Model Flow Chart (Calibration)
Starting from a 3D world coordinate point (X, Y, Z)

External Model
Input Data: 3D point coordinates (X, Y, Z)
Model Parameters: the three target posture angles, translation (TX, TY, TZ)
Compute:
  P = [X Y Z 1]^T

                  [ r11 r12 r13 TX ]
  [R3x3 T3x1] =   [ r21 r22 r23 TY ]
                  [ r31 r32 r33 TZ ]

  rij, i,j = 1,..,3 are functions of the target posture angles

  [X' Y' Z']^T = [R3x3 T3x1] P
  [x y 1]^T = [X'/Z' Y'/Z' 1]^T
Output Data: External model image point (x, y)

Lens Distortion Model: Radial geometric distortion
Input Data: External model image point (x, y)
Model Parameters: k1, k2, k3 (can be expanded)
Compute:
  r = (x^2 + y^2)^(1/2)
  r' = r + k1 r^3 + k2 r^5 + k3 r^7 + ...
  x' = x r'/r
  y' = y r'/r
Output Data: Distorted image point (x', y'), Note *
Internal Camera Model
Input Data: Distorted image point (x', y')
Model Parameters: (f, α, β), (CX, CY)
Compute:
  u' = f (cos β x' + sin α sin β y' + sin β cos α)
  v' = f (cos α y' - sin α)
  w' = -sin β x' + cos β sin α y' + cos β cos α
  u = u'/w' + CX
  v = v'/w' + CY
Small angle approximation: cos α ≈ cos β ≈ 1, sin α ≈ α, sin β ≈ β
Output Data: Camera image point (u, v)
Note *: In the lens distortion model, to avoid computing a square root, radial distortion can be expressed as:
  r^2 = (x^2 + y^2), d = (k1 r^2 + k2 r^4 + k3 r^6 + ...)
  x' = x + d x
  y' = y + d y
The flow chart gives the forward camera model as used in calibration. In robotics and imagery, we seek to invert the last two steps of the model to recreate the image as seen by the External Model in 1:1 scale, as if the camera were completely removed from the behaviour. From the External Model image, we can easily scale and translate at will.
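Putting the three flow-chart stages together, a compact numpy sketch of the forward model might look like the following. This is an illustration only, not the published implementation: all parameter values are invented, the no-sqrt distortion form from the note above is used, and the tilt is handled by the full (not small-angle) trigonometric form.

```python
import numpy as np

def forward_model(P_world, R, T, k, f, alpha, beta, cx, cy):
    """External model -> radial distortion on f = 1 -> internal model."""
    # External model: rigid transform, then perspective division
    X, Y, Z = R @ P_world + T
    x, y = X / Z, Y / Z
    # Radial geometric distortion (no-sqrt form)
    r2 = x * x + y * y
    d = k[0] * r2 + k[1] * r2 ** 2 + k[2] * r2 ** 3
    x, y = x + d * x, y + d * y
    # Internal model: projection onto the tilted image plane at scale f
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    w = -sb * x + cb * sa * y + cb * ca
    u = f * (cb * x + sa * sb * y + sb * ca) / w + cx
    v = f * (ca * y - sa) / w + cy
    return u, v

# Hypothetical calibration values for the example
R = np.eye(3)
T = np.array([0.0, 0.0, 5.0])
u, v = forward_model(np.array([0.2, -0.1, 0.0]), R, T,
                     k=(1e-3, 1e-5, 0.0), f=800.0,
                     alpha=0.01, beta=-0.02, cx=320.0, cy=240.0)
print(u, v)
```

Inverting the last two stages of this pipeline, as described above, recovers the External Model image in 1:1 scale.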

REFERENCES
[1] Frédéric Devernay
A Non-Maxima Suppression Method for Edge Detection with Sub-Pixel Accuracy
INRIA: Institut National de Recherche en Informatique et en Automatique
Report No. 2724, November 1995, 20 pages
[2] Guy J Martin
Sub-Pixel Edge Extraction in the 2D Plane
Extension of Frédéric Devernay's 1995 Report
January 6th 2015, 7 pages
[3] Shawn Becker
Semiautomatic Camera Lens Calibration from Partially Known Structure
MIT: Massachusetts Institute of Technology
http://alumni.media.mit.edu/~sbeck/results/Distortion/distortion.html 1994, 1995
[4] Guy J Martin
White Paper - Accurate 3D Telemetry from Knowing the Image Scale
How to Solve the Single Camera Target Location Problem without using Homography
Submitted to and reviewed by John McLean and Charmaine Gilbreath, US Naval Research Lab, May 2015
[5] R.I. Hartley, P. Sturm
Triangulation
Proc. of the ARPA Image Understanding Workshop, Monterey, CA, 1994, pp. 957-966
[6] Guy J Martin
High Accuracy Camera Modeling and Calibration for Automated Imagery
P01-IST099-NATO Symposium on Disruptive Technologies, Madrid, 2011
[7] Guy J Martin
The Major Error in the Camera Model is the Tilted Axis Assumption
Correction of a Systematic Perspective Bias in the Camera Model
Proc. of the IEEE TePRA Conference, Boston, USA, May 11th 2015, 6 pages
[8] Guy J Martin
System and Method for Imaging Device Modeling and Calibration
Canadian Application Patent Serial No. 2,819,956, July 2nd 2013, PCT/CA2014/000534
[9] Reg G. Willson and Steven A. Shafer
What is the Center of the Image?
Proc. of the IEEE Conference on Computer Vision and Pattern Recognition, New York, NY, June 1993
[10] Reg G. Willson and Steven A. Shafer
A Perspective Projection Camera Model for Zoom Lenses
Robotics Institute, Carnegie Mellon University
[11] Zhengyou Zhang
A Flexible New Technique for Camera Calibration
Technical Report MSR-TR-98-71, December 2, 1998, last updated Dec. 5, 2009
