1 Introduction
The camera is one of the most essential tools in computer vision. It is the
mechanism by which we can record the world around us and use its output -
photographs - for various applications. Therefore, one question we must ask
in introductory computer vision is: how do we model a camera?
2 Pinhole cameras
Let's design a simple camera system: a system that can record an image
of an object or scene in the 3D world. This camera system can be designed
by placing a barrier with a small aperture between the 3D object and a
photographic film or sensor. As Figure 1 shows, each point on the 3D object
emits multiple rays of light outwards. Without a barrier in place, every point
on the film will be influenced by light rays emitted from every point on the
3D object. Due to the barrier, only one (or a few) of these rays of light passes
through the aperture and hits the film. Therefore, we can establish a one-
to-one mapping between points on the 3D object and the film. The result is
that the film gets exposed by an image of the 3D object by means of this
mapping. This simple model is known as the pinhole camera model.
The pinhole itself can be projected onto the image plane, giving a new point C′.
Here, we can define a coordinate system [i, j, k] centered at the pinhole
O such that the axis k is perpendicular to the image plane and points toward
it. This coordinate system is often known as the camera reference system
or camera coordinate system. The line defined by C′ and O is called the
optical axis of the camera system.
Recall that point P′ is derived from the projection of 3D point P onto the
image plane Π′. Therefore, if we derive the relationship between the 3D point
P and the image-plane point P′, we can understand how the 3D world imprints
itself upon the image taken by a pinhole camera. Notice that the triangle formed
by P′, C′, and O is similar to the triangle formed by P, O, and (0, 0, z).
Therefore, using the law of similar triangles we find that
¹ Throughout the course notes, let the prime superscript (e.g. P′) indicate that a
point is a projected or complementary point to the non-superscripted version. For
example, P′ is the projected version of P.
$$P' = \begin{bmatrix} x' \\ y' \end{bmatrix} = \begin{bmatrix} f\frac{x}{z} \\ f\frac{y}{z} \end{bmatrix} \tag{1}$$
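Equation (1) is easy to check numerically. Below is a minimal sketch of the pinhole projection in Python with NumPy; `pinhole_project` is a hypothetical helper with made-up values, not part of any library:

```python
import numpy as np

def pinhole_project(P, f):
    """Project a 3D point P = (x, y, z), expressed in the camera
    coordinate system, onto the image plane of a pinhole camera with
    focal length f, following Equation (1): P' = (f x/z, f y/z)."""
    x, y, z = P
    return np.array([f * x / z, f * y / z])

# A point 10 units down the optical axis, offset 2 units in x:
p = pinhole_project(np.array([2.0, 1.0, 10.0]), f=0.5)
```

Note that doubling the depth z halves both image coordinates, which is the familiar perspective effect of distant objects appearing smaller.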
Notice that one large assumption we make in this pinhole model is that
the aperture is a single point. In most real world scenarios, however, we
cannot assume the aperture can be infinitely small. Thus, what is the effect
of aperture size?
Figure 3: The effects of aperture size on the image. As the aperture size
decreases, the image gets sharper, but darker.
As the aperture size increases, the number of light rays that pass
through the barrier consequently increases. With more light rays passing
through, each point on the film may be affected by light rays from
multiple points in 3D space, blurring the image. Although we may be
inclined to make the aperture as small as possible, recall that a smaller
aperture lets fewer light rays through, resulting in crisper but
darker images. Therefore, we arrive at the fundamental problem presented by
the pinhole formulation: can we develop cameras that take crisp and bright
images?
Figure 4: A setup of a simple lens model. Notice how the rays of the top
point on the tree converge nicely on the film. However, a point at a different
distance away from the lens results in rays not converging perfectly on the
film.
A lens focuses the light rays emitted by a scene point onto a single point
in the image plane. Therefore, the problem of most light rays being blocked
by a small aperture is removed (Figure 4). However, please note
that this property does not hold for all 3D points, but only for some specific
point P . Take another point Q that is closer to or farther from the lens
than P ; its projection into the image will be blurred
or out of focus. Thus, lenses have a specific distance at which objects are
in focus. This property is also related to a photography and computer
graphics concept known as depth of field, which is the effective range at
which cameras can take clear photos.
Figure 5: Lenses focus light rays parallel to the optical axis into the fo-
cal point. Furthermore, this setup illustrates the paraxial refraction model,
which helps us find the relationship between points in the image plane and
the 3D world in cameras with lenses.
Camera lenses have another interesting property: they focus all light rays
traveling parallel to the optical axis to one point known as the focal point
(Figure 5). The distance between the focal point and the center of the lens
is commonly referred to as the focal length f . Furthermore, light rays
passing through the center of the lens are not deviated. We thus arrive
at a construction similar to the pinhole model that relates a point P in 3D
space with its corresponding point P′ in the image plane.
$$P' = \begin{bmatrix} x' \\ y' \end{bmatrix} = \begin{bmatrix} z'\frac{x}{z} \\ z'\frac{y}{z} \end{bmatrix} \tag{2}$$
The derivation of this model is outside the scope of the class. However,
please notice that in the pinhole model z′ = f, while in this lens-based model
z′ = f + z₀. Additionally, since this derivation takes advantage of the paraxial
or "thin lens" assumption², it is called the paraxial refraction model.
Because the paraxial refraction model approximates using the thin lens
assumption, a number of aberrations can occur. The most common one is
referred to as radial distortion, which causes the image magnification to
decrease or increase as a function of the distance to the optical axis. We
classify the radial distortion as pincushion distortion when the magnification
increases, and barrel distortion³ when the magnification decreases.
Radial distortion is caused by the fact that different portions of the lens have
differing focal lengths.
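One common way to model radial distortion in code is a low-order polynomial in the squared distance from the optical axis. The model and the coefficient below are illustrative assumptions, not the notes' derivation:

```python
import numpy as np

def radial_distort(xy, k1):
    """Apply a one-parameter polynomial radial distortion to a point
    xy = (x, y) on the image plane: the point is scaled by (1 + k1 r^2),
    where r is its distance from the optical axis. k1 > 0 magnifies
    distant points (pincushion); k1 < 0 shrinks them (barrel)."""
    xy = np.asarray(xy, dtype=float)
    r2 = np.sum(xy ** 2)
    return xy * (1.0 + k1 * r2)

point = np.array([0.5, 0.0])
pincushion = radial_distort(point, k1=0.2)    # pushed away from the axis
barrel = radial_distort(point, k1=-0.2)       # pulled toward the axis
```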
As discussed earlier, a point P in 3D space can be mapped (or projected)
into a 2D point P′ in the image plane Π′. This ℝ³ → ℝ² mapping is referred
to as a projective transformation. This projection of 3D points into the
image plane does not directly correspond to what we see in actual digital
images for several reasons. First, points in the digital images are, in general,
in a different reference system than those in the image plane. Second, digital
images are divided into discrete pixels, whereas points in the image plane are
continuous. Finally, the physical sensors can introduce non-linearity such as
distortion to the mapping. To account for these differences, we will introduce
a number of additional transformations that allow us to map any point from
the 3D world to pixel coordinates.
Image coordinates have their origin C′ at the image center, where the k
axis intersects the image plane. On the other hand, digital images typically
have their origin at the lower-left corner of the image. Thus, 2D points in
the image plane and 2D points in the image are offset by a translation vector
$[c_x, c_y]^T$. To accommodate this change of coordinate systems, the mapping
now becomes:
$$P' = \begin{bmatrix} x' \\ y' \end{bmatrix} = \begin{bmatrix} f\frac{x}{z} + c_x \\ f\frac{y}{z} + c_y \end{bmatrix} \tag{3}$$
The next effect we must account for is that points in digital images are
expressed in pixels, while points in the image plane are represented in physical
measurements (e.g. centimeters). To accommodate this change of
units, we introduce two new parameters k and l. These parameters,
whose units would be something like pixels/cm, correspond to the change of units
along the two axes of the image plane. Note that k and l may differ
because the aspect ratio of the pixel element is not guaranteed to be one.
If k = l, we often say that the camera has square pixels. We adjust our
previous mapping to be
$$P' = \begin{bmatrix} x' \\ y' \end{bmatrix} = \begin{bmatrix} fk\frac{x}{z} + c_x \\ fl\frac{y}{z} + c_y \end{bmatrix} = \begin{bmatrix} \alpha\frac{x}{z} + c_x \\ \beta\frac{y}{z} + c_y \end{bmatrix} \tag{4}$$
where we define $\alpha = fk$ and $\beta = fl$.
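A quick numerical sketch of Equation (4); the values for f, k, l, cx, and cy are invented for illustration (real values come from calibration):

```python
import numpy as np

def project_to_pixels(P, f, k, l, cx, cy):
    """Map a 3D point in the camera frame to pixel coordinates per
    Equation (4): u = f k x/z + cx, v = f l y/z + cy."""
    x, y, z = P
    alpha, beta = f * k, f * l    # combined parameters alpha = f k, beta = f l
    return np.array([alpha * x / z + cx, beta * y / z + cy])

# f in meters, k and l in pixels per meter, offsets in pixels
# (all values are made up for illustration):
uv = project_to_pixels(np.array([0.1, -0.2, 2.0]),
                       f=0.05, k=8000, l=8000, cx=320, cy=240)
```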
Is there a better way to represent this projection from P to P′? If this
projection were a linear transformation, it could be represented as the product
of a matrix and the input vector (in this case, P). However, from
Equation 4, we see that the projection P → P′ is not linear, since the operation
divides by one of the input parameters (namely z). Still, representing this
projection as a matrix-vector product would be useful for future derivations.
Therefore, can we represent our transformation as a matrix-vector product
despite its nonlinearity?
One way to get around this problem is to change the coordinate system.
For example, we introduce a new coordinate, such that any point
P′ = (x′, y′) becomes (x′, y′, 1). Similarly, any point P = (x, y, z) becomes
(x, y, z, 1). This augmented space is referred to as the homogeneous
coordinate system. As demonstrated previously, to convert a Euclidean
vector (v₁, ..., vₙ) to homogeneous coordinates, we simply append a 1 in a
new dimension to get (v₁, ..., vₙ, 1). Note that the equality between a vector
and its homogeneous coordinates only holds when the final coordinate
equals one. Therefore, when converting back from arbitrary homogeneous
coordinates (v₁, ..., vₙ, w), we get the Euclidean coordinates (v₁/w, ..., vₙ/w).
Using homogeneous coordinates, we can formulate
$$P'_h = \begin{bmatrix} \alpha x + c_x z \\ \beta y + c_y z \\ z \end{bmatrix} = \begin{bmatrix} \alpha & 0 & c_x & 0 \\ 0 & \beta & c_y & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix} = \begin{bmatrix} \alpha & 0 & c_x & 0 \\ 0 & \beta & c_y & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} P_h \tag{5}$$
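The homogeneous conversions described above amount to two one-line helpers (names are mine, not the notes'); the sketch also shows that scaling a homogeneous vector leaves the recovered Euclidean point unchanged:

```python
import numpy as np

def to_homogeneous(v):
    """(v1, ..., vn) -> (v1, ..., vn, 1): append a 1 in a new dimension."""
    return np.append(np.asarray(v, dtype=float), 1.0)

def from_homogeneous(vh):
    """(v1, ..., vn, w) -> (v1/w, ..., vn/w): divide by the last coordinate."""
    return vh[:-1] / vh[-1]

P = np.array([2.0, 4.0, 10.0])
Ph = to_homogeneous(P)               # [2, 4, 10, 1]
back = from_homogeneous(3.0 * Ph)    # any nonzero scaling recovers the same P
```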
From this point on, assume that we will work in homogeneous coordinates,
unless stated otherwise. We will drop the h index, so any point P or P′ can
be assumed to be in homogeneous coordinates. As seen from Equation 5, we
can represent the relationship between a point in 3D space and its image
coordinates by a matrix-vector relationship:
$$P' = \begin{bmatrix} \alpha & 0 & c_x & 0 \\ 0 & \beta & c_y & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix} = \begin{bmatrix} \alpha & 0 & c_x & 0 \\ 0 & \beta & c_y & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} P = M P \tag{6}$$
This matrix M can be decomposed further as
$$M = \begin{bmatrix} \alpha & 0 & c_x \\ 0 & \beta & c_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} I & 0 \end{bmatrix} = K \begin{bmatrix} I & 0 \end{bmatrix} \tag{7}$$
The matrix K is often referred to as the camera matrix. This matrix con-
tains some of the critical parameters that are useful to characterize a camera
model. Two parameters are currently missing from our formulation: skewness
and distortion. We often say that an image is skewed when the camera
coordinate system is skewed, meaning the angle between the two axes is
slightly larger or smaller than 90 degrees. Most cameras have zero skew, but
some degree of skewness may occur because of sensor manufacturing errors.
Deriving the new camera matrix accounting for skewness is outside the scope
of this class, so we simply give the result below:
$$K = \begin{bmatrix} \alpha & -\alpha\cot\theta & c_x \\ 0 & \frac{\beta}{\sin\theta} & c_y \\ 0 & 0 & 1 \end{bmatrix} \tag{8}$$
Most methods that we introduce in this class ignore distortion effects. There-
fore, our final camera matrix has 5 degrees of freedom: 2 for focal length, 2
for offset, and 1 for skewness.
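As a sketch, the 5-degree-of-freedom camera matrix of Equation (8) can be assembled as follows; the function name and parameter values are invented for illustration:

```python
import numpy as np

def camera_matrix(alpha, beta, theta, cx, cy):
    """Build the camera matrix K of Equation (8): two focal-length
    parameters (alpha, beta), two offsets (cx, cy), and the skew angle
    theta between the image axes (theta = pi/2 means zero skew)."""
    return np.array([
        [alpha, -alpha / np.tan(theta), cx],
        [0.0,    beta / np.sin(theta),  cy],
        [0.0,    0.0,                   1.0],
    ])

# With theta = 90 degrees the skew term vanishes (up to floating-point
# noise) and K is upper triangular with alpha, beta on the diagonal:
K = camera_matrix(400.0, 400.0, np.pi / 2, 320.0, 240.0)
```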
So far, we have described a mapping from a point P in the 3D camera
reference system to a point P′ in the 2D image plane. But what if the
information about the 3D world is available in a different coordinate system?
Then, we need to include an additional transformation that relates
points in the world reference system to the camera reference system. This
transformation is captured by a rotation matrix R and a translation vector T.
Therefore, given a point in the world reference system Pw, we can compute its
camera coordinates as follows:
$$P = \begin{bmatrix} R & T \\ 0 & 1 \end{bmatrix} P_w \tag{9}$$
$$P' = K \begin{bmatrix} R & T \end{bmatrix} P_w = M P_w \tag{10}$$
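Equations (9) and (10) compose into a short pipeline. The sketch below uses an identity rotation and invented intrinsics; `project` is a hypothetical helper, not a library call:

```python
import numpy as np

def project(K, R, T, Pw):
    """Equation (10): P' = K [R T] P_w, where Pw is a 3D point in the
    world reference system and the result is a homogeneous pixel
    coordinate vector."""
    Rt = np.hstack([R, T.reshape(3, 1)])   # 3x4 extrinsic matrix [R T]
    M = K @ Rt                             # 3x4 projection matrix
    return M @ np.append(Pw, 1.0)

K = np.array([[400.0, 0.0, 320.0],
              [0.0, 400.0, 240.0],
              [0.0,   0.0,   1.0]])
R = np.eye(3)                     # camera axes aligned with the world axes
T = np.array([0.0, 0.0, 2.0])     # world origin 2 units in front of the camera
ph = project(K, R, T, np.array([0.1, -0.1, 0.0]))
uv = ph[:2] / ph[2]               # divide by w to get Euclidean pixels
```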
5 Camera Calibration
To precisely know the transformation from the real 3D world into digital
images requires prior knowledge of many of the camera's intrinsic parameters.
Given an arbitrary camera, we may or may not have access to these
parameters. We do, however, have access to the images the camera takes.
Therefore, can we find a way to deduce them from images? This problem of
estimating a camera's parameters from images is known as camera calibration.
Figure 7: The setup of an example calibration rig.
$$u_i (m_3 P_i) - m_1 P_i = 0$$
$$v_i (m_3 P_i) - m_2 P_i = 0$$
the system of equations becomes
$$u_1 (m_3 P_1) - m_1 P_1 = 0$$
$$v_1 (m_3 P_1) - m_2 P_1 = 0$$
$$\vdots$$
$$u_n (m_3 P_n) - m_1 P_n = 0$$
$$v_n (m_3 P_n) - m_2 P_n = 0$$
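This homogeneous linear system can be solved, up to scale, by stacking the constraints into a matrix and taking the right singular vector associated with the smallest singular value. A sketch of that linear solve (not the notes' full derivation), verified here on synthetic data:

```python
import numpy as np

def solve_projection_matrix(Pw, uv):
    """Estimate the 3x4 projection matrix M from n >= 6 world points
    Pw (n x 3) and pixel observations uv (n x 2). Each correspondence
    contributes the two rows u_i (m3 P_i) - m1 P_i = 0 and
    v_i (m3 P_i) - m2 P_i = 0; the SVD yields the null vector."""
    n = Pw.shape[0]
    Ph = np.hstack([Pw, np.ones((n, 1))])       # homogeneous points, n x 4
    A = np.zeros((2 * n, 12))
    for i in range(n):
        A[2 * i, 0:4] = Ph[i]                   # m1 terms
        A[2 * i, 8:12] = -uv[i, 0] * Ph[i]      # -u_i * m3 terms
        A[2 * i + 1, 4:8] = Ph[i]               # m2 terms
        A[2 * i + 1, 8:12] = -uv[i, 1] * Ph[i]  # -v_i * m3 terms
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 4)                 # smallest-singular-value vector

# Synthetic check: project points with a known M, then recover it.
M_true = np.array([[400.0, 0.0, 320.0, 10.0],
                   [0.0, 400.0, 240.0, 20.0],
                   [0.0,   0.0,   1.0,  5.0]])
rng = np.random.default_rng(0)
Pw = rng.normal(size=(8, 3))
ph = np.hstack([Pw, np.ones((8, 1))]) @ M_true.T
uv = ph[:, :2] / ph[:, 2:3]
M_est = solve_projection_matrix(Pw, uv)
M_est *= M_true[2, 3] / M_est[2, 3]             # remove the arbitrary scale
```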
Solving for the intrinsics gives
$$\rho = \frac{1}{\|a_3\|}$$
$$u_0 = \rho^2 (a_1 \cdot a_3)$$
$$v_0 = \rho^2 (a_2 \cdot a_3)$$
$$\theta = \cos^{-1}\left( \frac{-(a_1 \times a_3) \cdot (a_2 \times a_3)}{\|a_1 \times a_3\| \, \|a_2 \times a_3\|} \right) \tag{15}$$
$$\alpha = \rho^2 \|a_1 \times a_3\| \sin\theta$$
$$\beta = \rho^2 \|a_2 \times a_3\| \sin\theta$$
With radial distortion (distortion factor λ), the projection can be written as the transformation:
$$Q P_i = \begin{bmatrix} \frac{1}{\lambda} & 0 & 0 \\ 0 & \frac{1}{\lambda} & 0 \\ 0 & 0 & 1 \end{bmatrix} M P_i = \begin{bmatrix} u_i \\ v_i \\ 1 \end{bmatrix} = p_i \tag{17}$$
If we try to rewrite this into a system of equations as before, we get
$$u_i (q_3 P_i) = q_1 P_i$$
$$v_i (q_3 P_i) = q_2 P_i$$
This system, however, is no longer linear, and it requires the use of nonlinear
optimization techniques, which are covered in Section 22.2 of Forsyth &
Ponce. We can simplify the nonlinear optimization of the calibration problem
if we make certain assumptions. Under radial distortion, we note that the
ratio between the two coordinates u_i and v_i is not affected. We can compute
this ratio as
$$\frac{u_i}{v_i} = \frac{m_1 P_i / m_3 P_i}{m_2 P_i / m_3 P_i} = \frac{m_1 P_i}{m_2 P_i} \tag{18}$$
Similar to before, this gives a matrix-vector product that we can solve via
SVD:
$$L\,n = \begin{bmatrix} v_1 P_1^T & -u_1 P_1^T \\ \vdots & \vdots \\ v_n P_n^T & -u_n P_n^T \end{bmatrix} \begin{bmatrix} m_1^T \\ m_2^T \end{bmatrix} = 0 \tag{19}$$
Once m₁ and m₂ are estimated, m₃ can be expressed as a nonlinear function
of m₁, m₂, and λ. This requires solving a nonlinear optimization problem
whose complexity is much lower than that of the original one.
Rotating a point in 3D space can be represented by rotating around each
of the three coordinate axes respectively. When rotating around the coordi-
nate axes, common convention is to rotate in a counter-clockwise direction.
One intuitive way to think of rotations is how much we rotate around each
degree of freedom, which is often referred to as Euler angles. However, this
methodology can result in what are known as singularities, or gimbal lock,
in which certain configurations cause the loss of a degree of freedom in the
rotation.
One way to prevent this is to use rotation matrices, which are a more gen-
eral form of representing rotations. Rotation matrices are square, orthogonal
matrices with determinant one. Given a rotation matrix R and a vector v,
we can compute the resulting vector v 0 as
v 0 = Rv
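A small numerical sketch: a counter-clockwise rotation about the k (z) axis, together with the orthogonality and determinant properties just stated. The helper name is mine, not from the notes:

```python
import numpy as np

def rotation_z(theta):
    """Counter-clockwise rotation by theta radians around the z axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

R = rotation_z(np.pi / 2)
v = np.array([1.0, 0.0, 0.0])
v_rot = R @ v                          # rotates the i axis onto j

# Rotation matrices are orthogonal with determinant one:
assert np.allclose(R.T @ R, np.eye(3))
assert np.isclose(np.linalg.det(R), 1.0)
```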
In matrix form, translations can be written using homogeneous coordinates.
If we construct a translation matrix as
$$T = \begin{bmatrix} 1 & 0 & 0 & t_x \\ 0 & 1 & 0 & t_y \\ 0 & 0 & 1 & t_z \\ 0 & 0 & 0 & 1 \end{bmatrix}$$
then we see that P′ = T P is equivalent to P′ = P + t.
If we want to combine translation with our rotation matrix multiplication,
we can again use homogeneous coordinates to our advantage. If we want to
rotate a vector v by R and then translate it by t, we can write the resulting
vector v′ as:
$$\begin{bmatrix} v' \\ 1 \end{bmatrix} = \begin{bmatrix} R & t \\ 0 & 1 \end{bmatrix} \begin{bmatrix} v \\ 1 \end{bmatrix}$$
Finally, if we want to scale the vector along certain directions by some
amounts $S_x$, $S_y$, $S_z$, we can construct a scaling matrix
$$S = \begin{bmatrix} S_x & 0 & 0 \\ 0 & S_y & 0 \\ 0 & 0 & S_z \end{bmatrix}$$
Therefore, if we want to scale a vector, then rotate, then translate, our
final transformation matrix would be:
$$T = \begin{bmatrix} RS & t \\ 0 & 1 \end{bmatrix}$$
Note that all of these types of transformations are examples of affine
transformations. Recall that projective transformations occur when the final
row of T is not $\begin{bmatrix} 0 & 0 & 0 & 1 \end{bmatrix}$.
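The scale-then-rotate-then-translate composition can be checked numerically; the values here are arbitrary examples and the helper name is mine:

```python
import numpy as np

def transform_srt(scales, R, t):
    """Homogeneous matrix that scales by diag(scales), then rotates by
    R, then translates by t; i.e. T = [[R S, t], [0, 1]]."""
    T = np.eye(4)
    T[:3, :3] = R @ np.diag(scales)
    T[:3, 3] = t
    return T

# Scale x by 2, rotate 90 degrees about z, then translate by (1, 0, 0):
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
T = transform_srt([2.0, 1.0, 1.0], R, np.array([1.0, 0.0, 0.0]))
v = np.array([1.0, 0.0, 0.0, 1.0])  # homogeneous point at (1, 0, 0)
v_out = T @ v                       # scaled to (2,0,0), rotated to (0,2,0), moved to (1,2,0)
```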
Figure 8: The weak perspective model: orthogonal projection onto reference
plane
Figure 9: The weak perspective model: projection onto the image plane
and leave it to you as an exercise. The simplification is clearly demonstrated
when mapping the 3D points to the image plane.
$$P' = M P = \begin{bmatrix} m_1 \\ m_2 \\ m_3 \end{bmatrix} P = \begin{bmatrix} m_1 P \\ m_2 P \\ 1 \end{bmatrix} \tag{20}$$
Thus, we see that the image plane point ultimately becomes a magnification
of the original 3D point, irrespective of depth. The nonlinearity of the projec-
tive transformation disappears, making the weak perspective transformation
a mere magnifier.
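The magnification view of weak perspective is one line of code: every point is divided by the same reference depth z₀ rather than by its own depth. A sketch under that assumption, with made-up values:

```python
import numpy as np

def weak_perspective(P, f, z0):
    """Weak perspective projection: all points share the reference
    depth z0, so the map is the uniform magnification m = f / z0
    applied to (x, y), irrespective of each point's own depth."""
    m = f / z0
    return m * np.asarray(P, dtype=float)[:2]

# Two points at slightly different true depths project identically:
a = weak_perspective([1.0, 2.0, 9.8], f=0.5, z0=10.0)
b = weak_perspective([1.0, 2.0, 10.3], f=0.5, z0=10.0)
```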
In the even simpler orthographic projection model, points are mapped directly
to the image plane with no magnification at all:
$$x' = x, \qquad y' = y$$
Orthographic projection models are often used in architecture and industrial
design.
Overall, weak perspective models result in much simpler math, at the
cost of being somewhat imprecise. However, they often yield very accurate
results when the object is small and distant from the camera.