
Mathematical Foundation of Photogrammetry
(part of EE5358)
Dr. Venkat Devarajan
Ms. Kriti Chauhan
8/14/2013 Virtual Environment Lab, UTA 2
Photogrammetry
Formal Definition:
Photogrammetry is the art, science and technology of obtaining reliable
information about physical objects and the environment, through
processes of recording, measuring, and interpreting photographic
images and patterns of recorded radiant electromagnetic energy and
other phenomena.
- As given by the American Society for Photogrammetry and Remote Sensing
(ASPRS)
Photogrammetry is the science or art of obtaining
reliable measurements by means of photographs.
photo = "picture", grammetry = "measurement";
therefore photogrammetry = photo-measurement
Chapter 1
Distinct Areas

Metric Photogrammetry: making precise measurements from photos to
determine the relative locations of points; finding distances, angles,
areas, volumes, elevations, and sizes and shapes of objects.
Most common applications:
1. preparation of planimetric and topographic maps
2. production of digital orthophotos
3. military intelligence, such as targeting

Interpretative Photogrammetry: deals in recognizing and identifying
objects and judging their significance through careful and systematic
analysis. Its two branches are:
1. Photographic Interpretation
2. Remote Sensing (includes the use of multispectral cameras, infrared
cameras, thermal scanners, etc.)
Chapter 1
Uses of Photogrammetry
Products of photogrammetry:
1. Topographic maps: detailed and accurate graphic representation of cultural and
natural features on the ground.
2. Orthophotos: aerial photographs modified so that their scale is uniform throughout.
3. Digital Elevation Models (DEMs): an array of points in an area whose X, Y and Z
coordinates have been determined.

Current Applications:
1. Land surveying
2. Highway engineering
3. Preparation of tax maps, soil maps, forest maps, geologic maps, maps for city
and regional planning and zoning
4. Traffic management and traffic accident investigations
5. Military digital mosaic, mission planning, rehearsal, targeting etc.
Chapter 1
Types of photographs
- Terrestrial
- Aerial
  - Vertical
    - Truly vertical
    - Tilted (1 deg < angle < 3 deg)
  - Oblique
    - High oblique (includes the horizon)
    - Low oblique (does not include the horizon)
Chapter 1
Of all these types of photographs, vertical and low
oblique aerial photographs are of most interest to
us, as they are the ones most extensively used for
mapping purposes.
Aerial Photography
Vertical aerial photographs are taken along parallel passes called flight strips.
The overlap of successive photographs along a flight strip, usually about 60%, is
called end lap.
The area of common coverage is called the stereoscopic overlap area.
Chapter 1
Two overlapping photos are called a stereopair.
Chapter 1
Aerial Photography
The position of the camera at each exposure is called the exposure station.
The altitude of the camera at the time of exposure is called the flying height.
Lateral overlap of adjacent flight strips is called side lap (usually 30%).
The photographs of two or more sidelapping strips used to cover an area are
referred to as a block of photos.
Now, let's examine the acquisition devices for these
photographs.
Camera / Imaging Devices
The general term imaging devices is used to describe instruments used for
primary photogrammetric data acquisition.
Types of imaging devices (based on how the image is formed):
1. Frame sensors/cameras: acquire the entire image simultaneously.
2. Strip cameras, linear array sensors, or pushbroom scanners: sense
only a linear projection (strip) of the field of view at a given time, and
require the device to sweep across the area being imaged to form a 2D image.
3. Flying spot scanners or mechanical scanners: detect only a small
spot at a time, and require movement in two directions (sweep and scan)
to form a 2D image.
Chapter 3
Aerial Mapping Camera
Chapter 3
Aerial mapping cameras are the traditional
imaging devices of photogrammetry.
Let's examine the terms and characteristics associated with a
camera, the parameters of a camera, and how to determine
them.
Focal Plane of Aerial Camera
The focal plane of an aerial camera is the plane in which all incident light rays are
brought to focus.
Focal plane is set as exactly as possible at a distance equal to the focal length behind
the rear nodal point of the camera lens. In practice, the film emulsion rests on the focal
plane.
Chapter 3
Rear nodal point: The emergent
nodal point of a thick combination
lens. (N in the figure)
Note: Principal point is a 2D point
on the image plane. It is the
intersection of optical axis and
image plane.
Fiducials in Aerial Camera
Fiducials are 2D control points whose xy coordinates are precisely and
accurately determined as a part of camera calibration.
Fiducial marks are situated in the middle of the sides of the focal plane
opening or in its corners, or in both locations.
They provide coordinate reference for principal point and image points.
Also allow for correction of film distortion (shrinkage and expansion) since
each photograph contains images of these stable control points.
Lines joining opposite fiducials intersect at a point called the indicated
principal point. Aerial cameras are carefully manufactured so that
this occurs very close to the true principal point.
True principal point: Point in the focal plane where a line from the rear
nodal point of the camera lens, perpendicular to the focal plane,
intersects the focal plane.
Chapter 3
Elements of Interior Orientation
Chapter 3
Elements of interior orientation are the parameters needed to determine accurate spatial
information from photographs. They are as follows:
1. Calibrated focal length (CFL): the focal length that produces an overall mean
distribution of lens distortion.
2. Symmetric radial lens distortion: the symmetric component of distortion that occurs
along radial lines from the principal point. Although usually small, it is theoretically
always present.
3. Decentering lens distortion: distortion that remains after compensating for
symmetric radial lens distortion. Its components are asymmetric radial and tangential
lens distortion.
4. Principal point location: specified by the coordinates of the principal point given wrt
the x and y coordinates of the fiducial marks.
5. Fiducial mark coordinates: the x and y coordinates of the fiducial marks, which provide
the 2D positional reference for the principal point as well as images on the photograph.
The elements of interior orientation are determined through camera calibration.
Other Camera Characteristics
Chapter 3
Other camera characteristics that are often of significance are:

1. Resolution for various distances from the principal point (highest near the
center, lowest at corners of photo)

2. Focal plane flatness: deviation of platen from a true plane. Measured by
a special gauge, generally not more than 0.01mm.

3. Shutter efficiency: ability of shutter to open instantaneously, remain open
for the specified exposure duration, and close instantaneously.
Camera Calibration:
General Approach
Chapter 3
Step 1) Photograph an array of targets whose relative positions
are accurately known.
Step 2) Determine the elements of interior orientation:
- make precise measurements of the target images;
- compare the actual image locations to the positions they should
have occupied had the camera produced a perfect perspective view.
This is the approach followed in most methods.
After determining the interior camera parameters, we
turn to the measurement of image point coordinates on the images.
Photogrammetric Scanners
Chapter 4
Photogrammetric scanners are the devices used to convert the content of
photographs from analog form (a continuous-tone image) to digital form (an
array of pixels with their gray levels quantified by numerical values).
Coordinate measurement on the acquired digital image can be done either
manually, or through automated image-processing algorithms.
Requirements: sufficient geometric and radiometric resolution, and high
geometric accuracy.
Geometric/spatial resolution indicates the pixel size of the resultant image. The
smaller the pixel size, the greater the detail that can be detected in the image. For
high quality photogrammetric scanners, the minimum pixel size is on the order of
5 to 15 μm.
Radiometric resolution indicates the number of quantization levels. The minimum
should be 256 levels (8-bit); most scanners are capable of 1024 levels (10-bit) or higher.
Geometric quality indicates the positional accuracy of pixels in the resultant
image. For high quality scanners, it is around 2 to 3 μm.
Sources of Error in Photo Coordinates
Chapter 4
The following are some of the sources of error that can distort the true photo
coordinates:
1. Film distortions due to shrinkage, expansion and lack of flatness
2. Failure of fiducial axes to intersect at the principal point
3. Lens distortions
4. Atmospheric refraction distortions
5. Earth curvature distortion
6. Operator error in measurements
7. Error made by automated correlation techniques

Now that we have covered the basics of image
acquisition and measurement, we turn to analytical
photogrammetry
Analytical Photogrammetry
Chapter 11
Definition: Analytical photogrammetry is the term used to describe the
rigorous mathematical calculation of coordinates of points in object space
based upon camera parameters, measured photo coordinates and ground
control.

Features of analytical photogrammetry:
- rigorously accounts for any tilts;
- generally involves the solution of large, complex systems of redundant
equations by the method of least squares;
- forms the basis of many modern hardware and software systems, including
stereoplotters, digital terrain model generation, orthophoto production, digital
photo rectification and aerotriangulation.
Image Measurement Considerations
Before using the x and y photo coordinate pair, the following should be
considered:
1. Coordinates (usually in mm) are relative to the principal point, which is the origin.
2. Analytical photogrammetry is based on assumptions such as that light rays
travel in straight lines and that the focal plane of a frame camera is flat.
Coordinate refinements may thus be required to compensate for the sources of
error that violate these assumptions.
3. Measurements must be ensured to have high accuracy.
4. When measuring image coordinates of common points that appear in more
than one photograph, each object point must be precisely identified between
photos so that the measurements are consistent.
5. Object space coordinates are based on a 3D Cartesian system.
Chapter 11
Now, we come to the most fundamental and useful
relationship in analytical photogrammetry, the
collinearity condition
Collinearity Condition
Appendix D
The collinearity condition is illustrated in the figure below. The exposure station
of a photograph, an object point and its photo image all lie along a straight
line. Based on this condition we can develop complex mathematical relationships.
Collinearity Condition Equations
Appendix D

Let:
- the coordinates of the exposure station be X_L, Y_L, Z_L wrt the
object (ground) coordinate system XYZ;
- the coordinates of object point A be X_A, Y_A, Z_A wrt the
ground coordinate system XYZ;
- the coordinates of image point a of object point A be x_a, y_a, z_a
wrt the xy photo coordinate system (of which the principal point o is
the origin; compensation for its offset is applied later);
- the coordinates of image point a be x′_a, y′_a, z′_a in a rotated
image plane x′y′z′ which is parallel to the object coordinate system.

The transformation of (x′_a, y′_a, z′_a) to (x_a, y_a, z_a) is
accomplished using rotation equations, which we derive next.
Rotation Equations
Appendix C

Omega rotation about the x axis:
The new coordinates (x1, y1, z1) of a point (x, y, z), after rotation of the
original coordinate reference frame about the x axis by angle ω, are given by:
x1 = x
y1 = y cos ω + z sin ω
z1 = −y sin ω + z cos ω

Similarly, we obtain equations for the phi rotation about the (once-rotated) y1 axis:
x2 = −z1 sin φ + x1 cos φ
y2 = y1
z2 = z1 cos φ + x1 sin φ

And equations for the kappa rotation about the z2 axis:
x = x2 cos κ + y2 sin κ
y = −x2 sin κ + y2 cos κ
z = z2
Final Rotation Equations
Appendix C

Substituting the equations at each stage, the final coordinates are expressed
directly in terms of the original coordinates (written x′, y′, z′):

x = m11 x′ + m12 y′ + m13 z′
y = m21 x′ + m22 y′ + m23 z′
z = m31 x′ + m32 y′ + m33 z′

In matrix form: X = M X′,
where X = [x, y, z]ᵀ, X′ = [x′, y′, z′]ᵀ, and M is the 3×3 matrix with rows
(m11, m12, m13), (m21, m22, m23), (m31, m32, m33). The m's are functions of the
rotation angles ω, φ and κ.

Properties of the rotation matrix M:
1. The sum of squares of the 3 direction cosines (elements of M) in any row or
column is unity.
2. M is orthogonal, i.e. M⁻¹ = Mᵀ.
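The three sequential rotations can be composed numerically. The sketch below (illustrative Python, not part of the original slides; the function name is our own) builds M from given ω, φ, κ and checks both properties stated above:

```python
import numpy as np

def rotation_matrix(omega, phi, kappa):
    """Sequential omega-phi-kappa rotation matrix M (angles in radians)."""
    so, co = np.sin(omega), np.cos(omega)
    sp, cp = np.sin(phi), np.cos(phi)
    sk, ck = np.sin(kappa), np.cos(kappa)
    # rotation about x by omega: x1 = x, y1 = y*co + z*so, z1 = -y*so + z*co
    Mo = np.array([[1, 0, 0], [0, co, so], [0, -so, co]])
    # rotation about y1 by phi: x2 = -z1*sp + x1*cp, y2 = y1, z2 = z1*cp + x1*sp
    Mp = np.array([[cp, 0, -sp], [0, 1, 0], [sp, 0, cp]])
    # rotation about z2 by kappa: x = x2*ck + y2*sk, y = -x2*sk + y2*ck, z = z2
    Mk = np.array([[ck, sk, 0], [-sk, ck, 0], [0, 0, 1]])
    return Mk @ Mp @ Mo  # applied in the order omega, phi, kappa

M = rotation_matrix(0.02, -0.01, 0.5)
# Property 1: sum of squares of elements in every row and column is unity
assert np.allclose((M**2).sum(axis=0), 1) and np.allclose((M**2).sum(axis=1), 1)
# Property 2: M is orthogonal, so M^-1 = M^T
assert np.allclose(M @ M.T, np.eye(3))
```

The asserts mirror the two listed properties; any product of the three elementary rotations passes them, since each factor is itself orthogonal.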
Coming back to the collinearity condition


Collinearity Equations
Appendix D

Using the property of similar triangles:

x′_a / (X_A − X_L) = y′_a / (Y_A − Y_L) = z′_a / (Z_A − Z_L)

so that

x′_a = [(X_A − X_L) / (Z_A − Z_L)] z′_a
y′_a = [(Y_A − Y_L) / (Z_A − Z_L)] z′_a
z′_a = [(Z_A − Z_L) / (Z_A − Z_L)] z′_a

Substituting these into the rotation formula gives:

x_a = m11 [(X_A − X_L)/(Z_A − Z_L)] z′_a + m12 [(Y_A − Y_L)/(Z_A − Z_L)] z′_a + m13 [(Z_A − Z_L)/(Z_A − Z_L)] z′_a
y_a = m21 [(X_A − X_L)/(Z_A − Z_L)] z′_a + m22 [(Y_A − Y_L)/(Z_A − Z_L)] z′_a + m23 [(Z_A − Z_L)/(Z_A − Z_L)] z′_a
z_a = m31 [(X_A − X_L)/(Z_A − Z_L)] z′_a + m32 [(Y_A − Y_L)/(Z_A − Z_L)] z′_a + m33 [(Z_A − Z_L)/(Z_A − Z_L)] z′_a

Now:
- factor out z′_a / (Z_A − Z_L),
- divide x_a and y_a by z_a,
- add corrections for the offset of the principal point (x_o, y_o), and
- equate z_a = −f,

to get the collinearity equations:

x_a = x_o − f [m11(X_A − X_L) + m12(Y_A − Y_L) + m13(Z_A − Z_L)] / [m31(X_A − X_L) + m32(Y_A − Y_L) + m33(Z_A − Z_L)]

y_a = y_o − f [m21(X_A − X_L) + m22(Y_A − Y_L) + m23(Z_A − Z_L)] / [m31(X_A − X_L) + m32(Y_A − Y_L) + m33(Z_A − Z_L)]


Review of Collinearity Equations
Ch. 11 & App D

Collinearity equations:

x_a = x_o − f [m11(X_A − X_L) + m12(Y_A − Y_L) + m13(Z_A − Z_L)] / [m31(X_A − X_L) + m32(Y_A − Y_L) + m33(Z_A − Z_L)]

y_a = y_o − f [m21(X_A − X_L) + m22(Y_A − Y_L) + m23(Z_A − Z_L)] / [m31(X_A − X_L) + m32(Y_A − Y_L) + m33(Z_A − Z_L)]

where
- x_a, y_a are the photo coordinates of image point a
- X_A, Y_A, Z_A are the object space coordinates of object/ground point A
- X_L, Y_L, Z_L are the object space coordinates of the exposure station
- f is the camera focal length
- x_o, y_o are the offsets of the principal point
- the m's are functions of the rotation angles omega, phi, kappa (as derived earlier)

The collinearity equations are nonlinear and involve 9 unknowns:
1. omega, phi and kappa, inherent in the m's
2. the object point coordinates (X_A, Y_A, Z_A)
3. the exposure station coordinates (X_L, Y_L, Z_L)
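As a sanity check, the collinearity equations can be evaluated directly. The sketch below is illustrative (the function name and numeric values are our own, not from the slides); for a truly vertical photo (M = identity) the equations reduce to the simple scale relation x_a = f(X_A − X_L)/(Z_L − Z_A):

```python
import numpy as np

def collinearity(XA, YA, ZA, XL, YL, ZL, M, f, xo=0.0, yo=0.0):
    """Project ground point A into photo coordinates (x_a, y_a)."""
    dX, dY, dZ = XA - XL, YA - YL, ZA - ZL
    r = M[0, 0]*dX + M[0, 1]*dY + M[0, 2]*dZ  # numerator of the x equation
    s = M[1, 0]*dX + M[1, 1]*dY + M[1, 2]*dZ  # numerator of the y equation
    q = M[2, 0]*dX + M[2, 1]*dY + M[2, 2]*dZ  # common denominator
    return xo - f * r / q, yo - f * s / q

# Vertical photo: camera at (0, 0, 2000) m, ground point at (500, 300, 100) m,
# f = 152 mm, M = identity (omega = phi = kappa = 0)
xa, ya = collinearity(500.0, 300.0, 100.0, 0.0, 0.0, 2000.0,
                      np.eye(3), f=152.0)
# xa = 40.0 mm, ya = 24.0 mm, i.e. scale f/(ZL - ZA) = 152/1900 applied to (500, 300)
```

Note the sign convention: z_a = −f makes the leading minus cancel against the negative denominator (Z_A − Z_L < 0 for a camera above the terrain).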
Now that we know about the collinearity condition, let's
see where we need to apply it.
First, we need to know what it is that we need to find.
Elements of Exterior Orientation
Chapter 10

As already mentioned, the collinearity conditions involve 9 unknowns:
1) exposure station attitude (omega, phi, kappa),
2) exposure station coordinates (X_L, Y_L, Z_L), and
3) object point coordinates (X_A, Y_A, Z_A).

Of these, we first need to compute the position and attitude of the exposure
station, also known as the elements of exterior orientation.

Thus the 6 elements of exterior orientation are:
1) the spatial position (X_L, Y_L, Z_L) of the camera, and
2) the angular orientation (omega, phi, kappa) of the camera.

All methods for determining the elements of exterior orientation of a single
tilted photograph require:
1) photographic images of at least three control points whose X, Y and Z
ground coordinates are known, and
2) the calibrated focal length of the camera.
Elements of Interior Orientation
Chapter 3

As an aside, recall from the earlier discussion that the elements of interior
orientation, which can be determined through camera calibration, are as follows:
1. Calibrated focal length (CFL): the focal length that produces an overall mean
distribution of lens distortion. Better termed the calibrated principal distance,
since it represents the distance from the rear nodal point of the lens to the
principal point of the photograph, which is set as close to the optical focal
length of the lens as possible.
2. Principal point location: specified by the coordinates of the principal point
given wrt the x and y coordinates of the fiducial marks.
3. Fiducial mark coordinates: the x and y coordinates of the fiducial marks, which
provide the 2D positional reference for the principal point as well as images
on the photograph.
4. Symmetric radial lens distortion: the symmetric component of distortion that
occurs along radial lines from the principal point. Although usually small, it is
theoretically always present.
5. Decentering lens distortion: distortion that remains after compensating for
symmetric radial lens distortion. Its components are asymmetric radial and
tangential lens distortion.
Next, we look at space resection, which is used for
determining the camera station coordinates from a
single vertical or low oblique aerial photograph.
Space Resection By Collinearity
Chapter 10 & 11

Space resection by collinearity involves formulating the collinearity equations for a
number of control points whose X, Y, and Z ground coordinates are known and whose
images appear in the vertical/tilted photo.
The equations are then solved for the six unknown elements of exterior orientation that
appear in them.
- Two equations are formed for each control point.
- Three control points (the minimum) give 6 equations and a unique solution, while 4 or
more control points (more than 6 equations) allow a least squares solution (residual
terms will exist).
Since the collinearity equations are nonlinear, they are linearized using Taylor's
theorem, and initial approximations are required for the unknown orientation parameters.

No. of points | No. of equations | Unknown ext. orientation parameters
1 | 2 | 6
2 | 4 | 6
3 | 6 | 6
4 | 8 | 6
Coplanarity Condition

A condition similar to collinearity is coplanarity: the condition that the
two exposure stations of a stereopair, any object point, and its
corresponding image points on the two photos all lie in a common plane.

Like the collinearity equations, the coplanarity equation is nonlinear and must
be linearized using Taylor's theorem. Linearization of the coplanarity equation
is somewhat more difficult than that of the collinearity equations.

However, coplanarity is not used nearly as extensively as collinearity in
analytical photogrammetry. Space resection by collinearity is the only method
still commonly used to determine the elements of exterior orientation.
Initial Approximations
for Space Resection
Chapter 11 & 6

We need initial approximations for all six exterior orientation parameters.

Omega and phi angles: for the typical case of near-vertical photography, the
initial values of omega and phi can be taken as zero.

Z_L (flying height H above the datum plane):
- use an altimeter reading for rough calculations, or
- compute H using a ground line of known length appearing on the photograph.

To compute H, only 2 control points are required; the rest are redundant.
The approximation can be improved by averaging several values of H.
Calculating Flying Height (H)
Chapter 6

Flying height H can be calculated using a ground line of known length that
appears on the photograph.
The ground line should be on fairly level terrain, since a difference in
elevation of its endpoints results in error in the computed flying height.
Accurate results can still be obtained, though, if the images of the endpoints
are approximately equidistant from the principal point of the photograph and on
a line through the principal point.

H can be calculated using the equations for the scale of a photograph:
S = ab/AB = f/H
(scale of a photograph over flat terrain)
or
S = f/(H − h)
(scale of a photograph at any point whose elevation above datum is h)
Photographic Scale
Chapter 6

As an explanation of the equations from which H is calculated:

S = ab/AB = f/H
S_AB = ab/AB = La/LA = Lo/LO = f/(H − h)

where
1) S is the scale of a vertical photograph over flat terrain
2) S_AB is the scale of a vertical photograph over variable terrain
3) ab is the distance between the images of points A and B on the photograph
4) AB is the actual distance between points A and B
5) f is the focal length
6) La is the distance between exposure station L and image a of point A on the
photo positive
7) LA is the distance between exposure station L and point A
8) Lo = f is the distance from L to the principal point o of the photograph
9) LO = H − h is the distance from L to the projection O of o onto the
horizontal plane containing point A, with h being the height of point A above
the datum plane

Note: for vertical photographs taken over variable terrain, there are an
infinite number of different scales.
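The flying-height computation from a ground line can be sketched as follows (the function name and the numbers are illustrative assumptions, not from the slides):

```python
def flying_height(f, ab, AB, h=0.0):
    """Flying height above datum from a ground line of known length.

    f  : focal length (same units as ab, e.g. mm)
    ab : measured photo distance between the line's endpoint images (mm)
    AB : known ground length of the line (same units as H, e.g. m)
    h  : average elevation of the line's endpoints above datum (m)
    """
    # From S = ab/AB = f/(H - h)  =>  H = f*AB/ab + h
    return f * AB / ab + h

# Hypothetical example: 152 mm lens, a 1000 m ground line imaged as 100 mm,
# endpoints at roughly 200 m elevation:
H = flying_height(f=152.0, ab=100.0, AB=1000.0, h=200.0)  # 1720.0 m
```

Averaging H over several such lines, as the slide suggests, improves the initial approximation.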
Initial Approx. for X_L, Y_L and Kappa
Chapter 11

Ground coordinates of any point can be approximated by multiplying its x and y
photo coordinates by the inverse of the photo scale at that point.
This requires knowing:
- f and H, and
- the elevation of the object point (Z or h).

A 2D conformal coordinate transformation (comprising rotation, scale and
translation) can then be performed, relating these ground coordinates computed
from the vertical photo equations to the control values:

X = a·x′ − b·y′ + T_X
Y = a·y′ + b·x′ + T_Y

The (x′, y′) and (X, Y) values are known for n points, giving 2n equations.
The 4 unknown transformation parameters (a, b, T_X, T_Y) can therefore be
calculated by least squares. Essentially, we are running the resection
equations in a diluted mode, with initial values for as many parameters as we
can find, to calculate initial values for those that cannot be easily estimated.

T_X and T_Y are used as initial approximations for X_L and Y_L, respectively,
and the rotation angle θ = tan⁻¹(b/a) is used as the approximation for kappa.
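The least squares step for a, b, T_X, T_Y can be sketched as below (an illustrative helper under our own naming, not course code), stacking two observation equations per control point:

```python
import numpy as np

def conformal_2d(xy, XY):
    """Fit X = a*x' - b*y' + TX, Y = a*y' + b*x' + TY by least squares.

    xy : (n, 2) photo-derived ground coordinates (x', y'), n >= 2
    XY : (n, 2) known control coordinates (X, Y)
    Returns (a, b, TX, TY, kappa) with kappa = atan2(b, a).
    """
    rows, rhs = [], []
    for (x, y), (X, Y) in zip(xy, XY):
        rows.append([x, -y, 1.0, 0.0]); rhs.append(X)  # X equation
        rows.append([y,  x, 0.0, 1.0]); rhs.append(Y)  # Y equation
    a, b, TX, TY = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)[0]
    return a, b, TX, TY, np.arctan2(b, a)

# TX, TY then seed X_L, Y_L, and kappa seeds the rotation angle.
```

With exactly two points the system is square and the solution exact; extra points are adjusted by least squares, matching the slide's 2n-equations argument.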
Space Resection by Collinearity: Summary
Chapter 11
(To determine the 6 elements of exterior orientation using the collinearity condition)

Summary of initializations:
- omega, phi -> 0, 0
- kappa -> theta
- X_L, Y_L -> T_X, T_Y
- Z_L -> flying height H

Summary of steps:
1. Calculate H (Z_L).
2. Compute ground coordinates from the assumed vertical photo for the control points.
3. Compute the 2D conformal coordinate transformation parameters by a least
squares solution using the control points (whose coordinates are known in both
the photo coordinate system and the ground control coordinate system).
4. Form the linearized observation equations.
5. Form and solve the normal equations.
6. Add the corrections and iterate until the corrections become negligible.
If space resection is used to determine the elements of
exterior orientation for both photos of a stereopair, then
object point coordinates for points that lie in the stereo
overlap area can be calculated by the procedure known
as space intersection
Space Intersection By Collinearity
Chapter 11

Use: to determine object point coordinates for points that lie in the stereo
overlap area of two photographs that make up a stereopair.
Principle: corresponding rays to the same object point from the two photos of a
stereopair must intersect at that point.

For a ground point A:
- Collinearity equations are written for image point a1 of the left photo of
the stereopair and for image point a2 of the right photo, giving 4 equations.
- The only unknowns are X_A, Y_A and Z_A.
- Since the equations are linearized using Taylor's theorem, initial
approximations are required for each point whose object space coordinates are
to be computed.
- The initial approximations are determined using the parallax equations.
Parallax Equations
Chapter 8

1) p_a = x_a − x′_a
2) h_A = H − B·f / p_a
3) X_A = B·x_a / p_a
4) Y_A = B·y_a / p_a

where
- h_A is the elevation of point A above datum
- H is the flying height above datum
- B is the air base (distance between the exposure stations)
- f is the focal length of the camera
- p_a is the parallax of point A
- X_A and Y_A are the ground coordinates of point A in the coordinate system
whose origin is at the datum point P of the left photo, whose X axis is in the
same vertical plane as the x and x′ flight axes, and whose Y axis passes
through the datum point of the left photo perpendicular to the X axis
- x_a and y_a are the photo coordinates of point a measured wrt the flight-line
axes on the left photo, and x′_a is the corresponding x coordinate on the
right photo
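The four parallax equations translate directly into code; the sketch below is illustrative (our own function name and made-up numbers), with photo measurements in mm and H, B in meters:

```python
def parallax_coords(xa, ya, xa_right, H, B, f):
    """Elevation and arbitrary-system ground coordinates of point A.

    xa, ya   : photo coordinates of a on the left photo (mm, flight-line axes)
    xa_right : x coordinate of the corresponding image on the right photo (mm)
    H        : flying height above datum (m)
    B        : air base (m)
    f        : focal length (mm)
    """
    pa = xa - xa_right        # 1) parallax of point A (mm)
    hA = H - B * f / pa       # 2) elevation of A above datum (m)
    XA = B * xa / pa          # 3) ground X in the arbitrary system (m)
    YA = B * ya / pa          # 4) ground Y in the arbitrary system (m)
    return pa, hA, XA, YA

# Hypothetical stereopair: B = 600 m, H = 1220 m, f = 152 mm
pa, hA, XA, YA = parallax_coords(50.0, 40.0, -50.0, H=1220.0, B=600.0, f=152.0)
# pa = 100.0 mm, hA = 308.0 m, XA = 300.0 m, YA = 240.0 m
```

The mm units cancel in each ratio, so hA, XA and YA come out in the units of H and B.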
Applying Parallax Equations
to Space Intersection
Chapter 11

For applying the parallax equations, H and B have to be determined. Since the
X, Y, Z coordinates of both exposure stations are known:
- H is taken as the average of Z_L1 and Z_L2, and
- B = [(X_L2 − X_L1)² + (Y_L2 − Y_L1)²]^(1/2)

The resulting coordinates from the parallax equations are in an arbitrary
ground coordinate system. To convert them to, for instance, WGS84, a conformal
coordinate transformation is used.
Now that we know how to determine object space
coordinates of a common point in a stereopair, we can
examine the overall procedure for all the points in the
stereopair...
Analytical Stereomodel
Chapter 11
Aerial photographs for most applications are taken so that adjacent photos overlap by
more than 50%. Two adjacent photographs that overlap in this manner form a
stereopair.
Object points that appear in the overlap area of a stereopair constitute a stereomodel.
The mathematical calculation of 3D ground coordinates of points in the stereomodel by
analytical photogrammetric techniques forms an analytical stereomodel.

The process of forming an analytical stereomodel involves 3 primary steps:
1. Interior orientation (also called photo coordinate refinement): Mathematically
recreates the geometry that existed in the camera when a particular photograph was
exposed.
2. Relative (exterior) orientation: Determines the relative angular attitude and positional
displacement between the photographs that existed when the photos were taken.
3. Absolute (exterior) orientation: Determines the absolute angular attitude and positions
of both photographs.
After these three steps are achieved, points in the analytical stereomodel will have object
coordinates in the ground coordinate system.
Analytical Relative Orientation
Chapter 11

Analytical relative orientation involves defining (assuming) certain elements
of exterior orientation and calculating the remaining ones.

Initialization:
If the parameters are set to the values mentioned (i.e.,
omega1 = phi1 = kappa1 = X_L1 = Y_L1 = 0, Z_L1 = f, X_L2 = b),
then the scale of the stereomodel is approximately equal to the photo scale.

Now:
- the x and y photo coordinates of the left photo are good approximations for
the X and Y object space coordinates, and
- zeros are good approximations for the Z object space coordinates.
Analytical Relative Orientation
Chapter 11

1) All exterior orientation elements of the left photo of the stereopair,
excluding Z_L1, are set to zero.
2) For convenience, Z_L of the left photo (Z_L1) is set to f, and X_L of the
right photo (X_L2) is set to the photo base b.
3) This leaves 5 elements of the right photo that must be determined.
4) Using the collinearity condition, a minimum of 5 object points is required
to solve for the unknowns, since each point used in relative orientation is a
net gain of one equation for the overall solution (its X, Y and Z coordinates
are unknowns too):

No. of points in overlap | No. of equations | No. of unknowns
1 | 4 (2 + 2) | 5 + 3 = 8
2 | 4 + 4 = 8 | 8 + 3 = 11
3 | 8 + 4 = 12 | 11 + 3 = 14
4 | 12 + 4 = 16 | 14 + 3 = 17
5 | 16 + 4 = 20 | 17 + 3 = 20
6 | 20 + 4 = 24 | 20 + 3 = 23
Analytical Absolute Orientation
Chapter 16 & 11

Stereomodel coordinates of tie points are related to their 3D coordinates in a
(real, earth-based) ground coordinate system. For a small stereomodel, such as
that computed from one stereopair, analytical absolute orientation can be
performed using a 3D conformal coordinate transformation.

This requires a minimum of two horizontal and three vertical control points
(20 equations with 8 unknowns plus the 12 exposure station parameters for the
two photos: a closed-form solution). Additional control points provide
redundancy, enabling a least squares solution.

(Horizontal control: the position of the point in object space is known wrt a
horizontal datum. Vertical control: the elevation of the point is known wrt a
vertical datum.)

Once the transformation parameters have been computed, they can be applied to
the remaining stereomodel points, including the X_L, Y_L and Z_L coordinates of
the left and right photographs. This gives the coordinates of all stereomodel
points in the ground system.

Control | No. of equations | No. of additional unknowns | Total no. of unknowns
1 horizontal control point | 2 per photo = 4 total | 1 unknown Z value | 12 exterior orientation parameters + 1 = 13
1 vertical control point | 2 per photo = 4 total | 2 unknown X and Y values | 12 + 2 = 14
2 horizontal control points | 4 × 2 = 8 | 1 × 2 = 2 | 12 + 2 = 14
3 vertical control points | 4 × 3 = 12 | 2 × 3 = 6 | 12 + 6 = 18
2 horizontal + 3 vertical control points | 8 + 12 = 20 | 2 + 6 = 8 | 12 + 8 = 20
As already mentioned while covering camera calibration,
calibration can also be included in a combined
interior-relative-absolute orientation. This is known as
analytical self-calibration.
Analytical Self Calibration
Chapter 11
Analytical self-calibration is a computational process wherein camera calibration
parameters are included in the photogrammetric solution, generally in a
combined interior-relative-absolute orientation.

The process uses collinearity equations that have been augmented with
additional terms to account for adjustment of the calibrated focal length,
principal-point offsets, and symmetric radial and decentering lens distortion.
In addition, the equations might include corrections for atmospheric refraction.


With the inclusion of the extra unknowns, it follows that additional independent
equations will be needed to obtain a solution.
So far we have assumed that a certain amount of
ground control is available to us for use in space
resection, etc. Let's take a look at the acquisition of
these ground control points.
Ground Control
for Aerial Photogrammetry
Chapter 16
Ground control consists of any points
whose positions are known in an object-space coordinate system and
whose images can be positively identified in the photographs.

Classification of photogrammetric control:
1. Horizontal control: the position of the point in object space is known wrt a
horizontal datum
2. Vertical control: the elevation of the point is known wrt a vertical datum

Images of acceptable photo control points must satisfy two requirements:
1. They must be sharp, well defined and positively identified on all photos, and
2. They must lie in favorable locations in the photographs

Photo Control Points
for Aerotriangulation
Chapter 16
The number of ground-surveyed photo control points needed varies with
1. size, shape and nature of area,
2. accuracy required, and
3. procedures, instruments, and personnel to be used.

In general, the denser the ground control, the better the accuracy of the supplemental control determined by aerotriangulation (the thesis of our targeting project!).

There is an optimum number, which affords maximum economic benefit and maintains a
satisfactory standard of accuracy.

The methods used for establishing ground control are:

1. Traditional land surveying techniques
2. Using Global Positioning System (GPS)
Ground Control by GPS
Chapter 16
While GPS is most often used to compute horizontal position, it is capable of
determining vertical position (elevation) to nearly the same level of accuracy.

Static GPS can be used to determine coordinates of unknown points with
errors at the centimeter level.
Note: The computed vertical position will be
related to the ellipsoid, not the geoid or mean
sea level. To relate the GPS-derived
elevation (ellipsoid height) to the more
conventional elevation (orthometric height), a
geoid model is necessary.

However, if the ultimate reference frame is
related to the ellipsoid, this should not pose a
problem.
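The conversion mentioned in the note is a simple subtraction once a geoid model supplies the undulation N at the point. A minimal Python sketch follows; the numeric values are purely illustrative, and in practice N comes from interpolating a published geoid grid:

```python
# Orthometric (conventional) elevation from a GPS ellipsoid height:
#   H = h - N
# where h is the ellipsoid height and N is the geoid undulation
# taken from a geoid model. Values below are illustrative only.

def orthometric_height(h_ellipsoid_m, geoid_undulation_m):
    """Return the orthometric height H = h - N, in meters."""
    return h_ellipsoid_m - geoid_undulation_m

h = 212.40    # GPS-derived ellipsoid height, m (made up)
N = -26.85    # geoid undulation from a geoid model, m (made up)
H = orthometric_height(h, N)
print(round(H, 2))   # 239.25
```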
Having covered processing techniques for single
points, we examine the process at a higher level, for
all the photographs
Aerotriangulation
Chapter 17
Aerotriangulation is the process of determining the X, Y, and Z ground coordinates of individual points based on photo coordinate measurements.
It consists of photo measurement followed by numerical interior, relative, and absolute orientation, from which ground coordinates are computed.
For large projects, the number of control points needed is extensive and the cost can be extremely high. Much of this needed control can be established by aerotriangulation from only a sparse network of field-surveyed ground control.
Using GPS in the aircraft to provide coordinates of the camera can nearly eliminate the need for ground control; in practice, a small amount of ground control is still used to strengthen the solution.
Pass Points for Aerotriangulation
Chapter 17
Pass points are typically selected as 9 points per photo in a 3 x 3 pattern, equally spaced over the photo.
The points may be images of natural, well-defined objects that appear in the required photo areas; if such points are not available, pass points may be artificially marked.
Digital image matching can be used to select points in the overlap areas of digital images and automatically match them between adjacent images. This is an essential step of automatic aerotriangulation.
Analytical Aerotriangulation
Chapter 17
The most elementary approach consists of the following basic steps:

1. relative orientation of each stereomodel
2. connection of adjacent models to form continuous strips and/or
blocks, and
3. simultaneous adjustment of the photos from the strips and/or blocks
to field-surveyed ground control

X and Y coordinates of pass points can be located to an accuracy of
1/15,000 of the flying height, and Z coordinates can be located to an
accuracy of 1/10,000 of the flying height.

With specialized equipment and procedures, planimetric accuracy of
1/350,000 of the flying height and vertical accuracy of 1/180,000
have been achieved.

Analytical Aerotriangulation Technique
Chapter 17
Several variations exist.

Basically, all methods consist of writing equations that express the unknown
elements of exterior orientation of each photo in terms of camera constants,
measured photo coordinates, and ground coordinates.

The equations are solved to determine the unknown orientation parameters
and either simultaneously or subsequently, coordinates of pass points are
calculated.

By far the most common condition equations used are the collinearity
equations.

Analytical procedures like bundle adjustment can simultaneously enforce the collinearity condition on hundreds of photographs.


Simultaneous Bundle Adjustment
Chapter 17
Adjusting all photogrammetric measurements to ground control values
in a single solution is known as a bundle adjustment. The process is so
named because of the many light rays that pass through each lens
position constituting a bundle of rays.

The bundles from all photos are adjusted simultaneously so that
corresponding light rays intersect at positions of the pass points and
control points on the ground.

After the normal equations have been formed, they are solved for the
unknown corrections to the initial approximations for exterior orientation
parameters and object space coordinates.

The corrections are then added to the approximations, and the
procedure is repeated until the estimated standard deviation of unit
weight converges.
Quantities in Bundle Adjustment
Chapter 17
The unknown quantities to be obtained in a bundle adjustment consist of:
1. The X, Y and Z object space coordinates of all object points, and
2. The exterior orientation parameters of all photographs

The observed quantities (measured) associated with a bundle adjustment are:
1. x and y photo coordinates of images of object points,
2. X, Y and/or Z coordinates of ground control points,
3. direct observations of exterior orientation parameters of the photographs.

The first group of observations, photo coordinates, constitutes the fundamental photogrammetric measurements.

The next group of observations is coordinates of control points determined through field
survey.

The final set of observations can be estimated using airborne GPS control system as well
as inertial navigation systems (INSs) which have the capability of measuring the
angular attitude of a photograph.
Consider a small block consisting of 2 strips with 4 photos per strip, with 20
pass points and 6 control points, totaling 26 object points; with 6 of those also
serving as tie points connecting the two adjacent strips.
Bundle Adjustment on a Photo Block
Chapter 17
Bundle Adjustment on a Photo Block
Chapter 17
To repeat, consider a small block consisting of 2 strips with 4 photos per strip, with 20 pass points and 6
control points, totaling 26 object points; with 6 of those also serving as tie points connecting the two
adjacent strips.

In this case,
The number of unknown object coordinates
= no. of object points X no. of coordinates per object point = 26X3 = 78
The number of unknown exterior orientation parameters
= no. of photos X no. of exterior orientation parameters per photo = 8X6 = 48
Total number of unknowns = 78 + 48 = 126

The number of photo coordinate observations
= no. of imaged points X no. of photo coordinates per point = 76 X 2 = 152
The number of ground control observations
= no. of 3D control points X no. of coordinates per point = 6X3 = 18
The number of exterior orientation parameters
= no. of photos X no. of exterior orientation parameters per photo = 8X6 = 48

If all 3 types of observations are included, there will be a total of 152+18+48=218 observations; but if
only the first two types are included, there will be only 152+18=170 observations
Thus, regardless of whether the exterior orientation parameters were observed, a least squares solution is possible, since the number of observations in either case (218 or 170) is greater than the number of unknowns (126).
No. of imaged points = 4 x 8 (photos 1, 4, 5 & 8 have 8 imaged points each) + 4 x 11 (photos 2, 3, 6 & 7 have 11 imaged points each) = 76 point images in total.
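The bookkeeping above is easy to verify in a few lines. A plain-Python sketch; all numbers come from the block described on this slide:

```python
# Count unknowns and observations for the 2-strip, 8-photo block
# (26 object points, 6 of them 3D control points).
n_points, n_photos, n_control = 26, 8, 6

unknown_coords = n_points * 3        # X, Y, Z per object point
unknown_eo     = n_photos * 6        # omega, phi, kappa, XL, YL, ZL per photo
unknowns = unknown_coords + unknown_eo

# 4 corner photos image 8 points each; 4 interior photos image 11 each
imaged_points = 4 * 8 + 4 * 11
photo_obs   = imaged_points * 2      # x and y per imaged point
control_obs = n_control * 3          # X, Y, Z per control point
eo_obs      = n_photos * 6           # if exterior orientation is observed

print(unknowns)                          # 126
print(photo_obs + control_obs + eo_obs)  # 218
print(photo_obs + control_obs)           # 170
```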
The next question is: how are these equations solved?

We start with the observation equations, which are the collinearity condition equations we have already seen; we linearize them, and then use the least squares procedure to find the unknowns.

We will start by refreshing our memories on the least squares solution of an over-determined equation set.
Relevant Definitions
Appendix A & B
Observations are the directly observed (or measured) quantities which
contain random errors.
True Value is the theoretically correct or exact value of a quantity. It can never
be determined, because no matter how accurate, the observation will always
contain small random errors.
Accuracy is the degree of conformity to the true value.
Since true value of a continuous physical quantity can never be known,
accuracy is likewise never known. Therefore, it can only be estimated.
Sometimes, accuracy can be assessed by checking against an independent,
higher accuracy standard.
Precision is the degree of refinement of a quantity.
The level of precision can be assessed by making repeated measurements
and checking the consistency of the values.
If the values are very close to each other, the measurements have high
precision and vice versa.
Relevant Definitions
Error is the difference between any measured quantity and the true value for
that quantity.

MPV = (Σx) / m

where Σx is the sum of the individual measurements, and m is the number of observations.
Appendix A & B
Most probable value is that value for a measured or indirectly determined
quantity which, based upon the observations, has the highest probability.

The MPV of a quantity directly and independently measured having observations of equal
weight is simply the mean.
Types of errors
Random errors (accidental and compensating)
Systematic errors (cumulative; measured and modeled to compensate)
Mistakes or blunders (avoided as far as possible; detected and eliminated)
Relevant Definitions
Residual is the difference between any measured quantity and the most probable
value for that quantity.
It is the value which is dealt with in adjustment computations, since errors are
indeterminate. The term error is frequently used when residual is in fact meant.
Degrees of freedom is the number of redundant observations (those in excess of
the number actually needed to calculate the unknowns).
Weight is the relative worth of an observation compared to any other observation.
Measurements are weighted in adjustment computations according to their
precisions.
Logically, a precisely measured value should be weighted more in an adjustment
so that the correction it receives is smaller than that received by less precise
measurements.
If same equipment and procedures are used on a group of measurements, each
observation is given an equal weight.
Appendix B
Relevant Definitions
Appendix B
Standard deviation (also called root mean square error or 68 percent error) is a
quantity used to express the precision of a group of measurements.
For m number of direct, equally weighted observations of a quantity, its standard
deviation is:

S = ± sqrt( Σv² / r )

where Σv² is the sum of the squares of the residuals, and r is the number of degrees of freedom (r = m - 1).

According to the theory of probability, 68% of the observations in a group should have residuals smaller than the standard deviation.

The area between -S and +S on a Gaussian (normal) distribution curve of the residuals, which is the same as the area between (average - S) and (average + S) on the curve of the measurements, is 68%.
Fundamental Condition of Least Squares
For a group of equally weighted observations, the fundamental condition which
is enforced in least square adjustment is that the sum of the squares of the
residuals is minimized.
Suppose a group of m equally weighted measurements were taken with residuals v1, v2, v3, ..., vm; then:

Σ(i = 1 to m) vi² = v1² + v2² + v3² + ... + vm² = minimum
Basic assumptions underlying least squares theory:
1. Number of observations is large
2. Frequency distribution of the errors is normal (gaussian)
Appendix B
Applying Least Squares
Steps:
1) Write observation equations (one for each measurement) relating
measured values to their residual errors and the unknown
parameters.
2) Obtain equation for each residual error from corresponding
observation.
3) Square and add residuals
4) To minimize Σv², take partial derivatives with respect to each unknown variable and set them equal to zero.
5) This gives a set of equations called normal equations which are
equal in number to the number of unknowns.
6) Solve normal equations to obtain the most probable values for the
unknowns.
Appendix B
Least Squares Example Problem

Let:
AB be a line segment,
C divide AB into 2 parts of lengths X and Y,
D be the midpoint of AC, i.e. AD = DC = x,
E and F trisect CB, i.e. CE = EF = FB = y.

Corresponding observation equations:
x + 3y = 10.1 + v1
x + 2y = 6.9 + v2
2y = 6.2 + v3
2x + y = 4.8 + v4

In this least squares problem, the coefficients of the unknowns in the observation equations take values other than zero and unity.

We have 4 observation equations (m = 4) in 2 variables/unknowns (n = 2).
Take Σv² and differentiate partially w.r.t. the unknowns to get 2 equations in 2 unknowns. The solution gives the most probable values of x and y.

Note:
If D is not the exact midpoint, and E and F do not trisect CB into exactly equal parts, the actual x and y values may differ from segment to segment.
We only get the most probable values for x and y!
Formulating Equations

Step 1) Observation equations (one for each measurement, including a residual for each observation):
x + 3y = 10.1 + v1
x + 2y = 6.9 + v2
2y = 6.2 + v3
2x + y = 4.8 + v4

Step 2) Equation for each residual error from the corresponding observation:
v1 = x + 3y - 10.1
v2 = x + 2y - 6.9
v3 = 2y - 6.2
v4 = 2x + y - 4.8

Step 3) Square and add the residuals:
Σv² = v1² + v2² + v3² + v4²
    = (x + 3y - 10.1)² + (x + 2y - 6.9)² + (2y - 6.2)² + (2x + y - 4.8)²
Normal Equations and Solution

Step 4) Taking partial derivatives of Σv²:
∂(Σv²)/∂x = 2(x + 3y - 10.1) + 2(x + 2y - 6.9) + 0 + 2(2x + y - 4.8)·2
∂(Σv²)/∂y = 2(x + 3y - 10.1)·3 + 2(x + 2y - 6.9)·2 + 2(2y - 6.2)·2 + 2(2x + y - 4.8)

Normal equations (setting the derivatives to zero):
2(x + 3y - 10.1) + 2(x + 2y - 6.9) + 2(2x + y - 4.8)·2 = 0
2(x + 3y - 10.1)·3 + 2(x + 2y - 6.9)·2 + 2(2y - 6.2)·2 + 2(2x + y - 4.8) = 0

Simplified normal equations:
12x + 14y - 53.2 = 0
14x + 36y - 122.6 = 0

Step 5) Solving, in matrix form:

[12 14] [x]   [ 53.2]          [6  7] [x]   [26.6]
[14 36] [y] = [122.6]   i.e.   [7 18] [y] = [61.3]

which gives x = 0.8424 and y = 3.0780.
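The hand computation can be checked with a short plain-Python sketch that solves the simplified normal equations by Cramer's rule:

```python
# Solve the simplified normal equations from the example:
#   12x + 14y = 53.2
#   14x + 36y = 122.6
# by Cramer's rule, and verify the most probable values.
a11, a12, b1 = 12.0, 14.0, 53.2
a21, a22, b2 = 14.0, 36.0, 122.6

det = a11 * a22 - a12 * a21       # 12*36 - 14*14 = 236
x = (b1 * a22 - a12 * b2) / det   # most probable value of x
y = (a11 * b2 - b1 * a21) / det   # most probable value of y

print(round(x, 4), round(y, 4))   # 0.8424 3.078
```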
General Form of Observation Equations

m linear observation equations of equal weight containing n unknowns (equations I):

a11 X1 + a12 X2 + a13 X3 + ... + a1n Xn = L1 + v1
a21 X1 + a22 X2 + a23 X3 + ... + a2n Xn = L2 + v2
a31 X1 + a32 X2 + a33 X3 + ... + a3n Xn = L3 + v3
...
am1 X1 + am2 X2 + am3 X3 + ... + amn Xn = Lm + vm

where:
Xj : the unknowns
aij : coefficients of the unknown Xj's
Li : observations
vi : residuals

For m < n: underdetermined set of equations.
For m = n: the solution is unique.
For m > n: m - n observations are redundant, and least squares can be applied to find MPVs.

Appendix B
General Form of Normal Equations

At Step 1 we have m equations in n unknowns. At the end of Step 4 we have n normal equations in n unknowns (equations II):

(Σ ai1·ai1) X1 + (Σ ai1·ai2) X2 + (Σ ai1·ai3) X3 + ... + (Σ ai1·ain) Xn = Σ ai1·Li
(Σ ai2·ai1) X1 + (Σ ai2·ai2) X2 + (Σ ai2·ai3) X3 + ... + (Σ ai2·ain) Xn = Σ ai2·Li
(Σ ai3·ai1) X1 + (Σ ai3·ai2) X2 + (Σ ai3·ai3) X3 + ... + (Σ ai3·ain) Xn = Σ ai3·Li
...
(Σ ain·ai1) X1 + (Σ ain·ai2) X2 + (Σ ain·ai3) X3 + ... + (Σ ain·ain) Xn = Σ ain·Li

where every sum Σ runs over i = 1 to m.

Appendix B
Matrix Forms of Equations

Equations I (observation equations) in matrix form:

A X = L + V
(m x n)(n x 1) = (m x 1) + (m x 1)

where:

    [a11 a12 a13 ... a1n]        [X1 ]        [L1 ]        [v1 ]
    [a21 a22 a23 ... a2n]        [X2 ]        [L2 ]        [v2 ]
A = [a31 a32 a33 ... a3n]    X = [X3 ]    L = [L3 ]    V = [v3 ]
    [... ... ... ... ...]        [...]        [...]        [...]
    [am1 am2 am3 ... amn]        [Xn ]        [Lm ]        [vm ]

Equations II (normal equations) in matrix form:

(A^T A) X = A^T L   =>   X = (A^T A)^-1 (A^T L)

Appendix B
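The same example in this matrix form, as a sketch using numpy (assumed available; the course itself does not prescribe any particular software):

```python
import numpy as np

# Observation equations A X = L + V for the earlier line-segment example.
A = np.array([[1.0, 3.0],
              [1.0, 2.0],
              [0.0, 2.0],
              [2.0, 1.0]])
L = np.array([10.1, 6.9, 6.2, 4.8])

# Normal equations (A^T A) X = A^T L give the most probable values.
X = np.linalg.solve(A.T @ A, A.T @ L)
print(np.round(X, 4))   # x ~ 0.8424, y ~ 3.0780
```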
Standard Deviation of Residuals

The observation equations in matrix form give the residuals:

V = A X - L

Standard deviation of unit weight for an unweighted adjustment:

S0 = sqrt( V^T V / r )

Standard deviations of the adjusted quantities:

SXi = S0 · sqrt( QXiXi )

where:
r is the number of degrees of freedom and equals the number of observations minus the number of unknowns, i.e. r = m - n
SXi is the standard deviation of the ith adjusted quantity, i.e., the quantity in the ith row of the X matrix
S0 is the standard deviation of unit weight
QXiXi is the element in the ith row and ith column of the matrix (A^T A)^-1 in the unweighted case, or of the matrix (A^T W A)^-1 in the weighted case

Appendix B
Standard Deviations in Example

For our example problem:

        [1 3]          [10.1]
        [1 2]          [ 6.9]          [0.8424]
    A = [0 2]      L = [ 6.2]      X = [3.0780]
        [2 1]          [ 4.8]

    V = A X - L = [-0.0237  0.0983  -0.0441  -0.0373]^T

    S0 = sqrt( V^T V / r ) = sqrt( 0.01356 / 2 ) = 0.0823

With Q = (A^T A)^-1 and A^T A = [6 7; 7 18], the standard deviations of x and y are:

    Sx = S0 · sqrt(Qxx) = 0.0823 · sqrt(18/59) = 0.0455
    Sy = S0 · sqrt(Qyy) = 0.0823 · sqrt(6/59) = 0.0263
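The residuals and the standard deviation of unit weight follow directly from the matrix formulas (a numpy sketch, with the matrices from the example above):

```python
import numpy as np

A = np.array([[1.0, 3.0], [1.0, 2.0], [0.0, 2.0], [2.0, 1.0]])
L = np.array([10.1, 6.9, 6.2, 4.8])
X = np.linalg.solve(A.T @ A, A.T @ L)

V = A @ X - L                   # residuals
r = A.shape[0] - A.shape[1]     # degrees of freedom: m - n = 2
S0 = np.sqrt(V @ V / r)         # standard deviation of unit weight
print(np.round(S0, 4))          # ~ 0.0823
```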
Linearization of our Non-linear Equation Set

Our least squares solution was for a linear set of equations.
Remember that all our photogrammetric equations contain sines, cosines, etc.
We need to linearize them, using a Taylor series expansion.
Review of Collinearity Equations

Collinearity equations:

xa = xo - f · [ m11(XA - XL) + m12(YA - YL) + m13(ZA - ZL) ] / [ m31(XA - XL) + m32(YA - YL) + m33(ZA - ZL) ]

ya = yo - f · [ m21(XA - XL) + m22(YA - YL) + m23(ZA - ZL) ] / [ m31(XA - XL) + m32(YA - YL) + m33(ZA - ZL) ]

where:
xa, ya are the photo coordinates of image point a
XA, YA, ZA are the object space coordinates of object/ground point A
XL, YL, ZL are the object space coordinates of the exposure station location
f is the camera focal length
xo, yo are the coordinates of the principal point
the m's are functions of the rotation angles omega, phi, kappa (as derived earlier)

The collinearity equations are nonlinear and involve 9 unknowns:
1. omega, phi, kappa, inherent in the m's
2. object point coordinates (XA, YA, ZA)
3. exposure station coordinates (XL, YL, ZL)

Ch. 11 & App D
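The equations above can be evaluated numerically. The sketch below is a minimal Python implementation, assuming the sequential omega-phi-kappa rotation convention derived earlier in the course; all numeric values in the demonstration are made up:

```python
import math

def rotation_matrix(omega, phi, kappa):
    """3x3 rotation matrix M from omega, phi, kappa (radians),
    using the sequential omega-phi-kappa convention."""
    co, so = math.cos(omega), math.sin(omega)
    cp, sp = math.cos(phi),   math.sin(phi)
    ck, sk = math.cos(kappa), math.sin(kappa)
    return [
        [ cp*ck,  co*sk + so*sp*ck,  so*sk - co*sp*ck],
        [-cp*sk,  co*ck - so*sp*sk,  so*ck + co*sp*sk],
        [ sp,    -so*cp,             co*cp           ],
    ]

def collinearity(f, x0, y0, omega, phi, kappa, XL, YL, ZL, XA, YA, ZA):
    """Photo coordinates (xa, ya) of ground point A."""
    m = rotation_matrix(omega, phi, kappa)
    dX, dY, dZ = XA - XL, YA - YL, ZA - ZL
    r = m[0][0]*dX + m[0][1]*dY + m[0][2]*dZ
    s = m[1][0]*dX + m[1][1]*dY + m[1][2]*dZ
    q = m[2][0]*dX + m[2][1]*dY + m[2][2]*dZ
    return x0 - f * r / q, y0 - f * s / q

# Vertical photo (all angles zero) over a point straight below the
# exposure station: the image should fall at the principal point.
print(collinearity(152.0, 0.0, 0.0, 0, 0, 0, 500, 500, 1000, 500, 500, 0))
```

A point offset 100 m in X at the same flying height images at xa = 15.2 mm with this geometry, which matches the expected scale f/H = 152/1000.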
Linearization of Collinearity Equations

Rewriting the collinearity equations:

F = xo - f (r/q) = xa
G = yo - f (s/q) = ya

where:
r = m11(XA - XL) + m12(YA - YL) + m13(ZA - ZL)
s = m21(XA - XL) + m22(YA - YL) + m23(ZA - ZL)
q = m31(XA - XL) + m32(YA - YL) + m33(ZA - ZL)

Applying Taylor's theorem to these equations (using only up to first-order partial derivatives), we get the linearized forms.

Appendix D
Linearized Collinearity Equations Terms

F0 + (∂F/∂ω)0 dω + (∂F/∂φ)0 dφ + (∂F/∂κ)0 dκ
   + (∂F/∂XL)0 dXL + (∂F/∂YL)0 dYL + (∂F/∂ZL)0 dZL
   + (∂F/∂XA)0 dXA + (∂F/∂YA)0 dYA + (∂F/∂ZA)0 dZA = xa

G0 + (∂G/∂ω)0 dω + (∂G/∂φ)0 dφ + (∂G/∂κ)0 dκ
   + (∂G/∂XL)0 dXL + (∂G/∂YL)0 dYL + (∂G/∂ZL)0 dZL
   + (∂G/∂XA)0 dXA + (∂G/∂YA)0 dYA + (∂G/∂ZA)0 dZA = ya

where:
F0, G0 are the functions F and G evaluated at the initial approximations for the 9 unknowns;
(∂F/∂ω)0, (∂F/∂φ)0, (∂G/∂ω)0, (∂G/∂φ)0, etc. are the partial derivatives of F and G with respect to the indicated unknowns, evaluated at the initial approximations;
dω, dφ, dκ, etc. are the unknown corrections to be applied to the initial approximations (angles are in radians).

Appendix D
Simplified Linearized Collinearity Equations

Since the photo coordinates xa and ya are measured values, if the equations are to be used in a least squares solution, residual terms must be included to make the equations consistent.
The following simplified forms of the linearized collinearity equations include these residuals:

b11 dω + b12 dφ + b13 dκ - b14 dXL - b15 dYL - b16 dZL + b14 dXA + b15 dYA + b16 dZA = J + v_xa

b21 dω + b22 dφ + b23 dκ - b24 dXL - b25 dYL - b26 dZL + b24 dXA + b25 dYA + b26 dZA = K + v_ya

where J = xa - F0, K = ya - G0, and the b's are coefficients equal to the partial derivatives.

In linearization using Taylor's series, higher-order terms are ignored; hence these equations are approximations.
They are solved iteratively, until the magnitudes of the corrections to the initial approximations become negligible.

Chapter 11
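The b coefficients are analytic partial derivatives, but any such derivative can be spot-checked numerically. The sketch below uses a toy two-unknown function in place of F; in the real case one would differentiate the collinearity equations with respect to the 9 unknowns:

```python
import math

# Toy nonlinear observation function of two unknowns, standing in for
# F(omega, phi, kappa, ...). The real b coefficients are the analytic
# partials of the collinearity equations.
def F(u, v):
    return u * math.cos(v) + v * v

def partial(f, args, k, h=1e-6):
    """Central-difference estimate of df/d(args[k])."""
    hi = list(args); hi[k] += h
    lo = list(args); lo[k] -= h
    return (f(*hi) - f(*lo)) / (2 * h)

u0, v0 = 2.0, 0.0                 # initial approximations
b1 = partial(F, (u0, v0), 0)      # dF/du at (u0, v0); analytically cos(0) = 1
b2 = partial(F, (u0, v0), 1)      # dF/dv at (u0, v0); analytically -u*sin(0) + 2v = 0
print(round(b1, 4), round(b2, 4))
```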
We need to generalize and rewrite the linearized collinearity conditions in matrix form.
While looking at the collinearity condition, we were only concerned with one object space point (point A).
Let's first generalize, and then express the equations in matrix form.
Generalizing Collinearity Equations

The observation equations which are the foundation of a bundle adjustment are the collinearity equations:

xij = xo - f · [ m11i(Xj - XLi) + m12i(Yj - YLi) + m13i(Zj - ZLi) ] / [ m31i(Xj - XLi) + m32i(Yj - YLi) + m33i(Zj - ZLi) ]

yij = yo - f · [ m21i(Xj - XLi) + m22i(Yj - YLi) + m23i(Zj - ZLi) ] / [ m31i(Xj - XLi) + m32i(Yj - YLi) + m33i(Zj - ZLi) ]

where:
xij, yij are the measured photo coordinates of the image of point j on photo i, related to the fiducial axis system
Xj, Yj, Zj are the coordinates of point j in object space
XLi, YLi, ZLi are the coordinates of the eyepoint of the camera for photo i
f is the camera focal length
xo, yo are the coordinates of the principal point
m11i, m12i, ..., m33i are the rotation matrix terms for photo i

These nonlinear equations involve 9 unknowns per point-photo pair: omega, phi, kappa inherent in the m's, the object point coordinates (Xj, Yj, Zj), and the exposure station coordinates (XLi, YLi, ZLi).

Ch. 11 & App D
Linearized Equations in Matrix Form

Ḃij Δ̇i + B̈ij Δ̈j = εij + Vij

where:

Ḃij = [ b11ij b12ij b13ij b14ij b15ij b16ij ]
      [ b21ij b22ij b23ij b24ij b25ij b26ij ]

B̈ij = [ b14ij b15ij b16ij ]
      [ b24ij b25ij b26ij ]

Δ̇i = [ dωi  dφi  dκi  dXLi  dYLi  dZLi ]^T

Δ̈j = [ dXj  dYj  dZj ]^T

εij = [ Jij  Kij ]^T

Vij = [ v_xij  v_yij ]^T

Matrix Ḃij contains the partial derivatives of the collinearity equations with respect to the exterior orientation parameters of photo i, evaluated at the initial approximations.
Matrix B̈ij contains the partial derivatives of the collinearity equations with respect to the object space coordinates of point j, evaluated at the initial approximations.
Matrix Δ̇i contains the corrections for the initial approximations of the exterior orientation parameters of photo i.
Matrix Δ̈j contains the corrections for the initial approximations of the object space coordinates of point j.
Matrix εij contains the measured minus computed x and y photo coordinates for point j on photo i.
Matrix Vij contains the residuals for the x and y photo coordinates.

Ch. 17
Coming to the actual observations in the observation
equations (collinearity conditions), first we consider the
photo coordinate observations, then ground control and
finally exterior orientation parameters
Weights of Photo Coordinate Observations

Proper weights must be assigned to photo coordinate observations in order for them to be included in the bundle adjustment.
Expressed in matrix form, the weights for the x and y photo coordinate observations of point j on photo i are:

Wij = σ0² · [ σ²xij   σxyij ]^-1
            [ σxyij   σ²yij ]

where σ0² is the reference variance; σ²xij and σ²yij are the variances in xij and yij, respectively; and σxyij is the covariance of xij with yij.

The reference variance is an arbitrary parameter which can be set equal to 1, and in many cases the covariance in photo coordinates is equal to zero.
In this case, the weight matrix for photo coordinates simplifies to:

Wij = [ 1/σ²xij   0       ]
      [ 0         1/σ²yij ]

Ch. 17
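Forming the 2 x 2 weight matrix from an assumed photo coordinate covariance matrix can be sketched as follows (numpy assumed; the sigma values are purely illustrative):

```python
import numpy as np

sigma0_sq = 1.0                     # reference variance, arbitrarily set to 1
sx, sy, sxy = 0.005, 0.005, 0.0     # std devs (mm) and covariance, illustrative

cov = np.array([[sx**2, sxy],
                [sxy,   sy**2]])
W = sigma0_sq * np.linalg.inv(cov)  # weight matrix for one image point
print(W)   # diagonal 1/sigma^2 entries when the covariance is zero
```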
Ground Control

Observation equations for ground control coordinates are:

Xj = Xj^00 + vXj
Yj = Yj^00 + vYj
Zj = Zj^00 + vZj

where Xj, Yj, Zj are the unknown coordinates of point j; Xj^00, Yj^00, Zj^00 are the measured coordinate values for point j; and vXj, vYj, vZj are the coordinate residuals for point j.

Even though the ground control observation equations are linear, in order to be consistent with the collinearity equations they are also approximated by the first-order terms of Taylor's series:

Xj^0 + dXj = Xj^00 + vXj
Yj^0 + dYj = Yj^00 + vYj
Zj^0 + dZj = Zj^00 + vZj

where Xj^0, Yj^0, Zj^0 are the initial approximations for the coordinates of point j, and dXj, dYj, dZj are the corrections to those approximations.

Rearranging the terms and expressing in matrix form:

Δ̈j = C̈j + V̈j

where:

Δ̈j = [ dXj  dYj  dZj ]^T

C̈j = [ Xj^00 - Xj^0   Yj^00 - Yj^0   Zj^00 - Zj^0 ]^T

V̈j = [ vXj  vYj  vZj ]^T

Ch. 17
Weights of Ground Control Observations

As with photo coordinate measurements, proper weights must be assigned to ground control coordinate observations in order for them to be included in the bundle adjustment. Expressed in matrix form, the weights for the X, Y and Z ground control coordinate observations of point j are:

Ẅj = σ0² · [ σ²Xj    σXjYj   σXjZj ]^-1
            [ σXjYj   σ²Yj    σYjZj ]
            [ σXjZj   σYjZj   σ²Zj  ]

where:
σ0² is the reference variance
σ²Xj, σ²Yj and σ²Zj are the variances in Xj^00, Yj^00 and Zj^00, respectively
σXjYj is the covariance of Xj^00 with Yj^00
σYjZj is the covariance of Yj^00 with Zj^00
σXjZj is the covariance of Xj^00 with Zj^00
(Xj^00, Yj^00 and Zj^00 are the measured coordinate values for point j)

Ch. 17
Exterior Orientation Parameters

The final type of observation consists of measurements of the exterior orientation parameters. The form of their observation equations is similar to that of ground control:

ωi = ωi^00 + vωi      XLi = XLi^00 + vXLi
φi = φi^00 + vφi      YLi = YLi^00 + vYLi
κi = κi^00 + vκi      ZLi = ZLi^00 + vZLi

where the ^00 values are the measured parameters and the v's are their residuals.

The weight matrix for the exterior orientation parameters of photo i has the form:

Ẇi = σ0² · Σi^-1

where Σi is the 6 x 6 covariance matrix of the measured parameters (ωi^00, φi^00, κi^00, XLi^00, YLi^00, ZLi^00), with the variances σ²ωi, σ²φi, σ²κi, σ²XLi, σ²YLi, σ²ZLi on the diagonal and the corresponding covariances off the diagonal.

Ch. 17
Now that we have all our observation equations and the observations, the next step in applying least squares is to form the normal equations.
Normal Equations

With the observation equations and weights defined as previously, the full set of normal equations may be formed directly.
In matrix form, the full normal equations are:

N Δ = K

where m is the number of photos, n is the number of points, i is the photo subscript, and j is the point subscript. Δ stacks the corrections Δ̇1, ..., Δ̇m for the photos, followed by the corrections Δ̈1, ..., Δ̈n for the points. The submatrices of N and K are:

Ṅi  = Σ(j = 1..n) Ḃij^T Wij Ḃij        (6 x 6, one block per photo)
N̈j  = Σ(i = 1..m) B̈ij^T Wij B̈ij        (3 x 3, one block per point)
N̄ij = Ḃij^T Wij B̈ij                     (6 x 3, photo-point coupling blocks)

K̇i  = Σ(j = 1..n) Ḃij^T Wij εij + Ẇi Ċi
K̈j  = Σ(i = 1..m) B̈ij^T Wij εij + Ẅj C̈j

The diagonal partition of N consists of the blocks Ṅi + Ẇi (one per photo) followed by the blocks N̈j + Ẅj (one per point); the off-diagonal partitions contain the coupling blocks N̄ij and their transposes. If point j does not appear on photo i, the corresponding submatrix is a zero matrix.

The contributions Ẇi to the N matrix and Ẇi Ċi to the K matrix are made only when observations for the exterior orientation parameters exist. The contributions Ẅj to the N matrix and Ẅj C̈j to the K matrix are made only for ground control point observations.

Ch. 17
Now that we have the equations ready to solve, we can solve them with the initial approximations and iterate until the solutions no longer change in value.
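The iterate-until-corrections-are-negligible pattern can be sketched on a deliberately tiny one-unknown problem (everything here is made up, standing in for the linearized collinearity equations):

```python
import math

# Observations of y = exp(t) with noise; estimate t iteratively.
obs = [2.70, 2.75, 2.71]
t = 0.5                               # initial approximation

for _ in range(20):
    # Linearize y = exp(t) about the current t: dy/dt = exp(t).
    a = math.exp(t)                    # coefficient of the correction dt
    misclosure = [y - math.exp(t) for y in obs]
    # Least squares for a single unknown: dt = (a^T l) / (a^T a)
    dt = sum(a * l for l in misclosure) / (len(obs) * a * a)
    t += dt
    if abs(dt) < 1e-10:                # corrections negligible -> stop
        break

print(round(t, 4))                     # ~ 1.0006 (= ln of the observations' mean)
```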
In aerial photography, if GPS is used to determine
the coordinates for exposure stations, we can
include those in the bundle adjustment and reduce
the amount of ground control that is required
Bundle Adjustment with GPS control
Chapter 17
Using GPS in aircraft to estimate coordinates of the exposure stations in the
adjustment can greatly reduce the number of ground control points
required.
Considerations while using GPS control:
1. Object space coordinates obtained by GPS pertain to the phase center of
the antenna but the exposure station is defined as the incident nodal point
of the camera lens.
2. The GPS recorder records data at uniform time intervals called epochs
(which may be on the order of 1s each), but the camera shutter operates
asynchronously wrt the GPS fixes.
3. If a GPS receiver operating in the kinematic mode loses lock on too many
satellites, the integer ambiguities must be redetermined.
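Consideration 2, the shutter firing between GPS epochs, is typically handled by interpolating the antenna position to the exposure time. A minimal sketch follows; the function name, epoch times, and coordinate values are illustrative only, not taken from the text:

```python
# Hypothetical sketch: linearly interpolate the GPS antenna position to the
# camera exposure time, which falls between two 1-s GPS epochs.

def interpolate_position(t_exposure, epochs):
    """epochs: time-sorted list of (time_s, (X, Y, Z)) GPS fixes.
    Returns the (X, Y, Z) position at t_exposure by linear interpolation."""
    for (t0, p0), (t1, p1) in zip(epochs, epochs[1:]):
        if t0 <= t_exposure <= t1:
            f = (t_exposure - t0) / (t1 - t0)  # interpolation fraction in [0, 1]
            return tuple(a + f * (b - a) for a, b in zip(p0, p1))
    raise ValueError("exposure time outside the span of the GPS fixes")

# Example: exposure at t = 100.4 s, between fixes recorded at 100 s and 101 s
fixes = [(100.0, (1000.0, 2000.0, 3000.0)),
         (101.0, (1060.0, 2010.0, 3000.0))]
pos = interpolate_position(100.4, fixes)   # close to (1024.0, 2004.0, 3000.0)
```

In practice a higher-order fit over several epochs may be used, and the antenna-to-lens offset (consideration 1) must still be applied to the interpolated position.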
Additional Precautions
regarding Airborne GPS
Chapter 17
First, it is recommended that a bundle adjustment with analytical self-calibration
be employed when airborne GPS control is used.
Often, due to inadequate modeling of atmospheric refraction distortion, strict
enforcement of the calibrated principal distance (focal length) of the camera will
cause distortions and excessive residuals in photo coordinates. Use of
analytical self-calibration will essentially eliminate that effect.
Second, it is essential that appropriate object space coordinate systems be
employed in data reduction.
GPS coordinates in a geocentric coordinate system should be converted to local
vertical coordinates for the adjustment. After aerotriangulation is completed, the
local vertical coordinates can be converted to whatever system is desired.
Though all our discussion so far has been for aerial
photography, satellite images can also be used for
mapping
In fact, since the launch of IKONOS, QuickBird, and
OrbView-3 satellites, rigorous photogrammetric processing
methods similar to those of aerial imagery, such as block
adjustment used to solve aerial blocks totaling hundreds or
even thousands of images, are routinely being applied to
high-resolution satellite image blocks.
Aerotriangulation with Satellite Images
Chapter 17 &
Dial, Grodecki (2002)
These satellites use linear sensor arrays that scan an image strip while the satellite orbits.

Each scan line of the scene has its own set of exterior orientation parameters, with the principal point at the center of the line.

The start position is the projection of the center of row 0 (of an image with m columns and n rows) on the ground.

Since the satellite is highly stable during acquisition of the image, the exterior orientation parameters can be assumed to vary in a systematic fashion.

In addition, satellite image data providers supply Rational Polynomial Camera (RPC) coefficients, making it possible to block adjust imagery described by an RPC model.
Aerotriangulation with Satellite Images
Chapter 17
The exterior orientation parameters vary systematically as functions of the x coordinate:

ω_x = ω_0 + a1·x;  φ_x = φ_0 + a2·x;  κ_x = κ_0 + a3·x;
X_Lx = X_L0 + a4·x;  Y_Lx = Y_L0 + a5·x;  Z_Lx = Z_L0 + a6·x + a7·x²

Here,
x is the row number of some image position;
ω_x, φ_x, κ_x, X_Lx, Y_Lx, Z_Lx are the exterior orientation parameters of the sensor when row x was acquired;
ω_0, φ_0, κ_0, X_L0, Y_L0, Z_L0 are the exterior orientation parameters of the sensor at the start position; and
a1 through a7 are coefficients which describe the systematic variations of the exterior orientation parameters as the image is acquired.
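As a worked illustration of this systematic model, the exterior orientation at any scan row x follows directly from the start-position parameters and a1 through a7. The numeric values below are invented purely for illustration:

```python
# Hypothetical sketch: evaluate the systematic exterior orientation model
# for scan row x, given start-position parameters and coefficients a1..a7.

def eo_at_row(x, start, a):
    """Exterior orientation of the sensor when row x was acquired.
    start: EO parameters at the start position (row 0).
    a: coefficients a[1]..a[7] of the systematic model (a[0] unused)."""
    return {
        "omega": start["omega"] + a[1] * x,
        "phi":   start["phi"]   + a[2] * x,
        "kappa": start["kappa"] + a[3] * x,
        "XL":    start["XL"]    + a[4] * x,
        "YL":    start["YL"]    + a[5] * x,
        "ZL":    start["ZL"]    + a[6] * x + a[7] * x**2,  # only ZL has the quadratic term
    }

# Invented numbers: roughly 0.7 m of along-track motion per row, tiny drifts
start = {"omega": 0.0, "phi": 0.0, "kappa": 0.0,
         "XL": 0.0, "YL": 0.0, "ZL": 680000.0}
a = [None, 1e-9, 1e-9, 1e-9, 0.7, 0.01, -1e-4, 1e-9]
eo = eo_at_row(1000, start, a)   # EO parameters when row 1000 was acquired
```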
This procedure of aerotriangulation, however, can only be performed
at the ground station by the image providers who have access to the
physical camera model.
For users wishing to block adjust imagery with their own proprietary ground control, or for other reasons, the image providers supply the images with RPCs.
Introduction to RPCs
The RPC camera model is the ratio of two cubic functions
of latitude, longitude, and height.
RPC models transform 3D object-space coordinates
into 2D image-space coordinates.
RPC models have traditionally been used for
rectification and feature extraction and have recently
been extended to block adjustment.
Let's look at the formal RPC mathematical model. We start by defining the domain of the functional model and its normalization, and then go on to define the actual functions.
RPC Mathematical Model
Separate rational functions are used to express the object-space to line and the object-space to sample coordinate relationships.

Assume that (φ, λ, h) are the geodetic latitude, longitude, and height above the WGS84 ellipsoid (in degrees, degrees, and meters, respectively) of a ground point, and (Line, Sample) are the denormalized image-space coordinates of the corresponding image point.

To improve numerical precision, image-space and object-space coordinates are normalized to <-1, +1>.

Given the object-space coordinates (φ, λ, h) and the latitude, longitude, and height offsets and scale factors, we can normalize latitude, longitude, and height:

P = (φ − LAT_OFF) / LAT_SCALE
L = (λ − LONG_OFF) / LONG_SCALE
H = (h − HEIGHT_OFF) / HEIGHT_SCALE

The normalized line and sample image-space coordinates (Y and X, respectively) are then calculated from their respective rational polynomial functions f(.) and g(.).
Definition of RPC Coefficients

Y = f(φ, λ, h) = Num_L(P,L,H) / Den_L(P,L,H) = c^T·u / d^T·u
X = g(φ, λ, h) = Num_S(P,L,H) / Den_S(P,L,H) = e^T·u / f^T·u

where,

Num_L(P,L,H) = c1 + c2·L + c3·P + c4·H + c5·L·P + c6·L·H + c7·P·H + c8·L² + c9·P² + c10·H² + c11·P·L·H + c12·L³ + c13·L·P² + c14·L·H² + c15·L²·P + c16·P³ + c17·P·H² + c18·L²·H + c19·P²·H + c20·H³

Den_L(P,L,H) = 1 + d2·L + d3·P + d4·H + d5·L·P + d6·L·H + d7·P·H + d8·L² + d9·P² + d10·H² + d11·P·L·H + d12·L³ + d13·L·P² + d14·L·H² + d15·L²·P + d16·P³ + d17·P·H² + d18·L²·H + d19·P²·H + d20·H³

Num_S(P,L,H) and Den_S(P,L,H) have the same 20-term form, with coefficients e1 through e20 and 1, f2 through f20, respectively.

There are 78 rational polynomial coefficients in all.

In vector form,

u = [1 L P H LP LH PH L² P² H² PLH L³ LP² LH² L²P P³ PH² L²H P²H H³]^T
c = [c1 c2 … c20]^T;  d = [1 d2 … d20]^T;  e = [e1 e2 … e20]^T;  f = [1 f2 … f20]^T

The denormalized RPC models for image j are then given by:

Line = p(φ, λ, h) = f(φ, λ, h) · LINE_SCALE + LINE_OFF
Sample = r(φ, λ, h) = g(φ, λ, h) · SAMPLE_SCALE + SAMPLE_OFF
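The whole chain of normalization, rational polynomial evaluation, and denormalization can be sketched as follows. The toy coefficient set is a placeholder standing in for the 78 RPCs supplied in real image metadata:

```python
# Hypothetical sketch of RPC evaluation: object space (lat, lon, h) -> image
# (Line, Sample). The coefficients c, d, e, f and the offsets/scales below are
# invented placeholders, not real IKONOS values.

def monomials(P, L, H):
    """The 20-term basis vector u, in the order used by the RPC model."""
    return [1, L, P, H, L*P, L*H, P*H, L**2, P**2, H**2, P*L*H,
            L**3, L*P**2, L*H**2, L**2*P, P**3, P*H**2, L**2*H, P**2*H, H**3]

def rpc_project(lat, lon, h, rpc):
    # normalize object-space coordinates to roughly <-1, +1>
    P = (lat - rpc["LAT_OFF"]) / rpc["LAT_SCALE"]
    L = (lon - rpc["LONG_OFF"]) / rpc["LONG_SCALE"]
    H = (h - rpc["HEIGHT_OFF"]) / rpc["HEIGHT_SCALE"]
    u = monomials(P, L, H)
    dot = lambda v: sum(a * b for a, b in zip(v, u))
    Y = dot(rpc["c"]) / dot(rpc["d"])   # normalized line
    X = dot(rpc["e"]) / dot(rpc["f"])   # normalized sample
    # denormalize to image coordinates
    line = Y * rpc["LINE_SCALE"] + rpc["LINE_OFF"]
    sample = X * rpc["SAMPLE_SCALE"] + rpc["SAMPLE_OFF"]
    return line, sample

# Toy RPC: line tracks latitude (c3 = 1), sample tracks longitude (e2 = 1)
toy = {"LAT_OFF": 30.0, "LAT_SCALE": 0.1, "LONG_OFF": -90.0, "LONG_SCALE": 0.1,
       "HEIGHT_OFF": 100.0, "HEIGHT_SCALE": 500.0,
       "LINE_OFF": 5000.0, "LINE_SCALE": 5000.0,
       "SAMPLE_OFF": 5000.0, "SAMPLE_SCALE": 5000.0,
       "c": [0.0, 0.0, 1.0] + [0.0] * 17, "d": [1.0] + [0.0] * 19,
       "e": [0.0, 1.0] + [0.0] * 18, "f": [1.0] + [0.0] * 19}
line, sample = rpc_project(30.05, -89.95, 100.0, toy)   # both near 7500
```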
RPC Block Adjustment Model

The proposed RPC block adjustment math model is defined in the image space. It uses the denormalized RPC models, p and r, to express the object-space to image-space relationship, and the adjustable functions, Δp and Δr, which are added to the rational functions to capture the discrepancies between the nominal and the measured image-space coordinates.

For each image point i on image j, the RPC block adjustment math model is thus defined as follows:

Line_i^(j) = p^(j)(φ_k, λ_k, h_k) + Δp^(j) + ε_Li
Sample_i^(j) = r^(j)(φ_k, λ_k, h_k) + Δr^(j) + ε_Si

where
Line_i^(j) and Sample_i^(j) are the measured (on image j) line and sample coordinates of the ith image point, corresponding to the kth ground control or tie point with object space coordinates (φ_k, λ_k, h_k);
Δp^(j) and Δr^(j) are the adjustable functions expressing the differences between the measured and the nominal line and sample coordinates of ground control and/or tie points, for image j;
ε_Li and ε_Si are random unobservable errors; and
p^(j) and r^(j) are the given line and sample denormalized RPC models for image j.
RPC Block Adjustment Model
The following is a general polynomial model, defined in the domain of image coordinates, to represent the adjustable functions Δp and Δr:

Δp = a0 + aS·Sample + aL·Line + aSL·Sample·Line + aL2·Line² + aS2·Sample² + …
Δr = b0 + bS·Sample + bL·Line + bSL·Sample·Line + bL2·Line² + bS2·Sample² + …

The following truncated polynomial model, defined in the domain of image coordinates, is proposed to represent the adjustable functions:

Δp = a0 + aS·Sample + aL·Line
Δr = b0 + bS·Sample + bL·Line
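Applying the truncated adjustable functions to nominal RPC-projected coordinates is then a small affine correction in image space. A sketch, with parameter values invented for illustration:

```python
# Hypothetical sketch: apply the truncated adjustable functions
#   delta_p = a0 + aS*Sample + aL*Line   (line correction)
#   delta_r = b0 + bS*Sample + bL*Line   (sample correction)
# to nominal RPC-projected image coordinates.

def adjusted_image_coords(line_nom, sample_nom, params):
    a0, aS, aL, b0, bS, bL = params
    delta_p = a0 + aS * sample_nom + aL * line_nom
    delta_r = b0 + bS * sample_nom + bL * line_nom
    return line_nom + delta_p, sample_nom + delta_r

# Offset-only case (aS = aL = bS = bL = 0): a pure bias in image space
adj = adjusted_image_coords(7500.0, 6200.0, (1.5, 0.0, 0.0, -2.0, 0.0, 0.0))
# adj == (7501.5, 6198.0)
```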
Multiple overlapping images can be block adjusted using the
RPC adjustment.
The overlapping images, with RPC models expressing the
object-space to image-space relationship for each image, are
tied together by tie points
Optionally, the block may also have ground control points with
known or approximately known object-space coordinates and
measured image positions.
Because there is only one set of observation equations per
image point, index i uniquely identifies that set.
RPC Block Adjustment Algorithm
Thus, observation equations are formed for each image point i.

The measured image-space coordinates for each image point i (Line_i^(j) and Sample_i^(j)) constitute the adjustment model observables, while the image model parameters (a0^(j), aS^(j), aL^(j), b0^(j), bS^(j), bL^(j)) and the object space coordinates (φ_k, λ_k, h_k) comprise the unknown adjustment model parameters.

RPC Block Adjustment Algorithm

For the kth ground control or tie point measured as the ith image point on the jth image, the RPC block adjustment equations are:

F_Li = Δp^(j) + p^(j)(φ_k, λ_k, h_k) − Line_i^(j) + ε_Li = 0
F_Si = Δr^(j) + r^(j)(φ_k, λ_k, h_k) − Sample_i^(j) + ε_Si = 0    (observation equations)

with:

Δp^(j) = a0^(j) + aS^(j)·Sample_i^(j) + aL^(j)·Line_i^(j)
Δr^(j) = b0^(j) + bS^(j)·Sample_i^(j) + bL^(j)·Line_i^(j)

where the Sample_i^(j) and Line_i^(j) values appearing in Δp and Δr are approximate fixed values for the true image coordinates. Since the true image coordinates are not known, the values of the measured image coordinates are used instead. The effect of using approximate values is negligible because measurements of image coordinates are performed with sub-pixel accuracy.
RPC Block Adjustment Algorithm

The observation equations can be written as:

F_i = [F_Li, F_Si]^T

Applying a Taylor series expansion to the RPC block adjustment observation equations results in the following linearized model:

F_i0 + dF_i + ε_i = 0,  or equivalently  dF_i + ε_i = −F_i0 = w_Pi

where F_i0 is F_i evaluated at the approximate parameter values (denoted by superscript 0), dF_i is the total differential of F_i with respect to the unknown parameters, and the misclosure vector is

w_Pi = −F_i0 = [ Line_i^(j) − a0^(j0) − aS^(j0)·Sample_i^(j) − aL^(j0)·Line_i^(j) − p^(j)(φ_k0, λ_k0, h_k0) ;
Sample_i^(j) − b0^(j0) − bS^(j0)·Sample_i^(j) − bL^(j0)·Line_i^(j) − r^(j)(φ_k0, λ_k0, h_k0) ]
RPC Block Adjustment Algorithm

dx = x − x0 is the vector of unknown corrections to the approximate model parameters x0;
dx_A is the sub-vector of the corrections to the approximate image adjustment parameters for the n images;
dx_G is the sub-vector of the corrections to the approximate object space coordinates for the m ground control and p tie points;
x0 is the vector of the approximate model parameters; and
ε is a vector of unobservable random errors.

The total differential is

dF_i = [dF_Li ; dF_Si] = [ ∂F_Li/∂x_A^T  ∂F_Li/∂x_G^T ; ∂F_Si/∂x_A^T  ∂F_Si/∂x_G^T ]|_x0 · [dx_A ; dx_G] = [A_Ai  A_Gi] · [dx_A ; dx_G]

with

dx = [dx_A ; dx_G]
dx_A = [da0^(1) daS^(1) daL^(1) db0^(1) dbS^(1) dbL^(1) … da0^(n) daS^(n) daL^(n) db0^(n) dbS^(n) dbL^(n)]^T
dx_G = [dφ_1 dλ_1 dh_1 … dφ_{m+p} dλ_{m+p} dh_{m+p}]^T
RPC Block Adjustment Algorithm

C_w: the a priori covariance matrix of the vector of misclosures, w
A_A: the first-order design matrix for the image adjustment parameters
A_G: the first-order design matrix for the object space coordinates

The sub-matrices of the design matrices for image point i are

A_Ai = [ 0 … 0  1  Sample_i^(j)  Line_i^(j)  0  0  0  0 … 0 ;
0 … 0  0  0  0  1  Sample_i^(j)  Line_i^(j)  0 … 0 ]

(with the non-zero entries in the columns of image j's parameters), and

A_Gi = [ 0 … 0  ∂F_Li/∂φ_k  ∂F_Li/∂λ_k  ∂F_Li/∂h_k  0 … 0 ;
0 … 0  ∂F_Si/∂φ_k  ∂F_Si/∂λ_k  ∂F_Si/∂h_k  0 … 0 ]|_x0

(with the non-zero entries in the columns of ground point k's coordinates).

As a consequence of the previous reductions, the RPC block adjustment model in matrix form reads:

[ A_A  A_G ; I  0 ; 0  I ] · [dx_A ; dx_G] + [ε_P ; ε_A ; ε_G] = [w_P ; w_A ; w_G],  or  A·dx + ε = w

with C_w = diag(C_P, C_A, C_G).
RPC Block Adjustment Algorithm

w_P is the vector of misclosures for the image-space coordinates; w_Pi is the sub-vector of misclosures for the image-space coordinates of the ith image point on the jth image:

w_P = [w_P1 ; w_P2 ; …]
w_Pi = [ Line_i^(j) − a0^(j0) − aS^(j0)·Sample_i^(j) − aL^(j0)·Line_i^(j) − p^(j)(φ_k0, λ_k0, h_k0) ;
Sample_i^(j) − b0^(j0) − bS^(j0)·Sample_i^(j) − bL^(j0)·Line_i^(j) − r^(j)(φ_k0, λ_k0, h_k0) ]

w_A = 0 is the vector of misclosures for the image adjustment parameters;
w_G = 0 is the vector of misclosures for the object space coordinates;
C_P is the a priori covariance matrix of image-space coordinates;
C_A is the a priori covariance matrix of the image adjustment parameters; and
C_G is the a priori covariance matrix of object-space coordinates.
A Priori Constraints
This block adjustment model allows the introduction of a priori information using the Bayesian estimation approach, which blurs the distinction between observables and unknowns; both are treated as random quantities.

In the context of least squares, a priori information is introduced in the form of weighted constraints. A priori uncertainty is expressed by C_A, C_P, and C_G.

C_A: uncertainty of a priori knowledge of the image adjustment parameters.
In an offset-only model, the diagonal elements of C_A (the variances of a0 and b0) express the uncertainty of a priori satellite attitude and ephemeris.

C_P: prior knowledge of image-space coordinates for ground control and tie points.
Line and sample variances in C_P are set according to the accuracy of the image measurement process.

C_G: prior knowledge of object-space coordinates for ground control and tie points.
In the absence of any prior knowledge of the object coordinates for tie points, the corresponding entries in C_G can be made large (like 10,000 m) to produce no significant bias. One could also remove the weighted constraints for object coordinates of tie points from the observation equations, but being able to introduce prior information for the object coordinates of tie points adds flexibility.
RPC Block Adjustment Algorithm

Since the math model is non-linear, the least squares solution needs to be iterated until convergence is achieved. At each iteration step, application of the least squares principle results in the following vector of estimated corrections to the approximate values of the model parameters:

dx̂ = (A^T C_w^{−1} A)^{−1} A^T C_w^{−1} w

At the subsequent iteration step, the vector of approximate model parameters x0 is replaced by the estimated values:

x̂ = x0 + dx̂

The least squares estimation is repeated until convergence is reached. The covariance matrix of the estimated model parameters is:

C_x̂ = (A^T C_w^{−1} A)^{−1}
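The iterative solution can be sketched generically with numpy. The toy problem below is linear, so it converges immediately; a real RPC adjustment would rebuild A and w from the relinearized observation equations at each step. All names and numbers here are illustrative:

```python
import numpy as np

# Hypothetical sketch of the weighted least squares update
#   dx = (A^T Cw^-1 A)^-1 A^T Cw^-1 w
# iterated until the corrections become negligible.

def solve_step(A, Cw, w):
    Cw_inv = np.linalg.inv(Cw)
    N = A.T @ Cw_inv @ A               # normal matrix
    return np.linalg.solve(N, A.T @ Cw_inv @ w)

def iterate(build_A_w, x0, Cw, tol=1e-8, max_iter=20):
    x = x0.copy()
    for _ in range(max_iter):
        A, w = build_A_w(x)            # relinearize at the current estimate
        dx = solve_step(A, Cw, w)
        x = x + dx
        if np.linalg.norm(dx) < tol:   # convergence test on the corrections
            break
    return x

# Toy problem: observations y = 2 + 3t with unequal a priori variances
t = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 + 3.0 * t
Cw = np.diag([1.0, 1.0, 0.25, 0.25])
build = lambda x: (np.column_stack([np.ones_like(t), t]),
                   y - (x[0] + x[1] * t))   # misclosure w = observed - modeled
x_hat = iterate(build, np.zeros(2), Cw)     # converges to [2, 3]
```

The covariance of the estimate follows the same pattern: `np.linalg.inv(A.T @ Cw_inv @ A)`.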
Experimental Results
Project located in Mississippi, with 6 stereo strips and 40 well-distributed GCPs.
Each of the 12 source images was produced as a georectified image with RPCs.
The images were then loaded onto a Socet SET workstation running the RPC block adjustment
model.
Multiple well-distributed tie-points were measured along the edges of the images.
Ground points were selectively changed between control and check points to quantify block
adjustment accuracy as a function of the number and distribution of GCPs.
The block adjustment results were obtained using a simple two-parameter, offset-only model with a priori values for a0 and b0 of 0 pixels and an a priori standard deviation of 10 pixels.

GCP configuration | Avg. error, longitude (m) | Avg. error, latitude (m) | Avg. error, height (m) | Std. dev., longitude (m) | Std. dev., latitude (m) | Std. dev., height (m)
None | -5.0 | 6.2 | 1.6 | 0.97 | 1.08 | 2.02
1 in center | -2.0 | 0.5 | -1.1 | 0.95 | 1.07 | 2.02
3 on edge | -0.4 | 0.3 | 0.2 | 0.97 | 1.06 | 1.96
4 in corners | -0.2 | 0.3 | 0.0 | 0.95 | 1.06 | 1.95
All 40 GCPs | 0.0 | 0.0 | 0.0 | 0.55 | 0.75 | 0.50
When all 40 GCPs are used, the ground control overwhelms the tie points and the a priori constraints, thus,
effectively adjusting each strip separately such that it minimizes control point errors on that individual strip.
RPC - Conclusion
The RPC camera model provides a simple, fast and accurate representation of the Ikonos physical camera model.

If the a priori knowledge of exposure station position and angles permits a small angle approximation, then adjustment of the exterior orientation reduces to a simple bias in image space.

Due to the high accuracy of IKONOS, even without ground control, block adjustment can be accomplished in the image space.

RPC models are equally applicable to a variety of imaging systems and so could become a standardized representation of their image geometry.

From simulation and numerical examples, it is seen that this method is as accurate as the ground station block adjustment with the physical camera model.
Finally, let's review all the topics that we have covered.
Summary
The mathematical concepts covered today were:
1. Least squares adjustment (formulating observation equations and reducing
to normal equations)
2. Collinearity condition equations (derivation and linearization)
3. Space Resection (finding exterior orientation parameters)
4. Space Intersection (finding object space coordinates of common point in
stereopair)
5. Analytical Stereomodel (interior, relative and absolute orientation)
6. Ground control for Aerial photogrammetry
7. Aerotriangulation
8. Bundle adjustment (adjusting all photogrammetric measurements to ground
control values in a single solution): conventional and RPC-based
Terms
Some of the terminology can cause confusion. For instance, pass points and tie points mean the same thing; (ground) control points are tie points whose coordinates in the object space/ground control coordinate system are known; and check points are points that are treated as tie points in the adjustment but whose actual ground coordinates are very accurately known.
Below are some more terms used in photogrammetry, along with their brief
descriptions:
1. stereopair: two adjacent photographs that overlap by more than 50%
2. space resection: finding the 6 elements of exterior orientation
3. space intersection: finding object point coordinates for points in stereo
overlap
4. stereomodel: object points that appear in the overlap area of a stereopair
5. analytical stereopair: 3D ground coordinates of points in stereomodel,
mathematically calculated using analytical photogrammetric techniques
6. interior orientation: photo coordinate refinement, including corrections
for film distortions, lens distortion, atmospheric refraction, etc.
7. relative orientation: relative angular attitude and positional
displacement of two photographs.
8. absolute orientation: exposure station orientations related to a ground
based coordinate system.
9. aerotriangulation: determination of X, Y and Z ground coordinates of
individual points based on photo measurements.
10. bundle adjustment: adjusting all photogrammetric measurements to
ground control values in a single solution
11. horizontal tie points: tie points whose X and Y coordinates are known.
12. vertical tie points: tie points whose Z coordinate is known.
Terms
Software Products Available
There is a variety of software solutions available in the market today to perform all the
functionalities that we have seen today. The following is a list of a few of them:

1. ERDAS IMAGINE (http://gi.leica-geosystems.com): ERDAS Imagine photogrammetry suite has
all of the basic photogrammetry tools like block adjustment, orthophoto creation, metric and
non-metric camera support, and satellite image support for SPOT, Ikonos, and others. It is
perhaps one of the most popular photogrammetric tools currently.
2. ESPA (http://www.espasystems.fi): ESPA is a desktop software aimed at digital aerial
photogrammetry and airborne Lidar processing.
3. Geomatica (http://www.pcigeomatics.com/geomatica/demo.html): PCI Geomatics' Geomatica
offers a single integrated environment for remote sensing, GIS, photogrammetry,
cartography, web and development tools. A demo version of the software is also available at
their website.
4. Image Station (http://www.intergraph.com): Intergraph's Z/I Imaging ImageStation comprises
modules like Photogrammetric Manager, Model Setup, Digital Mensuration, Automatic
Triangulation, Stereo Display, Feature Collection, DTM Collection, Automatic Elevations,
ImageStation Base Rectifier, OrthoPro, PixelQue, Image Viewer, Image Analyst.
5. INPHO (http://www.inpho.de): INPHO is an end-to-end photogrammetric systems supplier.
INPHO's portfolio covers the entire workflow of photogrammetric projects, including aerial
triangulation, stereo compilation, terrain modeling, orthophoto production and image capture.
6. iWitness (http://www.iwitnessphoto.com): iWitness from DeChant Consulting Services is a
close-range photogrammetry software system that has been developed for accident
reconstruction and forensic measurement.
Software Products Available
7. (Aerosys) OEM Pak (http://www.aerogeomatics.com/aerosys/products.html): This free package
from Aerosys offers the exact same features as its Pro Version, except that the bundle adjustment is
limited to a maximum of 15 photos.
8. PHOTOMOD (http://www.racurs.ru/?page=94): PHOTOMOD, a software family from Racurs,
Russia, comprises products for photogrammetric processing of remote sensing data which allow
extraction of geometrically accurate spatial information from almost any commercially available
type of imagery.
9. PhotoModeler (http://www.photomodeler.com/downloads/default.htm): PhotoModeler, the software
program from Eos Systems, allows you to create 3D models and measurements from photographs
with export capabilities to 3D Studio 3DS, Wavefront OBJ, OpenNURBS/Rhino, RAW, Maya Script
format, and Google Earth's KML and KMZ, etc.
10. SOCET SET (http://www.socetgxp.com): This is a digital photogrammetry software application from
BAE Systems. SOCET SET works with the latest airborne digital sensors and includes innovative
point-matching algorithms for multi-sensor triangulation. SOCET-SET used to be the standard by
which all other photogrammetry packages were measured against.
11. SUMMIT EVOLUTION (http://www.datem.com/support/download.html):Summit Evolution is the
digital photogrammetric workstation from DAT/EM, released in April 2001 at the ASPRS Conference.
The features of the software include subpixel functionality, support for different orientation methods
and various formats.
12. Vr Mapping Software (http://www.cardinalsystems.net): Vr Mapping Software Suite includes
modules for 2D/3D collection and editing, stereo softcopy, orthophoto rectification, aerial
triangulation, bundle adjustment, ortho mosaicing, volume computation, etc.
Open Source Software Solutions
There are three separate modules for relative orientation (relor.exe), space
resection (resect.exe) and 3D conformal coordinate transformation
(3DCONF.exe) available at: http://www.surv.ufl.edu/wolfdewitt/download.html
Another open source program is DGAP, a program for General Analytical
Positioning that can be found at:
http://www.ifp.uni-stuttgart.de/publications/software/openbundle/index.en.html
References
1. Wolf, Dewitt: Elements of Photogrammetry, McGraw Hill, 2000
2. Dial, Grodecki: Block Adjustment with Rational Polynomial
Camera Models, ACSM-ASPRS 2002 Annual Conference
Proceedings, 2002
3. Grodecki, Dial: Block Adjustment of High-Resolution Satellite
Images described by Rational Polynomials, PE&RS Jan 2003
4. Wikipedia
5. Other online resources
6. Software reviews from:
http://www.gisdevelopment.net/downloads/photo/index.htm and
http://www.gisvisionmag.com/vision.php?article=200202%2Freview.html