
GEO 2204 : Photogrammetry I

Magemeso Ibrahim
Email: imagemeso@gmail.com
Aerial photos and image interpretation
• Aerial Photos
• Balloon photography (1858)
• Pigeon cameras (1903)
• Kite photography (1890)
• Aircraft (WWI and WWII)
• Space (1947)
Aerial photography
• Uses a camera attached to an aerial platform
• Traditionally, images were recorded on film (film cameras)
• Panchromatic films (sensitive to the entire visible range)
• Multiband photography is possible by using color filters: what we need is several cameras, each with its own filter
• Black and white infrared film (sensitive to the visible and infrared portions of the EM spectrum)
Aerial photography

• True colour films – sensitive to the blue, green and red portions of the EM spectrum
• Infrared colour films – sensitive to the green, red and infrared portions of the EM spectrum. These are sometimes called camouflage detection films; they were developed during WWII to help distinguish natural vegetation from camouflage.
Digital photography

• Uses charge-coupled devices (CCDs) instead of film to record images.
• Each CCD detector records energy from the surface, which is converted into a digital number, as sketched below.
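As a rough illustration of that conversion (a simplified sketch, not the actual sensor electronics), the measured energy can be mapped onto an 8-bit digital number by simple scaling and quantization; the radiometric range used below is an assumed example value.

```python
# Illustrative sketch only: quantizing measured at-sensor radiance to a digital number (DN).
# The radiometric limits (l_min, l_max) and the 8-bit range are assumed example values.

def radiance_to_dn(radiance, l_min=0.0, l_max=250.0, bits=8):
    """Map a radiance value onto the integer DN range [0, 2**bits - 1]."""
    levels = 2 ** bits - 1
    scaled = (radiance - l_min) / (l_max - l_min)   # normalise to 0..1
    scaled = min(max(scaled, 0.0), 1.0)             # clip to the sensor's range
    return round(scaled * levels)

print(radiance_to_dn(125.0))   # mid-range radiance -> DN of about 128
```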
Elements of Image Interpretation
Shadow
• Shadow reduction is of concern in remote
sensing/photogrammetry because shadows tend to
obscure objects that might otherwise be detected.
• However, the shadow cast by an object may be the only
real clue to its identity.
• Shadows can also provide information on the height of
an object either qualitatively or quantitatively.
Elements of Image Interpretation
Tone and Color
• A band of EMR recorded by a remote sensing instrument can
be displayed on an image in shades of gray ranging from black
to white.
• These shades are called “tones”, and can be qualitatively
referred to as dark, light, or intermediate (humans can see 40-
50 tones).
• Tone is related to the amount of light reflected from the scene
in a specific wavelength interval (band).
Elements of Image Interpretation
Texture:
• Texture refers to the arrangement of tone or color in an
image.
• Useful because Earth features that exhibit similar tones
often exhibit different textures.
• Adjectives include smooth (uniform, homogeneous),
intermediate, and rough (coarse, heterogeneous).
Elements of Image Interpretation
Pattern:
• Pattern is the spatial arrangement of objects on the
landscape.
• General descriptions include random and systematic; natural
and human-made.
• More specific descriptions include circular, oval, curvilinear,
linear, radiating, rectangular, etc.
Elements of Image Interpretation
Height and Depth:
• As discussed, shadows can often offer clues to the height of
objects.
• In turn, relative heights can be used to interpret objects.
• In a similar fashion, relative depths can often be interpreted.
• Descriptions include tall, intermediate, and short; deep,
intermediate, and shallow.
Elements of Image Interpretation
Association:
• This is very important when trying to interpret an object or
activity. Association refers to the fact that certain features
and activities are almost always related to the presence of
certain other features and activities.
Stereoscopy

Content
• Vision and depth of perception
• Stereoscopy and stereoscopes
• Parallax measurement
• Height measurement from parallax
• Photographic interpretation using stereoscopes
Vision and depth of perception
Monoscopic Vision
Vision with one eye. The depth of objects in the field of view is perceived using depth cues (hints on depth).

Binocular Vision
Vision with two eyes. The depth of objects in the field of view is perceived using stereoscopy.
Vision and depth of perception
Monoscopic Depth Perception (Depth cues)
• Relative size of objects
• Hidden objects
• Shadows
• Placement of objects against foreshortened objects
• Differences in focusing of the eye for objects at different distances
• Amount of detail visible on objects (visual acuity)
• etc.
Vision and depth of perception
Monoscopic Vision – Depth cues
Stereoscopic Depth Perception – Formation
Stereoscopy definition

• The use of binocular vision to achieve 3-dimensional effects.
• Enables you to view an object from 2 different camera positions to obtain a 3-dimensional view.
Conditions for stereo viewing
• Two adjacent and overlapping photos in the same flight line
• The optical axes of the cameras must be near parallel
• Cameras must be at approximately the same height
• The optical axes of the eyes must be near parallel
• The left eye must see the left image and the right eye the right image
• | A − B | ≤ 1.17° (1.3 gon)
• The images must only differ in horizontal parallax (x-parallax), i.e. py = 0
• The difference in the size of objects in the two images (scale difference) may not exceed 14%
Advantages of stereo vision

• Facilitates measurement of depth
• Has higher visual acuity than monoscopic vision
Stereoscopes:
• Are binocular optical instruments that help us view two properly oriented overlapping photos to obtain a 3-dimensional model (the stereomodel) of the real scene captured by the overlapped area of the photos.
Types of Stereoscopes:
1. Lens (pocket) stereoscopes
   • Simplest
   • Least expensive
   • Small
   • 2–4× magnification
   • Used in the field
2. Mirror stereoscope
   • Photos can be placed separately for viewing
   • Used in the field?
3. Scanning mirror stereoscope
   • A series of lenses and prisms
   • Relatively expensive
   • Not used in the field
4. Zoom stereoscopes
   • Variable magnification: 2.5–20×
   • Very expensive
   • Not used in the field
5. Zoom transfer stereoscopes
   • Variable magnification: 2.5–20×
   • Used to transfer features from a stereo pair of photos onto a map or other photo
   • Very expensive
   • Not used in the field
Stereo photography Geometry

Principal Point
The geometric center of the photograph, at the intersection of the X and Y axes; found at the intersection of the lines joining the North-South and East-West fiducial marks.
Stereo photography Geometry

Conjugate Principal Points
The principal point of the other photograph of the stereo pair (not the one at hand), as located on the photograph at hand.
1. Obtain 2 overlapping photographs consecutively captured
on a flight line. (Stereo pair)
2. Locate and mark the Principal Points on each photograph.
3. Locate and mark the Conjugate Principal Points on each
photograph.
4. Under a stereoscope, line up all the 4 principal points and
adjust the distance between photographs until you see the
stereo model.

Flight Line

Note "photo-crab": the flight line is not parallel to the edges of the photo.
Rick Lathrop, Rutgers University
Stereo Pair Viewing Exercise

The "Sausage Exercise" can be helpful in developing the ability to see stereo.
In this exercise you focus your eyes on a distant object and then slowly bring
your forefingers into the line of vision.
The farther apart your fingers and the larger the sausage, the more nearly
parallel your lines of sight.
Problems Affecting Stereoscopic Vision (Avery & Berlin, 1992)
1. Eye strength needs to be balanced between your two eyes.
Wear vision aids when viewing stereo pairs.
2. Eye fatigue from mental and physical condition, poor
illumination, uncomfortable seating and viewing positions,
misaligned photos, and low-quality photos.
3. Align shadows properly and sequence photos correctly or else
you will create a pseudoscopic view.
4. Objects that move between photo exposures will not appear in stereo;
they will show up as blurs.
5. Rapid changes in topography between photos can bias
stereoscopic interpretation.
6. Clouds, shadows, and Sun glint can degrade stereoscopic
viewing and cause loss of information.
Flight characteristics
• Overlapping photos are necessary to produce stereo effect. Photos
taken along the flight line need to have at least 50% overlap.
Normally 60% overlap is specified for a flight mission. Overlap
along flight line is also known as end lap.
• The area that is common between successive photos is called
overlap. A stereo model is created when successive photos are
viewed with a stereoscope.
• Normally when an area is flown for stereo coverage, side lap between flight lines is necessary for complete coverage. 30% side lap is normal for most flight plans (see the sketch below).
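The overlap relations above can be written down directly: with a ground coverage of G per photo, the air base between exposures is B = G × (1 − end lap) and the spacing between adjacent flight lines is W = G × (1 − side lap). A minimal sketch with assumed example values follows.

```python
# Minimal flight-planning sketch; the 60 % end lap, 30 % side lap and the
# ground coverage per photo are assumed example values.

def air_base(ground_coverage, end_lap=0.60):
    """Distance between successive exposure stations along the flight line."""
    return ground_coverage * (1.0 - end_lap)

def line_spacing(ground_coverage, side_lap=0.30):
    """Distance between adjacent flight lines."""
    return ground_coverage * (1.0 - side_lap)

G = 2300.0  # ground coverage of one photo in metres (assumed example value)
print(f"Air base B     = {air_base(G):.0f} m")      # 920 m for 60 % end lap
print(f"Line spacing W = {line_spacing(G):.0f} m")  # 1610 m for 30 % side lap
```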

Parallax

Definition:
The apparent displacement of an object with respect to
a frame of reference, caused by a shift in the position
of observation.
Rick Lathrop, Rutgers University
Stereoscopic Parallax

Another “error” we can exploit in photogrammetry


Stereoscopy and Parallax – Formation
At the time of photography: the left and right exposure stations, separated by the photo/camera base B, record the image points al and ar of a ground point A on successive photos taken with 60% overlap along the flight line.

Stereoscopy – Observation
In the lab: the left and right eyes, separated by the eye base Be, view the image points al and ar and fuse them into the stereo model.
(Figures: Photogrammetry, Dr. George Sithole)
Parallax measurement

Method 1: Using distances from the principal points (PP)
Method 2: Using distances between objects

Stereoscopy – Parallax (in the lab, observation): the photo x-coordinates pxl and pxr of the image points al and ar are measured from the left and right principal points (ppl and ppr). The stereoscopic parallax of point A is then:
Parallax, pxa = pxl − pxr
(Figure: Photogrammetry, Dr. George Sithole)
Differential parallax
Difference between the stereoscopic parallax at the top and bottom of the object (if necessary take absolute values).

Example: if C2 = 2.06 in and C1 = 1.46 in, then
dP = 2.06 – 1.46 = 0.6 in
Average photo distance

The average photo distance (photo base) P can be computed as the mean of the two principal point to conjugate principal point distances measured on the stereo pair:
P = (P1 + P2) / 2

Absolute parallax

• Read about it! How can it be computed?
Height measurement using parallax

General formula for calculating height using parallax:

h = (H × dP) / (P + dP)

Where:
h = object height (required)
H = flying height (can be obtained from the photograph)
dP = differential parallax (see Differential parallax above)
P = avg. photo base length (see Average photo distance above)

** The above equation is for level terrain only.
Height measurement using parallax – Example
Measurements for parallax height calculations:
1. Determine average photo base (P)

Average distance between PP and CPP for the stereo pair, where P1 and P2 are measured from PP to CPP on the two photos.

Example: if P1 = 4.5 in. and P2 = 4.3 in., then P = 4.4 in.
Height measurement using parallax – Example
Measurements for parallax height calculations:
2. Determine differential parallax (dP)

Difference of the distances between feature bases (db) and feature tops (dt) while the stereo pair is in the stereo viewing position.

Example: if db = 2.06 in. and dt = 1.46 in., then dP = 0.6 in.
Height measurement by parallax – Example

Required: compute the height of the tree. Take H as 2,200 ft.

h = (H × dP) / (P + dP)
h = (2,200 ft. × 0.6 in.) / (4.4 in. + 0.6 in.)
  = 1,320 ft·in. / 5 in.
  = 264 ft.
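The worked example above can be reproduced in a few lines. This is only a sketch of the level-terrain formula h = (H × dP) / (P + dP); the numbers are those used in the example.

```python
# Sketch of the level-terrain parallax height formula: h = (H * dP) / (P + dP).
# Units: H in feet, dP and P in inches (the inches cancel in the ratio).

def height_from_parallax(H, dP, P):
    """Object height from flying height H, differential parallax dP and average photo base P."""
    return (H * dP) / (P + dP)

# Values from the example: H = 2,200 ft, db = 2.06 in, dt = 1.46 in, P = 4.4 in
dP = 2.06 - 1.46                      # differential parallax = 0.6 in
h = height_from_parallax(2200.0, dP, 4.4)
print(f"Tree height = {h:.0f} ft")    # 264 ft
```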
Questions

• What are the conditions for stereoscopic vision?
• Give the advantages of stereo vision.
• What are stereoscopes? Give any 3 examples of stereoscopes.
• What is the difference between the principal point and the conjugate principal point?
• How do you orient a pair of photographs for stereo vision?
• What is parallax?
Co-ordinates by parallax measurement

• We have already seen how parallax measurement can be used for height measurement.
• Let us explore how parallax can also be used for measuring X and Y co-ordinates (no theodolites!).
Parallax equations

Conditions:
• Camera axes are parallel
• The flying height is the same at the two exposure stations
Geometry
Useful equations
• X, Y are the horizontal co-ordinates of ground point A
• x and y are the image co-ordinates of point a measured on the left photo
• B is the airbase
• f is the focal length
• p is the parallax of the image point (a)
• H is the flying height
• h is the height of the point A above sea level
These equations are often called parallax equations and are the most useful to a photogrammetrist; they are written out below.
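Written out with the symbols above, the parallax equations take their standard form: X = B·x/p, Y = B·y/p and H − h = B·f/p. A minimal sketch of their use follows; all numerical values in the code are assumed for illustration only.

```python
# Sketch of the standard parallax equations for vertical photography:
#   X = B * x / p,   Y = B * y / p,   H - h = B * f / p
# All numbers below are assumed example values, not taken from the slides.

def parallax_coordinates(B, f, H, x, y, p):
    """Ground coordinates (X, Y) and elevation h of a point from its parallax p."""
    X = B * x / p
    Y = B * y / p
    h = H - B * f / p
    return X, Y, h

# Assumed example: air base 1,200 m, focal length 152 mm, flying height 3,000 m,
# image coordinates (mm, on the left photo) and measured parallax in mm.
X, Y, h = parallax_coordinates(B=1200.0, f=152.0, H=3000.0, x=54.0, y=32.0, p=76.0)
print(f"X = {X:.1f} m, Y = {Y:.1f} m, h = {h:.1f} m")
```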
Parallax difference equations
• Work when the assumptions made in the parallax equations do not hold:
  • variable flying height
  • tilted photographs
  • image distortions (scale variation, relief distortions)
• These conditions produce scale errors in the parallax, resulting in errors in the (H − h) distance.
We need another method to take these variations into account.
Parallax difference equations

The above is an example of a parallax difference equation, where:
• The formula should be applied to points that are close to each other.
• The differencing technique cancels out systematic errors affecting the parallax at each point.
• In the above example, C is a control point whose elevation is known and can be used to compute the elevation of A. A commonly used form of the equation is given below.
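A commonly used form of the parallax difference equation, consistent with the description above, is:

h_A = h_C + Δp × (H − h_C) / p_a

where h_C is the known elevation of the control point C, p_a is the stereoscopic parallax of point A, and Δp = p_a − p_c is the parallax difference between A and C.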
Parallax difference equations
Example: Computing height using stereoscopic parallax
Using the measurements from the previous slides, "plug and chug":
h = object height (we want this)
H = flying height (2,200 ft, given in the flight information)
dP = differential parallax (0.6 in. = 2.06 in. − 1.46 in.)
P = avg. photo base length (4.4 in.) (avg. distance, PP to CPP)

h = (H × dP) / (P + dP)

Solve for h:
h = (2,200 ft. × 0.6 in.) / (4.4 in. + 0.6 in.)
  = 1,320 ft·in. / 5 in.
  = 264 ft.
Parallax difference equations
Parallax calculations of height are useful where:
• The object of interest is on small-scale photographs (high-altitude flight)
• The object is located at or near the nadir of a single photo
• The object is obscured on one photo of a stereopair, but the base and top can still be located
• Flight or camera variables (except aircraft height) are not known
Errors theory
An error: the difference between a measured value and the true value.

Accuracy:
Degree of conformity to the true value. A value which is close to the true value has a high accuracy. Unfortunately, it is not easy to know what the true value is, and as a result the accuracy can never be known exactly. Accuracy can only be estimated, for example by checking against an independent, higher-accuracy standard.

Precision:
The degree of refinement of a quantity or measurement. This can be assessed by taking several measurements and checking the consistency of the values: if the values are close to each other the precision is high, and the reverse implies a low precision (see the sketch below).
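A small illustration of the distinction: precision can be estimated from the spread of repeated measurements of the same quantity, while accuracy needs an independent reference. The readings below are assumed example values.

```python
# Illustration: estimating precision from repeated measurements of the same distance.
# The readings below are assumed example values.
from statistics import mean, stdev

readings_mm = [91.4, 91.6, 91.5, 91.5, 91.7]   # five repeated photo-coordinate readings

print(f"Mean    = {mean(readings_mm):.2f} mm")
print(f"Std dev = {stdev(readings_mm):.2f} mm  (small spread -> high precision)")
# Accuracy, by contrast, could only be judged by comparing the mean
# against an independent, higher-accuracy value.
```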
Errors theory - Types
Mistakes or blunders: gross errors caused by carelessness or negligence, including misidentification of points, misreading a scale and transposing numbers. These errors can generally be avoided by exercising care during measurements.
Systematic errors: errors that follow some mathematical or physical law. This means that if the conditions causing the error are known, measured and properly modeled, a correction can be calculated and applied to the measurement, which helps to eliminate the systematic error. These errors remain constant in magnitude and algebraic sign as long as the conditions causing them remain the same. Since the sign remains the same, systematic errors accumulate, and they are often referred to as cumulative errors. Examples in photogrammetry include shrinkage and expansion of photographs, camera lens distortions and atmospheric refraction distortions.
Errors theory - Types
Random errors: these are the errors that remain after blunders and systematic errors have been accounted for. They are generally small and do not follow physical laws the way systematic errors do. These types of errors can be assessed using the laws of probability. Random errors are equally likely to be positive or negative, and hence they tend to compensate each other; this is why they are also referred to as compensating errors. In photogrammetry, sources of random errors include estimating between the least graduations of a scale and indexing the scale.
Errors theory
Errors are inevitable in any measurement, and also in quantities computed from measured values.
Sources of error:
Locating and marking flight lines on photos
Orienting stereopairs for parallax measurement
Parallax and photo coordinate measurement
Shrinkage and expansion of photographs
Unequal flying heights
Tilted photographs
Errors in ground control
Camera lens distortion and atmospheric errors.
Error propagation
Error propagation deals with approaches for estimating the errors in computed quantities based on the errors in the measurements.
Assumptions:
• Errors in the variables of the equations are uncorrelated, i.e. the error in one variable does not depend on the errors in the other variables.
• Errors in the measured quantities are independent.

Assume, for example, that we have a quantity F which we want to compute from n independent observations x1, x2, ..., xn.
Error propagation
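For a quantity F = F(x1, x2, ..., xn) computed from independent observations, the general law of propagation of variances gives the standard error of F as:

σF² = (∂F/∂x1)²·σx1² + (∂F/∂x2)²·σx2² + ... + (∂F/∂xn)²·σxn²

A minimal sketch applying this law to the parallax height formula h = H·dP/(P + dP); the standard deviations of dP and P used below are assumed example values, not from the slides.

```python
# Sketch: propagating measurement errors through h = H * dP / (P + dP)
# using sigma_h^2 = (dh/d(dP))^2 * sigma_dP^2 + (dh/dP)^2 * sigma_P^2.
# The standard deviations below are assumed example values.
import math

H, dP, P = 2200.0, 0.6, 4.4          # ft, in, in (values from the height example)
sigma_dP, sigma_P = 0.02, 0.05       # assumed measurement precisions, in inches

# Analytic partial derivatives of h = H*dP/(P + dP)
dh_ddP = H * P / (P + dP) ** 2       # dh/d(dP)
dh_dP = -H * dP / (P + dP) ** 2      # dh/dP

sigma_h = math.sqrt((dh_ddP * sigma_dP) ** 2 + (dh_dP * sigma_P) ** 2)
print(f"h = {H * dP / (P + dP):.0f} ft  +/- {sigma_h:.0f} ft")
```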
Vertical exaggeration
Vertical exaggeration is an apparent scale disparity in the stereomodel whereby the vertical scale appears greater than the horizontal scale.
Causes:
Lack of correspondence between B/H and be/h
Where:
B = airbase
H = height above average ground
be = eye base (approx. 65 mm if not given)
h = distance between the eyes and the stereomodel (approx. 400 mm for a mirror stereoscope)
Magnitude of vertical exaggeration
The magnitude of vertical exaggeration can be approximated by the following equation:
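A commonly used approximation, based on the two base-height ratios defined above, is:

Vertical exaggeration ≈ (B/H) / (be/h)

that is, the base-height ratio of the photography divided by the base-height ratio of the viewing arrangement. With typical values of B/H ≈ 0.6, be ≈ 65 mm and h ≈ 400 mm, this gives an exaggeration of roughly 3.5 to 4 times.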

Increasing the air base B (and hence the base-height ratio B/H) increases the vertical exaggeration.


Stereoscopy – Exaggeration
In the lab (observation): increasing Be results in magnification.
Assignment:

1. Develop an equation relating the air base B of the photography, the percentage overlap (PE) between photos and the ground coverage G of the photo on the ground.
2. Develop also an equation relating the ground coverage G, the flying height above datum, the focal length f and the photograph size d.
3. Combine the equations in 1 and 2 to establish the formula for the base-height ratio (B/H).
Principle of floating mark

Stereoscopic measurements are possible if a floating mark is introduced into the viewing system.
Concept:
• Identical half marks (e.g. crosses, small circles) are placed in the field of view of each eye.
• As the stereomodel is viewed, the two half marks are seen against the photographed scene, one by each eye.
• If the half marks are properly adjusted, the brain will fuse their images into a single floating mark that appears in 3D relative to the model surface.
Principle of floating mark

• If the half marks are moved closer together, their parallax increases and the fused mark will appear to rise.
• If the half marks are moved farther apart, the parallax decreases and the fused mark will appear to fall.
• The fused mark can therefore be moved up and down until it rests on the model surface (terrain), as sketched below.
• The position and elevation of the mark can be determined and plotted on the map using a transfer device.
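Numerically, the behaviour of the fused mark can be summarised by comparing its parallax with the parallax of the terrain at the same location: a larger mark parallax means the mark floats above the surface, a smaller one means it sinks below it. A tiny sketch with assumed parallax values:

```python
# Sketch of the floating-mark principle: compare the parallax of the fused half marks
# with the parallax of the terrain at the same location. All values are assumed examples.

def floating_mark_position(mark_parallax_mm, terrain_parallax_mm, tol=0.01):
    """Report whether the fused mark appears above, on, or below the model surface."""
    diff = mark_parallax_mm - terrain_parallax_mm
    if abs(diff) <= tol:
        return "on the surface"
    return "above the surface" if diff > 0 else "below the surface"

print(floating_mark_position(76.4, 76.0))   # larger parallax -> mark floats above
print(floating_mark_position(75.6, 76.0))   # smaller parallax -> mark sinks below
print(floating_mark_position(76.0, 76.0))   # equal parallax -> mark rests on the terrain
```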
Stereoscopy – Floating mark
Fundamental problems in photogrammetry
There are two fundamental problems in photogrammetry:
• Resection
• Intersection
Resection problem:
Resection is the process of recovering the exterior orientation of a single photograph from image measurements of ground control points. During photography, light rays from total ground control points (horizontal position and elevation known) pass through the lens nodal point (exposure station) to their image positions on the photograph.
Fundamental problems in photogrammetry
The resection process forces the photograph to the same spatial position and angular orientation it had when the exposure was taken. The solution requires at least three total control points that do not lie in a straight line, and the interior orientation parameters: focal length and principal point location. In aerial photogrammetric mapping, the exact camera position and orientation are generally unknown. The exterior orientation must be determined from known ground control points by the resection principle.

In short, resection is the determination of the position and orientation of an image in space from known ground positions of control points in the image.
Intersection problem

Intersection is the process of photogrammetrically determining the spatial position of ground points by intersecting image rays from two or more photographs. If the interior (focal length, principal point) and exterior orientation parameters (camera position, 3 orientation angles) of the photographs are known, then conjugate image rays can be projected from the photograph through the lens nodal point (exposure station) to the ground space. Two or more image rays intersecting at a common point determine the horizontal position and elevation of the point. Map positions of points are determined by the intersection principle from correctly oriented photographs.
In short, intersection is the calculation of the object space coordinates of a point from its coordinates in two or more images.
Photogrammetric solutions

A photogrammetric solution requires knowledge of:
• Interior orientation parameters (focal length, principal point location)
• Exterior orientation parameters (camera position and 3 rotation angles)
• Ground coordinates of points to be mapped
• Interior orientation parameters are always known through camera calibration
• Exterior orientation parameters are established through resection (the two parameter sets are sketched below)
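As a simple way of keeping the two parameter sets apart, they can be sketched as plain data structures; the field names below are illustrative only, not a standard library interface.

```python
# Illustrative sketch of the two parameter sets needed for a photogrammetric solution.
# Field names are chosen for readability; they are not a standard library interface.
from dataclasses import dataclass

@dataclass
class InteriorOrientation:          # known from camera calibration
    focal_length_mm: float
    principal_point_x_mm: float
    principal_point_y_mm: float

@dataclass
class ExteriorOrientation:          # established by resection
    X0: float                       # camera position in ground coordinates
    Y0: float
    Z0: float
    omega: float                    # three rotation angles (radians)
    phi: float
    kappa: float

io = InteriorOrientation(152.4, 0.002, -0.001)
eo = ExteriorOrientation(482300.0, 5407100.0, 3000.0, 0.01, -0.02, 1.57)
```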
Photogrammetric solutions

The overall solution to photogrammetric problems involves carrying out:
• Inner/interior orientation
• Relative orientation
• Absolute orientation
Relative and absolute orientation are generally called exterior orientation.
These can be accomplished using:
• Analogue and
• Analytical approaches
Why bother with orientations???
• Maps Vs. images
Why bother with orientations???
• We want to make maps from images BUT
Images
• Have perspective projection
• Relief displacement
• Scale variation
AND Maps
• Orthogonal projection (2D representation of 3D)
• No scale variation
• No relief displacement
Why bother with orientations???
• Orientation therefore helps to transform centrally projected images
into a three dimensional model, which can be used to plot an
orthogonal map.
Perspective vs orthogonal projection
Interior orientation
Defn: Reconstruction of the geometry of the bundle of imaging rays as they existed at the time of photography.
Purpose:
• Reconstruct the bundle of light rays (as defined by the perspective center and the image points) in such a way that it is similar to the bundle incident on the camera at the moment of exposure.
• Interior orientation is defined by the position of the perspective center w.r.t. the image plane (xp, yp, c).
• Another component of the interior orientation is the set of distortion parameters (Radial Lens Distortion, Decentering Lens Distortion (axis misalignment), Atmospheric Refraction, Affine Deformations, Out-of-Plane Deformations); a sketch of the radial term is given below.
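Of these, radial lens distortion is usually the largest term. A minimal sketch of a commonly used radial distortion model (an odd-order polynomial in the radial distance from the principal point); the coefficients below are assumed example values, not an actual calibration.

```python
# Sketch of a common radial lens distortion model: dr = k1*r^3 + k2*r^5,
# applied as a correction to image coordinates measured from the principal point.
# The coefficients k1, k2 are assumed example values, not a real calibration.
import math

def correct_radial_distortion(x_mm, y_mm, k1=-1.0e-7, k2=2.0e-12):
    r = math.hypot(x_mm, y_mm)               # radial distance from the principal point
    dr = k1 * r**3 + k2 * r**5               # radial distortion at this radius
    scale = 1.0 - dr / r if r > 0 else 1.0   # shift the point radially by -dr
    return x_mm * scale, y_mm * scale

print(correct_radial_distortion(60.0, 45.0))  # corrected image coordinates in mm
```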
Exterior orientation

Exterior orientation has two components:
• The position of the perspective center w.r.t. the ground coordinate system (Xo, Yo, Zo).
• The rotational relationship between the image and the ground coordinate systems (ω, φ, κ).
These are the rotation angles we need to apply to the ground coordinate system to make it parallel to the image coordinate system.
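In computations, the three rotation angles are combined into a single 3 × 3 rotation matrix. A minimal sketch, assuming the commonly used sequence R = Rκ·Rφ·Rω (rotation conventions differ between texts):

```python
# Sketch: building a rotation matrix from omega, phi, kappa.
# Assumes the common sequence R = R_kappa @ R_phi @ R_omega; other texts use other conventions.
import numpy as np

def rotation_matrix(omega, phi, kappa):
    """3x3 rotation matrix from the three exterior orientation angles (radians)."""
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    R_omega = np.array([[1, 0, 0], [0, co, -so], [0, so, co]])    # rotation about x
    R_phi   = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])    # rotation about y
    R_kappa = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])    # rotation about z
    return R_kappa @ R_phi @ R_omega

print(rotation_matrix(0.01, -0.02, 1.57).round(3))
```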
Position of perspective centre and rotation angles
Relative orientation
Objective:
• Orient the two bundles of a stereo-pair relative to each other in such a way that
all conjugate light rays intersect.
Result:
• A stereo model, which is a 3-D representation of the object space w.r.t. an arbitrary local coordinate system.
• If we make at least five conjugate light rays intersect, all the remaining light rays will intersect at the surface of the stereo model.
• Data are registered in an arbitrary coordinate system; there are no ground coordinates.
Absolute orientation

Purpose: rotate, scale, and shift the stereo model resulting from relative orientation until it fits at the location of the control points.

• Absolute orientation is defined by: three rotations, one scale factor, and three shifts.
• All data are then assigned ground co-ordinates.
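Put differently, absolute orientation applies a seven-parameter (3-D similarity) transformation, X_ground = scale · R · X_model + T, to every model point. A minimal sketch of applying such a transformation once its parameters are known; the numbers below are assumed examples.

```python
# Sketch of absolute orientation as a 7-parameter similarity transformation:
#   X_ground = scale * R @ X_model + T
# (three rotations inside R, one scale factor, three shifts in T).
# The parameter values and model point below are assumed examples.
import numpy as np

def absolute_orientation(model_xyz, scale, R, T):
    """Transform a model-space point into the ground coordinate system."""
    return scale * R @ np.asarray(model_xyz, dtype=float) + np.asarray(T, dtype=float)

R = np.eye(3)                          # identity rotation for illustration
T = np.array([482000.0, 5407000.0, 250.0])
print(absolute_orientation([12.5, 8.2, 1.4], scale=1000.0, R=R, T=T))
```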
