
2D AND 3D VISION

• 2-D vision sensors can therefore be divided into two main groups:
1. Binary vision sensors
2. Grey level vision sensors
Binary vision sensor
• This class of sensor produces images whose pixel (pixel = picture element) values are either a black or a white luminosity level, equivalent to a logic 0 or 1, hence the name 'binary'.
• The complete picture is therefore only a series of logic 1s and 0s. This allows easy distinction of dark objects on a light background (and vice versa).
• In view of the low visual data content, fast image data manipulation, such as object perimeter and/or area calculations, is possible.
Block diagram of typical binary vision sensor
Thresholding
• Definition: an image processing method that creates a bitonal (a.k.a. binary) image by setting a threshold value on the pixel intensity of the original image. While most commonly applied to grayscale images, it can also be applied to color images to convert them into binary images.
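• As a minimal sketch of this operation (assuming an 8-bit NumPy array as the input image; the threshold value and the test image below are invented for illustration):

```python
# Minimal thresholding sketch; the threshold t = 128 and the test image
# are illustrative assumptions, not values from the text.
import numpy as np

def threshold(image: np.ndarray, t: int = 128) -> np.ndarray:
    """Map every pixel to logic 1 (>= t) or logic 0 (< t), giving a bitonal image."""
    return (image >= t).astype(np.uint8)

# Example: an 8-bit grayscale image with one bright square on a dark background.
img = np.zeros((8, 8), dtype=np.uint8)
img[2:6, 2:6] = 200            # bright object
binary = threshold(img, t=128)
print(binary)                  # 0/1 array: the object shows up as a block of 1s
```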
Example of output of binary vision sensor.
• In the example shown the vision system has learnt the characteristics
of three objects, based on the number of holes, object size and shape
features. Thereafter, whenever the vision system is presented with an
image containing one of these objects it will be able to 'recognize' it.
• This is achieved by comparing the features of the object in the new image with those stored in memory (a minimal sketch of this comparison is given after this list).
• This technique of image processing is sometimes referred to as the SRI method (or one of its derivatives) because it was first introduced by Rosen et al. at the Stanford Research Institute (now SRI International).
• It imposes severe restrictions on the lighting conditions and on the position of the object for successful operation: the object must show up as an isolated object and its stable states must be known in advance. Thus overlapping objects and situations with any 3-D freedom of movement are difficult to deal with using binary vision sensors.
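• A hedged sketch of the feature-comparison idea: object classes are "learnt" as feature vectors (hole count, area, perimeter), and a new object is recognised by finding the closest stored vector. The object names and feature values are invented, and the nearest-neighbour rule stands in for whatever matching rule an actual SRI-style system used:

```python
# Sketch of SRI-style recognition by feature comparison.
# All names and numbers below are hypothetical illustration values.
import numpy as np

learnt = {                      # features stored during the teaching phase
    "washer":  np.array([1, 350.0, 90.0]),   # (holes, area, perimeter)
    "bracket": np.array([2, 820.0, 150.0]),
    "plate":   np.array([0, 1200.0, 140.0]),
}

def recognise(features: np.ndarray) -> str:
    """Return the learnt object whose feature vector is nearest (Euclidean)."""
    return min(learnt, key=lambda name: np.linalg.norm(learnt[name] - features))

print(recognise(np.array([1, 360.0, 92.0])))   # -> "washer"
```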
Grey level vision sensors
• This class of sensor produces an output image whose pixel values are quantized into a
number of discrete levels, achieved by converting the optical transducer analogue
output into the appropriate number of digital levels.
• A grey-level image can be produced with the help of some computing power (either on-board, as in the case of intelligent sensors, or externally, using the vision system's main computer).
• This procedure is based on processing an appropriate number of binary images obtained using different exposure times, whose differences provide a measure of each pixel's intensity in the object image (see the sketch after this list).
• As an example, consider the typical grey-level image of a bicycle chain link, using eight intensity levels, as provided by a 2-D grey-level vision sensor based on a 256 × 128-cell DRAM camera.
• A comparison of this image with the one obtained using a binary sensor shows how grey-level vision sensors (in conjunction with reflected illumination) provide more detail of the object's surface features. This, however, increases the amount of visual data and makes its processing more difficult, slower and computationally more expensive.
Example of grey level vision sensor
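• The multiple-exposure procedure can be sketched as follows; the simulated scene values, exposure thresholds and frame count are assumptions for illustration, not parameters of any particular sensor:

```python
# Sketch: summing N binary exposures gives one of (N+1) grey levels per pixel.
# The scene intensities and exposure model are invented illustration values.
import numpy as np

scene = np.array([[0.1, 0.4], [0.7, 0.9]])     # true pixel intensities (0..1)
exposures = [0.2, 0.4, 0.6, 0.8]               # light needed to trip a cell at each exposure time

# Each exposure produces a binary frame: 1 where the cell has collected
# enough light, 0 elsewhere.
frames = [(scene >= e).astype(np.uint8) for e in exposures]

grey = sum(frames)                              # grey level in 0..len(exposures)
print(grey)   # [[0 2] [3 4]] -> five possible grey levels (0..4)
```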
3D VISION SENSOR
• Indirect methods of obtaining depth maps, based
largely on triangulation techniques, have provided
the largest input in this area.
• In computer vision, triangulation refers to the process of determining a point in 3D space given its projections onto two or more images.
• The triangulation problem is in theory trivial: each point in an image corresponds to a line in 3D space, since all points on that line project to the same image point. The 3-D point is then found where the lines from the different views intersect.
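• A minimal sketch of one standard approach, the midpoint method: each image point defines a ray in 3D, and the estimated point is the one closest to both rays in a least-squares sense. The camera centres, ray directions and test point below are made-up values; real systems work from calibrated projection matrices instead:

```python
# Midpoint triangulation sketch; all numeric values are invented test data.
import numpy as np

def triangulate(c1, d1, c2, d2):
    """Closest point to two rays x = c + t*d (d need not be unit length)."""
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    # Solve for ray parameters t1, t2 minimising |(c1 + t1*d1) - (c2 + t2*d2)|.
    A = np.stack([d1, -d2], axis=1)             # 3x2 system
    t = np.linalg.lstsq(A, c2 - c1, rcond=None)[0]
    p1, p2 = c1 + t[0] * d1, c2 + t[1] * d2
    return (p1 + p2) / 2                        # midpoint of closest approach

c1, c2 = np.array([0.0, 0, 0]), np.array([1.0, 0, 0])   # two camera centres
point = np.array([0.3, 0.2, 2.0])                        # hidden ground truth
p = triangulate(c1, point - c1, c2, point - c2)          # rays through the point
print(p)   # recovers [0.3, 0.2, 2.0] in this noise-free example
```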
DISPARITY AND PHOTOMETRIC METHODS
• The disparity technique is based on the correlation between images of the same object taken by two different cameras under the same lighting conditions (Marr and Poggio, 1976), while the photometric technique is based on the correlation between images taken by the same camera under two different lighting conditions (Ikeuchi and Horn, 1979).
(a) Disparity method 1
• Use of two stationary imaging devices. This could be defined as the
'classical' stereo vision method because of its analogy to the human vision
system.
• It consists of an illumination system and two
stationary cameras which provide the required
two 2-D images.
• This method is inherently more expensive than the other two because it uses two cameras, but it does not require any mechanical movement; therefore, compared to method (b), it is faster and can provide a more accurate measurement of the camera positions, as required for the disparity calculations.
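• For a rectified stereo pair, the disparity calculation reduces to the standard relation Z = f·B/d. A minimal sketch, where the focal length, baseline and disparity values are illustrative assumptions:

```python
# Depth from disparity for a rectified stereo pair: Z = f * B / d, where
# f is the focal length in pixels, B the camera baseline and d the
# horizontal pixel disparity. All numbers are invented examples.
def depth_from_disparity(d_pixels: float, f_pixels: float, baseline_m: float) -> float:
    """Depth (metres) of a point seen with disparity d by two parallel cameras."""
    return f_pixels * baseline_m / d_pixels

# A point 12 pixels apart in the two images, f = 600 px, baseline = 0.1 m:
print(depth_from_disparity(12.0, 600.0, 0.1))   # -> 5.0 m
```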
(b) Disparity method 2
• Use of one imaging device moved to different known positions.
• This is essentially a cost variation on method (a), since it only differs in the use of a single camera which, to provide images from a different angle, is mechanically moved to a different known position.
PHOTOMETRIC METHOD IN STEREO VISION
• Use of one stationary imaging device under different lighting
conditions.
• This method relies on maintaining a camera in the same position, thereby avoiding the pixel correspondence problem, and obtaining multiple images by changing the illumination conditions. Processing of these images can uniquely determine the orientation of the object's surfaces, thus enabling its 3-D mapping (Woodham, 1978).
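• In the spirit of Woodham's photometric stereo: assuming a Lambertian surface, the intensity of a pixel under light direction l is I = albedo · (l · n), so three images under known light directions allow the albedo and surface normal n to be solved per pixel. The light directions, albedo and normal below are made-up test values:

```python
# Per-pixel photometric stereo sketch; all numeric values are invented.
import numpy as np

L = np.array([[0.0, 0.0, 1.0],      # three known lighting directions (rows)
              [0.7, 0.0, 0.7],
              [0.0, 0.7, 0.7]])

true_n = np.array([0.0, 0.6, 0.8])  # ground-truth unit normal
albedo = 0.5
I = albedo * L @ true_n             # the three intensities this pixel would show

g = np.linalg.solve(L, I)           # g = albedo * n
print(np.linalg.norm(g))            # recovered albedo: 0.5
print(g / np.linalg.norm(g))        # recovered normal: [0, 0.6, 0.8]
```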
ULTRASONIC RANGE FINDER
• The principle of operation of an ultrasonic sensor is to measure the time delay t between the transmitted and reflected sound pulses which, assuming a constant velocity v for sound propagation, is related to the obstacle distance d by the simple formula d = vt/2 (the factor of two accounts for the pulse travelling to the obstacle and back).
Steps of ultrasonic rangefinders
• The diagram illustrates the main steps of the ultrasonic distance-measuring method.
• A sound pulse is produced by the ultrasonic transducer
(pulse length approximately 1 ms and frequency
spectrum from 50 kHz to, typically, 60 kHz) which, after a
time delay proportional to the object distance from the
sensor, also receives the reflected sound pulse or 'echo'.
• The hardwired local intelligence then processes these
two signals (emitted and reflected pulses) and calculates
the obstacle distance from the sensor.
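• A minimal sketch of the ranging calculation performed by the local intelligence, assuming sound travels at roughly 343 m/s in air at 20 °C; the echo delay is an invented example value:

```python
# Pulse-echo ranging sketch; the speed of sound and the delay value are
# assumptions for illustration. Note the factor 1/2: the pulse covers
# the sensor-to-obstacle distance twice (out and back).
SPEED_OF_SOUND = 343.0   # m/s in air at ~20 degrees C

def obstacle_distance(echo_delay_s: float, v: float = SPEED_OF_SOUND) -> float:
    """Distance (metres) from the transmit-to-echo delay t: d = v * t / 2."""
    return v * echo_delay_s / 2.0

print(obstacle_distance(0.010))   # 10 ms round trip -> about 1.7 m
```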
