
A Paper Presentation On DIGITAL IMAGE PROCESSING

PRESENTED BY
Name: Sri Krishna D, No: 09AJ1F0033, ASIST.

To
Department of MCA, AMRITA SAI Institute of Science & Technology, Paritala, Kanchikacherla (MD), Krishna (DT).

Abstract: Over the past years, forensic and medical applications of technology first developed to record and transmit pictures from outer space have changed the way we see things here on earth, including Old English manuscripts. With their talents combined, an electronic camera designed for use with documents and a digital computer can now frequently enhance the legibility of formerly obscure or even invisible texts. The computer first converts the analogue image, in this case a videotape, to a digital image by dividing it into a microscopic grid and numbering each part by its relative brightness. Specific digital image programs can then radically improve the contrast, for example by stretching the range of brightness throughout the grid from black to white, emphasizing edges, and suppressing random background noise that comes from the equipment rather than the document. Applied to some of the most illegible passages in the Beowulf manuscript, this new technology indeed shows us some things we had not seen before and forces us to reconsider some established readings.
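The brightness-stretching step the abstract describes can be sketched in a few lines of Python. This is a minimal illustration, assuming the digitized page is simply a list of rows of grey levels in 0-255; the function name and the sample data are ours, not from any particular imaging package:

```python
def stretch_contrast(image, out_min=0, out_max=255):
    """Linearly stretch the brightness range of a greyscale image
    (a list of rows of integers) to span [out_min, out_max]."""
    lo = min(min(row) for row in image)
    hi = max(max(row) for row in image)
    if hi == lo:                      # flat image: nothing to stretch
        return [row[:] for row in image]
    scale = (out_max - out_min) / (hi - lo)
    return [[round(out_min + (p - lo) * scale) for p in row]
            for row in image]

# A dim, low-contrast 2x3 "scan": grey levels clustered in 100..140.
faint = [[100, 120, 140],
         [110, 130, 125]]
print(stretch_contrast(faint))  # darkest pixel -> 0, brightest -> 255
```

After stretching, the formerly narrow band of grey levels spans the full black-to-white range, which is exactly the contrast improvement described above.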

Introduction to Digital Image Processing:


Vision allows humans to perceive and understand the world surrounding us. Computer vision aims to duplicate the effect of human vision by electronically perceiving and understanding an image. Giving computers the ability to see is not an easy task - we live in a three dimensional (3D) world, and when computers try to analyze objects in 3D space, available visual sensors (e.g., TV cameras) usually give two dimensional (2D) images, and this projection to a lower number of dimensions incurs an enormous loss of information.

In order to simplify the task of computer vision understanding, two levels are usually distinguished: low-level image processing and high-level image understanding. Low-level processing usually uses very little knowledge about the content of images. High-level processing is based on knowledge, goals, and plans of how to achieve those goals; artificial intelligence (AI) methods are used in many cases. High-level computer vision tries to imitate human cognition and the ability to make decisions according to the information contained in the image.

This course deals almost exclusively with low-level image processing. High-level image processing is discussed in the course Image Analysis and Understanding, which is a continuation of this course.

History:

Many of the techniques of digital image processing, or digital picture processing as it was often called, were developed in the 1960s at the Jet Propulsion Laboratory, MIT, Bell Labs, the University of Maryland, and a few other places, with application to satellite imagery, wire-photo standards conversion, medical imaging, videophone, character recognition, and photo enhancement. But the cost of processing was fairly high with the computing equipment of that era. In the 1970s, digital image processing proliferated as cheaper computers became available.

Digitizing means creating a film or electronic image of any picture or paper form. It is accomplished by scanning or photographing an object and turning it into a matrix of dots (bitmap), the meaning of which is unknown to the computer, only to the human viewer. Scanned images of text may be encoded into computer data (ASCII or EBCDIC) with page recognition software (OCR).

Basic Concepts:

A signal is a function depending on some variable with physical meaning. Signals can be:
o One-dimensional (e.g., dependent on time),
o Two-dimensional (e.g., images dependent on two co-ordinates in a plane),
o Three-dimensional (e.g., describing an object in space),
o Or higher dimensional.

A scalar function may be sufficient to describe a monochromatic image, while vector functions are used to represent, for example, color images consisting of three component colors.

Pattern recognition is a field within the area of machine learning. Alternatively, it can be defined as "the act of taking in raw data and taking an action based on the category of the data" [1]. As such, it is a collection of methods for supervised learning. Pattern recognition aims to classify data (patterns) based on either a priori knowledge or on statistical information extracted from the patterns. The patterns to be classified are usually groups of measurements or observations, defining points in an appropriate multidimensional space.

Image functions:

The image can be modeled by a continuous function of two or three variables; arguments are co-ordinates x, y in a plane, while if images change in time a third variable t might be added. The image function values correspond to the brightness at image points. The function value can express other physical quantities as well (temperature, pressure distribution, distance from the observer, etc.). The brightness integrates different optical quantities - using brightness as a basic quantity allows us to avoid the description of the very complicated process of image formation.

The image on the human eye retina or on a TV camera sensor is intrinsically 2D. We shall call such a 2D image bearing information about brightness in image points an intensity image. The real world, which surrounds us, is intrinsically 3D. The 2D intensity image is the result of a perspective projection of the 3D scene.
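The idea of an image as a sampled function can be made concrete: below, a continuous image function f(x, y) is sampled on a small grid to produce a digital intensity image. This is a minimal sketch; the grid size and the particular function are arbitrary choices for illustration, not taken from the text:

```python
import math

def sample_image(f, width, height):
    """Sample a continuous image function f(x, y) on a width x height
    grid; each pixel holds the brightness at its grid point."""
    return [[f(x, y) for x in range(width)] for y in range(height)]

# An arbitrary smooth brightness pattern standing in for the
# "continuous" scene projected onto the sensor.
f = lambda x, y: 0.5 + 0.5 * math.sin(0.5 * x) * math.cos(0.5 * y)

digital = sample_image(f, 4, 3)       # 3 rows of 4 brightness values
print(len(digital), len(digital[0]))  # 3 4
```

A time-varying image would simply add a third argument t to f, as noted above.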

Digital image properties:


When 3D objects are mapped into the camera plane by perspective projection, a lot of information disappears as such a transformation is not one-to-one. Recognizing or reconstructing objects in a 3D scene from one image is an ill-posed problem. Recovering information lost by perspective projection is only one, mainly geometric, problem of computer vision.

The second problem is how to understand image brightness. The only information available in an intensity image is the brightness of the appropriate pixel, which is dependent on a number of independent factors such as:
o Object surface reflectance properties (given by the surface material, microstructure and marking),
o Illumination properties,
o And object surface orientation with respect to a viewer and light source.

Metric properties of digital images:

Distance is an important example. The distance between two pixels in a digital image is a significant quantitative measure.
o The Euclidean distance is defined by Eq. 2.42,
o City block distance,
o Chessboard distance (Eq. 2.44).
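The equation numbers above refer to the source textbook; the standard definitions are D_E((i,j),(h,k)) = sqrt((i-h)^2 + (j-k)^2), D_4 = |i-h| + |j-k|, and D_8 = max(|i-h|, |j-k|). A minimal sketch of all three (the function names are ours):

```python
import math

def euclidean(p, q):
    """Euclidean distance D_E between pixels p = (i, j) and q = (h, k)."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def city_block(p, q):
    """City block distance D_4: moves along rows and columns only."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def chessboard(p, q):
    """Chessboard distance D_8: a diagonal move counts as one step."""
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

p, q = (0, 0), (3, 4)
print(euclidean(p, q), city_block(p, q), chessboard(p, q))  # 5.0 7 4
```

The names reflect how one moves between the two pixels: city block distance counts horizontal and vertical steps only, while chessboard distance counts king moves.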

Pixel adjacency is another important concept in digital images: 4-neighborhood and 8-neighborhood. One possible solution to the contiguity paradoxes of the square grid is to treat objects using 4-neighborhood and background using 8-neighborhood (or vice versa). A hexagonal grid solves many problems of the square grids: any point in the hexagonal raster has the same distance to all its six neighbors.

It will become necessary to consider important sets consisting of several adjacent pixels: regions. A region is a contiguous set. The border of a region R is the set of pixels within the region that have one or more neighbors outside R; inner borders and outer borders exist.
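The notions of adjacency and region can be illustrated with a small connected-component sketch: foreground pixels are grouped into regions under 4-neighborhood. This is a minimal pure-Python illustration (a real system would use an optimized labeling routine from an image-processing library):

```python
def label_regions(image):
    """Label contiguous sets of 1-pixels (regions) under 4-neighborhood.
    Returns a grid of region labels, with 0 for background."""
    h, w = len(image), len(image[0])
    labels = [[0] * w for _ in range(h)]
    next_label = 0
    for sy in range(h):
        for sx in range(w):
            if image[sy][sx] == 1 and labels[sy][sx] == 0:
                next_label += 1
                stack = [(sy, sx)]          # flood fill from a seed pixel
                while stack:
                    y, x = stack.pop()
                    if (0 <= y < h and 0 <= x < w
                            and image[y][x] == 1 and labels[y][x] == 0):
                        labels[y][x] = next_label
                        # 4-neighbors: up, down, left, right
                        stack += [(y-1, x), (y+1, x), (y, x-1), (y, x+1)]
    return labels

grid = [[1, 1, 0, 1],
        [0, 0, 0, 1]]
print(label_regions(grid))  # two regions: the left pair and the right column
```

Note that two diagonally touching pixels receive different labels under 4-neighborhood; switching the neighbor list to 8-neighborhood would merge them, which is exactly the contiguity choice discussed above.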

Edge is a local property of a pixel and its immediate neighborhood: it is a vector given by a magnitude and a direction. The edge direction is perpendicular to the gradient direction, which points in the direction of image function growth.

Border and edge ... the border is a global concept related to a region, while edge expresses local properties of an image function.

Crack edges ... four crack edges are attached to each pixel, which are defined by its relation to its 4-neighbors. The direction of the crack edge is that of increasing brightness, and is a multiple of 90 degrees, while its magnitude is the absolute difference between the brightness of the relevant pair of pixels. (Fig. 2.9)
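The edge-as-vector definition can be sketched with simple finite differences: the two gradient components give a magnitude and a direction, and the edge direction is the gradient direction rotated by 90 degrees. This is a minimal illustration using forward differences; real detectors use larger convolution masks such as Sobel:

```python
import math

def edge_at(image, y, x):
    """Gradient magnitude and edge direction (radians) at pixel (y, x),
    using forward differences with the right and lower neighbors."""
    gx = image[y][x + 1] - image[y][x]     # horizontal brightness change
    gy = image[y + 1][x] - image[y][x]     # vertical brightness change
    magnitude = math.hypot(gx, gy)
    gradient_dir = math.atan2(gy, gx)      # direction of brightness growth
    edge_dir = gradient_dir + math.pi / 2  # edge is perpendicular to it
    return magnitude, edge_dir

# A vertical step edge: dark left half, bright right half.
step = [[0, 0, 9, 9],
        [0, 0, 9, 9]]
mag, _ = edge_at(step, 0, 1)
print(mag)  # 9.0 -- strong response exactly across the step
```

Inside either flat half of the image the magnitude is zero, so only pixels sitting on the brightness step respond, matching the local character of the edge definition.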

Topological properties of digital images:

Topological properties of images are invariant to rubber sheet transformations. Stretching does not change the contiguity of the object parts and does not change the number of holes in regions. One such image property is the Euler--Poincare characteristic, defined as the difference between the number of regions and the number of holes in them.

The convex hull is used to describe topological properties of objects. The convex hull is the smallest region which contains the object, such that any two points of the region can be connected by a straight line, all points of which belong to the region.
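The Euler--Poincare characteristic can be sketched by direct counting: label the foreground regions, then count the holes, i.e. background components that do not touch the image border. This is a minimal illustration; the connectivity conventions (4-connected foreground, 8-connected background) are a common pairing chosen here to avoid the contiguity paradoxes, and a production system would use a library routine:

```python
def _components(grid, value, conn8=False):
    """Flood-fill components of pixels equal to `value`; returns a list
    of pixel-coordinate sets, one per component."""
    h, w = len(grid), len(grid[0])
    seen, comps = set(), []
    nbrs = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    if conn8:
        nbrs += [(-1, -1), (-1, 1), (1, -1), (1, 1)]
    for sy in range(h):
        for sx in range(w):
            if grid[sy][sx] == value and (sy, sx) not in seen:
                comp, stack = set(), [(sy, sx)]
                while stack:
                    y, x = stack.pop()
                    if (0 <= y < h and 0 <= x < w and grid[y][x] == value
                            and (y, x) not in seen):
                        seen.add((y, x))
                        comp.add((y, x))
                        stack += [(y + dy, x + dx) for dy, dx in nbrs]
                comps.append(comp)
    return comps

def euler_characteristic(grid):
    """Number of foreground regions minus number of holes."""
    h, w = len(grid), len(grid[0])
    regions = _components(grid, 1)                        # 4-connected
    holes = [c for c in _components(grid, 0, conn8=True)  # 8-connected bg
             if not any(y in (0, h - 1) or x in (0, w - 1) for y, x in c)]
    return len(regions) - len(holes)

ring = [[1, 1, 1],
        [1, 0, 1],
        [1, 1, 1]]
print(euler_characteristic(ring))  # one region with one hole -> 0
```

Stretching the ring without tearing it changes neither count, which is the rubber-sheet invariance described above.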

CONCLUSION
Further, surveillance by humans is dependent on the quality of the human operator, and a lot of factors like operator fatigue and negligence may lead to degradation of performance. These factors make an intelligent vision system a better option, as in systems that use gait signature for recognition and in-vehicle video sensors for driver assistance.

BIBLIOGRAPHY
1. Tony Reichhardt, "The First Photo From Space", Air & Space Magazine, November 1, 2006.
2. "50 Years of Earth Observation", 2007: A Space Jubilee, European Space Agency, October 3, 2007. Retrieved 2008-03-20.
3. "First Picture from Explorer VI Satellite", NASA.
4. Lovholt, F., Bungum, H., Harbitz, C.B., Glimsal, S., Lindholm, C.D., and Pedersen, G., "Earthquake related tsunami hazard along the western coast of Thailand", Natural Hazards and Earth System Sciences, Vol. 6, No. 6, 979-997, November 30, 2006.
5. Campbell, J. B., 2002. Introduction to Remote Sensing. New York and London: The Guilford Press.
6. http://hothardware.com/News/Worlds-HighestResolutionSatellite-Imagery/
7. Shall, Andrea (September 6, 2008). "GeoEye launches high-resolution satellite", Reuters. Retrieved 2008-11-07.
8. "Ball Aerospace & Technologies Corp.". Retrieved 2008-11-07.
9. RapidEye Press Release.
