The lens focuses light from objects onto the retina. The retina is covered with light receptors called cones (6-7 million) and rods (75-150 million). Cones are concentrated around the fovea and are very sensitive to colour. Rods are more spread out and are sensitive to low levels of illumination.
Optics Review:
The EYE
Farsightedness: the lens is too flat, so the image produced by the lens forms too far past the retina. Correction: use a convex lens to converge the image on the retina.
[Figure: the eye — eyelash, retina, lens, iris, ciliary muscles, optic nerve]
[Figure: the camera — lens cap, film/chip, lens, diaphragm, focus ring, USB cable]
Images taken from Gonzalez & Woods, Digital Image Processing (2002)
The electromagnetic spectrum, from longest to shortest wavelength: radio, microwave, infrared, visible, UV, X-rays, gamma rays.

Radio waves: TV, radio, cell phones.
Microwaves: microwave ovens (cooking food), telecommunications.
Infrared: remote controls, thermal imaging.
UV: tanning (also causes sunburn and skin damage); heating lamps in fast food and spas.
X-rays: used by doctors and dentists to see bones and teeth.
Gamma rays: used by doctors to target and kill cancer cells.
Reflected Light
The colours that we perceive are determined by the nature of the light reflected from an object. For example, if white light is shone onto a green object, most wavelengths are absorbed, while green light is reflected from the object.
Image Representation
Before we discuss image acquisition, recall that a digital image is composed of M rows and N columns of pixels, each storing a value. Pixel values are most often grey levels in the range 0-255 (black to white). We will see later on that images can easily be represented as matrices, f(row, col).
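As a small illustration of this representation, a grayscale image can be stored as a nested list (a matrix) and indexed in the f(row, col) style; the pixel values below are made up:

```python
# A tiny 2 x 3 grayscale "image": M = 2 rows, N = 3 columns.
# Each pixel stores a grey level in the range 0-255 (0 = black, 255 = white).
image = [
    [0,  128, 255],
    [64, 192,  32],
]

def f(row, col):
    """Return the pixel value at (row, col), matching the f(row, col) notation."""
    return image[row][col]

print(f(0, 2))  # top-right pixel: 255
```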
www.imageprocessingbook.com
Sensing arrangement
Definitions
Images are generated from a physical process, so their values are proportional to the energy radiated by a source:
f(x, y) = i(x, y) r(x, y), where 0 < f(x, y) < ∞, 0 < i(x, y) < ∞ (illumination), and 0 < r(x, y) < 1 (reflectance).
Typical reflectance values: 0.01 for black velvet, 0.65 for stainless steel.
Definitions
Radiance is the total amount of energy that flows from a light source.
Luminance is a measure of the amount of energy an observer perceives from a light source.
Brightness is a subjective descriptor of light perception.
Image formation
[Figure sequence: image formation — energy from the scene is captured by a digital camera and sampled into a digital image]
Pixels
A digital image, I, is a mapping from a 2D grid of uniformly spaced discrete points, {p = (r,c)}, into a set of positive integer values, {I( p)}.
At each column location in each row of I there is a value. The pair ( p, I( p) ) is called a pixel (for picture element).
p = (r,c) is the pixel location indexed by row, r, and column, c. I( p) = I(r,c) is the value of the pixel at location p. If I( p) is a single number then I is monochrome. If I( p) is a vector (ordered list of numbers) then I has multiple bands (e.g., a color image).
A pixel is thus the pair (p, I(p)).
Resolution refers to the smallest discernible change. An M x N image has spatial resolution M x N and gray-level resolution L.
Spatial resolution is measured in line pairs per unit distance, or in dots (pixels) per unit distance (dpi).
E.g., 100 line pairs per mm; newspapers are printed at 75 dpi, magazines at 133 dpi, this book page at 2400 dpi. Spatial resolution must be stated with respect to spatial units.
Gray-level resolution L = 2^k for a k-bit pixel: k = 4 gives 16 levels, k = 8 gives 256 levels, k = 16 gives 65,536 levels.
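Since L = 2^k for a k-bit pixel, the gray-level count and the raw storage of an M x N image (b = M x N x k bits) follow directly; a quick sketch:

```python
def gray_levels(k):
    """Number of distinct gray levels for a k-bit pixel: L = 2**k."""
    return 2 ** k

def storage_bits(M, N, k):
    """Raw storage for an M x N image with k bits per pixel: b = M * N * k."""
    return M * N * k

print(gray_levels(8))             # 256 levels
print(storage_bits(256, 256, 8))  # 524288 bits (64 KiB)
```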
Lower intensity resolution results in false contouring and checkerboard artifacts. Example: in Fig. 2.21, images of size 256x256 pixels with 64 intensity levels, printed on a 5 x 5 cm format, have the lowest acceptable spatial and intensity resolution.
[Figure: example images of low, medium, and high detail]
Isopreference curves
Isopreference curves rank subjective quality as judged by eye: points lying on an isopreference curve correspond to images of equal subjective quality (subjective quality increases from left to right).
Image Interpolation
Introduction
What is image interpolation? Why do we need it?
Interpolation Techniques
Nearest neighbor interpolation
Bilinear interpolation
Bicubic interpolation

Interpolation Applications
Digital zooming (resolution enhancement)
Image inpainting (error concealment)
Geometric transformations (where your imagination can fly)
Introduction
What is image interpolation?
An image f(x,y) tells us the intensity values at the integer lattice locations, i.e., when x and y are both integers. Image interpolation refers to guessing intensity values at missing locations, i.e., where x and y can be arbitrary. Note that it is just a guess (all sensors have a finite sampling distance).
Engineering Motivations
Why do we need image interpolation?
We want BIG images: when we watch a video clip on a PC, we like to see it in full-screen mode.
Linear interpolation between samples f(n) and f(n+1):
f(n+a) = (1-a) f(n) + a f(n+1), for 0 < a < 1
(the weights 1-a and a reflect the distances from n+a to the two samples).
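The one-dimensional formula translates directly into code (the name `lerp` is my own shorthand):

```python
def lerp(f_n, f_n1, a):
    """f(n+a) = (1-a)*f(n) + a*f(n+1), for 0 <= a <= 1."""
    return (1 - a) * f_n + a * f_n1

print(lerp(10, 20, 0.25))  # 12.5: a quarter of the way from 10 to 20
```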
Example
Enlarge an image of 500x500 pixels to 750x750 pixels. Nearest neighbor interpolation: make an overlay of 750x750 points and assign each point the value of the closest pixel in the original image. This causes artifacts: distortion of straight edges.
Nearest neighbor interpolation is the simplest method and basically makes the pixels bigger. The color of a pixel in the new image is the color of the nearest pixel of the original image. If you enlarge by 200%, one pixel is enlarged to a 2 x 2 area of 4 pixels with the same color as the original pixel. Most image viewing and editing software uses this type of interpolation to enlarge a digital image for closer examination because it does not change the color information. For the same reason, it is not suitable for enlarging photographic images, because it increases the visibility of jaggies.
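A pure-Python sketch of the overlay-and-copy idea (the floor-based index mapping below is one common choice; real software may round differently):

```python
def nearest_neighbor_resize(img, new_h, new_w):
    """Resize a 2-D list of pixel values by copying the nearest source pixel."""
    h, w = len(img), len(img[0])
    out = []
    for r in range(new_h):
        src_r = min(h - 1, r * h // new_h)      # nearest source row
        row = []
        for c in range(new_w):
            src_c = min(w - 1, c * w // new_w)  # nearest source column
            row.append(img[src_r][src_c])
        out.append(row)
    return out

# Enlarging by 200%: each pixel becomes a 2 x 2 block of the same value.
print(nearest_neighbor_resize([[1, 2], [3, 4]], 4, 4))
```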
Bilinear Interpolation
Bilinear Interpolation determines the value of a new pixel based on a weighted average of the 4 pixels in the nearest 2 x 2 neighborhood of the pixel in the original image. The averaging has an anti-aliasing effect and therefore produces relatively smooth edges with hardly any jaggies.
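A sketch of sampling one fractional location with the 2 x 2 weighted average just described (border pixels are clamped; the function name is my own):

```python
def bilinear_sample(img, y, x):
    """Interpolate img (a 2-D list) at fractional coordinates (y, x)."""
    h, w = len(img), len(img[0])
    y0, x0 = int(y), int(x)
    y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)  # clamp at the border
    dy, dx = y - y0, x - x0
    top = (1 - dx) * img[y0][x0] + dx * img[y0][x1]
    bottom = (1 - dx) * img[y1][x0] + dx * img[y1][x1]
    return (1 - dy) * top + dy * bottom

print(bilinear_sample([[0, 10], [20, 30]], 0.5, 0.5))  # 15.0: mean of all four
```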
Bicubic interpolation
Bicubic interpolation is more sophisticated and produces smoother edges than bilinear interpolation. Notice, for instance, the smoother eyelashes in the example below. Here, a new pixel is computed as a bicubic function of the 16 pixels in the nearest 4 x 4 neighborhood of the pixel in the original image. This is the method most commonly used by image editing software, printer drivers and many digital cameras for resampling images.
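Bicubic resampling applies a 1-D cubic kernel along each axis of the 4 x 4 neighborhood. As an illustrative sketch (the Catmull-Rom kernel shown is one common choice, not necessarily the one any particular camera or editor uses), here is the 1-D building block that interpolates a value between samples p1 and p2 from four neighbors:

```python
def cubic_interp(p0, p1, p2, p3, t):
    """Catmull-Rom cubic: value between p1 (at t=0) and p2 (at t=1)."""
    return 0.5 * (
        2 * p1
        + (-p0 + p2) * t
        + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t ** 2
        + (-p0 + 3 * p1 - 3 * p2 + p3) * t ** 3
    )

print(cubic_interp(1, 2, 3, 4, 0.5))  # 2.5: exact on linear data
```

Applying this kernel first along the rows and then along the columns of the 4 x 4 neighborhood yields the bicubic result.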
Basic Relationships Between Pixels:
Neighborhood, Adjacency, Connectivity, Paths, Regions and boundaries
Pixel neighborhood
Pixel neighborhood
(x-1, y-1)
(x, y) (x+1, y1)
(x-1, y+1)
(x+1, y+1)
The eight neighbors of p = (x, y), N8(p): (x-1, y-1), (x, y-1), (x+1, y-1), (x-1, y), (x+1, y), (x-1, y+1), (x, y+1), (x+1, y+1). The four horizontal and vertical neighbors among these form N4(p).
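The three neighborhoods can be written down directly as sets of (x, y) tuples:

```python
def N4(p):
    """4-neighbors: the horizontal and vertical neighbors of p."""
    x, y = p
    return {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}

def ND(p):
    """Diagonal neighbors of p."""
    x, y = p
    return {(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)}

def N8(p):
    """8-neighbors: the union of N4 and ND."""
    return N4(p) | ND(p)

print(sorted(N8((2, 2))))  # the eight pixels surrounding (2, 2)
```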
Connectivity of pixels
Two pixels are connected if they are neighbors and their gray levels satisfy a specified criterion of similarity
Example:
0 1 0
1 1 1
1 0 0
Pixel adjacency
Two pixels p and q with values from a set V are 4-adjacent if q is in the set N4(p)
Two pixels p and q with values from a set V are said to be mixed (m-) adjacent if (1) q is in the set N4(p), or (2) q is in the set ND(p) and the set N4(p) ∩ N4(q) has no pixels whose values are from V
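A sketch of the m-adjacency test (pixel coordinates as (row, col); the helper names are my own):

```python
def m_adjacent(p, q, img, V):
    """True if p and q are m-adjacent in img with respect to value set V."""
    n4 = lambda r, c: {(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)}
    nd = lambda r, c: {(r + 1, c + 1), (r + 1, c - 1),
                       (r - 1, c + 1), (r - 1, c - 1)}
    in_V = lambda pt: (0 <= pt[0] < len(img) and 0 <= pt[1] < len(img[0])
                       and img[pt[0]][pt[1]] in V)
    if not (in_V(p) and in_V(q)):
        return False
    if q in n4(*p):
        return True
    if q in nd(*p):
        # A diagonal pair is m-adjacent only if the two pixels share no
        # 4-neighbor whose value is in V.
        return not any(in_V(s) for s in n4(*p) & n4(*q))
    return False

# The shared 4-neighbor (0, 1) has value 1, so this diagonal pair is NOT m-adjacent:
print(m_adjacent((0, 0), (1, 1), [[1, 1], [0, 1]], {1}))  # False
```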
Path: A (digital) path (or curve) from pixel p with coordinates (x_0, y_0) to pixel q with coordinates (x_n, y_n) is a sequence of distinct pixels with coordinates (x_0, y_0), (x_1, y_1), ..., (x_n, y_n), where (x_i, y_i) and (x_{i-1}, y_{i-1}) are adjacent for 1 ≤ i ≤ n. Here n is the length of the path. If (x_0, y_0) = (x_n, y_n), the path is a closed path. We can define 4-, 8-, and m-paths based on the type of adjacency used.
Path or curve
Find the length of the shortest 4-, 8-, and m-paths between p and q if V = {0, 1}. Repeat for V = {1, 2}.
3 2 1 1(p)
1 2 2 0
2 0 1 1
1(q) 2 1 2
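The 4- and 8-path parts of the exercise can be checked with a breadth-first search restricted to pixels whose values lie in V (m-paths need the extra adjacency test and are omitted here; coordinates are 0-based (row, col), so p = (0, 3) and q = (3, 0)):

```python
from collections import deque

def shortest_path_len(img, p, q, V, adjacency):
    """Length of the shortest 4- or 8-path from p to q through pixels in V."""
    steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    if adjacency == 8:
        steps += [(1, 1), (1, -1), (-1, 1), (-1, -1)]
    h, w = len(img), len(img[0])
    ok = lambda r, c: 0 <= r < h and 0 <= c < w and img[r][c] in V
    if not (ok(*p) and ok(*q)):
        return None
    dist, queue = {p: 0}, deque([p])
    while queue:
        cur = queue.popleft()
        if cur == q:
            return dist[cur]
        for dr, dc in steps:
            nxt = (cur[0] + dr, cur[1] + dc)
            if ok(*nxt) and nxt not in dist:
                dist[nxt] = dist[cur] + 1
                queue.append(nxt)
    return None  # q is unreachable from p

grid = [[3, 2, 1, 1],
        [1, 2, 2, 0],
        [2, 0, 1, 1],
        [1, 2, 1, 2]]
print(shortest_path_len(grid, (0, 3), (3, 0), {0, 1}, 8))  # 4
print(shortest_path_len(grid, (0, 3), (3, 0), {0, 1}, 4))  # None: no 4-path
```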
Two pixels p and q in S (a subset of pixels in an image) are connected if there exists a path between them consisting entirely of pixels from S. For any pixel p in S, the set of pixels that are connected to it in S is called a connected component. If S has only one connected component, then S is called a connected set.
Example: consider the 3 x 3 array
0 1 1
0 2 0
0 0 1
Lecture # 2
With pixels indexed by (row, column) from (1,1) to (3,3), the array above illustrates the difference between 8-adjacency and m-adjacency: under 8-adjacency there can be multiple paths between the same pair of pixels, whereas m-adjacency makes the path unique.
The m-path from (1,3) to (3,3): (1,3), (1,2), (2,2), (3,3).
The eye has 3 types of photoreceptors: sensitive to red, green, or blue light.
The brain transforms RGB into separate brightness and color channels (e.g., LHS).