
CHAPTER 1

INTRODUCTION

An image is defined as a two-dimensional function of two real variables, for example f(x, y), where x and y are spatial coordinates and f is the amplitude (e.g. brightness) of the image at the real coordinate position (x, y). When (x, y) and the amplitude values of f are all finite, discrete quantities, the image is called a DIGITAL IMAGE. A digital image is a numeric representation, normally binary, of a two-dimensional image. Depending on whether the image resolution is fixed, it may be of vector or raster type. By itself, the term "digital image" usually refers to raster images or bitmapped images (as opposed to vector images). Raster images have a finite set of digital values, called picture elements or pixels. A digital image contains a fixed number of rows and columns of pixels. Pixels are the smallest individual elements in an image, holding quantized values that represent the brightness of a given colour at any specific point. The field of digital image processing refers to the processing of digital images by means of a digital computer. One of the advantages of digital images is the ability to transfer them electronically almost instantaneously and to convert them easily from one medium to another, such as from a web page to a computer screen to a printer, according to one's need.

Image Processing is a set of techniques for enhancing raw images received from cameras and sensors placed on satellites, space probes and aircraft, or pictures taken in normal day-to-day life, for various applications. Many such techniques have been developed over the last four to five decades, most of them originally for enhancing images obtained from unmanned spacecraft, space probes and military reconnaissance flights. Image Processing systems are becoming popular due to the easy availability of powerful personal computers, large-capacity memory devices, graphics software, etc. Some of the fields in which Image Processing is widely applied are:

• Remote Sensing

• Medical Imaging

• Non-destructive Evaluation

• Forensic Studies

• Textiles

• Material Science.

• Military

• Film industry

• Document processing

• Graphic arts

• Printing Industry

Capturing an image with a camera is a physical process. Sunlight serves as a source of energy, and a sensor array is used for image acquisition. When sunlight falls upon an object, the amount of light reflected by that object is sensed by the sensors, which generate a continuous voltage signal proportional to the sensed data. To create a digital image, this data must be converted into digital form, which involves sampling and quantization (discussed later on). The result of sampling and quantization is a two-dimensional array or matrix of numbers, which is nothing but a digital image.
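As a sketch of these two steps, the following Python/NumPy snippet (an illustration; the scene function and the grid size are hypothetical) samples a continuous brightness function on a discrete grid and quantizes the amplitudes to 256 gray levels:

```python
import numpy as np

def acquire_digital_image(f, rows, cols, levels=256):
    """Sample a continuous scene f(x, y) on a rows-by-cols grid,
    then quantize the amplitudes to the given number of gray levels."""
    # Sampling: evaluate f at discrete grid positions in [0, 1) x [0, 1)
    ys = np.arange(rows) / rows
    xs = np.arange(cols) / cols
    sampled = np.array([[f(x, y) for x in xs] for y in ys])
    # Quantization: map the continuous amplitude range onto integer levels
    lo, hi = sampled.min(), sampled.max()
    quantized = np.round((sampled - lo) / (hi - lo) * (levels - 1))
    return quantized.astype(np.uint8)

# A hypothetical smooth "scene": brightness rises from left to right
image = acquire_digital_image(lambda x, y: x, rows=4, cols=4)
```

The result is exactly the matrix of numbers described above: a fixed grid of rows and columns holding quantized brightness values.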

Image processing therefore involves three basic steps: importing the image via image acquisition tools; analysing and manipulating the image; and producing an output, which can be an altered image or a report based on image analysis.

MATLAB's Image Processing Toolbox works with four basic types of images:

• Gray-scale images

• Binary images

• Indexed images

• RGB images

1.2.1. Gray-scale Images

A gray-scale image is a matrix whose values represent shades of gray. When the matrix is of type uint8, the integer-valued elements are in the range [0,255]. By convention, the value 0 is displayed as black and the value 255 is displayed as white; values in between are displayed as intermediate shades of gray. When the matrix is of type uint16, 0 is displayed as black and 65535 is displayed as white. For a floating-point matrix, of type either double or single, the value 0.0 is displayed as black and 1.0 is displayed as white. Originally, the Image Processing Toolbox documentation called these intensity images, and this term is still found in some places; it was largely abandoned because, for colour scientists, the term "intensity image" means something slightly different.
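These display conventions can be made concrete with a small sketch. The following Python/NumPy snippet (an illustration, not Image Processing Toolbox code) rescales integer gray-scale images to the floating-point convention, in which 0.0 displays as black and 1.0 as white:

```python
import numpy as np

def to_float_gray(img):
    """Rescale an integer gray-scale image to the floating-point
    display convention (0.0 = black, 1.0 = white)."""
    info = np.iinfo(img.dtype)          # max is 255 for uint8, 65535 for uint16
    return img.astype(np.float64) / info.max

u8 = np.array([[0, 128, 255]], dtype=np.uint8)
u16 = np.array([[0, 65535]], dtype=np.uint16)
```

Dividing by the type's maximum value maps "white" in either integer convention to the same floating-point value, 1.0.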

1.2.2. Binary Images

In a binary image, pixels are either black or white (or, a bit more generally, either background or foreground). In MATLAB and the Image Processing Toolbox, the convention is that binary images are represented as logical matrices: a matrix of type logical is constructed and then displayed as a binary (black-and-white) image.
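The original MATLAB example is not reproduced here, so a minimal equivalent sketch in Python/NumPy is shown instead: a logical (boolean) matrix whose True entries are the foreground pixels.

```python
import numpy as np

# A binary image is simply a matrix of logical (boolean) values:
# True pixels are foreground (white), False pixels are background (black).
bw = np.array([[0, 1, 1],
               [1, 0, 0],
               [0, 0, 1]], dtype=bool)

foreground_count = np.count_nonzero(bw)   # number of white pixels
```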

1.2.3. Indexed Images

An indexed image consists of two matrices: an index matrix, commonly called X, and a colour map matrix, commonly called map. The matrix map is an M-by-3 matrix of floating-point values (either double or single) in the range [0.0,1.0]. Each row of map specifies the red, green, and blue components of a single colour. An indexed image is displayed by mapping values in the index matrix to colours in the colour map. A quirk of MATLAB is that this mapping is data-type specific: if the index matrix is floating-point, the value 1.0 corresponds to the first colour in the colour map, but if the index matrix is uint8 or uint16, the value 0 corresponds to the first colour.
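A small sketch makes the data-type quirk concrete. The following Python/NumPy function (an illustration of the convention described above, not toolbox code) resolves an index matrix against a colour map, treating floating-point indices as 1-based and integer indices as 0-based:

```python
import numpy as np

def index_to_rgb(X, cmap):
    """Resolve an indexed image X against an M-by-3 colour map,
    honouring the MATLAB convention: floating-point indices are
    1-based, integer (uint8/uint16) indices are 0-based."""
    idx = np.asarray(X)
    if np.issubdtype(idx.dtype, np.floating):
        idx = idx.astype(np.intp) - 1   # 1.0 -> first colour map row
    else:
        idx = idx.astype(np.intp)       # 0 -> first colour map row
    return cmap[idx]

cmap = np.array([[0.0, 0.0, 0.0],    # colour 1: black
                 [1.0, 0.0, 0.0],    # colour 2: red
                 [1.0, 1.0, 1.0]])   # colour 3: white

X_float = np.array([[1.0, 3.0]])               # 1-based indexing
X_uint8 = np.array([[0, 2]], dtype=np.uint8)   # 0-based indexing
```

Both index matrices above name the same two colours (black and white), even though their raw values differ by one.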

1.2.4. RGB Images

An RGB image is an M-by-N-by-3 array. For the pixel at row r and column c, the three values RGB(r,c,1), RGB(r,c,2), and RGB(r,c,3) specify the red, green, and blue colour components of that pixel. A pixel whose colour components are [0,0,0] is displayed as black. For a floating-point array, a pixel whose colour components are [1.0,1.0,1.0] is displayed as white; for a uint8 or uint16 array, [255,255,255] or [65535,65535,65535], respectively, is displayed as white.

1.3. Pixel

An image is composed of a finite number of elements, each of which has a particular location and value. These elements are referred to as picture elements, image elements, pels or pixels. A pixel is the smallest element of an image, and each pixel corresponds to a single value. In an 8-bit gray-scale image, the value of a pixel lies between 0 and 255. The value of a pixel at any point corresponds to the intensity of the light photons striking that point; each pixel stores a value proportional to the light intensity at that particular location.

A typical picture is made up of thousands of such pixels; zooming into an image far enough makes the individual pixel divisions visible.

1.3.1. Calculation of total number of pixels

We have defined an image as a two-dimensional signal or matrix. In that case, the total number of pixels (PELs) equals the number of rows multiplied by the number of columns. Equivalently, the number of (x, y) coordinate pairs makes up the total number of pixels.
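The calculation is trivial to express in code; the dimensions below are hypothetical:

```python
def total_pixels(rows, cols):
    """Total number of picture elements in a rows-by-cols image."""
    return rows * cols

# e.g. a 1024 x 768 image
n = total_pixels(1024, 768)
```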

Gray level

The value of a pixel at any point denotes the intensity of the image at that location, and is also known as its gray level.

Pixel value 0

As already defined, each pixel has only one value, and each value denotes the intensity of light at that point of the image. The value 0 is a special case: it means the absence of light. A pixel with value 0 is dark, so wherever a pixel has the value 0, black colour appears at that point.

In this section, we consider several important relationships between pixels in a digital image. As mentioned before, an image is denoted by f(x, y). When referring in this section to a particular pixel, we use lowercase letters, such as p and q. A pixel p at coordinates (x, y) has four horizontal and vertical neighbours whose coordinates are given by

(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)

This set of pixels, called the 4-neighbors of p, is denoted by N4(p). The four diagonal neighbours of p have coordinates

(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)

and are denoted by ND(p). These points, together with the 4-neighbors, are called the 8-neighbors of p, denoted by N8(p). Some of the neighbour locations in ND(p) and N8(p) fall outside the image if (x, y) is on the border of the image.
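These neighbourhood definitions translate directly into code. The following Python sketch enumerates N4(p), ND(p) and N8(p) for a pixel p = (x, y); clipping of neighbours that fall outside the image border is omitted for brevity:

```python
def n4(p):
    """4-neighbours of pixel p = (x, y): horizontal and vertical."""
    x, y = p
    return {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}

def nd(p):
    """Diagonal neighbours of p."""
    x, y = p
    return {(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)}

def n8(p):
    """8-neighbours: the 4-neighbours together with the diagonal ones."""
    return n4(p) | nd(p)
```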

Let V be the set of intensity values used to define adjacency. In a binary image, V = {1} if we are referring to adjacency of pixels with value 1. In a gray-scale image, the idea is the same, but set V typically contains more elements; for example, for adjacency of pixels with a range of possible intensity values 0 to 255, set V could be any subset of these 256 values. We consider three types of adjacency:

(a) 4-adjacency. Two pixels p and q with values from V are 4-adjacent if q is in the set N4(p).

(b) 8-adjacency. Two pixels p and q with values from V are 8-adjacent if q is in the set N8(p).

(c) m-adjacency (mixed adjacency). Two pixels p and q with values from V are m-adjacent if

(i) q is in N4(p), or

(ii) q is in ND(p) and the set N4(p) ∩ N4(q) has no pixels whose values are from V.

Mixed adjacency is a modification of 8-adjacency. It is introduced to eliminate the ambiguities that often arise when 8-adjacency is used.
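The m-adjacency rule can be sketched as follows in Python. The helper value_at and the tiny test image are hypothetical; pixels outside the image are assumed to return a value not in V:

```python
def m_adjacent(p, q, V, value_at):
    """Check m-adjacency of pixels p and q with respect to value set V.
    value_at(pixel) returns that pixel's intensity (assumed helper)."""
    x, y = p
    four = {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}
    diag = {(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)}
    if value_at(p) not in V or value_at(q) not in V:
        return False
    if q in four:                          # condition (i): q is in N4(p)
        return True
    if q in diag:                          # condition (ii): q is in ND(p)
        qx, qy = q
        q_four = {(qx + 1, qy), (qx - 1, qy), (qx, qy + 1), (qx, qy - 1)}
        # ... and N4(p) ∩ N4(q) has no pixels with values from V
        return all(value_at(r) not in V for r in four & q_four)
    return False

# Tiny binary image stored as a dict; missing pixels default to 0. V = {1}.
img = {(0, 0): 1, (1, 0): 1, (1, 1): 1, (0, 1): 0}
val = lambda p: img.get(p, 0)
```

In this example, (0,0) and (1,0) are m-adjacent via condition (i), while (0,0) and (1,1) are not m-adjacent: they share the 4-neighbour (1,0), whose value is in V, which is exactly the kind of double path that m-adjacency is designed to rule out.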

A (digital) path (or curve) from pixel p with coordinates (x, y) to pixel q with coordinates (s, t) is a sequence of distinct pixels with coordinates

(x0, y0), (x1, y1), ..., (xn, yn)

where (x0, y0) = (x, y), (xn, yn) = (s, t), and pixels (xi, yi) and (xi-1, yi-1) are adjacent for 1 ≤ i ≤ n. In this case, n is the length of the path. If (x0, y0) = (xn, yn), the path is a closed path. We can define 4-, 8-, or m-paths depending on the type of adjacency specified. Let S represent a subset of pixels in an image. Two pixels p and q are said to be connected in S if there exists a path between them consisting entirely of pixels in S. For any pixel p in S, the set of pixels that are connected to it in S is called a connected component of S. If S has only one connected component, then S is called a connected set.

Let R represent a subset of pixels in an image. We call R a region of the image if R is a connected set. Two regions, Ri and Rj, are said to be adjacent if their union forms a connected set; regions that are not adjacent are said to be disjoint. We consider 4- and 8-adjacency when referring to regions, and for the definition to make sense, the type of adjacency used must be specified.

Suppose an image contains K disjoint regions, Rk, k = 1, 2, ..., K, none of which touches the image border. Let Ru denote the union of all the K regions, and let (Ru)c denote its complement (recall that the complement of a set S is the set of points that are not in S). We call all the points in Ru the foreground, and all the points in (Ru)c the background of the image.

The boundary (also called the border or contour) of a region R is the set of points that are adjacent to points in the complement of R. Said another way, the border of a region is the set of pixels in the region that have at least one background neighbour. Here again, we must specify the connectivity being used to define adjacency. The boundary just defined is sometimes called the inner border of the region, to distinguish it from its outer border, which is the corresponding border in the background. This distinction is important in the development of border-following algorithms: such algorithms usually are formulated to follow the outer boundary in order to guarantee that the result will form a closed path. The inner border of a region may not form a closed path, but the outer border of the region does form a closed path around the region.

If R happens to be an entire image (which we recall is a rectangular set of pixels), then its boundary is defined as the set of pixels in the first and last rows and columns of the image. This extra definition is required because an image has no neighbours beyond its border. Normally, when we refer to a region, we are referring to a subset of an image, and any pixels in the boundary of the region that happen to coincide with the border of the image are included implicitly as part of the region boundary.

Some of the most commonly used image file formats are BMP, JPEG and TIFF.
There are two methods available in Image Processing.

1.4.1. Analog Image Processing
Analog Image Processing refers to the alteration of image through electrical

means. The most common example is the television image. The television signal is a

voltage level which varies in amplitude to represent brightness through the image. By

electrically varying the signal, the displayed image appearance is altered. The

brightness and contrast controls on a TV set serve to adjust the amplitude and reference

of the video signal, resulting in the brightening, darkening and alteration of the

brightness range of the displayed image.

1.4.2. Digital Image Processing

In this case, digital computers are used to process the image. The image is first converted to digital form using a scanner-digitizer and then processed. Digital image processing is defined as subjecting numerical representations of objects to a series of operations in order to obtain a desired result. It starts with one image and produces a modified version of it; it is therefore a process that takes an image into another. The term digital image processing generally refers to the processing of a two-dimensional picture by a digital computer; in a broader context, it implies digital processing of any two-dimensional data. A digital image is an array of real numbers represented by a finite number of bits. The principal advantages of digital image processing methods are versatility, repeatability and the preservation of original data precision. Digital image processing broadly encompasses the following areas:

• Image representation

• Image pre-processing

• Image enhancement

• Image restoration

• Image analysis

• Image reconstruction

• Image data compression

Image Representation

An image defined in the "real world" is considered to be a function of two real

variables, for example, f(x,y) with f as the amplitude (e.g. brightness) of the image at

the real coordinate position (x,y).

The 2D continuous image f(x,y) is divided into N rows and M columns. The intersection of a row and a column is called a pixel. The value assigned to the integer coordinates [m,n], with {m = 0,1,2,...,M-1} and {n = 0,1,2,...,N-1}, is f[m,n]; in most cases, f(x,y) can be considered to be the physical signal that impinges on the face of a sensor. Typically, an image file such as BMP, JPEG or TIFF has a header followed by the picture information. The header usually includes details like a format identifier (typically the first information), resolution, number of bits per pixel, compression type, etc.

Image Pre-processing

Scaling

Magnification gives a closer view by zooming in on a region of interest in the imagery, while reduction brings an unmanageable volume of data down to a manageable size. For resampling an image, nearest-neighbour, linear (bilinear), or cubic convolution techniques are used.
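Nearest-neighbour resampling, the simplest of the three techniques, can be sketched in Python/NumPy as follows (the 2x2 test image is hypothetical):

```python
import numpy as np

def scale_nearest(img, new_rows, new_cols):
    """Resample img to new_rows x new_cols by nearest-neighbour lookup:
    each output pixel copies the closest source pixel."""
    rows, cols = img.shape
    r_idx = np.arange(new_rows) * rows // new_rows
    c_idx = np.arange(new_cols) * cols // new_cols
    return img[np.ix_(r_idx, c_idx)]

small = np.array([[1, 2],
                  [3, 4]], dtype=np.uint8)
big = scale_nearest(small, 4, 4)   # 2x magnification
```

Magnifying a 2x2 image to 4x4 simply replicates each source pixel into a 2x2 block; reduction works with the same index arithmetic in the other direction.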

Rotation

Rotation is used in image mosaicking, image registration, etc. One such technique is the 3-pass shear rotation, in which the rotation matrix is decomposed into three separable shear matrices:

R = | cosα  −sinα |  =  | 1  −tan(α/2) |  | 1     0 |  | 1  −tan(α/2) |
    | sinα   cosα |     | 0      1     |  | sinα  1 |  | 0      1     |
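The decomposition can be verified numerically. The following Python sketch multiplies the three shear passes together and checks that the product equals the rotation matrix (here for a 30° angle):

```python
import math

def matmul2(a, b):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

alpha = math.radians(30)
t = -math.tan(alpha / 2)
shear_x = [[1, t], [0, 1]]                 # first and third pass
shear_y = [[1, 0], [math.sin(alpha), 1]]   # second pass

product = matmul2(matmul2(shear_x, shear_y), shear_x)
rotation = [[math.cos(alpha), -math.sin(alpha)],
            [math.sin(alpha),  math.cos(alpha)]]
```

Because each shear moves pixels only along one axis, each pass can be implemented as a simple per-row or per-column shift, which is what makes the 3-pass method attractive in practice.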

Mosaic

Mosaicking combines two or more images to form a single large image without radiometric imbalance. A mosaic is required to get a synoptic view of an entire area that would otherwise be captured only as small separate images.

Image Enhancement

Images obtained from satellites and from conventional and digital cameras sometimes lack contrast and brightness because of the limitations of the imaging subsystems and of the illumination conditions while capturing the image, and they may contain different types of noise. In image enhancement, the goal is to accentuate certain image features for subsequent analysis or for image display. Examples include contrast and edge enhancement, pseudo-colouring, noise filtering, sharpening, and magnifying. Image enhancement is useful in feature extraction, image analysis and image display. The enhancement process itself does not increase the inherent information content in the data; it simply emphasizes certain specified image characteristics. Enhancement algorithms are generally interactive and application-dependent. Common enhancement techniques include:

• Noise filtering

• Histogram modification
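As one concrete example of such a technique, the following Python/NumPy sketch performs a simple linear contrast stretch; note that it only redistributes the existing gray levels and adds no new information to the data:

```python
import numpy as np

def stretch_contrast(img):
    """Linearly stretch image intensities to the full [0, 255] range,
    a simple enhancement that emphasizes contrast without adding
    any new information."""
    lo, hi = int(img.min()), int(img.max())
    stretched = (img.astype(np.float64) - lo) / (hi - lo) * 255.0
    return np.round(stretched).astype(np.uint8)

dull = np.array([[100, 110],
                 [120, 140]], dtype=np.uint8)   # low-contrast image
crisp = stretch_contrast(dull)
```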

Image Analysis

Image analysis is concerned with making quantitative measurements from an image to produce a description of it. In the simplest form, this task could be reading a label on a grocery item, sorting different parts on an assembly line, or measuring the

label on a grocery item, sorting different parts on an assembly line, or measuring the

size and orientation of blood cells in a medical image. More advanced image analysis

systems measure quantitative information and use it to make a sophisticated decision,

such as controlling the arm of a robot to move an object after identifying it or

navigating an aircraft with the aid of images acquired along its trajectory.

Image analysis techniques require extraction of certain features that aid in the

identification of the object. Segmentation techniques are used to isolate the desired

object from the scene so that measurements can be made on it subsequently.

Quantitative measurements of object features allow classification and description of the

image.

Image Restoration

Image restoration refers to removal or minimization of degradations in an

image. This includes de-blurring of images degraded by the limitations of a sensor or

its environment, noise filtering, and correction of geometric distortion or non-linearity

due to sensors.

Restoration algorithms model the degradation phenomenon, such as defocus, linear motion, atmospheric degradation or additive noise, and invert the model to recover the original image.

Image Compression

Compression is an essential tool for archiving image data, transferring image data over networks, etc. There are various techniques available for lossy and lossless compression. One of the most popular compression standards, JPEG (Joint Photographic Experts Group), uses a Discrete Cosine Transform (DCT) based compression technique. Currently, wavelet-based compression techniques are used for higher compression ratios with minimal loss of data.
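The energy compaction that makes DCT-based compression work can be sketched directly. The following Python snippet implements a naive 2-D DCT-II of an 8x8 block (the block size JPEG uses); for a perfectly flat block, all of the energy collapses into the single DC coefficient, which is why smooth image regions are cheap to encode:

```python
import math

def dct2_8x8(block):
    """Naive 2-D DCT-II of an 8x8 block (the transform used by JPEG)."""
    N = 8
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            cu = math.sqrt(1 / N) if u == 0 else math.sqrt(2 / N)
            cv = math.sqrt(1 / N) if v == 0 else math.sqrt(2 / N)
            s = 0.0
            for x in range(N):
                for y in range(N):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * N)))
            out[u][v] = cu * cv * s
    return out

# A flat 8x8 block of mid-gray: all energy lands in the DC coefficient.
flat = [[128] * 8 for _ in range(8)]
coeffs = dct2_8x8(flat)
```

A real JPEG encoder follows the DCT with quantization and entropy coding of the (mostly near-zero) AC coefficients; this sketch shows only the transform step.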

Image Reconstruction

Image reconstruction from projections is a special class of restoration problems in which a two- (or higher) dimensional object is reconstructed from several one-dimensional projections. Each projection is obtained by projecting a parallel X-ray (or other penetrating radiation) beam through the object. Planar projections are thus obtained by viewing the object from many different angles. Reconstruction algorithms

derive an image of a thin axial slice of the object, giving an inside view otherwise

unobtainable without performing extensive surgery. Such techniques are important in

medical imaging (CT scanners), astronomy, radar imaging, geological exploration, and

non-destructive testing of assemblies.
