
Presented by: Ritu Pareek
M.Tech, 2nd Year (Signal Processing)
Enrollment No. 100658
Faculty of Engineering and Technology
Mody Institute of Technology and Science, Lakshmangarh

Image compression reduces the number of bits required to represent an image. A digital image is a 2-D array of pixels and is inherently very large in size.

Driving factors behind image compression:
Storage space
Shorter transmission time

To achieve compression, reduction of redundant data is necessary.


Original Image -> compress -> Compressed Image File -> decompress -> Extracted Image File

Coding Redundancy
Present when the data used to represent the image is not coded in an optimal manner. Removing it is reversible.

Spatial Redundancy
Also called interpixel redundancy: adjacent pixel values tend to be highly correlated, as the sketch below illustrates.

Psychovisual Redundancy
The human eye does not respond with equal sensitivity to all visual information; data less relevant to the eye can be eliminated.
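As an illustrative sketch of spatial redundancy (a minimal example assuming NumPy; the synthetic gradient image is a stand-in for real data), the correlation between horizontally adjacent pixel values is typically close to 1 for natural images:

```python
import numpy as np

def adjacent_pixel_correlation(img: np.ndarray) -> float:
    """Correlation coefficient between horizontally adjacent pixels."""
    left = img[:, :-1].ravel().astype(np.float64)
    right = img[:, 1:].ravel().astype(np.float64)
    return float(np.corrcoef(left, right)[0, 1])

# A smooth synthetic gradient is almost perfectly correlated;
# natural photographs typically score above 0.9 as well.
img = np.tile(np.arange(256, dtype=np.uint8), (256, 1))
print(adjacent_pixel_correlation(img))  # ~1.0
```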

Lossless Compression
Error-free compression: the original data can be recovered completely. Exploits coding redundancy and interpixel redundancy. Used for medical imaging, space images, technical drawings, etc.

Lossy Compression
The original data is only approximated, so reconstruction is less than perfect; the advantage is much higher compression. Used for photographic images.

Amount of information I in a symbol of occurrence probability p: I = log2(1/p). The unit of information is the bit.

Average information per symbol is called the entropy H:

H = Σᵢ pᵢ · log2(1/pᵢ) bits per symbol

Symbols that occur rarely convey a large amount of information.
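A minimal Python sketch of this formula (the byte string is an illustrative source alphabet):

```python
import math
from collections import Counter

def entropy(data: bytes) -> float:
    """H = sum_i p_i * log2(1/p_i), in bits per symbol."""
    counts = Counter(data)
    n = len(data)
    return sum((c / n) * math.log2(n / c) for c in counts.values())

print(entropy(b"aaaabbcd"))  # 1.75 bits/symbol: p = 1/2, 1/4, 1/8, 1/8
```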

Objective fidelity criteria
The loss can be expressed as a mathematical function of the input image and the resulting compressed (then decompressed) image, e.g. the root-mean-square error. Simple and convenient.
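For example, a minimal NumPy sketch of two common objective criteria, RMS error and the PSNR derived from it (function names are illustrative):

```python
import numpy as np

def rmse(original: np.ndarray, decompressed: np.ndarray) -> float:
    """Root-mean-square error between input and reconstructed image."""
    diff = original.astype(np.float64) - decompressed.astype(np.float64)
    return float(np.sqrt(np.mean(diff ** 2)))

def psnr(original: np.ndarray, decompressed: np.ndarray,
         peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB, assuming 8-bit images."""
    e = rmse(original, decompressed)
    return float("inf") if e == 0 else 20.0 * np.log10(peak / e)
```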

Subjective fidelity criteria
Human-based criteria; usually a side-by-side comparison.

Huffman Coding
A popular technique for removing coding redundancy. It assigns fewer bits to symbols that appear more often and more bits to symbols that appear less often, and is efficient when occurrence probabilities vary widely. A Huffman codebook is built from the set of symbols and their occurrence probabilities. Huffman codes satisfy the prefix condition (no codeword is a prefix of another codeword), so they are uniquely decodable.

The procedure has two steps: Huffman source reduction, followed by Huffman code assignment.
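A minimal sketch of building such a codebook (a heap-based construction equivalent to source reduction plus code assignment; the symbols and probabilities are illustrative):

```python
import heapq
from itertools import count

def huffman_codebook(probs: dict) -> dict:
    """Prefix-free codes: repeatedly merge the two least probable nodes."""
    tiebreak = count()  # avoids comparing dicts when probabilities tie
    heap = [(p, next(tiebreak), {s: ""}) for s, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)
        p2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (p1 + p2, next(tiebreak), merged))
    return heap[0][2]

print(huffman_codebook({"a": 0.4, "b": 0.3, "c": 0.2, "d": 0.1}))
# e.g. {'a': '0', 'b': '10', 'd': '110', 'c': '111'}
# -- no codeword is a prefix of another, so decoding is unambiguous
```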

Arithmetic coding yields better compression than Huffman coding: an entire sequence of source symbols is assigned a single arithmetic code word, essentially encoding the sequence's probability. Symbols can be represented with fewer than 1 bit on average (Huffman coding requires at least 1 bit per symbol).
Disadvantages: slower than Huffman coding, and random access is difficult.
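A toy sketch of the interval-narrowing idea behind arithmetic coding (floating-point, so only usable for short sequences; practical coders use integer arithmetic with renormalization):

```python
def arithmetic_encode(symbols, probs):
    """Narrow [low, high) once per symbol; any number inside the final
    interval identifies the whole sequence."""
    # Cumulative ranges, e.g. {'a': (0.0, 0.8), 'b': (0.8, 1.0)}
    ranges, cum = {}, 0.0
    for s, p in probs.items():
        ranges[s] = (cum, cum + p)
        cum += p
    low, high = 0.0, 1.0
    for s in symbols:
        lo, hi = ranges[s]
        width = high - low
        low, high = low + width * lo, low + width * hi
    return (low + high) / 2  # one number encodes the entire sequence

code = arithmetic_encode("aab", {"a": 0.8, "b": 0.2})
print(code)  # 0.576 -- likely sequences get wide intervals, few bits
```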

DCT-Based Coding
The DCT has an energy-compaction property and fast implementations. It is applied to blocks of image pixels (8x8 or 16x16), producing decorrelated coefficients (the DC coefficient and the AC coefficients). The DCT coefficients in each block are thresholded, and the remaining coefficients are quantized.

Thresholding and quantization are achieved using a quantization matrix: each coefficient is divided by the corresponding matrix element and then rounded, as sketched below.

The DC (0,0) coefficient goes to a DPCM encoder (lossless). The AC coefficients are scanned in a zigzag fashion starting at (0,0) and run-length encoded, forming (run, level) symbol pairs. Both the AC and DC streams are then applied to an entropy coder, e.g. Huffman.

Figure: (a) coder and (b) decoder block diagrams.
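A minimal sketch of the thresholding/quantization step for one 8x8 block (assuming SciPy; QM is the standard JPEG luminance quantization table, and the level shift by 128 follows JPEG practice):

```python
import numpy as np
from scipy.fft import dctn, idctn

# Standard JPEG luminance quantization matrix (roughly quality 50).
QM = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99],
])

def quantize_block(block: np.ndarray) -> np.ndarray:
    """2-D DCT of an 8x8 block, divide by QM, round. Small
    high-frequency coefficients round to zero (the thresholding)."""
    coeffs = dctn(block.astype(np.float64) - 128.0, norm="ortho")
    return np.round(coeffs / QM).astype(np.int32)

def dequantize_block(q: np.ndarray) -> np.ndarray:
    """Inverse: multiply back by QM, then inverse DCT."""
    return idctn(q * QM, norm="ortho") + 128.0
```

The surviving nonzero coefficients would then be zigzag-scanned and run-length coded as described above.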

Wavelet Coding
The wavelet transform, like the Fourier transform and the DCT, is an image transform, but it achieves better quality and compression ratio and is better matched to the characteristics of the human visual system (HVS). Unlike the DCT, it is applied to the image as a whole, which works well for natural images, and it packs most of the visual information into a few coefficients (energy compactness). The input signal is filtered into lowpass and highpass components through analysis filters, so the spectrum of the input data is decomposed into a set of band-limited components called subbands.

Figure: 3-level decomposition.
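A minimal sketch of a 3-level decomposition using the PyWavelets package (assuming it is installed; the db2 wavelet and the threshold value are illustrative choices):

```python
import numpy as np
import pywt

img = np.random.rand(256, 256)  # stand-in for a grayscale image

# 3-level 2-D DWT: each level splits the lowpass band into subbands.
coeffs = pywt.wavedec2(img, wavelet="db2", level=3)
approx = coeffs[0]    # coarsest lowpass subband (LL3)
details = coeffs[1:]  # one (LH, HL, HH) detail triple per level
print(approx.shape)   # roughly (256/2**3, 256/2**3) plus filter overhead

# Crude compression: zero small detail coefficients, then reconstruct.
thresholded = [approx] + [
    tuple(pywt.threshold(d, value=0.1, mode="hard") for d in level)
    for level in details
]
recon = pywt.waverec2(thresholded, wavelet="db2")
```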

Joint Photographic Experts Group (JPEG)
The DCT is used in the JPEG image compression standard.
Advantages: low complexity, memory efficiency, reasonable coding efficiency.
Disadvantages: it is a lossy compression, which means data is discarded when the file is compressed; the more compression is used, the more artifacts may appear (blocking artifacts).
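As an illustration of this trade-off (assuming Pillow is installed; input.png is a hypothetical file), saving the same image at decreasing quality settings yields smaller files with stronger blocking artifacts:

```python
from PIL import Image

img = Image.open("input.png").convert("L")  # hypothetical input file
for quality in (90, 50, 10):
    # Lower quality -> coarser quantization -> smaller file, more blocking.
    img.save(f"out_q{quality}.jpg", format="JPEG", quality=quality)
```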

JPEG 2000
Wavelet coding is used. JPEG 2000 offers both lossy and lossless compression in the same file stream, higher compression ratios for lossy compression, and the ability to display images at different resolutions and sizes from the same image file, a consequence of using wavelets. It also has Region of Interest (ROI) capability, and JPEG 2000 is superior to JPEG in the area of error resilience.
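A sketch of writing lossless and lossy JPEG 2000 files with Pillow (this assumes a Pillow build with OpenJPEG support; input.png and the parameter values are illustrative):

```python
from PIL import Image

img = Image.open("input.png")  # hypothetical input file

# Lossless: reversible wavelet, exact reconstruction.
img.save("out_lossless.jp2", format="JPEG2000")

# Lossy: irreversible wavelet at a target compression ratio of ~20:1.
img.save("out_lossy.jp2", format="JPEG2000",
         irreversible=True, quality_mode="rates", quality_layers=[20])
```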

The basic concept of image compression is to achieve the minimum data size for easy storage and transmission. Lossless image compression introduces no loss into the retrieved image (unlike lossy compression), and lossless techniques are also used inside lossy schemes for better compression. The JPEG image compression standard is DCT based, while JPEG 2000 is based on the DWT. The JPEG 2000 standard is intended to complement, not replace, the current JPEG standards.

