
By:
Sunny Duvani (07MC17)
Satya Bhatt (07MC05)

Harsh Brahmbhatt (07MC06)


JPEG Compression
The JPEG compression algorithm is at its best on photographs and paintings of realistic scenes with smooth variations of tone and color.

For web usage, where the amount of data used for an image is important, JPEG is very popular.

JPEG is also not well suited to files that will undergo multiple edits, as some image quality will usually be lost each time the image is decompressed and recompressed.
 
Steps of JPEG Compression

1. Color Transformation
2. Down Sampling
3. Block Splitting
4. DCT (Discrete Cosine Transform)
5. Quantization & Entropy Encoding
6. Removing Artifacts
 
Color Transformation
First, the image should be converted from RGB into a different color space called Y′CBCR (or, informally, YCbCr). It has three components, Y′, CB, and CR: the Y′ component represents the brightness of a pixel, and the CB and CR components represent the chrominance.

The compression is more efficient because the brightness information, which is more important to the eventual perceptual quality of the image, is confined to a single channel. This more closely corresponds to the perception of color in the human visual system. The color transformation also improves compression by statistical decorrelation.
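As a concrete illustration, here is a minimal sketch of the JFIF-style conversion for a single 8-bit RGB pixel (the function name is ours; the coefficients are the standard JFIF ones):

    def rgb_to_ycbcr(r, g, b):
        """Convert one 8-bit RGB pixel to Y'CbCr (JFIF convention)."""
        y  = 0.299 * r + 0.587 * g + 0.114 * b               # luma (brightness)
        cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b     # blue-difference chroma
        cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b     # red-difference chroma
        return round(y), round(cb), round(cr)

    print(rgb_to_ycbcr(0, 255, 0))   # pure green -> (150, 44, 21)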

 
Down Sampling

Because the human eye sees far less detail in color than in brightness, the resolution of the Cb and Cr (chroma) channels is usually reduced. Common ratios are 4:4:4 (no downsampling), 4:2:2 (chroma halved horizontally), and 4:2:0 (chroma halved both horizontally and vertically).

Block Splitting

After subsampling, each channel is split into 8×8 blocks of pixels. If the dimensions of a channel are not a multiple of 8, the encoder fills the incomplete edge blocks with dummy data, typically by repeating the edge pixels.
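Both steps can be sketched in a few lines of numpy, assuming 4:2:0 subsampling by 2x2 averaging and channel dimensions that divide evenly (the function names are ours):

    import numpy as np

    def subsample_420(chroma):
        """Average each 2x2 neighbourhood: half resolution in both axes (4:2:0)."""
        h, w = chroma.shape
        return chroma.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

    def split_into_blocks(channel):
        """Split a channel into 8x8 blocks, row-major order."""
        h, w = channel.shape
        return [channel[r:r + 8, c:c + 8]
                for r in range(0, h, 8)
                for c in range(0, w, 8)]

    cb = np.random.randint(0, 256, (16, 16)).astype(float)   # toy 16x16 chroma channel
    blocks = split_into_blocks(subsample_420(cb))            # -> one 8x8 block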



     

       
DCT (Discrete Cosine Transform)
Each 8×8 block of each component (Y, Cb, Cr) is converted to a frequency-domain representation, using a normalized, two-dimensional type-II DCT.



For an 8-bit image, each entry in the original block falls in the range [0, 255]. The mid-point of the range (here, the value 128) is subtracted from each entry, so that the modified range is [-128, 127] and the data is centered around zero. This step reduces the dynamic range requirements in the DCT processing that follows.
Written out, the normalized two-dimensional type-II DCT of a level-shifted 8×8 block g is

    G_{u,v} = \frac{1}{4}\,\alpha(u)\,\alpha(v)\sum_{x=0}^{7}\sum_{y=0}^{7} g_{x,y}\,\cos\!\left[\frac{(2x+1)u\pi}{16}\right]\cos\!\left[\frac{(2y+1)v\pi}{16}\right]

where \alpha(0) = 1/\sqrt{2} and \alpha(k) = 1 for k > 0. G_{0,0} is the DC coefficient; the remaining 63 values are the AC coefficients.
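The level shift and the transform itself can be sketched with SciPy, whose orthonormal type-II DCT matches the formula above (a sketch, assuming SciPy is available):

    import numpy as np
    from scipy.fft import dctn

    block = np.random.randint(0, 256, (8, 8)).astype(float)  # toy 8x8 sample block
    shifted = block - 128                                    # centre on zero: [-128, 127]
    coeffs = dctn(shifted, norm='ortho')                     # normalized 2-D type-II DCT
    # coeffs[0, 0] is the DC coefficient; the other 63 are AC coefficients.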


Quantization

The human eye is good at seeing small differences in brightness over a relatively large area, but not so good at distinguishing the exact strength of a high-frequency brightness variation.

This allows one to greatly reduce the amount of information in the high-frequency components.

This is done by simply dividing each component in the frequency domain by a constant for that component, and then rounding to the nearest integer.

This rounding operation is the only lossy operation in the whole process (other than chroma subsampling) if the DCT computation is performed with sufficiently high precision.
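A sketch of this division-and-rounding step, using the example luminance quantization table from Annex K of the JPEG standard (coeffs is the DCT output from the previous sketch):

    import numpy as np

    # Example luminance quantization table (JPEG standard, Annex K).
    Q_LUMA = np.array([
        [16, 11, 10, 16,  24,  40,  51,  61],
        [12, 12, 14, 19,  26,  58,  60,  55],
        [14, 13, 16, 24,  40,  57,  69,  56],
        [14, 17, 22, 29,  51,  87,  80,  62],
        [18, 22, 37, 56,  68, 109, 103,  77],
        [24, 35, 55, 64,  81, 104, 113,  92],
        [49, 64, 78, 87, 103, 121, 120, 101],
        [72, 92, 95, 98, 112, 100, 103,  99],
    ])

    def quantize(coeffs):
        """Divide each coefficient by its table entry and round: the lossy step."""
        return np.round(coeffs / Q_LUMA).astype(int)

    # High-frequency entries have large divisors, so they usually round to zero.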
 
 


Entropy Encoding

Entropy coding is a special form of lossless data compression. It involves arranging the image components in a "zigzag" order, employing a run-length encoding (RLE) algorithm that groups similar frequencies together and inserts length-coding zeros, and then using Huffman coding on what is left.

The previous quantized DC coefficient is used to predict the current quantized DC coefficient. The difference between the two is encoded rather than the actual value.

The encoding of the 63 quantized AC coefficients does not use such prediction differencing.
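The reordering that feeds the coder can be sketched as follows: zigzag scan, DC differencing, and (run, value) pairs for the AC terms; the Huffman stage is omitted, and the helper names and sample values are ours:

    import numpy as np

    q = np.zeros((8, 8), dtype=int)            # stand-in for a quantized block
    q[0, 0], q[0, 1], q[1, 0] = -26, -3, 1     # a few nonzero low-frequency terms

    def zigzag_order():
        """Indices of an 8x8 block in JPEG zigzag scan order."""
        return sorted(((r, c) for r in range(8) for c in range(8)),
                      key=lambda rc: (rc[0] + rc[1],                       # diagonal
                                      rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))

    def run_length_ac(ac):
        """(zero-run, value) pairs for the 63 AC terms; (0, 0) marks end of block."""
        pairs, run = [], 0
        for v in ac:
            if v == 0:
                run += 1
            else:
                pairs.append((run, v))
                run = 0
        pairs.append((0, 0))                   # end-of-block marker
        return pairs

    seq = [q[r, c] for r, c in zigzag_order()]
    dc_diff = seq[0] - 0                       # difference from previous block's DC (here 0)
    ac_pairs = run_length_ac(seq[1:])          # -> [(0, -3), (0, 1), (0, 0)]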

 
  
  

Compression Ratio & Artifacts

The resulting compression ratio can be varied according to need by being more or less aggressive in the divisors used in the quantization phase.

The appropriate level of compression depends on the use to which the image will be put.

Those who use the World Wide Web may be familiar with the irregularities known as compression artifacts that appear in JPEG images, which may take the form of noise around contrasting edges (especially curves and corners) or blocky images, commonly known as 'jaggies'.
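In practice the divisors are controlled through a quality setting. As one concrete example, the IJG libjpeg implementation scales its base quantization tables from a 1-100 quality value roughly like this sketch:

    def scale_quant_table(base, quality):
        """Scale a base quantization table for a 1-100 quality setting (IJG-style)."""
        quality = min(max(quality, 1), 100)
        scale = 5000 // quality if quality < 50 else 200 - 2 * quality
        return [[min(max((v * scale + 50) // 100, 1), 255) for v in row]
                for row in base]

    # Lower quality -> larger divisors -> more coefficients rounded to zero.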
