PROJECT
OC 303
A Project Report
on
Automatic Image Inpainting
by
Akash Khaitan
08DDCS547
FST
THE ICFAI UNIVERSITY
DEHRADUN
2ND SEMESTER-2010-11
CERTIFICATE
This is to certify that the project work has been carried out by Mr Akash Khaitan, I.D. No. 08DDCS547, during the II
Semester, 2010 – 2011. It is also certified that all the modifications suggested
have been incorporated in the report. The project report partially fulfills the
requirements of the course.
Date :
I would like to thank my project guide Prof. Laxman Singh Sayana whose constant guidance,
suggestions and encouragement helped me throughout the work.
I would also like to thank Prof. Ranjan Mishra, Prof. Rashid Ansari and Prof. Sudeepto
Bhatacharya for their help in understanding some of the concepts.
I would also like to thank my family and friends, who have been a source of encouragement and
inspiration throughout the duration of the project. I would like to thank the entire CSE family for
making my stay at ICFAI University a memorable one.
Table of Contents

Abstract
1 Introduction
2.2 Pixel
3 Inpainting Techniques
5 Results
8 References
Abstract
The project on Automatic Image Inpainting removes unwanted objects from an image once the
user selects the object, and thus reduces the manual task. It uses the idea of interpolating the
pixels to be removed from their neighborhood pixels. The entire work has been implemented
and tested in Java, as it provides appropriate image libraries for processing an image.
1. Introduction
Image inpainting provides a means to restore damaged regions of an image so that the
image looks complete and natural after the inpainting process. Inpainting refers to the
restoration of cracks and other defects in works of art, for which a wide variety of materials
and techniques are used.
Automatic/digital inpainting is used to restore old photographs to their original condition.
The purpose of image inpainting is the removal of damaged portions of a scratched image by
completing the area with the surrounding (neighboring) pixels. The techniques used include the
analysis and use of pixel properties in the spatial and frequency domains.
Image inpainting techniques are also used for object removal (or image completion) in
symmetrical images.
2. Image Processing Basics
In order to understand image inpainting clearly, one must go through this section, which
covers the basic ideas of image processing required for image inpainting.
The projection formed by the camera is a two-dimensional, time-dependent, continuous
distribution of light energy.
In order to convert this continuous image into a digital image, three steps are necessary:
1. The continuous light distribution must be spatially sampled.
2. The resulting function must be sampled in the time domain to obtain a single still image.
3. The resulting values must be quantized to a finite range of integers so that they can be
represented digitally.
2.2 Pixel
In digital imaging, a pixel (or picture element) is a single point in a raster image. The pixel is
the smallest addressable screen element; it is the smallest unit of picture that can be
controlled. Each pixel has its own address. The address of a pixel corresponds to its
coordinates. Pixels are normally arranged in a two-dimensional grid, and are often
represented using dots or squares. Each pixel is a sample of an original image; more samples
typically provide more accurate representations of the original. The intensity of each pixel is
variable. In color image systems, a color is typically represented by three or four component
intensities such as red, green, and blue, or cyan, magenta, yellow, and black.
Fig 2.2
An image that is 2048 pixels in width and 1536 pixels in height has a total of 2048×1536 =
3,145,728 pixels, or 3.1 megapixels. One could refer to it as 2048 by 1536 or as a 3.1-megapixel
image.
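In Java, which the project uses throughout, the pixel grid can be examined directly through `BufferedImage`; the following is a minimal sketch (the class name `PixelDemo` is illustrative):

```java
import java.awt.image.BufferedImage;

public class PixelDemo {
    // Unpack a 24-bit colour value into its red, green, blue component intensities.
    static int[] unpack(int rgb) {
        return new int[] { (rgb >> 16) & 0xFF, (rgb >> 8) & 0xFF, rgb & 0xFF };
    }

    public static void main(String[] args) {
        // A tiny raster image; every pixel is addressable by its (x, y) coordinates.
        BufferedImage img = new BufferedImage(4, 3, BufferedImage.TYPE_INT_RGB);
        img.setRGB(2, 1, 0xFF0000); // pure red at column 2, row 1

        int[] c = unpack(img.getRGB(2, 1) & 0xFFFFFF);
        System.out.println(c[0] + " " + c[1] + " " + c[2]); // 255 0 0

        // Pixel count of a 2048x1536 image: 3,145,728 pixels (~3.1 megapixels).
        System.out.println(2048L * 1536L); // 3145728
    }
}
```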
2.3 Image Types
Bit Depth   Colours Available
1-bit       Black and white
2-bit       4 colours
4-bit       16 colours
The number of colours in an image is determined by the number of bits per pixel; the
formula is 2^n, where n is the number of bits.
Each red, green, and blue value from 0 – 255 takes up 8 bits, so the total amount of space
needed to define the colour of each pixel is 24 bits.
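The 2^n relationship and the 24-bit packing can be checked with a few lines of Java (a sketch; the class and method names are illustrative):

```java
public class BitDepthDemo {
    // Number of distinct colours expressible with n bits: 2^n.
    static long coloursFor(int bits) {
        return 1L << bits;
    }

    // Pack three 8-bit components (0-255) into one 24-bit colour value.
    static int packRGB(int r, int g, int b) {
        return (r << 16) | (g << 8) | b;
    }

    public static void main(String[] args) {
        System.out.println(coloursFor(1));  // 2  (black and white)
        System.out.println(coloursFor(4));  // 16
        System.out.println(coloursFor(24)); // 16777216 ("true colour")
        System.out.println(Integer.toHexString(packRGB(255, 0, 0))); // ff0000
    }
}
```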
2.4 Point Operations
Point operations modify the pixels of an image independently of the neighboring pixels:
each output pixel depends only on the corresponding input pixel. They can, for example,
pick out a particular pixel on the basis of its colour. The main reason for discussing point
operations here is that, during inpainting, the image coordinates having a particular
(marker) colour must be selected, as described in a later chapter.
Performing such an operation on each pixel gives a resultant image with the required
modification applied.
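As a sketch of such a point operation (class and method names are illustrative), the following builds a mask of all pixels carrying a given marker colour; the same kind of colour-based selection appears in the project code in Chapter 4, where the marker value 16646144 equals 0xFE0000:

```java
import java.awt.image.BufferedImage;

public class PointOpDemo {
    // A point operation: each output pixel depends only on the same input
    // pixel. Here we build a mask that is white wherever the input pixel
    // matches a given marker colour, and black elsewhere.
    static BufferedImage maskForColour(BufferedImage src, int marker) {
        BufferedImage mask = new BufferedImage(src.getWidth(), src.getHeight(),
                BufferedImage.TYPE_INT_RGB);
        for (int y = 0; y < src.getHeight(); y++)
            for (int x = 0; x < src.getWidth(); x++) {
                int rgb = src.getRGB(x, y) & 0xFFFFFF;
                mask.setRGB(x, y, rgb == marker ? 0xFFFFFF : 0x000000);
            }
        return mask;
    }

    public static void main(String[] args) {
        BufferedImage img = new BufferedImage(2, 1, BufferedImage.TYPE_INT_RGB);
        img.setRGB(0, 0, 0xFE0000); // marker colour (16646144)
        img.setRGB(1, 0, 0x123456); // ordinary image content
        BufferedImage m = maskForColour(img, 0xFE0000);
        System.out.println(Integer.toHexString(m.getRGB(0, 0) & 0xFFFFFF)); // ffffff
        System.out.println(Integer.toHexString(m.getRGB(1, 0) & 0xFFFFFF)); // 0
    }
}
```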
Convolution is a common image processing technique that changes the intensity of a pixel
to reflect the intensities of the surrounding pixels. A common use of convolution is to create
image filters: using convolution, you can get popular image effects like blur, sharpen, and
edge detection.
The height and width of the kernel do not have to be the same, though they must
both be odd numbers. The numbers inside the kernel are what impact the
overall effect of the convolution: the kernel (or, more specifically, the values held
within the kernel) determines how to transform the pixels of the
original image into the pixels of the processed image. Fig 2.4 Kernel
Convolution is a series of operations that alter pixel intensities depending on the intensities of
neighboring pixels. The kernel provides the actual numbers that are used in those operations.
Using kernels to perform convolutions is known as kernel convolution.
Convolutions are per-pixel operations—the same arithmetic is repeated for every pixel in the
image. Bigger images therefore require more convolution arithmetic than the same operation
on a smaller image. A kernel can be thought of as a two-dimensional grid of numbers that
passes over each pixel of an image in sequence, performing calculations along the way. Since
images can also be thought of as two-dimensional grids of numbers, applying a kernel to an
image can be visualized as a small grid (the kernel) moving across a substantially larger grid
(the image).
The numbers in the kernel represent the amount by which to multiply the number underneath
it. The number underneath represents the intensity of the pixel over which the kernel element
is hovering. During convolution, the center of the kernel passes over each pixel in the image.
The process multiplies each number in the kernel by the pixel intensity value directly
underneath it. This should result in as many products as there are numbers in the kernel (per
pixel). The final step of the process sums all of the products together and divides them by the
number of values in the kernel; the result becomes the new intensity of the pixel that was
directly under the center of the kernel.
Even though the kernel overlaps several different pixels (or in some cases, no pixels at all),
the only pixel that it ultimately changes is the source pixel underneath the center element of
the kernel. The sum of all the multiplications between the kernel and image is called the
weighted sum. Since replacing a pixel with the weighted sum of its neighboring pixels can
frequently result in much larger pixel intensity (and a brighter overall image), dividing the
weighted sum can scale back the intensity of the effect and ensure that the initial brightness
of the image is maintained. This procedure is called normalization. The optionally divided
weighted sum is what the value of the center pixel becomes. The kernel repeats this
procedure for each pixel in the source image.
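The per-pixel procedure described above can be sketched for a grayscale image as follows. This is a simplified illustration: border pixels are skipped, and the weighted sum is normalized by dividing by the kernel's own sum, which for the all-ones box kernel equals the number of values in it; names are illustrative:

```java
public class ConvolutionDemo {
    // Convolve a grayscale image with a kernel, dividing the weighted sum
    // by the sum of the kernel weights so overall brightness is preserved
    // (normalization). Assumes the kernel weights sum to a positive value.
    // Border pixels are left unchanged for brevity.
    static int[][] convolve(int[][] img, int[][] kernel) {
        int h = img.length, w = img[0].length;
        int kSum = 0;
        for (int[] row : kernel) for (int v : row) kSum += v;
        int[][] out = new int[h][];
        for (int y = 0; y < h; y++) out[y] = img[y].clone();
        for (int y = 1; y < h - 1; y++)
            for (int x = 1; x < w - 1; x++) {
                int sum = 0;
                // Multiply each kernel value by the pixel directly under it.
                for (int ky = -1; ky <= 1; ky++)
                    for (int kx = -1; kx <= 1; kx++)
                        sum += kernel[ky + 1][kx + 1] * img[y + ky][x + kx];
                out[y][x] = sum / kSum; // normalized weighted sum
            }
        return out;
    }

    public static void main(String[] args) {
        int[][] img = {
            {10, 10, 10},
            {10, 100, 10},
            {10, 10, 10},
        };
        int[][] box = { {1, 1, 1}, {1, 1, 1}, {1, 1, 1} }; // box-blur kernel
        // The bright centre is averaged with its neighbours: (8*10 + 100) / 9 = 20.
        System.out.println(convolve(img, box)[1][1]); // 20
    }
}
```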
The data type used to represent the values in the kernel must match the data used to represent
the pixel values in the image. For example, if the pixel type is float, then the values in the
kernel must also be float values.
3. Inpainting Techniques
The restoration can be done using two approaches: image inpainting and texture synthesis.
The first approach restores missing and damaged parts of an image in such a way that an
observer who does not know the original image cannot detect the difference between the
original and the restored image. It is called inpainting after the process of painting over, or
filling in, holes and cracks in an artwork.
The second approach fills an unknown area of the image using surrounding texture
information or an input texture sample.
This chapter is dedicated to discussing several inpainting techniques along with their
benefits and drawbacks.
Bertalmio et al. [1] introduced a technique for digital inpainting of still images that
produces very impressive results. Their algorithm, however, usually requires several minutes
on current personal computers to inpaint relatively small areas.
3.2 Total Variational (TV) inpainting model
Chan and Shen proposed two image-inpainting algorithms. The Total Variational (TV)
inpainting model [2] uses an Euler-Lagrange equation; inside the inpainting domain the
model simply employs anisotropic diffusion based on the contrast of the isophotes. This
model was designed for inpainting small regions, and while it does a good job of removing
noise, it does not connect broken edges.
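The model can be written out as follows (a sketch based on the standard Chan–Shen formulation, where u^0 is the observed image, Ω the inpainting domain, and E an extended domain containing Ω and a surrounding band):

```latex
J_{\lambda}[u] \;=\; \int_{E} \lvert \nabla u \rvert \, dx
\;+\; \frac{\lambda}{2} \int_{E \setminus \Omega} \lvert u - u^{0} \rvert^{2} \, dx ,
```

whose Euler–Lagrange equation yields the diffusion \( \nabla \cdot \bigl( \nabla u / \lvert \nabla u \rvert \bigr) + \lambda_{e}\,(u^{0} - u) = 0 \), with \( \lambda_{e} = \lambda \) outside Ω and \( \lambda_{e} = 0 \) inside, so that inside the inpainting domain only the diffusion term acts.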
The Curvature-Driven Diffusion (CDD) model [4] extended the TV algorithm to also take
into account geometric information of isophotes when defining the “strength” of the diffusion
process, thus allowing the inpainting to proceed over larger areas. CDD can connect some
broken edges, but the resulting interpolated segments usually look blurry.
Telea [4] proposed a fast marching method (FMM) that can be viewed as a PDE-based
approach without the computational overhead. It is considerably faster and simpler to
implement than other PDE-based methods, yet it produces results very similar to
theirs.
The algorithm propagates an estimate of image smoothness along the image gradient
(which simplifies the computation of the flow): it estimates the value of a pixel to inpaint
as a weighted average over a known image neighborhood of that pixel. The FMM inpaints
the pixels nearest to the known region first, which is similar to the manner in which actual
inpainting is carried out, and it maintains a narrow band of pixels that separates known
pixels from unknown pixels and indicates which pixel will be inpainted next.
The limitation of this method is that it produces blur in the result when the region to be
inpainted is thicker than about 10 pixels.
3.5 Exemplar based methods
Exemplar based methods are becoming increasingly popular for problems such as denoising,
super resolution, texture synthesis, and inpainting. The common theme of these methods is
the use of a set of actual image blocks, extracted either from the image being restored, or
from a separate training set of representative images, as an image model. In the case of
inpainting, the approach is usually to progressively replace missing regions with the best
matching parts of the same image, carefully choosing the order in which the missing region is
filled to minimize artifacts. One can use an inpainting method that represents missing
regions as sparse linear combinations of other regions in the same image (in contrast to
approaches in which sparse representations over standard dictionaries, such as wavelets,
are employed), computed by minimizing a simple functional.
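As a toy illustration of the exemplar idea (not the sparse-combination method itself), the following sketch finds the best-matching patch in the known part of a grayscale image by sum of squared differences; class and method names are illustrative:

```java
public class ExemplarDemo {
    // Naive exemplar search: slide a patch-sized window over a grayscale
    // image and return the top-left corner {row, col} of the window with the
    // smallest sum of squared differences (SSD) to the target patch.
    static int[] bestMatch(int[][] img, int[][] patch) {
        int ph = patch.length, pw = patch[0].length;
        long best = Long.MAX_VALUE;
        int[] pos = {-1, -1};
        for (int y = 0; y + ph <= img.length; y++)
            for (int x = 0; x + pw <= img[0].length; x++) {
                long ssd = 0;
                for (int i = 0; i < ph; i++)
                    for (int j = 0; j < pw; j++) {
                        int d = img[y + i][x + j] - patch[i][j];
                        ssd += (long) d * d;
                    }
                if (ssd < best) { best = ssd; pos = new int[]{y, x}; }
            }
        return pos;
    }

    public static void main(String[] args) {
        int[][] img = {
            {0, 0, 0, 0},
            {0, 9, 8, 0},
            {0, 7, 6, 0},
            {0, 0, 0, 0},
        };
        int[][] patch = { {9, 8}, {7, 6} }; // occurs exactly at (1, 1)
        int[] pos = bestMatch(img, patch);
        System.out.println(pos[0] + "," + pos[1]); // 1,1
    }
}
```

In a real exemplar inpainting method, the search would be restricted to fully known regions and the fill order chosen carefully (e.g. along edges first) to minimize artifacts.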
Images may contain textures with arbitrary spatial discontinuities, but the sampling theorem
constrains the spatial frequency content that can be automatically restored. Thus, for the case
of missing or damaged areas, one can only hope to produce a plausible rather than an exact
reconstruction. Therefore, in order for an inpainting model to be reasonably successful for a
large class of images the regions to be inpainted must be locally small. As the regions
become smaller, simpler models can be used to locally approximate the results produced by
more sophisticated ones. Another important observation used in the design of our algorithm
is that the human visual system can tolerate some amount of blurring in areas not associated
with high-contrast edges. Thus,
let Ω be a small area to be inpainted and let ∂Ω be its boundary. Since Ω is small, the
inpainting procedure can be approximated by an isotropic diffusion process that propagates
information from ∂Ω into Ω. A slightly improved algorithm reconnects edges reaching ∂Ω ,
removes the new edge pixels from Ω (thus splitting Ω into a number of smaller sub-regions),
and then performs the diffusion process as before. The simplest version of the algorithm
consists of initializing Ω by clearing its color information and repeatedly convolving the
region to be inpainted with a diffusion kernel. ∂Ω is a one-pixel thick boundary and the
number of iterations is independently controlled for each inpainting domain by checking if
none of the pixels belonging to the domain had their values changed by more than a certain
threshold during the previous iteration. Alternatively, the user can specify the number of
iterations. As the diffusion process is iterated, the inpainting progresses from ∂Ω into Ω.
Convolving an image with a Gaussian kernel (i.e., computing weighted averages of pixels’
neighborhoods) is equivalent to isotropic diffusion (linear heat equation).The algorithm uses
a weighted average kernel that only considers contributions from the neighbor pixels (i.e., it
has a zero weight at the center of the kernel). The pseudo code of this algorithm and two
diffusion kernels is shown below
In the first kernel, the corner weights are a = 0.073235 and the edge weights are
b = 0.176765; in the second, all eight neighbour weights are c = 0.125. Both kernels have a
zero weight at the center.
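A minimal sketch of this repeated-convolution inpainting, assuming a grayscale image for brevity and using the uniform kernel with c = 0.125 (zero at the centre); the class name `DiffusionInpaint` is illustrative:

```java
public class DiffusionInpaint {
    // Diffusion kernel weight: c = 0.125 for all eight neighbours and zero
    // at the centre, so a pixel is replaced by the plain average of its
    // neighbours.
    static final double C = 0.125;

    // Repeatedly diffuse the masked region of a grayscale image.
    // mask[y][x] is true for pixels inside the inpainting domain (omega).
    // Border pixels are assumed to lie outside the domain.
    static void inpaint(double[][] img, boolean[][] mask, int iterations) {
        for (int it = 0; it < iterations; it++) {
            double[][] next = new double[img.length][];
            for (int y = 0; y < img.length; y++) next[y] = img[y].clone();
            for (int y = 1; y < img.length - 1; y++)
                for (int x = 1; x < img[0].length - 1; x++)
                    if (mask[y][x]) {
                        double sum = 0;
                        for (int dy = -1; dy <= 1; dy++)
                            for (int dx = -1; dx <= 1; dx++)
                                if (dy != 0 || dx != 0)
                                    sum += C * img[y + dy][x + dx];
                        next[y][x] = sum; // zero weight at the centre pixel
                    }
            for (int y = 0; y < img.length; y++) img[y] = next[y];
        }
    }

    public static void main(String[] args) {
        // A flat image with one "damaged" pixel cleared to 0.
        double[][] img = {
            {50, 50, 50},
            {50,  0, 50},
            {50, 50, 50},
        };
        boolean[][] mask = new boolean[3][3];
        mask[1][1] = true;
        inpaint(img, mask, 1);
        System.out.println(img[1][1]); // 50.0 (restored from its neighbours)
    }
}
```

In practice the iteration would continue until no pixel in the domain changes by more than a threshold, as described above.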
Its main limitation is that, being an isotropic diffusion, it blurs high-contrast edges that
cross the region being inpainted.
3.7 Color Match Inpainting
It is basically used for removing scratches from old images by marking the scratch with a
colour that is not used anywhere else in the image.
Its drawback is that it is applicable only to fairly uniform (symmetric) images, where it can
be used to remove scratches and small objects from the image.
4. Source Code
The entire coding is done in Java, as it is platform independent and provides appropriate
image libraries to manipulate images.
package imageprocessing;
import java.awt.Graphics2D;
import java.awt.Graphics;
import java.awt.image.BufferedImage;
import javax.swing.*;
import java.io.File;
import javax.imageio.*;
import java.awt.event.*;
import java.awt.*;
import java.lang.Integer;
4.1 Creating the GUI
public void creategui()
{
dm = Toolkit.getDefaultToolkit().getScreenSize();
f=new JFrame();
jb=new JMenuBar();
jm=new JMenu("File");
jm1=new JMenu("Image"); //Jmenu Image
f.setJMenuBar(jb);
jb.add(jm);
jb.add(jm1);
p1=new JMenuItem("New");
p2=new JMenuItem("Open");
p3=new JMenuItem("Save");
p4=new JMenuItem("Exit");
jm.add(p1);
jm.add(p2);
jm.add(p3);
jm.add(p4);
p2.addActionListener(this);
p3.addActionListener(this);
p4.addActionListener(this);
f.setTitle("Image Processing");
f.setSize((int)dm.getWidth(),(int)dm.getHeight());
f.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
hbar = new JScrollBar(JScrollBar.HORIZONTAL, 30, 20, 0, 300);
vbar = new JScrollBar(JScrollBar.VERTICAL, 30, 40, 0, 300);
f.add(hbar, BorderLayout.SOUTH);
f.add(vbar, BorderLayout.EAST);
hbar.setUnitIncrement(2);
hbar.setBlockIncrement(1);
plugins=new imageplugins("Transformations",f);
filters=new Convolution("Filters",f);
jb.add(plugins);
jb.add(filters);
f.setVisible(true);
}
public BufferedImage loadImage()
{
BufferedImage bimg = null;
try
{
bimg = ImageIO.read(file);
}
catch (Exception e)
{
e.printStackTrace();
}
return bimg;
}
public void saveImage(BufferedImage img, String path)
{
// (reconstructed: the opening of this method was lost in the report; it
// writes the image to disk, matching the call saveImage(loadImg, s1) below)
try
{
ImageIO.write(img, "png", new File(path));
}
catch (Exception e)
{
e.printStackTrace();
}
}
public void actionPerformed(ActionEvent e)
{
if(e.getSource()==p2)
{
zm=0;
jf=new JFileChooser();
int returnVal = jf.showOpenDialog(f);
if(returnVal== JFileChooser.APPROVE_OPTION)
{
file = jf.getSelectedFile();
loadImg=loadImage();
if(panel!=null) //in order to remove the previous content of the panel
{
panel.setVisible(false);
}
int x=(int)(dm.getWidth()/2)-(loadImg.getWidth()/2);
int y=(int)(dm.getHeight()/2)-(loadImg.getHeight()/2);
panel=new JImagePanel(loadImg,x,y);
f.add(panel);
f.setVisible(true);
}
}
if(e.getSource()==p3)
{
jf1=new JFileChooser();
int returnVal1 = jf1.showSaveDialog(f);
if(returnVal1== JFileChooser.APPROVE_OPTION)
{
file1=jf1.getSelectedFile();
String s1=file1.getAbsolutePath();
saveImage(loadImg,s1);
}
}
}
}
package imageprocessing;
import java.awt.Graphics;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import javax.swing.JPanel;
import java.awt.Dimension;
import java.awt.Toolkit;
import javax.swing.JFrame;
package imageprocessing;
import java.awt.Dimension;
import java.awt.Toolkit;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import java.awt.event.MouseEvent;
import java.awt.event.MouseListener;
import java.awt.event.MouseMotionListener;
import java.awt.image.BufferedImage;
import java.awt.image.BufferedImageOp;
import java.awt.image.ConvolveOp;
import java.awt.image.Kernel;
import javax.swing.JFrame;
import javax.swing.JMenu;
import javax.swing.JMenuItem;
image.panel.addMouseListener(this);
}
int m=1;
for(int j=fx;j>((ix+fx)/2);j--)
{
int value =tempimage.getRGB(j+m, i);
tempimage.setRGB(j, i, value);
m=m+2;
}
}
new Loadimage(tempimage,fr);
int x=0;
while(x<2)
{
tempimage=convolveregion(tempimage,elements,ix,iy,fx,fy);
x++;
}
new Loadimage(tempimage,fr);
}
}
val[0]=tempimage.getRGB(j-1, i-1)& 0xFFFFFF; // (reconstructed: lost in the report)
val[1]=tempimage.getRGB(j, i-1)& 0xFFFFFF;   // (reconstructed)
val[2]=tempimage.getRGB(j+1, i-1)& 0xFFFFFF;
val[3]=tempimage.getRGB(j-1, i)& 0xFFFFFF;
val[4]=tempimage.getRGB(j, i)& 0xFFFFFF;
val[5]=tempimage.getRGB(j+1, i)& 0xFFFFFF;
val[6]=tempimage.getRGB(j-1, i+1)& 0xFFFFFF;
val[7]=tempimage.getRGB(j, i+1)& 0xFFFFFF;
val[8]=tempimage.getRGB(j+1, i+1)& 0xFFFFFF;
int k=0;
sum=0;
sum1=0;
sum2=0;
for(k=0;k<9;k++)
{
int red=((val[k]>>16) & 0xFF);
int green= ((val[k]>>8) & 0xFF);
int blue=((val[k]>>0)& 0xFF);
sum = sum+(elements[k]*blue);
sum1=sum1+(elements[k]*green);
sum2=sum2+(elements[k]*red);
}
// repack the three channel sums into an opaque ARGB value
int sum3 = 0xFF000000 | ((int)sum2<<16) | ((int)sum1<<8) | (int)sum;
tempimage1.setRGB(j, i,(int)sum3);
}
for(int i = iy;i<fy;i++)
{
for(int j=ix;j<fx;j++)
{
int value=tempimage1.getRGB(j, i)& 0xFFFFFF;
tempimage.setRGB(j, i,value);
}
}
return tempimage;
}
@Override
public void mouseReleased(MouseEvent arg0) {
BufferedImage tempimagesel = new BufferedImage(tempimage.getWidth(),
tempimage.getHeight(), tempimage.getType());
for(int i = 0;i<tempimage.getHeight();i++)
{
for(int j=0;j<tempimage.getWidth();j++)
{
int value=tempimage.getRGB(j, i)& 0xFFFFFF;
tempimagesel.setRGB(j, i,value);
}
}
if(arg0.getSource()==image.panel && temp1!=1)
{
Dimension dm = Toolkit.getDefaultToolkit().getScreenSize();
fx = arg0.getX() - (int)(dm.getWidth()/2) + (tempimage.getWidth()/2);
fy = arg0.getY() - (int)(dm.getHeight()/2) + (tempimage.getHeight()/2);
for(int i=iy;i<=fy;i++)
{
for(int j=ix;j<=fx;j++)
{
if(i==iy || i==fy || j==ix || j==fx)
tempimagesel.setRGB(j, i, 16646144);
}
}
}
new Loadimage(tempimagesel,fr);
}
@Override
public void mouseDragged(MouseEvent arg0)
{
if(arg0.getSource()==image.panel)
{
temp1=1;
Dimension dm = Toolkit.getDefaultToolkit().getScreenSize();
int x = arg0.getX() - (int)(dm.getWidth()/2) + (tempimage.getWidth()/2);
int y = arg0.getY() - (int)(dm.getHeight()/2) + (tempimage.getHeight()/2);
System.out.println(x + " " + y);
tempimage.setRGB(x, y, 16646144); // 16646144 = 0xFE0000, the red marker colour
}
}
@Override
public void mouseMoved(MouseEvent arg0) {
}
}
5. Results
This chapter presents the results obtained by applying some of the inpainting
techniques.
5.1 Experiment 1
Fig 5.2 Boat Selection
5.2 Experiment 2
Fig 5.5 Tree Selection
5.3 Experiment 3
Fig 5.8 People selected
5.4 Experiment 4
Fig 5.11 Crack Selected
5.5 Experiment 5
Results:
• Scratches Removed
• Accuracy: 100%
Future Improvements
The interpolation technique for a 2D matrix could be used to determine scratches/noise in
the image and remove them automatically:
• The value of the pixel would be interpolated from the nearby values.
• The error range of the interpolated value would be calculated and added to and
subtracted from the interpolated value in order to obtain the limits of the safe region.
• If the original value lies within this range, it is left unchanged; otherwise it is
replaced by the interpolated value.
Artificial intelligence could be combined with image processing in order to produce more
accurately inpainted images.
A convolution matrix could be designed that detects scratches and removes them
automatically.
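The proposed interpolation steps can be sketched in one dimension. This is a hedged illustration of the idea, not the final design: here a pixel is flagged as a scratch when it differs from both horizontal neighbours by more than a tolerance, and is then replaced by their interpolated average; names are illustrative:

```java
public class ScratchDetectDemo {
    // Estimate each interior pixel from its horizontal neighbours and replace
    // it only when it falls outside the "safe region" around that estimate.
    // Detection compares against the original (unmodified) values so that a
    // repaired pixel does not corrupt its neighbours' estimates.
    static void repairRow(int[] row, int tol) {
        int[] orig = row.clone();
        for (int x = 1; x < row.length - 1; x++) {
            int estimate = (orig[x - 1] + orig[x + 1]) / 2; // linear interpolation
            // A scratch pixel differs from BOTH neighbours by more than tol.
            if (Math.abs(orig[x] - orig[x - 1]) > tol
                    && Math.abs(orig[x] - orig[x + 1]) > tol)
                row[x] = estimate;
        }
    }

    public static void main(String[] args) {
        int[] row = {100, 102, 255, 104, 106}; // 255 is a scratch pixel
        repairRow(row, 10);
        System.out.println(java.util.Arrays.toString(row));
        // [100, 102, 103, 104, 106]
    }
}
```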
Discussion and Conclusion
In this report we have described and implemented inpainting algorithms that remove
unwanted objects from an image. Different inpainting algorithms were used for the
same purpose. The element common to all the algorithms is the selection of the region
where the inpainting is to be done.
Algorithms such as shift map removed unwanted objects from symmetrical images such as
sceneries, whereas algorithms such as Oliveira's are applicable for inpainting scratches,
which in practice occupy smaller areas.
The point algorithm was used earlier to remove scratches in small areas; the scratch was
marked with red colour.
References
[1] Bertalmio, M., Sapiro, G., Caselles, V., Ballester, C. Image Inpainting. SIGGRAPH 2000,
pages 417-424.
[2] Chan, T., Shen, J. Mathematical Models for Local Deterministic Inpaintings. UCLA CAM
TR 00-11, March 2000.
[3] Wilhelm Burger, Digital Image Processing: An Algorithmic Introduction Using Java, First
Edition, Springer, 2008.
[4] Manuel M. Oliveira, Brian Bowen, Richard McKenna, Yu-Sung Chang. Fast Digital
Image Inpainting. September 3-5, 2001.