
Photo-Morphing Detection System

Chapter 1
Introduction
1.1 Overview
Digital image manipulation software is now readily available on personal
computers. It is therefore very simple to tamper with any image and make it available
to others. Ensuring digital image integrity has therefore become a major issue.
Watermarking has become a popular technique for copyright enforcement and image
authentication. The aim of this paper is to present an overview of emerging techniques
for detecting whether image tampering has taken place. Compared to the techniques
and protocols for security usually employed to perform this task, the majority of the
proposed methods based on watermarking place a particular emphasis on the notion
of content authentication rather than strict integrity. In this paper, we introduce the
notion of image content authentication and the features required to design an effective
authentication scheme. We present some algorithms, and introduce frequently used
key techniques.

1.2 Brief Description


Pictures persuade people powerfully. Photos communicate more convincingly
than do words alone by evoking an emotional and cognitive arousal that the same
information, without the pictures, does not. A picture is a more effective conveyor of
information than its verbal and written counterparts alone in that the communication
of its message occurs in less time, requires less mental effort on the part of the
observer, incites less counterargument, and creates more confidence in the
conclusions it proffers.
People, including jurors, trust photographs. So do courts. Yet it has never been
easier for photos to misrepresent the truth than it is now. So great is the risk of a
photograph misrepresenting the truth that an international leader in digital imaging
was compelled to declare that photographs, as evidence of reality, are dead. If
photographs are so untrustworthy, why are they still considered the ultimate proof?
Why are aphorisms like "photos don't lie" and "I'll believe it when I see it" so
pervasive?


The answer has to do with how technology has effected a paradigm shift in
the methods used to take pictures. To comprehend how the fidelity of the photograph
has been forfeited, it is first necessary to understand the previous picture paradigm
and juxtapose it with the modern domain of digital images.
A) Traditional, Analog Photography
Traditional photography is an analog science. Light enters through a camera's
lens and the image the camera views is faithfully recorded onto a negative. This
negative is then printed into a recognizable image. Although the images represented
in the photograph have typically been faithful to the image seen by the camera,
photographic trickery and distortion have long existed.
Several variables affect how a photo turns out, all of which can either subtly or
drastically change the story a photo tells. A low-angle shot, for instance, can make a
human subject seem much taller than she is in reality. Spotting, cropping, color
balancing, brightness and contrast adjustment, burning and dodging, and adjusting
exposure time are also very common ways to manipulate the story told by a
photograph.
For decades, books, newspapers, and magazines have used photographs to tell
fantastic and impossible stories, from self-propelled, flying men to proof of the
existence of jackalopes. And yet, analog photographs maintain their integrity because
alterations and manipulations to an analog print have always been very easy to detect.
In fact, by looking for four different types of clues (density, shadows, splice lines, and
image continuity) it becomes simple to identify a fraudulent analog photograph.
Moreover, making alterations to analog photographs is a complicated and costly
ordeal.
When the Federal Rules of Evidence were enacted in 1975, the fidelity of
photographs was presumed, which did not present a problem because the ease with
which modifications and manipulations could be identified made it a very manageable
matter for courts to protect themselves from fraudulent photographs. Since then,
however, digital technology has permeated society, making it more costly for courts
to be cavalier about what images are considered authentic. In fact, today it may be
more accurate to say that a picture is worth a thousand lies.


B) Modern, Digital Photography


Digital photography is the new norm for image capture. Digital cameras, in
contrast to their analog complements, do not store information in a continuous
medium. Instead, information is recorded in discrete bits of information called binary
code, which is a string of ones and zeroes that makes up the storage language of hard
drives, compact discs, computers, and all other digital devices. By using a series of
numbers, instead of the continuous crests and troughs characteristic of analog
information, digital image manipulation is much easier, cheaper, and infinitely more
difficult to detect than an analog alteration.
The main aim of the project is to provide software that helps detect
manipulation in a photo. Most digital cameras employ an image sensor with a
color filter array (Fig. 1.1). The process of demosaicing interpolates
the raw image to produce, at each pixel, an estimate for each color channel. With
proper analysis, traces of demosaicing show up as a peak in an analysis signal.
The presence of demosaicing indicates that the image came from a
digital camera rather than being generated by a computer.

Fig. 1.1 Digital Camera's Image Sensor Color Filter Array
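As a rough illustration of how such an analysis signal can be formed, the following sketch (not the project's actual code; the simple averaging interpolation and all names are assumptions) simulates one demosaiced scan line and measures how interpolation changes the variance of a high-pass residue at alternating pixel positions:

```java
import java.util.Random;

/** Illustrative sketch: how periodic interpolation (demosaicing) leaves a
 *  detectable variance pattern in a high-pass version of a scan line.
 *  All names and the simple averaging interpolation are assumptions. */
public class DemosaicTrace {

    /** High-pass filter: each sample minus the average of its neighbors. */
    static double[] highPass(double[] row) {
        double[] out = new double[row.length];
        for (int i = 1; i < row.length - 1; i++) {
            out[i] = row[i] - 0.5 * (row[i - 1] + row[i + 1]);
        }
        return out;
    }

    /** Variance of the high-pass signal at each position modulo the pattern period. */
    static double[] positionalVariance(double[] hp, int period) {
        double[] sum = new double[period];
        double[] sumSq = new double[period];
        int[] n = new int[period];
        for (int i = 0; i < hp.length; i++) {
            int p = i % period;
            sum[p] += hp[i];
            sumSq[p] += hp[i] * hp[i];
            n[p]++;
        }
        double[] var = new double[period];
        for (int p = 0; p < period; p++) {
            double mean = sum[p] / n[p];
            var[p] = sumSq[p] / n[p] - mean * mean;
        }
        return var;
    }

    /** Ratio of largest to smallest positional variance; a ratio far above 1
     *  suggests periodic interpolation, i.e. a demosaiced (camera) image. */
    static double peakRatio(double[] var) {
        return Math.max(var[0], var[1]) / Math.max(1e-12, Math.min(var[0], var[1]));
    }

    public static void main(String[] args) {
        Random rnd = new Random(42);
        // Simulate a demosaiced row: even pixels sensed, odd pixels interpolated.
        double[] row = new double[256];
        for (int i = 0; i < row.length; i += 2) row[i] = rnd.nextDouble() * 255;
        for (int i = 1; i < row.length - 1; i += 2) row[i] = 0.5 * (row[i - 1] + row[i + 1]);

        double[] var = positionalVariance(highPass(row), 2);
        System.out.println("variance ratio = " + peakRatio(var));
    }
}
```

On a line whose odd pixels were interpolated, the high-pass residue is near zero at those positions, so the variance ratio between the two positions becomes very large; on an uninterpolated line it stays near 1.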


1.3 Problem Definition


Existing methods for image authentication treat all types of manipulation
equally (i.e., as unacceptable). However, some applications demand techniques that
can distinguish acceptable manipulations (e.g., compression) from malicious ones.
It is hard to overestimate the importance of photographic evidence in today's
world. Political, legal and business users rely on images captured with modern digital
cameras as the basis for important decisions. The credibility of such evidence thus becomes
vital.
The Impact of Fake Photographic Evidence:
Some of that evidence has been proven to be fake. Manipulated images have
been used to make false political statements in more than one case. ElcomSoft has
published a brief abstract on some of the world's most famous fakes that made an
impact on public opinion, terminated careers, and caused loss of reputation.


Chapter 2
Literature Survey
2.1 Background
If we consider a digital image to be merely an ordinary bit stream on which no
modification is allowed, then there is not much difference between this problem and
other data cryptography problems. Two methods have been suggested for achieving
the authenticity of digital images: having a digital camera sign the image using a
digital signature, or embedding a secret code in the image. The first method uses an
encrypted digital "signature" which is generated in the capturing device. A digital
signature is based on the method of Public Key Encryption. A private key is used to
encrypt a hashed version of the image. This encrypted message is called the signature
of the image, and it provides a way to ensure that this signature cannot be forged. This
signature then travels with the image. The authentication process of this image needs
an associated public key to decrypt the signature. The image received for
authentication is hashed and compared to the hash recovered from the signature. If they match,
then the received image is authenticated.
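The sign-and-verify flow just described can be sketched with the JDK's java.security API (a minimal sketch under assumed names; SHA256withRSA stands in for whatever digest and cipher a real capture device would use):

```java
import java.security.GeneralSecurityException;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.PrivateKey;
import java.security.PublicKey;
import java.security.Signature;

/** Sketch of the sign-and-verify flow using the JDK's Signature API,
 *  which hashes the data and signs the digest in one step. */
public class ImageSignatureDemo {

    /** Hash the image bytes and sign the digest with the private key. */
    static byte[] sign(byte[] imageBytes, PrivateKey priv) throws GeneralSecurityException {
        Signature s = Signature.getInstance("SHA256withRSA");
        s.initSign(priv);
        s.update(imageBytes);
        return s.sign();
    }

    /** Re-hash the received image and check it against the signature. */
    static boolean verify(byte[] imageBytes, byte[] sig, PublicKey pub) throws GeneralSecurityException {
        Signature s = Signature.getInstance("SHA256withRSA");
        s.initVerify(pub);
        s.update(imageBytes);
        return s.verify(sig);
    }

    public static void main(String[] args) throws Exception {
        KeyPair kp = KeyPairGenerator.getInstance("RSA").generateKeyPair();
        byte[] image = "raw pixel data stands in here".getBytes();

        byte[] sig = sign(image, kp.getPrivate());
        System.out.println("authentic: " + verify(image, sig, kp.getPublic()));

        image[0] ^= 1;   // flip a single bit: one changed pixel
        System.out.println("tampered:  " + verify(image, sig, kp.getPublic()));
    }
}
```

Flipping even one bit of the image changes the hash, so verification fails — which is exactly the strict, every-pixel notion of integrity discussed next.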
The above methods have a clear drawback: authenticity is not preserved
unless every pixel of the image is unchanged.
There are several possible approaches for authenticating the source of a digital
image.
A) An Active Approach for Manipulation Detection:
An image can be authenticated by digital watermarking. Various watermarking
techniques have been proposed in recent years, which can be used not only for
authentication but also as evidence for tamper detection. Wang et al.
and Lin et al. both embedded watermarks consisting of authentication data and
recovery data into image blocks for future image tamper detection and recovery.
The drawback of watermarking techniques is that one must embed a watermark
into the image first. Also a watermark must be inserted at the time of recording, which
would limit this approach to specially equipped digital cameras. Many other
techniques that work in the absence of any digital watermark or signature have been
proposed.


B) Passive Approach for Manipulation Detection:


In contrast to approaches such as active digital watermarking and
Steganography, passive techniques for image manipulation detection are carried out in
the absence of any watermark or signature. These techniques work on the assumption
that although digital forgeries may leave no visual clues that indicate tampering, they
may alter the underlying statistics of an image. The set of image forensic tools for
passive or blind approach for manipulation detection can be roughly categorized as
pixel-based techniques, format-based techniques, camera-based techniques, and
geometry-based techniques.


Chapter 3
Software Requirements Specification
A requirements specification for a software system is a complete description
of the behavior of a system to be developed and may include a set of use cases that
describe interactions the users will have with the software. In addition, it contains
non-functional requirements. Non-functional requirements impose constraints on the
design or implementation (such as performance engineering requirements, quality
standards, or design constraints).
The software requirements specification document lists all the requirements
necessary for project development. To derive the requirements, we need a clear and
thorough understanding of the products to be developed. This document is prepared after
detailed communication with the project team and the customer.

3.1. Introduction
The field of computer graphics is rapidly maturing to the point where human
subjects have difficulty distinguishing photorealistic computer generated images
(PRCG) from photographic images (PIM). As evidence of the proliferation of
computer generated imagery, one need look no further than Hollywood. According to
Wikipedia, the first feature-length computer animated film was Toy Story, in 1995. In
2007, a total of 14 computer animated films were released, several with stunningly
realistic imagery. In addition to computer animated films, computer graphics are
routinely used to create imagery in live action motion pictures that would otherwise
be nearly impossible to film. Partly because of the success of computer animation in
popular culture, it is well known by the general public that images can be manipulated
and are not necessarily a historical record of an actual event. When viewing movies
for entertainment, the audience is usually a willing participant when fooled into
believing computer generated images represent a fictional version of reality.
However, in other situations, it is extremely important to distinguish between PRCG
and PIM. In the mass media, there have been embarrassing instances of manipulated
images being presented as if they represent photographically captured events.


In legal situations, where photographs are used as evidence, it is crucial to
understand whether the image is authentic or forged (either computer generated or
altered). Furthermore, in the intelligence community, it is of vital importance to
establish the origin of an image.
3.1.1 Purpose
There are several possible approaches for authenticating the source of a digital
image. With active watermarking an image is altered to carry an authentication
message by the image capture device. At a later time, the message can be extracted to
verify the source of the image. Unfortunately, this method requires coordination
between the insertion and extraction of the watermark. In contrast to the active
approach, statistical methods are also used to characterize the difference between
PRCG and PIM. For example, in one approach a set of wavelet features is extracted from the
images to form a statistical model of PRCG and PIM, and classification is performed
with standard machine learning techniques. Other work has shown that geometric and physical
features are also effective for classifying between PRCG and PIM. In essence, both of
these approaches are effective because of the imperfection of state-of-the-art
computer graphics. For example, it has been noted that PRCG contain unusually sharp
edges and occlusion boundaries. A reasonable explanation for this is that the
imperfections such as dirt, smudges, and nicks that are pervasive in real scenes are
difficult to simulate. It is far easier to construct a computer graphic of a gleaming
new office than the image of that office after a decade of wear. In any case, as the
field of computer graphics matures with more realistic modeling of scene detail and
more realistic lighting models, it seems reasonable to assume that the statistical
differences between real scenes and computer generated scenes will diminish.
Meanwhile, researchers have recently shown that when an image is resampled
through interpolation, statistical traces of resampling are embedded in the image
signal itself. In one method, the signature is recovered by applying a Laplacian operator to the
image. The Laplacian is shown to have a higher variance at positions corresponding to
pixel locations in the original uninterpolated image, and this pattern is recovered with
Fourier analysis. Similarly, the EM algorithm along with Fourier analysis has been used
to recover the correlations between neighboring pixels that are introduced through
interpolation.
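A minimal one-dimensional sketch of this idea (illustrative names only; a single signal row stands in for an image) upsamples a signal by a factor of two, applies the Laplacian, and checks for a Fourier peak at half the sampling frequency:

```java
import java.util.Random;

/** One-dimensional sketch: a 2x linearly upsampled signal has a Laplacian
 *  whose energy alternates with period 2, visible as a DFT peak at the
 *  half-sampling-frequency bin. Names are illustrative. */
public class ResamplingTrace {

    /** Discrete Laplacian (second difference). */
    static double[] laplacian(double[] x) {
        double[] out = new double[x.length];
        for (int i = 1; i < x.length - 1; i++) {
            out[i] = x[i - 1] - 2 * x[i] + x[i + 1];
        }
        return out;
    }

    /** Magnitude of the DFT of the squared Laplacian at bin k (naive sum). */
    static double dftMagnitude(double[] lap, int k) {
        double re = 0, im = 0;
        int n = lap.length;
        for (int i = 0; i < n; i++) {
            double v = lap[i] * lap[i];   // energy, not sign, carries the period
            re += v * Math.cos(2 * Math.PI * k * i / n);
            im -= v * Math.sin(2 * Math.PI * k * i / n);
        }
        return Math.sqrt(re * re + im * im);
    }

    public static void main(String[] args) {
        Random rnd = new Random(7);
        int n = 512;
        double[] up = new double[n];
        // 2x linear upsampling: even samples original, odd samples interpolated.
        for (int i = 0; i < n; i += 2) up[i] = rnd.nextDouble();
        for (int i = 1; i < n - 1; i += 2) up[i] = 0.5 * (up[i - 1] + up[i + 1]);

        double[] lap = laplacian(up);
        double peak = dftMagnitude(lap, n / 2);   // half the sampling frequency
        double ref = dftMagnitude(lap, n / 3);    // an unrelated bin for comparison
        System.out.println("peak/reference = " + (peak / ref));
    }
}
```

Because linear interpolation makes the Laplacian exactly zero at the interpolated samples, the squared Laplacian carries a strong period-2 component, and the half-frequency bin dominates any unrelated bin.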


In addition, because a forgery is generally created by resampling an object and
inserting it into a target image, this approach has been shown to be useful for
detecting candidate forged image regions and is robust to JPEG compression.
Other researchers have focused on matching images to specific digital camera
models using camera-model-specific properties of demosaicing. Further, some authors
exploit image sensor imperfections to match images to specific cameras. These
approaches demonstrate the utility of exploiting the natural watermarks that are
inserted into images as a result of necessary image processing (in the case of
demosaicing) or practical hardware issues (in the case of sensor imperfections). Other
authors use the EM algorithm, this time for finding pixels correlated with neighbors
and for estimating the coefficients of demosaicing. Based on the estimated
coefficients, the image is classified into one of seven bins based on the technique used
for demosaicing. Further, the probability maps can be used to suggest local tampering.
This work is based mostly on simulated demosaicing without the nonlinearities
associated with post-processing.
Our contributions are the following: we describe a novel approach for
distinguishing between photorealistic computer graphic images and photographic
images captured with a digital camera based on the idea that photographic images will
contain traces of demosaicing. We recognize that finding the actual demosaicing
parameters is not necessary for distinguishing between photorealistic computer
graphics and photographic images. We achieve the highest reported accuracy on a
standard test set for distinguishing between photographic images and photorealistic
computer graphics by detecting traces of demosaicing. We demonstrate robustness by
working only with images captured and processed with consumer-grade digital
cameras, including the associated JPEG compression. Further, we extend our
algorithm to examine images locally, accurately detecting forged regions in otherwise
natural images.


3.1.2 Intended Audience and Reading Suggestion


1. Law Enforcement:
To establish pictures as unmanipulated evidence for legal use, the system uses a
combination of encryption codes, product keys and hardware security, making image
tampering virtually impossible.
2. Government Agencies:
Due to the high level of security, it is easy to ensure that each picture that
has been taken using the Image Authentication function is unique.
3. Media-Press:
Prevent photographers from manipulating their pictures, or adding or removing
information in the pictures or in the metadata.
4. Insurance Companies:
Use the authenticated pictures as proofs for an insurance case where date, time
and the picture itself are crucial information.
5. Police:
The criminal justice community first began using digital imaging in the early
1990s. In 1997, the International Association for Identification recognized in an
official declaration that electronic/digital imaging is a scientifically valid and proven
technology for recording, enhancing, and printing images.
At first, police agencies were disappointed in the caliber of digital images; however,
as technology improved the quality and cost of the cameras, more and more film
cameras were replaced by their digital counterparts. Since then, digital cameras have
become so pervasive in law enforcement that they are the preferred means of photo
capture in nearly every major law enforcement agency in this country. This has
implications in the legal realm because police officers often submit photographs as
evidence in court.
6. Lawyers:
It should come as no surprise that lawyers have embraced digital technology.
Increasing numbers of attorneys are lending support and illustrations to their
arguments by using digital photographs. This is true in both judicial and
administrative settings. Whereas a generation ago an attorney was likely to blow up a
thirty-five millimeter picture for emphasis in the courtroom, now he is more likely to
present an enhanced digital photograph to achieve the same, or arguably better, effect.


3.1.3 Project Scope


Today many existing systems support only a particular image format, which
makes image manipulation detection difficult for the user. Our project
overcomes this limitation and supports three image file formats: JPEG, GIF,
and BMP. The detection of manipulation does not require the original image
for authentication; only the image under test is needed as input.
The in-camera processing (rather than the image content) distinguishes
digital camera photographs from computer graphics. Our results show high reliability
on a standard test set of JPEG compressed images from consumer digital cameras.
Further, we show the application of these ideas for accurately localizing forged
regions within digital camera images. Demosaicing acts as a type of passive
watermarking that leaves a trace embedded within the image signal. When traces of
demosaicing are detected, we surmise that the image is a photographic (rather than
computer generated) image. We document the performance of the algorithm on a
standard test set of 1600 images compressed with JPEG compression, and achieve
classification accuracy in the upper nineties. Further, we show the application of the
algorithm for accurately localizing forged image regions.
3.1.4 User classes and characteristics:
There are two main modules in this project:
1. Image Authentication
2. Forgery Detection
The Image Authentication module has four sub-modules, listed below:
1. Apply HP filter
2. Estimate positional variance
3. Apply DFT
4. Peak analysis


3.1.5 Operating environment


The Photo-Morphing Detection System effectively authenticates images.
It supports various image file formats such as JPEG, GIF, and PNG. The system
is easy for the user to operate. When the user wants to authenticate an image, a
response is provided within seconds.
3.1.6 Design and Implementation constraints
Design Constraints:
Morphing detection: Morphing should be easily recognized and resolved.
Image security: The level of image security should be good enough.
Screen resolution: The screen should be clearly visible.
Exceptions: All kinds of exceptions should be handled properly.
General Constraints:
Recognition speed: Images may be captured using a camera or loaded from
storage. This process should be fast enough.
Response: The user should not have to wait long for the result of a request.
Technical Constraints:
The system supports a limited range of image file formats: JPG, GIF, and PNG.
Financial Constraints:
The project has some financial constraints.
3.1.7 Assumptions and Dependencies
1. Some camera manufacturers do not create a thumbnail image, or do not encode
them as a JPEG image. In such cases, we simply assign a value of zero to all of the
thumbnail parameters. Rather than being a limitation, we consider the lack of a
thumbnail as a characteristic property of a camera.
2. Process is applicable to JPEG, GIF, BMP format images.
The following are assumptions of the Photo-Morphing Detection System:
1. The JVM is available on the current OS.

3.2 System Features


Functional Description:
The field of computer graphics is rapidly maturing to the point where human
subjects have difficulty distinguishing photorealistic computer generated images
(PRCG) from photographic images (PIM). As evidence of the proliferation of
computer generated imagery, one need look no further than Hollywood. It has been shown
that geometric and physical features are effective for classifying between PRCG
and PIM. In essence, such approaches are effective because of the imperfection of
state-of-the-art computer graphics. For example, it has been noted that
PRCG contain unusually sharp edges and occlusion boundaries. A reasonable
explanation for this is that imperfections such as dirt, smudges, and nicks that are
pervasive in real scenes are difficult to simulate. It is far easier to construct a
computer graphic of a gleaming new office than the image of that office after a
decade of wear. In any case, as the field of computer graphics matures with more
realistic modeling of scene detail and more realistic lighting models, it seems
reasonable to assume that the statistical differences between real scenes and computer
generated scenes will diminish. Partly because of the success of computer animation in
popular culture, it is well known by the general public that images can be manipulated
and are not necessarily a historical record of an actual event. When viewing movies
for entertainment, the audience is usually a willing participant when fooled into
believing computer generated images represent a fictional version of reality.
3.3 External Interface Requirements
3.3.1 User interfaces
The user interface provides various features:
1. Browsing images from external devices
2. Distinguishing actual photographs from digital cameras from computer
generated images
3. Providing space for displaying the images
The reports, options and screen formats will conform to existing Windows
conventions:
a. Professional look and feel
b. Browser testing and support for IE, NN, Mozilla, and Firefox
c. Reports exportable in .XLS, .PDF or any other desirable format
d. Java-based client for User-C & D
e. Customizable color skins

GUI (Graphical User Interface):


The user interface of a use case consists of all dialog windows that are needed
to perform a use case. Message boxes are also considered part of the user interface.
However, we do not necessarily have just one user interface. As new
technologies emerge, we might need to design additional user interfaces for cell
phones or PDAs, for example. User interfaces may differ dramatically in complexity.
A cell phone user interface, for example, is less complex than a full-blown user
interface on a desktop with all the bells and whistles. On a desktop computer we
usually have a graphical interface with multiple windows whereas on a cell phone we
have a Cards-like user interface with no graphics (so far).

As a consequence, we may have more than one user interface for a given use
case. The question now is whether there is a way to describe the user interface in a
device-independent manner.
However, things are not as easy as they look at first. We
realize this when we think about what we need to describe:
It certainly is quite a challenge to describe presentation behavior. There are
many aspects to presentation behavior. Just to give a few examples, we might have
relationships and dependencies between GUI elements, such as "if the user enters a
value in field A then input is disallowed in field E" or "the value in field C must
always be less than the value in field B" or "the value of field D is always the result of
a computation (eg. Price * quantity)". As we describe relationships and dependencies
we become aware that we also describe forms logic.
Also, Universal Interface Technologies, Inc., has developed an XML-based
tag set for user interfaces which it named "User Interface Markup Language"
(UIML). It already does a good job of presentation description and an adequate job
on presentation events description. However, description of behaviour is not part of
their effort at the moment.
Describing a dialog window with UIML directly is quite cumbersome and
error-prone. We would need to learn the language which is not really what we want.
A point-and-click editor (like those that are part of an IDE) would be extremely
helpful. We would then be in the position to "draw" the dialog window right on the
screen just as we are used to doing by using a conventional IDE. Doing this the
modeler would produce UIML by using a generate function. A renderer then takes
the UIML file as input and generates Java code.
If this technology unfolds as expected, we could speed up user interface
design quite significantly. The extract from a process (see below) shows how GUI
prototyping may happen. It is assumed that we edit the generated Java code. However,
this would not be necessary if we were happy with what the GUI-IDE generates.
However, the very reason for doing prototyping at this early stage is to verify
that the customer's requirements are met.
If the customer demands changes we could easily apply the changes in the
specialized IDE, regenerate a UIML file and regenerate executable Java code from
there.


This would not only increase confidence that we design what the customer
really wants but also speed up the analysis and design process quite significantly. We
could regenerate many work products again and again from a single source, and this
would help tremendously in today's iterative and incremental approach.
3.3.2 Hardware Interfaces
The new functionality will run on all standard hardware platforms such as Intel
and Mac. These systems run standard and upgraded Windows and Apple
operating systems.
The system is optimized for PCs with a P4 or AMD 64 processor.
The minimum configuration required:
2.4 GHz processor, 80 GB HDD for installation.
512 MB memory.
3.3.3 Software Interfaces

The various service providers will have different software interfaces to access the
authentication services provided by the system. They can perform their services
independently, as long as they adhere to the policies and standards agreed upon. The
Photo-Morphing Detection System will use a scalable, stable operating system.
JDK 1.6
Java Advanced Imaging
Swing
Tools to be used:
Swing:
Swing is the primary Java GUI widget toolkit. It is part of Oracle's Java
Foundation Classes (JFC), an API for providing a graphical user interface (GUI)
for Java programs.
Swing was developed to provide a more sophisticated set of GUI components
than the earlier Abstract Window Toolkit.
Swing provides a native look and feel that emulates the look and feel of
several platforms, and also supports a pluggable look and feel that allows applications
to have a look and feel unrelated to the underlying platform.

It has more powerful and flexible components than AWT. In addition to
familiar components such as buttons, check boxes and labels, Swing provides several
advanced components such as tabbed panes, scroll panes, trees, tables and lists.
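As a small illustration of the pluggable look and feel described above (a sketch, not part of the project's specified code), the following lists the installed look-and-feel implementations and switches to the cross-platform one:

```java
import javax.swing.UIManager;

/** Sketch of Swing's pluggable look and feel: list the installed
 *  implementations and switch to the cross-platform (Metal) one. */
public class LookAndFeelDemo {
    public static void main(String[] args) throws Exception {
        for (UIManager.LookAndFeelInfo info : UIManager.getInstalledLookAndFeels()) {
            System.out.println(info.getName() + " -> " + info.getClassName());
        }
        // Any installed class name may be passed here; Metal is always present.
        UIManager.setLookAndFeel(UIManager.getCrossPlatformLookAndFeelClassName());
        System.out.println("active: " + UIManager.getLookAndFeel().getName());
    }
}
```

An application would typically make this call once, before constructing any windows, so every component it later creates is rendered by the chosen look and feel.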
JDK 1.6:
Java technology allows you to work and play in a secure computing
environment. Java allows you to play online games, chat with people around the
world, calculate your mortgage interest, and view images in 3D, just to name a
few.
Java technology's versatility, efficiency, platform portability, and
security make it the ideal technology for network computing. From laptops to
datacenters, game consoles to scientific supercomputers, cell phones to the
Internet, Java is everywhere!

- 1.1 billion desktops run Java
- 930 million Java Runtime Environment downloads each year
- 3 billion mobile phones run Java
- 31 times more Java phones ship every year than Apple and Android combined
- 100% of all Blu-ray players run Java
- 1.4 billion Java Cards are manufactured each year
- Java powers set-top boxes, printers, Web cams, games, car navigation systems,
lottery terminals, medical devices, parking payment stations, and more.


3.4. Non Functional Requirements


1. Secure access to images.
2. High scalability: the solution should be able to accommodate a high number of
customers and brokers. Both may be geographically distributed.
3. A flexible service-based architecture is highly desirable for future
extension.
4. Better component design to get better performance at peak time.

3.4.1 Performance Requirements


Performance requirements concern the speed of operation, response
requirements (how quickly the system reacts to a user input), and throughput
requirements (how much the system can accomplish within a specified amount of
time).
The system performs its operations quickly. When the user wants to
authenticate an image, the system provides a quick response.
High Speed:
The system should process image positions in parallel to give a quick response
rather than waiting for each process to complete.
Accuracy:
The system should correctly determine which images are photographic and which
are photorealistic computer generated, process them, and display the result.
3.4.2 Safety Requirements
Safety is the protection of the system, and of the images provided to it, from
internal threats.
3.4.3 Security Requirements
Security requirements are included in a system to ensure the integrity of the
system against accidental or malicious damage.


3.4.4 Software Quality Attributes

The Photo-Morphing Detection System is an application-specific system, so it
should satisfy the particular purpose for which it is designed without any further
requirements on the OS on which the system is running.
The photo-morphing detection system is a small and light project, so it does not
need to be installed. All it takes is unpacking the Zip package. It can also be carried
on a USB stick with no additional configuration needed.
Once uninstalled from a computer, the photo-morphing detection system leaves
no trace behind, so there is no way for passwords and other data in the database to
be found later.
Maintainable software should:
1. Encourage in-code documentation (XML docs, Javadoc, etc.)
2. Use a wiki to maintain the documentation
3. Unit tests = good for documenting specifications
4. Comments = good for documenting design decisions
5. Unit tests + comments = documented specifications and design decisions = easily maintainable software
6. Give faster feedback on any changes made to the system
7. Provide better transparency into the changes happening to the system
8. Propagate environmental changes and code changes more rapidly while maintaining control
9. Ease integration issues by dealing with them earlier in smaller chunks

a) Extensibility
Extensibility allows new components to be added to the system or existing ones
to be replaced. This is done without affecting the components that remain in their
original place.
b) Compatibility
Compatibility is the measure of how easily the user can extend one type of
application with another. The presentation tool is compatible with any type of
operating system, which makes its usability highly flexible.


c) Serviceability
In software engineering and hardware engineering, serviceability (also known
as supportability) is one of the aspects of IBM's RASU (Reliability, Availability,
Serviceability, and Usability). It refers to the ability of technical support personnel to
install, configure, and monitor computer products, identify exceptions or faults, debug
or isolate faults to root cause, and provide hardware or software maintenance
in pursuit of solving a problem and restoring the product to service.
Examples of features that facilitate serviceability include:
a. Help desk notification of exceptional events (e.g., by electronic mail or by sending text to a pager)
b. Network monitoring
c. Documentation
d. Event logging / tracing
e. Logging of program state, such as the execution path and/or local and global variables

d) Feasibility Requirements
Feasibility studies aim to objectively and rationally uncover the strengths and
weaknesses of the existing business or proposed venture, the opportunities and threats
presented by the environment, the resources required to carry the project through, and
ultimately the prospects for success. In its simplest terms, the two criteria by which to
judge feasibility are the cost required and the value to be attained. As such, a
well-designed feasibility study should provide a historical background of the business
or project and a description of the product or service.
The feasibility study is conducted after finding out the system's objectives. In
order to carry out the feasibility study, the following steps should be completed:
- The user's requirements
- Interpreting the existing system
- Analysis of the existing system
- Analysis of the modifications that are going to be implemented
After completing all the above points, the feasibility study is carried out by
considering the following types of feasibility:


- Economic feasibility
- Operational feasibility
- Resource feasibility

e) Economic feasibility
Economic analysis is the most frequently used method for evaluating the
effectiveness of a new system. If the data is stored in a database, it is an easy job to
search for the required options at any time. The use of Java and Swing does not
require a very high hardware configuration: the software can run on any system with
the JDK that meets the minimum requirements. Although the software is developed
with a GUI, it is very easy to operate and user-friendly. Hence the software is
technically feasible.
f) Operational feasibility
Operational feasibility is a measure of how well a proposed system solves the
problems and takes advantage of the opportunities identified during scope definition
(i.e., through the previously developed system), and how well it satisfies the
requirements identified in the requirements analysis phase of system development.


g) Resource feasibility
This involves questions such as how much time is available to build the new
system, when it can be built, whether it interferes with normal business operations,
and the type and amount of resources required, along with dependencies.


3.5 Other Requirements

(A) SOFTWARE REQUIREMENTS
- Programming platform: Java (JDK 1.6)
- Software development tool: NetBeans IDE
- Other software tools: Apache Tomcat server

(B) HARDWARE REQUIREMENTS
- System (computer) hardware for development purposes
- Hard disk: 10 GB
- RAM: at least 512 MB

3.6 Analysis Models


3.6.1 Data flow diagram
The data flow diagram (DFD) may be used to represent a system or software at any
level of abstraction. In fact, DFDs may be partitioned into levels that represent
increasing information flow and functional detail. Therefore, the DFD provides a
mechanism for functional modeling as well as information flow modeling. In so
doing, it satisfies the second operational analysis principle (i.e., creating a functional
model).


Notations used in the DFD:
1) The circle represents a process
2) The rectangle represents an external entity
3) Labeled arrows indicate incoming and outgoing data flows

[Flattened data flow diagram: the User enters a user name and password on the Login Form; Verify User displays an error message on failure and proceeds to Home on success. From Home the user clicks Browse, selects and uploads an image, and the image is displayed. Clicking Authenticate Image runs Image Authentication; the authenticated output is displayed, and if the image is photorealistic computer generated, the forged region is displayed. The user can then go Home or Exit.]
Fig.3.1 Data flow diagram


3.6.2 Class Diagrams

A class diagram is a type of static structure diagram that describes the structure of
a system by showing the system's classes, their attributes, and the relationships
between the classes.

Fig.3.2 Class Diagram


3.6.3 State-chart Diagrams
In our system there are four functions, i.e., high-pass filter, variance, peak value,
and discrete Fourier transform, plus the result.
These are represented by five states: S0, S1, S2, S3, S4.

Fig.3.3 State Transition Diagram.



3.7 System Implementation Plan

COCOMO II is a cost model for estimating the number of person-months
required to develop software. The model also estimates the development schedule in
months and produces an effort and schedule distribution by major phases. The
top-level model, Basic COCOMO, is applicable to the large majority of
software projects.
Here is what Boehm says about the model: "Basic COCOMO is good for
rough order of magnitude estimates of software costs, but its accuracy is necessarily
limited because of its lack of factors to account for differences in hardware
constraints, personnel quality and experience, use of modern tools and techniques, and
other project attributes known to have a significant influence on costs."
Human Resource Planning:

Sr. No. | Problem | Solution
1 | The number of hours available for project work is less due to academics. | We will tackle this problem by working on Saturdays and Sundays.
2 | Frequently visiting DIAT is not possible because it takes a lot of time. | We will use communication media such as email to save time.
3 | The working times of the members in the group do not match. | We will distribute the work among the members, so that everyone can work whenever they have time.
4 | Members are inexperienced with the technology. | All the documents related to the technology will be made available to the members; similarly, printed material and different sites will be considered during design.

Table 3.1 Human Resource Planning


Chapter 04
System Design
Systems design is the process of defining the architecture, components,
modules, interfaces, and data for a system to satisfy specified requirements. Systems
design can be seen as the application of systems theory to product development. There
is some overlap with the disciplines of systems analysis, systems architecture, and
systems engineering.
The goal of system design is to produce a model or representation that exhibits
firmness, commodity, and delight. It provides information about the application
domain for the software to be built and describes the internal details of the software.
[Waterfall model: Communication (project initiation, requirements gathering) -> Planning (estimating, scheduling, tracking) -> Modeling (analysis, design) -> Construction (coding, testing) -> Deployment (delivery, support, feedback).]

Fig.4.1. Waterfall Model


4.1 System Architecture

The system architecture for photo-morphing detection is shown below. First a
highpass filter is applied, then the variance of each diagonal is estimated. Fourier
analysis is used to find periodicities in the variance signal, indicating the presence of
demosaicing.

Fig. 4.2 System Architecture for Photo Morphing Detection.


By combining neighboring pixel values, an interpolated pixel value is
generated. The variance is affected by the weights of the neighboring pixels that
produce an interpolated pixel value. This forms a pattern of variances which can be
detected, and it serves as the basic idea for detecting demosaicing. For demonstrating
our approach we consider channels of only a specific color, while the use of any
channel is permitted during actual system implementation.
Fig. 4.2 shows the basic flow of our approach. First the highpass operator h(x, y)
is applied to the image i(x, y), removing the low-frequency information. Where
demosaicing has occurred, the embedded periodicity is also enhanced. The operator is
selected as:
h(x, y): a 3x3 highpass kernel whose center coefficient is -3    (1)
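The exact kernel of Eq. (1) is not legible in this copy, so the sketch below substitutes an illustrative 3x3 Laplacian-style highpass kernel. It demonstrates the key property discussed in the text: a linearly interpolated (locally planar) region filters to zero, so interpolated photosites produce low filter output.

```java
// Highpass filtering of one color channel. The kernel below is an assumed
// Laplacian-style stand-in for h(x, y), not the exact operator of Eq. (1).
public class HighpassFilter {
    static final double[][] H = {
        { 0, -1,  0},
        {-1,  4, -1},
        { 0, -1,  0}
    };

    // f(x, y) = (h * i)(x, y); image borders are left at zero for simplicity.
    public static double[][] convolve(double[][] img) {
        int rows = img.length, cols = img[0].length;
        double[][] out = new double[rows][cols];
        for (int y = 1; y < rows - 1; y++) {
            for (int x = 1; x < cols - 1; x++) {
                double s = 0;
                for (int dy = -1; dy <= 1; dy++) {
                    for (int dx = -1; dx <= 1; dx++) {
                        s += H[dy + 1][dx + 1] * img[y + dy][x + dx];
                    }
                }
                out[y][x] = s; // large at edges and noise, ~0 on smooth regions
            }
        }
        return out;
    }
}
```

On a linear intensity ramp every interior output is exactly zero, which is the behavior the text relies on for linearly interpolated green values.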


The variance of the output of the operator can be found assuming the sensor values
come from a distribution with variance sigma^2. If we again make the simplifying
assumption that the channel is interpolated with linear interpolation, then sigma_o^2
is the variance of the output of applying h(x, y) at positions corresponding to
original photosites in the image sensor: nine pixel values from the original sensor
contribute to the filter output, four with one coefficient, four with another, and
position (x, y) itself with coefficient -3. Likewise, sigma_i^2 corresponds to locations
where the green value is interpolated, assuming the green channel is interpolated
with linear interpolation.
If the missing green values were actually estimated with linear
interpolation and all other image processing operations in the camera are ignored,
then application of the filter h(x, y) yields a value of zero at each pixel location with
an interpolated green value. The choice of h(x, y) was made to maintain a large ratio
sigma_o^2 / sigma_i^2, based on testing with a small number of training images. A
large ratio aids in the detection of the periodic pattern of variances characteristic of
demosaicing.
Our test images are different from images that have only been demosaiced: they
are finished images from real consumer cameras. Demosaicing is performed with
nonlinear filters, and the image processing path contains various operations such as
noise suppression, image enhancement, etc.
Next, an estimate of the variances is calculated using the method called
Maximum Likelihood Estimation (MLE). The statistical variance of the pixel values
along each diagonal is found to compute the MLE estimate of variance. This
projects the image down to a one-dimensional signal v(d), where v(d) represents
the estimate of the variance corresponding to the d-th diagonal:

v(d) = (1 / N_d) * sum over (x, y) on diagonal d of |f(x, y)|

where f(x, y) is the output of the filter and N_d is the number of pixels along the
d-th diagonal, used for normalization.


To find the periodicity in v(d), the DFT is computed to obtain |V(omega)|. A
relatively high peak at frequency omega = pi indicates that the image is not morphed;
it is the characteristic of demosaicing. The peak magnitude at omega = pi is
normalized into a peak ratio s, calculated as:

s = |V(pi)| / k    (2)

where |V(pi)| is the high peak value at frequency pi and k is the median value of the
spectrum, omitting the DC value. Normalizing by k was found to be vital to
differentiate between true images and images containing signals with large energy
across the frequency spectrum.

Fig 4.3 Distinguish between images containing noise with large energy across the
frequency spectrum and true demosaicing
- Mathematical model of the proposed system
Problem description:
Let S be the system such that
S = {I, P, F, V}, where
I represents the set of images: I = {i0, i1, ..., in},
P represents the set of peak values: P = {p1, p2, ..., pn},
F represents the filters: F = {f1, f2, f3, f4},
V represents the variances: V = {v1, v2, ..., vn}.


- Activity description
There is a direct relation between some of the sets:
f(P) -> I, f(F) -> I, f(V) -> P
There are also inter-relationships between elements:
f(P0) -> {i0} ⊆ I, f(F0) -> {i0} ⊆ I, f(V0) -> {P0} ⊆ P.
- Venn diagram
As described in the activity, we draw the following three Venn diagrams,
shown below:

Fig.4.4. ONE-TO ONE MAPPING


Since the function returns a single peak value for each image, each image in our
system provides a single peak value and the loop executes once per image; hence n
images have a one-to-one mapping with n peak values. Likewise, for, say, 1000 images
there will be 1000 peak values.


Fig.4.5 ONE-TO-MANY MAPPING


Since the function returns n variance values for each filter, each filter in our system
provides n values of variance; the loop for a filter therefore executes n times. Hence
the 4 filters have a one-to-many mapping.

Fig.4.6 ONE-TO-MANY MAPPING


In this Venn diagram the function returns 4 filters for each image. Each image is
passed through the 4 filters, so the loop for an image executes 4 times; hence images
have a one-to-many mapping with filters.


4.2 UML Diagrams

4.2.1 Use Case Diagram
Use case diagrams:
- Describe interactions between users and computer systems (both called actors).
- Capture user-visible functions.
- Achieve discrete, measurable goals.
- Are typically used during analysis and design.

[Use case diagram: the USER logs in, selects an image, and views the result; the SERVER applies the high pass filter (graph in the form of a diagonal representation), estimates the variance (mean of absolute values of each diagonal), computes the discrete Fourier transform (periodic graph), and performs peak analysis.]
Fig.4.7 Use Case Diagram


4.2.2 Sequence Diagram
[Sequence diagram: the User logs in to the Photo Morphing Detection System (success/failure), browses and uploads an image, and requests image authentication. The system applies the demosaicing algorithm (high pass filter, positional variance, DFT, peak value), performs peak value analysis, and displays the output. If the image is photorealistic, the user uploads the original image and proceeds to Forgery Detection, which displays the forged region; the user then exits.]

Fig.4.8 Sequence diagram


4.2.3 Activity Diagram
[Activity diagram: get image; if valid, browse image, upload image, authenticate image, and display the output; if the image is photorealistic, display the forged region; otherwise exit.]

Fig.4.9 Activity Diagram


4.2.4 Component Diagram
[Component diagram: Image and NetBeans components feed the Photo Morphing Detection System, which contains Forgery Detection and Image Authentication components.]

Fig. 4.10 Component Diagram


4.2.6 Class Diagram

Fig.4.12 Class Diagram



4.2.7 Deployment Diagram
[Deployment diagram: JDK 1.6.0 and NetBeans nodes host the Photo Morphing Detection System with its Forgery Detection and Image Authentication components.]

Fig.4.13 Deployment Diagram


Chapter 05
Technical Specification
5.1 Technology details used in the project
5.1.1 J2SE SDK
Java Platform, Standard Edition or Java SE (formerly known up to version 5.0
as Java 2 Platform, Standard Edition or J2SE) is the collection of Java programming
language APIs useful to many Java platform programs. The Java Platform, Enterprise
Edition includes all of the classes in Java SE, plus a number that are more
useful to programs running on servers than on workstations.
Starting with J2SE 1.4 (Merlin), the Java SE platform has been
developed under the Java Community Process. JSR 59 was the umbrella specification
for J2SE 1.4, and JSR 176 specified J2SE 5.0 (Tiger). As of 2006, Java SE 6
(Mustang) was being developed under JSR 270.
5.1.2 NetBeans
NetBeans is an integrated development environment (IDE) for developing
primarily with Java, but also with other languages, in particular PHP, C/C++, and
HTML5. It is also an application platform framework for Java desktop applications
and others.
The NetBeans IDE is written in Java and can run on Windows, OS X, Linux,
Solaris, and other platforms supporting a compatible JVM.
The NetBeans Platform allows applications to be developed from a set of
modular software components called modules. Applications based on the NetBeans
Platform (including the NetBeans IDE itself) can be extended by third-party
developers.
NetBeans Platform
The NetBeans Platform is a reusable framework for simplifying the
development of Java Swing desktop applications. The NetBeans IDE bundle for Java
SE contains what is needed to start developing NetBeans plugins and NetBeans
Platform based applications; no additional SDK is required.
Applications can install modules dynamically. Any application can include the
Update Center module to allow users of the application to download digitally signed
upgrades and new features directly into the running application.


Reinstalling an upgrade or a new release does not force users to download the
entire application again.
The platform offers reusable services common to desktop applications,
allowing developers to focus on the logic specific to their application. Among the
features of the platform are:

User interface management (e.g. menus and toolbars)

User settings management

Storage management (saving and loading any kind of data)

Window management

Wizard framework (supports step-by-step dialogs)

NetBeans Visual Library

Integrated development tools

NetBeans IDE is a free, open-source, cross-platform IDE with built-in-support for


Java Programming Language.
NetBeans IDE
NetBeans IDE is an open-source integrated development environment.
It supports development of all Java application types (Java SE (including
JavaFX), Java ME, web, EJB, and mobile applications) out of the box. Among other
features are an Ant-based project system, Maven support, refactoring, and version
control (supporting CVS, Subversion, Mercurial, and Clearcase).
Modularity: all the functions of the IDE are provided by modules. Each module
provides a well-defined function, such as support for the Java language, editing, or
support for the CVS and SVN versioning systems. NetBeans contains all the modules
needed for Java development in a single download, allowing the user to start working
immediately. Modules also allow NetBeans to be extended: new features, such as
support for other programming languages, can be added by installing additional
modules. For instance, Sun Studio, Sun Java Studio Enterprise, and Sun Java Studio
Creator from Sun Microsystems are all based on the NetBeans Platform.
NetBeans Profiler
The NetBeans Profiler is a tool for monitoring Java applications: it helps
developers find memory leaks and optimize speed. Formerly downloaded separately,
it has been integrated into the core IDE since version 6.0.


The Profiler is based on a Sun Laboratories research project that was named
JFluid. That research uncovered specific techniques that can be used to lower the
overhead of profiling a Java application. One of those techniques is dynamic bytecode
instrumentation, which is particularly useful for profiling large Java applications.
Using dynamic bytecode instrumentation and additional algorithms, the NetBeans
Profiler is able to obtain runtime information on applications that are too large or
complex for other profilers. NetBeans also supports Profiling Points, which let you
profile precise points of execution and measure execution time.
NetBeans GUI Builder
Formerly known as project Matisse, the GUI design tool enables developers to
prototype and design Swing GUIs by dragging and positioning GUI components.
The GUI builder has built-in support for JSR 295 (Beans Binding technology), but the
support for JSR 296 (Swing Application Framework) was removed in NetBeans 7.1.
NetBeans JavaScript editor
The NetBeans JavaScript editor provides extended support for JavaScript, Ajax, and
CSS. JavaScript editor features comprise syntax highlighting, refactoring, code
completion for native objects and functions, generation of JavaScript class skeletons,
generation of Ajax callbacks from a template; and automatic browser compatibility
checks.
CSS editor features comprise code completion for styles names, quick
navigation through the navigator panel, displaying the CSS rule declaration in a List
View and file structure in a Tree View, sorting the outline view by name, type or
declaration order (List & Tree), creating rule declarations (Tree only), refactoring a
part of a rule name (Tree only).


Chapter 06
Project Estimate, Schedule and Team Structure
6.1 Estimating Costs
There are two types of costs incurred when using a virtual
operating system approach:
1) The cost of writing the utilities: this is a one-time cost, since these utilities are
independent of any real operating system. The program development costs for the
utilities will be similar to those for any other software system designed for a specific
machine, since the virtual operating system utilities are designed for the virtual
machine.
2) The cost of implementing the virtual machine: this is incurred once for each
different host operating system within the organization. It is important to note that this
is the only cost in moving all personnel and software to the new computing
environment.

6.2 Project Schedule

No. | Milestone name             | Start date | End date
1   | Requirement specification  | 5/7/12     | 20/7/12
2   | Technology familiarization | 1/8/12     | 18/8/12
3   | System setup               | 20/8/12    | 28/8/12
4   | Design                     | 14/1/13    | 31/1/13
5   | GUI Frame                  | 1/2/13     | 15/2/13
6   | Working on the application | 20/2/13    | 29/3/13
7   | Preparing documentation    | 1/4/13     | 15/4/13
Table 6.1 Project schedule


6.3 Project Efforts:


Modules | Member names
Knowledge gathering | All
Software architecture and design | All
Web interface and design | All
Page crawler | All
Report generation | Nilesh Ghatol, Aniket Shirke, Rahul Paigude
Multithreaded cron tasks | All
Unit testing | All
Documentation | Nilesh Ghatol, Rahul Paigude, Aniket Shirke


Table 6.2: Project Efforts

6.4 Project Team

The project has been implemented at the graduation level by computer
engineering students and hence does not have many human resources to count. The
project has been implemented by three members forming a team over the duration
of an entire academic year, along with the guidance of their internal guide, making the
total number of human resources three. The team members are: 1) Nilesh Ghatol,
2) Rahul Paigude, 3) Aniket Shirke.


Chapter 07
Software Implementation
7.1 Introduction
The main aim of the project is to provide software which will help to detect
manipulation in a photo. Most digital cameras employ an image sensor with a
color filter array. The process of demosaicing interpolates the raw image to produce
at each pixel an estimate for each color channel. With proper analysis, traces of
demosaicing are exhibited in the peak of an analysis signal. The presence of
demosaicing indicates the image is from a digital camera rather than generated by a
computer.

7.2 Important modules and algorithms

There are six main modules in this software, namely:
1. Browsing the image.
This is the very first module of the system. After a successful login, we first have
to take from the user the image to be tested. For this, the user can either enter the
complete path of the image or use the file browser to browse for the image on the
computer. After browsing to the image, clicking the Load button displays it.
2. Applying the high-pass filter.
3. Calculating the positional variance.
4. Applying the DFT.
5. Peak value analysis.
6. Detecting forged image regions.
The algorithm shown in Section 4 can be applied locally to detect regions of
an image that have possibly been tampered with. The key observation is that
demosaicing produces periodic correlations in the image signal. When an image is
manipulated, an image piece from another source (it can be from another image or a
computer graphic) is pasted over a portion of the image. In general, this image piece
is resampled to match the geometry of the image. The application of the highpass
filter is the same as previously described.


Estimating the variance becomes a local operation:

v(x, y) = (1/n) * sum over an n-pixel local neighborhood of f(x, y)

where f(x, y) = |h(x, y) * i(x, y)| is the absolute value of the output of applying the
filter h(x, y) to the image i(x, y). The parameter n is the size of the local
neighborhood; by default we use n = 32. At each position (x, y), a local (256-point)
one-dimensional DFT is computed along each row, and the local peak ratio s(x, y) is
computed as described before in Eq. (2).
This equation estimates the variance locally for detecting forged image regions.
7. Displaying the output.
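The local detection step described above can be sketched as a sliding-window peak ratio over a row of |filter output| values. This is an illustrative reconstruction: the text uses n = 32 windows and 256-point DFTs, while the sketch keeps the window length as a parameter.

```java
import java.util.Arrays;

// Local forgery map building block: the peak ratio s(x, y) computed over a
// sliding window of |f| values taken along one row.
public class LocalPeakMap {
    // |DFT| of the window at frequency index k.
    static double mag(double[] w, int k) {
        double re = 0, im = 0;
        for (int d = 0; d < w.length; d++) {
            double ang = -2.0 * Math.PI * k * d / w.length;
            re += w[d] * Math.cos(ang);
            im += w[d] * Math.sin(ang);
        }
        return Math.hypot(re, im);
    }

    // Peak ratio of the n-sample window starting at 'start'; the text uses
    // n = 32 by default, smaller windows work for illustration.
    public static double localPeakRatio(double[] rowAbs, int start, int n) {
        double[] win = Arrays.copyOfRange(rowAbs, start, start + n);
        double[] mags = new double[n - 1];
        for (int k = 1; k < n; k++) mags[k - 1] = mag(win, k); // DC omitted
        double peak = mag(win, n / 2); // frequency pi
        Arrays.sort(mags);
        return peak / Math.max(mags[mags.length / 2], 1e-12);
    }
}
```

Windows whose ratio stays low mark candidate pasted-in regions, since the demosaicing periodicity is absent or destroyed there.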


Chapter 08
Software Testing
8.1 Introduction
Software testing is an investigation conducted to provide stakeholders with
information about the quality of the product or service under test. Software testing can
also provide an objective, independent view of the software to allow the business to
appreciate and understand the risks of software implementation. Test techniques
include, but are not limited to, the process of executing a program or application with
the intent of finding software bugs (errors or other defects).
Software testing can be stated as the process of validating and verifying that a
computer program/application/product:
- meets the requirements that guided its design and development,
- works as expected,
- can be implemented with the same characteristics,
- and satisfies the needs of stakeholders.

Software testing, depending on the testing method employed, can be implemented at


any time in the development process. Traditionally most of the test effort occurs after
the requirements have been defined and the coding process has been completed, but in
the Agile approaches most of the test effort is on-going. As such, the methodology of
the test is governed by the chosen software development methodology.
Different software development models will focus the test effort at different points in
the development process. Newer development models, such as Agile, often employ
test-driven development and place an increased portion of the testing in the hands of
the developer, before it reaches a formal team of testers. In a more traditional model,
most of the test execution occurs after the requirements have been defined and the
coding process has been completed.


8.1.1 Static vs. dynamic testing


There are many approaches to software testing. Reviews, walkthroughs, or
inspections are referred to as static testing, whereas actually executing programmed
code with a given set of test cases is referred to as dynamic testing. Static testing can
be omitted, and unfortunately in practice often is. Dynamic testing takes place when
the program itself is used.
Dynamic testing may begin before the program is 100% complete in order to
test particular sections of code and are applied to discrete functions or modules.
Typical techniques for this are either using stubs/drivers or execution from a debugger
environment.
Static testing involves verification whereas dynamic testing involves
validation. Together they help improve software quality.
8.1.2 White-Box testing
White-box testing (also known as clear box testing, glass box testing,
transparent box testing, and structural testing) tests internal structures or workings of
a program, as opposed to the functionality exposed to the end-user. In white-box
testing an internal perspective of the system, as well as programming skills, are used
to design test cases. The tester chooses inputs to exercise paths through the code and
determine the appropriate outputs. This is analogous to testing nodes in a circuit, e.g.
in-circuit testing (ICT).
While white-box testing can be applied at the unit, integration and system
levels of the software testing process, it is usually done at the unit level. It can test
paths within a unit, paths between units during integration, and between subsystems
during a system-level test. Though this method of test design can uncover many
errors or problems, it might not detect unimplemented parts of the specification or
missing requirements.


Techniques used in white-box testing include:
- API testing (application programming interface): testing of the application using public and private APIs
- Code coverage: creating tests to satisfy some criteria of code coverage (e.g., the test designer can create tests to cause all statements in the program to be executed at least once)
- Fault injection methods: intentionally introducing faults to gauge the efficacy of testing strategies
- Mutation testing methods
- Static testing methods
Code coverage tools can evaluate the completeness of a test suite that was created
with any method, including black-box testing. This allows the software team to
examine parts of a system that are rarely tested and ensures that the most important
function points have been tested. Code coverage as a software metric can be reported
as a percentage for:
- Function coverage, which reports on functions executed
- Statement coverage, which reports on the number of lines executed to complete the test

100% statement coverage ensures that all code paths, or branches (in terms of control
flow), are executed at least once. This is helpful in ensuring correct functionality, but
not sufficient, since the same code may process different inputs correctly or
incorrectly.
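To illustrate that point with a hypothetical example: the method below reaches 100% statement coverage from a single passing test, yet still computes the wrong answer for other inputs.

```java
// Intended behavior: return half of the absolute value of x.
// One test with x = 4 executes every statement (100% statement coverage),
// but the missing negation for negative inputs is never exposed by it.
public class CoverageDemo {
    public static int halfAbs(int x) {
        int half = x / 2; // executed by the single test case x = 4
        return half;      // bug: for x = -4 this returns -2, not 2
    }
}
```

Only adding an input from the uncovered equivalence class (a negative x) reveals the defect, which is why coverage metrics complement, rather than replace, thoughtful test-case design.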
8.1.3 Black-box testing
Black-box testing treats the software as a "black box", examining
functionality without any knowledge of internal implementation. The tester is only
aware of what the software is supposed to do, not how it does it. Black-box testing
methods include: equivalence partitioning, boundary value analysis, all-pairs testing,
state transition tables, decision table testing, fuzz testing, model-based testing, use
case testing, exploratory testing, and specification-based testing.

Specification-based testing aims to test the functionality of software
according to the applicable requirements. This level of testing usually requires
thorough test cases to be provided to the tester, who then can simply verify that for a
given input, the output value (or behavior), either "is" or "is not" the same as the
expected value specified in the test case. Test cases are built around specifications and
requirements, i.e., what the application is supposed to do. It uses external descriptions
of the software, including specifications, requirements, and designs to derive test
cases. These tests can be functional or non-functional, though usually functional.
Specification-based testing may be necessary to assure correct functionality,
but it is insufficient to guard against complex or high-risk situations.
One advantage of the black box technique is that no programming knowledge
is required. Whatever biases the programmers may have had, the tester likely has a
different set and may emphasize different areas of functionality.
On the other hand, black-box testing has been said to be "like a walk in a dark
labyrinth without a flashlight." Because testers do not examine the source code, there are
situations when a tester writes many test cases to check something that could have
been tested by only one test case, or leaves some parts of the program untested.
This method of test can be applied to all levels of software testing: unit,
integration, system and acceptance. It typically comprises most if not all testing at
higher levels, and can also dominate unit testing.
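Two of the black-box methods named above, equivalence partitioning and boundary value analysis, can be sketched with a hypothetical age field that accepts values 18 to 60 (the `AgeValidator` class and the 18–60 range are illustrative assumptions, not part of this system). One representative per partition, plus the values on either side of each boundary, usually gives good coverage with few cases.

```java
public class AgeValidator {
    // Valid partition: 18..60; invalid partitions: below 18 and above 60.
    public static boolean isValid(int age) {
        return age >= 18 && age <= 60;
    }

    public static void main(String[] args) {
        // Boundary values plus one representative from each partition:
        int[] inputs = {17, 18, 40, 60, 61};
        for (int age : inputs) {
            System.out.println(age + " -> " + isValid(age));
        }
    }
}
```

Note that the tester needs only the stated rule ("18 to 60 is valid"), not the implementation, to pick these inputs — which is the defining property of black-box testing.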

8.2 Testing approach

8.2.1 Top-down and bottom-up
Bottom Up Testing is an approach to integrated testing where the lowest level
components (modules, procedures, and functions) are tested first, then integrated and
used to facilitate the testing of higher level components. After the integration testing
of lower level integrated modules, the next level of modules will be formed and can
be used for integration testing. The process is repeated until the components at the top
of the hierarchy are tested. This approach is helpful only when all or most of the
modules of the same development level are ready. This method also helps to
determine the levels of software developed and makes it easier to report testing
progress in the form of a percentage.

Top Down Testing is an approach to integrated testing where the top-level
integrated modules are tested first, and each branch of the module is then tested step
by step until the end of the related module.
In both, method stubs and drivers are used to stand-in for missing components
and are replaced as the levels are completed.
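As a sketch of how a stub stands in for a missing lower-level component during top-down integration (the interface and class names below are hypothetical illustrations, not taken from this project's code):

```java
// The low-level detection module is not yet built; only its interface exists.
interface MorphDetector {
    boolean isMorphed(String imagePath);
}

// Stub: returns a canned answer so the higher-level module can be tested now.
class MorphDetectorStub implements MorphDetector {
    public boolean isMorphed(String imagePath) {
        return imagePath.contains("morphed"); // no real analysis performed
    }
}

public class AuthenticationReport {
    private final MorphDetector detector;

    public AuthenticationReport(MorphDetector detector) {
        this.detector = detector;
    }

    // Higher-level logic under test, exercised against the stub.
    public String verdict(String imagePath) {
        return detector.isMorphed(imagePath) ? "TAMPERED" : "AUTHENTIC";
    }

    public static void main(String[] args) {
        AuthenticationReport report =
                new AuthenticationReport(new MorphDetectorStub());
        System.out.println(report.verdict("morphed_photo.jpg"));
        System.out.println(report.verdict("original.jpg"));
    }
}
```

When the real detector module is complete it replaces the stub without any change to the report module, which is exactly the replacement-as-levels-complete described above.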

8.3 Objectives of testing

8.3.1 Installation testing
An installation test assures that the system is installed correctly and working on
the actual customer's hardware.
8.3.2 Compatibility testing
A common cause of software failure (real or perceived) is a lack of
compatibility with other application software, operating systems (or operating
system versions, old or new), or target environments that differ greatly from the
original (such as a terminal or GUI application intended to be run on the desktop
now being required to become a web application, which must render in a web
browser). For example, in the case of a lack of backward compatibility, this can
occur because the programmers develop and test software only on the latest
version of the target environment, which not all users may be running. This
results in the unintended consequence that the latest work may not function on
earlier versions of the target environment, or on older hardware that earlier
versions of the target environment were capable of using. Sometimes such issues
can be fixed by proactively abstracting operating system functionality into a
separate program module or library.

8.3.3 Regression testing
Regression testing focuses on finding defects after a major code change has
occurred. Specifically, it seeks to uncover software regressions, or old bugs that have
come back. Such regressions occur whenever software functionality that was
previously working correctly stops working as intended. Typically, regressions occur
as an unintended consequence of program changes, when the newly developed part of
the software collides with the previously existing code.
Common methods of regression testing include re-running previously run tests
and checking whether previously fixed faults have re-emerged. The depth of testing
depends on the phase in the release process and the risk of the added features.
Regression tests can range from complete, for changes added late in the release or
deemed to be risky, to very shallow, consisting of positive tests on each feature, if the
changes are early in the release or deemed to be of low risk.
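Pinning a once-failing case permanently in the suite is the simplest form of regression testing. The sketch below assumes a hypothetical earlier bug in which uppercase extensions such as ".JPG" were rejected; the `ImageFileFilter` class is an illustration, not the project's actual code. Re-running the previously failing input on every build guards against the fault re-emerging.

```java
public class ImageFileFilter {
    // Hypothetical earlier bug: uppercase extensions such as ".JPG" were
    // rejected. The fix lower-cases the name; the once-failing input is
    // kept as a regression test and re-run after every change.
    public static boolean isJpeg(String fileName) {
        return fileName.toLowerCase().endsWith(".jpg");
    }

    public static void main(String[] args) {
        System.out.println(isJpeg("photo.JPG")); // the previously failing case
        System.out.println(isJpeg("photo.png")); // still correctly rejected
    }
}
```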
8.3.4 Acceptance testing
Acceptance testing can mean one of two things:
1. A smoke test is used as an acceptance test prior to introducing a new build to
the main testing process, i.e. before integration or regression.
2. Acceptance testing performed by the customer, often in their lab environment
on their own hardware, is known as user acceptance testing (UAT).
Acceptance testing may be performed as part of the hand-off process between
any two phases of development.
8.3.5 Functional vs non-functional testing
Functional testing refers to activities that verify a specific action or function of
the code. These are usually found in the code requirements documentation, although
some development methodologies work from use cases or user stories. Functional
tests tend to answer the question of "can the user do this?" or "does this particular
feature work?"
Non-functional testing refers to aspects of the software that may not be related
to a specific function or user action, such as scalability or other performance, behavior
under certain constraints, or security. Testing will determine the breaking point, the
point at which extremes of scalability or performance lead to unstable execution.
Non-functional requirements tend to be those that reflect the quality of the product,
particularly in the context of the suitability perspective of its users.
8.3.6 Software performance testing
Performance testing is generally executed to determine how a system or subsystem performs in terms of responsiveness and stability under a particular workload.
It can also serve to investigate, measure, validate or verify other quality attributes of
the system, such as scalability, reliability and resource usage.
Load testing is primarily concerned with testing that the system can continue
to operate under a specific load, whether that be large quantities of data or a large
number of users. This is generally referred to as software scalability. When load
testing is performed over an extended period as a non-functional activity, it is often
referred to as endurance testing.
Volume testing is a way to test software functions even when certain
components (for example a file or database) increase radically in size. Stress testing is
a way to test reliability under unexpected or rare workloads.
Stability testing (often referred to as load or endurance testing) checks whether
the software can continue to function well over or beyond an acceptable period.
There is little agreement on what the specific goals of performance testing are.
The terms load testing, performance testing, reliability testing, and volume testing, are
often used interchangeably.
8.3.7 Usability testing
Usability testing is needed to check if the user interface is easy to use and
understand. It is concerned mainly with the use of the application.

8.3.8 Development testing
Development Testing is a software development process that involves
synchronized application of a broad spectrum of defect prevention and detection
strategies in order to reduce software development risks, time, and costs. It is
performed by the software developer or engineer during the construction phase of the
software development lifecycle. Rather than replacing traditional QA focuses, it
augments them. Development Testing aims to eliminate construction errors before code is
promoted to QA; this strategy is intended to increase the quality of the resulting
software as well as the efficiency of the overall development and QA process.

8.4 The testing process

8.4.1 Traditional CMMI or waterfall development model
A common practice of software testing is that testing is performed by an
independent group of testers after the functionality is developed, before it is shipped
to the customer.
This practice often results in the testing phase being used as a project buffer
to compensate for project delays, thereby compromising the time devoted to testing.
Another practice is to start software testing at the same moment the project starts and
it is a continuous process until the project finishes.
8.4.2 Agile or Extreme development model
In contrast, some emerging software disciplines such as extreme programming
and the agile software development movement, adhere to a "test-driven software
development" model.
In this process, unit tests are written first by the software engineers (often
with pair programming in the extreme programming methodology). Of course, these
tests fail initially, as they are expected to. Then, as code is written, it passes
incrementally larger portions of the test suites.
The test suites are continuously updated as new failure conditions and corner
cases are discovered, and they are integrated with any regression tests that are
developed.
Unit tests are maintained along with the rest of the software source code and
generally integrated into the build process (with inherently interactive tests being
relegated to a partially manual build acceptance process). The ultimate goal of this
test process is to achieve continuous integration where software updates can be
published to the public frequently.
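A minimal sketch of this write-the-test-first cycle (the `Median` class is a hypothetical unit, not part of this project): in test-driven development the checks in `main` would be written before `of` existed, fail, and then drive a minimal implementation that makes them pass.

```java
import java.util.Arrays;

public class Median {
    // Implementation written only after the tests below existed (TDD order).
    public static double of(double[] values) {
        double[] sorted = values.clone();
        Arrays.sort(sorted);
        int n = sorted.length;
        return (n % 2 == 1)
                ? sorted[n / 2]
                : (sorted[n / 2 - 1] + sorted[n / 2]) / 2.0;
    }

    public static void main(String[] args) {
        // These checks were the starting point; the method grew to pass them.
        System.out.println(of(new double[]{3, 1, 2}));    // 2.0
        System.out.println(of(new double[]{4, 1, 3, 2})); // 2.5
    }
}
```

As new corner cases are discovered (an empty array, say), a new failing check is added first and the method is extended — which is the continuous updating of the test suite described above.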
8.4.3 A sample testing cycle
Although variations exist between organizations, there is a typical cycle for
testing. The sample below is common among organizations employing the Waterfall
development model.

Requirements analysis: Testing should begin in the requirements phase of the
software development life cycle. During the design phase, testers work with
developers in determining what aspects of a design are testable and with what
parameters those tests work.

Test planning: Test strategy, test plan, testbed creation. Since many activities
will be carried out during testing, a plan is needed.

Test development: Test procedures, test scenarios, test cases, test datasets,
test scripts to use in testing software.

Test execution: Testers execute the software based on the plans and test
documents then report any errors found to the development team.

Test reporting: Once testing is completed, testers generate metrics and make
final reports on their test effort and whether or not the software tested is ready
for release.

Test result analysis: or defect analysis, is done by the development team,
usually along with the client, in order to decide which defects should be
assigned, fixed, rejected (i.e. the software is found to be working properly) or
deferred to be dealt with later.

Defect Retesting: Once a defect has been dealt with by the development team,
it is retested by the testing team. This is also known as resolution testing.

Regression testing: It is common to have a small test program built of a
subset of tests, for each integration of new, modified, or fixed software, in
order to ensure that the latest delivery has not ruined anything, and that the
software product as a whole is still working correctly.

Test Closure: Once the test meets the exit criteria, activities such as
capturing the key outputs, lessons learned, results, logs, and documents related to
the project are carried out, archived and used as a reference for future projects.

8.5 Test cases

8.5.1 Test cases for User Name Field (Case Id 1: User ID Field)

| Case Id | Name | Input | Expected Output | Actual Output | Pass/Fail |
|---------|------|-------|-----------------|---------------|-----------|
| 1.1 | Check cursor position | - | It should be present at the User Name field | It is present at the User Name field | Pass |
| 1.2 | Characters in User Id field | A to Z, a to z | It should accept characters | It accepts characters | Pass |
| 1.3 | Enter numbers in User Id field | 0 to 9 | It should not accept numbers | It does not accept numbers | Pass |
| 1.4 | Enter special characters in User Id field | !,@,#,$,%,^,& | It should not accept special characters | It does not accept special characters | Pass |
| 1.5 | Enter combination of all the above data | !,@,#,$,%,^,&, A to Z, a to z | It should not accept all the input | It does not accept all the input | Pass |
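The username rules being tested above (letters only; digits and special characters rejected) could be enforced by a validator along these lines — the `UserNameField` class and method names are illustrative, not the project's actual code.

```java
public class UserNameField {
    // Accepts only the letters A-Z and a-z, matching test cases 1.2-1.5.
    public static boolean accepts(String input) {
        if (input == null || input.isEmpty()) {
            return false;
        }
        for (char c : input.toCharArray()) {
            boolean letter = (c >= 'A' && c <= 'Z') || (c >= 'a' && c <= 'z');
            if (!letter) {
                return false; // digits and special characters are rejected
            }
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(accepts("Admin"));   // letters only
        System.out.println(accepts("user123")); // contains digits
        System.out.println(accepts("a@b"));     // contains a special character
    }
}
```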

8.5.2 Test cases for Password Field (Case Id 2: Password Field)

| Case Id | Name | Input | Expected Output | Actual Output | Pass/Fail |
|---------|------|-------|-----------------|---------------|-----------|
| 2.1 | Check cursor position | - | It should be present at the password field | It is present at the password field | Pass |
| 2.2 | Characters in password field | A to Z, a to z | It should accept characters | It accepts characters | Pass |
| 2.3 | Enter numbers in password field | 0 to 9 | It should accept numbers | It accepts numbers | Pass |
| 2.4 | Enter special characters in password field | !,@,#,$,%,^,& | It should accept special characters | It accepts special characters | Pass |
| 2.5 | Enter combination of all the above data | !,@,#,$,%,^,&, A to Z, a to z | It should accept all the input | It accepts all the input | Pass |
| 2.6 | Enter 9 data values in password field | 123qwe456 | It should not accept the 9th data value | It does not accept the 9th data value | Pass |

8.5.3 Test cases for Submit button (Case Id 3: Submit button)

| Case Id | Name | Input | Expected Output | Actual Output | Pass/Fail |
|---------|------|-------|-----------------|---------------|-----------|
| 3.1 | Both the fields are empty | Click on submit button | It should give message "Enter Username & password" | It does | Pass |
| 3.2 | Only username has been entered | Click on submit button | It should give message "Enter password" | It does | Pass |
| 3.3 | Only password has been entered | Click on submit button | It should give message "Enter Username" | It does | Pass |
| 3.4 | Both fields have been entered, but wrong username | Click on submit button | It should give message "Invalid Username" | It does | Pass |
| 3.5 | Both fields have been entered, but wrong password | Click on submit button | It should give message "Invalid Password" | It does | Pass |
| 3.6 | Both fields have been entered, & both fields are wrong | Click on submit button | It should give message "Invalid Username & Password" | It does | Pass |
| 3.7 | Both fields have been entered, & both fields are correct | Click on submit button | It should give message "Submitted successfully" | It does | Pass |
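The submit-button cases map naturally onto a single decision routine. The sketch below mirrors the messages in the table; the `LoginForm` class and the stored credentials are made-up placeholders for illustration, not the system's real values.

```java
public class LoginForm {
    // Hypothetical reference credentials, for illustration only.
    private static final String VALID_USER = "admin";
    private static final String VALID_PASS = "pass@123";

    // Returns the message shown for each submit-button test case.
    public static String submit(String user, String pass) {
        if (user.isEmpty() && pass.isEmpty()) return "Enter Username & password";
        if (pass.isEmpty())                   return "Enter password";
        if (user.isEmpty())                   return "Enter Username";
        boolean userOk = user.equals(VALID_USER);
        boolean passOk = pass.equals(VALID_PASS);
        if (!userOk && !passOk) return "Invalid Username & Password";
        if (!userOk)            return "Invalid Username";
        if (!passOk)            return "Invalid Password";
        return "Submitted successfully";
    }

    public static void main(String[] args) {
        System.out.println(submit("", ""));              // case 3.1
        System.out.println(submit("admin", ""));         // case 3.2
        System.out.println(submit("admin", "pass@123")); // case 3.7
    }
}
```

Ordering the checks from emptiness to validity guarantees that exactly one message is produced per combination, so the seven table rows fully cover the routine's branches.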

8.5.4 Test cases for Browse button (Case Id 4: Browse button)

| Case Id | Name | Input | Expected Output | Actual Output | Pass/Fail |
|---------|------|-------|-----------------|---------------|-----------|
| 4.1 | Functionality of Browse button | Click on Browse button | File browsing window should open | It does | Pass |
8.5.5 Test cases for file dialog box (Case Id 5: File dialog box)

| Case Id | Name | Input | Expected Output | Actual Output | Pass/Fail |
|---------|------|-------|-----------------|---------------|-----------|
| 5.1 | List box for displaying various drives and directories | - | List box should be present | It is present | Pass |
| 5.2 | Text field for displaying name of the file | - | Text field should be present | It is present | Pass |
| 5.3 | Text field for displaying type of file | - | Text field should be present | It is present | Pass |
| 5.4 | Open button for opening the file | - | Open button should be present | It is present | Pass |
| 5.5 | Cancel button for canceling file dialog box | - | Cancel button should be present | It is present | Pass |
| 5.6 | Selection in drive list box | Click on any drive, directory or file | The drive, directory or file gets selected | It does | Pass |
| 5.7 | Do not select any file | Click on Open button | It should give error "first select the file" | It gives error | Pass |
| 5.8 | Select appropriate file | Click on Open button | It should open that file | It opens that file | Pass |
| 5.9 | Cancel button functionality | Click on Cancel button | It should close file dialog box | It does | Pass |

Chapter 09
Snapshots

Chapter 10
Deployment and Maintenance
10.1 Installation
Before installing Image Authentication Software, be sure that your system meets the
following requirements:
| Component | Requirement |
|-----------|-------------|
| CPU | Pentium 4, 1 GHz or better (or the equivalent) recommended |
| OS | Preinstalled versions of Windows XP Home Edition or Windows XP Professional (Service Pack 2 or later), or Windows 2000 Professional (Service Pack 4 or later) |
| Hard-disk space | 50 MB required for installation, with an additional 200 MB required when Image Authentication Software is running |
| RAM | 512 MB or more recommended |
| Video resolution | 800 × 600 pixels or more with 16-bit color (High Color) or more |
| Camera | Nikon digital single-lens reflex (SLR) cameras that support Image Authentication |
| Miscellaneous | One built-in USB port (for the USB key) |

Table 10.1 Installation Requirements

Following are the installing steps for Photo Morphing Detection System
Step 1: Install JDK 1.6.0
Step 2: Install NetBeans
Step 3: Then install Photo Morphing Detection Software

Chapter 11
Conclusion and future scope
Users expect that robust solutions will ensure copyright protection and also
guarantee the authenticity of multimedia documents. There is such a strong demand
for image manipulation techniques and applications that they are becoming more and
more sophisticated and are accessible to a greater number of people.

This work proposed a new photo-morphing detection framework for image content
authentication in which the original image can be restored; the scheme is robust to
JPEG compression and is signed with a cryptographic signature algorithm. According
to our experimental results, the system survives JPEG compression at the tested
quality factor.
Future work includes refining our method to be applicable to more image
formats and increasing the robustness of the system to tolerate lower JPEG
compression quality factors.

Chapter 12
References:

A. C. Gallagher (Carnegie Mellon University) and T. Chen (Eastman Kodak Company), "Image Authentication by Detecting Traces of Demosaicing," 978-1-4244-2340-8/08/2012 IEEE.

T.-T. Ng, S.-F. Chang, J. Hsu, L. Xie, and M. P. Tsui, "Physics-motivated features for distinguishing photographic images and computer graphics," in Proc. ACM MM, 2005.

S. Bayram, H. T. Sencar, and N. Memon, "Source camera identification based on CFA interpolation," in Proc. ICIP, 2005.

A. Gallagher, "Detection of linear and cubic interpolation in JPEG compressed images," in Proc. CRV, 2005.

A. Gallagher, "Method for detecting image interpolation," U.S. Patent 6,904,180, 2005.

www.wikipedia.com

Appendix A: Glossary
List of Acronyms:

| Name | Meaning |
|------|---------|
| PIM | Photographic images |
| PRCG | Photorealistic computer graphic images |
| GUI | Graphical User Interface |
| JVM | Java Virtual Machine |