%#############################################
%#########Author : PROJECT###########
%#########COMPUTER ENGINEERING############
\documentclass[12pt,a4paper]{report}
\usepackage{fancyhdr}
\fancyhf{}
\fancyheadoffset{0.4in}
\fancyfootoffset{0.4in}
\oddsidemargin 0.0in
\textwidth 16cm
\usepackage{graphicx}
\usepackage{color}
%\input{rgb}
%\usepackage[utf8]{inputenc}
\usepackage{amsmath}
\usepackage{amsfonts}
% \usepackage{pdfpages}
\usepackage{amssymb}
% \usepackage{makeidx}
% \usepackage{enumitem}
% \usepackage{titletoc}
% \usepackage{float}
% \usepackage{graphicx}
% \usepackage{titlesec}
% \usepackage{setspace}
% \renewcommand{\baselinestretch}{1.5}
% \usepackage{sectsty}
% \usepackage{longtable}
% \usepackage{geometry}
% \usepackage{times}
%\usepackage[left=1.5in,right=0.5in,top=1in,bottom=1.25in]{geometry}
% \geometry{a4paper, total={210mm,197mm}, left=37mm, right=25.4mm, top=25.4mm, bottom=32mm}
% \renewcommand{\cftfigfont}{Figure }
% \renewcommand{\cfttabfont}{Table }
%\usepackage{txfonts}
% \renewcommand{\chaptername}{}
%\renewcommand{\thechapter}{}
% \sectionfont{\fontsize{12}{10}\selectfont}
%\subsectionfont{\fontsize{12}{10}\selectfont}
%\headerfont{\fontsize{10}{10}\selectfont}
%\footerfont{\fontsize{10}{10}\selectfont}
%\newlength\mylen
%\settowidth\mylen{\textbullet}
%\addtolength\mylen{-3mm}
%\pagenumbering{}
%\begin{document}
%\pagenumbering{gobble}
%\input{./one.tex}
%\cleardoublepage
%\input{./two.tex}
%\cleardoublepage
%\input{three.tex}
%\cleardoublepage
%\input{four.tex}
%\cleardoublepage
%\input{five.tex}
%\renewcommand{\headrulewidth}{0.4pt}
%\renewcommand{\footrulewidth}{0.4pt}
%\titleformat{\chapter}[display]
%{\normalfont\large\bfseries\centering}{\chaptertitlename\ }
%{20pt}{\Large}
%\titlespacing*{\chapter}{20pt}{-10pt}{40pt}
%\newpage
%\pagenumbering{Roman}
%\input{./abstract.tex}
%\newpage
%\input{./ack.tex}
%\newpage
%\thispagestyle{empty}
%\tableofcontents
%\newpage
%\input{./listfig.tex}
%\newpage
%\input{./listtable.tex}
%\begin{flushleft}
%\chapter*{LIST OF TABLES}
%\end{flushleft}
%\addcontentsline{toc}{chapter}{\protect\numberline{}{LIST OF TABLES}}
%\\
%\\
%\\
%\\
%\\
\pagestyle{fancy}
\lhead{}
\chead{}
\rhead{}
\lfoot{\small}
\cfoot{\thepage}
\rfoot{DYPIET}
\renewcommand{\footrulewidth}{0.01in}
\newpage
\makeatletter
\def\@makechapterhead#1{%
\vspace*{0\p@}%
\par\nobreak
\vspace*{2\p@}%
\par\nobreak
\vspace*{2\p@}%
%\vskip 40\p@
\vskip 20\p@
}
\makeatother
\makeatletter
\renewcommand\section{\@startsection{section}{1}{\z@}%
{-3.5ex \@plus -1ex \@minus -.2ex}%
{2.3ex \@plus.2ex}%
{\fontsize{12pt}{14pt}\selectfont\bfseries}}
\makeatother
\makeatletter
\renewcommand\subsection{\@startsection{subsection}{2}{\z@}%
{-3.25ex \@plus -1ex \@minus -.2ex}%
{2.3ex \@plus.2ex}%
{\fontsize{12pt}{14pt}\selectfont\bfseries}}
\makeatother
\begin{document}
\chapter{Synopsis}
\section{Project Title}
\section{Internal Guide}
No Sponsorship
\section{Technical Keywords}
General:\\
Software:
\begin{itemize}
\item JCreator
\end{itemize}
\newpage
\section{Problem Statement}
To provide a secure, lightweight biometric authentication service for accessing a cloud account.
\section{Abstract}
The problem of fake log-ins and data theft is a major issue that cloud users are facing. To protect
privacy and provide security, there is a need to authenticate the user that requests access to an
account. However, the techniques used for authentication so far could not guarantee this and
thereby left the data at high risk. So we are using the concept of biometric authentication,
along with some additional concepts of privacy preservation, to provide a more secure log-in. This
project also provides a feature by which a fingerprint can be recognised even if its orientation is
changed.
\begin{itemize}
\item Study of encryption and decryption algorithms for image as well as character data.\\
\item Study of fingerprint recognition system along with the feature of orientation change. \\
\end{itemize}
\label{sec:math}
\begin{itemize}
\item $N$ = Match the fingerprints provided during log-in with the ones that were stored in the database during registration.
\item $Q$ = Request is accepted.
\item $H$ = Client validation.
\item $X$ = Wrong input.
\item $Y$ = Poor network.
\end{itemize}
Benefits:
\begin{itemize}
\item It will help us to store the data on the cloud in compressed form, thereby improving memory utilization.
\end{itemize}
\begin{itemize}
\item IEEE
\item ACM
\item Springer
\end{itemize}
\newpage
\subsection{Tasks}
\subsection{Project Planning}
\begin{table}[!htbp]
\begin{center}
\def\arraystretch{1.5}
\caption{Project planning}
\label{tab:planning}
\begin{tabular}{| c | c | c |}
\hline
Task & Duration & Assigned members \\ \hline
\end{tabular}
\end{center}
\end{table}
\chapter{Technical Keywords}
\section{Area of Project}
Cloud Computing
\section{Technical Keywords}
\begin{enumerate}
\item Biometric Authentication: Biometric authentication is any process that validates the
identity of a user who wishes to sign into a system by measuring some intrinsic characteristics of that
user. It depends on the measurement of some unique attributes of the user. Such schemes presume that these
user characteristics are unique, that they cannot be recorded and reproduced later, and
that the sampling device is tamper-proof.
\item Encryption is the conversion of data into a form called a cipher text that cannot be easily
understood by unauthorized parties. It is the process of encoding messages or information in such a way
that only authorized parties can read it. In an encryption scheme the intended communication
information or message, referred to as plaintext, is encrypted using an encryption algorithm, generating
ciphertext that can only be read if decrypted. The purpose of encryption is to ensure that only
somebody who is authorized to access data will be able to read it, using the decryption key. Somebody
who is not authorized can be excluded because he or she does not have the required key, without which
it is impossible to read the encrypted information.
\item Reverse Circle Cipher is an encryption technique which uses a method of circular
substitution. It changes the ASCII values of the text by creating arbitrary blocks of the string and applying
circular substitution to each block. When the index position of the string reaches its maximum point, it
restarts from the initial index, converting the plaintext to ciphertext.
\end{enumerate}
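The Reverse Circle Cipher described above can be sketched in Java. This is a minimal illustration, not the project's actual implementation: the block size, the position-dependent shift, and the restriction to printable ASCII (codes 32--126) are all assumptions made for the example.

```java
// Minimal sketch of a block-wise circular-substitution cipher.
// Assumption: input is printable ASCII (codes 32..126); the shift amount
// depends on the character's position inside its block and wraps circularly.
public class ReverseCircleCipher {
    static String transform(String text, int blockSize, int sign) {
        StringBuilder out = new StringBuilder();
        for (int i = 0; i < text.length(); i++) {
            int shift = sign * ((i % blockSize) + 1); // shift restarts at each block start
            int code = text.charAt(i) - 32;           // map printable ASCII to 0..94
            code = ((code + shift) % 95 + 95) % 95;   // circular substitution over 95 symbols
            out.append((char) (code + 32));
        }
        return out.toString();
    }

    static String encrypt(String plain, int blockSize)  { return transform(plain, blockSize, +1); }
    static String decrypt(String cipher, int blockSize) { return transform(cipher, blockSize, -1); }

    public static void main(String[] args) {
        String cipher = encrypt("Cloud login", 4);
        System.out.println(cipher + " -> " + decrypt(cipher, 4)); // round trip recovers the plaintext
    }
}
```

Because the shift is position-dependent and wraps circularly over the character range, decryption is simply the same walk with the shift negated.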
\chapter{Introduction}
\section{Project Idea}
Today cloud computing has become a hot trend in the IT industry. Most enterprises are using the
cloud to store and maintain their huge data on cloud servers. But the security of critical data on the cloud
has become a concern for both cloud service users and providers. Traditional authentication mechanisms
like passwords, key generation, and encryption have failed: hackers are able to crack these
passwords. So our data is not secure until we have a secure mechanism to protect it from
intruders and hackers.

In this project we are dealing with a secure authentication mechanism, unlike a password or key, which
cannot be hacked easily. Biometric data is unique to every human being, so our project aims at using
the biometric data of the user for the authentication process. A biometric system is a computer system that
implements biometric recognition algorithms. It consists of sensing, feature extraction, and matching
modules. Biometric sensors (e.g., a fingerprint sensor, or a digital camera for the face) capture or scan the
biometric trait of an individual to produce its digital representation. A quality check is generally
performed to ensure that the acquired biometric sample can be reliably processed by the subsequent
feature extraction and matching modules. The feature extraction module discards the unnecessary and
extraneous information from the acquired samples, and extracts salient and discriminatory information,
called features, that is generally used for matching. During matching, the query biometric sample is
matched with the reference information stored in the database to establish the identity associated with
the query. The system has two stages of operation: enrollment and recognition. Enrollment refers to the
stage in which the system stores some biometric reference information about the person in a database.

We are implementing our project to match the fingerprint data of the user for authentication in cloud
computing. We will store the user's fingerprint data in compressed form in a database at registration time
and use it for matching whenever the user tries to log in next time. We use a biometric scanner to
extract the user's fingerprint. Fingerprint data will be transmitted in compressed form for the security of
the user's biometric data. A matching module matches the supplied fingerprint against the one stored in
the database; if the fingerprint matches, the user is allowed to log in.

Since this would be an overhead for cloud service providers, our project aims at creating a separate web
client between the user and the cloud service provider to provide the service of biometric authentication.
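The matching step described above can be sketched as follows. This is only an illustrative sketch: it assumes the feature extraction module has already reduced each fingerprint to a fixed-length numeric feature vector and uses cosine similarity against a threshold; a real minutiae-based matcher is considerably more involved.

```java
// Illustrative matching module: compares a query feature vector against the
// enrolled template and accepts the log-in if the similarity clears a threshold.
public class Matcher {
    // Cosine similarity between two equal-length feature vectors.
    static double similarity(double[] query, double[] template) {
        double dot = 0, nq = 0, nt = 0;
        for (int i = 0; i < query.length; i++) {
            dot += query[i] * template[i];
            nq  += query[i] * query[i];
            nt  += template[i] * template[i];
        }
        return dot / (Math.sqrt(nq) * Math.sqrt(nt));
    }

    // Accept only if the query is close enough to the enrolled template.
    static boolean authenticate(double[] query, double[] template, double threshold) {
        return similarity(query, template) >= threshold;
    }
}
```

The threshold trades off false accepts against false rejects; its value would be tuned on real fingerprint data.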
Nowadays every small and big enterprise is using cloud services; it has become a growing trend in IT.
Enterprises prefer to store their huge data on the cloud rather than maintain it on a PC. But the growing risk
in cloud computing is a concern for the service providers and the enterprises. A report
from the Cloud Security Alliance released on February 29 lists the twelve biggest current threats.\\
To curb these threats we thought of implementing a secure authentication mechanism, unlike password
and key generation, which cannot be hacked easily.\\
Biometric authentication can be a perfect solution to this issue. Biometric authentication is any process
that validates the identity of a user who wishes to sign into a system by measuring some intrinsic
characteristic of that user. Biometric samples include fingerprints, retinal scans, face recognition, voice
prints, and even typing patterns. Every user has unique biometrics which can be used for
authentication: face recognition, fingerprint matching, retina matching, and so on. With the
advent of biometric authentication our data on the cloud will be safe and secure.
Nowadays most smartphones are equipped with a fingerprint scanner, so it is completely feasible
as well.
\newpage
\section{Literature Survey}
\begin{itemize}
\item D. J. Craft, ``A fast hardware data compression algorithm and some algorithmic extensions''\\
In this paper the author reviewed data compression algorithms such as ALDC (Adaptive Lossless
Data Compression), the adaptive Lempel-Ziv algorithm, LZ1, LZ2, and the BLDC and CLDC algorithms.\\
\item Cong Li, Zhenzhou Ji, Fei Gu, ``Efficient parallel design for a BWT-based DNA sequence
data multi-compression algorithm''\\
In this paper the authors presented research on a BWT (Burrows-Wheeler Transform) based DNA
sequence compression algorithm using MPI and OpenMP. It also discusses earlier DNA compression
algorithms such as the BioCompress algorithm (1993), the GenCompress algorithm (1999) by X. Chen,
CTW+LZ (2002) by T. Matsumoto, and the GeNML algorithm (2002) by G. Korodi.\\
\item In this paper the author presented research on using encrypted data for privacy-preserving
biometric authentication. It basically shows how encrypted data can be utilised without decrypting it.\\
\item Kiran Kumar K, K. B. Raja, ``Hybrid fingerprint matching using block filter and strength
factor''\\
In this paper the author presented research on fingerprint matching using a combination of
minutiae and ridge extraction methods.\\
\item CHENG Hongbing, RONG Chunming, ``Identity-Based Encryption and Biometric
Authentication Scheme for Secure Data Access in Cloud Computing''\\
In this paper the authors proposed a secure data access scheme based on
identity-based encryption and biometric authentication for cloud computing.\\
\end{itemize}
\section{Problem Statement}
\begin{enumerate}
\item Cloud accounts are easily hackable and thereby face security issues.
\item Using fingerprint authentication that can match the records even if the orientation of the input is
changed.
\end{enumerate}
\begin{itemize}
\item Creation of a feasible solution to the problem statement by using biometric
authentication.
\item Providing an algorithm that can match the biometric input in encrypted form.
\end{itemize}
\subsection{Statement of scope}
\begin{itemize}
\item Reducing the time factor required for searching of encrypted files.
\end{itemize}
\section{Software context}
\begin{itemize}
\item The main purpose of the project is to search data stored in the cloud in encrypted format, in such a
way that the data need not be decrypted for the search process.
\end{itemize}
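One common way to search without decrypting, sketched below under simplifying assumptions, is a keyword-token index: the client stores an HMAC of each keyword next to the encrypted file, and a search recomputes the token, so the server only ever sees opaque tokens. This illustrates the idea and is not the project's actual scheme; key distribution and the file encryption itself are omitted.

```java
// Hypothetical sketch of keyword search over encrypted cloud data.
// The server-side index maps HMAC tokens to file names; plaintext
// keywords never leave the client.
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.*;

public class EncryptedIndex {
    private final SecretKeySpec key;
    private final Map<String, List<String>> index = new HashMap<>(); // token -> file names

    public EncryptedIndex(byte[] secret) {
        key = new SecretKeySpec(secret, "HmacSHA256");
    }

    // Deterministic keyword token: HMAC-SHA256 under the client's secret key.
    private String token(String keyword) {
        try {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(key);
            return Base64.getEncoder().encodeToString(
                mac.doFinal(keyword.getBytes(StandardCharsets.UTF_8)));
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    // Called at upload time: register the file under each keyword's token.
    public void addFile(String fileName, String... keywords) {
        for (String kw : keywords)
            index.computeIfAbsent(token(kw), t -> new ArrayList<>()).add(fileName);
    }

    // Called at search time: recompute the token and look it up.
    public List<String> search(String keyword) {
        return index.getOrDefault(token(keyword), Collections.emptyList());
    }
}
```

A search for an unknown keyword simply returns an empty list; the index itself contains no plaintext keywords, only HMAC tokens.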
\section{Major Constraints}
\begin{itemize}
\item Constraints can be defined as limiting factors, states of restriction, or a lack of spontaneity of the
software. Constraints are the limitations and hurdles which can stop the software team from fulfilling
their responsibilities; they are anything that restricts or dictates the actions of the project
team, and that can cover a lot of territory. The triple constraints of time, resources, and quality are the
big hitters, and every project has one or two, if not all three, of the triple constraints as a project driver.
Here are some constraints used in software development:
\begin{itemize}
\item Time - This refers to the actual time required to produce a deliverable, which in this case would
be the end result of the project. Naturally, the amount of time required to produce the deliverable will
be directly related to the number of requirements that are part of the end result, along with the amount
of resources allocated to the project.
\item Cost - This is the estimation of the amount of money that will be required to complete the
project. Cost itself encompasses various things, such as resources, labour rates for contractors, risk
estimates, bills of materials, etc. All aspects of the project that have a monetary component are made
part of the overall cost structure.
\item Scope - These are the functional elements that, when completed, make up the end deliverable
for the project. The scope itself is generally identified up front so as to give the project the best chance
of success. A common success measure for the scope aspect of a project is its inherent quality upon
delivery.
\end{itemize}
\end{itemize}
\section{Outcomes}
\section{Applications}
\begin{itemize}
\item Can be used in Cloud Storage servers to search in stored encrypted data.
\end{itemize}
\begin{table}[!htbp]
\begin{center}
\def\arraystretch{1.5}
\caption{Hardware Requirements}
\begin{tabular}{| c | c | c | c |}
\hline
\hline
\end{tabular}
\label{tab:hreq}
\end{center}
\end{table}
\newpage
\textbf{Platform:}\\
\begin{enumerate}
\item Ubuntu.
\end{enumerate}
\chapter{Project Plan}
\section{Project Estimates}
The waterfall model and the associated streams derived from the assignments (Annexures A and B) are
used for estimation.
\subsection{Reconciled Estimates}
\subsubsection{Cost Estimate}
\subsubsection{Time Estimates}
\subsection{Project Resources}
Project resources [people, hardware, software, tools, and other resources] are based on memory
sharing, IPC, and concurrency; the appendices are to be referred to.
The most common resources used to analyse software are time and the number of execution steps; these
are generally expressed in terms of $n$. We will use an informal model of a computer and an algorithm.
All the definitions can be made precise by using a model of a computer such as a Turing machine. While
we are interested in the difficulty of a computation, we will focus our hardness results on the difficulty of
yes-no questions. These results immediately generalize to questions about general computations. It is
also possible to state the definitions in terms of languages, where a language is defined as a set of
strings: the language associated with a question is the set of all strings representing questions for which
the answer is yes.\\
\subsection{The Class P}
The collection of all problems (algorithms or methods that we are using in our project) that can be
solved in polynomial time is called P. That is, a decision question is in P if there exists an exponent $k$
and an algorithm for the question that runs in time $O(n^k)$, where $n$ is the length of the input.
P roughly captures the class of practically solvable problems, or at least that is the conventional
wisdom. Something that runs in time $2^n$ requires double the time if one adds one character to the
input. Something that runs in polynomial time does not suffer from this problem.
P is robust in the sense that any two reasonable (deterministic) models of computation give rise to the
same definition of P.\\
\subsection{The Class NP}
The collection of all problems that can be solved in polynomial time using nondeterminism is called NP.
That is, a decision question is in NP if there exists an exponent $k$ and a nondeterministic algorithm for
the question that, for all hints, runs in time $O(n^k)$, where $n$ is the length of the input.\\
It would seem that P and NP might be different sets. In fact, probably the most important unsolved
problem in mathematics and computer science today is the conjecture $\mathrm{P} \neq \mathrm{NP}$.
If the conjecture is true, then many problems for which we would like efficient algorithms do not have
them, which would be sad. If the conjecture is false, then much of cryptography is under threat.
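Stated compactly, the two classes above are (a notational restatement, with $L$ ranging over decision problems encoded as languages):
\[
\mathrm{P} = \{\, L \mid \exists\, k \text{ and a deterministic algorithm deciding } L \text{ in time } O(n^k) \,\}
\]
\[
\mathrm{NP} = \{\, L \mid \exists\, k \text{ and a nondeterministic algorithm deciding } L \text{ in time } O(n^k) \,\}
\]
Since every deterministic algorithm is a special case of a nondeterministic one, $\mathrm{P} \subseteq \mathrm{NP}$; the open question is whether the inclusion is strict.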
\subsection{NP Completeness}
While we cannot determine whether P = NP or not, we can, however, identify problems that are the
hardest in NP. These are called the NP-complete problems. They have the property that if there is a
polynomial-time algorithm for any one of them then there is a polynomial-time algorithm for every
problem in NP.\\
\subsubsection{Definition}
\subsubsection{Example}
If $(Fr' \in S) \le T$,
then the system is considered NP-complete. Our system unconditionally satisfies this condition, so we
can conclude that our system is NP-complete.\\
\subsection{Risk Identification}
\subsection{Risk Analysis}
The risks for the Project can be analyzed within the constraints of time and quality.\\
\begin{figure}[!ht]
\centering
\includegraphics[width=0.6\linewidth]{tbl1.png}
\caption{Risk table}
\label{fig:risktable}
\end{figure}
\newpage
\begin{table}[!htbp]
\begin{center}
\def\arraystretch{1.5}
\caption{Risk Probability definitions}
\label{tab:riskprob}
\begin{tabular}{| c | c | c |}
\hline
Impact & Description & Probability \\ \hline
High & Schedule impact or some parts of the project have low quality & $5-10\% $ \\ \hline
Medium & Barely noticeable degradation in quality; low impact & $ < 5 \% $ \\ \hline
\end{tabular}
\end{center}
\end{table}
\newpage
\subsection{Overview of Risk Mitigation, Monitoring, Management}
\begin{table}[!htbp]
\begin{center}
\def\arraystretch{1.5}
\begin{tabular}{| p{6cm} | p{6cm} |}
\hline
\end{tabular}
\end{center}
\label{tab:risk1}
\end{table}
\begin{table}[!htbp]
\begin{center}
%\def\arraystretch{1.5}
\def\arraystretch{1.5}
\begin{tabular}{| p{6cm} | p{6cm} |}
\hline
\end{tabular}
\end{center}
\label{tab:risk2}
\end{table}
\begin{table}[!htbp]
\begin{center}
\def\arraystretch{1.5}
\begin{tabular}{| p{6cm} | p{6cm} |}
\hline
Source & This was identified during early development and testing. \\ \hline
\end{tabular}
\end{center}
\label{tab:risk3}
\end{table}
\newpage
\section{Project Schedule}
\begin{itemize}
\item Task 1: System should communicate with the server all time.
\item Task 3: System should have good and easily understandable GUI
\item Task 4: User must be able to first create an account and then be able to log in.
\item Task 5: The accuracy of test results should be within acceptable limits.
\end{itemize}
\subsection{Task network}
Project tasks and their dependencies are noted in this diagrammatic form.
\subsection{Timeline Chart}
\begin{figure}[!htbp]
\centering
\includegraphics[width=0.6\linewidth]{tt.png}
\caption{Timeline chart}
\label{fig:timeline}
\end{figure}
\newpage
\section{Introduction}
The purpose of this document is to prescribe the scope, approach, resources, and schedule of the
testing activities, and to identify the items being tested, the features to be tested, and the testing tasks
to be performed. The development of a software system involves a series of production activities where
opportunities for the injection of human fallibilities are enormous. Errors may begin to occur at the very
inception of the process, where the objectives may be erroneously or imperfectly specified, as well as in
the later design and development stages. Software testing is a critical element of software quality
assurance and represents the ultimate review of specification, design, and code generation.
Once source code has been generated, software must be tested to uncover as many errors as possible.
Our goal is to design a series of test cases that have a high likelihood of finding errors. That is where
software-testing techniques enter the picture. These techniques provide systematic guidance for
designing tests that:\\
2. Exercise the input and output domains of the program to uncover errors in program function,
behaviour, and performance.\\
\textbullet Planning: In this phase, the project scope is defined and the appropriate methods are
determined.\\
\textbullet Requirement Analysis: The functional and non-functional requirements are gathered in
this phase.\\
\textbullet Design: The design and implementation of the graphical user interface is done in this
phase.\\
\textbullet Development: All the coding and implementation in each module is done in this
phase.\\
\textbullet Integration and test: All software modules are combined and tested as a group. This occurs
after unit testing and before validation testing. Integration testing takes as its input modules that have
been unit tested, groups them into larger aggregates, applies tests defined in an integration test plan to
those aggregates, and delivers as its output the integrated system ready for system testing.\\
\section{Usage Scenario}
\subsection{User profiles}
The user is the main actor of the model. A number of actions are performed by these actors in the
suitable environment that has been provided.
\\
\subsection{Use Cases}
A use case diagram in the Unified Modelling Language (UML) is a type of behavioural diagram defined by
and created from a Use-case analysis. Its purpose is to present a graphical overview of the functionality
provided by a system in terms of actors, their goals (represented as use cases), and any dependencies
between those use cases. The main purpose of a use case diagram is to show what system functions are
performed for which actor. Roles of the actors in the system can be depicted.\\
\begin{center}
\begin{figure}[!htbp]
\centering
\fbox{\includegraphics[width=\textwidth]{Use.jpg}}
\caption{Use case diagram}
\label{fig:use}
\end{figure}
\end{center}
\newpage
\begin{center}
\begin{figure}[!htbp]
\centering
\fbox{\includegraphics[width=\textwidth]{DFD0.jpg}}
\caption{Data flow diagram (level 0)}
\label{fig:dfd0}
\end{figure}
\end{center}
\newpage
\begin{center}
\begin{figure}[!htbp]
\centering
\fbox{\includegraphics[width=\textwidth]{DFD1.jpg}}
\caption{Data flow diagram (level 1)}
\label{fig:dfd1}
\end{figure}
\end{center}
\newpage
\begin{center}
\begin{figure}[!htbp]
\centering
\fbox{\includegraphics[width=\textwidth]{DFD2.jpg}}
\caption{Data flow diagram (level 2)}
\label{fig:dfd2}
\end{figure}
\end{center}
\newpage
Activity Diagram
\begin{center}
\begin{figure}[!htbp]
\centering
\fbox{\includegraphics[width=\textwidth]{Act.jpg}}
\caption{Activity Diagram}
\label{fig:act}
\end{figure}
\end{center}
\newpage
Sequence Diagram
\begin{center}
\begin{figure}[!htbp]
\centering
\fbox{\includegraphics[width=\textwidth]{Seq.jpg}}
\caption{Sequence Diagram}
\label{fig:seq}
\end{figure}
\end{center}
\subsubsection{User Interfaces}
The application is implemented in Java (JDK) and runs on the JVM. It interacts with its user through a GUI.
\subsubsection{Hardware Interfaces}
\subsubsection{Software Interfaces }
\section{Introduction}
The system operates in two stages:\\
1) Enrolment process\\
2) Verification process\\
\section{Architectural Design}
A description of the program architecture is presented, including the subsystem design or block diagram,
package diagram, and deployment diagram with descriptions.
\begin{center}
\begin{figure}[!htbp]
\centering
\fbox{\includegraphics[width=\textwidth]{arc.png}}
\caption{Architecture diagram}
\label{fig:arch-dig}
\end{figure}
\end{center}
\section{Data design}
A description of all data structures is given, including internal, global, and temporary data structures,
database design (tables), and file formats.
Data structures that are passed among components of the software are described.
Data structures that are available to major portions of the architecture are described.
\section{Component Design}
Class diagrams, interaction diagrams, and algorithms are presented, with a description of each component.
\begin{center}
\begin{figure}[h]
\centering
\includegraphics[width=12cm]{comp.png}
\caption{Component diagram}
\label{fig:comp-dig}
\end{figure}
\end{center}
\subsection{Class diagram}
\begin{center}
\begin{figure}[h]
\centering
\includegraphics[width=12cm]{CLASS.png}
\caption{Class diagram}
\label{fig:class-dig}
\end{figure}
\end{center}
\begin{table} [ht]
\footnotesize
\centering
\caption{Test Case}\medskip
\label{tab:template}
\begin{tabular}{|p{1.0cm}|p{3cm}|p{3cm}|p{4cm}|p{1.0 cm}|}
\hline
TC No. & Test Case & Objective & Expected Result & Status \\
\hline
TC01 & File uploading & Uploading the data to the cloud; file selection from the user;
feature selection is performed on upload of the file & The file is successfully uploaded to the cloud
in encrypted format with proper feature extraction & Pass\\
\hline
TC02 & Searching & Searching files for the given query; feature-extracted data is used;
desired files are extracted & The file-name vector is obtained & Pass\\
\hline
TC03 & Downloading file & Downloading the files from the cloud; the file is
downloaded at the client side in its original format; files need to be stored in the cloud & The file is
downloaded properly & Pass\\
\hline
\end{tabular}
\end{table}
\bibliographystyle{ieeetr}
\bibliography{biblo}
\newpage
\chapter{ANNEXURE A}
\addcontentsline{toc}{chapter}{\numberline{}ANNEXURE A }
The basic theme of the waterfall model is to know all requirements in detail at the start, during
communication, and then to proceed to the next phases with no changes allowed later. Accordingly,
all requirements are gathered at the very beginning: the system to be developed contains modules
which are interdependent on each other, so it is essential to know all the requirements at the start of the
project, which makes requirement gathering the first step.
Project planning is the next important step: daily project planning is done
and project work proceeds according to that plan so that it will complete on time. The
next steps, that is designing, modelling, and coding, are completed according to the plan which
was created. The proposed system is divided into small modules so that it is easy to implement
and understand; as there are small modules, it is also easy to assign tasks to
each project member, so the project becomes easy to manage. As all requirements are well
understood at the start, the waterfall model suits this project.
\begin{figure}[hbtp]
\begin{center}
\includegraphics[scale=0.7]{Dataflow.jpg}
\end{center}
\end{figure}
\subsubsection*{Advantages:}
\begin{enumerate}
\item Easy to manage. Each phase has specific deliverable and a review.
\item Works well for projects where requirements are well understood.
\end{enumerate}
There exist various software development approaches, aptly defined and designed,
which are employed during the development process of a software. These approaches
are also referred to as "Software Development Process Models". Each process model
follows a particular life cycle in order to ensure success in the process of software development.\\
The same waterfall model is utilised in this project. The waterfall approach was the first SDLC model to
be widely used in software engineering to ensure the success of a project. In the waterfall approach, the
whole process of software development is divided into separate phases. These phases in the waterfall
model are:
\begin{enumerate}
\item Implementation
\end{enumerate}
All these phases are cascaded so that the second phase is started only when the defined set of goals for
the first phase has been achieved and it is signed off, hence the name waterfall model.
All possible requirements of the system to be developed are captured in the first phase.
Requirements are a set of functions and constraints that the end user (who will be using
the system) expects from the system. The requirements are gathered from the end user
at the start of the software development phase. These requirements are analyzed for
their validity, and the possibility of incorporating them in the system to be
developed is examined. The requirement specification serves as the guideline for the next phase of the model.
Before starting the actual coding phase, it is highly important to understand the requirements
of the end user and also to have an idea of what the end product should look like.
The requirement specifications from the first phase are studied in this phase and a system
design is prepared. System design helps in specifying hardware and system requirements
and also helps in defining the overall system architecture; the system design specifications
serve as input for the next phase.
On receiving the system design documents, the work is divided into modules/units and actual
coding is started. The system is first developed in small programs called units, which
are integrated in the next phase. Each unit is developed and tested for its functionality;
this is referred to as unit testing. Unit testing mainly verifies if the modules/units meet
their specifications.
As specified above, the system is first divided into units which are developed and tested
for their functions. These units are integrated into a complete system during integration
phase and tested to check if all modules/units coordinate with each other, and the system as a whole
behaves as per the specifications. After successfully testing the software, it is delivered to the customer.
\newpage
\chapter{ANNEXURE B }
\addcontentsline{toc}{chapter}{\numberline{}ANNEXURE B }
\section*{FEASIBILITY STUDY}
\section*{Introduction}
A feasibility study is the test of a system proposal according to its workability, impact on the organization,
ability to meet user needs, and effective use of resources. It focuses on the evaluation of the existing
system and procedures, and on the analysis of alternative candidate systems' cost estimates. Feasibility
analysis was done to determine whether the system would be feasible.
The development of a computer-based system or a product is more often than not plagued by scarce
resources and tight delivery dates. A feasibility study helps the analyst decide whether to proceed with,
amend, postpone, or cancel the project; this is particularly important when the project is large, complex,
and costly.
Once the analysis of the user requirements is complete, the system has to be checked for the
compatibility and feasibility of the software package that is aimed at. An important outcome of the
preliminary investigation is the determination that the system requested is feasible.
The technology used can be developed with the current equipment and has the technical
capacity to hold the data required by the new system.
Technical feasibility considers the existing system and the extent to which it can support the proposed
addition.
Economic analysis is the most frequently used method for evaluating the effectiveness of a new
system. More commonly known as cost/benefit analysis, the procedure is to determine the benefits and
savings that are expected from a candidate system and compare them with its costs.
If benefits outweigh costs, then the decision is made to design and implement the system; the
costs must be accurately weighed against the benefits before taking action. By this measure the
proposed system is economically feasible, so it is economically a good project.
\subsubsection*{Performance Feasibility}
\end{appendices}
\end{document}
%#############################################
%#########Author : PROJECT###########
%#########COMPUTER ENGINEERING############
\documentclass[12pt,a4paper]{report}
\usepackage{fancyhdr}
\fancyhf{}
\fancyheadoffset{0.4in}
\fancyfootoffset{0.4in}
\oddsidemargin 0.0in
\textwidth 16cm
\usepackage{graphicx}
\usepackage{color}
%\input{rgb}
%\usepackage[utf8]{inputenc}
\usepackage{amsmath}
\usepackage{amsfonts}
% \usepackage{pdfpages}
\usepackage{amssymb}
\pagestyle{fancy}
\lhead{}
\chead{}
\rhead{}
\lfoot{\small}
\cfoot{\thepage}
\rfoot{DYPIET}
\renewcommand{\footrulewidth}{0.01in}
\newpage
\makeatletter
\def\@makechapterhead#1{%
\vspace*{2\p@}%
{\parindent \z@ \raggedright \normalfont
\Huge\bfseries #1\par\nobreak
\vskip 20\p@}%
}
\makeatother
\makeatletter
\renewcommand\section{\@startsection{section}{1}{\z@}%
{-3.5ex \@plus -1ex \@minus -.2ex}%
{2.3ex \@plus .2ex}%
{\normalfont\fontsize{12pt}{14pt}\selectfont\bfseries}}
\makeatother
\makeatletter
\renewcommand\subsection{\@startsection{subsection}{2}{\z@}%
{-3.25ex \@plus -1ex \@minus -.2ex}%
{1.5ex \@plus .2ex}%
{\normalfont\fontsize{12pt}{14pt}\selectfont\bfseries}}
\makeatother
\begin{document}
\chapter{Synopsis}
\section{Project Title}
Providing Biometric Authentication as a Service for cloud account
\section{Internal Guide}
No Sponsorship
\section{Technical Keywords}
General:\\
Software:
\begin{itemize}
\item JCreator
\end{itemize}
\newpage
\section{Problem Statement}
To provide a secure, lightweight biometric authentication service to access a cloud account.
\section{Abstract}
The problem of fraudulent logins and data theft is a major issue that cloud users are facing. To protect
privacy and provide security, there is a need to authenticate the user that requests access to an
account. However, the techniques used for authentication so far have not been able to guarantee this
and have thereby kept the data at high risk. So we use the concept of biometric authentication,
along with some additional concepts of privacy preservation, to provide a more secure login. The
project also provides a feature by which a fingerprint can be recognised even if its orientation is
changed.\\
\begin{itemize}
\item Study of encryption and decryption algorithms for image as well as character data.
\item Study of a fingerprint recognition system with the feature of orientation change.
\end{itemize}
\section{Mathematical Model}
\label{sec:math}
\begin{itemize}
\item N = match the fingerprints provided during login with those stored in the database during registration.
\item Q = the request is accepted.
\item H = client validation.
\item X = wrong input.
\item Y = poor network.
\end{itemize}
Benefits:
\begin{itemize}
\item It will help us store the data on the cloud in compressed form, thereby improving memory utilization.
\end{itemize}
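The mathematical model above can be sketched as a toy decision flow. This is a minimal illustration with our own function names and a made-up similarity threshold; real fingerprint matching compares minutiae features, not string sets:

```python
# Toy sketch of the model above (all names and thresholds are illustrative).
# N: match the login fingerprint against the stored template
# Q: request accepted, H: client validation, X: wrong input, Y: poor network

def match_fingerprint(login_template, stored_template, threshold=0.9):
    """N: fraction of shared features must reach the threshold."""
    if not login_template or not stored_template:
        return False
    common = len(set(login_template) & set(stored_template))
    score = common / max(len(set(login_template)), len(set(stored_template)))
    return score >= threshold

def login(login_template, stored_template, network_ok=True):
    """H: client validation, yielding Q (accept), X (wrong input) or Y (poor network)."""
    if not network_ok:
        return "Y"  # poor network
    if not match_fingerprint(login_template, stored_template):
        return "X"  # wrong input
    return "Q"      # request accepted

print(login(["f1", "f2", "f3"], ["f1", "f2", "f3"]))  # Q
```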
\section{Names of Conferences / Journals where papers can be published}
\begin{itemize}
\item IEEE
\item ACM
\item Springer
\end{itemize}
\label{sec:survey}
\newpage
\chapter{Technical Keywords}
\section{Area of Project}
Keywords:
\begin{enumerate}
\item Locality-sensitive hashing: the basic building block of the secure index is the state-of-the-art
approximate near-neighbor search algorithm in high-dimensional spaces called locality-sensitive
hashing (LSH). LSH is extensively used for fast similarity search on plain data in the information
retrieval community.
\item Encrypted data: data encryption translates data into another form, or code, so that only
people with access to a secret key (formally called a decryption key) or password can read it. Encrypted
data is commonly referred to as ciphertext, while unencrypted data is called plaintext. The purpose of
data encryption is to protect the confidentiality of digital data.
\item Cloud computing: cloud computing is a type of Internet-based computing that provides
shared computer processing resources and data to computers and other devices on demand. It is a
model for enabling ubiquitous, on-demand access to a shared pool of configurable computing resources
which can be rapidly provisioned and released with minimal management effort. Cloud computing
provides users and enterprises with various capabilities to store and process their data in third-party
data centers that may be located far from the user, ranging in distance from across a city to across the
world. Cloud computing relies on sharing of resources to achieve coherence and economy of scale,
similar to a utility over an electricity network.
\item Data security: data security refers to protective digital privacy measures that are applied
to prevent unauthorized access to computers, databases and websites. Data security also protects data
from corruption.
\item Query matching: searches for your exact keyword phrase, or close variations of it, with
potentially other words before or after that phrase.
\end{enumerate}
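The LSH idea described in the first keyword can be illustrated with the classic random-hyperplane construction. This is a minimal sketch; the hyperplane count, seed, and example vectors are arbitrary choices of ours, not values from the project:

```python
# Illustrative locality-sensitive hashing (LSH) via random hyperplanes:
# similar vectors land in the same bucket with high probability.
import random

def make_hyperplanes(dim, n_bits, seed=42):
    """Draw n_bits random Gaussian hyperplanes in `dim` dimensions."""
    rng = random.Random(seed)
    return [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(n_bits)]

def lsh_key(vector, hyperplanes):
    """One bit per hyperplane: which side of the plane the vector lies on."""
    bits = []
    for plane in hyperplanes:
        dot = sum(v * w for v, w in zip(vector, plane))
        bits.append("1" if dot >= 0 else "0")
    return "".join(bits)

planes = make_hyperplanes(dim=4, n_bits=8)
a = [1.0, 0.9, 0.0, 0.1]
b = [0.9, 1.0, 0.1, 0.0]    # close to a
c = [-1.0, 0.0, 0.9, -0.8]  # far from a
# Similar vectors agree on most hyperplane bits, so they usually share a
# bucket key; dissimilar vectors usually do not.
key_a, key_b, key_c = (lsh_key(v, planes) for v in (a, b, c))
```

Hashing many vectors and grouping them by key gives approximate near-neighbor buckets without comparing every pair.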
\chapter{Introduction}
\section{Project Idea}
The basic idea of a searching system over encrypted data in the cloud comes from the fact that data in
the cloud is subject to weak security norms, so data must always be stored in encrypted form. Searching
that data then requires it to be decrypted first and searched afterwards, which eventually slows down
the search process.
To overcome this, our idea is to search over the encrypted data without decrypting the original data,
which enhances the search process. This is achieved by extracting data features while uploading the
file to the cloud and conducting the search on these feature data to retrieve the relevant files.\\
Cloud storage systems are most vulnerable with respect to data security because of internal data sharing
among the servers. To overcome this, data is always stored in the cloud under strong cryptographic
protection. But this alone does not solve the problem: the cloud is known for providing large storage
capacity, so performing search over this huge volume of encrypted data poses a real challenge and
eventually slows down the search process.
Many schemes have been proposed to perform search over encrypted data, but no system provides
complete accuracy, as accuracy depends mainly on the document content.
This motivated us to find novel approaches and to put forward an idea for increasing the speed of the
search technique using correlation between the data. The proposed system uses an inverted-index
model, built on features extracted from the original data, to handle huge data volumes when searching
in the cloud.\\
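The idea above, extracting features at upload time and searching them without decrypting the documents, can be sketched with a keyed-hash inverted index. This is a simplified illustration under our own assumptions; the names `feature` and `build_index` and the HMAC construction are ours, not the project's actual scheme:

```python
# Sketch: at upload time each file's keywords are reduced to keyed hashes
# (the "features"), and an inverted index maps each hashed keyword to file
# ids. The server matches a hashed query against the index without ever
# decrypting the documents themselves.
import hashlib
import hmac

SECRET_KEY = b"owner-secret"  # held by the data owner, not the server

def feature(word):
    """Keyed hash of a keyword; the server only ever sees these values."""
    return hmac.new(SECRET_KEY, word.lower().encode(), hashlib.sha256).hexdigest()

def build_index(files):
    """files: {file_id: text}. Returns an inverted index over hashed keywords."""
    index = {}
    for file_id, text in files.items():
        for word in set(text.split()):
            index.setdefault(feature(word), set()).add(file_id)
    return index

def search(index, query_word):
    """Client hashes the query; server looks it up without decryption."""
    return index.get(feature(query_word), set())

files = {"f1": "cloud storage security", "f2": "image retrieval in cloud"}
idx = build_index(files)
print(sorted(search(idx, "cloud")))  # ['f1', 'f2']
```

The server stores only the index of opaque hashes, so searching requires no decryption of the stored files.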
\newpage
\section{Literature Survey}
Author: Song\\
They used word-by-word document encryption: each word in a document is independently encrypted
using a special two-layered encryption scheme. The server is given the capability to search the
encrypted document and report whether a keyword is present. They give theoretical solutions for both
keyword-based (search with an encrypted index) and non-keyword-based (sequential scan without an
index) schemes, focusing on keyword-based search. The disadvantages of this scheme are that it is not
secure against statistical analysis of multiple queries, and the positions of queried keywords in a
document can be leaked. Computation complexity is linear in the total collection length, the memory
overhead is large, and encryption time increases with document size.

For each document in the set they collected term-frequency information for building indices, and they
secure the indices to protect against statistical attacks. Depending on the encrypted query, the scheme
ranks the documents, and the highest-ranked documents are pushed up using the ranked method.
Cryptographic techniques such as order-preserving encryption are applied using term frequencies and
other document information. The scheme calculates a relevance score for every document and identifies
the most relevant ones. They compare the search accuracy of the system with that of a system over
non-encrypted data. The method is well suited to large documents and provides higher accuracy and
security, but its computational cost is high, protecting the communication link is a difficult task, and
combating traffic analysis is not addressed.

Content-based multimedia retrieval over encrypted databases, directly in the encrypted domain:\\
First, feature vectors are extracted and a vocabulary tree is used to cluster them hierarchically.
Indexing is based on the vocabulary tree, represented as a bag of visual words describing how many
times the representative feature vectors in the vocabulary tree occur in the query image. To secure
index schemes such as min-hash sketches and the secure inverted index, it jointly exploits techniques
from cryptography, image processing, and information retrieval: the first scheme exploits randomized
hash functions and the second makes use of inverted indexes of visual words. The model is further
enhanced so that the min-hash scheme, which requires longer sketches, achieves performance similar
to that of the inverted-index scheme. The approach differs slightly in terms of features: a query does
not consider one specific feature of an image but all features of the image.

Their solution is practical and efficient. In this scheme the computational overhead on users is reduced,
without leaking any information about the plaintext, by having the cloud service provider participate in
partial decipherment. They showed that their scheme performs better than the scheme by Boneh. The
scheme is provably secure: without being aware of any keyword or email information, it enables the
cloud service provider (CSP) to determine whether a keyword is present in a given email. Under the
Bilinear Diffie-Hellman (BDH) assumption and the random oracle model (Boneh and Franklin, 2003) it
can be proved semantically secure.
The main difference between public-key encryption with keyword search (PEKS) and SPKS is that PEKS
uses a standard public-key encryption algorithm to encrypt an email without specifying its
implementation, and all decryption is done by the user, whereas SPKS uses the EMBEnc algorithm for
email encryption with both the user's public key and the CSP's public key, so that the CSP can
participate in the partial decipherment.

They considered the scenario where records and the keyword index are encrypted by multiple data
owners so that multiple users can search. They proposed an authorization framework in which users
obtain search capabilities from local trusted authorities according to their attributes.
Two novel solutions for APKS are proposed using Hierarchical Predicate Encryption (HPE): the first
enhances search efficiency using an attribute hierarchy, and the second enhances query privacy with
the help of proxy servers. Results show that APKS achieves reasonable search performance. The system
is tested on personal health records. These techniques fail if entity synonyms or morphological variants
are used.

They proposed an approach that uses locality-sensitive hashing (LSH), a nearest-neighbor algorithm, for
index creation. By the LSH property, similar features are hashed to the same bucket with high
probability, while dissimilar features reside in different buckets. If the data set is small, the
communication cost and search time of this scheme are good, but as the database grows, the time
required for communication and search increases rapidly; the cost per query feature grows linearly with
the database size. They used a Bloom filter for matching strings: a data structure that reports whether
an element is present in a collection. Its disadvantage is that it is probabilistic, with a notable
false-positive rate: it can report only that a keyword is definitely not in the set of encrypted
documents, or that it may be present. Jaccard distance is used to measure the distance between
strings.\\
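The Bloom filter and Jaccard distance mentioned in the last scheme can be sketched as follows. This is illustrative only; the filter size, hash count, and the character-trigram representation of strings are our own choices:

```python
# Illustrative Bloom filter and Jaccard distance. A Bloom filter can say
# "definitely not present" or "possibly present", never "definitely present".
import hashlib

class BloomFilter:
    def __init__(self, size=1024, n_hashes=3):
        self.size, self.n_hashes = size, n_hashes
        self.bits = [False] * size

    def _positions(self, item):
        # Derive n_hashes bit positions from salted SHA-256 digests.
        for i in range(self.n_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = True

    def might_contain(self, item):
        # False means definitely absent; True means possibly present.
        return all(self.bits[pos] for pos in self._positions(item))

def jaccard_distance(a, b):
    """1 - |A n B| / |A u B| over character-trigram sets of two strings."""
    ta = {a[i:i + 3] for i in range(len(a) - 2)}
    tb = {b[i:i + 3] for i in range(len(b) - 2)}
    if not ta and not tb:
        return 0.0
    return 1 - len(ta & tb) / len(ta | tb)

bf = BloomFilter()
bf.add("encryption")
print(bf.might_contain("encryption"))                 # True
print(round(jaccard_distance("cloud", "clouds"), 2))  # 0.25
```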
\section{Problem Statement}
\begin{enumerate}
\item The data in the cloud is subject to weak security norms, so data must always be stored in
encrypted form.
\item Searching the encrypted data for what the user requires means the data must be decrypted first
and searched afterwards, which eventually slows down the search process.
\item Searching over the encrypted data without decrypting the original data therefore enhances the
search process and saves time.
\end{enumerate}
\begin{itemize}
\item The aim of the project is to search data in the cloud that is in encrypted format in such a way
that the data need not be decrypted for the search process. This saves search time efficiently.
\item The objectives of the project include the following:
pre-processing, feature extraction, searching, correlation, and building the inverted index.
\end{itemize}
\subsection{Statement of scope}
\begin{itemize}
\item Reducing the time factor required for searching of encrypted files.
\item Utilization of locality-sensitive hashing, a state-of-the-art algorithm for fast near-neighbor
search in high-dimensional spaces.
\end{itemize}
\section{Software context}
\begin{itemize}
\item The main purpose of the project is to search data in the cloud that is in encrypted format in such
a way that the data need not be decrypted for the search process.
\end{itemize}
\section{Major Constraints}
\begin{itemize}
\item Locality-Sensitive Hashing (LSH) is used for solving the approximate or exact Near Neighbor Search
in high dimensional spaces.
\item Fault Tolerant Keyword Search is used for fault tolerant keyword search over encrypted data.
\end{itemize}
A single problem can be solved by different solutions. The project considers the performance
parameters for each approach, and thus takes efficiency issues into account.
\section{Outcome}
A search system that locates relevant files in encrypted cloud data without decrypting them, reducing
the time required for searching.
\section{Applications}
\begin{itemize}
\item Can be used in Cloud Storage servers to search in stored encrypted data.
\end{itemize}
\begin{table}[!htbp]
\begin{center}
\def\arraystretch{1.5}
\begin{tabular}{| c | c | c | c |}
\hline
\multicolumn{4}{|c|}{Hardware Requirements} \\
\hline
\end{tabular}
\label{tab:hreq}
\end{center}
\end{table}
\textbf{Platform:}\\
\begin{enumerate}
\item Ubuntu.
\end{enumerate}
\chapter{Project Plan}
\section{Project Estimates}
\subsection{Reconciled Estimates}
\subsubsection{Cost Estimate}
\subsubsection{Time Estimates}
Estimation of KLOC: the number of lines required for the implementation of the various modules is
estimated as follows (in KLOC):
\begin{itemize}
\item Correlation: 1.88
\item AES: 2.20
\end{itemize}
Effort:
\begin{itemize}
\item $E = 3.2 \times 9.88 = 31.61$ person-months
\end{itemize}
Development time for implementation and testing:
\begin{itemize}
\item $D = E/N = 2 + 7.9 = 9.9$ months
\end{itemize}
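The effort arithmetic above can be checked in a few lines. This is a sketch using only the figures stated in the text; the per-module sizes making up the 9.88 KLOC total are not all listed in the report, so they are not reconstructed here:

```python
# COCOMO-style arithmetic using the figures stated in the text
# (total size 9.88 KLOC, coefficient 3.2, duration 9.9 months).
kloc_total = 9.88              # total estimated size, KLOC
effort = 3.2 * kloc_total      # effort E in person-months: 3.2 * 9.88
duration = 9.9                 # development time D in months, as stated
team_size = effort / duration  # implied average team size N = E / D
print(round(effort, 2))        # 31.62 (the report truncates this to 31.61)
print(round(team_size, 1))     # 3.2
```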
\subsection{Project Resources}
Project resources [people, hardware, software, tools and other resources] are derived with reference
to the appendices.
\section{Risk Management}
This section discusses project risks and the approach to managing them.
\subsection{Risk Identification}
For risk identification, a review of the scope document, requirements specifications and schedule was
done. Answers to a questionnaire revealed some risks. Each risk is categorized as per the categories
mentioned; please refer to the tables for all the risks. The following risk identification questionnaire
was used.
\begin{enumerate}
\item Have top software and customer managers formally committed to support the project?
\item Are end-users enthusiastically committed to the project and the system/product to be built?
\item Are requirements fully understood by the software engineering team and its customers?
\item Does the software engineering team have the right mix of skills?
\item Do all customer/user constituencies agree on the importance of the project and on the
requirements for the system/product to be built?
\end{enumerate}
\subsection{Risk Analysis}
The risks for the project can be analyzed within the constraints of time and quality.
\begin{table}[!htbp]
\begin{center}
%\def\arraystretch{1.5}
\def\arraystretch{1.5}
\end{center}
\label{tab:risk}
\end{table}
\begin{table}[!htbp]
\begin{center}
\def\arraystretch{1.5}
\begin{tabular}{| c | c | c |}
\hline
\end{tabular}
\end{center}
\label{tab:riskdef}
\end{table}
\newpage
\begin{table}[!htbp]
\begin{center}
\def\arraystretch{1.5}
\caption{Risk Impact Definitions}
\begin{tabular}{| c | c | p{7cm} |}
\hline
Impact & Probability & Effect \\ \hline
Very high & $> 10 \%$ & Schedule impact or unacceptable quality \\ \hline
High & $5-10 \%$ & Schedule impact or some parts of the project have low quality \\ \hline
Medium & $< 5 \%$ & Schedule impact or barely noticeable degradation in quality \\ \hline
Low & & Impact on schedule or quality can be incorporated \\ \hline
\end{tabular}
\end{center}
\label{tab:riskImpactDef}
\end{table}
\begin{table}[!htbp]
\begin{center}
\def\arraystretch{1.5}
\begin{tabular}{| c | c |}
\hline
\end{tabular}
\end{center}
\label{tab:risk1}
\end{table}
\begin{table}[!htbp]
\begin{center}
\def\arraystretch{1.5}
\begin{tabular}{| c | c |}
\hline
\end{tabular}
\end{center}
\label{tab:risk2}
\end{table}
\begin{table}[!htbp]
\begin{center}
\def\arraystretch{1.5}
\begin{tabular}{| l | p{9cm} |}
\hline
Source & This was identified during early development and testing. \\ \hline
Strategy & Example: running the Service Registry behind a proxy balancer. \\ \hline
\end{tabular}
\end{center}
\label{tab:risk3}
\end{table}
\newpage
\section{Project Schedule}
\begin{itemize}
\item Task 1: Systems should communicate with each other for results.
\item Task 3: The system should have a good \& easily understandable GUI.
\item Task 4: The user should be able to select the source data files.
\item Task 5: The accuracy of test results should be within acceptable limits.
\end{itemize}
\subsection{Task network}
Project tasks and their dependencies are noted in this diagrammatic form.
\subsection{Timeline Chart}
Timeline charts, also called Gantt charts, are developed for the entire project to track and control
all activities that need to be performed during project development. \\
\begin{figure}[!ht]
\centering
\includegraphics[width=0.6\linewidth]{time.png}
\caption{Timeline Chart}
\label{fig:Timeline Chart}
\end{figure}
\newpage
\begin{table}[!htbp]
\centering
\caption{Team Structure}
\def\arraystretch{1.5}
\begin{tabular}{| c | c | l |}
\hline
Sr. No. & Month & Activity \\ \hline
3 & November 2016 & Gather all requirements and learning phase \\ \hline
\end{tabular}
\label{my-label}
\end{table}
\begin{figure}[!htbp]
\centering
\includegraphics[width=\linewidth]{execution.jpg}
\caption{Plan of Execution.}
\label{fig:execution}
\end{figure}
\begin{table}[!htbp]
\centering
\caption{Reporting and Communication}
\def\arraystretch{1.5}
\begin{tabular}{| c | c | l |}
\hline
Sr. No. & Date & Deliverable \\ \hline
3 & 5.8.2016 & Abstract version 1 \\ \hline
\end{tabular}
\label{tab:Reporting and Communication}
\end{table}
\section{Introduction}
The purpose of this document is to provide a detailed overview of our software product “ Searching
system over encrypted data in cloud using correlation process ”, its parameters and goals. This
document describes the project's target audience and its user interface, hardware and software
requirements. It defines how our client, team and audience see the product and its functionality.
The scope of this project includes the project developer assisted by the project guide. The scope thus
far has been the completion of the basic interfaces that will be used to build the system. The database
used has been set up and given the necessary permissions, and a proper cloud setup has been done for
data storage.\\
\begin{itemize}
\item Design, initiate and handle technical designs and complex application features.
\item Initiate and drive major changes in programs, procedures and methodology.
\end{itemize}
\section{Usage Scenario}
A usage scenario, or scenario for short, describes a real-world example of how one or more people or
organizations interact with a system. They describe the steps, events, and/or actions which occur during
the interaction. Usage scenarios can be very detailed, indicating exactly how someone works with the
user interface, or reasonably high-level, describing the critical business actions but not indicating
how they are performed. \\
Usage scenarios are applied in several development processes, often in different ways. In derivatives of
the Unified Process (UP) they are used to help move from use cases to sequence diagrams. The basic
strategy is to identify a path through a use case, or through a portion of a use case, and then write the
scenario as an instance of that path. For example, the text of the "Withdraw Funds" use case would
indicate what happens when everything goes right; in this case the funds exist in the account and
the ATM has the funds. This would be referred to as the "happy path" or basic course of action. The use
case would include alternate paths describing what happens when mistakes occur, such as there being
insufficient funds in the account or the ATM being short of cash to disburse to customers. You would
write usage scenarios that would explore the happy path, such as the first scenario above, as well as
each of the alternate courses. You would then develop a sequence diagram exploring the
implementation logic for each scenario.\\
\subsection{User profiles}
The system creates a user profile the first time that a user logs on to a computer. At subsequent logons,
the system loads the user's profile, and then other system components configure the user's
environment according to the information in the profile.\\
A use case diagram at its simplest is a representation of a user's interaction with the system that shows
the relationship between the user and the different use cases in which the user is involved. A use case
diagram can identify the different types of users of a system and the different use cases and will often
be accompanied by other types of diagrams as well.\\
\begin{center}
\begin{figure}[!htbp]
\centering
\fbox{\includegraphics[width=\textwidth]{usecase.png}}
\caption{Use case diagram}
\label{fig:usecase}
\end{figure}
\end{center}
\newpage
\subsection{Data Description}
Data objects that will be managed/manipulated by the software are described in this section, along with
the database entities, files, and data structures required. Details of the data objects are given
below.
The main aim of data models is to support the development of information systems by providing the
definition and format of data. According to West and Fowler (1999) "if this is done consistently across
systems then compatibility of data can be achieved. If the same data structures are used to store and
access data then different applications can share data. The results of this are indicated above. However,
systems and interfaces often cost more than they should, to build, operate, and maintain. They may also
constrain the business rather than support it. A major cause is that the quality of the data models
implemented in systems and interfaces is poor."\\
A function model, similar to the activity model or process model, is a graphical representation of an
enterprise's functions within a defined scope. The purposes of the function model are to describe the
functions and processes, assist with discovery of information needs, help identify opportunities, and
establish a basis for determining product and service costs.\\
1. The DFD is also called a bubble chart. It is a simple graphical formalism that can be used to
represent a system in terms of the input data to the system, the various processing carried out on this
data, and the output data generated by the system.\\
2. The data flow diagram (DFD) is one of the most important modeling tools. It is used to model
the system components. These components are the system process, the data used by the process, an
external entity that interacts with the system and the information flows in the system.\\
3. DFD shows how the information moves through the system and how it is modified by a series of
transformations. It is a graphical technique that depicts information flow and the transformations that
are applied as data moves from input to output.\\
4. DFD is also known as bubble chart. A DFD may be used to represent a system at any level of
abstraction. DFD may be partitioned into levels that represent increasing information flow and
functional detail.\\
\begin{figure}[!htbp]
\centering
\includegraphics[width=8cm]{DFD0.png}
\caption{DFD Level 0}
\end{figure}
\begin{figure}[!htbp]
\centering
\includegraphics[width=8cm]{DFD1.png}
\caption{DFD Level 1}
\end{figure}
\begin{figure}[!htbp]
\centering
\includegraphics[width=10cm]{dfd2.png}
\caption{DFD Level 2}
\end{figure}
\newpage
\begin{figure}[!htbp]
\centering
\fbox{\includegraphics[height=430pt]{activity.png}}
\caption{Activity diagram}
\label{fig:act-dig}
\end{figure}
\subsection{State Diagram}
A state diagram is a type of diagram used in computer science and related fields to describe the
behavior of systems. State diagrams require that the system described is composed of a finite number of
states; sometimes this is indeed the case, while at other times it is a reasonable abstraction.\\
Fig.~\ref{fig:state-dig} shows the state transition diagram of the Cloud SDK. States are
represented as ovals, and the state of the system changes when certain events occur. Transitions from
one state to another are represented by arrows. The figure shows the important states and events that
occur while creating a new project.
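The state-transition behaviour described above can be sketched as a lookup table from (state, event) pairs to next states. The state and event names below are illustrative assumptions, not the actual Cloud SDK states:

```python
# Minimal state-machine sketch. States and events are illustrative
# assumptions, not the actual Cloud SDK states.
TRANSITIONS = {
    ("Idle", "create_project"): "Configuring",
    ("Configuring", "submit"): "Provisioning",
    ("Provisioning", "ready"): "Active",
    ("Active", "delete_project"): "Idle",
}

def step(state, event):
    """Return the next state; stay in place if the event is not valid here."""
    return TRANSITIONS.get((state, event), state)

state = "Idle"
for event in ["create_project", "submit", "ready"]:
    state = step(state, event)
print(state)  # Active
```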
\subsection{Design Constraints}
In system design, a design constraint refers to some limitation on the conditions under which a system is
developed, or on the requirements of the system. The design constraint could be on the systems form,
fit or function or could be in the technology to be used, materials to be incorporated, time taken to
develop the system, overall budget, and so on. A design constraint is normally imposed externally, either
by the organisation or by some external regulation. During system design, it is as important to identify
each design constraint as it is to elicit requirements since the design constraints place an overall
boundary around the system design process.\\
A User Interface Description Language (UIDL) is a formal language used in Human-Computer Interaction
(HCI) to describe a particular user interface independently of any implementation. Considerable
research effort has been devoted to defining various meta-models in order to define the semantics of a
UIDL rigorously. These meta-models adhere to the principle of separation of concerns: any aspect
of concern should univocally fall into one of the following meta-models: context of use (user, platform,
environment), task, domain, abstract user interface, concrete user interface, usability (including
accessibility), workflow, organization, evolution, program, transformation, and mapping. Not all of
these meta-models need be used concurrently; they may be manipulated during different steps of a user
interface development method. To support such a method, software is required throughout the user
interface development life cycle to create, edit, and check models that are compliant with these
meta-models, and to produce user interfaces from them. This motivates reviewing the state of the art of
software support for a UIDL in the context of any development method (e.g., formal, model-based, or
model-driven).\\
\chapter{Design Document}
\section{Introduction}
With the aim of choosing a subset of good features with respect to the target concepts, feature subset
selection is an effective way for reducing dimensionality, removing irrelevant data, increasing learning
accuracy, and improving result comprehensibility. Many feature subset selection methods have been
proposed and studied for machine learning applications. They can be divided into four broad categories:
the Embedded, Wrapper, Filter, and Hybrid approaches. The embedded methods incorporate feature
selection as a part of the training process and are usually specific to given learning algorithms, and
therefore may be more efficient than the other three categories. Traditional machine learning
algorithms like decision trees or artificial neural networks are examples of embedded approaches. The
wrapper methods use the predictive accuracy of a predetermined learning algorithm to determine the
goodness of the selected subsets; the accuracy of the learning algorithms is usually high, but the
generality of the selected features is limited and the computational complexity is large. The filter
methods are independent of learning algorithms and have good generality. Their computational complexity
is low, but the accuracy of the learning algorithms is not guaranteed. The hybrid methods combine
filter and wrapper methods, using a filter method to reduce the search space that will be considered by
the subsequent wrapper. They mainly aim to achieve the best possible performance with a particular
learning algorithm, at a time complexity similar to that of the filter methods. The wrapper methods are
computationally expensive and tend to overfit on small training sets. The filter methods, in addition to
their generality, are usually a good choice when the number of features is very large.\\
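As a concrete illustration of the filter approach described above, the sketch below ranks features by the absolute value of their Pearson correlation with the target and keeps the top $k$; the toy data and feature names are invented for the example, not taken from the project:

```python
import math

def pearson(x, y):
    """Pearson correlation of two equal-length numeric sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy) if sx and sy else 0.0

def filter_select(features, target, k):
    """Filter method: rank features by |correlation with target|, keep top k.
    Independent of any learning algorithm, as described in the text."""
    ranked = sorted(features,
                    key=lambda name: abs(pearson(features[name], target)),
                    reverse=True)
    return ranked[:k]

# Toy data: f1 tracks the target, f2 is anti-correlated, f3 is mostly noise.
features = {
    "f1": [1, 2, 3, 4, 5],
    "f2": [5, 4, 3, 2, 1],
    "f3": [2, 2, 9, 1, 2],
}
target = [1, 2, 3, 4, 5]
print(filter_select(features, target, 2))
```

A wrapper method would instead retrain a chosen learner on each candidate subset, which is why its cost is so much higher than this single scoring pass.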
\section{Architectural Design}
System developers need system architecture diagrams to understand, clarify, and
communicate ideas about the system structure and the user requirements that the system must
support. An architecture diagram is a basic framework that can be used at the system planning phase,
helping partners understand the architecture, discuss changes, and communicate intentions clearly.\\
\begin{figure}[!htbp]
\centering
\fbox{\includegraphics[width=\textwidth]{archi.png}}
\caption{Architecture diagram}
\label{fig:arch-dig}
\end{figure}
\section{Data Design}
This section describes all data structures, including internal, global, and temporary data structures, the
database design (tables), and file formats.
Data structures that are passed among components of the software are described.
Data structures that are available to major portions of the architecture are described.
\subsection{Database description}
The database(s) and files created or used as part of the application are described.
\section{Component Design}
This section presents class diagrams, interaction diagrams, and algorithms; a description of each component is required.
\begin{figure}[!htbp]
\centering
\includegraphics[width=12cm]{class.png}
\caption{Class diagram}
\label{fig:state-dig}
\end{figure}
\subsection{Sequence diagram}
A sequence diagram in the Unified Modeling Language (UML) is a kind of interaction diagram
that shows how processes operate with one another and in what order. It is a construct of a Message
Sequence Chart. Sequence diagrams are sometimes called event diagrams, event scenarios, or timing
diagrams.\\
\begin{figure}[!htbp]
\centering
\includegraphics[width=12cm]{sequence.png}
\caption{Sequence diagram}
\label{fig:seq-dig}
\end{figure}
\begin{table}[ht]
\footnotesize
\centering
\caption{Test cases}\medskip
\label{tab:template}
\begin{tabular}{|p{1.0cm}|p{3cm}|p{3cm}|p{4cm}|p{1.0cm}|}
\hline
TC No. & Test Case & Objective & Expected Result & Status \\
\hline
TC01 & File uploading & Upload the data to the cloud; the user selects a file; feature selection is performed on upload of the file & The file is uploaded to the cloud in encrypted format with proper feature extraction & Pass\\
\hline
TC02 & Searching & Search files for the given query; feature-extracted data is used; the desired files are retrieved & The file name vector is obtained & Pass\\
\hline
TC03 & Downloading file & Download files stored in the cloud; the file is downloaded at the client side in its original format & The file is downloaded properly & Pass\\
\hline
\end{tabular}
\end{table}
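The three test cases in the table can be mirrored as a minimal automated check. The client below is an in-memory stand-in, and its XOR "encryption" is only a placeholder for the real encrypted upload; the class and method names are assumptions for illustration, not the project's actual API:

```python
class FakeCloudClient:
    """In-memory stand-in for the cloud service (its real API is assumed, not shown)."""
    def __init__(self):
        self.store = {}

    def upload(self, name, data):
        # Placeholder "encryption": XOR each byte with a fixed key, so the
        # stored form differs from the original. A real system would apply
        # proper encryption and feature extraction here.
        self.store[name] = bytes(b ^ 0x5A for b in data)

    def search(self, keyword):
        # Returns the "file name vector": names matching the query.
        return [name for name in self.store if keyword in name]

    def download(self, name):
        # XOR is its own inverse, so this restores the original format.
        return bytes(b ^ 0x5A for b in self.store[name])

client = FakeCloudClient()
client.upload("report.txt", b"secret data")           # TC01
assert client.store["report.txt"] != b"secret data"   # stored form is not plaintext
assert client.search("report") == ["report.txt"]      # TC02: file name vector
assert client.download("report.txt") == b"secret data"  # TC03: original format
print("TC01-TC03 pass")
```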
\bibliographystyle{ieeetr}
\bibliography{biblo}
\newpage
\chapter{ANNEXURE A}
\addcontentsline{toc}{chapter}{\numberline{}ANNEXURE A }
The basic theme of the waterfall model is to know all requirements in detail at the start, during
communication, and then to proceed to the next phases with no changes allowed later. Accordingly,
all requirements are gathered first: the system to be developed contains modules that are
interdependent, so it is essential to know all the requirements at the start of the project.
Project planning is the next important step. Daily project planning is done and project work
proceeds according to that plan, so that the project completes on time. The subsequent steps of
designing, modelling, and coding are completed according to the plan that has been created. The
proposed system is divided into small modules so that it is easy to implement and understand, and so
that tasks can easily be assigned to each project member, which makes the project easy to manage. As
all requirements are well understood at the start, the waterfall model is a suitable choice.
\begin{figure}[hbtp]
\centering
\includegraphics[scale=0.7]{Dataflow.jpg}
\end{figure}
\subsubsection*{Advantages:}
\begin{enumerate}
\item Easy to manage. Each phase has specific deliverable and a review.
\item Works well for projects where requirements are well understood.
\end{enumerate}
There exist various software development approaches, aptly defined and designed,
which are employed during the development of software. These approaches
are also referred to as "Software Development Process Models". Each process model
follows a particular life cycle in order to ensure success in the process of software development.\\
The waterfall model is used in this project. The waterfall approach was among the first process models
introduced to ensure the success of a project. In the waterfall approach, the whole process of software
development is divided into separate phases. These phases in the waterfall model are:
\begin{enumerate}
\item Requirement gathering and analysis
\item System design
\item Implementation
\item Integration and testing
\item Delivery and maintenance
\end{enumerate}
All these phases are cascaded, so that the second phase starts only when a defined set of goals has been
achieved for the first phase and it is signed off; hence the name waterfall model. All the methods and
processes undertaken in the waterfall model are more visible.

Requirements are a set of functions and constraints that the end user (who will be using
the system) expects from the system. The requirements are gathered from the end user
at the start of the software development phase. These requirements are analyzed for
their validity and for the possibility of incorporating them in the system to be developed. The
resulting requirement specification serves as a guideline for the next phase of the model.

Before starting the actual coding phase, it is highly important to understand the requirements
of the end user and to have an idea of how the end product should look.
The requirement specifications from the first phase are studied in this phase and a system
design is prepared. System design helps in specifying hardware and system requirements
and also helps in defining the overall system architecture. The system design documents serve as
input for the next phase of the model.

On receiving the system design documents, the work is divided into modules/units and actual
coding is started. The system is first developed in small programs called units, which
are integrated in the next phase. Each unit is developed and tested for its functionality;
this is referred to as unit testing. Unit testing mainly verifies whether the modules/units meet
their specifications.

As specified above, the system is first divided into units which are developed and tested
for their functions. These units are integrated into a complete system during the integration
phase and tested to check that all modules/units coordinate with each other and that the system as a
whole behaves as per the specifications. After successfully testing the software, it is delivered to the
customer.
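The unit-versus-integration distinction described above can be sketched as follows; the two "units" and their specifications are invented for illustration:

```python
# Two "units" developed separately, each verified by its own unit test,
# then an integration check that they coordinate. Names are illustrative.
def parse_record(line):
    """Unit 1: parse a 'name,size' line into a record."""
    name, size = line.split(",")
    return {"name": name, "size": int(size)}

def total_size(records):
    """Unit 2: sum the sizes of a collection of records."""
    return sum(r["size"] for r in records)

# Unit tests: each unit is checked against its own specification.
assert parse_record("a.txt,10") == {"name": "a.txt", "size": 10}
assert total_size([{"name": "x", "size": 3}]) == 3

# Integration test: the units are combined and checked as a whole.
lines = ["a.txt,10", "b.txt,5"]
assert total_size(parse_record(l) for l in lines) == 15
print("all tests passed")
```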
\newpage
\chapter{ANNEXURE B }
\addcontentsline{toc}{chapter}{\numberline{}ANNEXURE B }
\section*{FEASIBILITY STUDY}
\section*{Introduction}
A feasibility study is the test of a system proposal according to its workability, its impact on the
organization, its ability to meet user needs, and its effective use of resources. It focuses on the
evaluation of the existing system and procedures, analysis of alternative candidate systems, and cost
estimates. Feasibility analysis was done to determine whether the system would be feasible.

The development of a computer-based system or product is often plagued by resource and
delivery-date constraints. A feasibility study helps the analyst decide whether to proceed with, amend,
postpone, or cancel the project; this is particularly important when the project is large, complex, and
costly. Once the analysis of the user requirements is complete, the system has to be checked for the
compatibility and feasibility of the software package that is aimed at. An important outcome of the
preliminary investigation is the determination that the system requested is feasible.

\subsubsection*{Technical Feasibility}
The technology used can be developed with the current equipment and has the technical
capacity to hold the data required by the new system. Technical feasibility centres on the existing
system and on the extent to which it can support the proposed addition.

\subsubsection*{Economic Feasibility}
Economic analysis is the most frequently used method for evaluating the effectiveness of a new
system. More commonly known as cost/benefit analysis, the procedure is to determine the benefits and
savings that are expected from a candidate system and compare them with the costs.
If the benefits outweigh the costs, then the decision is made to design and implement the system. An
entrepreneur must accurately weigh the costs versus the benefits before taking action. This system is
economically feasible, as it assesses brain capacity with a quick, online test; so it is
economically a good project.
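The cost/benefit comparison described above amounts to simple arithmetic; all figures below are illustrative assumptions, not actual project estimates:

```python
# Cost/benefit sketch: proceed only if expected benefits outweigh costs.
# Every figure here is an illustrative assumption.
costs = {"development": 40000, "operation_per_year": 5000}
benefits_per_year = 20000
years = 3

total_cost = costs["development"] + costs["operation_per_year"] * years  # 55000
total_benefit = benefits_per_year * years                                # 60000
decision = "implement" if total_benefit > total_cost else "re-evaluate"
print(total_cost, total_benefit, decision)  # 55000 60000 implement
```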
\subsubsection*{Performance Feasibility}
\end{appendices}
\end{document}